by Solomon
Using the Systeme API can be challenging due to its pagination settings and low rate limit, which require a bit more knowledge about API requests than a beginner might have. This template provides preconfigured HTTP Request nodes to help you work more efficiently: pagination settings, item limits, and rate limits are all configured for you, making it easier to get started.

How to configure Systeme.io credentials

The Systeme API uses the Header Auth method, so create a Header Auth credential in your n8n with the name "X-API-Key" and your API key as the value.

Check out my other templates 👉 https://n8n.io/creators/solomon/
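To picture what the preconfigured HTTP Request nodes handle for you, a paginated Systeme API request can be sketched as below. Only the "X-API-Key" header name comes from the template; the contacts endpoint path and the cursor parameter name are assumptions for illustration.

```python
# Sketch of a paginated Systeme API request. The /api/contacts path and the
# "startingAfter" cursor name are illustrative assumptions; the X-API-Key
# header is what the Header Auth credential in n8n provides.
def build_contacts_request(api_key, limit=50, starting_after=None):
    headers = {"X-API-Key": api_key}   # Header Auth credential value
    params = {"limit": limit}          # keep pages small for the low rate limit
    if starting_after is not None:
        params["startingAfter"] = starting_after  # cursor from the previous page
    return "https://api.systeme.io/api/contacts", headers, params
```

In n8n the HTTP Request node runs this loop for you; the sketch only shows which pieces the credential and pagination settings correspond to.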
by Lucas Perret
This workflow lets you scrape Google Maps data efficiently using SerpAPI, getting all the Gmaps data at a cheaper cost than the Google Maps API. Provide your Google Maps search URL as input and you'll get a list of places with many data points, such as:

phone number
website
rating
reviews
address

And much more. The full guide to implement the workflow is here: https://lempire.notion.site/Scrape-Google-Maps-places-with-n8n-b7f1785c3d474e858b7ee61ad4c21136?pvs=4
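To see what the SerpAPI call behind this workflow amounts to, here is a sketch. The query parameters follow SerpAPI's google_maps engine; the field names picked out of each place assume the shape of its `local_results` JSON and are illustrative.

```python
def serpapi_maps_params(query, api_key, ll=None):
    # Query parameters for SerpAPI's Google Maps engine
    params = {"engine": "google_maps", "type": "search",
              "q": query, "api_key": api_key}
    if ll:
        params["ll"] = ll  # optional "@lat,lng,zoom" viewport, e.g. "@48.85,2.35,14z"
    return params

def pick_place_fields(place):
    # Keep only the data points the workflow surfaces (assumed key names)
    return {k: place.get(k)
            for k in ("title", "phone", "website", "rating", "reviews", "address")
            if k in place}
```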
by Davide
This workflow is designed to process PDF documents using Mistral's OCR capabilities, store the extracted text in a Qdrant vector database, and enable Retrieval-Augmented Generation (RAG) for answering questions. Once configured, the workflow automates document ingestion, vectorization, and intelligent querying, enabling powerful RAG applications.

Benefits

**End-to-End Automation**: No manual interaction is needed: documents are read, processed, and made queryable with minimal setup.
**Scalable and Modular**: The workflow uses subflows and batching, making it easy to scale and customize.
**Multi-Model Support**: Combines Mistral for OCR, OpenAI for embeddings, and Gemini for intelligent answering, taking advantage of the strengths of each.
**Real-Time Q&A**: With RAG integration, users can query document content through natural language and receive accurate responses grounded in the PDF data.
**Light or Full Mode**: Users can choose to index full page content or only summarized text, optimizing for either performance or richness.

How It Works

PDF Processing with Mistral OCR: The workflow starts by uploading a PDF file to Mistral's API, which performs OCR to extract text and metadata. The extracted content is split into manageable chunks (e.g., pages or sections) for further processing.
Vector Storage in Qdrant: The extracted text is converted into embeddings using OpenAI's embedding model. These embeddings are stored in a Qdrant vector database, enabling efficient similarity searches for RAG.
Question-Answering with RAG: When a user submits a question via a chat interface, the workflow retrieves relevant text chunks from Qdrant using vector similarity. A language model (Google Gemini) generates answers based on the retrieved context, providing accurate and context-aware responses.
Optional Summarization: The workflow includes an optional summarization step using Google Gemini to condense the extracted text for faster processing or lighter RAG usage.
Set Up Steps

To deploy this workflow in n8n, follow these steps:

1. Configure Qdrant Database: Replace QDRANTURL and COLLECTION in the "Create collection" and "Refresh collection" nodes with your Qdrant instance details. Ensure the Qdrant collection is configured with the correct vector size (e.g., 1536 for OpenAI embeddings) and distance metric (e.g., Cosine).
2. Set Up Credentials: Add credentials for the Mistral Cloud API (for OCR processing), OpenAI API (for embeddings), Google Gemini API (for chat and summarization), Google Drive (if sourcing PDFs from Drive), and Qdrant API (for vector storage).
3. PDF Source Configuration: If using Google Drive, specify the folder ID in the "Search PDFs" node. Alternatively, modify the workflow to accept PDFs from other sources (e.g., direct uploads or external APIs).
4. Customize Text Processing: Adjust chunk size and overlap in the "Token Splitter" node to optimize for your document type. Choose between raw text or summarized content for RAG by toggling between the "Set page" and "Summarization Chain" nodes.
5. Test the RAG: Trigger the workflow manually or via a chat message to verify OCR, embedding, and Qdrant storage. Use the "Question and Answer Chain" node to test query responses.
6. Optional Sub-Workflows: The workflow supports execution as a sub-workflow for batch processing (e.g., handling multiple PDFs).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
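The collection settings from step 1 map to a small request body in Qdrant's REST API. A sketch, assuming a placeholder collection name (1536 is the dimension of OpenAI's standard embedding models):

```python
def qdrant_collection_request(qdrant_url, collection, size=1536, distance="Cosine"):
    # PUT /collections/{name} creates a collection via Qdrant's REST API;
    # the vector size must match the embedding model used downstream.
    url = f"{qdrant_url}/collections/{collection}"
    body = {"vectors": {"size": size, "distance": distance}}
    return "PUT", url, body
```

The "Create collection" node in the workflow sends exactly this kind of request once QDRANTURL and COLLECTION are filled in.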
by Yulia
💡 What it is for

This workflow helps you automatically discover undocumented API endpoints by analysing JavaScript files from a website's HTML code. When building automation for platforms without public APIs, we face a significant technical barrier. In a perfect world, every service would offer well-documented APIs with clear endpoints and authentication methods. But the reality is different. Before resorting to complex web scraping, let's analyse the architecture of the platform and check whether it makes internal API calls. We will examine JavaScript files embedded in the HTML source code to find and extract potential API endpoints.

⚙️ Key Features

To discover hidden API endpoints, we can apply two major approaches:

1. Predefined regex extraction: manually insert a fixed regex with the necessary conditions to extract endpoints. Unlike an LLM, which creates a custom regex for each JS file, we provide a generic expression to capture all URL strings, so we do not accidentally miss important API endpoints.
2. AI-supported extraction: ask LLMs to examine the structure of the JavaScript code. The first model will capture potential API endpoints and create a detailed description of each identified endpoint with methods and query parameters. The second LLM, connected to the AI Agent, will generate a regex for each JS file individually based on the output of the first model.

In addition to pure endpoint extraction, we supplement our analysis with:

**AI regex validation:** the AI Agent calls a validation tool to iteratively improve its regex based on the reference data.
**Results comparison:** side-by-side analysis of API endpoints extracted with a predefined regex against AI-supported results.

✅ Requirements

OpenRouter API access: for AI-powered analysis (Gemini + Claude models by default)
Minimal setup: simply configure the target URL and run
Platforms: JS files must be accessible and contain standard API endpoint patterns (/api/, /v1/, etc.)
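The predefined-regex approach from point 1 can be sketched in a few lines. This is a deliberately generic pattern in the spirit the workflow describes, not the exact expression it ships with:

```python
import re

# Generic pattern: any quoted string starting with /api/ or /v<N>/.
# Intentionally broad, so no endpoint is accidentally missed.
ENDPOINT_RE = re.compile(r"""["'](/(?:api|v\d+)/[A-Za-z0-9_\-./{}:]*)["']""")

def extract_endpoints(js_source):
    """Return the unique endpoint-like strings found in one JS file."""
    return sorted(set(ENDPOINT_RE.findall(js_source)))
```

For example, `extract_endpoints('fetch("/api/users")')` picks up `/api/users`; the AI-supported branch instead tailors a pattern like this per JS file.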
💪 Use Cases

📚 API documentation: create complete endpoint descriptions for internal APIs
🚀 Automation & integration projects: find the APIs you need when official documentation is missing
🛠 Web scraping projects: discover data access patterns
🔍 Security research: map attack surfaces and explore unprotected endpoints

🎉 Extracted the endpoints, what now?

To execute API requests, we often need additional information such as query parameters or JSON body data. One way to find out exactly how a request is made on the platform is to open the Network tab in the browser Dev Tools while interacting with the platform. Look for anything that resembles API requests and review the request/response headers, payload and query parameters. Alternatively, you can also check the JS file and the page source code for the required values.

✨ Inspiration

As a guitarist who also builds workflows, I wanted to automate communication with the booking platform I use in my music project. While trying to connect to the platform from n8n, I ran into a challenge: no public APIs. Fortunately, I found out that the platform I work with was built as a modern web app with client-side JavaScript that contained information about the API structure. This led me to the topic of hidden API endpoints and eventually to this workflow. It is part of my music booking project, which I presented at the n8n Community Meetup in Berlin on 22 May 2025.
by Jonathan
Task: Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why: You can prototype or replace a backend process with a single workflow
Main use cases: Replace backend logic with a workflow
by digi-stud.io
Airtable Hierarchical Record Fetcher

Description

This n8n workflow retrieves an Airtable record along with its related child records in a hierarchical structure. It can fetch up to 3 levels of linked records and assembles them into a comprehensive JSON object, making it ideal for complex data relationships and nested record structures.

Features

**Multi-level Record Fetching**: Retrieves the parent record, linked child records (level 2), and optionally grandchild records (level 3)
**API Call Optimization**: Uses Airtable's filterByFormula to minimize API calls by fetching multiple related records in single requests
**Selective Level 3 Fetching**: Only fetches level 3 records for specified linked fields to optimize performance
**Rich Text Processing**: Converts Airtable's pseudo-markdown rich text fields to HTML format
**Hierarchical JSON Output**: Organizes all data in a structured, nested JSON format
**Flexible Configuration**: Customizable depth and field selection per execution

Input Parameters

The workflow accepts a JSON array with the following structure:

[
  {
    "base_id": "appN8nPMGoLNuzUbY",
    "table_id": "tblLVOwpYIe0fGQ52",
    "record_id": "reczMh1Pp5l94HdYf",
    "level_3": [
      "fldRaFra1rLta66cD",
      "fld3FxCaYk8AVaEHt"
    ],
    "to_html": true
  }
]

Parameter Details

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| base_id | string | Yes | Airtable base identifier |
| table_id | string | Yes | Airtable table identifier for the main record |
| record_id | string | Yes | Airtable record identifier to fetch |
| level_3 | array | No | Array of field IDs from level 2 records for which to fetch level 3 children |
| to_html | boolean | No | Convert rich text fields from pseudo-markdown to HTML (default: false). Requires the marked npm package. |

Output Structure

The workflow returns a hierarchical JSON object with the following structure:

{
  "id": "recXXXXXXX",
  "field_1": ...,
  "field_2": ...,
  "level2_child": [
    {
      "id": "recXXXXXXX",
      "field_a": ...,
      "field_b": ...,
      "level3_child": [
        {
          "id": "recXXXXXXX",
          "field_y": ...,
          "field_z": ...
        },
        ...
      ]
    },
    ...
  ]
}
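The filterByFormula optimization listed under Features works by collapsing N single-record lookups into one list request. A sketch of how such a formula can be built (RECORD_ID() and OR() are real Airtable formula functions; the quoting here is minimal and illustrative):

```python
def record_ids_formula(record_ids):
    # One GET .../v0/{base}/{table}?filterByFormula=OR(RECORD_ID()='...',...)
    # fetches all linked child records instead of one request per record.
    clauses = ",".join(f"RECORD_ID()='{rid}'" for rid in record_ids)
    return f"OR({clauses})"
```

This is how the workflow keeps API-call counts low when a level-2 field links to many child records.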
by Caio Garvil
Automate Colombian Cashflow Data Extraction to Google Sheets with AI

Who's it for

This workflow is designed for finance professionals, accountants, small business owners in Colombia, or anyone needing to automate the extraction of invoice data and its entry into Google Sheets. It's particularly useful for handling Colombian tax and legal specifics.

How it works / What it does

This workflow automates the process of extracting critical data from invoices and receipts (PDFs and JPEGs) and organizing it in a Google Sheet:

**Triggers:** The workflow initiates when a new file is created or an existing file is updated in a designated Google Drive folder.
**File Handling:** It first downloads the detected file.
**Routing:** A "Switch" node intelligently routes the file based on its extension – one path for PDFs and another for JPEGs.
**Data Extraction:** For PDF files, it directly extracts all text content from the document. For JPEG image files, it utilizes an AI Agent (Azure OpenAI) to process the image and extract its textual content.
**AI-Powered Reasoning:** Two "Reasoning Agent" nodes (Azure OpenAI Chat Models) act as a specialized "Colombian Tax and Legal Extraction Agent". They parse the extracted text from invoices to pull out structured data in JSON format, including the vendor name, modification date, line items with detailed description, sub_total, iva_value, total_amount, category, and sub_category, specific Colombian tax fields like Retefuente and ReteICA, and the number of items generated.
**Output Parsing:** A "Structured Output Parser" node ensures that the AI's output strictly adheres to a predefined JSON schema, guaranteeing consistent data formatting.
**Data Preparation:** "Edit Field" nodes ensure the AI's extracted data is in a valid format.
**Item Splitting:** "Split data" nodes separate the 'items' array from the AI's output, allowing each individual line item from the invoice to be processed as a separate entry for the Google Sheet.
**Google Sheet Integration:** Finally, "Fill Template" nodes append the fully processed invoice data (per line item) into your designated Google Sheet.

How to set up

1. Google Drive Credentials: Ensure you have configured your Google Drive OAuth2 API credentials in n8n.
2. Azure OpenAI Credentials: Set up your Azure OpenAI API credentials, ensuring access to models like gpt-4o. Alternatively, you can use the standard OpenAI API or other LLMs.
3. Google Sheets Credentials: Configure your Google Sheets OAuth2 API credentials.
4. Google Drive Folder ID: In the "1a. Updated file trigger" and "1b. Created file trigger" nodes, update the folderToWatch parameter with your specific Google Drive Folder ID.
5. Google Sheet ID and Sheet Name: In the "8. Fill Template" and "8. Fill Template1" nodes, update the documentId and sheetName parameters with your specific Google Sheet ID and the name of the sheet where data should be appended.

Requirements

An active n8n instance.
A Google Drive account for file uploads.
A Google Sheets account for data storage.
An Azure OpenAI account with access to chat models (e.g., gpt-4o) for the "Azure OpenAI Chat Model" nodes and "Extract Data Agent".

How to customize the workflow

**AI Extraction Prompts:** Modify the prompt instructions in the "5. Reasoning Agent" and "5. Reasoning Agent1" nodes if you need to extract different data points or change the output format.
**Google Sheet Column Mappings:** Adjust the column mappings in the "8. Fill Template" and "8. Fill Template1" nodes to match your specific Google Sheet headers and data requirements.
**File Types:** Extend the "3. Route" node to handle additional file types (e.g., DOCX, PNG) by adding new conditions and corresponding extraction nodes.
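To illustrate what the Structured Output Parser and "Split data" steps produce, one parsed invoice might look roughly like this. The vendor, values, and top-level key names are invented for illustration; the line-item fields follow the data points listed above, and the real schema lives in the parser node.

```python
# Hypothetical example of the parsed output for one invoice (IVA at 19%).
parsed_invoice = {
    "vendor_name": "Ferretería Ejemplo S.A.S.",  # invented vendor
    "modification_date": "2024-05-17",
    "retefuente": 12500,   # Colombian withholding tax, COP
    "reteica": 4140,
    "number_of_items": 2,
    "items": [
        {"description": "Cemento gris 50kg", "sub_total": 52000,
         "iva_value": 9880, "total_amount": 61880,
         "category": "Materiales", "sub_category": "Construcción"},
        {"description": "Transporte", "sub_total": 30000,
         "iva_value": 5700, "total_amount": 35700,
         "category": "Servicios", "sub_category": "Logística"},
    ],
}

# The "Split data" step then emits one Google Sheet row per line item:
rows = [{"vendor_name": parsed_invoice["vendor_name"], **item}
        for item in parsed_invoice["items"]]
```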
by Solomon
Enhance your data analysis by connecting an AI Agent to your dataset, using n8n tools. This template teaches you how to build an AI Data Analyst Chatbot that is capable of pulling data from your sources, using tools like Google Sheets or databases. It's designed to be easy and efficient, making it a good starting point for AI-driven data analysis. You can easily replace the current Google Sheets tools with databases like Postgres or MySQL.

How It Works

The core of the workflow is the AI Agent. It's connected to different data retrieval tools to get data from Google Sheets (or your preferred database) in many different ways. Once the data is retrieved, the Calculator tool allows the AI to perform mathematical operations, making your data analysis precise.

Who is this template for

**Data Analysts & Researchers:** Pull data from different sources and perform quick calculations.
**Developers & AI Enthusiasts:** Learn to build your first AI Agent with easy dataset access.
**Business Owners:** Streamline your data analysis with AI insights and automate repetitive tasks.
**Automation Experts:** Enhance your automation skills by integrating AI with your existing databases.

How to Set Up

You can find detailed instructions in the workflow itself. Check out my other templates 👉 https://n8n.io/creators/solomon/
by Omer Fayyaz
AI Recipe Generator from Pantry Items using FatSecret API

This workflow creates an intelligent WhatsApp cooking assistant that transforms pantry ingredients into personalized recipe suggestions using AI and the FatSecret Recipes API.

What Makes This Different:

**AI-Powered Recipe Discovery** - Uses Google Gemini AI to understand user intent and dietary preferences
**Smart Ingredient Analysis** - Automatically extracts ingredients, dietary restrictions, and cooking constraints
**FatSecret API Integration** - Leverages a comprehensive recipe database with nutritional information
**WhatsApp Native Experience** - Seamless chat interface for recipe discovery
**Contextual Memory** - Remembers conversation context for a better user experience
**Intelligent Parameter Mapping** - AI automatically maps user requests to API parameters

Key Benefits of AI-Driven Architecture:

**Natural Language Understanding** - Users can describe what they have in plain English
**Personalized Recommendations** - Considers dietary restrictions, time constraints, and preferences
**Eliminates Manual Search** - No need to manually input specific ingredients or filters
**Scalable Recipe Database** - Access to thousands of recipes through the FatSecret API
**Conversational Interface** - Natural chat flow instead of form-based inputs
**Smart Context Management** - Remembers previous requests for better follow-up suggestions

Who's it for

This template is designed for food delivery services, meal planning apps, nutritionists, cooking enthusiasts, and businesses looking to provide intelligent recipe recommendations. It's perfect for companies that want to engage customers through WhatsApp with personalized cooking assistance, helping users discover new recipes based on available ingredients and preferences.

How it works / What it does

This workflow creates an intelligent WhatsApp cooking assistant that transforms simple ingredient lists into personalized recipe suggestions.
The system:

1. Receives WhatsApp messages through webhook triggers
2. Analyzes user input using Google Gemini AI to extract ingredients, dietary needs, and preferences
3. Maps user requests to FatSecret API parameters automatically
4. Searches the recipe database based on extracted criteria (ingredients, calories, time, cuisine, etc.)
5. Processes API results to format recipe suggestions with images and nutritional info
6. Maintains conversation context using a memory buffer for a better user experience
7. Sends formatted responses back to users via WhatsApp

Key Innovation: **AI-Powered Parameter Extraction** - Unlike traditional recipe apps that require users to fill out forms or select from predefined options, this system understands natural language requests and automatically maps them to the appropriate API parameters, making recipe discovery as simple as texting a friend.

How to set up

1. Configure WhatsApp Business API
Set up WhatsApp Business API credentials
Configure webhook endpoints for message reception
Set up phone number ID and recipient handling
Ensure proper message sending permissions

2. Configure Google Gemini AI
Set up Google Gemini (PaLM) API credentials
Ensure proper API access and quota limits
Configure the AI model for recipe-related conversations
Test the AI's understanding of cooking terminology

3. Configure FatSecret API
Set up FatSecret OAuth2 API credentials
Ensure access to the Recipes Search v3 endpoint
Configure proper authentication and rate limiting
Test API connectivity and response handling

4. Set up Memory Management
Configure the memory buffer for conversation context
Set appropriate session key mapping for user identification
Adjust context window length based on expected conversation depth
Test memory persistence across multiple messages

5. Test the Integration
Send test messages through WhatsApp to verify end-to-end functionality
Test various ingredient combinations and dietary restrictions
Verify recipe suggestions are relevant and properly formatted
Check that context memory works across multiple interactions

Requirements

**WhatsApp Business API** account with webhook capabilities
**Google Gemini AI** API access for natural language processing
**FatSecret API** credentials for recipe database access
**n8n instance** with proper webhook and HTTP request capabilities
**Active internet connection** for real-time API interactions

How to customize the workflow

Modify Recipe Search Parameters
Adjust the number of results returned (currently set to 5)
Add more filtering options (cuisine types, cooking methods, difficulty levels)
Implement pagination for browsing through more recipe options
Add sorting preferences (newest, oldest, calorie-based, popularity)

Enhance AI Capabilities
Train the AI on specific dietary restrictions or cuisine preferences
Add support for multiple languages
Implement recipe rating and review integration
Add nutritional goal tracking and meal planning features

Expand Recipe Sources
Integrate with additional recipe APIs (Spoonacular, Edamam, etc.)
Add support for user-generated recipes
Implement recipe bookmarking and favorites
Add shopping list generation from selected recipes

Improve User Experience
Add recipe step-by-step instructions
Implement cooking timer and progress tracking
Add recipe sharing capabilities
Implement user preference learning over time

Business Features
Add recipe monetization options
Implement affiliate marketing for ingredients
Add restaurant delivery integration
Implement meal kit subscription services

Key Features

**Natural language processing** - Understands cooking requests in plain English
**Intelligent parameter mapping** - AI automatically extracts search criteria
**Comprehensive recipe database** - Access to thousands of recipes via the FatSecret API
**WhatsApp native interface** - Seamless chat experience for recipe discovery
**Contextual memory** - Remembers conversation history for better recommendations
**Dietary restriction support** - Handles allergies, preferences, and special diets
**Nutritional information** - Provides calorie counts and macro details
**Image integration** - Shows recipe photos when available

Technical Architecture Highlights

AI-Powered Processing
**Google Gemini integration** - Advanced natural language understanding
**Smart parameter extraction** - Automatic mapping of user requests to API calls
**Contextual memory** - Conversation history management for a better user experience
**Intelligent fallbacks** - Graceful handling of unclear or incomplete requests

API Integration Excellence
**FatSecret Recipes API** - Comprehensive recipe database with nutritional data
**OAuth2 authentication** - Secure and reliable API access
**Parameter optimization** - Efficient API calls with relevant search criteria
**Response processing** - Clean formatting of recipe suggestions

WhatsApp Integration
**Webhook-based triggers** - Real-time message reception
**Message formatting** - Clean, readable recipe presentations
**User identification** - Proper session management for multiple users
**Error handling** - Graceful fallbacks for failed operations

Performance Optimizations
**Efficient API calls** - Single request per user message
**Memory management** - Optimized conversation context storage
**Response caching** - Reduced API calls for repeated requests
**Scalable architecture** - Handles multiple concurrent users

Use Cases

**Food delivery platforms** requiring recipe recommendation engines
**Meal planning services** needing ingredient-based recipe discovery
**Nutrition and wellness apps** requiring dietary-specific suggestions
**Cooking schools** offering personalized recipe guidance
**Grocery stores** helping customers plan meals around available ingredients
**Restaurant chains** providing recipe inspiration for home cooking
**Health coaches** offering personalized meal suggestions
**Social cooking communities** sharing recipe ideas and inspiration

Business Value

**Customer Engagement** - Interactive recipe discovery increases user retention
**Personalization** - AI-driven recommendations improve user satisfaction
**Operational Efficiency** - Automated recipe suggestions reduce manual support
**Revenue Generation** - Recipe recommendations can drive ingredient sales
**Brand Differentiation** - AI-powered cooking assistant sets services apart
**Data Insights** - User preferences provide valuable market intelligence
**Scalability** - Handles multiple users simultaneously without performance degradation

This template revolutionizes recipe discovery by combining the power of AI natural language processing with comprehensive recipe databases, creating an intuitive WhatsApp experience that makes cooking inspiration as simple as having a conversation with a knowledgeable chef friend.
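The "intelligent parameter mapping" step can be pictured as a small translation from the AI agent's structured output to query parameters for FatSecret's recipes.search.v3 method. The shape of the extracted object and the calorie-filter parameter name are assumptions for illustration, not FatSecret's documented names:

```python
def fatsecret_search_params(extracted, max_results=5):
    # 'extracted' is the Gemini agent's structured output (assumed shape):
    # {"ingredients": [...], "max_calories": int | None, "cuisine": str | None}
    params = {
        "method": "recipes.search.v3",
        "format": "json",
        "max_results": max_results,  # the workflow currently returns 5 results
        "search_expression": " ".join(extracted.get("ingredients", [])),
    }
    if extracted.get("max_calories"):
        params["calories.to"] = extracted["max_calories"]  # assumed filter name
    if extracted.get("cuisine"):
        params["search_expression"] += f" {extracted['cuisine']}"
    return params
```

A message like "what can I cook with chicken and rice under 600 calories" would thus become one search call rather than a form the user has to fill in.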
by Thibaud
Get a personalized list of garage sales happening today, based on your current location, directly in Telegram each morning! This n8n workflow integrates Home Assistant and Brocabrac.fr to:

Automatically detect your location every day
Scrape and parse garage sale listings from Brocabrac
Filter for high-quality and nearby events
Send a neatly formatted message to your Telegram account

Perfect for treasure hunters and second-hand enthusiasts who want to stay in the loop with zero effort!
by Wolfgang Renner
🧠 Business Card Scanner – Automate Contact Extraction

This workflow automates the process of extracting contact details from business cards (PDF or image) and saving them directly into an n8n Data Table. No more manual data entry — just upload a card and let AI do the rest.

⚙️ How It Works

1. Upload the business card via a web form (PDF or image).
2. The uploaded file is converted to Base64 for processing.
3. The Base64 data is sent to the Mistral OCR API, which extracts text from the image.
4. The OCR output is parsed into JSON.
5. An AI Agent (OpenAI GPT-4o-mini) interprets the extracted text and converts it into structured business card information (e.g., name, company, email, phone).
6. The Structured Output Parser validates and aligns the data with a predefined schema.
7. The workflow upserts (inserts or updates) the contact details into an n8n Data Table named business_cards, using the email address as the unique identifier.

✅ Result: Seamless digitization of business cards into structured, searchable contact data.

🧩 Prerequisites

Before importing the workflow, make sure you have the following:

n8n instance with access to the Data Table feature
OpenAI Platform account and API key (configured in n8n)
Mistral AI account and API key (configured in n8n)

🛠️ Setup Steps

1. Import the Workflow: Download and import the JSON file into your n8n instance.
2. Create a Data Table: Name it business_cards (or adjust the workflow accordingly) and add the following fields: firstname, name, company, jobdescription, phone, mobil, email, street, postcode, place, web.
3. Configure API Credentials: Mistral OCR API → add your API key under HTTP Bearer Auth. OpenAI API → add your API key under OpenAI Credentials. Model: gpt-4o-mini (recommended for speed and low cost).
4. Activate the Web Form Trigger: Enable the trigger node to make the business card upload form accessible via a public URL.
5. Test the Workflow: Upload a sample business card and confirm that the extracted contact data automatically appears in your Data Table.
💡 Example JSON Output

{
  "firstname": "Anna",
  "name": "Müller",
  "company": "NextGen Tech GmbH",
  "jobdescription": "Head of Marketing",
  "email": "anna.mueller@nextgen.tech",
  "phone": "+49 821 1234567",
  "mobil": "+49 170 9876543",
  "street": "Schillerstraße 12",
  "postcode": "86150",
  "place": "Augsburg",
  "web": "https://nextgen.tech"
}
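Step 2 of How It Works (file → Base64) boils down to a few lines. Whether Mistral's OCR endpoint expects a full data URL or a bare Base64 string depends on the API version you call, so treat the exact format below as an assumption:

```python
import base64

def to_data_url(file_bytes, mime="application/pdf"):
    # Encode the uploaded card as a Base64 data URL for the OCR request body
    b64 = base64.b64encode(file_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"
```

In the workflow this conversion happens in a node between the web form trigger and the HTTP request to Mistral.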
by Ludwig
How it Works

As n8n instances scale, teams often lose track of sub-workflows—who uses them, where they are referenced, and whether they can be safely updated. This leads to inefficiencies like unnecessary copies of workflows or reluctance to modify existing ones. This workflow solves that problem by:

1. Fetching all workflows and identifying which ones execute others.
2. Verifying that referenced sub-workflows exist.
3. Building a caller-subworkflow dependency graph for visibility.
4. Automatically tagging sub-workflows based on their parent workflows.
5. Providing a chart visualization to highlight the most-used sub-workflows.

Set Up Steps

Estimated time: ~10–15 minutes

1. Set up n8n API credentials to allow access to workflows and tags.
2. Replace instance_url with your n8n instance URL.
3. Run the workflow to analyze dependencies and generate the graph.
4. Review and validate the assigned tags for sub-workflows.
5. (Optional) Enable the pie chart visualization to see the most-used sub-workflows.

This workflow is essential for enterprise teams managing large n8n instances, preventing workflow duplication, reducing uncertainty around dependencies, and allowing safe, informed updates to sub-workflows.
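The "identify which workflows execute others" step amounts to scanning each workflow's node list (as returned by the n8n API) for Execute Workflow nodes. A sketch, assuming the standard node type name and noting that the shape of the workflowId parameter varies across n8n versions:

```python
def find_subworkflow_calls(workflow):
    """Return IDs of sub-workflows referenced by Execute Workflow nodes."""
    calls = []
    for node in workflow.get("nodes", []):
        # "n8n-nodes-base.executeWorkflow" is the Execute Workflow node type;
        # newer n8n versions may wrap workflowId in an object instead of a string.
        if node.get("type") == "n8n-nodes-base.executeWorkflow":
            ref = node.get("parameters", {}).get("workflowId")
            if ref:
                calls.append(ref)
    return calls
```

Running this over every workflow returned by the API yields the caller → sub-workflow edges of the dependency graph.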