by DataAnts
# Dynamically Run SuiteQL Queries in NetSuite via HTTP Webhook in n8n

> **Important:** This template uses a NetSuite community node, so it only works on self-hosted n8n. Cloud-based n8n instances currently do not support community nodes.

## Summary

This workflow template allows you to dynamically run SuiteQL queries in NetSuite by sending an HTTP request to an n8n Webhook node. Once triggered, the workflow uses token-based authentication to execute your SuiteQL query and returns the results as JSON. This makes it easy to integrate real-time NetSuite data into dashboards, reporting tools, or other applications.

## Who Is This For?

- **Developers & Integrators**: Easily embed NetSuite data retrieval into custom apps or internal tools.
- **Enterprises & Consultants**: Integrate dynamic reporting or data enrichment from NetSuite without manual exports.
- **System Administrators**: Automate routine queries and reduce manual intervention.

## Use Cases & Benefits

1. **Dynamic Data Access**: Send any SuiteQL query on demand instead of hardcoding queries or manually running reports.
2. **Seamless Integration**: Quickly pull NetSuite data into front-end systems (like Excel dashboards, custom web apps, or internal tools) by calling the webhook endpoint.
3. **Simplified Reporting**: Automate data extraction and formatting, reducing the need for manual exports and improving efficiency.

## How It Works

1. **Trigger**: An HTTP request to the webhook node initiates the workflow.
2. **Input Processing**: The workflow reads the SuiteQL query from the incoming request parameter (`suiteql`).
3. **Query Execution**: The NetSuite node uses your token-based authentication credentials to run the SuiteQL query.
4. **Response**: Results are returned as JSON in the HTTP response, ready for further processing or immediate consumption.

## Prerequisites & Setup

- **NetSuite Community Node**: This workflow requires the [netsuite community node](https://www.npmjs.com/package/n8n-nodes-netsuite). Make sure your self-hosted n8n instance supports community nodes.
- **NetSuite Token-Based Authentication**: Enable TBA in NetSuite and obtain the required consumer key, consumer secret, token ID, and token secret.
- **n8n Webhook**: Copy the auto-generated webhook URL (e.g. `http://<your-n8n-domain>/webhook/unique-id`) from the Webhook node.

## Usage

Send an HTTP GET or POST request to the webhook with your SuiteQL query (see the Node.js sketch at the end of this description for a programmatic example). For example:

```
curl "http://<your-n8n-domain>/webhook/unique-id?suiteql=SELECT%20*%20FROM%20account%20LIMIT%2010"
```

The workflow will execute the query and return JSON data.

## Customization

- **Change the Query**: Simply adjust the `suiteql` parameter in your HTTP request to run different SuiteQL statements.
- **Data Transformation**: Insert nodes (e.g., Function, Set, or Format) to modify or reformat the data before returning it.
- **Extend Integration**: Chain additional nodes to push the retrieved data to other services (Google Sheets, Slack, custom dashboards, etc.).

## Additional Notes

Remember that this template is only compatible with self-hosted n8n because it uses a community node.

If you have questions, suggestions, or need support, contact us at support@dataants.org.
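For teams calling the webhook from application code rather than curl, the request looks like the following. This is a minimal Node.js sketch, not part of the template itself: the domain and webhook path are placeholders from the example above, and the response shape depends on your query.

```javascript
// Minimal sketch: call the n8n webhook with a SuiteQL query (Node.js 18+, built-in fetch).
// Replace the placeholder domain/path with your own webhook URL.
const webhookUrl = "http://<your-n8n-domain>/webhook/unique-id"; // placeholder
const suiteql = "SELECT id, companyname FROM customer LIMIT 10";

async function runQuery() {
  const url = `${webhookUrl}?suiteql=${encodeURIComponent(suiteql)}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Webhook returned ${response.status}`);
  const rows = await response.json(); // JSON rows as returned by the workflow
  console.log(rows);
}

runQuery().catch(console.error);
```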
by Ficky
# Build a Redis-Powered CRUD App with HTML Frontend

This workflow demonstrates how to use n8n to build a complete, self-contained CRUD (Create, Read, Update, Delete) application without relying on any external server or hosting. It not only acts as the backend, handling all CRUD operations through Webhook endpoints, but also serves a fully functional HTML Single Page Application (SPA) directly via a webhook response. Redis is used as a lightweight data store, providing fast and simple key-value storage with auto-incremented IDs.

Because both the frontend (HTML app) and backend (API endpoints) are managed entirely within a single n8n workflow, you can quickly prototype or deploy small tools without additional infrastructure. This approach is ideal for:

- Rapidly creating no-code or low-code applications
- Running fully browser-based tools served directly from n8n
- Teaching or demonstrating n8n + Redis integration in a single workflow

## Features

- Add new item with auto-incremented ID
- Edit existing item
- Delete specific item
- Reset all data (clear storage and reset the auto-increment ID)
- Single HTML frontend for demonstration (no framework required)

## Setup Instructions

### 1. Prerequisites

Before importing and running the workflow, make sure you have:

- A running n8n instance (self-hosted or cloud)
- A running Redis server (local or remote)

### 2. API Path Setup

For the REST API, use a consistent path. For example, if you choose `items` as the path:

| Endpoint | Method | Path |
|---|---|---|
| 2a. Get All Items | GET | `items` |
| 2b. Add Item | POST | `items` |
| 2c. Edit Item | PUT | `items` |
| 2d. Delete Item | DELETE | `items` |
| 2e. Reset Items | POST | `items-reset` |

### 3. Configure the API URL

Set the API URL in the **SET API URL** node. Use your n8n webhook URL, for example: `https://yourn8n.com/webhook/items`

### 4. Run the HTML App

Once everything is set:

1. Open the webhook URL for the HTML app in a browser.
2. The CRUD interface will load and connect to the API endpoints automatically.
3. You can now add, edit, delete, or reset items directly from the web interface.

## Workflows

### 1. Render the HTML CRUD App

This webhook serves a self-contained HTML Single Page Application (SPA) for basic CRUD operations. The HTML content is returned directly in the webhook response. This setup is ideal for lightweight, browser-based tools without external hosting.

How to use:

- Open the webhook URL in a browser.
- The CRUD interface will load and connect to the data source via API calls.
- Before using, make sure to edit the `api_url` in the **SET API URL** node to match your webhook endpoint.

### 2a. REST API: Get All Items

This webhook retrieves all saved items from Redis. Each item is returned with its corresponding ID and associated data (e.g., name). This endpoint is used by the HTML CRUD App to display the full list of items.

- **Method**: GET
- **Function**: Fetches all items stored in Redis and returns them as a JSON array

### 2b. REST API: Add Item

This webhook handles the Add Item functionality. It is typically called by the HTML CRUD App when adding a new item (see the sketch at the end of this description).

- **Method**: POST
- **Request Body**: `{ "name": "item name" }`
- **Function**: Generates an auto-incremented ID using Redis and saves the data under that ID

### 2c. REST API: Edit Item

This webhook updates an existing item in Redis.

- **Method**: PUT
- **Request Body**: `{ "id": 1, "name": "Updated Item Name" }`
- **Function**: Finds the item by the given `id` and updates its data in Redis

### 2d. REST API: Delete Item

This webhook deletes a specific item from Redis.

- **Method**: DELETE
- **Request Body**: `{ "id": 1 }`
- **Function**: Removes the item with the given `id` from Redis

### 2e. REST API: Reset Items

This webhook resets all data in the application.

- **Method**: POST
- **Function**: Deletes all stored items from Redis and resets the auto-increment ID by deleting its key in Redis
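For reference, the add-item pattern described in 2b maps to two Redis commands: `INCR` to allocate the next ID and a key write to store the item. Below is a minimal sketch using the node-redis client; the key names (`items:next_id`, `item:<id>`) are illustrative assumptions, not necessarily the exact keys this workflow uses.

```javascript
// Minimal sketch of the 2b Add Item pattern with node-redis.
// Key names are illustrative assumptions, not the workflow's exact keys.
import { createClient } from "redis";

async function addItem(name) {
  const redis = createClient(); // defaults to redis://localhost:6379
  await redis.connect();

  const id = await redis.incr("items:next_id");             // auto-incremented ID
  await redis.set(`item:${id}`, JSON.stringify({ name }));  // store item under its ID

  await redis.quit();
  return { id, name };
}

addItem("item name").then(console.log).catch(console.error);
```

Resetting (2e) is then just deleting the `item:*` keys plus the `items:next_id` counter, which makes the next `INCR` start from 1 again.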
by Mal Chia
## Who's it for

This workflow is perfect for HR teams, recruiters, or hiring managers who collect applicant information via a web form and want to automatically forward both candidate details and attached resumes into a dedicated Telegram channel or group. It replaces manual email checks, speeding up review and collaboration.

## How it works

1. **On form submission**: A Form Trigger node captures all applicant fields (name, age, WhatsApp number, education, desired role, availability date, expected salary, resume file, and additional comments).
2. **Date & Time**: Formats the "fastest start date" into a human-readable string.
3. **Edit Fields**: A Set node renames and reshapes incoming JSON into clear key/value pairs.
4. **If Have Resume**: An If node routes submissions with an attached resume to one branch (sending both info and document) and submissions without a resume to a simpler info-only branch.
5. **Merge**: Combines branches so both message types terminate in a single unified flow.
6. **Send a Resume & Send a Info**: Two Telegram nodes post Markdown-formatted messages (and the PDF resume when available) to your specified Telegram chat.

## How to set up

1. Install and enable the `n8n-nodes-base.formTrigger` and `n8n-nodes-base.telegram` community nodes (preview).
2. Copy this JSON into your n8n instance (Workflow → Import from clipboard).
3. Create environment variables for credentials:
   - `TELEGRAM_BOT_TOKEN`
   - `TELEGRAM_CHAT_ID`
4. In each Telegram node, reference these variables instead of hard-coding them (`{{$env.TELEGRAM_BOT_TOKEN}}`, `{{$env.TELEGRAM_CHAT_ID}}`).

## Requirements

- n8n version ≥ 0.200.0
- Community nodes: Form Trigger, Telegram
- A Telegram bot with chat permissions
- A hosted form endpoint or embedded form at path `/mmc-newjob`

## How to customize the workflow

- **Form fields**: Edit the Form Trigger node's `formFields.values` to add or remove fields.
- **Telegram formatting**: Tweak captions under **Send a Resume** and **Send a Info** to adjust the MarkdownV2 styling (a sketch of an escaping helper follows at the end of this description).
- **Conditional logic**: Modify the **If Have Resume** node to branch on other criteria (e.g., education level).
- **Styling**: Update the `customCss` section in **Form Trigger** to match your brand's look.

## Good to know

- Community nodes may be in preview; test thoroughly before production.
- Webhook URLs change when you rename the workflow; revisit your form's embed or webhook settings after renaming.
- Consider adding an Error Trigger node to capture failures and notify your team.
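One practical note on the MarkdownV2 styling mentioned above: Telegram's MarkdownV2 mode rejects messages containing unescaped special characters, which applicant-supplied text (names, comments) often includes. The following escaping helper is a hypothetical Code-node snippet, not part of the original template; it sketches how you could sanitize fields before they reach the Telegram nodes.

```javascript
// Hypothetical Code-node helper (not in the original template): escape
// applicant-supplied text for Telegram MarkdownV2 before building captions.
function escapeMarkdownV2(text) {
  // Characters Telegram requires to be escaped in MarkdownV2 text.
  return String(text).replace(/[_*\[\]()~`>#+\-=|{}.!]/g, (ch) => "\\" + ch);
}

// Example: build a safe caption from form fields ('name' and 'desiredRole'
// are assumed field names for illustration).
const item = $input.item.json;
return {
  json: {
    caption: `*New applicant:* ${escapeMarkdownV2(item.name)}\n` +
             `*Role:* ${escapeMarkdownV2(item.desiredRole)}`,
  },
};
```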
by InfraNodus
This template can be used to find the content gaps in PDF documents using the InfraNodus knowledge graph / GraphRAG text representation and then generate ideas / questions / AI prompts that bridge those gaps based on optimizing the knowledge graph's structure. Simply upload several PDF files (research papers, corporate or market reports, etc.) and generate an idea in seconds.

The template is useful for:

- generating ideas / questions for research
- generating content ideas based on competitors' discourse
- finding blind spots in any discourse and generating ideas that address them
- avoiding the generic bias of LLM models and focusing on what's important in your particular context

## What are Content Gaps and Knowledge Graphs?

Knowledge graphs represent any text as a network: the main concepts are the nodes, their co-occurrences are the connections between them. Based on this representation, we build a graph and apply network science metrics to rank the most important nodes (concepts) that serve as the crossroads of meaning and also the main topical clusters that they connect. Naturally, some of the clusters will be disconnected and will have gaps between them. These are the topics (groups of concepts) that exist in this context (the documents you uploaded) but that are not very well connected. Addressing those gaps can help you see which groups of concepts you could connect with your own ideas. This is exactly what InfraNodus does: builds the structure, finds the gaps, then uses the built-in AI to generate research questions and ideas that bridge those gaps.

## How it works

1. **Step 1**: First, you upload your PDF files using an online web form, which you can run from n8n or even make publicly available.
2. **Steps 2-4**: The documents are processed using the Code and PDF to Text nodes to extract plain text from them (see the sketch below).
3. **Step 5**: This text is then sent to the InfraNodus GraphRAG node that creates a knowledge graph, identifies structural gaps in this graph, and then uses built-in AI to generate ideas or research questions / prompts (if you use the InfraNodus question module instead).
4. **Step 6**: The ideas are then shown to the user in the same web form.

Optionally, you can hook this template to your own workflow and send the generated idea / question to your own AI model / agent for further processing. If you'd like to sync this workflow to PDF files in a Google Drive folder, you can copy our Google Drive PDF processing workflow for n8n.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key.
3. Add this key into the InfraNodus GraphRAG HTTP node(s) you use in this workflow.

You do not need any OpenAI keys for this to work. Optionally, you can change the settings in Step 4 of this workflow to enforce always using the biggest gap it identifies.

## Requirements

- An InfraNodus account and API key
- Note: an OpenAI key is not required. You will have direct access to the InfraNodus AI with the API key.

## Customizing this workflow

You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).
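If you're adapting Steps 2-4 for your own pipeline, the text-extraction stage boils down to concatenating the plain text of each uploaded PDF into one document before sending it to InfraNodus. Here is a minimal Code-node sketch under the assumption that an upstream PDF-to-Text / Extract From File node has put the extracted text on a `text` field; the field name is an assumption, not necessarily what this template uses.

```javascript
// Hypothetical Code-node sketch (field name assumed): merge the extracted
// text of all uploaded PDFs into a single string for the InfraNodus request.
const combinedText = $input
  .all()                               // one item per uploaded PDF
  .map((item) => item.json.text ?? "") // 'text' field assumed from the PDF-to-Text node
  .join("\n\n");

return [{ json: { text: combinedText } }];
```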
Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/

A full tutorial with a conceptual explanation is available at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow

Also check out the video introduction to InfraNodus to better understand how knowledge graphs and content gaps work.

For support and help with this workflow, please contact us at https://support.noduslabs.com
by David Olusola
# 🎯 JavaScript Master Class - Interactive Code Tutorial

## 📚 How It Works

This tutorial is designed as a self-paced learning experience where you explore working JavaScript code examples. Unlike traditional tutorials, you learn by examining real implementations and understanding how they work.

### 🔍 The Learning Method

1. **Execute first** - See the workflow in action
2. **Open each node** - This is where the real learning happens!
3. **Study the code** - Read JavaScript implementations and comments
4. **Understand the flow** - See how data transforms between nodes
5. **Experiment** - Modify code to test your understanding

### 🎮 The "Game" Concept

- It's not a real game - it's a gamified learning experience
- Uses RPG elements (XP, levels, achievements) to make learning engaging
- Simulates progression through 3 difficulty levels
- **Main learning happens when you open nodes and read the code!**

## 🚀 Setup Steps

### Step 1: Import the Template

1. Copy the JSON template provided
2. Open your n8n instance
3. Create a new workflow
4. Press Ctrl+A (or Cmd+A on Mac) to select all
5. Press Ctrl+V (or Cmd+V) to paste the JSON
6. Click "Save" and name it: JavaScript Master Class - Interactive Tutorial

### Step 2: Execute the Workflow

1. Click "Test workflow" or "Execute workflow"
2. Watch it run through all nodes automatically
3. See the final results and progression simulation

### Step 3: Start Learning (The Important Part!)

Now the real learning begins - you must open each node manually.

🔍 For each Code node:

- Double-click the node to open it
- Read the JavaScript code carefully
- Study the comments - they explain key concepts
- Understand the logic - how input becomes output
- Note the techniques used in each challenge

📖 For each Sticky Note:

- Read the explanations and context
- Understand the learning objectives
- Note the skills being taught

## 🎯 Learning Path

### Level 1: Data Warrior (Beginner)

📂 Open node: **🎲 Level 1: Data Warrior**

- **Focus**: Data deduplication using `filter()` and `findIndex()`
- **Key Skills**: Array methods, duplicate detection
- **What to Study**: How the deduplication algorithm works (a standalone sketch of this technique appears after the learning tips below)

### Level 2: API Ninja (Intermediate)

📂 Open node: **⚔️ Level 2: API Ninja**

- **Focus**: Data transformation and validation
- **Key Skills**: String manipulation, validation logic, error handling
- **What to Study**: How to clean and validate messy API data

### Level 3: Automation Master (Advanced)

📂 Open node: **🏆 Final Boss: Automation Master**

- **Focus**: Complex workflow processing
- **Key Skills**: Task orchestration, priority sorting, error handling
- **What to Study**: How to build robust automation systems

## 💡 Learning Tips

### 🔍 Active Exploration

- **Don't just run it** - open every single node!
- **Read all comments** - they contain key insights
- **Compare approaches** - see how complexity increases
- **Try modifications** - change values and see what happens

### 📝 Study Techniques

- **Take notes** on patterns you see
- **Copy interesting code** snippets for reference
- **Try to explain** each function to yourself
- **Test your understanding** by modifying the code

### 🧪 Experimentation

- **Change filter conditions** in Level 1
- **Modify validation rules** in Level 2
- **Adjust workflow logic** in Level 3
- **Break something** and fix it - great for learning!
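To preview the Level 1 technique before opening the node: deduplication with `filter()` and `findIndex()` keeps only the first occurrence of each value. Here is a minimal standalone sketch; the sample data and the choice of `email` as the dedup key are illustrative, not copied from the workflow's node.

```javascript
// Standalone sketch of the Level 1 technique: keep only the first
// occurrence of each record, judged by a chosen key (here: email).
// Sample data is illustrative, not taken from the workflow itself.
const records = [
  { name: "Ada", email: "ada@example.com" },
  { name: "Grace", email: "grace@example.com" },
  { name: "Ada (dup)", email: "ada@example.com" },
];

const deduplicated = records.filter(
  (record, index, all) =>
    // findIndex returns the position of the FIRST record with this email;
    // keep the record only if it IS that first occurrence.
    index === all.findIndex((other) => other.email === record.email)
);

console.log(deduplicated); // Ada and Grace; the duplicate is dropped
```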
## ⚠️ Important Notes

### 🎮 "Game" Reality Check

- This is NOT an interactive game where you make choices
- It's a code tutorial with game-like progression themes
- The "game" runs automatically when executed
- **Real learning happens when you manually open and study each node**

### 📚 Educational Value

- **Primary learning**: Understanding JavaScript implementations
- **Secondary learning**: n8n workflow patterns
- **Bonus learning**: Problem-solving approaches

### 🔧 Technical Requirements

- Working n8n instance
- Basic JavaScript knowledge helpful but not required
- Willingness to explore and experiment

## 🎯 Success Metrics

You'll know you're learning when you can:

- ✅ Explain how each deduplication algorithm works
- ✅ Identify the validation patterns used
- ✅ Understand the workflow orchestration logic
- ✅ Modify the code to handle different scenarios
- ✅ Apply these patterns to your own projects

## 🤔 Next Steps

After completing this tutorial:

1. Apply the patterns to your own workflows
2. Experiment with variations
3. Build something using these techniques
4. Share your learnings with the community

Remember: The magic happens when you open each node and study the code! 🔍
by InfraNodus
This template can be used to generate research ideas from PDF scientific papers based on the content gaps found in the text using the InfraNodus GraphRAG knowledge graph representation. Simply upload several PDF files (research papers, corporate or market reports, etc.) and the template will generate a research question, which is then sent as an AI prompt to the InfraNodus GraphRAG system, which extracts the answer from the documents. As a result, you find the gap in a collection of research papers and bridge it in a few seconds.

The template is useful for:

- advancing scientific research
- generating AI prompts that drive research further
- finding the right questions to ask to bridge blind spots in a research field
- avoiding the generic bias of LLM models and focusing on what's important in your particular context

## Using Content Gaps for Generating Research Questions

Knowledge graphs represent any text as a network: the main concepts are the nodes, their co-occurrences are the connections between them. Based on this representation, we build a graph and apply network science metrics to rank the most important nodes (concepts) that serve as the crossroads of meaning and also the main topical clusters that they connect. Naturally, some of the clusters will be disconnected and will have gaps between them. These are the topics (groups of concepts) that exist in this context (the documents you uploaded) but that are not very well connected. Addressing those gaps can help you see which groups of concepts you could connect with your own ideas. This is exactly what InfraNodus does: builds the structure, finds the gaps, then uses the built-in AI to generate research questions that bridge those gaps.

## How it works

1. **Step 1**: First, you upload your PDF files using an online web form, which you can run from n8n or even make publicly available.
2. **Steps 2-4**: The documents are processed using the Code and PDF to Text nodes to extract plain text from them.
3. **Step 5**: This text is then sent to the InfraNodus GraphRAG node that creates a knowledge graph, identifies structural gaps in this graph, and then uses built-in AI to generate research questions, which are then used as AI prompts.
4. **Step 6**: The research question is sent to the InfraNodus GraphRAG system, which represents the PDF documents you submitted as a knowledge graph and uses the generated question to come up with an answer based on the content you uploaded (see the sketch below for the hand-off).
5. **Step 7**: The ideas are then shown to the user in the same web form.

Optionally, you can derive the answers from a different set of papers, so the question is generated from one batch, but the answer is generated from another. If you'd like to sync this workflow to PDF files in a Google Drive folder, you can copy our Google Drive PDF processing workflow for n8n.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key.
3. Add this key into the InfraNodus GraphRAG HTTP node(s) you use in this workflow.

You do not need any OpenAI keys for this to work. Optionally, you can change the settings in Step 4 of this workflow to enforce always using the biggest gap it identifies.

## Requirements

- An InfraNodus account and API key
- Note: an OpenAI key is not required. You will have direct access to the InfraNodus AI with the API key.
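If you rewire Step 6 (for example, to answer from a different batch of papers, as noted above), the hand-off is simply mapping the generated question into the request body of the next InfraNodus HTTP node. Below is a hypothetical Code-node sketch; the `question` and `prompt` field names are assumptions for illustration, not the template's exact fields.

```javascript
// Hypothetical Code-node sketch: pass the question generated in Step 5
// to the answering request in Step 6. Field names are assumptions.
const question = $input.item.json.question; // output of the question-generation node

return {
  json: {
    prompt: question,                       // becomes the AI prompt for the GraphRAG answer
    requestedAt: new Date().toISOString(),  // optional metadata for logging
  },
};
```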
## Customizing this workflow

You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).

Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/

A full tutorial with a conceptual explanation is available at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow

Also check out the video introduction to InfraNodus to better understand how knowledge graphs and content gaps work.

For support and help with this workflow, please contact us at https://support.noduslabs.com
by Zach @BrightWayAI
## Who's it for

Content creators, researchers, educators, and digital marketers who need to discover high-quality YouTube training videos on specific topics. Perfect for building curated learning resource lists, competitive research, or content inspiration.

## What it does

This workflow automatically searches YouTube using multiple search queries, filters for quality content, scores videos by relevance, and exports the top results to Google Sheets. It processes hundreds of videos and delivers only the most valuable educational content, ranked by custom relevance criteria.

The workflow searches for videos using 10 different AI automation-related queries (easily customizable), filters out low-quality content like shorts and clickbait, then ranks results based on title keywords, view counts, and engagement metrics.

## How it works

1. **Multi-query search**: Searches YouTube with an array of related queries to get comprehensive coverage
2. **Content filtering**: Removes shorts, spam, and low-quality videos using regex patterns
3. **Quality assessment**: Filters videos based on view count, likes, and publication date
4. **Relevance scoring**: Assigns scores based on title keywords and engagement metrics (see the sketch at the end of this description)
5. **Result ranking**: Sorts videos by relevance score and limits to the top 50 results
6. **Export to Sheets**: Delivers clean, organized data to Google Sheets with all metadata

## Requirements

- YouTube Data API v3 credentials from Google Cloud Console
- Google Sheets credentials for your n8n workspace
- A Google Sheets document to receive the results

## How to set up

1. Enable YouTube Data API v3 in your Google Cloud Console
2. Add YouTube OAuth2 credentials to your n8n workspace
3. Add Google Sheets credentials to your n8n workspace
4. Create a Google Sheet and update the Google Sheets node with your document ID
5. Customize search queries in the "Set Query" node for your topic
6. Adjust filtering criteria in the Filter nodes based on your quality requirements

## How to customize the workflow

- **Search topics**: Modify the query array in the "Set Query" node to research any topic:

```
[
  "Python tutorial",
  "JavaScript course",
  "React beginner guide",
  // Add your queries here
]
```

- **Quality thresholds**: Adjust minimum views, likes, and date ranges in the "Filter for Quality" node
- **Relevance scoring**: Customize keyword weightings in the "Relevance Score" node to match your priorities
- **Result limits**: Change the number of final results in the "Limit" node (default: 50)
- **Output format**: Modify the "Set Fields" node to include additional YouTube metadata like duration, thumbnails, or category information

The workflow is designed to be easily adaptable for any research topic while maintaining high content quality standards.
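For orientation, a relevance-scoring step of the kind described above typically looks like the following Code-node sketch. The keyword list and weights are illustrative assumptions; the template's own "Relevance Score" node defines its own values.

```javascript
// Illustrative sketch of a relevance-scoring Code node. The keywords and
// weights below are assumptions, not the template's actual values.
const KEYWORD_WEIGHTS = {
  tutorial: 3,
  "step by step": 3,
  course: 2,
  beginner: 1,
};

return $input.all().map((item) => {
  const video = item.json;
  const title = (video.title || "").toLowerCase();

  // Title keywords: add the weight of every keyword the title contains.
  let score = Object.entries(KEYWORD_WEIGHTS)
    .filter(([keyword]) => title.includes(keyword))
    .reduce((sum, [, weight]) => sum + weight, 0);

  // Engagement: reward views and like ratio on a logarithmic scale.
  const views = Number(video.viewCount || 0);
  const likes = Number(video.likeCount || 0);
  score += Math.log10(views + 1);                 // popularity
  score += views > 0 ? (likes / views) * 10 : 0;  // engagement quality

  return { json: { ...video, relevanceScore: Number(score.toFixed(2)) } };
});
```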
by InfyOm Technologies
## ✅ What problem does this workflow solve?

If you're using a self-hosted n8n instance, there's no built-in version history or undo for your workflows. If a workflow is accidentally modified or deleted, there's no way to roll back. This backup workflow solves that problem by automatically syncing your workflows to Google Drive, giving you version control and peace of mind.

## ⚙️ What does this workflow do?

- ⏱ Runs on a set schedule (e.g., daily or every 12 hours).
- 🔍 Fetches all workflows from your self-hosted n8n instance.
- 🧠 Detects changes to avoid duplicate backups.
- 📁 Creates a dedicated folder for each workflow in Google Drive.
- 💾 Uploads new or updated workflow files in JSON format.
- 🗃️ Keeps backup history organized by date.
- 🔄 Allows for easy restore by importing backed-up JSON into n8n.

## 🔧 Setup Instructions

### 1. Google Drive Setup

- Connect your Google Drive account using the Google Drive node in n8n.
- Choose or create a root folder (e.g., `n8n-workflow-backups`) where backups will be stored.

### 2. n8n API Credentials

Generate a Personal Access Token from your self-hosted n8n instance:

- Go to Settings → API in your n8n dashboard.
- Copy the token and use it in the HTTP Request node headers as: `Authorization: Bearer <your_token>`

### 3. Schedule the Workflow

Use the Cron node to schedule this workflow to run at your desired frequency (e.g., once a day or every 12 hours).

## 🧠 How it Works

Step-by-step flow:

1. **Scheduled Trigger**: The workflow begins on a timed schedule using the Cron node.
2. **Fetch All Workflows**: Uses the n8n API (`/workflows`) to retrieve a list of all existing workflows.
3. **Loop Through Workflows**: For each workflow, a folder is created in Google Drive using the workflow name, and the workflow's last updated timestamp is checked against the Google Drive backups.
4. **Smart Change Detection**: If the workflow has changed since the last backup, a new `.json` file is uploaded to the corresponding folder. The file is named with the last updated date of the workflow (`YYYY-MM-DD-HH-mm-ss.json`) to maintain a versioned history. If no change is detected, the workflow is skipped. (A sketch of this naming and change-detection logic follows at the end of this template.)

## 🗂 Google Drive Folder Organization

Backups are neatly organized by workflow and version:

```
/n8n-workflow-backups/
├── google-drive-backup-KqhdMBHIyAaE7p7v/
│   ├── 2025-07-15-13-03-32.json
│   ├── 2025-07-14-03-08-12.json
├── resume-video-avatar-KqhdMBHIyAaE8p8vr/
│   ├── 2025-07-15-23-05-52.json
```

Each folder is named after the workflow's name + ID and contains timestamped versions.

## 🔧 Customization Options

- **📅 Change Backup Frequency**: Adjust the Cron node to run backups daily, weekly, or even hourly based on your needs.
- **📤 Use a Different Storage Provider**: You can swap out Google Drive for Dropbox, S3, or another cloud provider with minimal changes.
- **🧪 Add Workflow Filtering**: Only back up workflows that are active or match specific tags by filtering results from the n8n API.

## ♻️ How to Restore a Workflow from Backup

1. Go to the Google Drive backup folder for the workflow you want to restore.
2. Download the desired `.json` file (based on the date).
3. Open your self-hosted n8n instance.
4. Click Import Workflow from the sidebar menu.
5. Upload the JSON file to restore the workflow.

> You can choose to overwrite an existing workflow or import it as a new one.

## 👤 Who can use this?

This template is ideal for:

- 🧑💻 Developers running self-hosted n8n
- 🏢 Teams managing large workflow libraries
- 🔐 Anyone needing workflow versioning, rollback, or disaster recovery
- 💾 Productivity enthusiasts looking for automated backups

## 📣 Tip

Consider enabling version history in Google Drive so you get even more fine-grained backup recovery options on top of what this workflow provides!

## 🚀 Ready to use?

Just plug in your n8n token, connect Google Drive, and schedule your backups. Your workflows are now protected!
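As promised in the Smart Change Detection step above, here is a rough sketch of the versioning logic: derive the timestamped filename from a workflow's `updatedAt` and skip the upload when that version already exists in Drive. This is a hypothetical Code-node snippet; the `existingFileNames` input is an assumption standing in for whatever your Google Drive list node returns.

```javascript
// Hypothetical sketch of the change-detection step. 'existingFileNames'
// stands in for the file list returned by a Google Drive node upstream.
function backupFileName(workflow) {
  // e.g. "2025-07-15T13:03:32.000Z" -> "2025-07-15-13-03-32.json"
  return workflow.updatedAt
    .replace(/[T:]/g, "-")
    .replace(/\..*$/, "") + ".json";
}

const workflow = $input.item.json;                          // one workflow from the n8n API
const existingFileNames = workflow.existingFileNames || []; // assumed upstream field

const fileName = backupFileName(workflow);
const alreadyBackedUp = existingFileNames.includes(fileName);

// A downstream IF node can branch on 'alreadyBackedUp' to skip uploads.
return { json: { ...workflow, fileName, alreadyBackedUp } };
```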
by Javier Hita
Follow me on LinkedIn for more!

**Category**: Lead Generation, Data Collection, Business Intelligence
**Tags**: lead-generation, google-maps, rapidapi, business-data, contact-extraction, google-sheets, duplicate-prevention, automation
**Difficulty Level**: Intermediate
**Estimated Setup Time**: 15-20 minutes

# Template Description

## Overview

This powerful n8n workflow automates the extraction of comprehensive business information from Google Maps using keyword-based searches via RapidAPI's Local Business Data service. Perfect for lead generation, market research, and competitive analysis, this template intelligently gathers business data including contact details, social media profiles, and location information while preventing duplicates and optimizing API usage.

## Key Features

- 🔍 **Keyword-Based Google Maps Scraping**: Search for any business type in any location using natural language queries
- 📧 **Contact Information Extraction**: Automatically extracts emails, phone numbers, and social media profiles (LinkedIn, Instagram, Facebook, etc.)
- 🚫 **Smart Duplicate Prevention**: Two-level duplicate detection saves 50-80% on API costs by skipping processed searches and preventing duplicate business entries
- 📊 **Google Sheets Integration**: Seamless data storage with automatic organization and structure
- 🌍 **Multi-Location Support**: Process multiple cities, regions, or countries in a single workflow execution
- ⚡ **Rate Limiting & Error Handling**: Built-in delays and error handling ensure reliable, uninterrupted execution
- 💰 **Cost Optimization**: Intelligent batching and duplicate prevention minimize API usage and costs
- 📱 **Comprehensive Data Collection**: Gather business names, addresses, ratings, reviews, websites, verification status, and more

## Prerequisites

Required services & accounts:

- RapidAPI account with a subscription to the "Local Business Data" API
- Google account for Google Sheets integration
- n8n instance (cloud or self-hosted)

Required credentials:

- **RapidAPI HTTP Header Authentication** for the Local Business Data API
- **Google Sheets OAuth2** for data storage and retrieval

## Setup Instructions

### Step 1: RapidAPI Configuration

1. Create a RapidAPI account
   - Sign up at RapidAPI.com
   - Navigate to the "Local Business Data" API
   - Subscribe to a plan (the Basic plan supports 1000 requests/month)
2. Get API credentials
   - Copy your X-RapidAPI-Key from the API dashboard
   - Note the host: `local-business-data.p.rapidapi.com`
3. Configure the n8n credential
   - In n8n: Settings → Credentials → Create New
   - Type: HTTP Header Auth
   - Name: RapidAPI Local Business Data
   - Add headers:
     - `X-RapidAPI-Key: YOUR_API_KEY`
     - `X-RapidAPI-Host: local-business-data.p.rapidapi.com`

### Step 2: Google Sheets Setup

1. Enable the Google Sheets API
   - Go to the Google Cloud Console
   - Enable the Google Sheets API for your project
   - Create OAuth2 credentials
2. Configure the n8n credential
   - In n8n: Settings → Credentials → Create New
   - Type: Google Sheets OAuth2 API
   - Follow the OAuth2 setup process
3. Create the Google Sheet structure: create a new Google Sheet with these tabs.

`keyword_searches` sheet:

| select | query | lat | lon | country_iso_code |
|--------|-------|-----|-----|------------------|
| X | Restaurants Madrid | 40.4168 | -3.7038 | ES |
| X | Hair Salons Brooklyn | 40.6782 | -73.9442 | US |
| X | Coffee Shops Paris | 48.8566 | 2.3522 | FR |

`stores_data` sheet: the workflow will automatically create columns for business data including `business_id`, `name`, `phone_number`, `email`, `website`, `full_address`, `rating`, `review_count`, `linkedin`, `instagram`, `query`, `lat`, `lon`, and 25+ more fields.

### Step 3: Workflow Configuration

1. Import the workflow: copy the provided JSON and, in n8n, use Import from JSON.
2. Update placeholder values:
   - Replace `YOUR_GOOGLE_SHEET_ID` with your actual Google Sheet ID
   - Update credential references to match your setup
3. Configure search parameters (optional):
   - Adjust `limit`: 1-100 results per query (default: 100)
   - Modify `zoom`: 10-18 search radius (default: 13)
   - Change `language`: EN, ES, FR, etc. (default: EN)

## How It Works

### Workflow Process

1. **Load Search Criteria**: Reads queries marked with "X" from the `keyword_searches` sheet
2. **Load Existing Data**: Retrieves previously processed data for duplicate detection
3. **Filter New Searches**: A smart merge identifies only new query+location combinations
4. **Process Each Location**: Sequential processing prevents API overload
5. **Configure Parameters**: Prepares search parameters from sheet data
6. **API Request**: Calls RapidAPI to extract business information
7. **Parse Data**: Structures and cleans all business information
8. **Save Results**: Stores new leads in the `stores_data` sheet
9. **Rate Limiting**: 10-second delay between requests
10. **Loop**: Continues until all new searches are processed

### Duplicate Prevention Logic

- **Search level**: Compares new queries against existing data using the query+latitude combination, skipping already processed searches (a sketch of this filter follows at the end of this template).
- **Business level**: Each business receives a unique `business_id` to prevent duplicate entries even across different searches.

## Data Extracted

### Business Information

- Business name, full address, phone number
- Website URL, Google My Business rating and review count
- Business type, price level, verification status
- Geographic coordinates (latitude/longitude)
- Detailed location breakdown (street, city, state, country, zip)

### Contact Details

- Email addresses (when publicly available)
- Social media profiles: LinkedIn, Instagram, Facebook, Twitter, YouTube, TikTok, Pinterest
- Additional phone numbers
- Direct Google Maps and reviews links

### Search Metadata

- Original search query and parameters
- Extraction timestamp and geographic data
- API response details for tracking

## Use Cases

### Lead Generation

- Generate targeted prospect lists for B2B sales
- Build location-specific customer databases
- Create industry-specific contact lists
- Develop territory-based sales strategies

### Market Research

- Analyze competitor density in target markets
- Study business distribution
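As referenced in the duplicate-prevention section above, the search-level filter reduces to building a `query+lat` key for every row already in `stores_data` and dropping incoming searches whose key is already present. The following is a hypothetical Code-node sketch; the upstream node name and input wiring are assumptions about how you would feed it.

```javascript
// Hypothetical sketch of the search-level duplicate filter. Assumes the
// node can read existing stores_data rows and the new keyword_searches rows.
const existingRows = $("Load Existing Data").all(); // assumed upstream node name
const newSearches = $input.all();

// Key already-processed searches by query + latitude.
const processed = new Set(
  existingRows.map((item) => `${item.json.query}|${item.json.lat}`)
);

// Keep only searches that have never been run before.
const unseen = newSearches.filter(
  (item) => !processed.has(`${item.json.query}|${item.json.lat}`)
);

return unseen; // each skipped search saves one RapidAPI request
```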
by Hybroht
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# JSON Architect - Dynamically Generate JSON Output Formats for Any AI Agent

## Overview

Version: 1.0

The JSON Architect Workflow is designed to instruct AI agents on the required JSON structure for a given context and create the appropriate JSON output format. This workflow ensures that the generated JSON is validated and tested, providing a reliable JSON output format for use in various applications.

## ✨ Features

- **Dynamic JSON Generation**: Automatically generate the JSON format based on the input requirements.
- **Validation and Testing**: Validate the generated JSON format and test its functionality, ensuring reliability before output.
- **Iterative Improvement**: If the generated JSON is invalid or fails testing, the workflow will attempt to regenerate it until successful or until a defined maximum number of rounds is reached.
- **Structured Output**: The final output is the generated JSON output format, making it easy to integrate with other systems or workflows.

## 👤 Who is this for?

This workflow is ideal for developers, data scientists, and businesses that require dynamic JSON structures for the responses of AI agents. It is particularly useful for those involved in procedural generation, data interchange formats, configuration management, and machine learning model input/output.

## 💡 What problem does this solve?

The workflow addresses the challenge of generating optimal JSON structures by automating the process of creation, validation, and testing. This approach ensures that the JSON format is appropriate for its intended use, reducing errors and enhancing the overall quality of data interchange.

Use-case examples:

- 🔄 Data Interchange Formats
- 🛠️ Procedural Generation
- 📊 Machine Learning Model Input/Output
- ⚙️ Configuration Management

## 🔍 What this workflow does

The workflow orchestrates a process where AI agents generate, validate, and test JSON output formats based on the provided input. This approach leads to a more refined and functional JSON output parser.

### 🔄 Workflow Steps

1. **Input & Setup**: The initial input is provided, and the workflow is configured with the necessary parameters.
2. **Round Start**: Initiates the round of JSON construction, ensuring the input is as expected.
3. **JSON Generation & Validation**: Generates and validates the JSON output format according to the input.
4. **JSON Test**: Verifies whether the generated JSON output format works as intended.
5. **Validation or Test Fails**: If the JSON fails validation or testing, the process loops back to the Round Start for correction. (A sketch of this validate-or-retry loop follows at the end of this template.)
6. **Final Output**: The final output is generated based on successful JSON construction, providing a cohesive response.

### 📌 Expected Input

- **input**: The input that requires a proper JSON structure.
- **max_rounds**: The maximum number of rounds before stopping the loop if it fails to produce and test a valid JSON structure. Suggested: 10.
- **rounds**: The initial number of rounds. Default: 0.

### 📦 Expected Output

- **input**: The original input used to create the JSON structure.
- **json_format_name**: A snake_case identifier for the generated JSON format. Useful if you plan to reuse it for multiple AI agents or workflows.
- **json_format_usage**: A description of how to use the JSON output format in an input. Meant to be used by AI agents receiving the JSON output format in their output parser.
- **json_format_valid_reason**: The reason provided by the AI agents explaining why this JSON format works for the input.
- **json_format_structure**: The JSON format itself, intended for application through the **Advanced JSON Output Parser** custom node.
- **json_format_input**: The **input** after the JSON output format (`json_format_structure`) has been applied in an AI agent's output parser.

### 📌 Example

An example that includes both the input and the final output is provided in a note within the workflow.

## ⚙️ n8n Setup Used

- **n8n Version**: 1.100.1
- **n8n-nodes-advanced-output-parser**: 1.0.1
- **Running n8n via**: Podman 4.3.1
- **Operating System**: Linux

## ⚡ Requirements to Use/Setup

### 🔐🔧 Credentials & Configuration

- Obtain the necessary LLM API key and permissions to utilize the workflow effectively.
- This workflow depends on a custom node for dynamically inputting JSON output formats called `n8n-nodes-advanced-output-parser`. You can find the repository here.
- Warning: as of 2025-07-09, the custom node's creator has warned that this node is not production-ready. Verify its readiness before relying on it in production environments.

## ⚠️ Notes, Assumptions & Warnings

- This workflow assumes that users have a basic understanding of n8n and JSON configuration.
- This workflow assumes that users have access to the necessary API keys and permissions to utilize the Mistral API or other LLM APIs.
- Ensure that the input provided to the AI agents is clear and concise to avoid confusion in the JSON generation process. Ambiguous inputs may lead to invalid or irrelevant JSON output formats.

## ℹ️ About Us

This workflow was developed by the Hybroht team of AI enthusiasts and developers dedicated to enhancing the capabilities of AI through collaborative processes. Our goal is to create tools that harness the possibilities of AI technology and more.
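For readers wiring a similar validate-or-retry loop themselves (as referenced in the Workflow Steps), the core of it is a parse check plus a rounds counter compared against `max_rounds`. Here is a minimal Code-node sketch using the field names from the Expected Input above; it assumes the generated structure arrives as a string.

```javascript
// Minimal sketch of the validate-or-retry round logic, using the
// input / rounds / max_rounds fields described above. Assumes the
// generated json_format_structure arrives as a string.
const state = $input.item.json;
const rounds = (state.rounds ?? 0) + 1;

let valid = false;
let error = null;
try {
  JSON.parse(state.json_format_structure); // structural validity check
  valid = true;
} catch (e) {
  error = e.message; // feed back to the generation agent on the next round
}

const exhausted = rounds >= (state.max_rounds ?? 10);

// A downstream IF node can branch: valid -> test step,
// !valid && !exhausted -> back to Round Start, else -> stop with error.
return { json: { ...state, rounds, valid, exhausted, error } };
```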
by Jah coozi
# AI Medical Symptom Checker & Health Assistant

A responsible, privacy-focused health information assistant that provides general health guidance while maintaining strict safety protocols and medical disclaimers.

## ⚠️ IMPORTANT DISCLAIMER

This tool provides general health information only and is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult qualified healthcare providers for medical concerns.

## 🚀 Key Features

### Safety First

- **Emergency Detection**: Automatically identifies emergency situations
- **Immediate Escalation**: Provides emergency numbers for critical cases
- **Clear Disclaimers**: Every response includes medical disclaimers
- **No Diagnosis**: Never attempts to diagnose conditions
- **Professional Referral**: Always recommends consulting healthcare providers

### Core Functionality

- **Symptom Information**: General information about common symptoms
- **Wellness Guidance**: Health tips and preventive care
- **Medication Reminders**: General medication information
- **Multi-Language Support**: Serve diverse communities
- **Privacy Protection**: No data storage, anonymous processing
- **Resource Links**: Connects to trusted health resources

## 🎯 Use Cases

- **General Health Information**: Learn about symptoms and conditions
- **Pre-Appointment Preparation**: Organize questions for doctors
- **Wellness Education**: General health and prevention tips
- **Emergency Detection**: Immediate guidance for critical situations
- **Health Resource Navigation**: Find appropriate care providers

## 🛡️ Safety Protocols

### Emergency Keywords Detection

- Chest pain, heart attack, stroke
- Breathing difficulties
- Severe bleeding, unconsciousness
- Allergic reactions, poisoning
- Mental health crises

(A sketch of this keyword screen appears at the end of this template.)

### Response Guidelines

- Never diagnoses conditions
- Never prescribes medications
- Always includes disclaimers
- Encourages professional consultation
- Provides emergency numbers when needed

## 🔧 Setup Instructions

1. **Configure the OpenAI API**
   - Add your API key
   - Set temperature to 0.3 for consistency
2. **Review Legal Requirements**
   - Check local health information regulations
   - Customize disclaimers as needed
   - Implement required data policies
3. **Emergency Contacts**
   - Update emergency numbers for your region
   - Add local health resources
   - Include mental health hotlines
4. **Test Thoroughly**
   - Verify emergency detection
   - Check disclaimer display
   - Test various symptom queries

## 💡 Example Interactions

General symptom query:

- User: "I have a headache for 3 days"
- Bot: Provides general headache information, self-care tips, and when to see a doctor

Emergency detection:

- User: "Chest pain, can't breathe"
- Bot: EMERGENCY response with immediate action steps and emergency numbers

Wellness query:

- User: "How can I improve my sleep?"
- Bot: General sleep hygiene tips and healthy habits information

## 🏥 Integration Options

- **Healthcare Websites**: Embed as a support widget
- **Telemedicine Platforms**: Pre-consultation tool
- **Health Apps**: General information module
- **Insurance Portals**: Member resource
- **Pharmacy Systems**: General drug information

## 📊 Compliance & Privacy

- **HIPAA Considerations**: No PHI storage
- **GDPR Compliant**: No personal data retention
- **Anonymous Processing**: Session-based only
- **Audit Trails**: Optional logging for compliance
- **Data Encryption**: Secure transmission

## 🚨 Limitations

- Cannot diagnose medical conditions
- Cannot prescribe treatments
- Cannot replace emergency services
- Cannot provide specific medical advice
- Should not delay seeking medical care

## 🔒 Best Practices

1. Always maintain clear disclaimers
2. Never minimize serious symptoms
3. Encourage professional consultation
4. Keep information general and educational
5. Update emergency contacts regularly
6. Review and update health information
7. Monitor for misuse
8. Maintain audit trails where required

## 🌍 Customization Options

- Add local emergency numbers
- Include regional health resources
- Translate to local languages
- Integrate with local health systems
- Add specific disclaimers
- Customize for specific populations

Start providing responsible health information today!
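As noted under Safety Protocols above, the emergency screen can be implemented as a simple keyword match that runs before the AI model is called, so critical messages are escalated deterministically. Below is a minimal sketch; the keyword list mirrors the categories above and must be extended and localized for production, and the emergency number shown is a placeholder.

```javascript
// Minimal sketch of the emergency keyword screen, run before the AI step.
// Extend and localize the list for production; "112" is a placeholder number.
const EMERGENCY_KEYWORDS = [
  "chest pain", "heart attack", "stroke",
  "can't breathe", "breathing difficult",
  "severe bleeding", "unconscious",
  "allergic reaction", "poisoning", "suicide",
];

const message = ($input.item.json.message || "").toLowerCase();
const isEmergency = EMERGENCY_KEYWORDS.some((kw) => message.includes(kw));

return {
  json: {
    message,
    isEmergency, // downstream IF node: emergency branch vs. normal AI response
    response: isEmergency
      ? "⚠️ This may be a medical emergency. Call your local emergency number (e.g., 112) now."
      : null,
  },
};
```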
by Oneclick AI Squad
This guide walks you through setting up an automated workflow that compares live flight fares across multiple booking platforms (e.g., Skyscanner, Akasa Air, Air India, IndiGo) using API calls, sorts the results by price, and sends the best deals via email. Ready to automate your flight fare comparison process? Let's get started!

## What's the Goal?

- Automatically fetch and compare live flight fares from multiple platforms using scheduled triggers.
- Aggregate and sort fare data to identify the best deals.
- Send the comparison results via email for review or action.
- Enable 24/7 fare monitoring with seamless integration.

By the end, you'll have a self-running system that delivers the cheapest flight options effortlessly.

## Why Does It Matter?

Manual flight fare comparison is time-consuming and often misses the best deals. Here's why this workflow is a game-changer:

- **Zero Human Error**: Automated data fetching and sorting ensure accuracy.
- **Time-Saving Automation**: Instantly compare fares across platforms, boosting efficiency.
- **24/7 Availability**: Monitor fares anytime without manual effort.
- **Cost Optimization**: Focus on securing the best deals rather than searching manually.

Think of it as your tireless flight fare assistant that always finds the best prices.

## How It Works

Here's the step-by-step magic behind the automation:

### Step 1: Trigger the Workflow

- **Set Schedule Node**: Triggers the workflow at a predefined schedule to check flight fares automatically, capturing the timing for regular fare updates.

### Step 2: Process Input Data

- **Set Input Data Node**: Sets the input parameters (e.g., origin, destination, departure date, return date) for flight searches and prepares the data to be sent to the various APIs.

### Step 3: Fetch Flight Data

- **Skyscanner API Node**: Retrieves live flight fare data from Skyscanner using its API endpoint.
- **Akasa Air API Node**: Fetches live flight fare data from Akasa Air using its API endpoint.
- **Air India API Node**: Collects flight fare data directly from Air India's API.
- **IndiGo API Node**: Gathers flight fare data from IndiGo's API.

### Step 4: Merge API Results

- **Merge API Data Node**: Combines the flight data from Skyscanner and Akasa Air into a single dataset.
- **Merge Both API Data Node**: Merges the data from Air India and IndiGo with the previous dataset.
- **Merge All API Results Node**: Consolidates all API data into one unified result for further processing.

### Step 5: Analyze and Sort

- **Compare Data and Sorting Price Node**: Compares all flight fares and sorts them by price to highlight the best deals (a sketch of this step follows at the end of this guide).

### Step 6: Send Results

- **Send Response via Email Node**: Sends the sorted flight fare comparison results to the user via email for review or action.

## How to Use the Workflow?

Importing this workflow in n8n is a straightforward process that lets you use this pre-built solution to save time. Below is a step-by-step guide to importing the Flight Fare Comparison Workflow in n8n.

### Steps to Import a Workflow in n8n

1. **Obtain the Workflow JSON**
   - Source the workflow: it is shared as a JSON file or code snippet (provided earlier or exported from another n8n instance).
   - Format: ensure you have the workflow in JSON format, either as a file (e.g., `workflow.json`) or copied text.
2. **Access the n8n Workflow Editor**
   - Log in to n8n: open your n8n instance (via n8n Cloud or self-hosted).
   - Navigate to Workflows: go to the Workflows tab in the n8n dashboard.
   - Open a new workflow: click Add Workflow to create a blank workflow.
3. **Import the Workflow**
   - Option 1: Import via JSON code (clipboard):
     - In the n8n editor, click the three dots (⋯) in the top-right corner to open the menu.
     - Select Import from Clipboard.
     - Paste the JSON code (provided earlier) into the text box.
     - Click Import to load the workflow.
   - Option 2: Import via JSON file:
     - In the n8n editor, click the three dots (⋯) in the top-right corner.
     - Select Import from File.
     - Choose the `.json` file from your computer.
     - Click Open to import the workflow.

## Setup Notes

- **API Credentials**: Configure each API node (Skyscanner, Akasa Air, Air India, IndiGo) with the respective API keys and endpoints. Check each API provider's documentation for details.
- **Email Integration**: Authorize the Send Response via Email node with your email service (e.g., Gmail SMTP settings or an email API like SendGrid).
- **Input Customization**: Adjust the Set Input Data node to include specific origin/destination pairs and date ranges as needed.
- **Schedule Configuration**: Set the desired frequency in the Set Schedule node (e.g., daily at 9 AM IST).

## Example Input

Send a POST request to the workflow (if integrated with a webhook) with:

```
{
  "origin": "DEL",
  "destination": "BOM",
  "departureDate": "2025-08-01",
  "returnDate": "2025-08-07"
}
```

## Optimization Tips

- **Error Handling**: Add IF nodes to manage API failures or rate limits.
- **Rate Limits**: Include a Wait node if APIs have strict limits.
- **Data Logging**: Add a node (e.g., Google Sheets) to log all comparisons for future analysis.

This workflow transforms flight fare comparison into an automated, efficient process, delivering the best deals directly to your inbox!
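As referenced in Step 5 above, once the merged dataset is in hand, the compare-and-sort step is a one-pass sort on the numeric fare. Here is a minimal Code-node sketch; the `price` field name is an assumption, since each airline API labels its fields differently and the upstream merge nodes would normalize them.

```javascript
// Minimal sketch of the Step 5 compare-and-sort logic. The 'price' field
// name is an assumption; normalize field names upstream per API.
const fares = $input.all().map((item) => item.json);

const sorted = fares
  .filter((fare) => Number.isFinite(Number(fare.price))) // drop malformed rows
  .sort((a, b) => Number(a.price) - Number(b.price));    // cheapest first

// Surface the best deal plus the full ranked list for the email node.
return [{ json: { bestDeal: sorted[0] ?? null, rankedFares: sorted } }];
```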