by Greg Lopez
## Workflow Information 📌

### Purpose 🎯
This workflow integrates new Shopify orders into Microsoft Dynamics 365 Business Central:

- **Point-of-Sale (POS):** POS orders are created in Business Central as Sales Invoices, since no fulfillment is expected.
- **Web Orders:** These orders are created as Business Central Sales Orders.

### How to use it 🚀
1. Edit the "D365 BC Environment Settings" node with your own account values (Company Id, Tenant Id, Tax & Discount Items).
2. Go to the "Shopify" node and edit the connection with your environment. More help here.
3. Go to the "Lookup Customers" node to edit the Business Central connection details with your environment settings.
4. Set the required filters on the "Shopify Order Filter" node.
5. Edit the "Schedule Trigger" node with the required frequency.

### Useful Workflow Links 📚
- Step-by-step Guide / Integro Cloud Solutions
- Business Central REST API Documentation
- Video Demo

Need help? Contact me at: ✉️ greg.lopez@integrocloudsolutions.com 📥 https://www.linkedin.com/in/greg-lopez-08b5071b/
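To illustrate the kind of mapping involved, a POS order could translate into a Business Central `salesInvoices` payload along these lines. The field names follow the standard Business Central API v2.0, but `toSalesInvoice`, the sample order, and the customer id are illustrative assumptions, not the template's actual code:

```javascript
// Hypothetical Shopify order -> Business Central salesInvoices mapping.
// Verify field names against your own Business Central environment.
function toSalesInvoice(shopifyOrder, customerId) {
  return {
    customerId,                                         // resolved earlier by the "Lookup Customers" step
    externalDocumentNumber: String(shopifyOrder.order_number),
    invoiceDate: shopifyOrder.created_at.slice(0, 10),  // Business Central expects yyyy-MM-dd
  };
}

const payload = toSalesInvoice(
  { order_number: 1001, created_at: "2024-05-01T10:00:00Z" },
  "customer-guid-here"
);
console.log(payload);
```

The payload would then be POSTed to `/companies({companyId})/salesInvoices` (Sales Invoices) or `/companies({companyId})/salesOrders` (Web Orders), depending on the order source.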
by The { AI } rtist
## Ghost + Sendy Integration

This is an integration from the Ghost CMS to Sendy:

- Sendy (www.sendy.co)
- Ghost (www.ghost.org)

With this integration you can import members from the Ghost CMS (in its newer versions, which include the Membership feature) into the Sendy newsletter software. The integration also notifies you via Telegram when a new member registers.

To set it up, you need to create a custom integration in Ghost. From the Admin panel, go to CUSTOM INTEGRATIONS / + Add custom integration. Give it any name you like and add a new webhook. In Target URL, paste the URL generated by the Webhook node in n8n.

Then finish filling in the HTTP Request1 node with your list details, completing these fields:

- api_key
- list

You'll find both in your Sendy installation.

Finally, add the Telegram credentials for your bot (https://docs.n8n.io/credentials/telegram/) and set the group or user you want to be notified.

Regards,
by mohamed ali
This workflow creates an automatic self-hosted URL shortener. It consists of three sub-workflows:

- **Short URL creation** extracts the provided long URL, generates an ID, and saves the record in the database. It returns a short link as a result.
- **Redirection** extracts the ID value, validates that a corresponding record exists in the database, and returns a redirection page after updating the visit (click) count.
- **Dashboard** calculates simple statistics about the saved records and displays them on a dashboard.

Read more about this use case and how to set up the workflow in the blog post How to build a low-code, self-hosted URL shortener in 3 steps.

**Prerequisites**
- A local proxy set up that redirects the n8n.ly domain to your n8n instance
- An Airtable account and credentials
- Basic knowledge of JavaScript, HTML, and CSS

**Nodes**
- **Webhook** nodes trigger the sub-workflows on calls to a specified link.
- **IF** nodes route the workflows based on specified query parameters.
- **Set** nodes set the required values returned by the previous nodes (id, longUrl, and shortUrl).
- **Airtable** nodes retrieve records (values) from or append records to the database.
- A **Function** node calculates statistics on link clicks to be displayed on the dashboard, and builds the dashboard's design.
- A **Crypto** node generates a SHA256 hash.
by Boriwat Chanruang
## Template Detail
This template automates converting a list of addresses into latitude and longitude (LatLong) coordinates using Google Sheets and the Google Maps API. It's designed for businesses, developers, and analysts who need accurate geolocation data for use cases like delivery routing, event planning, or market analysis.

### What the Template Does
- **Fetch Address Data:** Retrieves addresses from a Google Sheet.
- **Google Maps API Integration:** Sends each address to the Google Maps API and retrieves the corresponding LatLong coordinates.
- **Update Google Sheets:** Automatically updates the same Google Sheet with the LatLong data for each address.

### Enhancements
- **Google Sheets Template:** A pre-configured Google Sheets template that users can copy. Example link: Google Sheets Template. Columns required:
  - **Address:** Column to input addresses.
  - **LatLong:** Column for the latitude and longitude results.

### Workflow Structure
1. **Trigger:** A manual trigger node starts the workflow.
2. **Retrieve Data from Google Sheets:** Fetch addresses from the Google Sheet.
3. **Send to Google Maps API:** For each address, retrieve the LatLong coordinates directly via the Google Maps API.
4. **Update Google Sheets:** Write the LatLong results back into the Google Sheet.

### Steps to Use
1. **Prepare the Google Sheet:** Copy the provided Google Sheets template and add your addresses to the Address column.
2. **Configure the Google Cloud API:** Enable the Maps API for your Google Cloud project and generate an API key with the required permissions.
3. **Run the Workflow:** Start the workflow in n8n; it will process the addresses automatically. Updated LatLong data will appear in the corresponding Google Sheet.
4. **Review the Results:** Use the enriched LatLong data for mapping or analysis.
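For reference, the per-address lookup maps onto Google's public Geocoding API. A sketch of the request URL and response parsing (the helper names are illustrative, and `YOUR_API_KEY` is a placeholder):

```javascript
// Build the Geocoding API request for one address.
function geocodeUrl(address, apiKey) {
  return "https://maps.googleapis.com/maps/api/geocode/json" +
    "?address=" + encodeURIComponent(address) +
    "&key=" + apiKey;
}

// Reduce a geocode response to the "lat,lng" string written to the
// LatLong column. Assumes the first result is the best match.
function toLatLong(response) {
  const loc = response.results[0].geometry.location;
  return `${loc.lat},${loc.lng}`;
}

console.log(geocodeUrl("1600 Amphitheatre Parkway", "YOUR_API_KEY"));
```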
by Lucas Peyrin
Check the online version: https://n8n-tools.streamlit.app/

### Who is it for?
This workflow is for n8n users who want to maintain clean and organized workflows without manually repositioning nodes. Whether you're building complex workflows or sharing them with a team, visual clarity is essential for efficiency and collaboration. This template automates the positioning process, saving time and ensuring consistent layout standards.

### How does it work?
The template is divided into two parts:

**Positioning Engine:**
1. A Webhook node kicks off the process by receiving a workflow ID.
2. Using the provided workflow ID, an n8n API node fetches the workflow details.
3. The fetched workflow is sent to a processing webhook that calculates optimized positions for the nodes.
4. Finally, an n8n API node updates the workflow with the newly positioned nodes, ensuring a clean and professional layout.

**Reusable Positioning Block:** This is an HTTP Request node that can be seamlessly integrated into any workflow you create. When triggered, it sends the current workflow for automatic positioning via the first part of this template.

### How to set it up?
1. **Enable n8n API access:** Ensure that your n8n instance has API access enabled with the appropriate credentials.
2. **Add your n8n API URL and credentials:** Open the template, locate the n8n API nodes, and update them with your instance's API key. Update the URL of the "Magic Positioning" HTTP Request node to point to your n8n instance's webhook URL.
3. **Embed the reusable block:** Add the provided HTTP Request node to any of your workflows to instantly connect to the auto-positioning engine.
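The positioning engine itself is a black box behind the webhook, but the shape of its output is simple: n8n stores each node's canvas location as an `[x, y]` pair. A toy left-to-right grid assignment gives the flavor of what the engine computes (the spacing constants and `gridPositions` helper are illustrative, not the engine's real algorithm):

```javascript
// Assign grid positions to a list of node names, left to right,
// wrapping to a new row every `perRow` nodes.
function gridPositions(nodeNames, xGap = 220, yGap = 160, perRow = 5) {
  return nodeNames.map((name, i) => ({
    name,
    position: [(i % perRow) * xGap, Math.floor(i / perRow) * yGap],
  }));
}

console.log(gridPositions(["Webhook", "Get Workflow", "Position", "Update"]));
```

The real engine additionally follows the workflow's connection graph so that linked nodes stay visually adjacent, which a plain grid cannot do.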
by Ranjan Dailata
### Notice
Community nodes can only be installed on self-hosted instances of n8n.

### Description
This workflow automates scraping local business data from Google Maps and enriching it with AI to generate lead profiles. It's designed to help sales, marketing, and outreach teams collect high-quality B2B leads from Google Maps and enrich them with contextual insights, without manual data entry.

### Overview
This workflow scrapes business listings from Google Maps, extracts critical information like name, category, phone, address, and website using Bright Data, and passes the results to Google Gemini to generate enriched summaries and lead insights such as a company description, potential services offered, and an engagement score. The data is then structured and stored in spreadsheets for outreach.

### Tools Used
- **n8n:** The core automation engine that manages the flow and triggers actions.
- **Bright Data:** Scrapes business information from Google Maps at scale with proxy rotation and CAPTCHA solving.
- **Google Gemini:** Enriches the raw scraped data with smart business summaries, categorization, and lead scoring.
- **Google Sheets:** Stores the enriched leads for follow-up.

### How to Install
1. **Import the workflow:** Download the .json file and import it into your n8n instance.
2. **Set up Bright Data:** Insert your Bright Data credentials and configure the Google Maps scraping proxy endpoint.
3. **Configure the Gemini API:** Add your Google Gemini API key.
4. **Customize the inputs:** Choose your target location, business category, and number of results per query.
5. **Choose storage:** Connect to your preferred storage, such as Google Sheets.
6. **Test and deploy:** Run a test scrape and enrichment before deploying for bulk runs.

### Use Cases
- **Sales Teams:** Auto-generate warm B2B lead lists with company summaries and relevance scores.
- **Marketing Agencies:** Identify local business prospects for SEO, web development, or ads services.
- **Freelancers:** Find high-potential clients in specific niches or cities.
- **Business Consultants:** Collect and categorize local businesses for competitive analysis or partnerships.
- **Recruitment Firms:** Identify and score potential company clients for talent acquisition.

### Connect with Me
Email: ranjancse@gmail.com
LinkedIn: https://www.linkedin.com/in/ranjan-dailata/
Get Bright Data: Bright Data (supports free workflows with a small commission)

#n8n #automation #leadscraping #googlemaps #brightdata #leadgen #b2bleads #salesautomation #nocode #leadprospecting #marketingautomation #googlemapsdata #geminiapi #googlegemini #aiworkflow #scrapingworkflow #businessleads #datadrivenoutreach #crm #workflowautomation #salesintelligence #b2bmarketing
by Harshil Agrawal
This workflow handles the incoming verification call from Twitter and sends the required response. When you register a webhook with the Twitter Account Activity API, Twitter expects a signature in the response. Twitter also randomly pings the webhook to ensure it is active and secure.

- **Webhook node:** Use the displayed URL to register with the Account Activity API.
- **Crypto node:** In the Secret field, enter your API Key Secret from Twitter.
- **Set node:** This node generates the response expected by the Twitter API.

Learn more about connecting n8n with Twitter in the Getting Started with Twitter Webhook article.
by digi-stud.io
## Airtable Hierarchical Record Fetcher

### Description
This n8n workflow retrieves an Airtable record along with its related child records in a hierarchical structure. It can fetch up to 3 levels of linked records and assembles them into a comprehensive JSON object, making it ideal for complex data relationships and nested record structures.

### Features
- **Multi-level Record Fetching:** Retrieves the parent record, linked child records (level 2), and optionally grandchild records (level 3)
- **API Call Optimization:** Uses Airtable's filterByFormula to minimize API calls by fetching multiple related records in single requests
- **Selective Level 3 Fetching:** Only fetches level 3 records for specified linked fields, to optimize performance
- **Rich Text Processing:** Converts Airtable's pseudo-markdown rich text fields to HTML format
- **Hierarchical JSON Output:** Organizes all data in a structured, nested JSON format
- **Flexible Configuration:** Customizable depth and field selection per execution

### Input Parameters
The workflow accepts a JSON array with the following structure:

    [
      {
        "base_id": "appN8nPMGoLNuzUbY",
        "table_id": "tblLVOwpYIe0fGQ52",
        "record_id": "reczMh1Pp5l94HdYf",
        "level_3": [
          "fldRaFra1rLta66cD",
          "fld3FxCaYk8AVaEHt"
        ],
        "to_html": true
      }
    ]

### Parameter Details

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| base_id | string | Yes | Airtable base identifier |
| table_id | string | Yes | Airtable table identifier for the main record |
| record_id | string | Yes | Airtable record identifier to fetch |
| level_3 | array | No | Array of field IDs from level 2 records for which to fetch level 3 children |
| to_html | boolean | No | Convert rich text fields from pseudo-markdown to HTML (default: false). This requires the marked npm package. |

### Output Structure
The workflow returns a hierarchical JSON object with the following structure:

    {
      "id": "recXXXXXXX",
      "field_1": ...,
      "field_2": ...,
      "level2_child": [
        {
          "id": "recXXXXXXX",
          "field_a": ...,
          "field_b": ...,
          "level3_child": [
            {
              "id": "recXXXXXXX",
              "field_y": ...,
              "field_z": ...
            },
            ...
          ]
        },
        ...
      ]
    }
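The filterByFormula optimization listed under Features works by collapsing many per-record lookups into a single request. A sketch of the formula construction (the `recordIdFilter` helper name is illustrative):

```javascript
// Build one Airtable filterByFormula expression matching all linked
// record IDs, so level-2 children are fetched in a single API call
// instead of one call per record.
function recordIdFilter(recordIds) {
  const clauses = recordIds.map(id => `RECORD_ID()='${id}'`);
  return clauses.length === 1 ? clauses[0] : `OR(${clauses.join(",")})`;
}

console.log(recordIdFilter(["rec123", "rec456"]));
// OR(RECORD_ID()='rec123',RECORD_ID()='rec456')
```

The resulting string is passed as the `filterByFormula` query parameter on the list-records endpoint, which is why the workflow needs only one request per level rather than one per child.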
by Solomon
Using the Systeme API can be challenging due to its pagination settings and low rate limit, which require a bit more knowledge about API requests than a beginner might have. This template provides preconfigured HTTP Request nodes to help you work more efficiently: pagination settings, item limits, and rate limits are all configured for you, making it easier to get started.

**How to configure Systeme.io credentials**
The Systeme API uses the Header Auth method, so create a Header Auth credential in your n8n with the name "X-API-Key".

Check out my other templates 👉 https://n8n.io/creators/solomon/
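The preconfigured nodes already handle all of this, but for reference, the general shape of an authenticated, rate-limit-friendly pagination loop looks roughly like this. The cursor semantics, `fetchPage` signature, and 1-second delay are illustrative assumptions; only the `X-API-Key` header matches the credential described above:

```javascript
// Header Auth credential: the Systeme API expects the key in this header.
function systemeHeaders(apiKey) {
  return { "X-API-Key": apiKey };
}

// Generic cursor-based paginator with a delay between pages to respect
// the rate limit. fetchPage(cursor) stands in for whatever the HTTP
// Request node actually calls; it resolves to { items, nextCursor }.
async function fetchAll(fetchPage, delayMs = 1000) {
  const all = [];
  let cursor;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.items);
    cursor = page.nextCursor;
    if (cursor) await new Promise(r => setTimeout(r, delayMs));
  } while (cursor);
  return all;
}
```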
by Lucas Perret
This workflow lets you scrape Google Maps data efficiently using SerpAPI, getting all the Google Maps data at a cheaper cost than the Google Maps API.

Provide your Google Maps search URL as input, and you'll get a list of places with many data points, such as:

- phone number
- website
- rating
- reviews
- address

And much more.

The full guide to implementing the workflow is here: https://lempire.notion.site/Scrape-Google-Maps-places-with-n8n-b7f1785c3d474e858b7ee61ad4c21136?pvs=4
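Under the hood, the workflow's HTTP call maps onto SerpAPI's `google_maps` engine. A sketch of the request construction (the query, coordinates, and key are placeholders; the parameter names follow SerpAPI's documented engine, but the helper itself is illustrative):

```javascript
// Build a SerpAPI Google Maps search request.
// q  = search query, ll = "@lat,lng,zoom" viewport extracted from the
// Google Maps URL, apiKey = your SerpAPI key.
function serpApiMapsUrl({ q, ll, apiKey }) {
  const params = new URLSearchParams({
    engine: "google_maps",
    type: "search",
    q,
    ll,
    api_key: apiKey,
  });
  return `https://serpapi.com/search.json?${params}`;
}

console.log(serpApiMapsUrl({
  q: "coffee shops",
  ll: "@40.7455096,-74.0083012,14z",
  apiKey: "YOUR_KEY",
}));
```

Each response page contains a `local_results` array holding the place-level fields (phone, website, rating, reviews, address) that the workflow extracts.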
by Sean Lon
## AI-Powered Tech Radar Advisor

This project is built on top of the famous open-source ThoughtWorks Tech Radar. You can use this template to build your own AI-powered Tech Radar advisor for your company or group of companies.

### Target Audience
This template is perfect for:

- **Tech Audit & Governance Leaders:** Those seeking to build a tech landscape AI platform portal.
- **Tech Leaders & Architects:** Those aiming to provide modern AI platforms that help others understand the rationale behind strategic technology adoption.
- **Product Managers:** Professionals looking to align product innovation with the company's current tech trends.
- **IT & Engineering Teams:** Teams that need to aggregate, analyze, and visualize technology data from multiple sources efficiently.
- **Digital Transformation Experts:** Innovators aiming to leverage AI for actionable insights and strategic recommendations.
- **Data Analysts & Scientists:** Individuals who want to combine structured SQL analysis with advanced semantic search using vector databases.
- **Developers:** Those interested in integrating RAG chatbot functionality with conversation storage.

### 1. Description
Tech Constellation is an AI-powered Tech Radar solution designed to help organizations visualize and steer their technology adoption strategy. It seamlessly ingests data from a Tech Radar Google Sheet, converting it into both a MySQL database and a vector index, to consolidate your tech landscape in one place. The platform integrates an interactive AI chat interface powered by four specialized agents:

- **AI Agent Router:** Analyzes and routes user queries to the most suitable processing agent.
- **SQL Agent:** Executes precise SQL queries on structured data.
- **RAG Agent:** Leverages semantic, vector-based search for in-depth insights.
- **Output Guardrail Agent:** Validates responses to ensure they remain on-topic and accurate.

This template is well suited to technology leaders, product managers, engineering teams, and digital transformation experts looking to make data-driven decisions aligned with strategic initiatives across groups of parent and child companies.

### 2. Features

**Data Ingestion**
- A Google Sheet containing tech radar data is used as the primary source.
- The data is ingested and converted into a MySQL database.
- Simultaneously, the data is indexed into a vector database for semantic (vector-based) search.

**Interactive AI Chat**
- **Chat Integration:** An AI-powered chat interface allows users to ask questions about the tech radar.
- **Customizable AI Agents:**
  - AI Agent Router: Determines the query type and routes it to the appropriate agent.
  - SQL Agent: Processes queries using SQL on structured data.
  - RAG Agent: Performs vector-based searches on document-like data.
  - Output Guardrail Agent: Validates queries and ensures that the responses remain on-topic and accurate.

**Usage Examples**
- Tell me, is TechnologyABC adopted or on hold, and why?
- List all the tools that are considered part of the strategic direction for company3 but are not adopted.

### Project Links & Additional Details
- **GitHub Repository (Frontend Interface Source Code):** github.com/dragonjump/techconstellation
- **Try It:** https://scaler.my
by Davide
This workflow processes PDF documents using Mistral's OCR capabilities, stores the extracted text in a Qdrant vector database, and enables Retrieval-Augmented Generation (RAG) for answering questions. Once configured, the workflow automates document ingestion, vectorization, and intelligent querying, enabling powerful RAG applications.

### Benefits
- **End-to-End Automation:** No manual interaction is needed; documents are read, processed, and made queryable with minimal setup.
- **Scalable and Modular:** The workflow uses sub-workflows and batching, making it easy to scale and customize.
- **Multi-Model Support:** Combines Mistral for OCR, OpenAI for embeddings, and Gemini for intelligent answering, taking advantage of the strengths of each.
- **Real-Time Q&A:** With RAG integration, users can query document content through natural language and receive accurate responses grounded in the PDF data.
- **Light or Full Mode:** Users can choose to index full page content or only summarized text, optimizing for either performance or richness.

### How It Works
1. **PDF Processing with Mistral OCR:** The workflow starts by uploading a PDF file to Mistral's API, which performs OCR to extract text and metadata. The extracted content is split into manageable chunks (e.g., pages or sections) for further processing.
2. **Vector Storage in Qdrant:** The extracted text is converted into embeddings using OpenAI's embedding model. These embeddings are stored in a Qdrant vector database, enabling efficient similarity searches for RAG.
3. **Question-Answering with RAG:** When a user submits a question via a chat interface, the workflow retrieves relevant text chunks from Qdrant using vector similarity. A language model (Google Gemini) generates answers based on the retrieved context, providing accurate and context-aware responses.
4. **Optional Summarization:** The workflow includes an optional summarization step using Google Gemini to condense the extracted text for faster processing or lighter RAG usage.
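The chunking step between OCR and embedding (handled by the "Token Splitter" node) can be approximated in plain JavaScript. This character-based sketch is an assumption (the real node splits on tokens, not characters), but it shows the size/overlap mechanics you will tune during setup:

```javascript
// Split extracted OCR text into overlapping chunks before embedding.
// chunkSize and overlap are illustrative defaults; the overlap keeps
// sentences that straddle a boundary retrievable from either chunk.
function splitText(text, chunkSize = 1000, overlap = 100) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}
```

Each chunk is then embedded individually and stored in Qdrant with its source metadata, so similarity search returns passages rather than whole documents.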
### Set Up Steps
To deploy this workflow in n8n, follow these steps:

1. **Configure the Qdrant database:** Replace QDRANTURL and COLLECTION in the "Create collection" and "Refresh collection" nodes with your Qdrant instance details. Ensure the Qdrant collection is configured with the correct vector size (e.g., 1536 for OpenAI embeddings) and distance metric (e.g., Cosine).
2. **Set up credentials** for:
   - Mistral Cloud API (for OCR processing)
   - OpenAI API (for embeddings)
   - Google Gemini API (for chat and summarization)
   - Google Drive (if sourcing PDFs from Drive)
   - Qdrant API (for vector storage)
3. **Configure the PDF source:** If using Google Drive, specify the folder ID in the "Search PDFs" node. Alternatively, modify the workflow to accept PDFs from other sources (e.g., direct uploads or external APIs).
4. **Customize text processing:** Adjust the chunk size and overlap in the "Token Splitter" node to optimize for your document type. Choose between raw text or summarized content for RAG by toggling between the "Set page" and "Summarization Chain" nodes.
5. **Test the RAG:** Trigger the workflow manually or via a chat message to verify OCR, embedding, and Qdrant storage. Use the "Question and Answer Chain" node to test query responses.
6. **Optional sub-workflows:** The workflow supports execution as a sub-workflow for batch processing (e.g., handling multiple PDFs).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.