by please-open.it
**Intro**

This workflow requires the user to authenticate with an OpenID Connect provider before calling the webhook. If the user is not authenticated, it starts a login process using the Authorization Code flow with PKCE (https://datatracker.ietf.org/doc/html/rfc7636), a standard way to authenticate users with OpenID Connect. After the user logs in, the webhook is refreshed and reads the user's token from a cookie. With this token, all details about the user are requested from the identity provider's userinfo endpoint.

**How to set up with Keycloak**

Keycloak is an open-source identity and access management solution. Feel free to get a demo realm at https://please-open.it or get your own Keycloak server up and running.

After creating a realm, go to "Realm Settings" and click on "OpenID Endpoint Configuration". Retrieve the authorization_endpoint, token_endpoint and userinfo_endpoint values, and set those variables in the "Set variables" node.

In Keycloak, create a new client (name it as you want). Disable client authentication and check only "standard flow". At the third step, put the webhook URL in "Valid redirect URIs" and fill "Web origins" with a "+".

You're done: open the webhook and it asks you to authenticate.

**Usage**

User information: the userinfo node returns this structure about the user who has logged in:

[ { "sub": "73a6543f-f420-4fa6-9811-209e903c348b", "email_verified": true, "preferred_username": "mathieu.passenaud@please-open.it", "email": "mathieu.passenaud@please-open.it" } ]

I can use this information in my workflow for custom operations.

API calls: the "code" node returns a cookie named "n8n-custom-auth" which holds the access_token returned by the identity provider. This access_token can be used to call APIs connected to this identity provider (for example, the userinfo API is called with this token). Example: ask a user to log in with their Google account, then call an API (Gmail, Drive...) with their own token.

**How it works**

We published a blog post about this flow, how it works and how you can use it: https://blog.please-open.it/n8n-openid-client/
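As a minimal sketch of that last step, here is how the userinfo endpoint could be called with an access token read from the "n8n-custom-auth" cookie; the endpoint URL below is a placeholder for the value from your provider's OpenID Endpoint Configuration.

```python
import requests

# Placeholder: use the userinfo_endpoint from your provider's
# "OpenID Endpoint Configuration" (e.g. a Keycloak realm).
USERINFO_ENDPOINT = "https://example.com/realms/demo/protocol/openid-connect/userinfo"

def fetch_userinfo(access_token: str) -> dict:
    """Request the user's profile with the token taken from the n8n-custom-auth cookie."""
    response = requests.get(
        USERINFO_ENDPOINT,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"sub": "...", "email": "...", ...}

if __name__ == "__main__":
    print(fetch_userinfo("eyJhbGciOi...example-token"))
```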
by Angel Menendez
**Who is this for?**

This workflow is designed for teams using Slack for communication and ServiceNow for incident management. It simplifies incident lookup by enabling team members to fetch incident details directly within Slack via a Slash Command.

**What problem is this workflow solving?**

Manually switching between Slack and ServiceNow to retrieve incident details can be time-consuming and disrupt workflow efficiency. This workflow bridges the two platforms, providing instant access to critical incident information in Slack, saving time and improving response efficiency.

**What this workflow does**

The workflow listens for a Slash Command in Slack that includes an incident ID, extracts the ID from the incoming payload, queries ServiceNow for the corresponding incident details, and sends a formatted response back to Slack. Depending on the query result, it can:

- Display incident details (e.g., ID, description, severity, and priority).
- Notify the user if no matching incident is found.
- Alert the user if there's an issue connecting to ServiceNow.

A minimal lookup sketch follows at the end of this section.

**Setup**

Slack Setup:
- Create a Slash Command in Slack with the appropriate endpoint URL.
- Configure the command to send a POST request to the webhook endpoint of this workflow.
- For details on how to set up the Slack app using Slash Commands and n8n, check out this video.

ServiceNow Setup:
- Create or use an existing account with the necessary permissions to access incident data.
- Configure the ServiceNow node with your ServiceNow credentials.

n8n Workflow Activation:
- Deploy and activate the workflow in your n8n instance.
- Ensure all nodes are properly configured and connected.

**How to customize this workflow to your needs**

- **Modify Incident Query Parameters:** Adjust the query logic in the Search For Incident in ServiceNow node to include additional filters or data points based on your organization's needs.
- **Slack Response Customization:** Customize the Slack response template to display additional incident details or to match your team's tone and style.
- **Error Handling:** Enhance the error handling nodes to include more detailed logs or send alerts to a dedicated Slack channel.
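For reference, here is a hedged Python sketch of the core lookup step, assuming a Slack slash-command payload (form-encoded, with the incident number in its text field) and the standard ServiceNow Table API; the instance URL, credentials and field names are placeholders, not values from this workflow.

```python
import requests

SN_INSTANCE = "https://your-instance.service-now.com"  # placeholder
SN_AUTH = ("api_user", "api_password")                 # placeholder credentials

def lookup_incident(slash_payload: dict) -> str:
    """Take the parsed slash-command payload and return a Slack-ready message."""
    incident_id = slash_payload.get("text", "").strip()  # e.g. "INC0010023"
    resp = requests.get(
        f"{SN_INSTANCE}/api/now/table/incident",
        params={"sysparm_query": f"number={incident_id}", "sysparm_limit": 1},
        auth=SN_AUTH,
        headers={"Accept": "application/json"},
        timeout=10,
    )
    if not resp.ok:
        return ":warning: Could not reach ServiceNow, please try again later."
    results = resp.json().get("result", [])
    if not results:
        return f"No incident found for `{incident_id}`."
    inc = results[0]
    return (
        f"*{inc['number']}* - {inc.get('short_description', 'no description')}\n"
        f"Severity: {inc.get('severity', 'n/a')} | Priority: {inc.get('priority', 'n/a')}"
    )
```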
by Tony Duffy
**IOT device control with MQTT and webhook**

This workflow is for users who want a practical example of how to control IOT systems using the MQTT protocol in an n8n environment. The template provides typical n8n MQTT and Webhook node implementation and configuration settings necessary to set IOT device inputs and outputs.

**How it works**

A webpage with IOT control 'on' and 'off' buttons is presented to the user. When a button is selected on the webpage, the value is sent via a webhook to trigger the active workflow. The workflow's Set node then prepares the received value into a message payload and passes the message to the MQTT node, which publishes the topic with the payload to a cloud-based MQTT broker. A remote ESP32 micro-controller subscribes to the broker and reads the payload contained in the topic. The ESP32 then toggles its GPIO pin depending on the topic payload value.

**The IOT control webpage**

The webpage is a simple HTML page containing the clickable 'on' and 'off' buttons. It also holds the GET webhook URL that sends the selected value to the n8n workflow, in this case running locally. The webhook URL format is http://localhost:5678/webhook/pin-control?value=action

The webpage code: IOT-control.html

**IOT device**

The IOT device is an ESP32 micro-controller running on a remote network. To keep it simple, GPIO2 is selected as the control output. When the received value is "on", GPIO2 goes high and an LED on the ESP32 turns on; it goes off when the received value is "off". The program for the ESP32 IOT control is 'main.py'. You will need a MicroPython interpreter uploaded to the ESP32 for the program to run automatically. The code can be easily edited and modified to accommodate any further attached IOT devices. A minimal subscriber sketch follows at the end of this section.

The ESP32 main.py code: main.py

**How to customise this workflow to your needs**

ESP32: You will need a working ESP32 with a MicroPython interpreter installed. The code main.py is provided. The main.py program can be loaded and edited with a Python IDE; Thonny was used for this example. Use a free MQTT broker to get started; "broker.emqx.io" is used in the code.

IOT Control Webpage: The webpage contains HTML and can easily be edited to enhance functionality. The embedded webhook is configured for n8n production mode: http://localhost:5678/webhook/pin-control?value=action If you want to run the page in test mode, use the following URL instead: http://localhost:5678/webhook-test/pin-control?value=action

n8n workflow: The workflow is a good demonstration of how to control IOT devices using n8n. Following these steps will give a good insight into microcontroller automation.
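The following MicroPython sketch illustrates the subscriber side described above. It assumes broker.emqx.io as the broker and uses a made-up topic name; the actual topic, client ID and network setup come from your own main.py and workflow configuration.

```python
# Minimal MicroPython sketch for the ESP32 subscriber side (illustrative only).
# Assumes Wi-Fi is already configured; the topic "n8n/pin-control" is an
# example placeholder, not necessarily the one used in the provided main.py.
from machine import Pin
from umqtt.simple import MQTTClient

BROKER = "broker.emqx.io"
TOPIC = b"n8n/pin-control"
led = Pin(2, Pin.OUT)  # GPIO2 drives the on-board LED

def on_message(topic, msg):
    # Payload "on"/"off" is what the n8n MQTT node publishes.
    led.value(1 if msg == b"on" else 0)

client = MQTTClient("esp32-demo", BROKER)
client.set_callback(on_message)
client.connect()
client.subscribe(TOPIC)

while True:
    client.wait_msg()  # block until the next published payload arrives
```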
by Yaron Been
Automated pipeline to collect and analyze investor data from Crunchbase, tracking investment patterns, funding history, and portfolio companies for market analysis and lead generation.

**What It Does**
- **Investor Profiling:** Collects comprehensive data on investors and VC firms
- **Investment Pattern Analysis:** Tracks funding history and investment preferences
- **Portfolio Monitoring:** Keeps tabs on investor portfolios and new investments
- **Data Enrichment:** Enhances raw data with additional context and metrics

**Perfect For**
- Startup founders seeking investors
- Market research analysts
- Investment professionals
- Business development teams
- Competitive intelligence

**Key Benefits**
- Comprehensive investor profiles
- Real-time investment tracking
- Market trend analysis
- Data-driven investment decisions
- Time-saving automation

**What You Need**
- Crunchbase API access
- n8n instance
- Storage solution (database or spreadsheet)

**Data Points Collected**
- Investor/Firm details
- Investment history
- Portfolio companies
- Funding rounds participated in
- Investment focus areas
- Contact information (when available)

**Setup & Support**
Quick Setup: Deploy in 30 minutes with our step-by-step configuration guide.
Watch Tutorial | Get Expert Support | Direct Help

Transform your investor research with automated data collection and analysis. Spend less time gathering data and more time making strategic decisions.
by Yulia
This n8n workflow demonstrates how to create an agent using LangChain and SQLite. The agent can understand natural language queries and interact with a SQLite database to provide accurate answers.

**Setup**

Run the top part of the workflow once. It downloads the example SQLite database, extracts it from a ZIP file and saves it locally (chinook.db).

**Chatting with Your Data**

Send a message in a chat window. The locally saved SQLite database loads automatically, and the user's chat input is combined with the binary data. The LangChain Agent node gets both and begins to work. The AI Agent processes the user's message, performs the necessary SQL queries, and generates a response based on the database information.

**Example Queries**

Try these sample queries to see the AI Agent in action (a sample SQL sketch appears at the end of this section):

- "Please describe the database" - Get a high-level overview of the database structure; only one or two queries are needed.
- "What are the revenues by genre?" - Retrieve revenue information grouped by genre; the LangChain agent iterates several times before producing the answer.

The AI Agent stores the final answer in its memory, allowing for context-aware conversations.

Read the full article: https://blog.n8n.io/ai-agents/
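To make the "revenues by genre" example concrete, here is a small Python sketch of the kind of SQL the agent ends up running, assuming the standard Chinook schema (Genre, Track, InvoiceLine tables); it is illustrative, not the exact query the agent generates.

```python
import sqlite3

# Assumes chinook.db has already been downloaded by the workflow's setup branch.
conn = sqlite3.connect("chinook.db")

query = """
SELECT g.Name AS genre,
       ROUND(SUM(il.UnitPrice * il.Quantity), 2) AS revenue
FROM InvoiceLine il
JOIN Track t ON t.TrackId = il.TrackId
JOIN Genre g ON g.GenreId = t.GenreId
GROUP BY g.Name
ORDER BY revenue DESC;
"""

for genre, revenue in conn.execute(query):
    print(f"{genre}: {revenue}")

conn.close()
```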
by Jimleuk
This n8n workflow demonstrates how we can use multimodal LLMs to parse and extract data from PDF documents in n8n. In this particular scenario, we're passing a candidate's CV/resume to an AI which filters out unqualified applications. However, this sneaky candidate has added a hidden prompt to bypass our bot! Whatever will we do? No fret, using AI Vision is one approach to solve this problem... read on!

**How it works**

- Our candidate's CV/resume is a PDF downloaded via Google Drive for this demonstration.
- The PDF is then converted into a PNG image using a tool called Stirling PDF. Since the hidden prompt has a white font color, it is invisible in the converted image.
- The image is then forwarded to a Basic LLM node to process using our multimodal model - in this example, we'll use Google's Gemini 1.5 Pro.
- In the Basic LLM node, we set a User Message with the type Binary. This allows us to send the image file directly in our request.
- The LLM is now immune to the hidden prompt, and its response is as expected.

The example CV/resume with hidden prompt can be found here: https://drive.google.com/file/d/1MORAdeev6cMcTJBV2EYALAwll8gCDRav/view?usp=sharing

**Requirements**

- Google Gemini API key. Alternatively, GPT-4 will also work for this use case.
- Stirling PDF or another service which can convert PDFs into images. Note that for data privacy, this example uses a public API; it is recommended that you self-host and use a private instance of Stirling PDF instead.

**Customising the workflow**

- Swap out the manual trigger for another trigger, such as a webhook, to integrate into your existing services.
- This example demonstrates a validation use case, i.e. "does the candidate look qualified?". You could additionally extract data points instead, such as years of experience, previous companies, etc.
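If you want to reproduce the PDF-to-image step outside of n8n, here is a hedged Python sketch using pdf2image (a local alternative to the Stirling PDF service mentioned above, requiring poppler to be installed); it renders the first page to PNG and base64-encodes it the way a multimodal request typically expects. The filenames are placeholders.

```python
import base64
from pdf2image import convert_from_path  # pip install pdf2image; requires poppler

def pdf_first_page_to_base64(pdf_path: str, out_png: str = "cv.png") -> str:
    """Render page 1 of the CV as a PNG and return it base64-encoded."""
    pages = convert_from_path(pdf_path, dpi=200, first_page=1, last_page=1)
    pages[0].save(out_png, "PNG")  # white-on-white "hidden prompt" text disappears here
    with open(out_png, "rb") as fh:
        return base64.b64encode(fh.read()).decode("ascii")

if __name__ == "__main__":
    image_b64 = pdf_first_page_to_base64("cv_resume.pdf")  # placeholder filename
    print(len(image_b64), "characters of base64 image data ready for the LLM request")
```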
by Lucas Peyrin
**How it works**

This workflow demonstrates a fundamental pattern for securing a webhook by requiring an API key. It acts as a gatekeeper, checking for a valid key in the request header before allowing the request to proceed.

1. Incoming Request: The Secured Webhook node receives an incoming POST request. It expects an API key to be sent in the x-api-key header.
2. API Key Verification: The Check API Key node takes the key from the incoming request's header. It then makes an internal HTTP request to a second webhook (Get API Key) which acts as a mock database. This second webhook retrieves a list of registered API keys (from the Registered API Keys node) and filters it to find a match for the key that was provided.
3. Conditional Response: If a match is found, the API Key Identified node routes the execution to the "success" path, returning a 200 OK response with the identified user's ID. If no match is found, it routes to the "unauthorized" path, returning a 401 Unauthorized error.

This pattern separates the public-facing endpoint from the data source, which is a good security practice.

**Set up steps**

Setup time: ~2 minutes. This workflow is designed to be a self-contained example.

1. Set up Credentials: This workflow uses "Header Auth" for its internal communication. Go to Credentials and create a new Header Auth credential. You can use any name and value (e.g., Name: X-N8N-Auth, Value: my-secret-password). Select this credential in all four webhook/HTTP Request nodes.
2. Add Your API Keys: Open the Registered API Keys node. This is your mock database. Edit the array to include the user_id and api_key pairs you want to authorize.
3. Activate the workflow.
4. Test it: Use the Test Secure Webhook node to send a request. Try it with a valid key from your list to see the success response. Change the x-api-key header to an invalid key to see the 401 Unauthorized error. (A client-side test sketch is shown below.)
5. For Production: Replace the mock database part of this workflow (the Get API Key webhook and Registered API Keys node) with a real database node like Supabase, Postgres, or Baserow to look up keys.
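As a quick way to exercise the secured webhook from outside n8n, here is a small Python sketch that sends the x-api-key header to the endpoint; the URL path and key values are placeholders for your own instance and Registered API Keys entries.

```python
import requests

WEBHOOK_URL = "http://localhost:5678/webhook/secured-endpoint"  # placeholder path

def call_secured_webhook(api_key: str) -> None:
    """Send a test POST with the x-api-key header and print the gatekeeper's verdict."""
    resp = requests.post(
        WEBHOOK_URL,
        headers={"x-api-key": api_key},
        json={"hello": "world"},
        timeout=10,
    )
    # Expect 200 with the identified user's ID on success, 401 for unknown keys.
    print(resp.status_code, resp.text)

if __name__ == "__main__":
    call_secured_webhook("valid-key-from-registered-list")   # expect 200
    call_secured_webhook("definitely-not-a-registered-key")  # expect 401
```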
by Pavel Duchovny
**Who is this for?**

This workflow is designed for:
- Database administrators and developers working with MongoDB
- Content managers handling movie databases
- Organizations looking to implement AI-powered search and recommendation systems
- Developers interested in combining LangChain, OpenAI, and MongoDB capabilities

**What problem does this workflow solve?**

Traditional database queries can be complex and require specific MongoDB syntax knowledge. This workflow addresses:
- The complexity of writing MongoDB aggregation pipelines
- The need for natural language interaction with movie databases
- The challenge of maintaining user preferences and favorites
- The gap between AI language models and database operations

**What this workflow does**

This workflow creates an intelligent agent that:
- Accepts natural language queries about movies
- Translates user requests into MongoDB aggregation pipelines (an example pipeline is shown after this section)
- Queries a movie database containing detailed information including plot summaries, genre classifications, cast and director information, runtime and release dates, and ratings and awards
- Provides contextual responses using OpenAI's language model
- Allows users to save favorite movies to the database
- Maintains conversation context using a window buffer memory

**Setup**

Required Credentials:
- OpenAI API credentials
- MongoDB connection details

Node Configuration:
- Configure the MongoDB connection in the MongoDBAggregate node
- Set up the OpenAI Chat Model with your API key
- Ensure the webhook trigger is properly configured for receiving chat messages

Database Requirements:
- A MongoDB collection named "movies" with the specified document structure
- Proper indexes for efficient querying
- Appropriate user permissions for read/write operations

**How to customize this workflow**

Modify the Document Structure:
- Update the tool description in the MongoDBAggregate node to match your collection schema
- Adjust the aggregation pipeline templates for your specific use case

Enhance the AI Agent:
- Customize the prompt in the "AI Agent - Movie Recommendation" node
- Modify the window buffer memory size based on your context needs
- Add additional tools for more functionality

Extend Functionality:
- Add more MongoDB operations beyond aggregation
- Implement additional workflows for different types of queries
- Create custom error handling and validation
- Add user authentication and rate limiting

Integration Options:
- Connect to external APIs for additional movie data
- Add webhook endpoints for different platforms
- Implement caching mechanisms for frequent queries
- Add data transformation nodes for specific output formats

This workflow serves as a foundation that can be adapted to various use cases beyond movie recommendations, such as e-commerce product search, content management systems, or any scenario requiring intelligent database interaction.
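To give a feel for what the agent's generated aggregation pipelines can look like, here is a hedged pymongo sketch that ranks genres by average rating. The connection string is a placeholder, and field names such as genres and imdb.rating are assumptions modelled on a typical movies collection; adjust them to your own document structure.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
movies = client["sample_db"]["movies"]             # "movies" collection, as in the workflow

# Illustrative pipeline: average rating per genre, best-rated genres first.
pipeline = [
    {"$unwind": "$genres"},
    {"$group": {"_id": "$genres",
                "avg_rating": {"$avg": "$imdb.rating"},
                "count": {"$sum": 1}}},
    {"$match": {"count": {"$gte": 20}}},  # ignore genres with few titles
    {"$sort": {"avg_rating": -1}},
    {"$limit": 10},
]

for row in movies.aggregate(pipeline):
    print(row["_id"], round(row["avg_rating"], 2), row["count"])
```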
by Yaron Been
Automated pipeline that exports technology stack data from BuiltWith to Google Sheets for analysis, reporting, and team collaboration.

**What It Does**
- Extracts technology stack data
- Organizes data in Google Sheets
- Updates automatically on schedule
- Supports multiple company tracking
- Enables easy data sharing

**Perfect For**
- Sales teams
- Market researchers
- Business analysts
- Competitive intelligence
- Technology consultants

**Key Benefits**
- Centralized technology database
- Easy data analysis
- Team collaboration
- Historical tracking
- Custom reporting

**What You Need**
- BuiltWith API access
- Google account
- n8n instance
- Google Sheets setup

**Data Exported**
- Company information
- Web technologies
- Hosting details
- Analytics tools
- Marketing technologies
- Contact information

**Setup & Support**
Quick Setup: Start exporting in 15 minutes with our step-by-step guide.
Watch Tutorial | Get Expert Support | Direct Help

Transform raw technology data into actionable business intelligence with automated exports.
by Marketing Canopy
**Automate Sports Betting Data with TheOddsAPI**

This workflow enables you to create and update a table using TheOddsAPI for sports betting data. It automatically pulls upcoming Ice Hockey games at the start of the day and updates the table with results at the end of the day. You can modify it to retrieve odds and game data for any sport. This setup is particularly useful for sports betting applications, such as tracking the results of a predictive model. It leverages scheduled triggers to activate HTTP requests, which then create or update fields in Airtable by matching on the game ID.

**Prerequisites**

Before implementing this workflow, ensure you have the following:

- TheOddsAPI Account & API Key: Sign up at TheOddsAPI and obtain an API key. Ensure you have the correct API permissions to access sports odds and results.
- Airtable Account & API Key: Create an account at Airtable and set up a database. Obtain an API key from the Account Settings page.
- API Access & Rate Limits: Review TheOddsAPI's rate limits and ensure your account tier allows for scheduled API calls. Confirm that Airtable API limits align with your expected data retrieval frequency.

**Step-by-Step Guide to Integrating TheOddsAPI**

1. Schedule API Requests: Set up a trigger to automatically pull upcoming Ice Hockey games at the start of each day.
2. Fetch Data from TheOddsAPI: Retrieve the latest sports betting data, including game details and odds, using TheOddsAPI (a request sketch is shown after this section).
3. Store Data in Airtable: Insert or update records in Airtable by matching game IDs, ensuring data accuracy.

Sample Airtable column setup for Ice Hockey (the table can be adjusted depending on the sport and data needs; reference TheOddsAPI documentation for more details):
- Game ID
- Sport
- League
- Game Date (UTC)
- Home Team
- Away Team
- Completed (Boolean: TRUE/FALSE for game completion status)
- Scores (JSON or String for final scores)
- Last Update (Timestamp of the latest update)

4. Schedule an End-of-Day Update: Configure another trigger to fetch final game results at the end of the day.
5. Update Records in Airtable: Modify existing Airtable records with final scores and game outcomes for complete tracking.
6. Customize for Other Sports: Adjust API parameters to retrieve data for different sports and betting odds, making the system flexible for multiple use cases.

This structured workflow automates sports betting data collection and updates, ensuring accurate and real-time tracking of odds and game results. By integrating TheOddsAPI with Airtable, you can build scalable applications for predictive sports analytics and betting insights.
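Below is a hedged Python sketch of the morning fetch using TheOddsAPI's v4 odds endpoint. The sport key icehockey_nhl and the regions/markets parameters are illustrative choices rather than values taken from this workflow, so confirm the endpoint details against TheOddsAPI's current documentation.

```python
import requests

API_KEY = "YOUR_THEODDSAPI_KEY"  # placeholder
SPORT = "icehockey_nhl"          # assumed sport key; check TheOddsAPI's sport list

def fetch_upcoming_games():
    """Pull upcoming ice hockey games with head-to-head odds."""
    resp = requests.get(
        f"https://api.the-odds-api.com/v4/sports/{SPORT}/odds",
        params={"apiKey": API_KEY, "regions": "us", "markets": "h2h", "oddsFormat": "decimal"},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()  # list of games; each "id" is the Airtable match key

if __name__ == "__main__":
    for game in fetch_upcoming_games():
        print(game["id"], game["home_team"], "vs", game["away_team"], game["commence_time"])
```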
by Jaruphat J.
**Who is this for?**

This workflow is ideal for businesses, accountants, and finance teams who receive bank slip images via LINE and want to automate the extraction of transaction details. It eliminates manual data entry and speeds up financial tracking.

**What problem does this workflow solve?**

Many businesses receive bank transfer slips via LINE from customers, but manually recording transaction details into spreadsheets is time-consuming and error-prone. This workflow automates the entire process, extracting structured data from the bank slips and storing it in Google Sheets for seamless record-keeping.

**What this workflow does:**

- Receives bank slip images from a LINE BOT
- Extracts transaction details (sender, receiver, amount, transaction ID) using SpaceOCR
- Automatically logs extracted data into Google Sheets
- Works with standard bank slips & PromptPay transactions
- Eliminates manual data entry and reduces errors

**Setup Instructions:**

1. Prerequisites
- A LINE BOT with Messaging API enabled
- A SpaceOCR API Key (get one from https://spaceocr.com/)
- A Google Sheets account to store extracted data
- An n8n instance running (Cloud or Self-hosted)

2. Setup Google Sheets
Create a Google Sheet with the following structure: A (Date), B (Time), C (Sender), D (Receiver), E (Bank Name), F (Amount), G (Transaction ID). Ensure your Google Sheets API is enabled and connected to n8n. For an example of the required format, check this Google Sheets template: Google Sheets Template

3. Configure n8n Workflow
- Webhook Node (receives the bank slip from the LINE BOT): set the method and path.
- HTTP Request (downloads the image from the LINE message): retrieves the image URL from the LINE message payload (see the sketch after this section).
- SpaceOCR Node (extracts text from the bank slip): set the input and API key.
- Google Sheets Node (saves transaction data): select your Google Sheet and map the extracted data (sender, receiver, amount, etc.) to the respective columns.

4. Deploy & Test
- Activate the workflow in n8n
- Set the Webhook URL in the LINE Developer Console
- Send a test bank slip image to the LINE BOT
- Check Google Sheets for the extracted transaction data
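For reference, here is a hedged Python sketch of the image-download step using the LINE Messaging API content endpoint. The channel access token and message ID are placeholders, and the OCR step is only stubbed, since SpaceOCR's exact request format should be taken from its own documentation.

```python
import requests

CHANNEL_ACCESS_TOKEN = "YOUR_LINE_CHANNEL_ACCESS_TOKEN"  # placeholder

def download_slip_image(message_id: str, out_path: str = "slip.jpg") -> str:
    """Fetch the image content of a LINE message (the bank slip photo)."""
    resp = requests.get(
        f"https://api-data.line.me/v2/bot/message/{message_id}/content",
        headers={"Authorization": f"Bearer {CHANNEL_ACCESS_TOKEN}"},
        timeout=15,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as fh:
        fh.write(resp.content)
    return out_path

def extract_slip_fields(image_path: str) -> dict:
    """Stub for the OCR step: send image_path to SpaceOCR per its API docs,
    then map the response into the sheet columns (Date, Time, Sender, ...)."""
    raise NotImplementedError("Wire this up to your OCR provider.")

if __name__ == "__main__":
    path = download_slip_image("1234567890")  # message_id comes from the webhook payload
    print("Saved slip image to", path)
```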
by n8n custom workflows
**Introduction**

The Namesilo Bulk Domain Availability workflow is a powerful automation solution designed to check the registration status of multiple domains simultaneously using the Namesilo API. This workflow efficiently processes large lists of domains by splitting them into manageable batches, adhering to API rate limits, and compiling the results into a convenient Excel spreadsheet. It eliminates the tedious process of manually checking domains one by one, saving significant time for domain investors, web developers, and digital marketers. The workflow is particularly valuable during brainstorming sessions for new projects, when conducting domain portfolio audits, or when preparing domain acquisition strategies. By automating the domain availability check process, users can quickly identify available domains for registration without the hassle of navigating through multiple web interfaces.

**Who is this for?**

This workflow is ideal for:
- Domain investors and flippers who need to check multiple domains quickly
- Web developers and agencies evaluating domain options for client projects
- Digital marketers researching domain availability for campaigns
- Business owners exploring domain options for new ventures
- IT professionals managing domain portfolios

Users should have basic familiarity with n8n workflow concepts and a Namesilo account to obtain an API key. No coding knowledge is required, though understanding of domain name systems would be beneficial.

**What problem is this workflow solving?**

Checking domain availability one-by-one is a time-consuming and tedious process, especially when dealing with dozens or hundreds of potential domains. This workflow solves several key challenges:
- Manual Inefficiency: Eliminates the need to individually search for each domain through registrar websites.
- Rate Limiting: Handles API rate limits automatically with built-in waiting periods.
- Data Organization: Compiles availability results into a structured Excel file rather than scattered notes or multiple browser tabs.
- Bulk Processing: Processes up to 200 domains per batch, with the ability to handle unlimited domains across multiple batches.
- Time Management: Frees up valuable time that would otherwise be spent on repetitive manual checks.

**What this workflow does**

Overview: The workflow takes a list of domains, processes them in batches of up to 200 domains per request (to comply with API limitations), checks their availability using the Namesilo API, and compiles the results into an Excel spreadsheet showing which domains are available for registration and which are already taken.

Process:
1. Input Setup: The workflow begins with a manual trigger and uses the "Set Data" node to collect the list of domains to check and your Namesilo API key.
2. Domain Processing: The "Convert & Split Domains" node transforms the input list into batches of up to 200 domains to comply with API limitations.
3. Batch Processing: The workflow loops through each batch of domains.
4. API Integration: For each batch, the "Namesilo Requests" node sends a request to the Namesilo API to check domain availability.
5. Data Parsing: The "Parse Data" node processes the API response, extracting information about which domains are available and which are taken.
6. Rate Limit Management: A 5-minute wait period is enforced between batches to respect Namesilo's API rate limits.
7. Data Compilation: The "Merge Results" node combines all the availability data.
8. Output Generation: Finally, the "Convert to Excel" node creates an Excel file with two columns: Domain and Availability (showing "Available" or "Unavailable" for each domain).

**Setup**

1. Import the workflow: Download the workflow JSON file and import it into your n8n instance.
2. Get a Namesilo API key: Create a free account at Namesilo and obtain your API key from https://www.namesilo.com/account/api-manager
3. Configure the workflow: Open the "Set Data" node, enter your Namesilo API key in the "Namesilo API Key" field, and enter your list of domains (one per line) in the "Domains" field.
4. Save and activate: Save the workflow and run it using the manual trigger.

**How to customize this workflow to your needs**

- **Modify domain input format:** You can adjust the code in the "Convert & Split Domains" node if your domain list comes in a different format.
- **Change batch size:** If needed, you can modify the batch size (currently set to 200) in the "Convert & Split Domains" node to accommodate different API limitations (see the batching sketch below).
- **Adjust wait time:** If you have a premium API account with different rate limits, you can modify the wait time in the "Wait" node.
- **Enhance output format:** Customize the "Convert to Excel" node to add additional columns or formatting to the output file.
- **Add domain filtering:** You could add a node before the API request to filter domains based on specific criteria (length, keywords, TLDs).
- **Integrate with other services:** Connect this workflow to domain registrars to automatically register available domains that meet your criteria.
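To illustrate the batching and API call described above, here is a hedged Python sketch that splits a domain list into groups of 200 and queries Namesilo's checkRegisterAvailability operation. The endpoint and parameters follow Namesilo's public API documentation as I understand it, so verify them in your API manager before relying on the output.

```python
import time
import requests

API_KEY = "YOUR_NAMESILO_API_KEY"  # placeholder
DOMAINS = ["example.com", "example.net", "my-new-idea.io"]  # your full list
BATCH_SIZE = 200        # up to 200 domains per request, as in the workflow
WAIT_SECONDS = 300      # 5-minute pause between batches, mirroring the Wait node

def check_batch(batch):
    """Query Namesilo for one batch of domains and return the raw XML response."""
    resp = requests.get(
        "https://www.namesilo.com/api/checkRegisterAvailability",
        params={"version": 1, "type": "xml", "key": API_KEY, "domains": ",".join(batch)},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text  # parse the available/unavailable elements as needed

if __name__ == "__main__":
    for i in range(0, len(DOMAINS), BATCH_SIZE):
        batch = DOMAINS[i : i + BATCH_SIZE]
        print(check_batch(batch))
        if i + BATCH_SIZE < len(DOMAINS):
            time.sleep(WAIT_SECONDS)  # respect the rate limit between batches
```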