by n8n Team
This template shows how to sync data from one service to another. Specifically, in this example we're saving a new qualified lead from a Postgres database to a Google Sheets file. Setup instructions are located inside the workflow template.
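For readers who want to see the underlying logic outside n8n, here is a minimal Python sketch of the same sync, assuming a hypothetical `leads` table with `qualified` and `synced` flags (the template itself does this with Postgres and Google Sheets nodes):

```python
# Minimal sketch of the Postgres -> Google Sheets sync the template automates.
# Table and column names ("leads", "qualified", "synced") are hypothetical.
import psycopg2
import gspread

conn = psycopg2.connect("dbname=crm user=n8n password=secret host=localhost")
cur = conn.cursor()

# Fetch qualified leads that have not been exported yet.
cur.execute(
    "SELECT id, name, email FROM leads "
    "WHERE qualified = TRUE AND synced = FALSE"
)

sheet = gspread.service_account().open("Qualified Leads").sheet1
for lead_id, name, email in cur.fetchall():
    sheet.append_row([lead_id, name, email])  # one spreadsheet row per lead
    cur.execute("UPDATE leads SET synced = TRUE WHERE id = %s", (lead_id,))

conn.commit()
```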
by please-open.it
**Intro**

This workflow requires users to authenticate with an OpenID Connect provider before they can call the webhook. If the user is not authenticated, it starts a login flow using an Authorization Code with PKCE (https://datatracker.ietf.org/doc/html/rfc7636), a standard way to authenticate users with OpenID Connect. After the user logs in, the webhook is refreshed and reads the user's token from a cookie. With this token, all details about the user are requested from the identity provider's userinfo endpoint.

**How to set up with Keycloak**

Keycloak is an open source identity and access management solution. Feel free to get a demo realm at https://please-open.it or get your own Keycloak server up and running.

- After creating a realm, go to "Realm Settings" and click on "OpenID Endpoint Configuration".
- Retrieve the authorization_endpoint, token_endpoint and userinfo_endpoint values, and set those variables in the "Set variables" node.
- In Keycloak, create a new client (name it as you want). Disable client authentication and check only "standard flow".
- At the third step, put the webhook URL in "Valid redirect URIs" and fill "Web origins" with a "+".

You're done: open the webhook and it asks you to authenticate.

**Usage**

**User information**

The userinfo node returns this structure about the logged-in user:

    [
      {
        "sub": "73a6543f-f420-4fa6-9811-209e903c348b",
        "email_verified": true,
        "preferred_username": "mathieu.passenaud@please-open.it",
        "email": "mathieu.passenaud@please-open.it"
      }
    ]

You can use this information in your workflow for custom operations.

**API calls**

The "code" node returns a cookie named "n8n-custom-auth", which is the access_token returned by the identity provider. This access_token can be used to call APIs connected to the same identity provider (for example, we call the userinfo API with this token).

Example: ask a user to log in with their Google account, then call an API (Gmail, Drive...) with their own token.

**How it works**

We published a blog post about this flow, how it works and how you can use it: https://blog.please-open.it/n8n-openid-client/
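As background, this is what the Authorization Code with PKCE preparation looks like in a minimal Python sketch; the client ID, redirect URI and Keycloak URL are placeholders for your own realm's values:

```python
# Minimal PKCE sketch (RFC 7636): generate a code_verifier and the S256
# code_challenge sent with the authorization request. URLs and client ID
# are placeholders - use the values from "OpenID Endpoint Configuration".
import base64
import hashlib
import secrets
import urllib.parse

# 1. Random code_verifier made of unreserved characters.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# 2. code_challenge = BASE64URL(SHA256(code_verifier)), without padding.
digest = hashlib.sha256(code_verifier.encode()).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# 3. Redirect the browser to the authorization endpoint.
params = {
    "client_id": "my-n8n-client",                        # your Keycloak client
    "response_type": "code",
    "redirect_uri": "http://localhost:5678/webhook/...", # your webhook URL
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
    "scope": "openid",
}
login_url = ("https://<keycloak>/realms/<realm>/protocol/openid-connect/auth?"
             + urllib.parse.urlencode(params))
print(login_url)
# After login, the provider redirects back with ?code=...; the token_endpoint
# is then called with grant_type=authorization_code plus the code_verifier.
```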
by Tony Duffy
**IoT device control with MQTT and webhook**

This workflow is for users wanting a practical example of how to control IoT systems using the MQTT protocol in an n8n environment. The template provides typical n8n MQTT and Webhook node implementation and configuration settings necessary to set IoT device inputs and outputs.

**How it works**

A webpage with IoT control 'on' and 'off' buttons is presented to the user. When a button is selected on the webpage, the value is sent via a webhook to trigger the active workflow. The workflow's Set node then prepares the received value into a message payload and passes the message to the MQTT node, which publishes the topic with the payload to a cloud-based MQTT broker. A remote ESP32 micro-controller subscribes to the broker and reads the payload contained in the topic. The ESP32 then toggles a GPIO pin depending on the topic payload value.

**The IoT control webpage**

The webpage is a simple HTML page containing the clickable 'on' and 'off' buttons. It also holds the GET webhook URL that sends the selected value to the n8n workflow, in this case running locally. The webhook URL format is:

http://localhost:5678/webhook/pin-control?value=action

The webpage code: IOT-control.html

**IoT device**

The IoT device is an ESP32 micro-controller running on a remote network. To keep it simple, GPIO2 is selected as the control output. When the received value is "on", GPIO2 goes high and an LED on the ESP32 turns on; it goes off again when the received value is "off". The program for the ESP32 IoT control is 'main.py'. You will need a MicroPython interpreter uploaded to the ESP32 for the program to run automatically. The code can easily be edited and modified to accommodate any further attached IoT devices.

The ESP32 main.py code: main.py

**How to customise this workflow to your needs**

ESP32

- You will need a working ESP32 with a MicroPython interpreter installed. The code main.py is provided.
- The main.py program can be loaded and edited with a Python IDE. I used Thonny for this example.
- Use a free MQTT broker to get started. I used "broker.emqx.io" in the code.

IoT control webpage

- The webpage contains HTML and can easily be edited to enhance functionality.
- The embedded webhook is configured for n8n production mode: http://localhost:5678/webhook/pin-control?value=action
- If you want to run the page in test mode, use the following URL instead: http://localhost:5678/webhook-test/pin-control?value=action

n8n workflow

The workflow is a good demonstration of how to control IoT devices using n8n. Following these steps will give a good insight into micro-controller automation.
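The template ships its own main.py; as a rough illustration of the subscriber side, here is a minimal MicroPython sketch under the same assumptions (broker.emqx.io, GPIO2), with a hypothetical topic name and Wi-Fi assumed to be already configured:

```python
# Minimal MicroPython sketch of the ESP32 subscriber side. This illustrates
# the mechanism and is not the template's main.py; the topic name is an
# assumption, and the board's Wi-Fi is assumed to be up (e.g. via boot.py).
from machine import Pin
from umqtt.simple import MQTTClient

led = Pin(2, Pin.OUT)  # GPIO2 drives the on-board LED

def on_message(topic, payload):
    # Payload arrives as b"on" or b"off", as published by the n8n MQTT node.
    led.value(1 if payload == b"on" else 0)

client = MQTTClient("esp32-pin-control", "broker.emqx.io")
client.set_callback(on_message)
client.connect()
client.subscribe(b"n8n/pin-control")  # hypothetical topic name

while True:
    client.wait_msg()  # block until the next publish arrives
```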
by Yaron Been
Automated pipeline to collect and analyze investor data from Crunchbase, tracking investment patterns, funding history, and portfolio companies for market analysis and lead generation.

🚀 What It Does

- **Investor Profiling**: Collects comprehensive data on investors and VC firms
- **Investment Pattern Analysis**: Tracks funding history and investment preferences
- **Portfolio Monitoring**: Keeps tabs on investor portfolios and new investments
- **Data Enrichment**: Enhances raw data with additional context and metrics

🎯 Perfect For

- Startup founders seeking investors
- Market research analysts
- Investment professionals
- Business development teams
- Competitive intelligence

⚙️ Key Benefits

✅ Comprehensive investor profiles
✅ Real-time investment tracking
✅ Market trend analysis
✅ Data-driven investment decisions
✅ Time-saving automation

🔧 What You Need

- Crunchbase API access
- n8n instance
- Storage solution (database or spreadsheet)

📊 Data Points Collected

- Investor/Firm details
- Investment history
- Portfolio companies
- Funding rounds participated in
- Investment focus areas
- Contact information (when available)

🛠️ Setup & Support

Quick Setup: Deploy in 30 minutes with our step-by-step configuration guide

📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Transform your investor research with automated data collection and analysis. Spend less time gathering data and more time making strategic decisions.
by Batu Öztürk
🚀 Transform LinkedIn Post Reactions into Content Ideas with Airtable

📝 Description

This workflow helps you turn your LinkedIn activity into a powerful content ideation engine. It captures your most recent post reactions on LinkedIn automatically, filters them based on recency, and structures the content into Airtable—ready for brainstorming, inspiration, or publication planning.

⚙️ What It Does

- **Fetches** the latest liked posts from LinkedIn via a public API (rapidapi.com/Real-Time Linkedin Scraper).
- **Filters** posts to include only those matching your chosen reaction type and posted in the last 7 days.
- **Extracts** the post text, author, links and more.
- **Formats** the data into a database-friendly structure.
- **Saves** the output in Airtable for easy tracking, tagging, or team collaboration.

💡 Use Cases

- Build a content idea vault from posts you admire.
- Capture inspiration from thought leaders.
- Identify trends based on what you find insightful.
- Supercharge your personal brand or newsletter by turning likes into learning.

🛠 Prerequisites

Before using this template, make sure you have:

✅ A RapidAPI account and access to the linkedin-api8 endpoint.
✅ Your RapidAPI key and the target LinkedIn username.
✅ An Airtable account with a base/table set up.

🧰 Setup Instructions

1. Clone this template into your n8n instance.
2. Open the Fetch LinkedIn Likes node and enter your LinkedIn username, plus your RapidAPI key in the headers.
3. Open the Save to Airtable node, connect your Airtable account, and link the correct base (Content Hub) and table (Ideas).
4. Set your desired schedule in the Trigger node.
5. Activate the workflow and you're done!

📋 Airtable Setup

Create a base called Content Hub and a table named Ideas with the following columns:

| Column Name | Type             | Required | Notes                      |
|-------------|------------------|----------|----------------------------|
| Title       | Single line text | ✅       | Generated from author info |
| Description | Long text        | ✅       | Contains post content      |
| Source      | URL              | ✅       | Link to the original post  |
| Type        | Single select    | ✅       | Value: Linkedin            |
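For illustration, the recency filter and Airtable mapping amount to something like the following Python sketch; the response field names (postedDate, author, text, postUrl) are assumptions, so check them against the actual RapidAPI response:

```python
# Sketch of the 7-day filter and Airtable row mapping. Field names on the
# incoming items are hypothetical - verify them against the real API response.
from datetime import datetime, timedelta, timezone

def filter_recent_likes(reactions, days=7):
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    rows = []
    for item in reactions:
        posted = datetime.fromisoformat(item["postedDate"])
        if posted.tzinfo is None:           # treat naive timestamps as UTC
            posted = posted.replace(tzinfo=timezone.utc)
        if posted >= cutoff:
            rows.append({
                "Title": item["author"],     # maps to the Airtable "Title" column
                "Description": item["text"], # maps to "Description"
                "Source": item["postUrl"],   # maps to "Source"
                "Type": "Linkedin",
            })
    return rows
```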
by Yulia
This n8n workflow demonstrates how to create an agent using LangChain and SQLite. The agent can understand natural language queries and interact with a SQLite database to provide accurate answers. 💪

🚀 Setup

Run the top part of the workflow once. It downloads the example SQLite database, extracts it from a ZIP file and saves it locally (chinook.db).

🗣️ Chatting with Your Data

1. Send a message in a chat window.
2. The locally saved SQLite database loads automatically.
3. The user's chat input is combined with the binary data.
4. The LangChain Agent node gets both and begins to work.

The AI Agent will process the user's message, perform the necessary SQL queries, and generate a response based on the database information. 🗄️

🌟 Example Queries

Try these sample queries to see the AI Agent in action:

- "Please describe the database" - Get a high-level overview of the database structure; only one or two queries are needed.
- "What are the revenues by genre?" - Retrieve revenue information grouped by genre; the LangChain agent iterates several times before producing the answer.

The AI Agent will store the final answer in its memory, allowing for context-aware conversations. 💬

Read the full article: 👉 https://blog.n8n.io/ai-agents/
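To make the second example concrete, this is the kind of SQL the agent typically converges on for the Chinook sample database, shown here as a standalone sqlite3 script:

```python
# The kind of query the agent arrives at for "What are the revenues by genre?",
# run against the standard Chinook schema downloaded by the workflow.
import sqlite3

conn = sqlite3.connect("chinook.db")
rows = conn.execute("""
    SELECT g.Name AS genre,
           ROUND(SUM(il.UnitPrice * il.Quantity), 2) AS revenue
    FROM InvoiceLine il
    JOIN Track t ON t.TrackId = il.TrackId
    JOIN Genre g ON g.GenreId = t.GenreId
    GROUP BY g.Name
    ORDER BY revenue DESC
""").fetchall()

for genre, revenue in rows:
    print(f"{genre}: {revenue}")

conn.close()
```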
by Derek Cheung
**Purpose of workflow:**

The purpose of this workflow is to automate scraping of a website, transforming it into a structured format, and loading it directly into a Google Sheets spreadsheet.

**How it works:**

- Web Scraping: Uses the Jina AI service to scrape website data and convert it into LLM-friendly text.
- Information Extraction: Employs an AI node to extract specific book details (title, price, availability, image URL, product URL) from the scraped data.
- Data Splitting: Splits the extracted information into individual book entries.
- Google Sheets Integration: Automatically populates a Google Sheets spreadsheet with the structured book data.

**Step by step setup:**

1. Set up the Jina AI service: Sign up for a Jina AI account and obtain an API key.
2. Configure the HTTP Request node: Enter the Jina AI URL with the target website, and add the API key to the request headers for authentication.
3. Set up the Information Extractor node: Use Claude AI to generate a JSON schema for data extraction. Upload a screenshot of the target website to Claude AI, ask it to suggest a JSON schema for extracting the required information, and copy the generated schema into the Information Extractor node.
4. Configure the Split node: Set it up to separate the extracted data into individual book entries.
5. Set up the Google Sheets node: Create a Google Sheets spreadsheet with columns for title, price, availability, image URL, and product URL, then configure the node to map the extracted data to the appropriate columns.
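Outside n8n, the Jina AI call made by the HTTP Request node looks roughly like this Python sketch; the target site is a stand-in example:

```python
# Sketch of the HTTP Request node's call to Jina AI's reader endpoint,
# which returns the page as LLM-friendly text. The target URL is a
# stand-in; replace JINA_API_KEY with your own key.
import requests

JINA_API_KEY = "jina_..."  # from your Jina AI account
target = "https://books.toscrape.com/"

resp = requests.get(
    f"https://r.jina.ai/{target}",
    headers={"Authorization": f"Bearer {JINA_API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text[:500])  # LLM-friendly text fed to the Information Extractor
```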
by Naveen Choudhary
**Description**

This workflow automates the process of scraping Google Events data using SerpApi and organizing it in Google Sheets for analysis and tracking.

**Who's it for**

- **Event organizers** who need to monitor competitor events in their area
- **Marketing teams** tracking local events for partnership opportunities
- **Researchers** collecting event data for analysis
- **Business owners** monitoring industry events and conferences

**How it works**

The workflow searches Google Events using SerpApi's Google Events engine, processes the returned data, and saves it to a Google Sheets spreadsheet. It handles pagination automatically to collect multiple events and flattens the nested API response into a structured format.

**What it does**

1. Configures search parameters - Sets the search query, total events to fetch, and pagination settings
2. Fetches events via SerpApi - Makes paginated requests to the Google Events API with proper rate limiting
3. Processes and flattens data - Transforms nested event data into a flat structure with all relevant fields
4. Saves to Google Sheets - Appends the processed events to a Google Sheets document for easy analysis

**Requirements**

- **SerpApi account** with API key (Get one here)
- **Google Sheets API access** (OAuth2 credentials)
- **Google Sheets document** - Make a copy of this template sheet

**How to set up**

1. Configure SerpApi credentials in the HTTP Request node
2. Set up Google Sheets OAuth2 authentication
3. Update the Google Sheets document ID in the final node to point to your copy
4. Modify search parameters in the "Set Search Parameters" node: change query to your desired search terms, adjust total_events (10 events per page), and set the start position for pagination
5. Run the workflow using the manual trigger

**How to customize the workflow**

- **Search terms**: Modify the query in the Set node (e.g., "conferences in New York", "music events Los Angeles")
- **Event count**: Adjust total_events to fetch more or fewer events
- **Output format**: Modify the Google Sheets column mapping to include/exclude specific fields
- **Rate limiting**: Adjust the requestInterval in the HTTP Request node if needed
- **Scheduling**: Replace the Manual Trigger with a Schedule Trigger for automated runs

**Output data includes**

- Event title, description, and direct link
- Start date and timing information
- Venue and address details
- Ticket information and pricing
- Event location map links
- Event images
- Original search query for tracking

Note: This workflow respects SerpApi rate limits with built-in delays between requests and processes up to 10 events per API call efficiently.
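As a reference for what the workflow does per page, here is a Python sketch of the paginated SerpApi request and the flattening step; the query and the row fields kept are examples:

```python
# Sketch of the paginated SerpApi call the workflow performs. The
# google_events engine and start-offset pagination (10 events per page)
# follow SerpApi's Google Events API; the query is an example.
import time
import requests

API_KEY = "your_serpapi_key"
query = "conferences in New York"
total_events = 30  # 10 per page -> 3 requests

events = []
for start in range(0, total_events, 10):
    resp = requests.get("https://serpapi.com/search.json", params={
        "engine": "google_events",
        "q": query,
        "start": start,
        "api_key": API_KEY,
    }, timeout=30)
    resp.raise_for_status()
    events.extend(resp.json().get("events_results", []))
    time.sleep(1)  # simple rate limiting between pages

# Flatten nested fields before writing rows to Google Sheets.
rows = [{
    "title": e.get("title"),
    "date": (e.get("date") or {}).get("when"),
    "address": ", ".join(e.get("address", [])),
    "link": e.get("link"),
} for e in events]
print(len(rows), "events collected")
```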
by Jimleuk
This n8n workflow demonstrates how we can use multimodal LLMs to parse and extract from PDF documents in n8n. In this particular scenario, we're passing a candidate's CV/resume to an AI which filters out unqualified applications. However, this sneaky candidate has added a hidden prompt to bypass our bot! Whatever will we do? Not to fret, using AI vision is one approach to solve this problem... read on!

**How it works**

- Our candidate's CV/resume is a PDF downloaded via Google Drive for this demonstration.
- The PDF is then converted into a PNG image using a tool called Stirling PDF. Since the hidden prompt has a white font color, it is invisible in the converted image.
- The image is then forwarded to a Basic LLM node to process using our multimodal model - in this example, we'll use Google's Gemini 1.5 Pro.
- In the Basic LLM node, we'll need to set a User Message with the type of Binary. This allows us to directly send the image file in our request.
- The LLM is now immune to the hidden prompt and its response is as expected.

The example CV/resume with the hidden prompt can be found here: https://drive.google.com/file/d/1MORAdeev6cMcTJBV2EYALAwll8gCDRav/view?usp=sharing

**Requirements**

- Google Gemini API key. Alternatively, GPT-4 will also work for this use case.
- Stirling PDF or another service which can convert PDFs into images. Note: for data privacy, this example uses a public API, and it is recommended that you self-host and use a private instance of Stirling PDF instead.

**Customising the workflow**

- Swap out the manual trigger for another trigger, such as a webhook, to integrate into your existing services.
- This example demonstrates a validation use case, i.e. "does the candidate look qualified?". You can additionally try extracting data points instead, such as years of experience, previous companies, etc.
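Outside n8n, the same idea (send the rendered image rather than the extracted text) looks roughly like this with Google's Python SDK; the prompt and file name are illustrative:

```python
# Sketch of the "vision instead of text" step: the CV page, already rendered
# to PNG by Stirling PDF, is sent to Gemini as image bytes. The prompt and
# file name are illustrative, not the template's exact values.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

with open("cv_page.png", "rb") as f:
    image_bytes = f.read()

response = model.generate_content([
    "You are screening candidates. Based only on what is visible in this "
    "CV, is the candidate qualified for a senior engineering role? "
    "Answer with a short justification.",
    {"mime_type": "image/png", "data": image_bytes},
])
print(response.text)  # white-on-white prompt injection never reaches the model
```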
by PollupAI
**LinkedIn Profile Enrichment Workflow**

**Who is this for?**

This workflow is ideal for recruiters, sales professionals, and marketing teams who need to enrich LinkedIn profiles with additional data for lead generation, talent sourcing, or market research.

**What problem is this workflow solving?**

Manually gathering detailed LinkedIn profile information can be time-consuming and prone to errors. This workflow automates the process of enriching profile data from LinkedIn, saving time and ensuring accuracy.

**What this workflow does**

- Input: Reads LinkedIn profile URLs from a Google Sheet.
- Validation: Filters out already enriched profiles to avoid redundant processing.
- Data Enrichment: Uses RapidAPI's Fresh LinkedIn Profile Data API to retrieve detailed profile information.
- Output: Updates the Google Sheet with enriched profile data, appending new information efficiently.

**Setup**

1. Google Sheet: Create a sheet with a column named linkedin_url and populate it with the profile URLs to enrich.
2. RapidAPI Account: Sign up at RapidAPI and subscribe to the Fresh LinkedIn Profile Data API.
3. API Integration: Replace the x-rapidapi-key and x-rapidapi-host values with your credentials from RapidAPI.
4. Run the Workflow: Trigger the workflow and monitor the updates to your Google Sheet.

**How to customize this workflow**

- **Filter Criteria**: Modify the filter step to include additional conditions for processing profiles.
- **API Configuration**: Adjust API parameters to retrieve specific fields or extend usage.
- **Output Format**: Customize how the enriched data is appended to the Google Sheet (e.g., format, column mappings).
- **Error Handling**: Add steps to handle API rate limits or missing data for smoother automation.

This workflow streamlines LinkedIn profile enrichment, making it faster and more effective for data-driven decision-making.
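As a sketch of the API integration step, the request looks roughly like this in Python; the endpoint path and query parameter name are assumptions, so confirm them in the API's RapidAPI playground before relying on them:

```python
# Sketch of the enrichment request the HTTP Request node sends. The host
# follows RapidAPI's naming convention; the endpoint path and parameter
# name below are assumptions - verify them in the RapidAPI playground.
import requests

headers = {
    "x-rapidapi-key": "YOUR_RAPIDAPI_KEY",
    "x-rapidapi-host": "fresh-linkedin-profile-data.p.rapidapi.com",
}

def enrich(linkedin_url: str) -> dict:
    resp = requests.get(
        "https://fresh-linkedin-profile-data.p.rapidapi.com/get-linkedin-profile",
        headers=headers,
        params={"linkedin_url": linkedin_url},  # hypothetical parameter name
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

profile = enrich("https://www.linkedin.com/in/some-profile/")
print(profile)
```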
by Pavel Duchovny
**Who is this for?**

This workflow is designed for:

- Database administrators and developers working with MongoDB
- Content managers handling movie databases
- Organizations looking to implement AI-powered search and recommendation systems
- Developers interested in combining LangChain, OpenAI, and MongoDB capabilities

**What problem does this workflow solve?**

Traditional database queries can be complex and require specific MongoDB syntax knowledge. This workflow addresses:

- The complexity of writing MongoDB aggregation pipelines
- The need for natural language interaction with movie databases
- The challenge of maintaining user preferences and favorites
- The gap between AI language models and database operations

**What this workflow does**

This workflow creates an intelligent agent that:

- Accepts natural language queries about movies
- Translates user requests into MongoDB aggregation pipelines
- Queries a movie database containing detailed information including plot summaries, genre classifications, cast and director information, runtime and release dates, and ratings and awards
- Provides contextual responses using OpenAI's language model
- Allows users to save favorite movies to the database
- Maintains conversation context using a window buffer memory

**Setup**

1. Required credentials: OpenAI API credentials and MongoDB connection details.
2. Node configuration: Configure the MongoDB connection in the MongoDBAggregate node, set up the OpenAI Chat Model with your API key, and ensure the webhook trigger is properly configured for receiving chat messages.
3. Database requirements: A MongoDB collection named "movies" with the specified document structure, proper indexes for efficient querying, and appropriate user permissions for read/write operations.

**How to customize this workflow**

- Modify the document structure: Update the tool description in the MongoDBAggregate node to match your collection schema, and adjust the aggregation pipeline templates for your specific use case.
- Enhance the AI agent: Customize the prompt in the "AI Agent - Movie Recommendation" node, modify the window buffer memory size based on your context needs, and add additional tools for more functionality.
- Extend functionality: Add more MongoDB operations beyond aggregation, implement additional workflows for different types of queries, create custom error handling and validation, and add user authentication and rate limiting.
- Integration options: Connect to external APIs for additional movie data, add webhook endpoints for different platforms, implement caching mechanisms for frequent queries, and add data transformation nodes for specific output formats.

This workflow serves as a foundation that can be adapted to various use cases beyond movie recommendations, such as e-commerce product search, content management systems, or any scenario requiring intelligent database interaction.
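For a concrete feel of what the agent produces, here is a sample aggregation pipeline run via pymongo; the field names follow MongoDB's sample_mflix movies dataset and should be adjusted to your own schema:

```python
# The kind of pipeline the agent generates for a request like
# "five top-rated comedies from the 1990s". Field names follow MongoDB's
# sample_mflix "movies" collection; adjust them to your own schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
movies = client["sample_mflix"]["movies"]

pipeline = [
    {"$match": {"genres": "Comedy", "year": {"$gte": 1990, "$lt": 2000}}},
    {"$sort": {"imdb.rating": -1}},
    {"$limit": 5},
    {"$project": {"_id": 0, "title": 1, "year": 1, "imdb.rating": 1}},
]

for doc in movies.aggregate(pipeline):
    print(doc)
```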
by Yaron Been
Automated pipeline that exports technology stack data from BuiltWith to Google Sheets for analysis, reporting, and team collaboration.

🚀 What It Does

- Extracts technology stack data
- Organizes data in Google Sheets
- Updates automatically on schedule
- Supports multiple company tracking
- Enables easy data sharing

🎯 Perfect For

- Sales teams
- Market researchers
- Business analysts
- Competitive intelligence
- Technology consultants

⚙️ Key Benefits

✅ Centralized technology database
✅ Easy data analysis
✅ Team collaboration
✅ Historical tracking
✅ Custom reporting

🔧 What You Need

- BuiltWith API access
- Google account
- n8n instance
- Google Sheets setup

📊 Data Exported

- Company information
- Web technologies
- Hosting details
- Analytics tools
- Marketing technologies
- Contact information

🛠️ Setup & Support

Quick Setup: Start exporting in 15 minutes with our step-by-step guide

📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Transform raw technology data into actionable business intelligence with automated exports.