by Hiroshi
**What this workflow does**
This n8n workflow demonstrates how to send a message in Lark using a Lark bot. It begins with a manual trigger, then retrieves the necessary Lark token via a POST request. The token is used to authenticate a call to the Lark API's `message/v4/send/` endpoint, which delivers the message to a specific chat. The Input node provides the required `app_id`, `app_secret`, `chat_id`, and message content.

**Who this is for**
This workflow is ideal for organizations, teams, and developers who need to automate message sending within Lark, especially those managing notifications, alerts, or team reminders. It reduces manual messaging by letting a Lark bot deliver messages at specific intervals or under particular conditions, improving team communication and responsiveness.

**Setup**
- Fill the Input node with your values.
- Replace the bearer token in the Send Message node with your own token.

Author: Hiroshi
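Outside of n8n, the two HTTP calls the workflow performs look roughly like the sketch below. This is a minimal illustration assuming the public Lark Suite domain (China-hosted tenants use the Feishu domain instead); the credential values are placeholders:

```javascript
// Minimal sketch of the two Lark API calls the workflow makes.
// APP_ID/APP_SECRET are placeholders for the values in the Input node.
const APP_ID = "cli_xxx";
const APP_SECRET = "secret_xxx";

async function sendLarkMessage(chatId, text) {
  // 1. Obtain a tenant access token with app_id and app_secret.
  const tokenRes = await fetch(
    "https://open.larksuite.com/open-apis/auth/v3/tenant_access_token/internal",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ app_id: APP_ID, app_secret: APP_SECRET }),
    }
  );
  const { tenant_access_token } = await tokenRes.json();

  // 2. Send the message via message/v4/send/ using the token as a bearer credential.
  const sendRes = await fetch("https://open.larksuite.com/open-apis/message/v4/send/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${tenant_access_token}`,
    },
    body: JSON.stringify({ chat_id: chatId, msg_type: "text", content: { text } }),
  });
  return sendRes.json();
}
```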
by Yaron Been
Automated pipeline to collect and analyze investor data from Crunchbase, tracking investment patterns, funding history, and portfolio companies for market analysis and lead generation.

**What It Does**
- **Investor Profiling**: Collects comprehensive data on investors and VC firms
- **Investment Pattern Analysis**: Tracks funding history and investment preferences
- **Portfolio Monitoring**: Keeps tabs on investor portfolios and new investments
- **Data Enrichment**: Enhances raw data with additional context and metrics

**Perfect For**
- Startup founders seeking investors
- Market research analysts
- Investment professionals
- Business development teams
- Competitive intelligence

**Key Benefits**
- Comprehensive investor profiles
- Real-time investment tracking
- Market trend analysis
- Data-driven investment decisions
- Time-saving automation

**What You Need**
- Crunchbase API access
- n8n instance
- Storage solution (database or spreadsheet)

**Data Points Collected**
- Investor/firm details
- Investment history
- Portfolio companies
- Funding rounds participated in
- Investment focus areas
- Contact information (when available)

**Setup & Support**
Quick setup: deploy in 30 minutes with our step-by-step configuration guide.
- Watch Tutorial
- Get Expert Support
- Direct Help

Transform your investor research with automated data collection and analysis. Spend less time gathering data and more time making strategic decisions.
by n8n Team
This workflow creates an Asana task when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the task in Asana.

**Prerequisites**
- Zendesk account and Zendesk credentials.
- Asana account and Asana credentials.
- Asana workspace to create tasks in.

**How it works**
The workflow listens for new tickets in Zendesk. When a new ticket is created, the workflow creates a new task in Asana. The Asana GID is then saved in one of the ticket's fields (in setup we call this "Asana GID"). The next time a comment is added to the ticket, the workflow retrieves the Asana GID from the ticket's field and adds the comment to the task in Asana.

**Setup**
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets". Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
7. Under "Actions", select "Notify active webhook" and select the webhook you created previously. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

8. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Asana GID. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as "Asana GID".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
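For reference, the Asana side of the workflow boils down to two REST calls, which the n8n Asana node performs for you. A minimal sketch with placeholder credentials and illustrative payload fields:

```javascript
// Rough sketch of the two Asana calls behind the workflow, shown as plain
// HTTP requests. The token and payload values are placeholders.
const ASANA_TOKEN = "0/xxx"; // personal access token (placeholder)

// Create a task for a new Zendesk ticket and return its GID,
// which the workflow stores in the "Asana GID" ticket field.
async function createTask(workspaceGid, ticket) {
  const res = await fetch("https://app.asana.com/api/1.0/tasks", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ASANA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      data: { workspace: workspaceGid, name: `Zendesk #${ticket.id}`, notes: ticket.comment },
    }),
  });
  return (await res.json()).data.gid;
}

// Add a later Zendesk comment to the task as an Asana story (comment).
async function addComment(taskGid, text) {
  await fetch(`https://app.asana.com/api/1.0/tasks/${taskGid}/stories`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ASANA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ data: { text } }),
  });
}
```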
by Batu Öztürk
Transform LinkedIn Post Reactions into Content Ideas with Airtable

**Description**
This workflow helps you turn your LinkedIn activity into a powerful content ideation engine. It automatically captures your most recent post reactions on LinkedIn, filters them by recency, and structures the content into Airtable, ready for brainstorming, inspiration, or publication planning.

**What It Does**
- **Fetches** the latest liked posts from LinkedIn via a public API (rapidapi.com / Real-Time LinkedIn Scraper).
- **Filters** posts to include only those marked with your chosen reaction and posted in the last 7 days (a sketch of this filter follows the entry below).
- **Extracts** the post text, author, links, and more.
- **Formats** the data into a database-friendly structure.
- **Saves** the output in Airtable for easy tracking, tagging, or team collaboration.

**Use Cases**
- Build a content idea vault from posts you admire.
- Capture inspiration from thought leaders.
- Identify trends based on what you find insightful.
- Supercharge your personal brand or newsletter by turning likes into learning.

**Prerequisites**
Before using this template, make sure you have:
- A RapidAPI account and access to the linkedin-api8 endpoint.
- Your RapidAPI key and the target LinkedIn username.
- An Airtable account with a base/table set up.

**Setup Instructions**
1. Clone this template into your n8n instance.
2. Open the Fetch LinkedIn Likes node and enter your LinkedIn username and your RapidAPI key in the headers.
3. Open the Save to Airtable node, connect your Airtable account, and link the correct base (Content Hub) and table (Ideas).
4. Set your desired schedule in the Trigger node.
5. Activate the workflow and you're done!

**Airtable Setup**
Create a base called Content Hub and a table named Ideas with the following columns:

| Column Name | Type             | Required | Notes                      |
|-------------|------------------|----------|----------------------------|
| Title       | Single line text | Yes      | Generated from author info |
| Description | Long text        | Yes      | Contains post content      |
| Source      | URL              | Yes      | Link to the original post  |
| Type        | Single select    | Yes      | Value: Linkedin            |
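The recency filter mentioned above can be expressed as a small n8n Code-node snippet. The field names (`reaction`, `postedDate`) are assumptions about the scraper's response shape, so adjust them to the actual payload you receive:

```javascript
// Hypothetical n8n Code node ("Run Once for All Items") keeping only
// reactions of the desired type from the last 7 days.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;
const cutoff = Date.now() - SEVEN_DAYS_MS;

return $input.all().filter((item) => {
  const post = item.json;
  const postedAt = new Date(post.postedDate).getTime(); // assumed field name
  return post.reaction === "LIKE" && postedAt >= cutoff; // assumed field name
});
```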
by Yulia
This n8n workflow demonstrates how to create an agent using LangChain and SQLite. The agent can understand natural language queries and interact with a SQLite database to provide accurate answers.

**Setup**
Run the top part of the workflow once. It downloads the example SQLite database, extracts it from a ZIP file, and saves it locally (chinook.db).

**Chatting with Your Data**
Send a message in the chat window. The locally saved SQLite database loads automatically, and the user's chat input is combined with the binary data. The LangChain Agent node receives both and begins to work: it processes the user's message, performs the necessary SQL queries, and generates a response based on the database information.

**Example Queries**
Try these sample queries to see the AI Agent in action:
- "Please describe the database" - a high-level overview of the database structure; only one or two queries are needed.
- "What are the revenues by genre?" - revenue information grouped by genre; the LangChain agent iterates several times before producing the answer.

The AI Agent stores the final answer in its memory, allowing for context-aware conversations.

Read the full article: https://blog.n8n.io/ai-agents/
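For the second example query, the agent typically ends up composing SQL along these lines. This is a sketch based on the standard Chinook schema (Genre, Track, InvoiceLine), wrapped in JavaScript only for presentation; the agent generates and runs such queries itself:

```javascript
// The kind of SQL the agent produces for "What are the revenues by genre?",
// assuming the standard Chinook table names in chinook.db.
const revenueByGenre = `
  SELECT g.Name AS genre,
         ROUND(SUM(il.UnitPrice * il.Quantity), 2) AS revenue
  FROM InvoiceLine il
  JOIN Track t ON t.TrackId = il.TrackId
  JOIN Genre g ON g.GenreId = t.GenreId
  GROUP BY g.Name
  ORDER BY revenue DESC;
`;

// You can verify the result with any SQLite client, e.g. the sqlite3 CLI:
//   sqlite3 chinook.db "<query above>"
```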
by Aditya Gaur
**Who is this template for?**
This template is designed for developers, DevOps engineers, and automation enthusiasts who want to streamline their GitLab merge request process using n8n, a low-code workflow automation tool. It eliminates manual intervention by automating the merging of GitLab branches through API calls.

**How it works**
1. Trigger the workflow: the workflow can be triggered by a webhook, a scheduled event, or a GitLab event (e.g., a new merge request is created or approved).
2. Fetch merge request details: n8n makes an API call to GitLab to retrieve merge request details.
3. Check merge conditions: the workflow validates whether the merge request meets predefined conditions (e.g., approvals met, CI/CD pipelines passed).
4. Perform the merge: if all conditions are met, n8n sends a request to the GitLab API to merge the branch automatically, as sketched below.

**Setup Steps**

1. Prerequisites
- An n8n instance (self-hosted or Cloud)
- A GitLab personal access token with API access
- A GitLab repository with merge requests enabled

2. Create the n8n workflow
- Set up a trigger: choose a trigger node (Webhook, Cron, or GitLab Trigger).
- Fetch merge request details: add an HTTP Request node to call GET /merge_requests/:id from the GitLab API.
- Validate conditions: check that the merge request has the necessary approvals and that CI/CD pipelines have passed.
- Merge the request: use an HTTP Request node to call the PUT /merge_requests/:id/merge API.

3. Test the workflow
- Create a test merge request.
- Check whether the workflow triggers and merges automatically.
- Debug using n8n logs if needed.

4. Deploy and monitor
- Deploy the workflow in production.
- Use n8n's monitoring features to track execution.

This template enables seamless GitLab merge automation, improving efficiency and reducing manual work. Note: never hard-code API tokens or secrets in your HTTP Request nodes; use n8n credentials instead.
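The sketch referenced above, assuming GitLab's v4 REST API; the merge-readiness checks are illustrative, so verify the response fields against your GitLab version:

```javascript
// Minimal sketch of the GitLab calls behind steps 2-4. Replace projectId
// and mrIid with real values; keep the token in n8n credentials, not code.
const GITLAB = "https://gitlab.com/api/v4";
const TOKEN = process.env.GITLAB_TOKEN; // personal access token with api scope

async function autoMerge(projectId, mrIid) {
  const headers = { "PRIVATE-TOKEN": TOKEN };

  // Fetch merge request details.
  const mr = await (
    await fetch(`${GITLAB}/projects/${projectId}/merge_requests/${mrIid}`, { headers })
  ).json();

  // Validate conditions: mergeable, and a passing head pipeline if one exists.
  const pipelineOk = !mr.head_pipeline || mr.head_pipeline.status === "success";
  if (mr.merge_status !== "can_be_merged" || !pipelineOk) {
    return { merged: false, reason: mr.merge_status };
  }

  // Perform the merge.
  const res = await fetch(
    `${GITLAB}/projects/${projectId}/merge_requests/${mrIid}/merge`,
    { method: "PUT", headers }
  );
  return { merged: res.ok };
}
```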
by Naveen Choudhary
**Description**
This workflow automates the process of scraping Google Events data using SerpApi and organizing it in Google Sheets for analysis and tracking.

**Who's it for**
- **Event organizers** who need to monitor competitor events in their area
- **Marketing teams** tracking local events for partnership opportunities
- **Researchers** collecting event data for analysis
- **Business owners** monitoring industry events and conferences

**How it works**
The workflow searches Google Events using SerpApi's Google Events engine, processes the returned data, and saves it to a Google Sheets spreadsheet. It handles pagination automatically to collect multiple events and flattens the nested API response into a structured format.

**What it does**
1. Configures search parameters - sets the search query, total events to fetch, and pagination settings
2. Fetches events via SerpApi - makes paginated requests to the Google Events API with proper rate limiting (see the sketch below)
3. Processes and flattens data - transforms nested event data into a flat structure with all relevant fields
4. Saves to Google Sheets - appends the processed events to a Google Sheets document for easy analysis

**Requirements**
- **SerpApi account** with API key (Get one here)
- **Google Sheets API access** (OAuth2 credentials)
- **Google Sheets document** - make a copy of this template sheet

**How to set up**
1. Configure SerpApi credentials in the HTTP Request node
2. Set up Google Sheets OAuth2 authentication
3. Update the Google Sheets document ID in the final node to point to your copy
4. Modify search parameters in the "Set Search Parameters" node: change query to your desired search terms, adjust total_events (10 events per page), and set the start position for pagination
5. Run the workflow using the manual trigger

**How to customize the workflow**
- **Search terms**: modify the query in the Set node (e.g., "conferences in New York", "music events Los Angeles")
- **Event count**: adjust total_events to fetch more or fewer events
- **Output format**: modify the Google Sheets column mapping to include/exclude specific fields
- **Rate limiting**: adjust the requestInterval in the HTTP Request node if needed
- **Scheduling**: replace the Manual Trigger with a Schedule Trigger for automated runs

**Output data includes**
- Event title, description, and direct link
- Start date and timing information
- Venue and address details
- Ticket information and pricing
- Event location map links
- Event images
- Original search query for tracking

Note: This workflow respects SerpApi rate limits with built-in delays between requests and efficiently processes up to 10 events per API call.
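The pagination sketch referenced above: a minimal standalone version of the SerpApi loop, assuming the google_events engine and its events_results response field (the flattened row fields are illustrative):

```javascript
// Sketch of the paginated SerpApi calls. Google Events pages results
// 10 at a time via the `start` offset parameter.
const API_KEY = process.env.SERPAPI_KEY;

async function fetchEvents(query, totalEvents) {
  const events = [];
  for (let start = 0; start < totalEvents; start += 10) {
    const url = new URL("https://serpapi.com/search.json");
    url.search = new URLSearchParams({
      engine: "google_events",
      q: query,
      start: String(start),
      api_key: API_KEY,
    }).toString();
    const page = await (await fetch(url)).json();
    if (!page.events_results?.length) break; // no more pages
    events.push(...page.events_results);
    await new Promise((r) => setTimeout(r, 1000)); // simple rate limiting
  }
  // Flatten nested records into sheet-friendly rows (fields are illustrative).
  return events.map((e) => ({
    title: e.title,
    date: e.date?.start_date,
    venue: e.venue?.name,
    address: (e.address || []).join(", "),
    link: e.link,
  }));
}
```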
by Jimleuk
This n8n workflow demonstrates how we can use multimodal LLMs to parse and extract from PDF documents in n8n. In this particular scenario, we're passing a candidate's CV/resume to an AI which filters out unqualified applications. However, this sneaky candidate has added a hidden prompt to bypass our bot! Whatever will we do? No fret: using AI Vision is one approach to solve this problem... read on!

**How it works**
- Our candidate's CV/resume is a PDF downloaded via Google Drive for this demonstration.
- The PDF is then converted into a PNG image using a tool called Stirling PDF. Since the hidden prompt has a white font color, it is invisible in the converted image.
- The image is then forwarded to a Basic LLM node to process using our multimodal model - in this example, we'll use Google's Gemini 1.5 Pro.
- In the Basic LLM node, we'll need to set a User Message with the type of Binary. This allows us to directly send the image file in our request.
- The LLM is now immune to the hidden prompt, and its response is as expected.

The example CV/resume with the hidden prompt can be found here: https://drive.google.com/file/d/1MORAdeev6cMcTJBV2EYALAwll8gCDRav/view?usp=sharing

**Requirements**
- Google Gemini API key. Alternatively, GPT-4 will also work for this use case.
- Stirling PDF or another service which can convert PDFs into images. Note: for data privacy, this example uses a public API, and it is recommended that you self-host and use a private instance of Stirling PDF instead.

**Customising the workflow**
- Swap out the manual trigger for another trigger, such as a webhook, to integrate into your existing services.
- This example demonstrates a validation use case, i.e. "does the candidate look qualified?". You could additionally extract data points instead, such as years of experience, previous companies, etc.
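If you want to reproduce the conversion step outside n8n, a call to a self-hosted Stirling PDF instance might look roughly like this. The endpoint and form-field names are assumptions based on Stirling PDF's convert API, so check them against your instance's Swagger docs:

```javascript
// Sketch of the PDF-to-PNG conversion against a self-hosted Stirling PDF.
// Endpoint path and field names are assumptions; verify on your instance.
import { readFile, writeFile } from "node:fs/promises";

async function pdfToPng(pdfPath, stirlingBaseUrl) {
  const form = new FormData();
  form.append("fileInput", new Blob([await readFile(pdfPath)]), "cv.pdf");
  form.append("imageFormat", "png");
  form.append("singleOrMultiple", "single"); // one combined image

  const res = await fetch(`${stirlingBaseUrl}/api/v1/convert/pdf/img`, {
    method: "POST",
    body: form,
  });
  // White-on-white text disappears in the rasterized image, so the hidden
  // prompt never reaches the multimodal model.
  await writeFile("cv.png", Buffer.from(await res.arrayBuffer()));
}

// Usage: await pdfToPng("cv.pdf", "http://localhost:8080");
```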
by The Higher Pitch
This workflow automates the process of publishing PR News articles to the WordPress website.

**How it works**
- Uses an RSS Feed Trigger to monitor new PR News articles.
- Extracts the article content and parses the featured image URL.
- Uploads the image to WordPress as a media item.
- Creates a new draft post on the WordPress site using the article's content and sets the uploaded image as the featured image.

**Features**
- Polls the RSS feed every minute.
- Automatically extracts and sets featured images.
- Posts are created as drafts for editorial review.

**Requirements**
- WordPress REST API access with media upload permission.
- Active WordPress credentials in n8n.

Perfect for teams who want to streamline PR content publishing without manual effort.
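Under the hood, the WordPress steps amount to two REST calls. A minimal sketch assuming application-password authentication and a JPEG featured image; the site URL and credentials are placeholders:

```javascript
// Sketch of the WordPress REST calls: upload the featured image as a media
// item, then create a draft post referencing it.
const SITE = "https://example.com"; // placeholder site URL
const AUTH = "Basic " + Buffer.from("user:app-password").toString("base64");

async function publishDraft(article, imageUrl) {
  // 1. Upload the image as a media item.
  const img = await (await fetch(imageUrl)).arrayBuffer();
  const media = await (
    await fetch(`${SITE}/wp-json/wp/v2/media`, {
      method: "POST",
      headers: {
        Authorization: AUTH,
        "Content-Type": "image/jpeg",
        "Content-Disposition": 'attachment; filename="featured.jpg"',
      },
      body: Buffer.from(img),
    })
  ).json();

  // 2. Create the draft post with the uploaded image as featured media.
  return (
    await fetch(`${SITE}/wp-json/wp/v2/posts`, {
      method: "POST",
      headers: { Authorization: AUTH, "Content-Type": "application/json" },
      body: JSON.stringify({
        title: article.title,
        content: article.content,
        status: "draft", // kept as draft for editorial review
        featured_media: media.id,
      }),
    })
  ).json();
}
```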
by Pavel Duchovny
**Who is this for?**
This workflow is designed for:
- Database administrators and developers working with MongoDB
- Content managers handling movie databases
- Organizations looking to implement AI-powered search and recommendation systems
- Developers interested in combining LangChain, OpenAI, and MongoDB capabilities

**What problem does this workflow solve?**
Traditional database queries can be complex and require specific MongoDB syntax knowledge. This workflow addresses:
- The complexity of writing MongoDB aggregation pipelines
- The need for natural language interaction with movie databases
- The challenge of maintaining user preferences and favorites
- The gap between AI language models and database operations

**What this workflow does**
This workflow creates an intelligent agent that:
- Accepts natural language queries about movies
- Translates user requests into MongoDB aggregation pipelines (see the sketch below)
- Queries a movie database containing detailed information, including plot summaries, genre classifications, cast and director information, runtime and release dates, and ratings and awards
- Provides contextual responses using OpenAI's language model
- Allows users to save favorite movies to the database
- Maintains conversation context using a window buffer memory

**Setup**
1. Required credentials: OpenAI API credentials and MongoDB connection details.
2. Node configuration: configure the MongoDB connection in the MongoDBAggregate node, set up the OpenAI Chat Model with your API key, and ensure the webhook trigger is properly configured for receiving chat messages.
3. Database requirements: a MongoDB collection named "movies" with the specified document structure, proper indexes for efficient querying, and appropriate user permissions for read/write operations.

**How to customize this workflow**
- Modify the document structure: update the tool description in the MongoDBAggregate node to match your collection schema, and adjust the aggregation pipeline templates for your specific use case.
- Enhance the AI agent: customize the prompt in the "AI Agent - Movie Recommendation" node, modify the window buffer memory size based on your context needs, and add additional tools for more functionality.
- Extend functionality: add more MongoDB operations beyond aggregation, implement additional workflows for different types of queries, create custom error handling and validation, and add user authentication and rate limiting.
- Integration options: connect to external APIs for additional movie data, add webhook endpoints for different platforms, implement caching mechanisms for frequent queries, and add data transformation nodes for specific output formats.

This workflow serves as a foundation that can be adapted to various use cases beyond movie recommendations, such as e-commerce product search, content management systems, or any scenario requiring intelligent database interaction.
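The sketch referenced above: the kind of pipeline the agent generates for a request like "top-rated sci-fi movies of the 1990s". The field names follow MongoDB's sample_mflix movies collection and are assumptions about your schema:

```javascript
// Illustration of an agent-generated aggregation pipeline. Field names
// (genres, year, imdb.rating, plot) are assumed from sample_mflix.
const pipeline = [
  { $match: { genres: "Sci-Fi", year: { $gte: 1990, $lt: 2000 } } },
  { $sort: { "imdb.rating": -1 } },
  { $limit: 5 },
  { $project: { _id: 0, title: 1, year: 1, "imdb.rating": 1, plot: 1 } },
];

// Running it with the official driver (what the MongoDBAggregate node
// effectively does for you):
// const { MongoClient } = require("mongodb");
// const client = new MongoClient(process.env.MONGODB_URI);
// const results = await client.db("sample_mflix")
//   .collection("movies").aggregate(pipeline).toArray();
```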
by Lucas Peyrin
**How it works**
This workflow is a hands-on tutorial for the Code node in n8n, covering both basic and advanced concepts through a simple data processing task. A condensed sketch of the patterns appears after this list.

1. Provides sample data: the workflow begins with a sample list of users.
2. Processes each item (Run Once for Each Item): the first Code node iterates through each user to calculate their fullName and age. This demonstrates basic item-by-item data manipulation using $input.item.json.
3. Fetches external data (advanced): the second Code node showcases a more advanced feature. For each user, it uses the built-in this.helpers.httpRequest function to call an external API (genderize.io) to enrich the data with a predicted gender.
4. Processes all items at once (Run Once for All Items): the third Code node receives the fully enriched list of users and runs only once. It uses $items() to access the entire list and calculate the averageAge, returning a single summary item.
5. Creates a binary file: the final Code node takes the fully enriched list of users once more and creates a binary CSV file, showing how to work with binary data and Buffers in JavaScript.

**Set up steps**
Setup time: < 1 minute. This workflow is a self-contained tutorial and requires no setup.
- Explore the nodes: click on each of the Code nodes to read the code and the comments explaining each step, from basic to advanced.
- Run the workflow: click "Execute Workflow" to see it in action.
- Check the output: click on each node after the execution to see how the data is transformed at each stage. Notice how the data is progressively enriched.
- Experiment! Try changing the data in the 1. Sample Data node, or modify the code in the Code nodes to see what happens.
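The condensed sketch referenced above. Since the actual sample data isn't shown here, the field names (firstName, lastName, birthYear) are illustrative assumptions; each block would live in its own Code node:

```javascript
// (2) "Run Once for Each Item": derive fields for the current item.
const user = $input.item.json;
return {
  json: {
    ...user,
    fullName: `${user.firstName} ${user.lastName}`,      // assumed fields
    age: new Date().getFullYear() - user.birthYear,       // assumed field
  },
};

// (4) "Run Once for All Items": aggregate across every incoming item.
// const users = $input.all().map((i) => i.json);
// const averageAge = users.reduce((sum, u) => sum + u.age, 0) / users.length;
// return [{ json: { averageAge } }];

// (5) Binary output: build a CSV and return it as a base64-encoded file.
// const csv = "fullName,age\n" + users.map((u) => `${u.fullName},${u.age}`).join("\n");
// return [{
//   json: {},
//   binary: {
//     data: {
//       data: Buffer.from(csv).toString("base64"),
//       mimeType: "text/csv",
//       fileName: "users.csv",
//     },
//   },
// }];
```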
by n8n Team
This workflow creates a GitHub issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in GitHub.

**Prerequisites**
- Zendesk account and Zendesk credentials.
- GitHub account and GitHub credentials.
- GitHub repository to create issues in.

**How it works**
The workflow listens for new tickets in Zendesk. When a new ticket is created, the workflow creates a new issue in GitHub. The GitHub issue number is then saved in one of the ticket's fields (in setup we call this "GitHub Issue Number"). The next time a comment is added to the ticket, the workflow retrieves the GitHub issue number from the ticket's field and adds the comment to the issue in GitHub.

**Setup**
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets". Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
7. Under "Actions", select "Notify active webhook" and select the webhook you created previously. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

8. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the GitHub issue number. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as "GitHub Issue Number".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
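For reference, the GitHub side reduces to two REST calls, which the n8n GitHub node performs for you. A minimal sketch with placeholder values:

```javascript
// Rough sketch of the GitHub calls behind the workflow, shown as plain
// HTTP requests against the REST API.
const GH = "https://api.github.com";
const HEADERS = {
  Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  Accept: "application/vnd.github+json",
  "Content-Type": "application/json",
};

// Create an issue for a new Zendesk ticket and return its number,
// which the workflow stores in the "GitHub Issue Number" ticket field.
async function createIssue(owner, repo, ticket) {
  const res = await fetch(`${GH}/repos/${owner}/${repo}/issues`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ title: `Zendesk #${ticket.id}`, body: ticket.comment }),
  });
  return (await res.json()).number;
}

// Add a later Zendesk comment to the existing issue.
async function addComment(owner, repo, issueNumber, body) {
  await fetch(`${GH}/repos/${owner}/${repo}/issues/${issueNumber}/comments`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ body }),
  });
}
```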