by Jonathan
How it works

This workflow watches a selected Google Drive folder for any images added to it. It then takes each image and sends it to the tinypng.com service, which optimises and reduces its size (where possible). TinyPNG then returns the updated image, which is automatically saved in your chosen Google Drive folder.

Setting things up

It's pretty simple to configure and should only take around 5-10 minutes. You only need to set up credentials for Google Drive and tinypng.com. For tinypng.com you can sign up for their free-tier API access, which gives you 500 optimisations per month.

Once you have those two things, you just need to choose your 'input' folder to watch for images and your 'output' folder where these images should be stored.

There are a few more optional things you can do, such as the naming of your final image, and there's lots more you could do with the TinyPNG API for more advanced image optimisation (see the sketch below).
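For reference, here is a minimal sketch of the kind of call the workflow makes to TinyPNG, written as plain Node.js rather than an n8n node. The file path and API key are placeholders.

```javascript
// Compress one image via the TinyPNG shrink endpoint.
// "api" is the fixed Basic-auth username; the password is your API key.
const fs = require("fs");

async function shrink(path) {
  const auth = Buffer.from("api:YOUR_TINYPNG_API_KEY").toString("base64");
  const res = await fetch("https://api.tinify.com/shrink", {
    method: "POST",
    headers: { Authorization: `Basic ${auth}` },
    body: fs.readFileSync(path),
  });
  const json = await res.json();
  // json.output.url points at the optimised image, ready to be
  // downloaded and saved to the 'output' Google Drive folder.
  return json.output.url;
}
```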
by Milorad Filipovic
This workflow will translate all your PDF documents from a specified Google Drive folder into the desired language. Translated files will be automatically uploaded to the original folder.

Required accounts
1️⃣ Google Drive account
2️⃣ DeepL developer account and API key

How to set up
1️⃣ Set your Google Drive folder URL, target and source language in the configuration node
2️⃣ Connect your Google Drive account with all Google Drive nodes
3️⃣ Set up the HTTP header credentials used by the HTTP nodes in the template (replace yourAuthKey with your DeepL API key; see the sketch below)
4️⃣ Set your DeepL header credentials in all HTTP nodes
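To make the HTTP nodes' job concrete, here is a hedged sketch of DeepL's document-upload call in plain Node.js. It assumes the free-tier endpoint (paid accounts use api.deepl.com) and a file name chosen purely for illustration.

```javascript
// Upload a PDF for translation; auth uses the same header the
// template's credential defines.
const fs = require("fs");

const form = new FormData();
form.append("file", new Blob([fs.readFileSync("contract.pdf")]), "contract.pdf");
form.append("target_lang", "DE"); // target language, as in the configuration node

const res = await fetch("https://api-free.deepl.com/v2/document", {
  method: "POST",
  headers: { Authorization: "DeepL-Auth-Key yourAuthKey" },
  body: form,
});

// document_id and document_key are then used to poll translation status
// and download the finished file before re-uploading it to Drive.
const { document_id, document_key } = await res.json();
```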
by Mutasem
Use Case
This workflow enriches new contacts in Intercom. The more relevant the Intercom profile, the more useful it is. Once active, this n8n workflow will update contact data (phone, email) as well as location data from ExactBuyer.

Setup
- Add a webhook URL in Intercom to call this workflow
- Add your ExactBuyer API key
- Add your Intercom API key
- Activate the workflow

How to adjust this template
There's plenty of interesting info that ExactBuyer returns that could be helpful. Take a look and update this workflow to add what you need. A sketch of the enrichment lookup follows.
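Below is a rough sketch of the ExactBuyer lookup at the heart of this workflow. The endpoint path and header name are assumptions, not confirmed by this template - check ExactBuyer's API docs for the exact shape.

```javascript
// Look up enrichment data for a contact's email address.
// Endpoint path and X-API-Key header name are assumptions.
const email = "jane@example.com"; // from the Intercom webhook payload

const res = await fetch(
  `https://api.exactbuyer.com/v1/enrich?email=${encodeURIComponent(email)}`,
  { headers: { "X-API-Key": "YOUR_EXACTBUYER_KEY" } }
);
const enriched = await res.json();
// Typical fields of interest: phone numbers and location data, mapped
// into the Intercom contact update in the next node.
```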
by n8n Team
This workflow syncs your GitHub issues to your Notion database. Whenever a new issue is opened in your GitHub repository, it will appear in your Notion database, with the status property kept in sync (opened/edited/closed/deleted). If no Notion database exists yet, a new one will be created automatically.

Prerequisites
- Notion account and Notion credentials
- GitHub account and GitHub credentials

How it works
- The GitHub Trigger starts the workflow when a new issue is created in a GitHub repository.
- An If node splits the workflow conditionally, depending on whether the issue is new or an update to an existing issue.
- If the data is new, the Notion node creates a new database page in Notion.
- If the data is not new, the Function node builds a Notion filter that finds the issue's specific database page by issue ID (see the sketch below).
- A Switch node then conditionally routes the data into the appropriate Notion page, based on the update made to it.
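Here is a minimal sketch, in classic Function-node style, of the filter that node could build. The "Issue ID" property name is an assumption - use whatever property your Notion database stores the issue number in.

```javascript
// Build a Notion database query filter that matches one issue by ID.
// "items" is provided by n8n's Function node context.
const issueId = items[0].json.issue.number; // from the GitHub trigger payload

return [{
  json: {
    filter: {
      property: "Issue ID", // assumed property name
      number: { equals: issueId },
    },
  },
}];
```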
by Adam Janes
How it works
- The workflow loads a list of test cases from a Google Sheet (previous results stored from an LLM).
- For each test case, we execute a call to an LLM judge in parallel, using HTTP Request + Webhook nodes (see the sketch below).
- The judge uses the Input, Output, and Reference Answer fields from the spreadsheet to mark each LLM response as Pass/Fail.
- The results are logged into a separate sheet in the same Sheets file.

Set up steps
- Add your credentials for Google Sheets and OpenRouter (or replace the OpenRouter node with your favourite chat model).
- Make a copy of the example Sheet to populate it with your own test data.
- Run the workflow with the Execute Workflow button next to the Manual Trigger node.
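A minimal sketch of the judge call each HTTP Request node makes through OpenRouter follows. The model name and prompt wording are assumptions - adapt them to your own rubric.

```javascript
// Ask an LLM judge to grade one test case as Pass/Fail.
const row = {
  Input: "What is 2 + 2?",
  Output: "4",
  "Reference Answer": "4",
}; // in the workflow, these fields come from the Google Sheet

const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_OPENROUTER_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini", // illustrative model choice
    messages: [{
      role: "user",
      content:
        "Given the input, the model's output, and a reference answer, " +
        'reply with exactly "Pass" or "Fail".\n\n' +
        `Input: ${row.Input}\nOutput: ${row.Output}\n` +
        `Reference Answer: ${row["Reference Answer"]}`,
    }],
  }),
});
const verdict = (await res.json()).choices[0].message.content.trim();
```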
by Dvir Sharon
🔍 Extract Competitor SERP Rankings from Google Search to Sheets with Bright Data

This template requires a self-hosted n8n instance to run.

A comprehensive n8n automation that extracts competitor data from Google search results for specific keywords and target countries, automatically saving structured data to Google Sheets for competitive analysis and market research.

📋 Overview
This workflow provides a professional competitor analysis solution that identifies ranking websites for specific search terms across different countries. Perfect for SEO research, competitive intelligence, market analysis, and content strategy planning. The system uses Bright Data's SERP API for accurate search result extraction and advanced HTML parsing for detailed competitor information.

Who is this for?
- SEO professionals conducting competitive analysis
- Digital marketers researching market landscapes
- Business analysts studying competitor positioning
- Content strategists analyzing competitor content approaches
- Market researchers tracking competitive intelligence across regions

What problem is this workflow solving?
- Extracting competitor data from Google search results
- Processing multiple keywords across different countries
- Organizing results in a structured, analyzable format
- Eliminating manual copy-paste work
- Ensuring consistent data collection methodology

What this workflow does
1. Manual Trigger: Starts the workflow execution
2. Get Keywords from Sheet: Fetches keywords and target countries from Google Sheets
3. URL Encode Keywords: Converts keywords to URL-safe format
4. Process Keywords in Batches: Handles multiple keywords sequentially
5. Fetch Google Search Results: Uses the Bright Data SERP API to scrape HTML
6. Extract Competitor Data from HTML: Parses the HTML to extract competitor details
7. Save Competitor Results to Sheet: Stores structured data in Google Sheets
8. Wait to Avoid Rate Limits: Implements 30-second delays between requests

Output Data Points

| Field | Description | Example |
| :--- | :--- | :--- |
| Keyword | Original search term | digital marketing services |
| Target Country | Geographic target | US |
| websiteName | Domain/company name | hubspot |
| websiteUrl | Complete website URL | https://www.hubspot.com/marketing |
| websiteTitle | Page title from search results | Digital Marketing Software & Tools |
| websiteDescription | Meta description/snippet | Grow your business with HubSpot's digital marketing tools... |

⚙️ Setup

Prerequisites
- n8n instance (self-hosted)
- Google account with Sheets access
- Bright Data account with SERP API access

Google Sheet Structure
This workflow utilizes two Google Sheets: one for input keywords and one for outputting competitor data.

Input Sheet: "Keywords"
This sheet should contain the keywords and target countries for your search queries.

| Column Header | Data Type | Description | Example |
| :--- | :--- | :--- | :--- |
| Keyword | Text | The search term you want to analyze. | digital marketing |
| Country | Text | The 2-letter ISO country code for the target region of the search (e.g., US, GB, DE). | US |

Output Sheet: "Competitor Results"
This sheet will be populated automatically by the workflow with the extracted competitor data.
| Column Header | Data Type | Description | Example |
| :--- | :--- | :--- | :--- |
| Keyword | Text | The original search term used for the query. | digital marketing services |
| Target Country | Text | The 2-letter ISO country code of the search results. | US |
| websiteName | Text | The name of the website or domain found in the search results. | hubspot |
| websiteUrl | URL | The full URL of the website or page found in the search results. | https://www.hubspot.com/marketing |
| websiteTitle | Text | The title of the page as displayed in the Google search results. | Digital Marketing Software & Tools |
| websiteDescription | Text | The meta description or snippet text displayed under the title in search results. | Grow your business with HubSpot's digital marketing tools... |

Step-by-Step Setup
1. Import the Workflow: Copy JSON → n8n → Workflows → + Add → Import from JSON
2. Configure Bright Data Credentials: Credential Type: HTTP Header Auth; Header Name: Authorization; Header Value: Bearer YOUR_API_TOKEN
3. Configure Google Sheets: Create two new Google Sheets as described above, one named "Keywords" (for input) and one named "Competitor Results" (for output). Set up Google Sheets OAuth2 credentials within n8n.
4. Update Workflow Settings: Replace the placeholders YOUR_GOOGLE_SHEET_ID (for both input and output sheets) and YOUR_BRIGHTDATA_CREDENTIAL_ID. Ensure the correct sheet/tab names are selected in the Google Sheets nodes.
5. Test & Activate: Add test data to your "Keywords" sheet → Execute workflow → Verify output in your "Competitor Results" sheet.

🛠 How to Customize
- **Add More Data Points:** Modify the JavaScript code in the "Extract Competitor Data from HTML" node to parse and extract additional information from the HTML.
- **Custom Filtering:** Implement logic to exclude specific domains, filter results by title length, or apply other criteria.
- **Expand Geographic Coverage:** Add more 2-letter ISO country codes to the Bright Data SERP API call to broaden your competitive analysis.
- **Batch Processing:** Adjust the settings in the "Process Keywords in Batches" node to optimize for your Bright Data plan and desired execution speed.
- **Rate Limiting:** Modify the "Wait" node (default: 30 seconds) to increase or decrease the delay between requests based on API limits or performance needs.

📊 Use Cases & Examples
- **SEO Competitive Analysis:** Identify top-ranking competitors for your target keywords and analyze their strategies.
- **Market Entry Research:** Understand the competitive landscape in new geographic regions before expanding.
- **Content Strategy Planning:** Analyze competitor page titles and meta descriptions for inspiration and to identify content gaps.
- **International Market Research:** Compare search engine results and competitor positioning across different countries.

📈 Performance & Limits
- **Single Keyword:** 30-60 seconds per keyword.
- **Batch of 10 Keywords:** Typically takes 5-10 minutes.
- **Large Lists (50+ Keywords):** Expect execution times of 30-60 minutes or more, depending on batching and rate limits.
- **Success Rate:** Generally 95%+ for data extraction.
- **Data Accuracy:** Typically 98%+ for extracted fields.
- **API Calls:** 1 Bright Data SERP API call per keyword, plus multiple Google Sheets writes per execution.
- **Rate Limit:** A 30-second delay between requests is recommended to prevent exceeding API limits.
🧰 Troubleshooting
- **Bright Data API error:** Double-check your API token, ensure you have sufficient credits, and confirm SERP API access is enabled on your Bright Data account.
- **No keywords found:** Verify the Google Sheet ID and ensure the column headers in your "Keywords" sheet precisely match the specifications (e.g., "Keyword", "Country").
- **Google Sheets permission denied:** Re-authenticate your Google Sheets credentials within n8n and check that the correct sharing settings are applied to your sheets.
- **No results extracted:** Review the JavaScript parsing logic in the "Extract Competitor Data from HTML" node. Also, verify the validity of your keywords and target countries.
- **Loop not processing all:** Check the batch settings in the "Process Keywords in Batches" node and ensure all connections within the loop are correctly configured.

🤝 Support & Community
- **n8n Forum:** <https://community.n8n.io>
- **n8n Docs:** <https://docs.n8n.io>
- **Bright Data Support:** Access support directly via your Bright Data dashboard.
- **GitHub Issues:** Report any bugs or suggest new features on the n8n GitHub repository.

🎯 Final Notes
This workflow provides a comprehensive foundation for competitor research and market analysis. Customize it to fit your specific industry needs and competitive intelligence requirements. A sketch of the Bright Data SERP request appears below.

Please note that this template uses Community Nodes. Ensure you understand the risks before using community nodes.
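For orientation, here is a hedged sketch of the SERP request the "Fetch Google Search Results" node makes via Bright Data's direct API. The zone name ("serp_api") is an assumption - use the zone configured in your Bright Data account.

```javascript
// Fetch raw Google SERP HTML for one keyword/country pair.
const keyword = encodeURIComponent("digital marketing services");
const country = "US"; // 2-letter ISO country code from the Keywords sheet

const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "serp_api", // assumed zone name
    url: `https://www.google.com/search?q=${keyword}&gl=${country}`,
    format: "raw", // raw HTML, parsed afterwards by the extraction node
  }),
});
const html = await res.text();
```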
by Ibrahim
Overview
This n8n workflow is designed to extract specific interests from messages in a Telegram chat and retrieve related information using the Facebook Graph API. It aims to provide a streamlined solution for parsing and analyzing user-provided interests within the Telegram platform.

Features
- **Interest Extraction:** Automatically identifies and extracts interests from messages that start with the hashtag "#interest".
- **Data Retrieval:** Utilizes the Facebook Graph API to retrieve information related to the extracted interests.
- **Structured Outputs:** Presents the retrieved data in an organized format for further analysis and review.

Requirements
- Operational instance of n8n (self-hosted or cloud version).
- Basic understanding of n8n workflows and nodes.

Setup and Configuration
1. Import Workflow: Load the provided JSON workflow into your n8n instance.
2. Configure Telegram Trigger Node: Ensure the Telegram trigger node is set up with the appropriate credentials and webhook ID.
3. Configure and Test Nodes: Adjust node parameters as necessary and test the workflow to ensure proper functionality.

How it Works
1. Telegram Trigger: Listens for incoming messages in a specified Telegram chat.
2. Check Message Contents: Verifies that the message begins with the specified hashtag and comes from the designated chat ID.
3. Extract Message: Extracts the content of the message for further processing.
4. Split Message: Splits the extracted message to identify the interest and remaining content.
5. Connect to Graph API: Uses the Facebook Graph API to search for information related to the extracted interest (see the sketch below).
6. Split Interests into a Table: Organizes the retrieved data into a structured table format.
7. Get Variables: Maps the retrieved data to create a new JSON object containing specific fields related to the interest.
8. Create a Spreadsheet: Generates a spreadsheet file in CSV format based on the retrieved and formatted data.
9. Send the Spreadsheet File: Sends the generated spreadsheet file back to the original Telegram chat.

Customization
- Modify the filtering conditions and fields to suit specific requirements.
- Adjust the frequency of the trigger node based on preference.

Best Practices
- Regularly test the workflow to ensure consistent performance.
- Stay informed about any changes to external APIs that might affect the workflow's functionality.

Contributing
Your feedback and contributions are highly valued. Feel free to adapt, modify, and share enhancements with the n8n community.
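As a rough sketch, the Graph API lookup for an extracted interest could look like the following. The adinterest search type is an assumption - adjust the endpoint to whichever Graph API search your use case requires.

```javascript
// Search the Facebook Graph API for data about one interest.
const interest = "photography"; // parsed from a "#interest ..." message
const token = "YOUR_FB_ACCESS_TOKEN";

const res = await fetch(
  `https://graph.facebook.com/v19.0/search?type=adinterest` +
  `&q=${encodeURIComponent(interest)}&access_token=${token}`
);
const { data } = await res.json();
// Each entry typically carries id and name fields, which the
// Get Variables node can map into rows for the CSV spreadsheet.
```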
by Joey D’Anna
This template is a set of building blocks to access Monday.com in ways not supported by the official Monday node.

Prerequisites
- Monday account and Monday credentials.

Included are setups to:
- Find a column value by the column's name (instead of a numerical index, which can change when the board structure is changed)
- Find a column value by the column's ID (again, instead of using a numerical index)
- Pull a board relation column and get all the related pulses
- Pull an item's subitems and split them out
- Upload a file to an item's files field

Setup
- Create a Monday.com credential
- Update the nodes in the template to use your credential
- Copy/paste the nodes you need from this template into any other workflow

To retrieve a column by name (see the sketch below):
- Route a Monday.com node that gets an item to the COLUMN BY NAME node
- Edit the COLUMN BY NAME node, and enter the name in the first line of code.

To retrieve a column by its ID:
- Follow Monday.com's instructions to locate the column's ID
- Route a Monday.com node that gets an item to the COLUMN BY ID node
- Edit the COLUMN BY ID node, and enter the ID in the first line of code.

To retrieve all linked pulses from a Board Relation column:
- Route a Monday.com node that gets an item to the GET BOARD RELATION node
- Edit the GET BOARD RELATION node to specify the column name.
- All linked pulses will be retrieved by the subsequent PULL LINKEDPULSE node

To pull all subitems from an item:
- Route a Monday.com node that gets an item to the PULL SUBITEMS node
- All subitems will be retrieved by the subsequent GET EACH SUBITEM node

To upload a file:
- Replace the Convert to File node with whatever node you are using to output your binary file data
- Enable the MONDAY UPLOAD node
- If the destination column is named anything other than the default of "file", edit the MONDAY UPLOAD node and change column_id:"file" in the first Value field to match the name of your file column
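Since the template asks you to edit the first line of code in the COLUMN BY NAME node, here is a hedged sketch of what that node's logic amounts to. The column title and output field name are illustrative, and the column_values shape assumes the Monday node's output format.

```javascript
// Find a column value by its title instead of a positional index.
// "items" is provided by n8n's Function/Code node context.
const columnName = "Status"; // <- first line of code: set your column's name

for (const item of items) {
  const col = (item.json.column_values || []).find(c => c.title === columnName);
  item.json.columnValue = col ? col.text : null; // null if no such column
}
return items;
```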
by Mutasem
Use Case
This workflow enriches new contacts in HubSpot. The more relevant the HubSpot profile, the more useful it is. Once active, this n8n workflow will update the social profiles, contact data (phone, email), and location data from ExactBuyer.

Setup
- Add the HubSpot trigger credential (be careful: the scopes must be exactly as in the n8n docs)
- Add your ExactBuyer API key
- Add a HubSpot credential for the update node (again, the scopes must match the n8n docs for this; it is a different credential from the trigger's)
- Activate the workflow

How to adjust this template
There's plenty of interesting info that ExactBuyer returns that could be helpful. Take a look and update this workflow to add what you need. A sketch of the contact update follows.
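As a minimal sketch, the kind of update the HubSpot node performs can be expressed as a raw CRM v3 call like the one below. The property names are assumptions - map them to whichever ExactBuyer fields you decide to sync.

```javascript
// Patch an existing HubSpot contact with enriched data.
const contactId = "12345"; // from the HubSpot trigger payload
const enriched = { phone: "+1 555 0100", city: "Berlin" }; // from ExactBuyer

await fetch(`https://api.hubapi.com/crm/v3/objects/contacts/${contactId}`, {
  method: "PATCH",
  headers: {
    Authorization: "Bearer YOUR_HUBSPOT_TOKEN",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    properties: {
      phone: enriched.phone,
      city: enriched.city, // add social-profile properties as needed
    },
  }),
});
```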
by Joachim Brindeau
What it does
The workflow is a simple yet efficient way to automate the process of indexing your website on Google using the Google Indexing API.

How it works
It works by extracting information from your sitemap, converting it into a JSON file, and looping through each URL to submit it for indexing. Here's a brief rundown of the workflow:
- The workflow can be triggered manually via the "Execute Workflow" button or scheduled to run at a specific time using the "Schedule Trigger" node.
- The sitemap of your website is fetched by the "sitemap_set" node with an HTTP Request to the sitemap URL.
- This XML sitemap is then converted into a JSON file using the "sitemap_convert" node.
- The "sitemap_parse" node splits the JSON file into individual URLs.
- The "url_set" node then prepares each URL to be sent to the Google Indexing API.
- A loop is created using the "loop" node to process each URL individually and make a POST request to the Google Indexing API indicating that the URL has been updated (see the sketch below).
- If the POST request is successful and the URL has been updated, the workflow waits for 2 seconds before moving to the next URL.
- In case the daily limit for the Google Indexing API is reached (200/day by default), an error message is triggered using the "Stop and Error" node.

Before you use the workflow

Activate the Indexing API
1. Create an account with Google Cloud Platform > Console and then create a new project
2. Search for the Indexing API in the Library
3. Activate the API

Create a Service Account and get credentials
1. Open the Service Accounts page. If prompted, select a project.
2. Click Create Service Account, then enter a name and description for the service account. You can use the default service account ID, or choose a different, unique one. When done, click Create.
3. On the "Grant users access to this service account" screen, scroll down to the Create key section and click Create key.
4. In the side panel that appears, select the JSON format.
5. Click Create. Your new public/private key pair is generated and downloaded to your machine.
6. Open the file and copy the private key.
7. Add the credentials in the url_index node.

Add the user as owner of the site
Beware: for each site, you need to add the service account user as an owner.

Set your sitemap
Open the sitemap_set node and add the URL of your sitemap.

Now you should be able to ensure that Google is always up to date with the latest content on your website, improving your website's visibility and SEO rankings. Have fun!
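The POST request the loop sends for each URL looks roughly like this. ACCESS_TOKEN stands in for the OAuth2 token n8n derives from your service-account credentials.

```javascript
// Notify Google that a URL was added or updated.
const ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"; // placeholder

await fetch("https://indexing.googleapis.com/v3/urlNotifications:publish", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${ACCESS_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com/some-page", // one URL from the sitemap
    type: "URL_UPDATED", // tells Google the page is new or changed
  }),
});
```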
by Jay Hartley
What this workflow does
This workflow allows you to monitor multiple GitHub repos simultaneously without polling, thanks to webhooks. It programmatically supports adding and deleting repos from your watchlist to make management convenient.

Description
- Can monitor multiple repos simultaneously.
- Programmatically register or unregister repos from a list. No need for manual work.
- Webhook notification means no constant polling necessary.

Setup

1. Creating credentials on GitHub
Generate a personal access token on GitHub by following these steps:
Right-hand side of page -> Settings -> scroll to bottom -> Developer Settings -> Personal Access Token -> Tokens (classic) -> Generate New Token
Give scopes:
- admin:repo_hook
- repo (if you want to use it for your own private repo)
If you need more help, see here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens

2. Setting credentials in n8n
In Register Github Webhook:
- Authentication > Generic Credential Type
- Generic Auth Type > Header Auth
- Header Auth > Create New Credential with Name set to 'Authorization' and Value set to 'Bearer ' followed by your personal access token. (You can reuse this for Delete Github Webhook and Get Existing Webhooks.)
Now in Register Github Webhook, scroll down to Send Body > JSON and inside the JSON, change the value of "url" to the webhook address given as Production URL in the Webhook Trigger node (see the sketch of the request below).

3. Notification settings
In the third row, link up the Webhook Trigger to any API of your choice. Slack and Telegram are given as examples. You can also format the notification message as you wish.

Setup time: roughly 10 minutes.
Instructions video: https://vimeo.com/1013473758

Test

1. Register webhooks
In Repos to Monitor, add any repo you want to monitor changes for. Disable Webhook Trigger, click Test Workflow, and if your GitHub credentials were set correctly, it will automatically register the webhooks. You can test this by running the single node Get Existing Webhooks and confirming it outputs the repo addresses.

2. Handle GitHub events
Now that you have registered the webhooks, re-enable Webhook Trigger and activate the workflow. Make a commit to any of the registered repos. Confirm that the notification went through. That's it!
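For reference, here is a sketch of the registration call the Register Github Webhook node sends, using GitHub's standard REST API. The repo path and n8n URL are placeholders; the Production URL of your Webhook Trigger node goes in config.url.

```javascript
// Register a webhook on one repository.
await fetch("https://api.github.com/repos/OWNER/REPO/hooks", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_PERSONAL_ACCESS_TOKEN",
    Accept: "application/vnd.github+json",
  },
  body: JSON.stringify({
    name: "web",
    active: true,
    events: ["push"], // add more events for PRs, issues, etc.
    config: {
      url: "https://your-n8n-instance/webhook/github", // Production URL
      content_type: "json",
    },
  }),
});
```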
by Ferenc Erb
Use Case
Automate chat interactions in Bitrix24 with a customizable bot that can handle various events and respond to user messages.

What This Workflow Does
- Processes incoming webhook requests from Bitrix24
- Handles authentication and token validation
- Routes different event types (messages, joins, installations)
- Provides automated responses and bot registration (see the sketch below)
- Manages secure communication between Bitrix24 and external services

Setup Instructions
1. Configure Bitrix24 webhook endpoints
2. Set up authentication credentials
3. Customize bot responses and behavior
4. Deploy and test the workflow
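As a hedged sketch, bot registration against Bitrix24's REST API (imbot.register) could look like the following. The portal URL, webhook code, and handler URLs are placeholders for your own instance.

```javascript
// Register a chat bot whose events are delivered to the n8n webhook.
await fetch(
  "https://your-portal.bitrix24.com/rest/1/YOUR_WEBHOOK_CODE/imbot.register",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      CODE: "n8n_bot",
      TYPE: "B", // "B" = chat bot
      EVENT_MESSAGE_ADD: "https://your-n8n/webhook/bitrix", // new messages
      EVENT_WELCOME_MESSAGE: "https://your-n8n/webhook/bitrix", // user joins
      EVENT_BOT_DELETE: "https://your-n8n/webhook/bitrix", // uninstall
      PROPERTIES: { NAME: "n8n Bot", COLOR: "GREEN" },
    }),
  }
);
```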