by Harshil Agrawal
This workflow sends position updates of the ISS to a queue every minute using the AWS SQS node.

- Cron node: triggers the workflow every minute.
- HTTP Request node: makes a GET request to the API https://api.wheretheiss.at/v1/satellites/25544/positions to fetch the position of the ISS. This information gets passed on to the next node in the workflow.
- Set node: ensures that only the data set in this node gets passed on to the following nodes in the workflow.
- AWS SQS node: sends the data from the previous node to the iss-position queue. If you have created a queue with a different name, use that queue instead.
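For reference, a minimal sketch of the same fetch-and-enqueue logic in TypeScript, using the official @aws-sdk/client-sqs package. The region, the queue URL, the field selection, and the assumption that the positions endpoint returns an array are illustrative, not taken from the workflow itself:

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" }); // region is an assumption

async function pushIssPosition(queueUrl: string): Promise<void> {
  const res = await fetch(
    "https://api.wheretheiss.at/v1/satellites/25544/positions",
  );
  const [position] = await res.json(); // assumed: endpoint returns an array of positions
  // Mirror the Set node: keep only the fields we actually want to enqueue.
  const body = {
    name: position.name,
    latitude: position.latitude,
    longitude: position.longitude,
    timestamp: position.timestamp,
  };
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: queueUrl, // URL of the iss-position queue
      MessageBody: JSON.stringify(body),
    }),
  );
}
```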
by Harshil Agrawal
This workflow sends position updates of the ISS to a table in Google BigQuery every minute.

- Cron node: triggers the workflow every minute.
- HTTP Request node: makes a GET request to the API https://api.wheretheiss.at/v1/satellites/25544/positions to fetch the position of the ISS. This information gets passed on to the next node in the workflow.
- Set node: ensures that only the data set in this node gets passed on to the following nodes in the workflow.
- Google BigQuery node: sends the data from the previous node to the position table in Google BigQuery. If you have created a table with a different name, use that table instead.
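A minimal sketch of the equivalent insert step with the official @google-cloud/bigquery client. The position table name comes from the description above; the dataset name ("iss") and the row schema are assumptions:

```typescript
import { BigQuery } from "@google-cloud/bigquery";

const bigquery = new BigQuery(); // picks up GOOGLE_APPLICATION_CREDENTIALS

async function insertPosition(row: {
  latitude: number;
  longitude: number;
  timestamp: number;
}): Promise<void> {
  // Streaming insert into the position table; "iss" dataset name is an assumption.
  await bigquery.dataset("iss").table("position").insert([row]);
}
```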
by Solomon
Based on Jonathan's work. Check out his templates.

This workflow backs up your credentials to GitHub. It uses a CLI command to export all credentials, then loops over the data and checks GitHub to see whether a file named after the credential's ID already exists. Once checked, it will:

- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- skip the file if the content is unchanged.

Config options:
- repo.owner - GitHub owner
- repo.name - GitHub repository name
- repo.path - path within the GitHub repository

Warning: the credentials are all exported decrypted. Make sure you store them safely, or tweak the CLI command to export them encrypted.

Check out my other templates: https://n8n.io/creators/solomon/
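A sketch of the "check, then create or update" step against the GitHub contents API. The owner/repo values, the file-naming scheme, and the GITHUB_TOKEN environment variable are assumptions standing in for the workflow's config options:

```typescript
const OWNER = "your-user";    // repo.owner
const REPO = "n8n-backups";   // repo.name
const BASE = `https://api.github.com/repos/${OWNER}/${REPO}/contents`;

async function upsertCredentialFile(id: string, json: string): Promise<void> {
  const headers = {
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    Accept: "application/vnd.github+json",
  };
  const path = `credentials/${id}.json`; // hypothetical naming scheme
  // Does a file for this credential ID already exist?
  const probe = await fetch(`${BASE}/${path}`, { headers });
  const existing = probe.ok ? await probe.json() : null;
  const content = Buffer.from(json).toString("base64");
  // Skip if the stored content is identical.
  if (existing && existing.content?.replace(/\n/g, "") === content) return;
  // PUT creates the file, or updates it when the current sha is supplied.
  await fetch(`${BASE}/${path}`, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      message: `Backup credential ${id}`,
      content,
      ...(existing ? { sha: existing.sha } : {}),
    }),
  });
}
```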
by Harshil Agrawal
Note: This workflow uses the internal n8n API, which is not official; the workflow might break in the future. The workflow executes every night at 23:59. You can configure a different time in the Cron node. Configure the GitHub nodes with your username, repository name, and file path. In the HTTP Request nodes (which make requests to localhost:5678), create Basic Auth credentials with your n8n instance username and password.
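Since the description mentions Basic Auth against localhost:5678, here is a hedged sketch of what such a request might look like. The /rest/workflows endpoint is an assumption based on the unofficial internal API and may differ or change between versions:

```typescript
// Basic Auth header built from the n8n instance credentials (placeholders).
const auth = Buffer.from("n8n-user:n8n-password").toString("base64");

const res = await fetch("http://localhost:5678/rest/workflows", {
  headers: { Authorization: `Basic ${auth}` },
});
if (!res.ok) throw new Error(`Internal API call failed: ${res.status}`);
const workflows = await res.json(); // unofficial response shape; may change
```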
by Jonathan
This workflow takes Dialpad call information for an answered call and pushes it into Syncro as either a new ticket or an update to an existing ticket. At this time, you will need a separate workflow for each technician. It also saves call/ticket information to a Google Sheet to be queried by the dialpad_to_syncro_timer.json workflow. The workflow matches both inbound and outbound calls; if that's not desired, add an IF node that only proceeds on one direction (see the sketch below). > This workflow is part of an MSP collection. The original can be found here: https://github.com/bionemesis/n8nsyncro
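A minimal sketch of the direction filter such an IF node would express. The direction field name on the Dialpad payload is an assumption; check your actual webhook data:

```typescript
interface DialpadCall {
  direction: "inbound" | "outbound"; // assumed field name on the call payload
  // ...other call fields
}

function shouldProceed(
  call: DialpadCall,
  wanted: "inbound" | "outbound",
): boolean {
  // Only continue the workflow for calls in the desired direction.
  return call.direction === wanted;
}
```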
by Tom
This workflow shows a low-code approach to creating an HTML table based on Google Sheets data. It's similar to this workflow, but allows fully customizing the HTML output (a sketch of that step follows the list). To run the workflow:

1. Make sure you have a Google Sheet with a header row and some data in it.
2. Grab your sheet ID and add it to the Google Sheets node.
3. Activate the workflow or execute it manually.
4. Visit the URL provided by the Webhook node in your browser (the production URL if the workflow is active, the test URL if the workflow is executed manually).
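A sketch of the table-building step, for example inside an n8n Code node. The row shape (plain objects keyed by the header row) matches what the Google Sheets node returns; the markup and styling choices are assumptions you would customize:

```typescript
type Row = Record<string, string | number>;

function buildHtmlTable(rows: Row[]): string {
  if (rows.length === 0) return "<p>No data</p>";
  // Derive the column headers from the keys of the first row.
  const headers = Object.keys(rows[0]);
  const head = headers.map((h) => `<th>${h}</th>`).join("");
  const body = rows
    .map(
      (row) =>
        `<tr>${headers.map((h) => `<td>${row[h] ?? ""}</td>`).join("")}</tr>`,
    )
    .join("\n");
  return `<table><thead><tr>${head}</tr></thead><tbody>${body}</tbody></table>`;
}
```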
by Eduard
This template is a PoC of a ReAct AI Agent capable of fetching random pages (not only Wikipedia or Google search results). At the top there's a manual chat node connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content.

The page content extraction starts with converting query parameters into a JSON object. There are 3 pre-defined parameters:
- url - the address of the page to fetch
- method - full / simplified
- maxlimit - maximum length for the final page. For longer pages an error message is returned to the agent.

Page content fetching is a multi-step process:
1. An HTTP Request node tries to get the page content.
2. If the page content was successfully retrieved, a series of post-processing steps begins:
   - Extract the HTML body content.
   - Remove all unnecessary tags to reduce the page size.
   - Further eliminate external URLs and IMG src values (based on the method query parameter).
   - Convert the remaining HTML to Markdown, reducing the page length even more while preserving the basic page structure.
3. The remaining content is sent back to the agent if it's not too long (maxlimit = 70000 by default, see the CONFIG node).

NB: You can isolate the HTTP Request part into a separate workflow. Check the Workflow Tool description; it guides the agent to provide a query string with several parameters instead of a JSON object.

Please reach out to Eduard if you need further assistance with your n8n workflows and automations!

Note that to use this template, you need to be on n8n version 1.19.4 or later.
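A sketch of the parameter-parsing and length-check steps described above. The defaults mirror the description (maxlimit = 70000); the exact field handling and the error message wording are assumptions:

```typescript
interface FetchParams {
  url: string;
  method: "full" | "simplified";
  maxlimit: number;
}

// The agent passes a query string like "url=https://...&method=simplified".
function parseToolQuery(query: string): FetchParams {
  const params = new URLSearchParams(query);
  return {
    url: params.get("url") ?? "",
    method: params.get("method") === "simplified" ? "simplified" : "full",
    maxlimit: Number(params.get("maxlimit") ?? 70000),
  };
}

// Pages longer than maxlimit produce an error message for the agent instead.
function checkLength(markdown: string, maxlimit: number): string {
  return markdown.length <= maxlimit
    ? markdown
    : `ERROR: page is ${markdown.length} chars, limit is ${maxlimit}. Try method=simplified.`;
}
```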
by siyad
Workflow Description:
This workflow automates the synchronization of product data from a Shopify store to a Google Sheets document, ensuring seamless management and tracking. It retrieves product details such as title, tags, description, and price from Shopify via GraphQL queries. The outcome is a comprehensive list of products neatly organized in Google Sheets for easy access and analysis.

Key Features:
- Automated: runs on a schedule you define (e.g., daily, hourly) to keep your product data fresh.
- Complete Product Details: retrieves titles, descriptions, variants, images, inventory, and more.
- Cursor-Based Pagination: efficiently handles large product sets by navigating pages without starting from scratch (see the sketch after this description).
- Google Sheets Integration: writes product data directly to your designated sheets.

Setup Instructions:
1. Set up the GraphQL node with Header Authentication for Shopify.
2. Create Google Sheets credentials. Follow this guide to set up your Google Sheets credentials for n8n: https://docs.n8n.io/integrations/builtin/credentials/google/
3. Choose your Google Sheet: select the sheet where you want product information written. The setup needs a document with two sheets: one for storing Shopify data and one for storing cursor details. Google Sheet template: https://docs.google.com/spreadsheets/d/1I6JnP8ugqmMD5ktJlNB84J1MlSkoCHhAEuCofSa3OSM
4. Schedule and run: decide how often you want the data refreshed (daily, hourly, etc.) and let n8n do its magic!
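A sketch of cursor-based pagination against the Shopify Admin GraphQL API, the technique the "Cursor-Based Pagination" feature refers to. The shop domain, API version, page size, and selected fields are assumptions:

```typescript
const QUERY = `
  query ($cursor: String) {
    products(first: 50, after: $cursor) {
      pageInfo { hasNextPage endCursor }
      nodes { id title tags descriptionHtml }
    }
  }`;

async function fetchAllProducts(shop: string, token: string) {
  const products: unknown[] = [];
  // In the workflow, this cursor would be persisted in the cursor sheet
  // between runs so a new run can resume instead of starting from scratch.
  let cursor: string | null = null;
  do {
    const res = await fetch(`https://${shop}/admin/api/2024-01/graphql.json`, {
      method: "POST",
      headers: {
        "X-Shopify-Access-Token": token,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ query: QUERY, variables: { cursor } }),
    });
    const { data } = await res.json();
    products.push(...data.products.nodes);
    cursor = data.products.pageInfo.hasNextPage
      ? data.products.pageInfo.endCursor
      : null;
  } while (cursor);
  return products;
}
```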
by Eduard
Supercharge Your Website Indexing with This Powerful n8n Workflow!

Google page indexing too slow? Tired of manually clicking through each page in the Google Search Console? Say goodbye to that tedious process and hello to automation with this n8n workflow!

NB: this workflow was tested with sitemap.xml files generated by Ghost CMS and WordPress. Reach out to Eduard if you need help adapting this workflow to your specific use case!

How this automation works
- The workflow runs on a schedule or when you click "Test workflow".
- It fetches the website's primary sitemap.xml and extracts all the content-specific sitemaps (this is a typical structure of a sitemap).
- Each content-specific sitemap is then parsed to retrieve the individual page data.
- The extracted page data is converted to JSON format for easy manipulation.
- The lastmod (last modified date) and loc (page URL) fields are assigned to each page entry to ensure compliance with the Sitemap protocol.
- The page entries are sorted by the lastmod field in descending order (newest to oldest).
- The workflow then loops over each page entry and performs the following steps (sketched in code after this description):
  - Check the URL metadata in the Google Indexing API.
  - If the page is new or has been updated since the last indexing request, send a request to the Google Indexing API to update the URL.
  - Wait a second and move on to the next page.

Benefits
- Save time by automating the indexing process.
- Ensure all your website pages are consistently indexed by Google.
- Improve your website's visibility and search engine rankings.
- Customize the workflow to fit your specific CMS and requirements.

Getting started
To start using this workflow, follow these steps:
1. Make sure to verify the website ownership in the Google Search Console.
2. Import the workflow JSON into your n8n instance.
3. Edit the Get sitemap.xml node and update the URL with your website's valid sitemap.xml.
4. Set up the necessary credentials for the Google Indexing API.
5. Adjust the schedule trigger to run the workflow at your desired frequency.
6. Sit back and let the workflow handle the indexing process for you!

Ready to take your website indexing to the next level? Try this workflow now and see the difference it makes!

IMPORTANT NOTE 1
Need help connecting Google Cloud Platform to n8n? Check out our article on connecting Google Sheets to n8n; the process is mostly the same. When activating Google APIs, make sure to add the Web Search Indexing API. Also, on the credential page in n8n, add the https://www.googleapis.com/auth/indexing scope. Check out Yulia's page for more n8n workflows!

IMPORTANT NOTE 2
A free Google Cloud Platform account allows (re)indexing only 200 pages per day. If your website has more, the workflow will fail on the quota limit. The next day it will skip the previously added items and continue with the remaining pages.

Example: assuming you have a free Google account and 500 pages on your website that don't change for 3 days:
- On the first day, 200 pages are added for indexing and the workflow fails due to quota limits.
- On the second day, the workflow checks those 200 pages again and skips them (because the re-indexing date is later than the page's last modified date). The next 200 pages are added for indexing, and the workflow fails again due to quota limits.
- On the third day, 400 pages are checked and skipped, the last 100 pages are added for indexing, and the workflow finishes successfully.
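A minimal sketch of the two Indexing API calls the loop makes per page. The metadata and publish endpoints are Google's documented Indexing API endpoints; the OAuth2 token handling (here abbreviated as a `token` parameter with the https://www.googleapis.com/auth/indexing scope) and the staleness comparison are assumptions about how the workflow's nodes are wired:

```typescript
const API = "https://indexing.googleapis.com/v3/urlNotifications";

async function reindexIfStale(
  loc: string,     // page URL from the sitemap
  lastmod: string, // last modified date from the sitemap
  token: string,   // OAuth2 bearer token with the indexing scope
): Promise<void> {
  const headers = {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
  // 1. Check when Google last received a notification for this URL.
  const meta = await fetch(`${API}/metadata?url=${encodeURIComponent(loc)}`, {
    headers,
  });
  if (meta.ok) {
    const { latestUpdate } = await meta.json();
    // Skip pages whose last indexing request is newer than their lastmod.
    if (latestUpdate && new Date(latestUpdate.notifyTime) > new Date(lastmod)) {
      return;
    }
  }
  // 2. New or updated page: ask Google to (re)index it.
  await fetch(`${API}:publish`, {
    method: "POST",
    headers,
    body: JSON.stringify({ url: loc, type: "URL_UPDATED" }),
  });
}
```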
by Yaron Been
This workflow automatically monitors social media advertising performance across platforms to track campaign effectiveness and ROI. It saves you time by eliminating the need to manually check multiple ad platforms and provides consolidated performance data for all your social media campaigns.

Overview
This workflow automatically scrapes social media advertising platforms to extract campaign performance metrics including impressions, clicks, conversions, and cost data. It uses Bright Data to access ad platforms without being blocked and AI to intelligently parse advertising data into structured performance reports.

Tools Used
- n8n: the automation platform that orchestrates the workflow
- Bright Data: for scraping social ad platforms without being blocked
- OpenAI: AI agent for intelligent ad performance data extraction and analysis
- Google Sheets: for storing and organizing advertising performance data

How to Install
1. Import the Workflow: download the .json file and import it into your n8n instance.
2. Configure Bright Data: add your Bright Data credentials to the MCP Client node.
3. Set Up OpenAI: configure your OpenAI API credentials.
4. Configure Google Sheets: connect your Google Sheets account and set up your ad performance tracking spreadsheet.
5. Customize: set target ad platform URLs and campaign monitoring parameters.

Use Cases
- Digital Marketing: track ROI and performance across all social media ad campaigns.
- Performance Analysis: identify top-performing ads and optimize underperforming campaigns.
- Budget Management: monitor ad spend and cost-per-acquisition metrics.
- Campaign Optimization: make data-driven decisions for ad creative and targeting.

Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #socialads #adperformance #brightdata #webscraping #digitalmarketing #n8nworkflow #workflow #nocode #adautomation #campaigntracking #socialmediamarketing #adanalytics #performancetracking #marketingautomation #admonitoring #campaignanalysis #socialadvertising #marketingdata #admetrics #digitaladvertising #adoptimization #campaignmonitoring #marketinganalysis #adinsights #socialmediaads #paidads #adcampaigns #marketingroi
by ConvertAPI
Who is this for?
For developers and organizations that need to convert DOCX files to PDF.

What problem is this workflow solving?
The file format conversion problem.

What this workflow does
- Downloads the DOCX file from the web.
- Converts the DOCX file to PDF.
- Stores the PDF file in the local file system.

How to customize this workflow to your needs
- Open the HTTP Request node.
- Adjust the URL parameter (all endpoints can be found here).
- Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
- Optionally, additional Body Parameters can be added for the converter.
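As a rough illustration of the request the HTTP Request node sends, here is a sketch against ConvertAPI's docx-to-pdf endpoint with secret-based query auth. The source URL and output filename are placeholders, and the response handling assumes the default base64 FileData output. The image-to-PDF template below follows the same pattern with a different endpoint (e.g. /convert/jpg/to/pdf):

```typescript
import { writeFile } from "node:fs/promises";

async function convertDocxToPdf(
  secret: string,  // your ConvertAPI authentication secret
  fileUrl: string, // public URL of the DOCX file to download and convert
): Promise<void> {
  const res = await fetch(
    `https://v2.convertapi.com/convert/docx/to/pdf?Secret=${secret}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        Parameters: [{ Name: "File", FileValue: { Url: fileUrl } }],
      }),
    },
  );
  const { Files } = await res.json();
  // The converted file arrives as base64 FileData; write it to disk.
  await writeFile("output.pdf", Buffer.from(Files[0].FileData, "base64"));
}
```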
by ConvertAPI
Who is this for?
For developers and organizations that need to convert image files to PDF.

What problem is this workflow solving?
The file format conversion problem.

What this workflow does
- Downloads the JPG file from the web.
- Converts the JPG file to PDF.
- Stores the PDF file in the local file system.

How to customize this workflow to your needs
- Open the HTTP Request node.
- Adjust the URL parameter (all endpoints can be found here).
- Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
- Optionally, additional Body Parameters can be added for the converter.