by Harshil Agrawal
This workflow sends position updates of the ISS to a queue every minute using the AWS SQS node.

- **Cron node**: Triggers the workflow every minute.
- **HTTP Request node**: Makes a GET request to the API https://api.wheretheiss.at/v1/satellites/25544/positions to fetch the position of the ISS. This information gets passed on to the next node in the workflow.
- **Set node**: Ensures that only the data set in this node gets passed on to the following nodes in the workflow.
- **AWS SQS node**: Sends the data from the previous node to the iss-position queue. If you have created a queue with a different name, use that queue instead.
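For reference, here is a minimal sketch of what the Set step accomplishes, written as an n8n Code node. The field names are assumptions based on the wheretheiss.at response and should be checked against a live call:

```javascript
// Sketch only: shape the ISS position before handing it to AWS SQS.
// Field names (name, latitude, longitude, timestamp) are assumptions
// about the wheretheiss.at payload — verify against a real response.
const position = $input.first().json; // the /positions endpoint returns position objects

return [{
  json: {
    name: position.name,           // e.g. "iss"
    latitude: position.latitude,
    longitude: position.longitude,
    timestamp: position.timestamp, // Unix epoch seconds
  },
}];
```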
by Harshil Agrawal
This workflow sends position updates of the ISS to a table in Google BigQuery every minute.

- **Cron node**: Triggers the workflow every minute.
- **HTTP Request node**: Makes a GET request to the API https://api.wheretheiss.at/v1/satellites/25544/positions to fetch the position of the ISS. This information gets passed on to the next node in the workflow.
- **Set node**: Ensures that only the data set in this node gets passed on to the following nodes in the workflow.
- **Google BigQuery node**: Sends the data from the previous node to the position table in Google BigQuery. If you have created a table with a different name, use that table instead.
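A hypothetical sketch of the row the Set step would hand to BigQuery, assuming a position table with latitude, longitude, and timestamp columns (adjust to your actual schema):

```javascript
// n8n Code node sketch: shape one row for the BigQuery "position" table.
// Column names here are illustrative — match them to your table's schema.
const iss = $input.first().json;

return [{
  json: {
    latitude: Number(iss.latitude),
    longitude: Number(iss.longitude),
    timestamp: new Date(iss.timestamp * 1000).toISOString(), // epoch seconds -> ISO
  },
}];
```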
by Solomon
Based on Jonathan's work. Check out his templates.

This workflow backs up your credentials to GitHub. It uses a CLI command to export all credentials, then loops over the data and checks GitHub for a file named after each credential's ID. Once checked it will:

- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- ignore it if the content is the same.

Config options:

- repo.owner - GitHub owner
- repo.name - GitHub repository name
- repo.path - Path within the GitHub repository

⚠️ The credentials are all decrypted. Make sure you save them safely or tweak the CLI command to store them encrypted.

Check out my other templates: https://n8n.io/creators/solomon/
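The per-credential decision logic looks roughly like the sketch below, assuming the export came from the n8n CLI and using @octokit/rest as a stand-in for the GitHub node (GitHub's contents API returns the existing file with its sha, or 404 if it is missing):

```javascript
// Sketch of the update/create/ignore decision for one credential file.
// Assumes an export such as `n8n export:credentials --all --decrypted`.
const { Octokit } = require('@octokit/rest'); // hypothetical client choice

async function syncCredentialFile(octokit, repo, cred) {
  const path = `${repo.path}/${cred.id}.json`;
  const content = JSON.stringify(cred, null, 2);
  try {
    const { data } = await octokit.rest.repos.getContent({
      owner: repo.owner, repo: repo.name, path,
    });
    const existing = Buffer.from(data.content, 'base64').toString('utf8');
    if (existing === content) return 'ignored'; // same content, nothing to do
    await octokit.rest.repos.createOrUpdateFileContents({
      owner: repo.owner, repo: repo.name, path,
      message: `Update credential ${cred.id}`,
      content: Buffer.from(content).toString('base64'),
      sha: data.sha, // required when updating an existing file
    });
    return 'updated';
  } catch (err) {
    if (err.status !== 404) throw err; // anything but "not found" is a real error
    await octokit.rest.repos.createOrUpdateFileContents({
      owner: repo.owner, repo: repo.name, path,
      message: `Create credential ${cred.id}`,
      content: Buffer.from(content).toString('base64'),
    });
    return 'created';
  }
}
```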
by Harshil Agrawal
Note: This workflow uses the internal API, which is not official, so it might break in the future. The workflow executes every night at 23:59; you can configure a different time in the Cron node. Configure the GitHub nodes with your username, repo name, and the file path. In the HTTP Request nodes (making a request to localhost:5678), create Basic Auth credentials with your n8n instance username and password.
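For illustration, the HTTP Request nodes do something along these lines. The endpoint path and response shape are assumptions about the unofficial internal API and may change between n8n versions:

```javascript
// Illustrative only: the internal REST API is unofficial and may change.
// Lists workflows from a local n8n instance using Basic Auth.
const auth = Buffer.from(`${process.env.N8N_USER}:${process.env.N8N_PASS}`)
  .toString('base64');

const res = await fetch('http://localhost:5678/rest/workflows', {
  headers: { Authorization: `Basic ${auth}` },
});
const { data: workflows } = await res.json(); // assumed `data` wrapper on results
```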
by Lorena
This workflow synchronizes data both ways between Pipedrive and HubSpot.

- **Cron node**: Schedules the workflow to run every minute.
- **Pipedrive** and **HubSpot** nodes: Pull in both lists of persons from Pipedrive and contacts from HubSpot.
- **Merge1** and **Merge2** nodes: With the option Remove Key Matches, identify the items that exist only in HubSpot and only in Pipedrive, respectively.
- **Update Pipedrive** and **Update HubSpot** nodes: Take those unique items and add them to Pipedrive and HubSpot, respectively.
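The Remove Key Matches option behaves like a one-sided diff. A minimal sketch of the idea, assuming email is the match key:

```javascript
// "Remove Key Matches": keep items from list A whose key does NOT
// appear in list B. The `email` key is an illustrative choice.
function removeKeyMatches(a, b, key = 'email') {
  const seen = new Set(b.map((item) => item[key]));
  return a.filter((item) => !seen.has(item[key]));
}

const hubspotContacts = [{ email: 'a@x.com' }, { email: 'b@x.com' }];
const pipedrivePersons = [{ email: 'b@x.com' }, { email: 'c@x.com' }];

console.log(removeKeyMatches(hubspotContacts, pipedrivePersons)); // only in HubSpot -> create in Pipedrive
console.log(removeKeyMatches(pipedrivePersons, hubspotContacts)); // only in Pipedrive -> create in HubSpot
```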
by Jonathan
This workflow uses a HubSpot Trigger to check for new companies. It then checks that the company's website exists using the HTTP Request node. If it doesn't, a message is sent to Slack. To configure this workflow you will need to set the credentials for the HubSpot and Slack nodes. You will also need to select the Slack channel to use for sending the message.
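A rough sketch of the check, assuming the company record exposes a website property; a network error or non-2xx status counts as "down":

```javascript
// Sketch of the website existence check performed by the HTTP node.
async function websiteExists(url) {
  try {
    const res = await fetch(url, { redirect: 'follow' });
    return res.ok; // true for 200-299
  } catch {
    return false; // DNS failure, timeout, refused connection, ...
  }
}

const company = { name: 'Acme', website: 'https://example.com' }; // placeholder record
if (!(await websiteExists(company.website))) {
  // In the workflow this is the Slack node; shown here as a placeholder.
  console.log(`Slack alert: website for ${company.name} is unreachable`);
}
```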
by Jonathan
This workflow takes Dialpad call information for an answered call and pushes it into Syncro as either a new ticket or an update to an existing ticket. At this time you will need a separate workflow for each technician. It also saves call/ticket information to a Google Sheet to be queried by the dialpad_to_syncro_timer.json workflow. This matches both inbound and outbound calls, so if that's not desired you need to add an IF to only proceed on either inbound or outbound calls (see the sketch below). > This workflow is part of an MSP collection. The original can be found here: https://github.com/bionemesis/n8nsyncro
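A hypothetical direction filter as a Code node; the `direction` field name is an assumption about the Dialpad payload, so check an actual webhook body first:

```javascript
// Hypothetical filter: keep only inbound calls. Flip the comparison
// for outbound-only. The `direction` field name is assumed.
const call = $input.first().json;

return call.direction === 'inbound' ? [{ json: call }] : []; // empty array drops the item
```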
by Tom
This workflow shows a low-code approach to creating an HTML table based on Google Sheets data. It's similar to this workflow, but allows fully customizing the HTML output. To run the workflow:

1. Make sure you have a Google Sheet with a header row and some data in it.
2. Grab your sheet ID and add it to the Google Sheets node.
3. Activate the workflow or execute it manually.
4. Visit the URL provided by the Webhook node in your browser (production URL if the workflow is active, test URL if the workflow is executed manually).
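The customizable part boils down to something like this Code-node sketch, which turns sheet rows into an HTML table (column keys come from the header row):

```javascript
// Sketch: build an HTML table from incoming sheet rows in a Code node.
const rows = $input.all().map((item) => item.json);
const headers = Object.keys(rows[0] ?? {}); // header row becomes the keys

const html = `<table>
  <tr>${headers.map((h) => `<th>${h}</th>`).join('')}</tr>
  ${rows
    .map((r) => `<tr>${headers.map((h) => `<td>${r[h] ?? ''}</td>`).join('')}</tr>`)
    .join('\n  ')}
</table>`;

return [{ json: { html } }]; // the Webhook response can serve this HTML
```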
by n8n Team
This workflow is for anyone looking to automatically fetch, validate, and parse complex language-based queries into a structured format. Its unique capability lies in not only processing language but also fixing invalid outputs before structuring them. Note that to use this template, you need to be on n8n version 1.19.4 or later.
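Conceptually, "fixing invalid outputs before structuring them" works like this sketch: try to parse the model's answer, and if that fails, send the broken text plus the error back to the model for a corrected attempt. `callModel` is a stand-in for whatever LLM call the workflow makes:

```javascript
// Conceptual sketch of an auto-fixing output parser.
async function parseWithAutofix(raw, callModel) {
  try {
    return JSON.parse(raw);
  } catch (err) {
    const fixed = await callModel(
      `Fix this so it is valid JSON (error: ${err.message}):\n${raw}`,
    );
    return JSON.parse(fixed); // throws again if the fix also failed
  }
}
```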
by n8n Team
This n8n workflow automates the handling of security detections from CrowdStrike, streamlining incident response and notification processes.

The workflow is triggered daily at midnight by the Schedule Trigger node. It begins by fetching recent security detections from CrowdStrike using an HTTP Request node. The response is then split into individual detections for further processing. Each detection is enriched by querying the CrowdStrike API for detailed information using another HTTP Request node, and the detections are processed sequentially using the Split In Batches node.

Next, the workflow looks up behavioral information associated with each detection in VirusTotal using two HTTP Request nodes: one queries VirusTotal by SHA256 value, the other by IOC (Indicator of Compromise) value. The workflow includes a 1-second pause using the Wait node to prevent rate limiting when making requests to the VirusTotal API (a sketch of this throttled lookup follows below).

Subsequently, the workflow sets fields with relevant details from both CrowdStrike and VirusTotal, including detection links, confidence scores, filenames, usernames, and more. These details are concatenated using an Item Lists node for each detection. The final step creates a Jira issue for each detection, with a summary containing the CrowdStrike alert severity and hostname, and a description that incorporates information from CrowdStrike and VirusTotal. Information about the issue is then sent via a Slack message to a Slack user.

Potential issues during setup include configuring the Schedule Trigger node to trigger in the correct time zone and handling potential rate limiting from the VirusTotal API, which could lead to throttled requests. The note about a possible typo in the URL of the VirusTotal nodes should also be addressed to ensure correct API calls, and the Jira node may need to be replaced with the latest version for compatibility. Properly configuring API credentials and handling errors that may occur during API requests are essential for smooth workflow operation. Careful testing with sample data is recommended to validate the workflow's functionality and ensure it aligns with your organization's security incident response processes.
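A minimal sketch of the throttled VirusTotal lookup, using the public v3 file-report endpoint; the hash list is a placeholder:

```javascript
// Sketch: look up SHA256 hashes in VirusTotal with a 1-second pause
// between requests, mirroring the Wait node that avoids rate limiting.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function lookupHashes(hashes, apiKey) {
  const reports = [];
  for (const sha256 of hashes) {
    const res = await fetch(`https://www.virustotal.com/api/v3/files/${sha256}`, {
      headers: { 'x-apikey': apiKey },
    });
    reports.push(await res.json());
    await sleep(1000); // one-second pause, as in the workflow
  }
  return reports;
}
```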
by Eduard
This template is a PoC of a ReAct AI Agent capable of fetching random pages (not only Wikipedia or Google search results). On the top part there's a manual chat node connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content.

The page content extraction starts with converting query parameters into a JSON object. There are 3 pre-defined parameters:

- **url** - the address of the page to fetch
- **method** = full / simplified
- **maxlimit** - maximum length for the final page. For longer pages an error message is returned back to the agent.

Page content fetching is a multistep process:

1. An HTTP Request node tries to get the page content.
2. If the page content was successfully retrieved, a series of post-processing steps begin:
   - Extract the HTML body content.
   - Remove all unnecessary tags to reduce the page size.
   - Further eliminate external URLs and IMG src values (based on the method query parameter).
   - Convert the remaining HTML to Markdown, reducing the page length even more while preserving the basic page structure.
3. The remaining content is sent back to the agent if it's not too long (maxlimit = 70000 by default, see the CONFIG node).

NB: You can isolate the HTTP Request part into a separate workflow. Check the Workflow Tool description: it guides the agent to provide a query string with several parameters instead of a JSON object.

Please reach out to Eduard if you need further assistance with your n8n workflows and automations!

Note that to use this template, you need to be on n8n version 1.19.4 or later.
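A rough sketch of the post-processing chain: body extraction, tag stripping, and the maxlimit guard. The real workflow uses dedicated HTML/Markdown steps; the regexes here are simplified stand-ins:

```javascript
// Simplified stand-in for the page clean-up and length check.
const MAX_LIMIT = 70000; // default from the CONFIG node

function simplifyPage(html, maxlimit = MAX_LIMIT) {
  const body = html.match(/<body[^>]*>([\s\S]*)<\/body>/i)?.[1] ?? html;
  const cleaned = body
    .replace(/<(script|style|noscript)[\s\S]*?<\/\1>/gi, '') // drop heavy tags
    .replace(/\s(href|src)="https?:\/\/[^"]*"/gi, '');       // drop external refs
  if (cleaned.length > maxlimit) {
    return { error: `Page too long (${cleaned.length} > ${maxlimit} chars)` };
  }
  return { content: cleaned };
}
```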
by Jonathan
Task: Handle dates and times in your workflow. Why: Date and time formats can be hard to work with; n8n has two main ways of handling them that cover all the main needs. Main use cases:

- Change date format
- Set custom dates (incl. now and today)
- Date math
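The three use cases look like this in a Code node, using the Luxon library that n8n bundles (the specific formats and offsets are just examples):

```javascript
// Examples of the three date/time use cases with Luxon.
const { DateTime } = require('luxon'); // available in n8n's Code node

const formatted = DateTime.now().toFormat('yyyy-MM-dd');   // change date format
const custom = DateTime.fromISO('2024-01-31T09:00:00');    // set a custom date
const nextWeek = DateTime.now().plus({ days: 7 }).toISO(); // date math

return [{ json: { formatted, custom: custom.toISO(), nextWeek } }];
```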