by Eduard
Extract data from a webpage (the Y Combinator news page) and create a nice list using the itemList node. It seems that the current version of n8n (0.141.1) requires extracting each variable one by one; hopefully a future version will make it possible to create the table using just one itemList node. Another nice feature of the workflow is the automatically generated file name for the resulting table. Check out the "fileName" option of the Spreadsheet File node: `Ycombinator_news_{{new Date().toISOString().split('T', 1)[0]}}.{{$parameter["fileFormat"]}}`. The resulting table is saved as an .xls file and delivered via email.
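A minimal sketch of how that fileName expression resolves, written as plain JavaScript you can run outside n8n. The `fileFormat` constant stands in for n8n's `$parameter["fileFormat"]`, which is not available outside the node:

```js
// Stand-in for {{$parameter["fileFormat"]}} — inside n8n this comes from the node's own parameter.
const fileFormat = 'xls';

// toISOString() -> "2021-11-05T14:03:12.345Z"; split('T', 1)[0] keeps only the date part.
const datePart = new Date().toISOString().split('T', 1)[0];

const fileName = `Ycombinator_news_${datePart}.${fileFormat}`;
console.log(fileName); // e.g. "Ycombinator_news_2021-11-05.xls"
```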
by Lorena
This workflow transcribes audio files stored in AWS S3 and stores the information in Google Sheets.

- **Google Drive Trigger node** triggers the workflow when a new file is uploaded to Google Drive.
- **AWS S3 1 node** uploads the new file to an S3 bucket.
- **AWS S3 2 node** gets the file from the S3 bucket.
- **AWS Transcribe 1 node** creates a transcription job for the respective audio file.
- **Wait node** waits for the transcription job from the previous node to complete before the workflow proceeds (necessary in case the service is busy or the file to be transcribed is large, which would delay the workflow).
- **AWS Transcribe 2 node** gets the information of the transcription job.
- **Set node** sets the necessary values to be included in the data set (see the sketch after this list).
- **Google Sheets node** adds the transcription information to a sheet that serves as the data set.
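A hedged sketch of what the Set step does, written as an equivalent Code node. The field names follow AWS Transcribe's GetTranscriptionJob response (`TranscriptionJob`, `TranscriptionJobStatus`, `Transcript.TranscriptFileUri`); the exact shape the n8n node returns may differ, so treat the paths as assumptions:

```js
// Pull the fields worth keeping from an AWS Transcribe job result.
return items.map((item) => {
  const job = item.json.TranscriptionJob ?? item.json; // the n8n node may already unwrap this
  return {
    json: {
      jobName: job.TranscriptionJobName,
      status: job.TranscriptionJobStatus,          // e.g. "COMPLETED"
      language: job.LanguageCode,                  // e.g. "en-US"
      transcriptUri: job.Transcript?.TranscriptFileUri,
    },
  };
});
```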
by Lorena
This workflow collects images from web search results on a specific query, analyzes the images for labels, formats the text, and adds the information to Google Sheets.

- **HTTP Request node** gets images from the web.
- **AWS Rekognition node** analyzes the images (in this case, it detects text in an image).
- **Set node** sets the values necessary for the data set.
- **Function node** transforms the text detected in the image to lower case (a sketch follows below).
- **Google Sheets node** adds the image information to a sheet that serves as the data set.
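A minimal sketch of the lower-casing Function node, assuming the preceding Set node stored the detected text in a field named `text` (a hypothetical field name):

```js
// Lower-case the text AWS Rekognition detected, item by item.
return items.map((item) => {
  item.json.text = (item.json.text || '').toLowerCase();
  return item;
});
```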
by TheUnknownEntity
I'm currently trialing a 4-day work week for all staff at my company, and one of the major impacts on productivity is interruptions. As such, I opted to use n8n to create a workflow that monitors my Google Calendar and, when an event starts, updates my Slack status with an emote and the title of the calendar task. Additionally, I opted to change the colour of a Philips Hue lamp located in our living room, where my wife is currently working, so she knows whether she can interrupt me or not.

My calendar is built on the theory behind the Diary Detox system, and as such the Slack status reflects the colours involved. This was achieved using the emote aliases for the relevant colour circles. The Philips Hue lamp status is changed via the local API with Home Assistant. This is a very similar process to controlling it with something like the Stream Deck, but the workflow calls the webhook instead of the Stream Deck. This process can be found in lots of YouTube videos such as this. This gives my wife a very quick and easy way to know whether she can interrupt me in my office (when the lights are green or blue) or whether I'm busy (red).

Please note: the above images are not intended to be an incentive to create your own Squid Games.

Additionally, when integrating Slack with n8n, there are two API tokens which can be used. Typically the Bot User OAuth Token is used; however, in order for your status to be updated, the User OAuth Token must be used, with the users.profile:read and users.profile:write permissions enabled.

For clarity, I have removed the webhooks from the workflow, as they would allow any person to control my lights. These can be inserted in the HTTP Request nodes. Each node responds to a different automation within the Home Assistant infrastructure.

Acknowledgement: I would also like to credit Jon (Discord) aka 8668 (Workflows) for writing the Function node which turns the colorId into a named variable (a sketch of that kind of mapping follows below).
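A hedged sketch of the kind of Function node credited above, not the author's exact code: it maps a Google Calendar event `colorId` to a human-readable name. The id-to-name table follows Google's standard event colour palette; verify it against your own calendar before relying on it:

```js
// Google Calendar event colour palette (assumed standard mapping).
const colorNames = {
  '1': 'Lavender', '2': 'Sage',     '3': 'Grape',
  '4': 'Flamingo', '5': 'Banana',   '6': 'Tangerine',
  '7': 'Peacock',  '8': 'Graphite', '9': 'Blueberry',
  '10': 'Basil',   '11': 'Tomato',
};

return items.map((item) => {
  // Events without an explicit colorId fall back to the calendar default.
  item.json.colorName = colorNames[item.json.colorId] ?? 'Default';
  return item;
});
```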
by Jonathan
This workflow searches for mentions of a company's name on Twitter and shares the tweets that mention it in a Slack channel.

Prerequisites
- A Slack account and credentials
- A Twitter account and credentials

Nodes
- **Cron node** executes the workflow every 10 minutes. Note that if you change the mode from "Every X", you will need to manually update the Date & Time node to subtract the interval you are using.
- **Set nodes** set the required values (name of the Slack channel, name of the Twitter account to search for, the tweet text and URL).
- **Date & Time node** subtracts 10 minutes from the workflow execution time.
- **Twitter node** gets the latest 50 tweets that mention the specified account.
- **IF node** filters tweets posted in the past 10 minutes (see the sketch after this list).
- **Slack node** posts the tweets in a Slack channel.
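A minimal sketch of the Date & Time plus IF logic collapsed into a single Code node, assuming each item carries Twitter's `created_at` field in a format `Date` can parse:

```js
// Keep only tweets created within the last 10 minutes (the Cron interval).
const cutoff = Date.now() - 10 * 60 * 1000;

return items.filter(
  (item) => new Date(item.json.created_at).getTime() >= cutoff
);
```

If you change the Cron interval, the `10` here (and in the Date & Time node) must change with it, as the note above points out.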
by Ron
This sample workflow allows you to forward alerts from TheHive 5 to SIGNL4 in order to send reliable alerts to your team. There are two nodes for testing the TheHive connection ("TheHive Read Alerts" and "TheHive Create Alert"). The node "TheHive Webhook Request" will receive requests for new alerts from TheHive. You need to configure the webhook and the notifications in TheHive accordingly. The node "SIGNL4 Send Alert" sends the alert to SIGNL4 and the node "SIGNL4 Resolve Alert" will close the alert in SIGNL4 in case it has been closed in TheHive.
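A heavily hedged sketch of how the incoming TheHive webhook payload could be routed to either the "SIGNL4 Send Alert" or the "SIGNL4 Resolve Alert" branch. The field names (`action`, `objectType`, `object.status`) follow TheHive's webhook notification format as I understand it; verify them against your TheHive 5 instance:

```js
// Route a TheHive webhook event: new alerts go out, closed alerts get resolved.
const body = $json.body ?? $json; // Webhook node may nest the payload under "body"

const isAlert  = body.objectType === 'Alert';
const isNew    = body.action === 'create';
const isClosed = body.action === 'update' && body.object?.status === 'Closed'; // assumed status value

const route = isAlert && isNew ? 'send'
            : isAlert && isClosed ? 'resolve'
            : 'ignore';

return [{ json: { route, alert: body.object } }];
```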
by Tom
This workflow builds a valid RSS feed (which is an XML feed under the hood) for ARD Audiothek podcasts. This allows you to subscribe to such podcasts using your favourite podcatcher without using the ARD Audiothek app. The example builds a feed for Kalk & Welk, but the workflow can easily be adjusted by providing another podcast URL in the Get overview page HTTP Request node. To subscribe to the feed, activate your n8n workflow and then use the Production URL from the initial Feed Webhook node in your podcatcher. I've tested the resulting feed using Pocket Casts and Miniflux. When using Miniflux, you can add your feed via this page to your account. Make sure you select Private when doing so to avoid sharing your n8n instance with the world. The resulting feed passes the W3C Feed Validation Service. The workflow can also be used as a foundation to free other podcasts from proprietary big media platforms, though not all of them will be as simple to deal with as the ARD Audiothek.
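For orientation, a hedged sketch of how a Code node could assemble the RSS 2.0 items the feed needs. The fields on `item.json` (`title`, `description`, `audioUrl`, `pubDate`) are placeholders for whatever the ARD Audiothek responses actually provide, and a real feed should also escape XML entities in the text fields:

```js
// Build one <item> per episode; podcatchers find the audio via the <enclosure> tag.
const itemsXml = items.map(({ json }) => `
  <item>
    <title>${json.title}</title>
    <description>${json.description}</description>
    <enclosure url="${json.audioUrl}" type="audio/mpeg" />
    <pubDate>${json.pubDate}</pubDate>
    <guid isPermaLink="false">${json.audioUrl}</guid>
  </item>`).join('');

const rss = `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel><title>Kalk &amp; Welk</title>${itemsXml}</channel></rss>`;

return [{ json: { rss } }];
```

The Webhook response then needs to return this string with a `Content-Type: application/rss+xml` (or `text/xml`) header so podcatchers accept it.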
by n8n Team
This workflow is designed to compare two datasets (Dataset 1 and Dataset 2) based on a common field, "fruit", and provide insights into the differences. Here are the steps:

1. **Manual Trigger**: The workflow begins when a user clicks "Execute Workflow".
2. **Dataset 1**: This node generates the first dataset containing information about fruits, such as apple, orange, grape, strawberry, and banana, along with their colors.
3. **Dataset 2**: This node generates the second dataset, also containing information about fruits, but with some variations in color. For example, it includes a "kiwi" with the color "mostly green".
4. **Compare Datasets**: This node takes both datasets and compares them based on the "fruit" field, identifying any differences or matches between the two (illustrated in the sketch below).

In summary, this workflow compares two datasets of fruits and their colors, identifies differences, and provides guidance on how to explore the comparison results.
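A plain-JavaScript sketch of what the Compare Datasets node does when keyed on "fruit". The node itself needs no code; this just makes the logic concrete with two-row stand-ins for the datasets:

```js
const dataset1 = [
  { fruit: 'apple', color: 'red' },
  { fruit: 'grape', color: 'purple' },
];
const dataset2 = [
  { fruit: 'apple', color: 'green' },        // same key, different color
  { fruit: 'kiwi', color: 'mostly green' },  // only in dataset 2
];

// Index each dataset by the match field.
const byFruit = (rows) => Object.fromEntries(rows.map((r) => [r.fruit, r]));
const a = byFruit(dataset1), b = byFruit(dataset2);

const inAOnly   = dataset1.filter((r) => !(r.fruit in b));
const inBOnly   = dataset2.filter((r) => !(r.fruit in a));
const different = dataset1.filter((r) => r.fruit in b && b[r.fruit].color !== r.color);

console.log({ inAOnly, inBOnly, different });
// -> grape only in A, kiwi only in B, apple present in both but with different colors
```

These three buckets (plus a "same" bucket) correspond to the node's separate outputs, which is where you explore the comparison results.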
by n8n Team
This workflow demonstrates two ways of exporting data from SQL to XML. First, several random records are fetched from the MySQL database. Then, in the upper part of the workflow, the structure of the XML document is defined in a Set node. After that, the Item Lists node combines all items into a single array, which allows the XML node to create a simple XML file. The lower part of the workflow shows how to create an XML file with attributes. It is almost identical, except that a `$` (dollar sign) JSON key is used to define XML attributes (see the sketch below). Finally, both files are saved locally.
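A sketch of the two Set-node shapes side by side. The XML node converts plain keys into child elements and keys nested under `$` into attributes; the example values are illustrative stand-ins for whatever your MySQL records contain:

```js
// Variant 1: every key becomes a child element.
const asElements = {
  product: { code: 'S10_1678', name: '1969 Harley Davidson Ultimate Chopper' },
};
// -> <product><code>S10_1678</code><name>1969 Harley ...</name></product>

// Variant 2: keys under "$" become XML attributes instead.
const asAttributes = {
  product: {
    $: { code: 'S10_1678' }, // attribute bag
    name: '1969 Harley Davidson Ultimate Chopper',
  },
};
// -> <product code="S10_1678"><name>1969 Harley ...</name></product>
```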
by n8n Team
This workflow generates CSV files containing a list of 10 random users with specific characteristics using OpenAI's GPT-4 model. It then splits this data into batches, converts it to CSV format, and saves it to disk for further use. The workflow execution begins here when triggered manually.

- **"OpenAI" node**: uses the OpenAI API to generate random user data. The input to the OpenAI API is a fixed string which asks for a list of 10 random users with some specific attributes. The attributes include a name and surname starting with the same letter, a subscription status, and a subscription date (if they are subscribed). There is also a short example of the JSON object structure. This technique is called one-shot prompting.
- **"Split In Batches" node**: handles the OpenAI responses one by one.
- **"Parse JSON" node**: converts the content of the message received from the OpenAI node (which is in string format) into a JSON object (see the sketch after this list).
- **"Make JSON Table" node**: converts the JSON data into a tabular format, which is easier to handle for further data processing.
- **"Convert to CSV" node**: converts the table-format data received from the "Make JSON Table" node into CSV format and assigns a file name.
- **"Save to Disk" node**: saves the CSV generated in the previous node to disk in the ".n8n" directory.

The workflow is designed in a circular manner: after saving the file to disk, it goes back to the "Split In Batches" node to process the next OpenAI output, until all batches are processed.
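A hedged sketch of the "Parse JSON" step as a Code node. The path to the model's reply (`message.content` here, with `text` as a fallback) depends on the OpenAI node version, so treat it as an assumption:

```js
// The OpenAI node returns the model's reply as a string; parse it into data.
const raw = $json.message?.content ?? $json.text;

// Throws if GPT-4 strayed from the JSON format requested by the one-shot prompt.
const users = JSON.parse(raw);

// Assuming the model returned an array of user objects, emit one item per user.
return users.map((user) => ({ json: user }));
```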
by n8n Team
This is an example workflow that imports an XML file into an SQL database. The ReadBinaryFiles node loads the XML file from the server. Then a Code node extracts the file content from the binary buffer (see the sketch below). Afterwards, an XML node converts the XML string into a JSON structure. Finally, the MySQL node inserts the data records into the SQL table. In the upper part of the workflow there is another MySQL node that is disabled. This node creates a new table with all the required variables, based on the sample SQL database: https://www.mysqltutorial.org/mysql-sample-database.aspx
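A minimal sketch of the Code node step, assuming Read Binary Files stored the payload under the default binary property name `data` (n8n keeps binary payloads base64-encoded):

```js
// Decode the base64 binary buffer back into the original XML text.
const xmlString = Buffer.from(items[0].binary.data.data, 'base64').toString('utf8');

// Hand the XML string to the next node as a JSON field.
return [{ json: { data: xmlString } }];
```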
by Yaron Been
Fire Part Crafter Image Generator

**Description**

PartCrafter is a structured 3D mesh generation model that creates multiple parts and objects from a single RGB image.

**Overview**

This n8n workflow integrates with the Replicate API to use the fire/part-crafter model. This AI model can generate high-quality 3D mesh content based on your inputs.

**Features**

- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**

Required:
- **image** (string): Input image for 3D mesh generation

Optional:
- **seed** (integer, default: 0): Random seed for reproducibility. Use 0 for a random seed
- **num_parts** (integer, default: 16): Number of parts to generate
- **num_tokens** (string, default: 2048): Number of tokens for generation
- **guidance_scale** (number, default: 7): Guidance scale for generation
- **remove_background** (boolean, default: false): Remove background from input image
- **use_flash_decoder** (boolean, default: false): Use flash decoder for faster inference (temperamental?)
- **num_inference_steps** (integer, default: 50): Number of inference steps

**How to Use**

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate the 3D mesh output
4. Access the generated output from the final node

**API Reference**

- Model: fire/part-crafter
- API endpoint: https://api.replicate.com/v1/predictions (a hedged request sketch follows below)

**Requirements**

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
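A hedged sketch of the HTTP call the workflow makes, following Replicate's predictions API shape. The `REPLICATE_API_KEY` environment variable and the version hash are placeholders you must supply from your own account and the model page:

```js
// Create a prediction on Replicate; the workflow's HTTP Request node does the equivalent.
const response = await fetch('https://api.replicate.com/v1/predictions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.REPLICATE_API_KEY}`, // placeholder credential
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    version: '<fire/part-crafter version hash>', // placeholder — copy it from the model page
    input: {
      image: 'https://example.com/input.png', // required parameter
      num_parts: 16,                          // optional; defaults listed above
      guidance_scale: 7,
      num_inference_steps: 50,
    },
  }),
});

const prediction = await response.json();
// Predictions are asynchronous: poll prediction.urls.get until status is
// "succeeded", then read the generated mesh from the output field.
console.log(prediction.id, prediction.status);
```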