by Yaron Been
Prunaai Flux Schnell Image Generator

**Description**

This is a 3x faster FLUX.1 [schnell] model from Black Forest Labs, optimised with Pruna with minimal quality loss. Contact us for more at pruna.ai.

**Overview**

This n8n workflow integrates with the Replicate API to use the prunaai/flux-schnell model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**

Required Parameters

- **prompt** (string): Prompt for the generated image

Optional Parameters

- **seed** (integer, default: None): Random seed. Set for reproducible generation
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image
- **speed_mode** (string, default: Juiced 🔥 (default)): Run faster predictions with a model optimized for speed
- **num_outputs** (integer, default: 1): Number of outputs to generate
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Format of the output images
- **output_quality** (integer, default: 80): Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest. Not relevant for .png outputs
- **num_inference_steps** (integer, default: 4): Number of denoising steps. 4 is recommended; fewer steps produce lower-quality outputs faster

**How to Use**

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

**API Reference**

- Model: prunaai/flux-schnell
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
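For reference, here is a minimal sketch of the kind of request the workflow's HTTP Request node sends to Replicate. It assumes Node 18+ (for the global fetch), a `REPLICATE_API_TOKEN` environment variable, and Replicate's model-scoped predictions endpoint; the prompt and parameter values are illustrative only.

```javascript
// Minimal sketch: create a prediction for prunaai/flux-schnell on Replicate.
// Assumes Node 18+ (global fetch) and REPLICATE_API_TOKEN in the environment.
async function createPrediction() {
  const resp = await fetch(
    "https://api.replicate.com/v1/models/prunaai/flux-schnell/predictions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        input: {
          prompt: "a lighthouse at dusk, photorealistic",
          aspect_ratio: "1:1",
          num_inference_steps: 4,
          output_format: "jpg",
        },
      }),
    }
  );
  const prediction = await resp.json();
  console.log(prediction.id, prediction.status); // poll until "succeeded"
}

createPrediction();
```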
by Yaron Been
Prunaai Flux.1 Dev Image Generator

**Description**

This is the fastest Flux Dev endpoint in the world. Contact us for more at pruna.ai.

**Overview**

This n8n workflow integrates with the Replicate API to use the prunaai/flux.1-dev model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**

Required Parameters

- **prompt** (string): Prompt

Optional Parameters

- **seed** (integer, default: -1): Seed
- **guidance** (number, default: 3.5): Guidance scale
- **image_size** (integer, default: 1024): Base image size (longest side)
- **speed_mode** (string, default: Juiced 🔥 (default)): Speed optimization level
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Output format
- **output_quality** (integer, default: 80): Output quality (for jpg and webp)
- **num_inference_steps** (integer, default: 28): Number of inference steps

**How to Use**

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

**API Reference**

- Model: prunaai/flux.1-dev
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
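The "automated status checking and result retrieval" step amounts to polling the prediction until it settles. A minimal sketch, assuming the standard Replicate predictions endpoint, a `REPLICATE_API_TOKEN` environment variable, and an `id` from the creation response:

```javascript
// Minimal sketch: poll a Replicate prediction until it finishes.
// Assumes Node 18+ and REPLICATE_API_TOKEN in the environment.
async function waitForPrediction(id) {
  while (true) {
    const prediction = await fetch(
      `https://api.replicate.com/v1/predictions/${id}`,
      { headers: { Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}` } }
    ).then((r) => r.json());

    if (prediction.status === "succeeded") return prediction.output; // image URL(s)
    if (prediction.status === "failed" || prediction.status === "canceled") {
      throw new Error(`Prediction ${prediction.status}: ${prediction.error}`);
    }
    await new Promise((res) => setTimeout(res, 1000)); // back off before retrying
  }
}
```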
by MRJ
Modular Hazard Analysis Workflow: Free Version

**Business Value Proposition**

Accelerates ISO 26262 compliance for automotive/industrial systems by automating safety analysis while maintaining rigorous audit standards.

**📈 Key Benefits**

- Time: Instant report generation vs. weeks of documentation for HAZOP
- Risk Mitigation: Pre-validated templates reduce human error

**Quick Guide**

1. Input a systems_description file to the workflow
2. Provide an OPENAI_API_KEY to the chat model. You can also replace the chat model with the model of your interest.

**⏯️ Running the Workflow**

Refer to the GitHub repo for a detailed explanation of how the workflow can be used.

**📧 Contact**

For collaboration proposals or security issues, contact me by email.

**⚠️ Validation & Limitations**

AI-Assisted Analysis Considerations

| Advantage | Mitigation Strategy | Implementation Example |
|-----------|---------------------|------------------------|
| Rapid hazard identification | Human validation layer | Manual review nodes in workflow |
| Consistent S/E/C scoring | Rule-based validation | ASIL-D → Redundancy check |
| Edge case coverage | Cross-reference with historical data | Integration with incident databases |
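To make the "rule-based validation" row concrete, here is a minimal sketch (not part of the template itself) of how an S/E/C triple can be mapped to an ASIL, using the well-known sum shortcut for the ISO 26262-3 risk graph. The function name and encoding are illustrative assumptions; verify against the standard before relying on it in a safety case.

```javascript
// Minimal sketch: ASIL determination from S/E/C per the ISO 26262-3 risk graph.
// S: severity (1-3), E: exposure (1-4), C: controllability (1-3).
// Uses the common "sum" shortcut for the standard's table; verify against
// the standard before use in a real safety case.
function determineASIL(S, E, C) {
  const sum = S + E + C;
  if (sum >= 10) return "ASIL D";
  if (sum === 9) return "ASIL C";
  if (sum === 8) return "ASIL B";
  if (sum === 7) return "ASIL A";
  return "QM";
}

console.log(determineASIL(3, 4, 3)); // "ASIL D" -> triggers the redundancy check
```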
by Nícolas Pastorello
**What is this?**

This is an n8n workflow designed to supercharge your Sonarr setup. Instead of just waiting for releases to appear in your RSS feed, this workflow proactively runs on a schedule, finds what's missing, actively searches for it, and grabs the best result based on your specific criteria. It's a "set it and forget it" solution to ensure your library is always complete.

**Key Features**

- 🚀 Proactive Searching: Doesn't wait for content to come to you. It actively triggers a search for missing episodes.
- 🗓️ Fully Automated & Scheduled: Runs every 12 hours by default to check for anything new that's missing.
- 🧠 Smart & Efficient: Searches only once per season, even if multiple episodes from that season are missing, preventing unnecessary API calls.
- 🎯 Precise Release Filtering: It validates search results against the exact quality name and language you define before telling Sonarr to grab it. This gives you more control than standard quality profiles.
- ✅ Automatic Download: Once a valid release is found, it's automatically pushed to your download client via Sonarr.

**How It Works**

1. Trigger: The workflow starts automatically on a schedule.
2. Fetch Missing: It connects to your Sonarr instance and gets a list of all monitored, "wanted" episodes.
3. Filter & Group: It intelligently creates a unique list of seasons that need searching.
4. Search: It loops through each unique season and tells Sonarr to perform an interactive search.
5. Validate: It inspects the search results and only allows releases that match both the pre-defined quality AND language.
6. Grab: If a perfect match is found, it sends a final command to Sonarr to grab that specific release and begin the download.

**How to Use This Template**

1. Import the JSON file into your n8n instance.
2. Find the node named "info" (it's a "Set" node near the beginning). This is your main configuration area.
3. Update the following values in the "info" node:
   - urlSonar: Change http://192.168.31.204:8989 to your Sonarr's URL.
   - apikey: Paste your Sonarr API key here.
   - quality: Set the exact quality name you want to match (e.g., WEBDL-1080p).
   - languages: Set the exact language name you want to match (e.g., English, Spanish).
4. Activate the workflow.

That's it! You can also change the schedule by editing the "Schedule Trigger" node. A sketch of the underlying Sonarr API calls follows below.
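For orientation, here is a minimal sketch of the Sonarr v3 calls this workflow wires together. `urlSonar` and `apikey` mirror the "info" node; the endpoint paths and response field names are believed to match Sonarr v3 but should be verified against your instance's API documentation.

```javascript
// Minimal sketch of the fetch -> group -> search -> validate -> grab flow.
// Assumes Node 18+ (global fetch); values mirror the "info" node defaults.
const urlSonar = "http://192.168.31.204:8989";
const apikey = "YOUR_SONARR_API_KEY";
const headers = { "X-Api-Key": apikey };

async function run() {
  // 1) Fetch monitored "wanted" episodes
  const missing = await fetch(`${urlSonar}/api/v3/wanted/missing?pageSize=200`, {
    headers,
  }).then((r) => r.json());

  // 2) Group into unique (seriesId, seasonNumber) pairs
  const seasons = new Map();
  for (const ep of missing.records) {
    seasons.set(`${ep.seriesId}-${ep.seasonNumber}`, ep);
  }

  for (const ep of seasons.values()) {
    // 3) Interactive search for the whole season
    const releases = await fetch(
      `${urlSonar}/api/v3/release?seriesId=${ep.seriesId}&seasonNumber=${ep.seasonNumber}`,
      { headers }
    ).then((r) => r.json());

    // 4) Validate quality AND language, then grab the first match
    const match = releases.find(
      (rel) =>
        rel.quality?.quality?.name === "WEBDL-1080p" &&
        rel.languages?.some((l) => l.name === "English")
    );
    if (match) {
      await fetch(`${urlSonar}/api/v3/release`, {
        method: "POST",
        headers: { ...headers, "Content-Type": "application/json" },
        body: JSON.stringify({ guid: match.guid, indexerId: match.indexerId }),
      });
    }
  }
}
run();
```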
by Harshil Agrawal
This workflow automatically monitors the functionality of a factory. The workflow logs machine data coming from factory sensors in a CrateDB database, generates an incident report in PagerDuty, and notifies the responsible staff members when the temperature of a machine crosses the threshold value.

This workflow builds on a workflow that generates factory data. Read more about this use case and how to build both workflows with step-by-step instructions in the blog post How to automate your factory's incident reporting.

**Prerequisites**

- A PagerDuty account and credentials
- An ActiveMQ instance with an AMQP connection, and credentials
- A CrateDB instance running locally or on a server, and credentials

**Nodes**

- AMQP Trigger node starts the workflow.
- IF node filters sensor values higher than 50°C.
- PagerDuty node creates an incident in the account.
- Set nodes set the required incident information and sensor data, respectively.
- CrateDB nodes ingest the incident data and machine sensor data, respectively.
- Function node converts degrees from Celsius to Fahrenheit. A sketch of this conversion is shown below.
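A minimal sketch of what that Function node does, assuming each incoming item carries a numeric `temperature_celsius` field (the actual field name in the template may differ):

```javascript
// Minimal sketch of the Celsius-to-Fahrenheit Function node.
// `items` is the array of incoming items an n8n Function node receives.
for (const item of items) {
  const c = item.json.temperature_celsius;
  item.json.temperature_fahrenheit = (c * 9) / 5 + 32; // e.g. 50°C -> 122°F
}
return items;
```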
by jason
Not sure what to eat tonight? Have recipes emailed to you daily based on your criteria. To run this workflow, you will need to have:

- A Recipe Search API key from Edamam
- An active email account with configured credentials

To set up your credentials:

- Set your Edamam AppID and AppKey in the Search Criteria node
- Select (or create) your email credentials in the Send Recipes node (and set up the to: and from: email addresses while you are at it)

To customize the recipes that you receive, open up the Search Criteria node and modify one or more of the following:

- **RecipeCount** - the number of recipes you would like to receive
- **IngredientCount** - the maximum number of ingredients you would like each recipe to have
- **CaloriesMin** - the minimum number of calories the recipe will have
- **CaloriesMax** - the maximum number of calories the recipe will have
- **TimeMin** - the minimum amount of time (in minutes) the recipe will take to prepare
- **TimeMax** - the maximum amount of time (in minutes) the recipe will take to prepare
- **Diet** - select one of the following options:
  - balanced - Protein/Fat/Carb values in 15/35/50 ratio
  - high-fiber - More than 5g fiber per serving
  - high-protein - More than 50% of total calories from proteins
  - low-carb - Less than 20% of total calories from carbs
  - low-fat - Less than 15% of total calories from fat
  - low-sodium - Less than 140mg Na per serving
  - random - selects a different random diet each day
- **Health** - select one of the following options:
  - alcohol-free - No alcohol used or contained
  - immuno-supportive - Recipes which fit a science-based approach to eating to strengthen the immune system
  - celery-free - Does not contain celery or derivatives
  - crustacean-free - Does not contain crustaceans (shrimp, lobster, etc.) or derivatives
  - dairy-free - No dairy; no lactose
  - egg-free - No eggs or products containing eggs
  - fish-free - No fish or fish derivatives
  - fodmap-free - Does not contain FODMAP foods
  - gluten-free - No ingredients containing gluten
  - keto-friendly - Maximum 7 grams of net carbs per serving
  - kidney-friendly - Per serving: phosphorus less than 250mg AND potassium less than 500mg AND sodium less than 500mg
  - kosher - Contains only ingredients allowed by the kosher diet; however, it does not guarantee kosher preparation of the ingredients themselves
  - low-potassium - Less than 150mg per serving
  - lupine-free - Does not contain lupine or derivatives
  - mustard-free - Does not contain mustard or derivatives
  - low-fat-abs - Less than 3g of fat per serving
  - no-oil-added - No oil added except to what is contained in the basic ingredients
  - low-sugar - No simple sugars: glucose, dextrose, galactose, fructose, sucrose, lactose, maltose
  - paleo - Excludes what are perceived to be agricultural products: grains, legumes, dairy products, potatoes, refined salt, refined sugar, and processed oils
  - peanut-free - No peanuts or products containing peanuts
  - pescatarian - Does not contain meat or meat-based products; can contain dairy and fish
  - pork-free - Does not contain pork or derivatives
  - red-meat-free - Does not contain beef, lamb, pork, duck, goose, game, horse, or other types of red meat or products containing red meat
  - sesame-free - Does not contain sesame seed or derivatives
  - shellfish-free - No shellfish or shellfish derivatives
  - soy-free - No soy or products containing soy
  - sugar-conscious - Less than 4g of sugar per serving
  - tree-nut-free - No tree nuts or products containing tree nuts
  - vegan - No meat, poultry, fish, dairy, eggs, or honey
  - vegetarian - No meat, poultry, or fish
  - wheat-free - No wheat; can have gluten though
  - random - selects a different random health option each day
- **SearchItem** - the general term that you are searching for, e.g. chicken
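For reference, here is a minimal sketch of the Edamam request those criteria translate into. It assumes the v2 Recipe Search endpoint; the parameter names follow Edamam's public documentation, but check your plan's docs for the exact options available to you (some plans also require an account-user header).

```javascript
// Minimal sketch: Edamam Recipe Search request built from the Search Criteria.
// Assumes Node 18+ (global fetch); app_id/app_key are placeholders.
const params = new URLSearchParams({
  type: "public",
  q: "chicken",           // SearchItem
  app_id: "YOUR_APP_ID",
  app_key: "YOUR_APP_KEY",
  diet: "balanced",       // Diet
  health: "peanut-free",  // Health
  calories: "200-600",    // CaloriesMin-CaloriesMax
  time: "10-45",          // TimeMin-TimeMax
});

fetch(`https://api.edamam.com/api/recipes/v2?${params}`)
  .then((r) => r.json())
  .then((data) => console.log(data.hits.length, "recipes found"));
```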
by Angel Menendez
This workflow runs at the beginning of each day and copies your Google Calendar events into Trello so you can take notes on, label, or automate your tasks. When deploying this, don't forget to change:

- Label ID for meeting type under "Create Trello Cards". You should be able to find instructions here on how to find the label ID (see the sketch below for one way to do it via the Trello API).
- Description for Trello cards under "Create Trello Cards". I currently pull in notes, but it should be simple to change this to pull in the Google Calendar description instead.

You can also edit the trigger to fire at a different time.
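A minimal sketch of listing a board's labels with the Trello REST API to find the label ID for the "Create Trello Cards" node; `boardId`, `key`, and `token` are placeholders you supply from your own Trello account.

```javascript
// Minimal sketch: print every label ID on a Trello board.
// Assumes Node 18+ (global fetch); all three values below are placeholders.
const boardId = "YOUR_BOARD_ID";
const key = "YOUR_TRELLO_KEY";
const token = "YOUR_TRELLO_TOKEN";

fetch(`https://api.trello.com/1/boards/${boardId}/labels?key=${key}&token=${token}`)
  .then((r) => r.json())
  .then((labels) => labels.forEach((l) => console.log(l.id, l.color, l.name)));
```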
by Lorena
This workflow is triggered when a new order is created in Shopify. Then:

- the order information is stored in Zoho CRM,
- an invoice is created in Harvest and stored in Trello,
- if the order value is above 50, an email with a discount coupon is sent to the customer and they are added to a Mailchimp campaign for high-value customers; otherwise, only a "thank you" email is sent to the customer. A sketch of this branch is shown below.

Note that you need to replace the List ID in the Trello node with your own ID (see instructions in our docs). The same goes for the Account ID in the Harvest node (see instructions here).
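A minimal sketch of the IF node's branch condition, assuming a Shopify order webhook payload (where `total_price` arrives as a string):

```javascript
// Minimal sketch of the order-value branch; the order object is a stand-in
// for the item produced by the Shopify trigger.
const order = { total_price: "64.00" };
if (parseFloat(order.total_price) > 50) {
  // high-value path: discount coupon email + Mailchimp campaign
} else {
  // standard path: "thank you" email only
}
```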
by Lorena
This workflow is triggered when a typeform is submitted; it then saves the sender's information into HubSpot as a new contact.

- Typeform Trigger: triggers the workflow when a typeform is submitted.
- Set: sets the fields for the values from Typeform.
- HubSpot 1: creates a new contact with information from Typeform.
- IF: filters contacts who expressed their interest in business services.
- HubSpot 2: updates the contact's stage to opportunity.
- Gmail: sends an email to the opportunity contacts with informational material.
- NoOp: takes no action for contacts who are not interested.
by Eduard
Extract data from a webpage (the Ycombinator news page) and create a nice list using the itemList node. The current version of n8n (0.141.1) seems to require extracting each variable one by one; hopefully in the future it will be possible to create the table using just one itemList node.

Another nice feature of the workflow is the automatically generated file name for the resulting table. Check out the "fileName" option of the Spreadsheet File node:

`Ycombinator_news_{{new Date().toISOString().split('T', 1)[0]}}.{{$parameter["fileFormat"]}}`

The resulting table is saved as an .xls file and delivered via email.
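For clarity, here is what that expression evaluates to: today's date in YYYY-MM-DD form plus the node's selected file format (the format value is a stand-in for `$parameter["fileFormat"]`).

```javascript
// Minimal sketch of the fileName expression, evaluated as plain JavaScript.
const fileFormat = "xls"; // stand-in for $parameter["fileFormat"]
const fileName = `Ycombinator_news_${new Date().toISOString().split("T", 1)[0]}.${fileFormat}`;
console.log(fileName); // e.g. "Ycombinator_news_2021-10-05.xls"
```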
by Lorena
This workflow transcribes audio files stored in AWS S3 and stores the information in Google Sheets.

- **Google Drive Trigger node** triggers the workflow when a new file is uploaded in Google Drive.
- **AWS S3 1 node** uploads the new file to an S3 bucket.
- **AWS S3 2 node** gets the file from the S3 bucket.
- **AWS Transcribe 1 node** creates a transcription job for the respective audio file.
- **Wait node** waits for the transcription job from the previous node to complete before proceeding with the workflow (necessary in case the service is busy or the file to be transcribed is large, which would delay the workflow).
- **AWS Transcribe 2 node** gets the information of the transcription job.
- **Set node** sets the necessary values to be included in the data set.
- **Google Sheets node** adds the transcription information to a sheet that serves as the data set.
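For reference, a minimal sketch of the two Transcribe calls those nodes perform, using the AWS SDK for JavaScript v3; the region, bucket, key, and job name are placeholders.

```javascript
// Minimal sketch: start a transcription job, then fetch its status.
import {
  TranscribeClient,
  StartTranscriptionJobCommand,
  GetTranscriptionJobCommand,
} from "@aws-sdk/client-transcribe";

const client = new TranscribeClient({ region: "us-east-1" });

// "AWS Transcribe 1": create the job for an audio file already in S3
await client.send(
  new StartTranscriptionJobCommand({
    TranscriptionJobName: "my-audio-job",
    Media: { MediaFileUri: "s3://my-bucket/audio/my-file.mp3" },
    LanguageCode: "en-US",
  })
);

// "AWS Transcribe 2": fetch the job's status and transcript location
const { TranscriptionJob } = await client.send(
  new GetTranscriptionJobCommand({ TranscriptionJobName: "my-audio-job" })
);
console.log(TranscriptionJob.TranscriptionJobStatus);
```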
by TheUnknownEntity
I'm currently trialing a 4-day work week for all staff at my company, and one of the major impacts on productivity is interruptions. As such, I opted to use n8n to create a workflow that monitors my Google Calendar and, when an event starts, updates my Slack status with an emote and the title of the calendar task. Additionally, I opted to change the colour of a Philips Hue lamp located in my living room, where my wife is currently working, so she knows whether she can interrupt me or not.

My calendar is built on the theory behind the Diary Detox system, and as such the Slack status reflects the colours involved. This was achieved using the emote aliases for the relevant colour circles. The Philips Hue lamp status is changed via the local API with Home Assistant. This is a very similar process to controlling it with something like the Stream Deck, but the workflow calls the webhook instead of the Stream Deck. This process can be found in lots of YouTube videos such as this. This gives my wife a very quick and easy way to know if she can interrupt me in my office (when the lights are green or blue) or when I'm busy (red).

Please note: the above images are not intended to be an incentive to create your own Squid Games.

Additionally, when integrating Slack with n8n, there are two APIs which can be used. Typically the Bot User OAuth Token is used; however, in order for your status to be updated, the User OAuth Token must be used with the users.profile:read and users.profile:write permissions enabled.

For clarity, I have removed the webhooks from the workflow, as leaving them in would allow any person to control my lights. These can be inserted in the HTTP Request nodes. Each node responds to a different automation within the Home Assistant infrastructure.

Acknowledgement: I would also like to credit Jon (Discord) aka 8668 (Workflows) for writing the Function node which turns the ColorID into a named variable; a sketch of that mapping is shown below.
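A minimal sketch of a Function node mapping Google Calendar `colorId` values to named colours. The colour names follow Google's standard event palette; the actual mapping in Jon's node may differ.

```javascript
// Minimal sketch: map Google Calendar colorId -> colour name for each item.
const colorNames = {
  "1": "Lavender", "2": "Sage",     "3": "Grape",
  "4": "Flamingo", "5": "Banana",   "6": "Tangerine",
  "7": "Peacock",  "8": "Graphite", "9": "Blueberry",
  "10": "Basil",   "11": "Tomato",
};

for (const item of items) {
  item.json.colorName = colorNames[item.json.colorId] ?? "Default";
}
return items;
```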