by Yaron Been
Prunaai Flux.1 Dev Image Generator

Description
This is the fastest Flux Dev endpoint in the world; contact us at pruna.ai for more.

Overview
This n8n workflow integrates with the Replicate API to run the prunaai/flux.1-dev model, a powerful AI model that generates high-quality images based on your inputs.

Features
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters

Required Parameters
- **prompt** (string): Prompt

Optional Parameters
- **seed** (integer, default: -1): Seed
- **guidance** (number, default: 3.5): Guidance scale
- **image_size** (integer, default: 1024): Base image size (longest side)
- **speed_mode** (string, default: Juiced 🔥 (default)): Speed optimization level
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Output format
- **output_quality** (integer, default: 80): Output quality (for jpg and webp)
- **num_inference_steps** (integer, default: 28): Number of inference steps

How to Use
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

API Reference
- Model: prunaai/flux.1-dev
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
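For reference, here is a minimal sketch of the kind of Replicate call the workflow's HTTP Request nodes wrap, using the model-scoped predictions endpoint. The token variable and prompt are placeholders, and the exact payload shape should be checked against the model's page on Replicate.

```javascript
// Minimal sketch of a Replicate prediction request (assumptions noted above).
const response = await fetch(
  'https://api.replicate.com/v1/models/prunaai/flux.1-dev/predictions',
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`, // placeholder
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      input: {
        prompt: 'A lighthouse at dawn, soft light', // example prompt
        aspect_ratio: '1:1',
        output_format: 'jpg',
        output_quality: 80,
        num_inference_steps: 28,
      },
    }),
  }
);
const prediction = await response.json();
// Poll the URL in prediction.urls.get until status is "succeeded",
// then read the image URL(s) from prediction.output.
```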
by Nícolas Pastorello
What is this?
This is an n8n workflow designed to supercharge your Sonarr setup. Instead of just waiting for releases to appear in your RSS feed, this workflow proactively runs on a schedule, finds what's missing, actively searches for it, and grabs the best result based on your specific criteria. It's a "set it and forget it" solution to ensure your library is always complete.

Key Features
- 🚀 Proactive Searching: Doesn't wait for content to come to you. It actively triggers a search for missing episodes.
- 🗓️ Fully Automated & Scheduled: Runs every 12 hours by default to check for anything new that's missing.
- 🧠 Smart & Efficient: Searches only once per season, even if multiple episodes from that season are missing, preventing unnecessary API calls.
- 🎯 Precise Release Filtering: Validates search results against the exact quality name and language you define before telling Sonarr to grab them. This gives you more control than standard quality profiles.
- ✅ Automatic Download: Once a valid release is found, it is automatically pushed to your download client via Sonarr.

How It Works
1. Trigger: The workflow starts automatically on a schedule.
2. Fetch Missing: It connects to your Sonarr instance and gets a list of all monitored, "wanted" episodes.
3. Filter & Group: It builds a unique list of seasons that need searching.
4. Search: It loops through each unique season and tells Sonarr to perform an interactive search.
5. Validate: It inspects the search results and only allows releases that match both the pre-defined quality AND language (see the sketch after this list).
6. Grab: If a match is found, it sends a final command to Sonarr to grab that specific release and begin the download.

How to Use This Template
1. Import the JSON file into your n8n instance.
2. Find the node named "info" (a "Set" node near the beginning). This is your main configuration area.
3. Update the following values in the "info" node:
   - urlSonar: Change http://192.168.31.204:8989 to your Sonarr URL.
   - apikey: Paste your Sonarr API key here.
   - quality: Set the exact quality name you want to match (e.g., WEBDL-1080p).
   - languages: Set the exact language name you want to match (e.g., English, Spanish).
4. Activate the workflow.

That's it! You can also change the schedule by editing the "Schedule Trigger" node.
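As an illustration of the Validate step, here is a hedged sketch of what the quality/language check could look like as an n8n Code node. The field names (`quality.quality.name`, `languages`) follow Sonarr's v3 release schema, and the two constants stand in for the values the real workflow reads from the "info" Set node.

```javascript
// Hedged sketch of the Validate step (Code node, "Run Once for All Items").
const wantedQuality = 'WEBDL-1080p';   // read from the "info" node in practice
const wantedLanguage = 'English';      // read from the "info" node in practice

return $input.all().filter((item) => {
  const release = item.json;
  const qualityName = release.quality?.quality?.name;
  const languageNames = (release.languages ?? []).map((l) => l.name);
  // Keep only releases matching BOTH the exact quality and the language.
  return qualityName === wantedQuality && languageNames.includes(wantedLanguage);
});
```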
by TheUnknownEntity
I'm currently trialing a 4-day work week for all staff at my company, and one of the major impacts on productivity is interruptions. As such, I opted to use n8n to create a workflow that monitors my Google Calendar and, when an event starts, updates my Slack status with an emote and the title of the calendar event. Additionally, it changes the colour of a Philips Hue lamp located in our living room, where my wife is currently working, so she knows whether she can interrupt me or not.

My calendar is built on the theory behind the Diary Detox system, and the Slack status reflects the colours involved. This was achieved using the emote aliases for the relevant colour circles. The Philips Hue lamp status is changed via the local API with Home Assistant. This is a very similar process to controlling it with something like the Stream Deck, but the workflow calls the webhook instead of the Stream Deck. The process is covered in plenty of YouTube videos such as this. This gives my wife a very quick and easy way to know whether she can interrupt me in my office (when the lights are green or blue) or whether I'm busy (red).

Please note: the above images are not intended to be an incentive to create your own Squid Games.

Additionally, when integrating Slack with n8n, there are two APIs which can be used. Typically the Bot User OAuth Token is used; however, in order for your status to be updated, the User OAuth Token must be used, with the users.profile:read and users.profile:write permissions enabled.

For clarity, I have removed the webhooks from the workflow, as they would allow any person to control my lights. These can be inserted in the HTTP Request nodes. Each node triggers a different automation within the Home Assistant infrastructure.

Acknowledgement: I would also like to credit Jon (Discord) aka 8668 (Workflows) for writing the Function node which turns the colorId into a named variable.
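For reference, the status update boils down to a single call to Slack's users.profile.set method with the User OAuth Token. Here is a hedged sketch; the token variable and the status values are placeholders.

```javascript
// Hedged sketch: set a Slack status via users.profile.set.
// Requires a User OAuth Token with the users.profile:write scope.
const response = await fetch('https://slack.com/api/users.profile.set', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.SLACK_USER_TOKEN}`, // placeholder
    'Content-Type': 'application/json; charset=utf-8',
  },
  body: JSON.stringify({
    profile: {
      status_text: 'Deep work: weekly planning', // calendar event title in practice
      status_emoji: ':red_circle:',              // Diary Detox colour circle alias
      status_expiration: 0,                      // 0 = no auto-expiry
    },
  }),
});
console.log((await response.json()).ok);
```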
by n8n Team
This is an example workflow that imports an XML file into an SQL database. The Read Binary Files node loads the XML file from the server. Then the Code node extracts the file content from the binary buffer. Afterwards, an XML node converts the XML string into a JSON structure. Finally, the MySQL node inserts the data records into the SQL table.

In the upper part of the workflow there is another MySQL node that is disabled. This node creates a new table with all the required columns, based on the sample SQL database: https://www.mysqltutorial.org/mysql-sample-database.aspx
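The extraction step can be as small as a few lines. Here is a hedged sketch of such a Code node, assuming the default binary property name `data` and that n8n keeps binary data in memory (base64-encoded); with filesystem binary mode you would use `this.helpers.getBinaryDataBuffer` instead.

```javascript
// Hedged sketch: pull the XML string out of the binary buffer
// loaded by the Read Binary Files node (default property: "data").
const binary = $input.first().binary.data;
const xmlString = Buffer.from(binary.data, 'base64').toString('utf8');
return [{ json: { data: xmlString } }];
```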
by Dataki
This workflow enriches newly created Pipedrive organizations by adding a note to the organization object in Pipedrive. It assumes there is a custom "website" field in your Pipedrive setup, as data will be scraped from this website to generate a note using OpenAI. A notification is then sent in Slack.

⚠️ Disclaimer
This workflow uses a scraping API. Before using it, ensure you comply with the regulations regarding web scraping in your country or state.

Important Notes
- The OpenAI model used is GPT-4o, chosen for its large input token capacity. However, it is not the cheapest model if cost matters a lot to you.
- The system prompt in the OpenAI node generates output with relevant information, but feel free to improve or modify it according to your needs.

How It Works

Node 1: Pipedrive Trigger - An Organization is Created
This is the trigger of the workflow. When an organization object is created in Pipedrive, this node fires and retrieves the data. Make sure you have a "website" custom field in Pipedrive (in the n8n node, the field name will appear as a random ID rather than the Pipedrive custom field name).

Node 2: ScrapingBee - Get Organization's Website's Homepage Content
This node scrapes the content from the URL of the website associated with the Pipedrive organization created in Node 1. The workflow uses the ScrapingBee API, but you can use any preferred API or simply the HTTP Request node in n8n.

Node 3: OpenAI - Message GPT-4o with Scraped Data
This node sends the HTML-scraped data from the previous node to the OpenAI GPT-4o model. The system prompt instructs the model to extract company data, such as products or services offered and competitors (if known by the model), and to format it as HTML for optimal use in a Pipedrive note.

Node 4: Pipedrive - Create a Note with OpenAI Output
This node adds a note to the organization created in Pipedrive, using the OpenAI node output. The note will include the company description, target market, selling products, and competitors (if GPT-4o was able to determine them).

Nodes 5 & 6: HTML To Markdown & Code - Markdown to Slack Markdown
These two nodes convert the HTML output to Slack Markdown (see the sketch below). The note created in Pipedrive is in HTML format, as specified by the system prompt of the OpenAI node. To send it to Slack, it needs to be converted to Markdown and then to Slack Markdown.

Node 7: Slack - Notify
This node sends a message in Slack containing the Pipedrive organization note created with this workflow.
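Here is a hedged sketch of what the Markdown-to-Slack-Markdown Code node (Node 6) could look like. It assumes the HTML To Markdown node put its output in a `data` field, and the regexes only cover the most common constructs; the template's actual node may differ.

```javascript
// Hedged sketch: convert standard Markdown to Slack mrkdwn.
const md = $json.data;
const slackMrkdwn = md
  .replace(/\*\*(.+?)\*\*/g, '*$1*')          // bold: **text** → *text*
  .replace(/\[(.+?)\]\((.+?)\)/g, '<$2|$1>')  // links: [text](url) → <url|text>
  .replace(/^#+\s*(.+)$/gm, '*$1*');          // headings → bold lines
return [{ json: { slackMessage: slackMrkdwn } }];
```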
by Fernanda Silva
Workflow Description
This workflow is an intelligent chatbot, using an OpenAI Assistant, integrated with a backend that supports WhatsApp Business. It is designed to handle various use cases such as sales and customer support. Below is a breakdown of its functionality and key components.

Workflow Structure and Functionality

Chat Input (Chat Trigger)
- The flow starts by receiving messages from customers via WhatsApp Business.
- Collects basic information, such as session_id, to organize interactions.

Condition Check (If Node)
- Checks if additional customer data (e.g., name, age, dependents) is sent along with the message.
- If additional data is present, a customized prompt is generated which includes this information. The prompt specifies that this data is for the assistant's awareness and doesn't require a response.

Data Preparation (Edit Fields Nodes)
- Formats the customer data and the interaction details to be processed by the AI assistant.
- Compiles the customer data and their query into a single text block (see the sketch after this section).

AI Responses (OpenAI Nodes)
- The assistant's prompt is carefully designed to guide the AI in providing accurate and relevant responses based on the customer's query and the data provided.
- Prompts describe the available functionalities, including which APIs to call and their specific purposes, helping to prevent "hallucinated" or irrelevant responses.

Memory and Context (Postgres Chat Memory)
- Stores context and messages in continuous sessions using a database, ensuring the chatbot maintains conversation history.

API Calls
- The workflow allows the use of APIs with any endpoints you choose, depending on your specific use case. This flexibility enables integration with various services tailored to your needs.
- The OpenAI Assistant understands JSON structures, and you can define in the prompt how the responses should be formatted. This allows you to structure responses neatly for the client, ensuring clarity and professionalism.
- Make sure to describe the purpose of each endpoint in the assistant's prompt to help guide the AI and prevent misinterpretation.

Customer Response Delivery
- After processing and querying APIs, the generated response is sent to the backend and ultimately delivered to the customer through WhatsApp Business.

Best Practices Implemented
- **Preventing Hallucinations**: Every API has a clear description in the prompt, ensuring the AI understands its intended use case.
- **Versatile Functionality**: The chatbot is modular and flexible, capable of handling both sales and general customer inquiries.
- **Context Persistence**: By utilizing persistent memory, the flow maintains continuous interaction context, which is crucial for longer conversations or follow-up queries.

Additional Recommendations
- Include practical examples in the assistant's prompt, such as frequently asked questions or decision-making flows based on API calls.
- Ensure all responses align with the customer's objectives (e.g., making a purchase or resolving technical queries).
- Log interactions in detail for future analysis and workflow optimization.

This workflow provides a solid foundation for a robust and multifunctional virtual assistant 🚀
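As a hedged sketch of the data-preparation step, here is how an Edit Fields-style Code node might merge the optional customer data and the message into one text block. The field names (name, age, dependents, message) are illustrative, taken from the examples above.

```javascript
// Hedged sketch: build one prompt string from customer data + message.
const { name, age, dependents, message } = $json; // illustrative field names
const customerContext = name
  ? `Customer data, for your awareness only (no reply needed): name=${name}, age=${age}, dependents=${dependents}.`
  : '';
return [{ json: { prompt: `${customerContext}\nCustomer message: ${message}`.trim() } }];
```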
by Ron
This flow monitors a file for changes to its content. If the file changes, an alert is sent out and you receive it as a push notification, SMS, or voice call via SIGNL4.

Use cases:
- Log-file monitoring
- Monitoring of production data
- Integration with third-party systems via a file interface
- Etc.

Sample file "alert-data.json":

```json
{
  "Body": "Alert in building A2.",
  "Done": false,
  "eventId": "2518088743722201372_4ee5617b-2731-4d38-8e16-e4148b8fb8a0"
}
```

- Body: The alert text to be sent.
- Done: If false, this is a new alert. If true, the alert has been closed.
- eventId: The last SIGNL4 event ID, written by SIGNL4.

This flow can be easily adapted for database monitoring as well.
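One way to detect content changes from a Code node is to hash the file and compare against the hash from the previous run. The sketch below is an assumption-laden illustration, not the template's actual node: the file path is a placeholder, and using the `fs`/`crypto` built-ins inside a Code node requires n8n's NODE_FUNCTION_ALLOW_BUILTIN setting to permit them.

```javascript
// Hedged sketch: flag whether the monitored file changed since last run.
const crypto = require('crypto');
const fs = require('fs');

const content = fs.readFileSync('/data/alert-data.json', 'utf8'); // placeholder path
const hash = crypto.createHash('sha256').update(content).digest('hex');

// Workflow static data persists between scheduled runs.
const staticData = $getWorkflowStaticData('global');
const changed = staticData.lastHash !== undefined && staticData.lastHash !== hash;
staticData.lastHash = hash;

return [{ json: { changed, alert: JSON.parse(content) } }];
```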
by Viktor Klepikovskyi
Advanced Retry and Delay Logic
This template provides a robust solution for handling API rate limits and temporary service outages in n8n workflows. It overcomes the limitations of the default node retry settings, which cap retries at 5 and delays at 5 seconds. By using a custom loop with a Set, If, and Wait node, this workflow gives you complete control over the number of retries and the delay between them.

Instructions:
1. Replace the placeholder HTTP Request node with your target node (the one that might fail).
2. In the initial Set Fields node, modify the max_tries value to set the total number of attempts for your workflow.
3. Adjust the delay_seconds value to define the initial delay between retries.
4. Optionally, configure the Edit Fields node to implement exponential backoff by adjusting the delay_seconds expression (e.g., {{$json.delay_seconds * 2}}); a sketch of this state update follows below.

For a more detailed breakdown and tutorial of this template, you can find additional information here.
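For clarity, here is a hedged sketch of the loop's per-iteration state update expressed as code. The `delay_seconds` and `max_tries` names come from the template's Set Fields node as described above; the attempt counter name is an assumption.

```javascript
// Hedged sketch: count the attempt and double the delay (exponential backoff).
const tries = ($json.tries ?? 0) + 1;               // counter name assumed
const delaySeconds = $json.delay_seconds * 2;       // e.g. 5 → 10 → 20 → 40
// The If node then compares tries against max_tries to decide
// whether to loop back through the Wait node or give up.
return [{ json: { ...$json, tries, delay_seconds: delaySeconds } }];
```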
by WeblineIndia
This n8n workflow automates the process of converting a newly stored PDF file from Google Drive into an HTML file and saving it back to Google Drive. The workflow is triggered whenever a new PDF is uploaded to a specific folder, ensuring seamless conversion and storage without any manual intervention.

This workflow provides an efficient, automated solution for converting PDFs to HTML, eliminating the need for manual file handling and ensuring a smooth document transformation process. It is particularly useful for scenarios where PDFs need to be dynamically converted and stored in an organised manner for web usage, archiving, or further processing.

Prerequisites
Before setting up this workflow, ensure the following:
- PDF.co API Key: Sign up at PDF.co and obtain an API key for PDF to HTML conversion.
- Proper Authentication: Ensure authentication is configured for Google Drive in n8n.

Customisation Options
- Modify the API request to convert PDFs to other formats like Text, CSV, or XML.
- Extend the IF Node to reject files based on size or other properties.
- Send a notification once the conversion is complete, using an Email or Telegram node.

Steps

Step 1: Google Drive Trigger Node (Watch for New Files)
1. Click "Add Node" and search for Google Drive.
2. Select "Google Drive Trigger" and add it to the workflow.
3. Authenticate with your Google account.
4. Select the folder to monitor.
5. Set the trigger to activate whenever a new file is added.
6. Click "Execute Node" to test, then click "Save".

Step 2: IF Node (Check if File is a PDF)
1. Click "Add Node" and search for IF.
2. Add a condition to check if the file extension is .pdf.
3. If true, send the file to the next step; if false, stop the workflow.
4. Click "Execute Node" to test, then click "Save".

Step 3: HTTP Request Node (Convert PDF to HTML)
1. Click "Add Node" and search for HTTP Request.
2. Set the Method to POST.
3. Enter the PDF.co API endpoint for PDF to HTML conversion.
4. In the Headers, add the API key obtained from PDF.co.
5. Send the binary PDF data as the request body.
6. Click "Execute Node" to test, then click "Save".

Step 4: Function Node (Convert Response to Binary)
1. Click "Add Node" and search for Function.
2. Write a JavaScript function to transform the API response into a binary file (see the sketch below).
3. Click "Execute Node" to test, then click "Save".

Step 5: Google Drive Node (Save Converted HTML File)
1. Click "Add Node" and search for Google Drive.
2. Select "Upload File" as the action.
3. Authenticate with your Google account.
4. Set the destination folder for storing the HTML file.
5. Map the binary data from the Function node.
6. Click "Execute Node" to test, then click "Save".

Step 6: Connect & Test the Workflow
1. Link the nodes in this order: Google Drive Trigger → IF Node → HTTP Request → Function Node → Google Drive Upload.
2. Run the workflow manually.
3. Upload a test PDF to Google Drive.
4. Check Google Drive for the converted HTML file.

Who's behind this?
WeblineIndia's AI development team. We've delivered 3500+ software projects across 25+ countries since 1999. From no-code automations to complex AI systems, our AI team builds tools that drive results. Looking to hire AI developers? Start with us.
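Here is a hedged sketch of the Step 4 function. It assumes the converter's HTML arrived as a string in a `body` field; PDF.co may instead return a URL to the result, in which case you would download it first.

```javascript
// Hedged sketch: wrap the conversion output as a binary file n8n can upload.
const html = $json.body ?? '';
const buffer = Buffer.from(html, 'utf8');
return [{
  json: {},
  binary: {
    // prepareBinaryData sets up the metadata Google Drive upload expects.
    data: await this.helpers.prepareBinaryData(buffer, 'converted.html', 'text/html'),
  },
}];
```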
by n8n Team
This automated workflow takes a Typeform form and, once it is filled out, automatically uploads the response as a lead in Pipedrive. It supports custom fields (this workflow uses company size) and leaves notes in the note section based on the questions answered.

Prerequisites
- Typeform account, Typeform credentials, and a form for people to fill out
- Pipedrive account and Pipedrive credentials

Nodes
- Typeform node gets the data after the survey is completed
- Set node extracts data from the Typeform node and keeps only the relevant fields
- Function node maps the company size (a hedged sketch follows below)
- Pipedrive node populates a pipeline with a deal and adds custom fields
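For illustration, the company-size mapping in the Function node could look like the sketch below. The size buckets and Pipedrive option IDs are placeholders, not the template's actual values.

```javascript
// Hedged sketch: translate the Typeform answer into a Pipedrive
// custom-field option ID (all values below are illustrative).
const sizeMap = {
  '1-10': 11,
  '11-50': 12,
  '51-200': 13,
  '201+': 14,
};
return items.map((item) => {
  item.json.companySizeOptionId = sizeMap[item.json.companySize] ?? null;
  return item;
});
```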
by Tom
This workflow shows a no-code approach to creating Salesforce accounts and contacts based on data coming from an Excel file. For Excel 365 (the online version of Microsoft Excel), check out this workflow instead.

To run the workflow:
1. Make sure your Salesforce account is authenticated with n8n.
2. Have a Microsoft Excel workbook with contacts and their account names ready. The workflow uses this example file, but you probably want to use your own data instead.
3. Hit the Execute Workflow button at the bottom of the n8n canvas.

Here is how it works:
1. The workflow first searches for existing Salesforce accounts by name.
2. It then branches out depending on whether the account already exists in Salesforce or not.
3. If an account does not exist yet, it will be created.
4. The data is then normalised before both branches converge again (see the sketch below).
5. Finally, the contacts are created or updated as needed in Salesforce.
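Although the template itself is no-code, the normalisation step is easier to reason about as code. Here is a hedged sketch of the idea: give every item the same shape regardless of whether its account already existed or was just created. All field names are assumptions for illustration.

```javascript
// Hedged sketch: unify both branches' items before contact upsert.
return $input.all().map((item) => {
  const { FirstName, LastName, Email, accountId, Id } = item.json; // assumed names
  return {
    json: {
      FirstName,
      LastName,
      Email,
      // Existing accounts carry accountId from the search;
      // freshly created ones carry the new record's Id.
      AccountId: accountId ?? Id,
    },
  };
});
```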
by rangelstoilov
This will send your GitHub notifications to a Discord webhook. Since GitHub doesn't send push notifications to mobile devices other than for @mentions, this is a great workaround to receive notifications on Discord. Using a GitHub trigger was not a good option, as there is no trigger for notifications, only events (which don't work on org repos). Using an HTTP request against the notifications API works much better.

Tagging a user in the message:
Replace the placeholder in the message with your Discord ID to get tagged when notifications are sent. To find your own ID, type in any channel a backslash followed by your username and the four-digit discriminator; you can copy this by clicking on your username next to your profile picture. Example: \@username#9999

Enjoy!
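For reference, here is a hedged sketch of the two HTTP calls the workflow chains together. GITHUB_TOKEN and DISCORD_WEBHOOK_URL are placeholders, as is the Discord user ID inside the `<@...>` mention.

```javascript
// Hedged sketch: poll the GitHub notifications API, forward to Discord.
const ghResponse = await fetch('https://api.github.com/notifications', {
  headers: {
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`, // placeholder
    Accept: 'application/vnd.github+json',
  },
});
const notifications = await ghResponse.json();

for (const n of notifications) {
  await fetch(process.env.DISCORD_WEBHOOK_URL, { // placeholder
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      // <@id> is Discord's mention syntax; swap in your own user ID.
      content: `<@123456789012345678> ${n.subject.title} (${n.repository.full_name})`,
    }),
  });
}
```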