by Yaron Been
Seraphina Design Tracy AI Generator

**Description:** None

**Overview**
This n8n workflow integrates with the Replicate API to run the seraphina-design/tracy model, which generates high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required parameters**
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

**Optional parameters**
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set it for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

**How to use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API reference**
- Model: seraphina-design/tracy
- API endpoint: https://api.replicate.com/v1/predictions (a request sketch follows below)

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
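For reference, the request the workflow's HTTP node sends to Replicate looks roughly like the sketch below. This is a minimal illustration outside of n8n, written in Python with the requests library; the REPLICATE_API_TOKEN environment variable, the model version ID, and the example input values are placeholders you must supply, and your account may use "Bearer" rather than "Token" auth.

```python
# Minimal sketch of the prediction-creation call this workflow wraps.
import os
import requests

REPLICATE_API_TOKEN = os.environ["REPLICATE_API_TOKEN"]
MODEL_VERSION = "<version-id-of-seraphina-design/tracy>"  # hypothetical placeholder

response = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={
        "Authorization": f"Token {REPLICATE_API_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "version": MODEL_VERSION,
        "input": {
            "prompt": "TRIGGER_WORD, portrait photo, studio lighting",  # include the trigger word
            "model": "dev",        # or "schnell" for ~4-step inference
            "go_fast": False,
            "lora_scale": 1,
            "megapixels": "1",
        },
    },
    timeout=30,
)
response.raise_for_status()
prediction = response.json()
print(prediction["id"], prediction["status"])  # e.g. "starting"
```

The returned prediction id is what the workflow's status-checking nodes poll until the generated output is ready.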
by Yaron Been
Paigedutcher2 Paige AI Generator

**Description:** Custom AI model trained on Paige: bold, curvy, confident energy. Think Barbie meets boss. Great for glam, fantasy, seductive, and influencer-style prompts. Use the trigger word CharacterPGE to activate the model.

**Overview**
This n8n workflow integrates with the Replicate API to run the paigedutcher2/paige model, which generates high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required parameters**
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

**Optional parameters**
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set it for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

**How to use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API reference**
- Model: paigedutcher2/paige
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by Yaron Been
Digitalhera Heranathalie AI Generator

**Description:** None

**Overview**
This n8n workflow integrates with the Replicate API to run the digitalhera/heranathalie model, which generates high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required parameters**
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

**Optional parameters**
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set it for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

**How to use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API reference**
- Model: digitalhera/heranathalie
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by Yaron Been
Ligua033 Lorealcantara AI Generator

**Description:** None

**Overview**
This n8n workflow integrates with the Replicate API to run the ligua033/lorealcantara model, which generates high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required parameters**
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

**Optional parameters**
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set it for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

**How to use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API reference**
- Model: ligua033/lorealcantara
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by Yaron Been
Settyan Flash V2.0.0 Beta.4 AI Generator

**Description:** None

**Overview**
This n8n workflow integrates with the Replicate API to run the settyan/flash-v2.0.0-beta.4 model, which generates high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval (a polling sketch follows below)
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required parameters**
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

**Optional parameters**
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set it for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

**How to use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API reference**
- Model: settyan/flash-v2.0.0-beta.4
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
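The "automated status checking and result retrieval" step in these Replicate workflows boils down to polling the prediction until it leaves the starting/processing states. Below is a hedged Python sketch of that loop with a simple timeout; the field names follow Replicate's documented prediction object, and the helper name wait_for_prediction is purely illustrative.

```python
# Rough illustration of the status-checking/result-retrieval loop.
import time
import requests

def wait_for_prediction(prediction_id: str, api_token: str,
                        interval: float = 2.0, max_wait: float = 300.0):
    url = f"https://api.replicate.com/v1/predictions/{prediction_id}"
    headers = {"Authorization": f"Token {api_token}"}
    deadline = time.time() + max_wait
    while time.time() < deadline:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        prediction = resp.json()
        if prediction["status"] == "succeeded":
            return prediction["output"]            # usually a list of image URLs
        if prediction["status"] in ("failed", "canceled"):
            raise RuntimeError(prediction.get("error") or prediction["status"])
        time.sleep(interval)                       # still "starting" or "processing"
    raise TimeoutError("Prediction did not finish in time")
```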
by Yaron Been
Settyan Flash V2.0.0 Beta.7 AI Generator

**Description:** None

**Overview**
This n8n workflow integrates with the Replicate API to run the settyan/flash-v2.0.0-beta.7 model, which generates high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required parameters**
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

**Optional parameters**
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set it for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

**How to use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API reference**
- Model: settyan/flash-v2.0.0-beta.7
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by Yaron Been
Settyan Flash V2.0.0 Beta.0 AI Generator

**Description:** None

**Overview**
This n8n workflow integrates with the Replicate API to run the settyan/flash-v2.0.0-beta.0 model, which generates high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required parameters**
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

**Optional parameters**
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set it for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom; rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

**How to use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API reference**
- Model: settyan/flash-v2.0.0-beta.0
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by Dahiana
Monitor website performance with PageSpeed Insights and save to Google Sheets with alerts

This n8n template automatically monitors website performance using Google's PageSpeed Insights API, compiles detailed reports, and tracks performance trends over time in Google Sheets.

**Use cases:** Agency client monitoring, competitor analysis, performance regression detection, SEO reporting, site migration monitoring, A/B testing performance impact, and maintaining performance SLAs.

**Who's it for**
- Digital agencies monitoring client websites
- SEO professionals tracking site performance
- DevOps teams maintaining performance SLAs
- Business owners wanting automated site monitoring

**How it works**
- **Automated testing:** Scheduled audits of multiple websites using the PageSpeed Insights API (see the audit sketch after this description)
- **Core Web Vitals:** Tracks LCP, FID, CLS, and overall performance scores
- **Historical tracking:** Maintains performance history for trend analysis
- **Alert system:** Sends notifications when performance drops below thresholds
- **Detailed reporting:** Captures specific recommendations and optimization opportunities

**Two workflow paths**
1. Scheduled audit: automatically tests all URLs from the Google Sheet on a schedule
2. On-demand testing: a webhook endpoint for immediate single-URL testing

**How to set up**
1. Get a free PageSpeed Insights API key from the Google Cloud Console
2. Create a Google Sheet with the columns URL, Site_Name, Category, Alert_Threshold, Last_Processed_Date, and Device
3. Set up Google Sheets API credentials
4. Configure notification preferences (Slack, email, etc.)
5. Set the audit schedule (daily, weekly, or custom)
6. Define performance thresholds for alerts

**Requirements**
- Google PageSpeed Insights API key (free)
- Google Sheets API access
- n8n instance (cloud or self-hosted)
- Optional: Slack/email for notifications

**Google Sheet structure**
- Input sheet ("sites"): URL, Site_Name, Category, Alert_Threshold, Last_Processed_Date, Device
- Results sheet ("audit_results"): Date, URL, Site_Name, Device, Performance_Score, LCP, FID, CLS, Recommendations, Full_Report_URL

**API usage (on-demand)**
POST to the webhook: { "url": "https://example.com", "site_name": "Example Site", "alert_threshold": 75 }

**How to customize**
- Add custom performance thresholds per site
- Include additional metrics (accessibility, SEO, best practices)
- Connect to other dashboards (Data Studio, Grafana)
- Add competitor benchmarking
- Integrate with project management tools for issue tracking
- Set up different notification channels based on severity

Sample Google Sheet included.
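To make the audit step concrete, here is a hedged Python sketch of the PageSpeed Insights v5 call the scheduled path performs for each row of the "sites" sheet, with the score compared against that row's Alert_Threshold. The API key is a placeholder, and the exact JSON paths for the Core Web Vitals are assumptions to verify against Google's actual response.

```python
# Sketch of one audit: call PageSpeed Insights and decide whether to alert.
import requests

API_KEY = "YOUR_PAGESPEED_API_KEY"   # placeholder

def audit(url: str, device: str = "mobile", alert_threshold: int = 75) -> dict:
    resp = requests.get(
        "https://www.googleapis.com/pagespeedonline/v5/runPagespeed",
        params={"url": url, "key": API_KEY, "strategy": device, "category": "performance"},
        timeout=120,
    )
    resp.raise_for_status()
    data = resp.json()
    lighthouse = data["lighthouseResult"]
    score = round(lighthouse["categories"]["performance"]["score"] * 100)
    return {
        "url": url,
        "device": device,
        "performance_score": score,
        "lcp": lighthouse["audits"]["largest-contentful-paint"]["displayValue"],
        "cls": lighthouse["audits"]["cumulative-layout-shift"]["displayValue"],
        # FID comes from field data when Google has it, not the lab report:
        "fid_ms": data.get("loadingExperience", {}).get("metrics", {})
                      .get("FIRST_INPUT_DELAY_MS", {}).get("percentile"),
        "alert": score < alert_threshold,   # drives the Slack/email notification branch
    }
```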
by Max Tkacz
**Who is this template for?**
This workflow template is designed for Sales and Customer Success professionals seeking alerts when potential high-value users, prospects, or existing customers register for a Discourse community. Leveraging Clearbit, it retrieves enriched data for the new member to assess their value.

Example result in Slack

**How it works**
Each time a new member is created in Discourse, the workflow runs (powered by Discourse's native Webhooks feature). After filtering out popular private email accounts, we run the member's email through Clearbit to fetch available information on the member as well as their organization (a rough code sketch of this filtering and enrichment step follows below). If the enriched data meets certain criteria, we send a Slack message to a channel. This message has a few quick actions: Open LinkedIn profile and Email member.

**Setup instructions**
An overview is below. Watch this 🎥 quick set up video for detailed instructions on how to get the template running, as well as how to customize it.
1. Complete the Set up credentials step when you first open the workflow. You'll need a Discourse (admin user), Clearbit, and Slack account.
2. Set up the Webhook in Discourse, linking the On new Discourse user Trigger with your Discourse community.
3. Set the correct channel to send to in the Post message in channel step.
4. After testing your workflow, swap the Test URL to the Production URL in Discourse and activate your workflow.

Template was created in n8n v1.29.1
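As a rough illustration of the filtering-plus-enrichment logic (not the template's exact nodes), the Python sketch below skips common personal email domains and then queries Clearbit's combined person/company endpoint. The endpoint, auth header, and the "50+ employees" criterion are assumptions; adjust them to whatever "high-value" means for your team.

```python
# Illustrative sketch of the "filter private emails, then enrich" step.
import requests

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com", "icloud.com"}
CLEARBIT_KEY = "sk_your_clearbit_key"   # placeholder

def enrich_member(email: str):
    domain = email.split("@")[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:
        return None                      # mirrors the "filter out private emails" step
    resp = requests.get(
        "https://person.clearbit.com/v2/combined/find",
        params={"email": email},
        headers={"Authorization": f"Bearer {CLEARBIT_KEY}"},
        timeout=30,
    )
    if resp.status_code == 404:
        return None                      # Clearbit has no data on this person
    resp.raise_for_status()
    data = resp.json()
    company = data.get("company") or {}
    # Example "high-value" criterion (assumption): company with 50+ employees.
    if (company.get("metrics") or {}).get("employees", 0) >= 50:
        return data                      # would trigger the Slack alert
    return None
```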
by Jonathan
**How it works**
This workflow watches a selected Google Drive folder for any images added to it. It takes each new image and sends it to the tinypng.com service, which optimises and reduces its size (where possible). Tinypng returns the updated image, which is then automatically saved in your chosen Google Drive folder (see the API sketch after this description).

**Setting things up**
It's pretty simple to configure and should only take around 5-10 minutes. You only need to set up credentials for Google Drive and Tinypng.com. For Tinypng.com you can sign up for their free-tier API access, which gives you 500 optimisations per month. Once you have those two things, you just need to choose your 'input' folder to watch for images and your 'output' folder where the optimised images should be stored. There are a few optional extras, such as controlling the naming of your final image, and there's lots more you could do with the Tinypng API for more advanced image optimisation.
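For context, the round trip the workflow makes to Tinypng looks roughly like the Python sketch below: upload the raw image bytes to the shrink endpoint using HTTP Basic auth (username literally "api", password your API key), then download the optimised file. Treat the response-handling details as assumptions to confirm against Tinypng's current API docs.

```python
# Minimal sketch of one Tinypng optimisation round trip.
import requests

TINYPNG_API_KEY = "YOUR_TINYPNG_KEY"    # free tier: ~500 optimisations/month

def compress_image(image_bytes: bytes) -> bytes:
    resp = requests.post(
        "https://api.tinify.com/shrink",
        auth=("api", TINYPNG_API_KEY),
        data=image_bytes,
        timeout=60,
    )
    resp.raise_for_status()
    compressed_url = resp.json()["output"]["url"]   # location of the optimised file
    return requests.get(compressed_url, auth=("api", TINYPNG_API_KEY), timeout=60).content

# In the workflow, the input bytes come from the watched Google Drive folder and
# the returned bytes are uploaded to the 'output' folder you choose.
```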
by Milorad Filipovic
This workflow translates all PDF documents from a specified Google Drive folder into the desired language. Translated files are automatically uploaded back to the original folder.

**Required accounts**
1️⃣ Google Drive account
2️⃣ DeepL developer account and API key

**How to set up**
1️⃣ Set your Google Drive folder URL, target language, and source language in the configuration node
2️⃣ Connect your Google Drive account in all Google Drive nodes
3️⃣ Set up the HTTP header credentials used by the HTTP nodes in the template (replace yourAuthKey with your DeepL API key)
4️⃣ Set your DeepL header credentials in all HTTP nodes (a sketch of these DeepL calls follows below)
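The HTTP nodes follow DeepL's document-translation flow: upload the PDF, poll until translation is done, then download the result. Below is a hedged Python sketch of that flow against the free-tier base URL; endpoint paths and field names are based on DeepL's public document API and should be verified, and yourAuthKey is the same placeholder used in the template. Swap the base URL to api.deepl.com if you are on a paid plan.

```python
# Hedged sketch of the upload -> poll -> download flow the HTTP nodes implement.
import time
import requests

DEEPL_AUTH = {"Authorization": "DeepL-Auth-Key yourAuthKey"}  # same placeholder as the template
BASE = "https://api-free.deepl.com/v2"

def translate_pdf(pdf_bytes: bytes, filename: str, target_lang: str = "DE") -> bytes:
    # 1) Upload the document.
    up = requests.post(
        f"{BASE}/document",
        headers=DEEPL_AUTH,
        data={"target_lang": target_lang},
        files={"file": (filename, pdf_bytes, "application/pdf")},
        timeout=60,
    )
    up.raise_for_status()
    doc = up.json()                                  # contains document_id and document_key
    # 2) Poll the translation status.
    while True:
        status = requests.post(
            f"{BASE}/document/{doc['document_id']}",
            headers=DEEPL_AUTH,
            data={"document_key": doc["document_key"]},
            timeout=30,
        ).json()
        if status["status"] == "done":
            break
        if status["status"] == "error":
            raise RuntimeError(status)
        time.sleep(5)
    # 3) Download the translated PDF (the workflow uploads it back to the Drive folder).
    result = requests.post(
        f"{BASE}/document/{doc['document_id']}/result",
        headers=DEEPL_AUTH,
        data={"document_key": doc["document_key"]},
        timeout=60,
    )
    result.raise_for_status()
    return result.content
```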
by Niklas Hatje
**Use case**
When collecting leads via an online form, you often need to manually add those new leads to your Pipedrive CRM. This not only takes a lot of time but is also error-prone. This workflow automates that tedious work for you.

**What this workflow does**
1. The workflow is triggered each time a form is submitted in n8n.
2. It validates the email address using Hunter.io.
3. If the email is valid, the workflow checks for an existing person with that email in Pipedrive.
4. If no existing person is found, it uses Clearbit to enrich the person's information.
5. It then checks whether the person's organization already exists in Pipedrive, creating a new organization if necessary.
6. The workflow then registers the person in Pipedrive.
7. Lastly, it creates a lead in Pipedrive using information from the person and organization.

A minimal code sketch of steps 2 and 3 appears after this description.

**Setup**
This workflow is very quick to set up:
1. Add your Hunter.io, Clearbit, and Pipedrive credentials.
2. Click the test workflow button.
3. Activate the workflow and use the form trigger's production URL to collect your leads in a smart way.

**How to adjust it to your needs**
- Exchange the n8n form trigger with your form of choice (Typeform, Google Forms, SurveyMonkey...)
- Add filter criteria to only add new leads if they match certain requirements
- Remove the email check with Hunter.io if you don't own this tool and expect new form submissions to have a correct email anyway
- Add ways to handle invalid emails or existing persons
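As that minimal sketch of steps 2 and 3, the Python below shows the two API calls (Hunter.io email verification and the existing-person lookup in Pipedrive) outside of n8n. The keys are placeholders, and the response field names are assumptions based on both vendors' public APIs; verify them against your own responses.

```python
# Sketch of the email-validation and duplicate-check decision points.
import requests

HUNTER_KEY = "YOUR_HUNTER_KEY"            # placeholder
PIPEDRIVE_TOKEN = "YOUR_PIPEDRIVE_TOKEN"  # placeholder

def email_is_valid(email: str) -> bool:
    resp = requests.get(
        "https://api.hunter.io/v2/email-verifier",
        params={"email": email, "api_key": HUNTER_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"] == "deliverable"

def find_existing_person(email: str):
    resp = requests.get(
        "https://api.pipedrive.com/v1/persons/search",
        params={"term": email, "fields": "email", "exact_match": "true",
                "api_token": PIPEDRIVE_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["data"]["items"]
    return items[0]["item"] if items else None   # None -> enrich with Clearbit and create
```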