by Davide
1. How it Works This n8n workflow automates fine-tuning OpenAI models through these key steps: **Manual Trigger**: Starts with the "When clicking ‘Test workflow’" event to initiate the process. **Google Drive**: Downloads the .jsonl training file from Google Drive. **Upload to OpenAI**: Uploads the .jsonl file to OpenAI via the "Upload File" node (with purpose "fine-tune"). **Create Fine-tuning Job**: Sends a POST request to the endpoint https://api.openai.com/v1/fine_tuning/jobs with: { "training_file": "{{ $json.id }}", "model": "gpt-4o-mini-2024-07-18" } OpenAI automatically starts training the model based on the provided file. **Interaction with the Trained Model**: An "AI Agent" uses the custom model (e.g., ft:gpt-4o-mini-2024-07-18:n3w-italia::XXXX7B) to respond to chat messages. 2. Set up Steps To configure the workflow: Prepare the Training File: Create a .jsonl file following the specified syntax (e.g., travel assistant Q/A examples). Upload it to Google Drive and update the ID in the "Google Drive" node. Configure Credentials: Google Drive: Connect an account via OAuth2 (googleDriveOAuth2Api). OpenAI: Add your API key in the "OpenAI Chat Model" and "Upload File" nodes. Customize the Model: In the "OpenAI Chat Model" node, specify the name of your fine-tuned model (e.g., ft:gpt-4o-mini-...). Update the HTTP request body (Create Fine-tuning Job) if needed (e.g., a different base model). Start the Workflow: Use the manual trigger ("Test workflow") to begin the upload and training process. Test the model via the "Chat Trigger" (chat messages). Integrated Documentation: Follow the instructions in the Sticky Notes to: Properly format the .jsonl (Step 1). Monitor progress on OpenAI (Step 2, link: https://platform.openai.com/finetune/). Note: Ensure the .jsonl file adheres to OpenAI’s required structure and that credentials are valid.
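For reference, here is a hedged sketch of what a single line of the training .jsonl might look like for the travel-assistant example. The messages structure follows OpenAI's documented chat fine-tuning format; the system prompt and Q/A content below are placeholders, not taken from the template.

```json
{"messages": [{"role": "system", "content": "You are a helpful travel assistant."}, {"role": "user", "content": "What documents do I need to visit Italy?"}, {"role": "assistant", "content": "EU citizens only need a valid ID card or passport; travellers from other countries should check the visa requirements for their nationality."}]}
```

Each line of the file is one self-contained JSON object like this; the "Upload File" node then sends the whole file to OpenAI with purpose "fine-tune".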
by Harshil Agrawal
This workflow demonstrates the use of the Split In Batches node and the Wait node to avoid API rate limits. Customer Datastore node: The workflow fetches data from the Customer Datastore node. Based on your use case, replace it with a relevant node. Split In Batches node: This node splits the incoming items into batches (a single item per batch by default). Based on the API limit, you can configure the Batch Size. HTTP Request node: This node makes API calls to a placeholder URL. If the Split In Batches node returns 5 items, the HTTP Request node will make 5 different API calls. Wait node: This node will pause the workflow for the time you specify. On resume, the Split In Batches node is executed again, and the next batch is processed. Replace Me (NoOp node): This node is optional. If you want to continue your workflow and process the items, replace this node with the corresponding node(s).
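For readers who prefer to throttle inside a single node instead of wiring up the loop, here is a hedged Code-node sketch of the same batching-and-waiting idea. The endpoint URL, batch size, and pause are placeholders, and the helper calls assume n8n's standard Code-node environment; the template itself uses the Split In Batches and Wait nodes described above.

```javascript
// Sketch only: call a placeholder API for each item, pausing between batches
// to stay under a rate limit (mirrors Split In Batches + Wait).
const batchSize = 5;        // analogous to the Batch Size setting
const pauseMs = 30 * 1000;  // analogous to the Wait node duration

const items = $input.all();
const results = [];

for (let i = 0; i < items.length; i += batchSize) {
  for (const item of items.slice(i, i + batchSize)) {
    // Placeholder URL, replace with your real endpoint.
    const response = await this.helpers.httpRequest({
      url: 'https://example.com/api/endpoint',
      method: 'POST',
      body: item.json,
    });
    results.push({ json: response });
  }
  if (i + batchSize < items.length) {
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
}

return results;
```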
by Samir Saci
Tags: Automation, Finance, Google Sheets, API Note: This workflow uses the ExchangeRate API and requires a valid API key. Context I’m a Supply Chain Data Scientist who builds automations to streamline operations, reduce manual tasks, and boost decision-making through real-time data. In this workflow, I automated the task of fetching live currency exchange rates, updating a Google Sheet with the latest values, and archiving historical records — all without writing any code. > Improve your productivity by automating admin tasks with n8n! 📬 For business inquiries, you can add me on LinkedIn Who is this template for? This template is perfect for: **Finance teams** tracking multi-currency cashflows **Analysts** building dashboards or models requiring updated FX data **Anyone working with spreadsheets** who needs up-to-date exchange rates It updates: A live sheet with the latest USD-based exchange rates An archive tab to track historical changes over time How does it work? This workflow runs in n8n and performs the following steps: 🌐 Calls the ExchangeRate API to get the latest rates based on USD 🧠 Extracts and formats key fields: base currency, timestamp, and conversion values 📊 Updates a main Google Sheet with the latest data (using upsert logic) 🗂️ Appends all rates to a second Google Sheet tab for historical tracking You can schedule this workflow to run daily, hourly, or on-demand. What do I need to start? You don’t need to write a single line of code. Prerequisites: A Google Sheet with two tabs: Rate Sheet and Archives (a link to a publicly available example is provided in the template) A valid ExchangeRate API key **Google Sheets API** connected via OAuth2 Next Steps Use the sticky notes in the workflow to understand how to: Add your ExchangeRate API key Map the fields to match your Google Sheet layout Schedule the run frequency using the Cron node Optionally add Slack or email alerts if the base rate changes For more information, check my tutorial: 🎥 Watch My Tutorial 🚀 Want to build finance automation workflows like this? 📬 Let’s connect on LinkedIn Notes You can adapt this template for other currencies by changing the API endpoint This workflow was built using *n8n 1.85.4* Submitted: April 15th, 2025
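To make the "extract and format key fields" step more concrete, here is a hedged Code-node sketch that flattens an ExchangeRate API response into one row per currency. The field names (base_code, conversion_rates, time_last_update_utc) are assumptions based on the public v6 API and should be checked against the endpoint you actually call.

```javascript
// Hypothetical sketch: flatten an ExchangeRate API response into rows for the
// "Rate Sheet" and "Archives" tabs. Field names are assumptions and may differ.
const data = $input.first().json;

const rows = Object.entries(data.conversion_rates ?? {}).map(([currency, rate]) => ({
  json: {
    base: data.base_code,                  // e.g. "USD"
    currency,                              // e.g. "EUR"
    rate,                                  // e.g. 0.92
    updated_at: data.time_last_update_utc, // timestamp string from the API
  },
}));

return rows; // one item per currency, ready for the Google Sheets nodes
```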
by Aayushman Sharma
Sync Youtube Videos with Google Sheets (Part 1 of the YouTube comment sentiment analysis automation with detailed dashboard) This workflow is the first part of a multi-part automation system designed to perform large-scale YouTube comment sentiment analysis along with a detailed dashboard. It solves the problem of manually tracking new videos across multiple YouTube channels by automatically fetching and organizing video URLs in a Google Sheet, setting the stage for deeper analysis in Part 2. What It Does Reads Channel IDs from Sheet3 of a connected Google Sheet. Fetches the latest videos from each Channel ID using the YouTube Data API. Extracts video URLs and metadata (like title and publish date). Appends the video data to Sheet2 of the same Google Sheet — this sheet is later used by Part 2 for further processing. Part of a Multi-Step System This is Part 1 of a 2-workflow system: **Part 1 (this workflow)** populates a sheet with the latest videos from a list of channels. **Part 2** reads the video URLs from Sheet2, fetches comments for each video, analyzes their sentiment using **OpenAI**, and stores structured results in Sheet1. 👉 Continue to Part 2 – YouTube Comment Sentiment Analyzer with Google Sheets & OpenAI ✅ Use Cases Monitor and organize new videos from a list of YouTube channels Automate content pipelines for social media teams and analysts Build scalable datasets for comment and sentiment analysis Perfect for creators, agencies, or data analysts managing multiple YouTube accounts 🔧 Apps Used **Google Sheets** – To read and write channel/video data **YouTube** – To fetch video data from public channels 💡 Why Use This? Manually checking YouTube channels for new content is time-consuming and error-prone. This automation ensures your data stays current and structured — enabling consistent tracking and deeper analysis (especially when paired with Part 2). It brings speed, scale, and automation to your YouTube content operations. How to Customize 1. Modify Trigger Settings Change the Google Sheet (Sheet 3) channel ID entry to track other channels. Use a time-based trigger to fetch new videos regularly, ensuring your data stays up to date. 2. Adjust Output Fields Fetch additional details from YouTube, such as view count, description, or thumbnails. Add custom columns in Sheet 2 for organizing videos by different criteria, such as: "Published Date" "Video Type" "View Count" "Video Description" 3. Extend with Integrations Integrate with other workflows like YouTube Comment Sentiment Analysis (Part 2) for a deeper dive into content analysis. Use filters to fetch videos by certain tags, keywords, or publish dates. 4. Adjust Sheet Structure Modify the structure of Sheet 2 to categorize videos based on criteria like: Channel Video Status (e.g., "Published," "Scheduled") Video Type (e.g., "Tutorial," "Review") 5. Schedule Regular Fetching Set a schedule trigger to fetch videos at regular intervals (e.g., daily or weekly), ensuring new content is automatically added to your sheet. 6. Customize Google Sheet Layout Change the layout of Sheet 2 to better fit your needs. For example, you can add additional columns for any extra metadata you want to track.
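As an illustration of the "fetch latest videos" step, here is a hedged sketch using the standard YouTube Data API v3 search endpoint for one channel ID. The workflow itself may use the built-in YouTube node instead, and the API key, field names, and result limit below are placeholders.

```javascript
// Hypothetical sketch of fetching the latest videos for one channel ID read from Sheet3.
const channelId = $json.channelId;      // assumed column name in Sheet3
const apiKey = 'YOUR_YOUTUBE_API_KEY';  // placeholder

const response = await this.helpers.httpRequest({
  url: 'https://www.googleapis.com/youtube/v3/search',
  qs: {
    part: 'snippet',
    channelId,
    order: 'date',
    maxResults: 10,
    type: 'video',
    key: apiKey,
  },
});

// One row per video for Sheet2: URL, title, publish date.
return (response.items ?? []).map((item) => ({
  json: {
    videoUrl: `https://www.youtube.com/watch?v=${item.id.videoId}`,
    title: item.snippet.title,
    publishedAt: item.snippet.publishedAt,
  },
}));
```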
by Sascha
Automating your marketing campaign management process can streamline your workflow and save you valuable time. With the combination of Baserow and n8n, you can efficiently handle your campaign data and seamlessly publish content to your Shopify store. In this workflow template, I demonstrate how to leverage Baserow as a centralized platform for organizing your marketing campaign assets, including copy and images. By utilizing n8n, we automate the process of fetching images and campaign descriptions from Baserow and uploading them directly to your Shopify store. With this automated solution, you can expedite the publishing process, ensuring that your campaigns are launched swiftly across your sales channels. Additionally, this workflow serves as a foundational step towards further automation in campaign management, allowing you to dynamically generate and upload content to your Shopify store with ease. This template will help you: Use n8n to get images for marketing campaigns from Baserow and upload them to your Shopify media library Dynamically inject data from Baserow into a template file Upload a template file to your Shopify theme This template will demonstrate the following concepts in n8n: use the Webhook node use the IF node to control the execution flow of the workflow do time calculations using expressions and JavaScript use the GraphQL node to upload images to your Shopify media files create a dynamic template file for your Shopify theme use the HTTP Request node to upload your template file to your Shopify store How to get started? Create a custom app in Shopify to get the credentials needed to connect n8n to Shopify; this is needed for the Shopify Trigger. Create Shopify Access Token API credentials in n8n for the Shopify trigger node. Create Header Auth credentials: use X-Shopify-Access-Token as the name and the Access Token from the Shopify app you created as the value. The Header Auth is necessary for the GraphQL nodes. You will need a running Baserow instance for this. You can also sign up for a free account at https://baserow.io/ Please make sure to read the notes in the template. For a detailed explanation please check the corresponding video: https://youtu.be/Ky-dYlljGiY
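As a rough illustration of the final "upload your template file" step, the sketch below shows the kind of HTTP Request the workflow performs against Shopify's Admin REST Asset API. The API version, theme ID, asset key, and field names are placeholders and assumptions rather than values from the template; the authoritative setup is in the template notes and the linked video.

```javascript
// Hypothetical sketch of uploading a dynamically generated template file to a Shopify theme.
const shop = 'your-store.myshopify.com'; // placeholder store domain
const themeId = '123456789';             // placeholder theme ID

const response = await this.helpers.httpRequest({
  method: 'PUT',
  url: `https://${shop}/admin/api/2024-01/themes/${themeId}/assets.json`,
  headers: {
    // Same token used for the Header Auth credential described above.
    'X-Shopify-Access-Token': 'YOUR_ACCESS_TOKEN',
    'Content-Type': 'application/json',
  },
  body: {
    asset: {
      key: 'templates/page.campaign.json', // which theme file to create or overwrite
      value: $json.renderedTemplate,        // assumed field holding the template built from Baserow data
    },
  },
});

return [{ json: response }];
```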
by joseph
🧵 Generate Conversational Twitter/X Threads with GPT-4o AI (n8n Workflow) This workflow uses OpenAI (GPT-4o) and Twitter/X to automatically generate and publish engaging, conversational threads in response to a trigger (e.g., from a chatbot or form). 🚀 What Does It Do? Listens for an incoming message (e.g., via webhook or another n8n input). Uses GPT-4o to craft a narrative-style Twitter thread in a personal, friendly tone. Publishes the first tweet, then automatically posts each following tweet as a reply, building a full thread. 🛠️ What Do You Need to Configure? Before using this template, make sure to set up the following credentials: OpenAI Add your OpenAI API key in the OpenAI Chat Model node. This is used to generate the thread content. Twitter/X Add your Twitter OAuth2 credentials to the First Tweet and Thread Reply nodes. This allows the workflow to publish tweets on your behalf. ✨ Who Is This For? This template is perfect for: Content creators who want to share ideas regularly Personal brands looking to grow their presence Social media managers automating thread creation 🔧 How to Customize It You can easily adjust the tone, structure, or length of the threads by modifying the system prompt in the OpenAI node. For example: To create threads with humor, change the prompt to “Write in a witty and humorous tone.” To tailor it for marketing, prompt it with “Write a persuasive product-focused Twitter thread.” You can also integrate this workflow with: Telegram bots Web forms (e.g., Typeform, Tally) CRM tools or newsletter platforms 📋 Sample Output Prompt sent to the workflow: “Tips for growing on Twitter in 2025” Generated thread: ++Tweet 1:++ Thinking of growing your presence on Twitter/X in 2025? Here's a thread with the most effective strategies that actually work 🧵 ++Reply 1:++ Engage, don’t broadcast Twitter is a conversation platform. Reply to others, quote-tweet, and start discussions instead of just posting links. ++Reply 2:++ Consistency beats virality Tweeting regularly builds trust and visibility. You don't need to go viral — just show up.
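If you want to see how a generated thread can be broken into individual tweets before posting, here is a hedged sketch of a Code-node step. It assumes the system prompt asks GPT-4o to separate tweets with a "---" delimiter and that the model output arrives in a field named output; both are assumptions, since the template's actual prompt and field names may differ.

```javascript
// Hypothetical sketch: split the model's thread text into individual tweets.
// tweets[0] would go to the First Tweet node; the rest to the Thread Reply node,
// each one replying to the previously posted tweet's ID.
const threadText = $json.output ?? ''; // assumed output field name from the OpenAI node

const tweets = threadText
  .split('---')
  .map((t) => t.trim())
  .filter((t) => t.length > 0 && t.length <= 280); // drop empties and over-length tweets

return tweets.map((text, index) => ({
  json: { text, position: index }, // position 0 = first tweet, 1+ = replies
}));
```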
by Tom
This workflow uses a number of technologies to track the value of ETFs, stocks and other exchange-traded products: Baserow: To keep track of our investments n8n’s Cron node: To trigger the workflow compiling our daily morning briefing Webscraping: The HTTP Request & HTML Extract nodes to fetch up-to-date prices from the relevant stock exchange and structure this information JavaScript: We’ll use the Function node to build a custom HTML body with all the relevant information Sendgrid: The Email Service Provider in this workflow to send out our email Thanks to n8n, the steps in this workflow can easily be changed. Not a Sendgrid user? Simply remove the Sendgrid node and add a Gmail node instead. The stock exchange has a REST API? Just throw away the HTML Extract node. Here’s how it works: Data Source In this scenario, our data source is Baserow. In our table, we’ll track all information needed to identify each investment product: We have two text type columns (Name and ISIN) as well as two number type columns (Count and Purchase Price). Workflow Nodes 1. Cron The Cron node will trigger our workflow to run each workday in the morning hours. 2. Baserow The Baserow node will fetch our investments from the database table shown above. 3. HTTP Request Using the HTTP Request node we can fetch live data from the stock exchange of our choice based on the ISIN. This example uses Tradegate, which is used by many German fintechs. The basic approach should also work for other exchanges, as long as they provide the required data to the public. 4. HTML Extract Since our HTTP Request node fetches full websites, we’re using the HTML Extract node to extract the information we’re looking for from each website. If an exchange other than Tradegate is used, the selectors used in this node will most likely need to be updated. 5. + 6. Set The Set nodes help with setting the exact columns we’ll use in our table. In this case we’re first formatting the results from our exchange, then calculating the changes based on the purchase price. 7. Function Here we’re using a bit of JavaScript to build an HTML email. This is where any changes to the email content would have to be made. 8. Sendgrid Finally we send out the email built in the previous step. This is where you can configure sender and recipients. Result The basic email generated by this workflow lists each investment with its current price and the change compared to the purchase price.
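To show where email-content changes would be made, here is a hedged sketch of what the Function node in step 7 might look like. The field names (name, isin, currentPrice, purchasePrice, count) are assumptions about how the earlier Set nodes shaped the data, and the layout is deliberately minimal.

```javascript
// Hypothetical Function-node sketch: build an HTML table for the morning briefing email.
const rows = items.map(({ json }) => {
  const value = json.count * json.currentPrice;
  const change = ((json.currentPrice - json.purchasePrice) / json.purchasePrice) * 100;
  return `<tr>
    <td>${json.name}</td>
    <td>${json.isin}</td>
    <td>${json.currentPrice.toFixed(2)} €</td>
    <td>${value.toFixed(2)} €</td>
    <td>${change.toFixed(1)} %</td>
  </tr>`;
});

const html = `
  <h2>Daily portfolio briefing</h2>
  <table border="1" cellpadding="4" cellspacing="0">
    <tr><th>Name</th><th>ISIN</th><th>Price</th><th>Value</th><th>Change</th></tr>
    ${rows.join('\n')}
  </table>`;

return [{ json: { html } }];
```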
by Trung Tran
🤖 Smart Interview Assistant: Tailored Questions Based on CV, JD, and Round Watch the demo video below: 📌 Who’s it for This workflow is designed for: **Recruiters** and **Talent Acquisition Specialists** who want to automate candidate interview prep. **Hiring Managers** conducting multiple interviews and needing personalized question sets. **Technical Interviewers** who want to save time and be well-prepared with relevant questions. ⚙️ How it works / What it does The Smart Interview Assistant automates the interview preparation process in a few clicks: Accepts: Multiple resumes (PDFs) Selected job role Chosen interview round Extracts structured data from: The candidate’s CV The corresponding Job Description (JD) Uses GPT-4 to analyze: Candidate profile Role requirements Interview round context Generates: Tailored interview questions Expected answers A summarized interview prep report Sends the report directly to the hiring team via email (SMTP) 📁 Google Drive Structure 📂 Root Folder ├── 📁 jd/ # Stores all job descriptions in PDF format │ ├── Backend_Engineer.pdf │ ├── Azure_DevOps_Lead.pdf │ └── ... └── 📄 Positions (Google Sheet) # Maps Job Role ↔ JD File Link 📝 Sample Mapping Sheet: Positions Sheet Columns: Job Role Job Description File URL (pointing to PDF in jd/ folder) 🛠️ How to Set Up Step 1: Configure API Integrations ✅ Connect your OpenAI GPT-4 API Key ✅ Enable Google Cloud APIs: Google Sheets API (to read job roles) Google Drive API (to access CV and JD files) ✅ Set up SMTP credentials (for email delivery) Step 2: Prepare Google Drive & Mapping Sheet Create a root folder on Google Drive Inside the root folder: Create a folder named /jd/ and upload all job descriptions (PDFs) Create a Google Sheet named Positions with the following format: | Job Role | Job Description File URL | |-----------------------------|--------------------------------------------| | Azure DevOps Engineer | https://drive.google.com/xxx/jd1.pdf | | Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf | Step 3: Build the Application Form Use any form tool (e.g., Typeform, Tally, or custom HTML) that collects: 📎 Resume file (PDF) 🧾 Job Role (dropdown) 🔄 Interview Round (dropdown) Step 4: Resume & JD Extraction 🔍 Use Extract from PDF to parse the resume content 📄 Retrieve the JD link from the Positions sheet based on the selected Job Role 🔗 Use Download file to pull the PDF for processing Step 5: Analyze with GPT-4 Run both Resume and JD through a Profile Analyzer Agent (GPT-4 with JSON output) Merge results Add manual input or mapping for the Interview Round metadata Step 6: Generate Interview Report Use a second GPT-4 agent (e.g., HR Expert Agent) to: Generate 6–8 tailored interview questions Include expected answers and rationale Step 7: Deliver Final Report Format the content as: 📄 PDF (optional) 📨 Email body Send the report to the recruiter, hiring manager, or interviewer via SMTP ✅ Requirements 🔑 OpenAI GPT-4 API Key 📁 Google Drive (for resume and JD storage) 📊 Google Sheet (job role mapping) 📬 SMTP credentials (host, username, password) 🧰 n8n self-hosted or cloud instance with: PDF Parser Google Sheets node HTTP Download node Email node ✏️ How to Customize the Workflow | Part | Customization Options | |----------------------------|-------------------------------------------------------------| | Form UI | Modify the design, dropdown options, or input validations | | Job Description Source | Replace Google Sheet with Notion, Airtable, or database | | Interview Metadata | Add job level, region, or language preference | | AI Prompt Tuning | Adjust prompt phrasing or temperature in GPT nodes | | Report Format | Generate PDF instead of email body using PDF node | | Delivery Method | Add internal HR portal webhook or generate downloadable link |
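For orientation, here is one possible JSON shape the HR Expert agent in Step 6 could be asked to return. The field names, candidate, role, and sample question are purely illustrative and are not part of the template; adjust the agent's output schema to whatever your report format needs.

```json
{
  "candidate": "Jane Doe",
  "role": "Azure DevOps Engineer",
  "round": "Technical Round 1",
  "questions": [
    {
      "question": "How would you design a CI/CD pipeline for a containerized .NET service on Azure?",
      "expected_answer": "Covers build and test stages, a container registry, and deployment to AKS or App Service with approvals.",
      "rationale": "The JD emphasizes pipeline ownership; the CV lists AKS experience but no release management."
    }
  ]
}
```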
by Yaron Been
Citoreh Nazanin AI Generator Overview This n8n workflow integrates with the Replicate API to use the citoreh/nazanin model. This powerful AI model can generate high-quality image content based on your inputs. Features Easy integration with Replicate API Automated status checking and result retrieval Support for all model parameters Error handling and retry logic Clean output formatting Parameters Required Parameters **prompt** (string): Prompt for generated image. If you include the trigger_word used in the training process you are more likely to activate the trained object, style, or concept in the resulting image. Optional Parameters **mask** (string, default: None): Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored. **seed** (integer, default: None): Random seed. Set for reproducible generation **image** (string, default: None): Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored. **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps but the schnell model only needs 4 steps. **width** (integer, default: None): Width of generated image. Only works if aspect_ratio is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation **height** (integer, default: None): Height of generated image. Only works if aspect_ratio is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation **go_fast** (boolean, default: False): Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16 **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars' **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora. **megapixels** (string, default: 1): Approximate number of megapixels for generated image How to Use Set up your Replicate API key in the workflow Configure the required parameters for your use case Run the workflow to generate image content Access the generated output from the final node API Reference Model: citoreh/nazanin API Endpoint: https://api.replicate.com/v1/predictions Requirements Replicate API key n8n instance Basic understanding of image generation parameters
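To make the integration more concrete, here is a hedged sketch of the underlying Replicate call for this model, written as an n8n Code-node snippet. The model version hash, API token, and prompt are placeholders, and the exact request the workflow's HTTP nodes send may differ slightly.

```javascript
// Hypothetical sketch of creating a prediction for citoreh/nazanin via the Replicate API.
const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.replicate.com/v1/predictions',
  headers: {
    Authorization: 'Bearer YOUR_REPLICATE_API_TOKEN', // placeholder token
    'Content-Type': 'application/json',
  },
  body: {
    version: 'MODEL_VERSION_HASH', // placeholder: the citoreh/nazanin version you want to run
    input: {
      prompt: 'a portrait photo in the trained style, trigger_word',
      model: 'dev',      // or "schnell" for fewer inference steps
      go_fast: false,
      lora_scale: 1,
      megapixels: '1',
    },
  },
});

// The response contains an id and status; later steps poll until status is "succeeded".
return [{ json: response }];
```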
by Yaron Been
Settyan Flash V2.0.0 Beta.9 AI Generator Overview This n8n workflow integrates with the Replicate API to use the settyan/flash-v2.0.0-beta.9 model. This powerful AI model can generate high-quality image content based on your inputs. Features Easy integration with Replicate API Automated status checking and result retrieval Support for all model parameters Error handling and retry logic Clean output formatting Parameters Required Parameters **prompt** (string): Prompt for generated image. If you include the trigger_word used in the training process you are more likely to activate the trained object, style, or concept in the resulting image. Optional Parameters **mask** (string, default: None): Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored. **seed** (integer, default: None): Random seed. Set for reproducible generation **image** (string, default: None): Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored. **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps but the schnell model only needs 4 steps. **width** (integer, default: None): Width of generated image. Only works if aspect_ratio is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation **height** (integer, default: None): Height of generated image. Only works if aspect_ratio is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation **go_fast** (boolean, default: False): Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16 **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars' **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora. **megapixels** (string, default: 1): Approximate number of megapixels for generated image How to Use Set up your Replicate API key in the workflow Configure the required parameters for your use case Run the workflow to generate image content Access the generated output from the final node API Reference Model: settyan/flash-v2.0.0-beta.9 API Endpoint: https://api.replicate.com/v1/predictions Requirements Replicate API key n8n instance Basic understanding of image generation parameters
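The "automated status checking and result retrieval" feature boils down to polling the prediction until it finishes. Below is a hedged Code-node sketch of that loop; the polling interval, attempt count, and token are placeholders, and the workflow's built-in Wait and HTTP nodes may implement this differently.

```javascript
// Hypothetical sketch: poll a Replicate prediction until it succeeds or fails,
// then return its status and output URLs for downstream nodes.
const predictionId = $json.id; // from the create-prediction response

let prediction;
for (let attempt = 0; attempt < 30; attempt++) {
  prediction = await this.helpers.httpRequest({
    url: `https://api.replicate.com/v1/predictions/${predictionId}`,
    headers: { Authorization: 'Bearer YOUR_REPLICATE_API_TOKEN' }, // placeholder token
  });
  if (prediction.status === 'succeeded' || prediction.status === 'failed') break;
  await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5s between checks
}

return [{ json: { status: prediction.status, output: prediction.output } }];
```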
by Yaron Been
Spuuntries Ilearnmate Icts AI Generator Overview This n8n workflow integrates with the Replicate API to use the spuuntries/ilearnmate-icts model. This powerful AI model can generate high-quality content based on your inputs. Features Easy integration with Replicate API Automated status checking and result retrieval Support for all model parameters Error handling and retry logic Clean output formatting Parameters Optional Parameters **seed** (integer, default: None): Seed for reproducibility of example generation and vector training. Set to 0 for random behavior. **num_examples_per_side** (integer, default: 3): Number of descriptive examples to generate for each side of the contrast. More examples might lead to better vectors but will increase generation time. **attributes_to_generate** (string, default: girly,modestly,verbose,happy): Comma-separated list of attributes for which to generate control vectors (e.g., 'girly,modestly,verbose,happy') How to Use Set up your Replicate API key in the workflow Configure the required parameters for your use case Run the workflow to generate content Access the generated output from the final node API Reference Model: spuuntries/ilearnmate-icts API Endpoint: https://api.replicate.com/v1/predictions Requirements Replicate API key n8n instance Basic understanding of the model's generation parameters
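Since this model only takes the optional parameters listed above, the request body sent to the predictions endpoint stays small. Here is a hedged example of what the input object might look like; the version hash is a placeholder and the attribute values simply echo the documented defaults.

```json
{
  "version": "MODEL_VERSION_HASH",
  "input": {
    "seed": 0,
    "num_examples_per_side": 3,
    "attributes_to_generate": "girly,modestly,verbose,happy"
  }
}
```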
by Calistus Christian
Overview Receive a URL via Webhook, submit it to urlscan.io, wait ~30 seconds for artifacts (e.g., screenshot), then email a clean summary with links to the result page, screenshot, and API JSON. What this template does Ingests a URL from a POST request. Submits the URL to urlscan.io and captures the scan UUID. **Waits 30s** to give urlscan time to generate the screenshot and result artifacts. Sends a formatted HTML email via Gmail with all relevant links. Nodes used **Webhook** (POST /urlscan) **urlscan.io → Perform a scan** **Wait** (30 seconds; configurable) **Gmail → Send a message** Input { "url": "https://example.com" }
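As a hedged illustration of how the email links can be assembled from the scan UUID, here is a small Code-node sketch. The URL patterns (result page, screenshot, API JSON) follow urlscan.io's public conventions but should be verified against the response your scan node actually returns.

```javascript
// Hypothetical sketch: build the three links for the summary email from the scan UUID
// returned by the "Perform a scan" step.
const uuid = $json.uuid;

return [{
  json: {
    resultPage: `https://urlscan.io/result/${uuid}/`,
    screenshot: `https://urlscan.io/screenshots/${uuid}.png`,
    apiJson: `https://urlscan.io/api/v1/result/${uuid}/`,
  },
}];
```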