by Mathis
**Convert PDF documents to AI-generated podcasts with Google Gemini and Text-to-Speech**

Transform any PDF document into an engaging, natural-sounding podcast using Google's Gemini AI and advanced Text-to-Speech technology. This automated workflow extracts text content, generates conversational scripts, and produces high-quality audio files.

**Who is this for?**

This workflow template is perfect for content creators, educators, researchers, and marketing professionals who want to repurpose written content into audio format. Ideal for creating podcast episodes, educational content, or making documents more accessible.

**What problem does this solve?**

Converting written documents to engaging audio content manually is time-consuming and requires scriptwriting skills. This workflow automates the entire process, turning static PDFs into dynamic, conversational podcasts that sound natural and engaging.

**What this workflow does**

- Extracts text from uploaded PDF documents
- Generates a podcast script using Google Gemini AI with a conversational tone
- Converts the script to speech using Google's advanced TTS with customizable voices
- Processes the audio into properly formatted WAV files
- Saves the final podcast ready for distribution

**Setup**

1. Obtain API credentials: get a Google Gemini API key from AI Studio and configure it in n8n as a "Google Gemini(PaLM) Api account" credential.
2. Configure voice settings: choose from the available voices (Kore for professional, Aoede for conversational, Laomedeia for energetic) and customize the script generation prompts if needed.
3. Test the workflow: upload a sample PDF file, verify the audio output quality, and adjust the voice settings as preferred.

**How to customize this workflow**

- **Modify script style:** edit the prompt in the "Generate Podcast Script" node to change tone, length, or format
- **Change voice:** update the voice name in the "Prepare TTS Request" node
- **Add preprocessing:** insert text cleaning nodes before script generation
- **Integrate with storage:** connect to Google Drive, Dropbox, or other storage services
- **Add notifications:** include Slack or email notifications when podcasts are ready

Note: This template requires Google Gemini API access and works best with text-based PDF files under 10 MB.
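As a reference for the "Prepare TTS Request" step, here is a minimal sketch of the request body a Code node could hand to the HTTP Request node that calls Google's TTS-capable Gemini endpoint. The `podcastScript` field name is an assumption, the voice names are the ones listed above, and the exact field layout may differ from the template's node configuration.

```javascript
// Code node sketch: builds the body an HTTP Request node could send to
// Google's TTS-capable Gemini endpoint. `podcastScript` is an assumed field
// coming from the "Generate Podcast Script" node.
const script = $json.podcastScript;

return [{
  json: {
    body: {
      contents: [{ parts: [{ text: script }] }],
      generationConfig: {
        responseModalities: ["AUDIO"],
        speechConfig: {
          voiceConfig: { prebuiltVoiceConfig: { voiceName: "Kore" } }, // or "Aoede", "Laomedeia"
        },
      },
    },
  },
}];
```

The TTS endpoint typically returns raw PCM audio, which is why the workflow includes a separate step that wraps the bytes into a properly formatted WAV file.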
by Hostinger
This n8n workflow template is designed for developers, system administrators, and IT professionals who manage Linux VPS environments. It leverages an AI chatbot powered by the OpenAI model to interpret and execute SSH commands on a Linux VPS directly from chat messages. The workflow triggers when a specific chat message is received, which is then processed by the AI SysAdmin ReAct Agent to execute predefined SSH commands securely.

**How It Works**

1. Chat Trigger: The workflow starts when a chat message is received via a supported platform (like Slack, Telegram, etc.).
2. AI Processing: The message is passed to the AI SysAdmin ReAct Agent, which uses an embedded OpenAI model to interpret the command and map it to a corresponding SSH action.
3. Command Execution: The interpreted command is securely executed on the target Linux VPS using SSH, with login credentials managed through a secure method embedded within the workflow.

**Setup Instructions**

1. Import the Workflow: Download and import the workflow into your n8n instance.
2. Configure Chat Integration: Set up the chat trigger node by connecting it to your preferred chat platform and configuring the trigger conditions.
3. Set SSH Credentials: Securely input your SSH credentials in the designated SSH login credentials node.
4. Deploy and Test: Deploy the workflow and perform tests to ensure that commands are executed correctly and securely on your VPS.

Embrace the future of VPS management with our AI-driven SysAdmin for Linux VPS template. This innovative solution transforms how system administrators interact with and manage their servers, offering a streamlined, secure, and efficient method to handle routine tasks through simple chat commands. With the power of AI at your fingertips, enhance your operational efficiency, reduce response times, and manage your Linux environments more effectively. Get started today to experience a smarter way to manage your systems directly through your chat tool.
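Because the agent ultimately runs shell commands on a real server, a small guard between the agent and the SSH node is a sensible addition. The sketch below is not part of the template, just a hedged hardening idea with an illustrative allow-list.

```javascript
// Optional hardening sketch: a Code node placed between the agent and the SSH
// node that only lets whitelisted binaries through and rejects shell
// metacharacters. The allow-list is illustrative; adapt it to your VPS.
const ALLOWED = ["uptime", "df", "free", "systemctl", "journalctl", "docker"];

const command = String($json.command || "").trim();
const binary = command.split(/\s+/)[0];

if (!ALLOWED.includes(binary) || /[;&|`$><]/.test(command)) {
  throw new Error(`Command rejected by allow-list: ${command}`);
}

return [{ json: { command } }];
```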
by James Francis
**Overview**

In cold email campaigns, the lead's company name is the 2nd most frequently inserted variable after their first name, and it's critical for effective cold email personalization. However, company names are often messy and can contain taglines, legal suffixes (e.g. LLC, Inc.), and other variations that would never be written out by a human in an email. If your email starts with "I came across Techwave Solutions LLC on LinkedIn...", it's a dead giveaway that you're sending a templated email and a response is much less likely. This simple workflow uses AI to clean up messy company names in a Google Sheet so that your cold email campaigns can achieve better results.

**How It Works**

1. A form is submitted with a Google Sheet URL.
2. The workflow grabs the leads and uses an LLM node to clean the company names.
3. The updated leads are saved back in a new sheet within the original spreadsheet.

**Setup Steps**

1. Add your Google Sheets and OpenAI (or your AI model provider of choice) credentials to n8n.
2. Create a Google Sheet with your list of leads. IMPORTANT: the sheet MUST have a column called "Company".
3. (Optional) The AI workflow has a highly optimized system prompt. However, you may achieve better results by updating the list of examples in the prompt with companies (real or fake) in the industry you're targeting.

If you have any questions or feedback about this workflow, or would like me to build custom workflows for your business, email me at n8n@paperjam.agency.
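If you want a deterministic safety net alongside the LLM pass, a small Code node can strip obvious taglines and legal suffixes first. This is an optional sketch, not part of the template; the suffix list is illustrative and it assumes the "Company" column described above.

```javascript
// Optional Code node sketch: deterministic cleanup that drops taglines and
// common legal suffixes. Runs on all items from the Google Sheets node.
const SUFFIXES = /\s*,?\s*\b(llc|inc\.?|ltd\.?|gmbh|corp\.?|co\.?|plc|s\.?a\.?)\b\.?$/i;

return items.map((item) => {
  let name = String(item.json.Company || "").trim();
  name = name.split(/\s[-–|]\s/)[0];        // drop taglines after " - " or " | "
  name = name.replace(SUFFIXES, "").trim(); // drop a trailing legal suffix
  return { json: { ...item.json, Company: name } };
});
```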
by Davi Saranszky Mesquita
**Use case: Workshop**

We are using this workflow in our workshops to teach how to use Tools (a.k.a. functions) with artificial intelligence. In this specific case, we use a generic "AI Agent" node to illustrate that it could use other models from different data providers.

**Enhanced Weather Forecasting**

In this small example, it's easy to demonstrate how to obtain weather forecast results from the Open-Meteo site to get an accurate forecast for the upcoming days. This can be used to plan travel decisions, for example.

**What this workflow does**

We will make an HTTP request to find out the geographic coordinates of a city. Then, we will make other HTTP requests to discover the weather for the upcoming days. In this workshop, we demonstrate that the AI will be able to determine which tool to call first: it will first call the geolocation tool and then the weather forecast tool. All of this within a single client conversation call.

**Setup**

Insert an OpenAI key and activate the workflow.

by Davi Saranszky Mesquita
https://www.linkedin.com/in/mesquitadavi/
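Outside n8n, the two tool calls boil down to two public Open-Meteo requests: geocode the city, then fetch the daily forecast for the returned coordinates. Here is a standalone sketch; the daily parameters chosen are just an example.

```javascript
// Standalone sketch of the two HTTP calls the agent's tools wrap.
// Both endpoints are Open-Meteo's public APIs.
async function forecastForCity(city) {
  const geoRes = await fetch(
    `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
  );
  const { results } = await geoRes.json();
  const { latitude, longitude } = results[0]; // tool 1: city -> coordinates

  const wxRes = await fetch(
    `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}` +
      `&daily=temperature_2m_max,temperature_2m_min,precipitation_sum&timezone=auto`
  );
  return wxRes.json(); // tool 2: coordinates -> daily forecast
}

forecastForCity("Lisbon").then((forecast) => console.log(forecast.daily));
```

Within the workflow, the agent decides on its own to chain these two tools inside a single conversation turn.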
by Juan Carlos Cavero Gracia
This workflow turns any Telegram bot into an AI-powered social media command center for photos, videos and voice notes.

Video demo

From one Telegram chat you can:

- Send a photo and auto-post it to Instagram, TikTok and Pinterest with AI captions.
- Send a video and:
  - Let AI generate titles + descriptions and upload it to TikTok, Instagram and YouTube.
  - Use /thumb to generate 4 custom thumbnails with Nano Banana Pro.
  - Use /edit ... to run FFmpeg edits (cut, mute, resize, speed, etc.) via Upload-Post FFmpeg jobs and get the edited video back in Telegram.
- Send a voice note and turn it into posts for LinkedIn, X (Twitter) and Threads, then auto-publish.
- Keep human approval in the loop: every caption or text post is shown in Telegram for you to accept before publishing.

Out of the box, the captions and long descriptions are optimized for Spanish (es-ES), but you can easily change the prompts to any language or brand voice.

**What You Can Do From Telegram**

**1. Photo → Instagram, TikTok & Pinterest**

Just send a photo (or image as document) to your Telegram bot:

- The workflow downloads the photo from Telegram.
- **Gemini 2.5 Flash** analyzes the image plus your caption/text (if any).
- An AI Agent generates platform-specific descriptions for:
  - TikTok (short hook, 90 chars)
  - Instagram
  - Pinterest (title + description)
- You receive a message in Telegram with all the proposed descriptions.
- You approve or reject with inline buttons.
- On approval, Upload-Post publishes the photo to Instagram, TikTok and Pinterest (to the board you configured) and sends back a status message with success flags, post URLs and error messages.

**2. Video → TikTok, Instagram & YouTube (no commands)**

If you send a video with no special caption:

- The workflow treats it as a standard video post.
- It fetches the file from Telegram.
- **Gemini 2.5 Flash** analyzes the video and describes its content.
- An AI Agent turns that description + your caption into:
  - TikTok description
  - Instagram description
  - YouTube title + description
- You get a Telegram message with the three platform descriptions to review.
- Once you approve:
  - It shows "Uploading…" in Telegram.
  - The video is sent to Upload-Post, which uploads to TikTok, Instagram and YouTube with the generated text.
  - Finally, you receive an upload report for each platform (success, URL, error message).

**3. /thumb → AI Thumbnails for Your Video (Nano Banana Pro)**

If you send a video with the caption exactly /thumb:

- The workflow downloads the video.
- **Gemini 2.5 Flash** generates a **long, SEO-rich description in Spanish** of everything that happens in the video.
- A second AI Agent uses that detailed description to create 3 concepts (illustrated below). Each concept has: title, description, and a full prompt_thumbnail (Spanish, single line) specially crafted for Nano Banana Pro.
- In Telegram you see the 3 concepts (titles) and select: 0, 1, 2 or "create new".
- Once you choose a concept:
  - The prompt is sent to Nano Banana Pro (fal-ai/nano-banana-pro/edit) with your reference face image (configurable).
  - Nano Banana Pro generates 4 thumbnails (16:9).
  - The workflow downloads the 4 images and sends them back to you in Telegram as photos so you can pick and use your favorite in your YouTube/Upload-Post pipeline.

Use /thumb whenever you already have the video and just want killer thumbnails generated with AI.
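For clarity, the structured output of the thumbnail agent might look roughly like this; the field names mirror the description above, while the titles, texts and prompts are invented.

```javascript
// Illustrative shape of the thumbnail agent's structured output.
const thumbnailConcepts = [
  {
    title: "Antes y después del flujo",
    description: "Contraste entre el caos manual y el pipeline automatizado.",
    prompt_thumbnail:
      "Primer plano del rostro de referencia sorprendido, pantalla dividida entre notas caóticas y un dashboard limpio, colores vivos, estilo miniatura de YouTube",
  },
  // ...two more concepts; in Telegram you pick 0, 1, 2 or ask for new ones
];
```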
**4. /edit … → Natural-Language FFmpeg Video Editor**

If you send a video with a caption starting with /edit, for example:

- /edit cut the first 3 seconds and remove the audio
- /edit crop to vertical 9:16 and speed up x1.5
- /edit blur the background and keep the subject centered

the workflow behaves as a text-to-FFmpeg command generator:

- An AI Agent (powered by Gemini) reads your /edit instructions.
- It generates a safe FFmpeg command in JSON format (an example appears at the end of this template description):
  - Always uses ffmpeg -y
  - Uses {input} and {output} placeholders
  - No semicolons and no dangerous shell characters
- The workflow then:
  - Downloads the original video from Telegram.
  - Calls the Upload-Post FFmpeg jobs API with the video and the generated full_command.
  - Polls the job status until it's finished.
  - Downloads the processed (edited) video.
  - Sends the edited video back to you in Telegram with a simple sendVideo node.

This makes Telegram a front-end for a remote FFmpeg engine: you describe the edit in natural language, and the workflow handles all the FFmpeg complexity.

> Note: The edited video is returned to Telegram; if you want to auto-post it, simply send the new video again without /edit so it goes through the normal multi-platform publishing path.

**5. Voice Notes → LinkedIn, X & Threads (Text Posts)**

For voice messages:

- The Telegram Trigger detects message.voice.
- The workflow downloads the audio file.
- **OpenAI Whisper** transcribes the recording.
- An AI Agent turns the transcription into:
  - A LinkedIn post (Spanish, long-form dev/creator style, based on your examples).
  - A Threads post (Spanish, up to ~500 chars).
  - A Tweet / X post or thread (English, using hooks + hashtags like #n8n, #automation, #dev).
- In Telegram you see a preview message with the suggested copy for Threads, LinkedIn and X.
- After you approve:
  - You get an "Uploading…" message.
  - Upload-Post publishes to your LinkedIn organization page (configured by ID), to X (Twitter) and to Threads.
  - The workflow sends a status message with success flags and URLs for each platform.

This is perfect for "talk to your phone, ship content to all your text platforms".

**How the Workflow Is Structured**

- **Telegram Trigger**: listens to every incoming message and routes by type:
  - /start → No-Op
  - voice → Audio pipeline
  - document/photo → Photo pipeline
  - video → Video/thumbnail/editor pipelines (/thumb, /edit or normal)
- **AI Blocks (Gemini + OpenAI)**:
  - Gemini 2.5 Flash for photo understanding, short video descriptions (for auto-posting), and long, detailed video summaries (for the thumbnail generator).
  - OpenAI Whisper for voice transcription.
  - Multiple AI Agents (Gemini chat) with structured JSON output parsers for per-platform social captions, Threads/LinkedIn/X posts, thumbnail prompts and title concepts, and FFmpeg command generation.
- **Upload-Post Integration**:
  - Photos → Instagram, TikTok, Pinterest.
  - Videos → TikTok, Instagram, YouTube.
  - Text → LinkedIn page, X, Threads.
  - FFmpeg job endpoint for server-side video editing.
  - All uploads return status, URL and error messages back into Telegram.
- **Human-in-the-Loop**: all critical AI outputs go through sendAndWait nodes in Telegram:
  - You review and choose whether to publish or not.
  - You choose which thumbnail concept to use.

**Requirements & Setup**

- **Accounts & APIs**:
  - Telegram bot (via @BotFather).
  - Upload-Post.com account with your social profiles connected.
  - OpenAI API key (Whisper).
  - Google Gemini API key (AI Studio).
  - Nano Banana Pro / fal.ai key (for thumbnails).
- **Runtime**:
  - n8n instance (cloud or self-hosted).
  - FFmpeg available where n8n runs (Docker image, VM, etc.) for local checks if needed (the heavy lifting is delegated to Upload-Post FFmpeg jobs).
- **Configuration**:
  - Create Telegram credentials with your bot token.
  - Create Upload-Post credentials with your API token.
  - Set upload_post_user and pinterest_board_id in the Edit Fields node.
  - Optionally replace: the example face image URL used for Nano Banana Pro, the LinkedIn organization ID, and any language / tone in the AI agent system prompts.

**Ideal Use Cases**

- **Creators & influencers** who want to post to every platform from **one Telegram chat**.
- **Agencies** who want a "content butler" clients can use without touching n8n.
- **Solo devs & makers** who publish workflows, devlogs and product updates and want multi-platform video posts, voice → LinkedIn/X/Threads posts, and easy text-based video editing and thumbnail generation.

Install this template, plug in your keys, talk to your bot in Telegram, and turn it into your all-in-one AI social media machine.
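For reference, the /edit agent described in section 4 is constrained to produce a single JSON object containing a full_command string; an illustrative example for "cut the first 3 seconds and remove the audio" is shown below. The exact command is an assumption, and {input}/{output} are resolved by the Upload-Post FFmpeg job server-side.

```javascript
// Example of the JSON the /edit agent is asked to produce (section 4 above).
const editJob = {
  full_command: "ffmpeg -y -ss 3 -i {input} -an -c:v libx264 -preset veryfast {output}",
};
```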
by EoCi - Mr.Eo
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Introduction**

Tired of spending time crafting the perfect AI prompt? This workflow takes your simple ideas like "write a blog post" and automatically transforms them into detailed, structured prompts that actually work.

**🎯 What This Does**

Automatically converts simple user prompts like "write a blog post" into structured, professional AI prompts with metadata, variables, and clear instructions. Perfect for anybody, in any industry or organization, who wants to eliminate manual prompt engineering work.

**🔄 How It Works**

1. A Google Sheets Trigger monitors for new prompts.
2. An AI enhancement pipeline uses Gemini + Groq to add structure and context.
3. Field completion auto-generates missing metadata (topic, categories).
4. Quality assurance validates and stores the complete results.

**🚀 Setup Requirements**

- **AI APIs**: Gemini, Telegram, Groq API keys
- **Google Sheets**: 2 sheets (Main, ModifiedPrompt)
- **5 minutes setup time**: detailed instructions in the blue sticky notes

**Set up steps**

Setup time: < 5 minutes

1. Create a Google Spreadsheet with two tabs (sheets): OriginalPrompts and ModifiedPrompts.
   - OriginalPrompts columns: Original Prompt ID | Model | Original Prompt | Created Time
   - ModifiedPrompts columns (example): Modified Prompt ID | Original Prompt ID | Topic | Topic Categories | Modified Prompt | Prompt Title | Prompt Type | Model Used | Improvement Notes | Updated Time | Created Time | isProcessed
2. Add and attach credentials in n8n:
   - Google Sheets OAuth2 (required for fetching new prompts)
   - Gemini and Groq API credentials (required for the AI Agent)
   - Telegram credential (required for notifications)
3. Save & activate the workflow.
4. Add a test row to OriginalPrompts, for example: Original Prompt ID: 1, Original Prompt: "Write a short blog post about AI ethics".
5. Wait ~30–60s and check ModifiedPrompts for the enhanced output.

That's it! Once it is configured, drop short ideas into your sheet and get professional prompts back automatically. Your prompts get better, your AI outputs improve, and you save hours of manual prompt crafting.
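Here is a rough sketch of the step that writes the enhanced prompt into the ModifiedPrompts columns listed above. The property names on the agent output and the trigger node name are assumptions made for illustration only.

```javascript
// Code node sketch: maps the enhancement agent's parsed output onto the
// ModifiedPrompts sheet columns. `ai.*` property names and the trigger node
// name are assumptions.
const ai = $json;

return [{
  json: {
    "Modified Prompt ID": Date.now().toString(),
    "Original Prompt ID": $("Google Sheets Trigger").first().json["Original Prompt ID"],
    "Topic": ai.topic,
    "Topic Categories": (ai.categories || []).join(", "),
    "Modified Prompt": ai.modifiedPrompt,
    "Prompt Title": ai.title,
    "Prompt Type": ai.promptType,
    "Model Used": ai.modelUsed,
    "Improvement Notes": ai.notes,
    "Updated Time": new Date().toISOString(),
    "Created Time": new Date().toISOString(),
    "isProcessed": true,
  },
}];
```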
by Polina Medvedieva
**Who is this template for**

This template is for marketers, SEO specialists, or content managers who need to analyze keywords to identify which ones contain references to a specific area or topic, in this case IT software, services, tools, or apps.

**Use case**

Automating the process of scanning a large list of keywords to determine if they reference known IT products or services (like ServiceNow, Salesforce, etc.), and updating a Google Sheet with this classification. This helps in categorizing keywords for targeted SEO campaigns, content creation, or market analysis.

**How this workflow works**

1. Fetches keyword data from a Google Sheet
2. Processes keywords in batches to prevent rate limiting
3. Uses an AI agent (OpenAI) to analyze each keyword and determine if it contains a reference to an IT service/software
4. Updates the original Google Sheet with the results in a "Service?" column
5. Continues processing until all keywords are analyzed

**Set up steps**

1. Connect your Google Sheets account credentials
2. Set the Google Sheet document ID (currently using "Copy of Sheet1 1")
3. Configure the OpenAI API credentials for the AI agent
4. Adjust the batch size (currently 6) if needed based on your API rate limits
5. Ensure the Google Sheet has the required columns: "Number", "Keyword", and "Service?"

The AI agent's prompt is highly customizable to match different identification needs. For example, instead of looking for IT software/services, you could modify the prompt to identify:

- Industry-specific terms (healthcare, finance, education)
- Geographic references (cities, countries, regions)
- Product categories (electronics, clothing, food)
- Competitor brand mentions

Here's how you could modify the prompt for different use cases:

```
// For identifying educational content keywords
"Check the keyword I provided and define if this keyword relates to educational content, courses, or learning materials and return yes or no."

// For identifying local service keywords
"Check the keyword I provided and determine if it contains location-specific terms (city names, neighborhoods, regions) that suggest local service intent and return yes or no."

// For identifying competitor mentions
"Check the keyword I provided and determine if it mentions any of our competitors (CompetitorA, CompetitorB, CompetitorC) and return yes or no."
```
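Because LLMs occasionally answer with more than a bare "yes" or "no", a small Code node between the agent and the Google Sheets update can normalize the answer into the "Service?" column. This is an optional sketch; the output/text property names on the agent item are assumptions.

```javascript
// Optional Code node sketch: normalizes the model's free-text answer into a
// clean value for the "Service?" column (column names from the setup above).
return items.map((item) => {
  const raw = String(item.json.output ?? item.json.text ?? "").toLowerCase();
  return {
    json: {
      Number: item.json.Number,
      Keyword: item.json.Keyword,
      "Service?": /\byes\b/.test(raw) ? "yes" : "no",
    },
  };
});
```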
by Trung Tran
**CV Extractor: Google Drive to Sheet + Slack Update for Recruiters**

Watch the demo video below:

> This workflow automatically processes resumes (PDFs) uploaded or updated in a Google Drive folder. It extracts and structures the candidate's information using AI, then updates or inserts the data into a Google Sheet, acting as a central talent database. Finally, it notifies the hiring team via Slack with a summary.

Perfect for HR and TA teams, this automation eliminates the repetitive task of manually copying candidate details from CVs into spreadsheets, saving hours of admin work every week and keeping your hiring pipeline clean, fast, and up to date.

**👤 Who's it for**

This workflow is designed for:

- **Recruiters** and **HR coordinators** who manage candidate profiles via Google Drive.
- **Talent Acquisition teams** who want to automate CV parsing, enrichment, and database updating.
- **Companies or hiring agencies** using spreadsheets for candidate tracking and CRM-like HR ops.

**⚙️ How it works / What it does**

This smart and fully automated workflow:

1. Monitors a Google Drive folder for any uploaded or updated resumes (PDFs).
2. Downloads and extracts resume content using PDF parsing.
3. Sends the raw text to GPT-4, which returns a structured profile (name, title, experience, skills, etc.).
4. Verifies the profile and transforms it into a clean, row-based format.
5. Upserts the candidate profile into a Google Sheet (insert or update by email).
6. Notifies the hiring team in Slack or email that a profile was added or updated.

This is a no-touch pipeline to keep your candidate data clean, current, and centralized.

**🛠️ How to set up**

Step 1: Prepare your Google Drive folder
- Create a folder like /SmartHR/cv/
- Upload sample resumes in .pdf format

Step 2: Create your Google Sheet
- Columns to include: Email, FullName, JobTitle, Phone, Location, Experience, Education, Skills, etc.
- Optional: Add conditional formatting to highlight updates

Step 3: Connect the n8n workflow
- Use the Google Drive Trigger: fileCreated → new profile uploaded; fileUpdated → existing profile modified
- Use Google Drive (Download file) to fetch the resume
- Use Extract From PDF to get raw content

Step 4: Configure GPT-4 node
- Use the structured system prompt to extract profile information
- Use a JSON parser node to ensure safe formatting for next steps

Step 5: Transform & Save
- Use a Function node to map fields to Google Sheet columns
- Use Append or update row (based on email as unique key)
- Optionally send Slack or email message to notify hiring team

**✅ Requirements**

- 🔑 OpenAI GPT-4 API key
- 🟩 n8n Cloud or Self-hosted with: Google Drive integration, Google Sheets integration, Email/Slack credentials (optional)
- 📄 Resume files in readable PDF format
- 📊 Google Sheet prepared with relevant headers

**✏️ How to customize the workflow**

| Part | Customization Options |
|------|-----------------------|
| GPT Prompt | Tune for different job levels or fields (e.g., engineers vs marketers) |
| Field Mapping | Update transform node to include other profile fields (LinkedIn, portfolio, etc.) |
| Notification | Switch to Microsoft Teams, Telegram, or email alerts instead of Slack |
| Data Store | Replace Google Sheet with Airtable, Notion, or database system |
| Trigger Source | Trigger from email attachments or webhook instead of Google Drive if needed |
| Output Format | Generate PDF profile cards or summary documents using HTML → PDF node |
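A minimal sketch of the Step 5 Function node that flattens the GPT-4 profile into one sheet row: the column names come from Step 2, while the property names on the parsed profile object are assumptions.

```javascript
// Function/Code node sketch for Step 5: maps the structured GPT-4 profile to
// the sheet headers. `profile.*` property names are assumptions about the
// JSON parser's output.
const profile = $json;

return [{
  json: {
    Email: profile.email,
    FullName: profile.fullName,
    JobTitle: profile.jobTitle,
    Phone: profile.phone,
    Location: profile.location,
    // assumes the parser returns these as arrays of plain strings
    Experience: (profile.experience || []).join("; "),
    Education: (profile.education || []).join("; "),
    Skills: (profile.skills || []).join(", "),
  },
}];
```

The Append or update row node then uses the Email value as the unique key for the upsert.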
by Friedemann Schuetz
Welcome to my AI Social Media Caption Creator Workflow!

**What this workflow does**

This workflow automatically creates a social media post caption in an editorial plan in Airtable. It also uses background information on the target group, tonality, etc. stored in Airtable.

This workflow has the following sequence:

1. Airtable trigger (scan for new records every minute).
2. Wait 1 minute so the Airtable record creator has time to write the Briefing field.
3. Retrieval of the Airtable record data.
4. AI Agent to write a caption for a social media post. The agent is instructed to use background information stored in Airtable (such as target group, tonality, etc.) to create the post.
5. Format the output and assign it to the correct field in Airtable.
6. Post the caption into the Airtable record.

**Requirements**

- Airtable database: Documentation
- AI API access (e.g. via OpenAI, Anthropic, Google or Ollama)

Example of an editorial plan in Airtable: Editorial Plan example in Airtable

For this workflow you need the Airtable fields "created_at", "Briefing" and "SoMe_Text_AI".

Feel free to contact me via LinkedIn, if you have any questions!
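A minimal sketch of step 5 (formatting the agent output for the "SoMe_Text_AI" field). The output property and the trigger node name are assumptions, so adapt them to the actual node names in your copy.

```javascript
// Code node sketch: trims the agent's reply and maps it to the Airtable field
// used by the update step. `output` and the trigger node name are assumptions.
const caption = ($json.output || "").trim();

return [{
  json: {
    id: $("Airtable Trigger").first().json.id, // record to update
    SoMe_Text_AI: caption,
  },
}];
```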
by Muhammad Farooq Iqbal
This n8n template demonstrates how to automate the creation of high-quality visual content using AI. The workflow takes simple titles from a Google Sheets spreadsheet, generates detailed artistic prompts using AI, creates photorealistic images, and manages the entire process from data input to final delivery.

Use cases are many: perfect for digital marketers, content creators, social media managers, e-commerce businesses, advertising agencies, and anyone needing consistent, high-quality visual content for marketing campaigns, social media posts, or brand materials!

**Good to know**

- The Gemini 2.0 Flash Exp image generation model used in this workflow may have geo-restrictions.
- The workflow processes one image at a time to ensure quality and avoid rate limiting.
- Each generated image maintains high consistency with the source prompt and shows minimal AI artifacts.

**How it works**

1. Automated Trigger: A schedule trigger runs every minute to check for new entries in your Google Sheets spreadsheet.
2. Data Retrieval: The workflow fetches rows from your Google Sheets document, specifically looking for entries with "pending" status.
3. AI Prompt Generation: Using Google Gemini, the workflow takes simple titles and transforms them into detailed, artistic prompts for image generation. The AI considers:
   - Specific visual elements, styles, and compositions
   - Natural poses, interactions, and environmental context
   - Lighting conditions and mood settings
   - Brand consistency and visual appeal
   - Proper aspect ratios for different platforms
4. Text Processing: A code node ensures proper JSON formatting by escaping newlines and maintaining clean text structure (see the sketch after the customization list below).
5. Image Generation: Gemini's advanced image generation model creates photorealistic images based on the detailed prompts, ensuring high-quality, consistent results.
6. File Management: Generated images are automatically uploaded to a designated folder in Google Drive with organized naming conventions.
7. Public Sharing: Images are made publicly accessible with read permissions, enabling easy sharing and embedding.
8. Database Update: The workflow completes by updating the Google Sheets with the generated image URL and changing the status from "pending" to "posted", creating a complete audit trail.

**How to use**

1. Setup: Ensure you have the required Google Sheets document with columns for ID, prompt, status, and imageUrl.
2. Configuration: Update the Google Sheets document ID and folder IDs in the respective nodes to match your setup.
3. Activation: The workflow is delivered inactive; activate it in n8n to start processing.
4. Data Input: Simply add new rows to your Google Sheets with titles and set the status to "pending"; the workflow will automatically process them.
5. Monitoring: Check the Google Sheets for the updated status and image URLs to track progress.

**Requirements**

- **Google Gemini API** account for LLM and image generation capabilities
- **Google Drive** for file storage and management
- **Google Sheets** for data input and tracking
- **n8n instance** with proper credentials configured

**Customizing this workflow**

- Content Variations: Try different visual styles, seasonal themes, or trending designs by modifying the AI prompt in the LangChain agent.
- Output Formats: Adjust the aspect ratio or image specifications for different platforms (Instagram, Pinterest, TikTok, Facebook ads, etc.).
- Integration Options: Replace the schedule trigger with webhooks for real-time processing, or add notification nodes for status updates.
- Batch Processing: Modify the limit node to process multiple items simultaneously, though be mindful of API rate limits.
- Quality Control: Add additional validation nodes to ensure generated images meet quality standards before uploading.
- Analytics: Integrate with analytics platforms to track image performance and engagement metrics.

This workflow provides a complete solution for automated visual content creation, perfect for businesses and creators looking to scale their visual content production while maintaining high quality and consistency across all marketing materials.
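As a reference for the "Text Processing" step above, here is a minimal sketch of such a code node; the prompt field name is an assumption.

```javascript
// Sketch of the "Text Processing" code node from step 4: escapes backslashes,
// quotes and newlines so the generated prompt can be embedded safely in the
// JSON body sent to the image model. The `prompt` field name is an assumption.
return items.map((item) => {
  const cleaned = String(item.json.prompt || "")
    .replace(/\\/g, "\\\\")
    .replace(/"/g, '\\"')
    .replace(/\r?\n/g, "\\n")
    .trim();
  return { json: { ...item.json, prompt: cleaned } };
});
```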
by Yulia
Create a Telegram bot that combines advanced AI functionalities with LangChain nodes and new tools.

Nodes as tools and the HTTP Request tool are a new n8n feature that extends the custom workflow tool and simplifies your setup. We used the workflow tool in the previous Telegram template to call the DALL-E 3 model. In the new version, we've achieved similar results using the HTTP Request tool and the Telegram node tool instead. The main difference is that the Telegram bot becomes more flexible: the LangChain Agent node can decide which tool to use and when. In the previous version, all steps inside the custom workflow tool were executed sequentially.

⚠️ Note that you'd need to select the Tools Agent to work with the new tools.

Before launching the template, make sure to set up your OpenAI and Telegram credentials.

Here's how the new Telegram bot works:

1. The Telegram Trigger listens for new messages in a specified Telegram chat. This node activates the rest of the workflow after receiving a message.
2. The AI Tool Agent receives the input text, processes it using the OpenAI model and replies to the user. It addresses users by name and sends image links when an image is requested.
3. The OpenAI GPT-4o model generates context-aware responses. You can configure the model parameters or swap this node entirely.
4. Window buffer memory helps maintain context across conversations. It stores the last 10 interactions and ensures that the agent can access previous messages within a session. Conversations from different users are stored in different buffers.
5. The HTTP Request tool connects with OpenAI's DALL-E 3 API to generate images based on user prompts. The tool is called when the user asks for an image.
6. The Telegram node tool sends generated images back to the user in a Telegram chat. It retrieves the image from the URL returned by the DALL-E 3 model. This does not happen directly, however: the response from the HTTP Request tool is first stored in the Agent's scratchpad (think of it as short-term memory). In the next iteration, the Agent sends the updated response to the GPT model once again, and the GPT model then creates a new tool request to send the image back to the user. To pass the image URL, the tool uses the new $fromAI() expression.
7. The Send final reply node sends the final response message created by the agent back to the user on Telegram. Even though the image was already passed to the user, the Agent always stops with a final response that comes from the dedicated output.

⚠️ Note that the Agent may not adhere to the same sequence of actions in 100% of situations. For example, sometimes it could skip sending the file via the Telegram node tool and instead just send a URL in the final reply. If you have a longer series of predefined steps, it may be better to use the "old" custom workflow tool.

This template is perfect as a starting point for building an AI agentic workflow. Take a look at another agentic Telegram AI template that can handle both text and voice messages.
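For orientation, the HTTP Request tool essentially posts to OpenAI's image generation endpoint. The endpoint and parameters below are OpenAI's documented ones, while the exact $fromAI() usage is an illustrative way to let the agent fill in the prompt at run time.

```javascript
// Rough shape of what the HTTP Request tool sends to OpenAI.
const dalleRequest = {
  method: "POST",
  url: "https://api.openai.com/v1/images/generations",
  body: {
    model: "dall-e-3",
    prompt: "{{ $fromAI('prompt', 'Image description requested by the user') }}",
    n: 1,
    size: "1024x1024",
  },
};
// The response contains the image URL that the agent later passes to the
// Telegram node tool via another $fromAI() parameter.
```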
by Davide
**1. How it Works**

This n8n workflow automates fine-tuning OpenAI models through these key steps:

- **Manual Trigger**: starts with the "When clicking 'Test workflow'" event to initiate the process.
- Downloads a .jsonl file from Google Drive.
- **Upload to OpenAI**: uploads the .jsonl file to OpenAI via the "Upload File" node (with purpose "fine-tune").
- **Create Fine-tuning Job**: sends a POST request to the endpoint https://api.openai.com/v1/fine_tuning/jobs with:

```json
{
  "training_file": "{{ $json.id }}",
  "model": "gpt-4o-mini-2024-07-18"
}
```

  OpenAI automatically starts training the model based on the provided file.
- **Interaction with the Trained Model**: an "AI Agent" uses the custom model (e.g., ft:gpt-4o-mini-2024-07-18:n3w-italia::XXXX7B) to respond to chat messages.

**2. Set up Steps**

To configure the workflow:

1. Prepare the training file: create a .jsonl file following the specified syntax (e.g., travel assistant Q/A examples), upload it to Google Drive and update the ID in the "Google Drive" node.
2. Configure credentials: Google Drive: connect an account via OAuth2 (googleDriveOAuth2Api). OpenAI: add your API key in the "OpenAI Chat Model" and "Upload File" nodes.
3. Customize the model: in the "OpenAI Chat Model" node, specify the name of your fine-tuned model (e.g., ft:gpt-4o-mini-...). Update the HTTP request body (Create Fine-tuning Job) if needed (e.g., a different base model).
4. Start the workflow: use the manual trigger ("Test workflow") to begin the upload and training process. Test the model via the "Chat Trigger" (chat messages).

Integrated documentation: follow the instructions in the Sticky Notes to properly format the .jsonl (Step 1) and monitor progress on OpenAI (Step 2, link: https://platform.openai.com/finetune/).

Note: Ensure the .jsonl file adheres to OpenAI's required structure and that credentials are valid.
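For step 1 of the setup, each line of the .jsonl file is one chat-format training example. The travel-assistant content below is purely illustrative.

```javascript
// One training example in OpenAI's chat fine-tuning format. Each line of the
// .jsonl file is one such object serialized on a single line.
const trainingExample = {
  messages: [
    { role: "system", content: "You are a helpful travel assistant for trips in Italy." },
    { role: "user", content: "What is the best month to visit Venice?" },
    { role: "assistant", content: "Late September: the summer crowds thin out and the weather is still mild." },
  ],
};

console.log(JSON.stringify(trainingExample)); // -> one line of the .jsonl file
```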