by Oneclick AI Squad
# 📘 Student Absence Alerts & Attendance Tracking Automation

Automatically alerts parents about student absences and tracks 30-day attendance patterns to identify risks and trends.

## 🔧 Main Components

- **Daily Attendance Check – 10:30 AM** – Triggers the workflow every day at 10:30 AM.
- **Read Today's Attendance** – Retrieves current-day attendance records from the source Excel file or database.
- **Read Student Contacts** – Reads contact details (email, phone) of students for alert delivery.
- **Process Absent Students** – Identifies students who are absent and unexcused for the day.
- **Prepare Absence Email** – Generates customized email content for absent students.
- **Send Absence Email** – Sends an absence alert email to the student's parent/guardian.
- **Prepare Absence SMS** – Formats a WhatsApp-friendly message for alerts.
- **Send Absence WhatsApp** – Sends the WhatsApp message via API (e.g., Facebook Graph).
- **Generate Attendance Report** – Prepares a daily attendance summary with absence-level classifications.
- **Save Attendance Report** – Appends the generated report to a historical attendance sheet.

## ⚠️ Alert Logic

Based on the past 30-day absence pattern, the system classifies students into:

| Level | Absences in 30 Days | Status |
| --------- | ------------------- | -------------- |
| 🔴 High | 5+ | Critical Alert |
| 🟡 Medium | 3–4 | Warning |
| 🟢 Low | 1–2 | Low Risk |

## 📊 Tracking Features

- 🔢 **Attendance Rate Calculation** – Tracks each student's attendance percentage
- 🔍 **Pattern Analysis** – Detects recurring absenteeism trends
- 🚨 **Risk Identification** – Flags high-risk students for early intervention
- 📈 **Historical Reporting** – Maintains daily logs for future reference

## ✅ Essential Prerequisites

- Excel sheet or database with daily attendance logs
- Excel sheet or database with student contact details
- SMTP credentials for sending emails
- WhatsApp API integration (e.g., Facebook Graph or Twilio)
- Storage access for saving attendance reports

## 📁 Required Excel File Structures

**Attendance Sheet (daily_attendance.xlsx)**

| Student ID | Date | Status |
| ---------- | ---------- | ------ |
| ST101 | 2025-08-06 | Absent |

**Contacts Sheet (student_contacts.xlsx)**

| Student ID | Name | Email | Phone |
| ---------- | ---------- | ----------------- | ------------- |
| ST101 | Aryan Shah | aryan@example.com | +919123456789 |

## 🧾 Expected Input Format Example

```json
{
  "studentId": "ST101",
  "name": "Aryan Shah",
  "email": "aryan@example.com",
  "phone": "+919123456789",
  "status": "Absent",
  "date": "2025-08-06"
}
```

## 🚀 Key Features

- ⏰ **Scheduled Daily Execution** – Automated tracking at 10:30 AM
- ✉️ **Multi-Channel Notifications** – Email + WhatsApp alerts to parents
- 📊 **Absence Pattern Monitoring** – 30-day trend analysis
- 🧠 **Risk-Based Alerts** – Smart classification into alert levels
- 🗂️ **Daily Reports** – Easy-to-audit attendance summary logs

## ⚙️ Quick Setup Guide

1. Import the workflow JSON into n8n.
2. Configure the schedule trigger for 10:30 AM.
3. Set the Excel file paths in "Read Today's Attendance" and "Read Student Contacts".
4. Customize the absence thresholds in the "Process Absent Students" node (see the classification sketch after the parameters table below).
5. Add SMTP details for the "Send Absence Email" node.
6. Integrate the WhatsApp API in the "Send Absence WhatsApp" node.
7. Test with mock data and review the reports.
8. Activate the workflow.
## 🔧 Parameters to Configure

| Parameter | Description |
| ---------------------- | -------------------------------------- |
| attendance_file_path | Path to today's attendance records |
| contacts_file_path | Path to student contacts sheet |
| smtp_user | Email username for SMTP server |
| smtp_password | Password for SMTP server |
| whatsapp_api_url | Endpoint for sending WhatsApp messages |
| alert_thresholds | Absence count thresholds for alerts |
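If you adjust the alert_thresholds, the classification from the Alert Logic table can be implemented in an n8n Code node. The sketch below is a minimal, hypothetical version of the "Process Absent Students" / classification step; it assumes each incoming item already carries a 30-day `absenceCount` field, so adjust the field names to match your own attendance data.

```javascript
// Hypothetical n8n Code node (Run Once for All Items).
// Assumes each item has a precomputed 30-day absenceCount field.
const thresholds = { high: 5, medium: 3 }; // mirrors the alert_thresholds parameter

return $input.all().map((item) => {
  const count = item.json.absenceCount ?? 0;
  let level, status;
  if (count >= thresholds.high) {
    level = "High";
    status = "Critical Alert";
  } else if (count >= thresholds.medium) {
    level = "Medium";
    status = "Warning";
  } else {
    level = "Low";
    status = "Low Risk";
  }
  // Pass the classification along for the report and alert nodes.
  return { json: { ...item.json, alertLevel: level, alertStatus: status } };
});
```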
by Guillaume Duvernay
Unlock a new level of sophistication for your AI agents with this template. While the native n8n Think Tool is great for giving an agent an internal monologue, it's limited to one instance. This workflow provides a clever solution using a sub-workflow to create multiple, custom thinking tools, each with its own specific purpose.

This template provides the foundation for building agents that can plan, act, and then reflect on their actions before proceeding. Instead of just reacting, your agent can now follow a structured, multi-step reasoning process that you design, leading to more reliable and powerful automations.

## Who is this for?

- **AI and automation developers:** Anyone looking to build complex, multi-tool agents that require robust logic and planning capabilities.
- **LangChain enthusiasts:** Users familiar with advanced agent concepts like ReAct (Reason-Act) will find this a practical way to implement similar frameworks in n8n.
- **Problem solvers:** If your current agent struggles with complex tasks, giving it distinct steps for planning and reflection can dramatically improve its performance.

## What problem does this solve?

- **Bypasses the single "Think Tool" limit:** The core of this template is a technique that allows you to add as many distinct thinking steps to your agent as you need.
- **Enables complex reasoning:** You can design a structured thought process for your agent, such as "Plan the entire process," "Execute Step 1," and "Reflect on the result," making it behave more intelligently.
- **Improves agent reliability and debugging:** By forcing the agent to write down its thoughts at different stages, you can easily see its line of reasoning, making it less prone to errors and much easier to debug when things go wrong.
- **Provides a blueprint for sophisticated AI:** This is not just a simple tool; it's a foundational framework for building state-of-the-art AI agents that can handle more nuanced and multi-step tasks.

## How it works

- **The re-usable "Thinking Space":** The magic of this template is a simple sub-workflow that does nothing but receive text. This workflow acts as a reusable "scratchpad" (see the sketch at the end of this template description).
- **Creating custom thinking tools:** In the main workflow, we use the Tool (Workflow) node to call this "scratchpad" sub-workflow multiple times. We give each of these tools a unique name (e.g., Initial thoughts, Additional thoughts).
- **The power of descriptions:** The key is the description you give each of these tool nodes. This description tells the agent when and how it should use that specific thinking step. For example, the Initial thoughts tool is described as the place to create a plan at the start of a task.
- **Orchestration via system prompt:** The main AI Agent's system prompt acts as the conductor, instructing the agent on the overall process and telling it about its new thinking abilities (e.g., "Always start by using the Initial thoughts tool to make a plan...").
- **A practical example:** This template includes two thinking tools to demonstrate a "Plan and Reflect" cycle, but you can add many more to fit your needs.

## Setup

- **Add your own "action" tools:** This template provides the thinking framework. To make it useful, you need to give the agent something to do. Add your own tools to the AI Agent, such as a web search tool, a database lookup, or an API call.
- **Customize the thinking tools:** Edit the description of the existing Initial thoughts and Additional thoughts tools. Make them relevant to the new action tools you've added. For example, "Plan which of the web search or database tools to use."
- **Update the agent's brain:** Modify the system prompt in the main AI Agent node. Tell it about the new action tools you've added and how it should use your customized thinking tools to complete its tasks.
- **Connect your AI model:** Select the OpenAI Chat Model node and add your credentials.

## Taking it further

- **Create more granular thinking steps:** Add more thinking tools for different stages of a process, like a "Hypothesize a solution" tool, a "Verify assumptions" tool, or a "Final answer check" tool.
- **Customize the thought process:** You can change *how* the agent thinks by editing the prompt inside the `fromAI('Thoughts', ...)` field within each tool. You could ask for thoughts in a specific format, like bullet points or a JSON object.
- **Change the workflow trigger:** Switch the chat trigger for a Telegram trigger, email, Slack, whatever you need for your use case!
- **Integrate with memory:** For even more power, combine this framework with a long-term memory solution, allowing the agent to reflect on its thoughts from past conversations.
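To make the "scratchpad" pattern concrete, here is a minimal, hypothetical sketch of what the sub-workflow can contain besides its trigger: a single Code node that simply echoes the received thoughts back to the agent. The `Thoughts` field name is an assumption; use whatever input name your Tool (Workflow) node passes via its `$fromAI('Thoughts', ...)` expression.

```javascript
// Hypothetical Code node inside the "scratchpad" sub-workflow.
// The sub-workflow does no real work: it receives the agent's thoughts and
// returns them unchanged, so the text lands back in the agent's context.
const thoughts = $json.Thoughts ?? ""; // field name is an assumption

return [
  {
    json: {
      acknowledged: true,
      thoughts, // echoing the text back lets the agent "see" its own reasoning
    },
  },
];
```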
by Oneclick AI Squad
# 📚 Automated School Fee Reminder Workflow with Payment Link

Automatically sends fee reminders (via email and WhatsApp) to parents with secure payment links, 3 days before the due date.

## 🔧 Main Components

- **Daily Fee Check – 8 AM** – Scheduled trigger that starts the workflow daily at 8 AM.
- **Read Pending Fees** – Fetches student fee records from an Excel sheet (using the getAll method).
- **Process Fee Reminders** – Filters records to find pending fees due within the next 3 days (see the sketch after this section).
- **Prepare Email Reminder** – Generates personalized email messages with payment links.
- **Wait for Email Preparation** – Adds a delay/wait condition until the email logic is ready.
- **Send Email Reminder** – Sends the fee reminder email with a secure payment link to the parent.
- **Prepare WhatsApp Reminder** – Generates WhatsApp-friendly messages with fee and payment details.
- **Wait for WhatsApp Preparation** – Waits for the WhatsApp message logic to complete.
- **Send WhatsApp Message** – Sends the message to the parent's WhatsApp number using a messaging API.
- **Update Reminder Status** – Updates the Excel file to mark reminders as sent and avoid duplicates.

## 🧩 Channels Used

- 📧 Email – with personalized payment link
- 💬 WhatsApp – formatted reminder message

## 🔐 Payment Integration

Secure payment links are auto-generated per student to enable direct and safe online fee payments.

## ✅ Essential Prerequisites

- Excel sheet with fee records (student_fee_data.xlsx)
- SMTP credentials for sending email
- WhatsApp API or provider integration (like Twilio or Gupshup)
- Access to a payment gateway or service for link generation
- File storage access to update reminder status in Excel

## 📁 Required Excel File Structure (student_fee_data.xlsx)

| Student ID | Name | Email | Phone | Fee Due Date | Amount | Reminder Sent |
| ---------- | ---- | ----- | ----- | ------------ | ------ | ------------- |

## 🧾 Expected Input Format Example

```json
{
  "studentId": "ST123",
  "name": "Ria Mehta",
  "email": "ria.mehta@example.com",
  "phone": "+919123456789",
  "dueDate": "2025-08-10",
  "amount": "₹5000",
  "reminderSent": "No"
}
```

## 🚀 Key Features

- ⏰ **Scheduled Daily Execution** – Fully automated at 8 AM
- 🧮 **Due-Date Filtering** – Only targets fees due in the next 3 days
- 💬 **Multi-Channel Notifications** – Sends reminders via both Email and WhatsApp
- 🔗 **Secure Payment Links** – Auto-generated for each student
- 🔄 **Reminder Tracking** – Prevents duplicate reminders by updating status

## ⚙️ Quick Setup Guide

1. Import the workflow JSON into your n8n instance.
2. Configure the schedule in the "Daily Fee Check" node (default: 8 AM).
3. Set the Excel file path in the "Read Pending Fees" node.
4. Update your fee processing logic in the "Process Fee Reminders" node.
5. Add email credentials in the "Send Email Reminder" node.
6. Integrate the WhatsApp provider API in the "Send message" node.
7. Define how you generate secure payment links.
8. Test with sample data and activate the workflow.

## 🛠️ Parameters to Configure

| Parameter | Description |
| ------------------ | ------------------------------------------ |
| excel_file_path | Path to the fee tracking Excel file |
| smtp_host | SMTP server for sending email reminders |
| smtp_user | Email username |
| smtp_password | Email password |
| whatsapp_api_key | WhatsApp API key for sending messages |
| payment_api_url | URL for generating payment links |
| admin_email | (Optional) Admin email for error reporting |
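The "Process Fee Reminders" step can be expressed as a Code node that keeps only unpaid fees due within the next 3 days and not yet reminded. This is a minimal sketch, assuming the field names from the input example above; time-zone handling may need adjusting for your locale.

```javascript
// Hypothetical n8n Code node (Run Once for All Items) implementing the
// "Process Fee Reminders" filter. Field names follow the input example above.
const DAYS_AHEAD = 3;
const startOfToday = new Date();
startOfToday.setHours(0, 0, 0, 0);
const cutoff = new Date(startOfToday.getTime() + DAYS_AHEAD * 24 * 60 * 60 * 1000);

return $input.all().filter((item) => {
  const { dueDate, reminderSent } = item.json;
  const due = new Date(dueDate); // "YYYY-MM-DD" parses as UTC midnight
  // Keep rows that are due between today and the cutoff and not yet reminded.
  return reminderSent === "No" && due >= startOfToday && due <= cutoff;
});
```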
by Intuz
This n8n template from Intuz provides a complete and automated solution for hyper-personalized email outreach. It powerfully combines AI with Gmail and Google Sheets, using specific keywords and prospect data to automatically craft unique, compelling email content that boosts engagement and secures more replies.

Instead of manually replying to every lead or inquiry, this template does the heavy lifting for you, ensuring every response is relevant, thoughtful, and timely. It reads each person's unique inquiry, uses OpenAI to craft a perfectly tailored and human-like response, and sends it directly from your Gmail account. Ideal for sales, marketing, and customer support teams looking to boost engagement and save hours of manual work.

## Use Cases

- **Sales Teams:** Instantly follow up with new leads from your website's contact form with a personalized touch.
- **Customer Support:** Provide initial, intelligent responses to support tickets, answering common questions or acknowledging receipt of a complex issue.
- **Marketing Automation:** Nurture leads by responding to content downloads or webinar sign-ups with relevant, non-generic information.
- **Founders & Solopreneurs:** Manage all incoming business inquiries (partnerships, media, etc.) efficiently without sacrificing quality.

## How It Works

1. **Trigger the Flow (Manual):** Start the automation whenever you're ready to process a new batch of inquiries from your sheet.
2. **Fetch Inquiries from Google Sheets:** The workflow connects to your specified Google Sheet and reads each row. It pulls the contact's First Name, Email ID, the Inquiry Intent (e.g., "Demo Request," "Pricing Inquiry"), and the full text of their Original Inquiry.
3. **Sync Your Signature:** Before writing the email, an HTTP Request node dynamically fetches your display name from your Gmail account settings. This ensures the signature in the generated email (Thanks, {{Your Name}}) is always accurate.
4. **Craft a Hyper-Personalized Reply with AI:** The OpenAI node uses this context to generate a high-quality, professional, and friendly email reply in HTML format. For example: if the intent is "Technical Support," the AI will generate a helpful, empathetic response addressing the technical issue; if the intent is "Partnership Proposal," it will draft a professional reply acknowledging the proposal and outlining the next steps.
5. **Send via Gmail:** The final node takes the AI-generated message, adds a relevant subject line (e.g., "Re: Your Demo Request"), and sends it directly to the contact's email address from your connected Gmail account.

This process loops for every single row in your Google Sheet, turning a list of names into a series of meaningful conversations. A sketch of the signature-sync call follows the setup instructions below.

## Setup Instructions

To get this workflow running, you'll need to configure a few things:

**Credentials:**
- Google: Connect your Google account via OAuth2 and ensure you have enabled access for Google Sheets, Google Drive, and Gmail.
- OpenAI: Add your OpenAI API key as a credential.

**Google Sheet Setup:** Create a Google Sheet with the following exact column headers:
- First Name
- Email ID
- Inquiry Intent (a short category like "Demo Request", "Billing Issue", etc.)
- Original Inquiry (the full text of the email or message you received)

**Node Configuration:**
- Get row(s) in sheet: Select your Google Sheet document and the specific sheet name.
- Message a model (OpenAI): Choose your preferred OpenAI model (e.g., gpt-4-turbo, gpt-3.5-turbo).
- HTTP Request & Send Personalized emails: These nodes should automatically use your configured Gmail credentials. No changes are typically needed.
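For orientation, the signature-sync step boils down to one authenticated GET against Gmail's send-as settings. In the template this is an HTTP Request node using the Gmail OAuth2 credential; the sketch below shows an equivalent call in Code-node style. Treat the exact response fields and the `accessToken` variable as assumptions and verify against your own account.

```javascript
// Hypothetical sketch of the "Sync Your Signature" step.
// In the template this is an HTTP Request node; shown here as code for clarity.
const res = await fetch(
  "https://gmail.googleapis.com/gmail/v1/users/me/settings/sendAs",
  { headers: { Authorization: `Bearer ${accessToken}` } } // accessToken assumed available
);
const data = await res.json();

// Pick the primary send-as alias and read its display name for the signature line.
const primary = (data.sendAs || []).find((a) => a.isPrimary || a.isDefault) || {};
const displayName = primary.displayName || primary.sendAsEmail;

return [{ json: { displayName } }];
```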
## Connect with us

- Website: https://www.intuz.com/cloud/stack/n8n
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz
by Tomek
## How it works

- Use Telegram to send in new phrases (flashcard front).
- You can also manually input a phrase in the workflow itself.
- ChatGPT generates a description of the provided phrase (in English, but you can change it), including its multiple meanings, and generates examples of using the phrase in a sample sentence (flashcard back). A prompt sketch follows below.

## Steps to set up

- Provide your Telegram bot API key (optional)
- Provide your OpenAI key
- Provide Google Sheets credentials

## How to import flashcards from Google Sheets into Anki

- Use the "Google Sheets to Anki" add-on: 1871608121
- In Anki, simply click Sync Decks and you're done :)

Enjoy
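If you want to tweak what ends up on the flashcard back, the ChatGPT step amounts to a prompt like the one sketched below. The wording and the input field name are assumptions; adjust them in the OpenAI node to match your setup.

```javascript
// Hypothetical sketch of the prompt built for each phrase.
// `phrase` stands for the text received from Telegram or entered manually.
const phrase = $json.phrase ?? $json.text; // field name is an assumption

const prompt = [
  `Explain the phrase "${phrase}" in English for a language-learning flashcard.`,
  "List its different meanings, and for each meaning give one example sentence.",
  "Keep the answer short enough to fit on the back of a flashcard.",
].join(" ");

return [{ json: { phrase, prompt } }];
```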
by Robert Breen
This n8n training workflow demonstrates how to connect a sub-workflow as a tool to an AI Agent. In this example, the main workflow is a Website Chatbot that engages visitors, collects contact information, and sends that data to a CRM process. The CRM process itself is a separate sub-workflow, connected to the agent as a tool via the Tool Workflow node.

## Step-by-Step Setup Instructions

### 1. Create the Sub-Workflow (CRM Tool)

This sub-workflow will be triggered by the AI agent to process collected information. It will:

- Receive inputs (email, description) from the main chatbot workflow.
- Format the data into a structured JSON format.
- Append the data to a Google Sheet (acting as the CRM database).
- Send a confirmation message back to the main workflow.

Steps inside the sub-workflow:

- **When Executed by Another Workflow** – Triggered by the main workflow's tool node.
- **Convert Conversation (Agent)** – Uses OpenAI to extract and format the input into a JSON structure:

```json
{
  "email": "jane.doe@example.com",
  "description": "Wants help automating lead intake and sending Slack notifications."
}
```

- **Structured Output Parser** – Ensures the extracted data matches the expected JSON schema (a sample schema is sketched at the end of this template description).
- **Append row in sheet (Google Sheets)** – Adds the new lead data to your CRM sheet.
- **Code Node** – Returns a simple text confirmation like "Thanks for the info, we will be in touch soon".

Required setup for Google Sheets:

- Enable the Google Sheets API and connect your Google account in n8n.
- Create a sheet with at least the columns email and description.
- Use the sheet's Document ID and tab name in the Google Sheets node.

### 2. Create the Main Workflow (Website Chatbot)

This workflow acts as the main AI Agent handling incoming chat messages.

Steps in the main workflow:

- **When chat message received** – Starts the workflow whenever a visitor sends a message via your chatbot integration.
- **Website Chatbot (Agent Node)** – Configured with a System Message that:
  - Briefly explains your services.
  - Asks the visitor what processes they want to automate.
  - Requests their name and email.
  - Sends collected data to the CRM tool once email and description are available.
- **OpenAI Chat Model** – Connects to the AI agent as its language model.
- **Simple Memory** – Stores short-term context for the ongoing chat.
- **CRM Tool (Tool Workflow Node)** – Points to the sub-workflow created in Step 1, allowing the chatbot to trigger it directly.

### 3. Connecting the Sub-Workflow to the AI Agent

- Add a Tool Workflow node to the main workflow.
- Select "Parameter" as the source.
- Paste in your sub-workflow JSON or select it from your n8n workflows.
- Connect the Tool Workflow node to your AI Agent using the ai_tool connection.
- Give the tool a clear description (e.g., "crm tool to store lead information") so the agent knows when to use it.

### 4. How It Works in Action

1. A visitor sends a message through the chatbot.
2. The AI Agent engages, asks questions, and collects their name, email, and request.
3. Once collected, the agent triggers the CRM Tool.
4. The sub-workflow formats the data, stores it in Google Sheets, and sends a confirmation.
5. The chatbot confirms with the visitor that their request was received.

### 5. Customization Ideas

- Replace Google Sheets with your actual CRM API.
- Add validation to ensure the email format is correct before saving.
- Expand the CRM tool to send a Slack or email notification after storing the lead.

Created by Robert A. – Ynteractive
- Website: https://ynteractive.com
- Email: robert@ynteractive.com
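A possible JSON schema for the Structured Output Parser, matching the example object above. This is an illustrative sketch, not necessarily the exact schema shipped with the template.

```json
{
  "type": "object",
  "properties": {
    "email": { "type": "string" },
    "description": { "type": "string" }
  },
  "required": ["email", "description"]
}
```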
by Jesse Davids
## Workflow Documentation

**Description:** This workflow is designed to optimize prompts by enhancing user inputs for clarity and specificity using AI. The workflow takes a user-provided prompt as input and uses a Natural Language Processing (NLP) model to refine and improve the prompt. The optimized prompt is then sent back to the user, ready for use in further workflows or processes.

**Setup:** This workflow is suitable for users who want to improve their prompts for better communication and understanding in their workflows. The workflow utilizes an AI Agent powered by an OpenAI Chat Model to enhance user prompts.

**Expected Outcomes:**
- Users can provide vague or imprecise prompts as input to the workflow.
- The AI Agent will refine and optimize the prompt, adding clarity and specific details.
- The optimized prompt will be delivered back to the user via Telegram or can serve as input for the next nodes.

**Extra Information:**
- A. A Telegram node is used to deliver the optimized prompt back to the user.
- B. Ensure you have the necessary credentials set up for Telegram and OpenAI accounts.
- C. Customize the workflow's settings, such as the AI model used for prompt optimization, to suit your requirements.
- D. Activate the workflow once all configurations are set to start optimizing prompts efficiently.
by Abdullah
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Overview

This workflow automates the process of transcribing audio files and summarizing them using OpenAI models, with the final output stored neatly in Notion. Whether you're a researcher, content creator, student, or professional, this automation saves time by converting voice recordings into actionable summaries with zero manual effort.

Created by: Abdullah Dilshad
Contact: iamabdullahdilshad@gmail.com

## Who It's For

This template is ideal for:

- **Researchers:** Transcribe and summarize interviews, lectures, or research recordings.
- **Content Creators:** Convert podcasts or videos into transcripts and social captions/show notes.
- **Students:** Automatically turn lectures or study-group audio into summarized notes.
- **Professionals:** Log meeting notes and summaries directly into your Notion workspace.

## How It Works

This four-step workflow performs the following:

- **Step 1: Trigger – New Audio in Google Drive** – Automatically triggers when a new audio file (MP3/WAV) is uploaded to a specified Google Drive folder. The file is then downloaded for processing.
- **Step 2: Transcribe Audio with Whisper** – The audio file is sent to OpenAI's Whisper model for high-accuracy transcription.
- **Step 3: Summarize Transcript with GPT-4** – The transcript is passed to GPT-4, which generates a clean, concise summary.
- **Step 4: Store Summary in Notion** – A new Notion page is created with the generated summary and optional metadata (file name, upload time, etc.).

## Setup Instructions

- **Step 1: Google Drive Trigger** – Connect your Google Drive account and select the folder you want to monitor. This node detects new file uploads and passes the file on for download.
- **Step 2: Download File** – Downloads the new audio file for transcription.
- **Step 3: Transcribe Recording (OpenAI Whisper)** – Connect your OpenAI API key. Ensure this node receives the binary audio file; it will return the transcription as plain text.
- **Step 4: Summarize Transcript (GPT-4 via AI Agent)** – Use your OpenAI API key. Configure a summarization prompt like: "Summarize the following transcript in a clear and concise manner:" and connect the output from Whisper into this GPT-4 prompt (see the sketch below).
- **Step 5: Notion Integration** – Connect your Notion account, choose or create a database to store summaries, and map the GPT output (summary) to a "Text" or "Rich Text" property. Optionally include metadata like the filename and file upload date.
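A minimal sketch of the Step 4 prompt assembly, assuming the Whisper output arrives in a `text` field; adjust the field name to whatever your transcription node actually returns.

```javascript
// Hypothetical sketch of the summarization prompt fed to GPT-4 in Step 4.
// Assumes the Whisper transcription arrives as $json.text.
const transcript = $json.text ?? "";

const prompt =
  "Summarize the following transcript in a clear and concise manner:\n\n" +
  transcript;

// Pass the prompt on to the GPT-4 / AI Agent node; the summary it returns
// is what Step 5 maps into the Notion page.
return [{ json: { prompt } }];
```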
by Muhammad Farooq Iqbal
This n8n template demonstrates how to automate the creation of high-quality visual content using AI. The workflow takes simple titles from a Google Sheets spreadsheet, generates detailed artistic prompts using AI, creates photorealistic images, and manages the entire process from data input to final delivery.

Use cases are many: perfect for digital marketers, content creators, social media managers, e-commerce businesses, advertising agencies, and anyone needing consistent, high-quality visual content for marketing campaigns, social media posts, or brand materials!

## Good to know

- The Gemini 2.0 Flash Exp image generation model used in this workflow may have geo-restrictions.
- The workflow processes one image at a time to ensure quality and avoid rate limiting.
- Each generated image maintains high consistency with the source prompt and shows minimal AI artifacts.

## How it works

1. **Automated Trigger:** A schedule trigger runs every minute to check for new entries in your Google Sheets spreadsheet.
2. **Data Retrieval:** The workflow fetches rows from your Google Sheets document, specifically looking for entries with "pending" status.
3. **AI Prompt Generation:** Using Google Gemini, the workflow takes simple titles and transforms them into detailed, artistic prompts for image generation. The AI considers:
   - Specific visual elements, styles, and compositions
   - Natural poses, interactions, and environmental context
   - Lighting conditions and mood settings
   - Brand consistency and visual appeal
   - Proper aspect ratios for different platforms
4. **Text Processing:** A code node ensures proper JSON formatting by escaping newlines and maintaining clean text structure (a sketch of this step follows the customization list below).
5. **Image Generation:** Gemini's advanced image generation model creates photorealistic images based on the detailed prompts, ensuring high-quality, consistent results.
6. **File Management:** Generated images are automatically uploaded to a designated folder in Google Drive with organized naming conventions.
7. **Public Sharing:** Images are made publicly accessible with read permissions, enabling easy sharing and embedding.
8. **Database Update:** The workflow completes by updating the Google Sheets with the generated image URL and changing the status from "pending" to "posted", creating a complete audit trail.

## How to use

1. **Setup:** Ensure you have the required Google Sheets document with columns for ID, prompt, status, and imageUrl.
2. **Configuration:** Update the Google Sheets document ID and folder IDs in the respective nodes to match your setup.
3. **Activation:** The workflow is currently inactive - activate it in n8n to start processing.
4. **Data Input:** Simply add new rows to your Google Sheets with titles and set status to "pending" - the workflow will automatically process them.
5. **Monitoring:** Check the Google Sheets for updated status and image URLs to track progress.

## Requirements

- **Google Gemini API** account for LLM and image generation capabilities
- **Google Drive** for file storage and management
- **Google Sheets** for data input and tracking
- **n8n instance** with proper credentials configured

## Customizing this workflow

- **Content Variations:** Try different visual styles, seasonal themes, or trending designs by modifying the AI prompt in the LangChain agent.
- **Output Formats:** Adjust the aspect ratio or image specifications for different platforms (Instagram, Pinterest, TikTok, Facebook ads, etc.).
- **Integration Options:** Replace the schedule trigger with webhooks for real-time processing, or add notification nodes for status updates.
- **Batch Processing:** Modify the limit node to process multiple items simultaneously, though be mindful of API rate limits.
- **Quality Control:** Add additional validation nodes to ensure generated images meet quality standards before uploading.
- **Analytics:** Integrate with analytics platforms to track image performance and engagement metrics.

This workflow provides a complete solution for automated visual content creation, perfect for businesses and creators looking to scale their visual content production while maintaining high quality and consistency across all marketing materials.
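For the text-processing step, a Code node along these lines keeps the generated prompt safe to embed in a JSON request body. It is a sketch under the assumption that the prompt arrives in a `prompt` field; adapt the field name to your sheet and nodes.

```javascript
// Hypothetical sketch of the "Text Processing" Code node.
// Escapes newlines and stray quotes so the prompt can be embedded safely
// in the JSON body of the image-generation request.
return $input.all().map((item) => {
  const raw = item.json.prompt ?? ""; // field name is an assumption
  const cleaned = raw
    .replace(/\\/g, "\\\\")   // escape backslashes first
    .replace(/"/g, '\\"')     // escape double quotes
    .replace(/\r?\n/g, "\\n") // turn real newlines into literal \n
    .trim();
  return { json: { ...item.json, prompt: cleaned } };
});
```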
by Yulia
Create a Telegram bot that combines advanced AI functionalities with LangChain nodes and new tools. Nodes as tools and the HTTP Request tool are a new n8n feature that extends the custom workflow tool and simplifies your setup.

We used the workflow tool in the previous Telegram template to call the DALL-E-3 model. In the new version, we've achieved similar results using the HTTP Request tool and the Telegram node tool instead. The main difference is that the Telegram bot becomes more flexible: the LangChain Agent node can decide which tool to use and when. In the previous version, all steps inside the custom workflow tool were executed sequentially.

⚠️ Note that you'd need to select the Tools Agent to work with the new tools.

Before launching the template, make sure to set up your OpenAI and Telegram credentials.

Here's how the new Telegram bot works:

- **Telegram Trigger** listens for new messages in a specified Telegram chat. This node activates the rest of the workflow after receiving a message.
- **AI Tool Agent** receives the input text, processes it using the OpenAI model and replies to the user. It addresses users by name and sends image links when an image is requested.
- **The OpenAI GPT-4o model** generates context-aware responses. You can configure the model parameters or swap this node entirely.
- **Window buffer memory** helps maintain context across conversations. It stores the last 10 interactions and ensures that the agent can access previous messages within a session. Conversations from different users are stored in different buffers.
- **The HTTP Request tool** connects with OpenAI's DALL-E-3 API to generate images based on user prompts. The tool is called when the user asks for an image (see the sketch of the request below).
- **Telegram node tool** sends generated images back to the user in a Telegram chat. It retrieves the image from the URL returned by the DALL-E-3 model. This does not happen directly, however: the response from the HTTP Request tool is first stored in the Agent's scratchpad (think of it as short-term memory). In the next iteration, the Agent sends the updated response to the GPT model once again. The GPT model then creates a new tool request to send the image back to the user. To pass the image URL, the tool uses the new $fromAI() expression.
- **Send final reply** sends the final response message created by the agent back to the user on Telegram. Even though the image was already passed to the user, the Agent always stops with a final response that comes from its dedicated output.

⚠️ Note that the Agent may not adhere to the same sequence of actions in 100% of situations. For example, sometimes it could skip sending the file via the Telegram node tool and instead just send a URL in the final reply. If you have a longer series of predefined steps, it may be better to use the "old" custom workflow tool.

This template is perfect as a starting point for building an AI agentic workflow. Take a look at another agentic Telegram AI template that can handle both text and voice messages.
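For orientation, the HTTP Request tool essentially performs a POST to OpenAI's image endpoint (https://api.openai.com/v1/images/generations) with a body like the sketch below, where the prompt value is filled in by the agent via $fromAI(). Treat the exact parameter names as an assumption and check them against the current OpenAI API reference; the generated image URL comes back in the response's data[0].url field, which the Telegram node tool then uses.

```json
{
  "model": "dall-e-3",
  "prompt": "{{ $fromAI('prompt', 'The image the user asked for') }}",
  "n": 1,
  "size": "1024x1024"
}
```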
by Davide
## 1. How it Works

This n8n workflow automates fine-tuning OpenAI models through these key steps:

- **Manual Trigger:** Starts with the "When clicking 'Test workflow'" event to initiate the process, then downloads a .jsonl file from Google Drive.
- **Upload to OpenAI:** Uploads the .jsonl file to OpenAI via the "Upload File" node (with purpose "fine-tune").
- **Create Fine-tuning Job:** Sends a POST request to the endpoint https://api.openai.com/v1/fine_tuning/jobs with:

  ```json
  {
    "training_file": "{{ $json.id }}",
    "model": "gpt-4o-mini-2024-07-18"
  }
  ```

  OpenAI automatically starts training the model based on the provided file.
- **Interaction with the Trained Model:** An "AI Agent" uses the custom model (e.g., ft:gpt-4o-mini-2024-07-18:n3w-italia::XXXX7B) to respond to chat messages.

## 2. Set up Steps

To configure the workflow:

1. **Prepare the Training File:** Create a .jsonl file following the specified syntax (e.g., travel-assistant Q/A examples; an illustrative snippet is shown below). Upload it to Google Drive and update the ID in the "Google Drive" node.
2. **Configure Credentials:**
   - Google Drive: Connect an account via OAuth2 (googleDriveOAuth2Api).
   - OpenAI: Add your API key in the "OpenAI Chat Model" and "Upload File" nodes.
3. **Customize the Model:** In the "OpenAI Chat Model" node, specify the name of your fine-tuned model (e.g., ft:gpt-4o-mini-...). Update the HTTP request body (Create Fine-tuning Job) if needed (e.g., a different base model).
4. **Start the Workflow:** Use the manual trigger ("Test workflow") to begin the upload and training process, then test the model via the "Chat Trigger" (chat messages).
5. **Integrated Documentation:** Follow the instructions in the Sticky Notes to properly format the .jsonl (Step 1) and monitor progress on OpenAI (Step 2, link: https://platform.openai.com/finetune/).

Note: Ensure the .jsonl file adheres to OpenAI's required structure and that credentials are valid.
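For reference, each line of the .jsonl file is a standalone JSON object in OpenAI's chat fine-tuning format. The travel-assistant wording below is only an illustration; follow the exact structure described in the workflow's Sticky Notes.

```json
{"messages": [{"role": "system", "content": "You are a helpful travel assistant."}, {"role": "user", "content": "What documents do I need for a trip to Italy?"}, {"role": "assistant", "content": "For most travelers a valid passport is required; EU citizens can use a national ID card. Check visa rules for your nationality before departure."}]}
{"messages": [{"role": "system", "content": "You are a helpful travel assistant."}, {"role": "user", "content": "When is the best time to visit Sicily?"}, {"role": "assistant", "content": "Late spring (May-June) and early autumn (September-October) offer warm weather without peak-season crowds."}]}
```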
by Udit Rawat
This n8n automation is designed to extract, process, and store content from Notion pages into a Pinecone vector store. Here's a breakdown of the workflow:

1. **Notion – Page Added Trigger:** The automation starts by monitoring for newly added pages in a specific Notion database. It triggers whenever a new page is created, capturing the page's metadata.
2. **Notion – Retrieve Page Content:** Once triggered, the automation fetches the full content of the newly added Notion page, including blocks like text, images, and videos.
3. **Filter Non-Text Content:** The next step filters out non-text content (such as images and videos), ensuring only textual content is processed.
4. **Summarize – Concatenate Notion's blocks content:** The remaining text content is concatenated into a single block of text for easier processing.
5. **Token Splitter:** The concatenated text is then split into manageable tokens, which are chunks of text that can be used for embedding.
6. **Create metadata and load content:** Metadata such as the page ID, creation time, and title are added to the content, making it easy to reference and track (see the sketch below).
7. **Embeddings Google Gemini:** The processed text is passed through a Google Gemini model to generate embeddings, which are numerical representations of the text that capture its semantic meaning.
8. **Pinecone Vector Store:** Finally, the embeddings, along with the content and metadata, are stored in a Pinecone vector store, making it searchable and ready for use in applications like document retrieval or natural language processing tasks.

This workflow ensures that every new page added to the Notion database is processed into a format that can be easily searched and used in machine learning applications. The automation runs every minute to capture new data in real time, providing an up-to-date and searchable vector database of Notion content.

## Use Case

This automation converts Notion pages into vector embeddings and stores them in Pinecone for enhanced search and AI-driven insights. It's ideal for teams using Notion for knowledge management, enabling semantic search and context-based content retrieval. For example, employees can easily find relevant information across documents, and data scientists can use AI models to analyze and summarize the content stored in Notion.
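A minimal sketch of the "Create metadata and load content" step as a Code node, assuming the upstream nodes expose the page id, created_time, a title, and the concatenated text. The actual property paths depend on your Notion database, so treat the field names as assumptions.

```javascript
// Hypothetical sketch of the "Create metadata and load content" step.
// Attaches Notion page metadata to the concatenated text before it is
// split, embedded, and written to Pinecone.
return $input.all().map((item) => {
  const page = item.json;
  return {
    json: {
      pageContent: page.concatenated_text ?? "",    // field name is an assumption
      metadata: {
        pageId: page.id,                            // Notion page ID
        createdTime: page.created_time,             // page creation timestamp
        title: page.name ?? page.title ?? "Untitled",
      },
    },
  };
});
```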