by Trung Tran
📒 Telegram Expense Tracker to Google Sheets with GPT-4.1

👤 Who's it for
This workflow is for anyone who wants to log their daily expenses by simply chatting with a Telegram bot. Ideal for:
- Individuals who want a quick way to track spending
- Freelancers who log receipts and purchases on the go
- Teams or small business owners who want lightweight expense capture

⚙️ How it works / What it does
1. The user sends a text message on Telegram describing an expense (e.g., "Bought coffee for 50k at Highlands").
2. The message format is validated:
   - If the message is text, it proceeds to GPT-4.1 Mini for processing.
   - If it is not text (e.g., an image or file), the bot sends a fallback message.
3. OpenAI GPT-4.1 Mini parses the message and returns:
   - relevant: true/false
   - expense_record: structured fields (date, amount, currency, category, description, source)
   - message: a friendly confirmation or fallback
4. If valid:
   - The bot replies with a fun acknowledgment
   - The data is saved to a connected Google Sheet
5. If invalid:
   - A fallback message is sent to encourage proper input

🛠️ How to set up
1. Telegram Bot Setup
   - Create a bot using BotFather on Telegram
   - Copy the bot token and paste it into the Telegram Trigger node
2. Google Sheet Setup
   - Create a Google Sheet with these columns: Date | Amount | Currency | Category | Description | SourceMessage
   - Share the sheet with your n8n service account email
3. OpenAI Configuration
   - Connect the OpenAI Chat Model node using your OpenAI API key
   - Use GPT-4.1 Mini as the model
   - Apply a system prompt that extracts structured JSON with: relevant, expense_record, and message
4. Add Parser
   - Use the Structured Output Parser node to safely parse the JSON response
5. Conditional Logic Nodes
   - Is text message? Checks if the message is in text format
   - Supported scenario? Checks if relevant = true in the LLM response
6. Final Actions
   - If relevant: send a confirmation via Telegram and append a row to the Google Sheet
   - If not relevant: send a fallback message via Telegram

✅ Requirements
- Telegram bot token
- OpenAI GPT-4.1 Mini API access
- n8n instance (self-hosted or cloud)
- Google Sheet with access granted to n8n
- Basic understanding of n8n node configuration

🧩 How to customize the workflow

| Feature | How to Customize |
|----------------------------------|-------------------------------------------------------------------|
| Add multi-currency support | Update the system prompt to detect and extract different currencies |
| Add more categories | Modify the list of categories in the system prompt |
| Track multiple users | Add a username or chat ID column to the Google Sheet |
| Trigger alerts | Add Slack, Email, or Telegram alerts for specific expense types |
| Weekly summaries | Use a Cron node + Google Sheet query + Telegram message |
| Visual dashboards | Connect the sheet to Looker Studio or Google Data Studio |

Built with 💬 Telegram + 🧠 GPT-4.1 Mini + 📊 Google Sheets + ⚡ n8n
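To make the parsing and branching steps concrete, here is a minimal sketch of the shape of JSON the GPT-4.1 Mini prompt and Structured Output Parser are expected to produce, together with the check the "Supported scenario?" node performs. The field values and dict handling below are invented examples for illustration, not output from the actual workflow.

```python
# Illustrative shape of the parsed LLM output and the branch logic applied to it.
# All values are made-up examples; your system prompt may format fields differently.
parsed = {
    "relevant": True,
    "expense_record": {
        "date": "2024-05-12",
        "amount": 50000,
        "currency": "VND",
        "category": "Food & Drink",
        "description": "Coffee at Highlands",
        "source": "Bought coffee for 50k at Highlands",
    },
    "message": "Got it! 50,000 VND at Highlands logged under Food & Drink.",
}

if parsed["relevant"]:
    # Supported scenario: send the fun acknowledgment and append expense_record as a sheet row.
    row = parsed["expense_record"]
else:
    # Unsupported scenario: send the fallback message asking for a proper expense description.
    row = None
```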
by WeWeb
This n8n template helps you build a full AI-powered LinkedIn content generator with just a few clicks. Paired with the free WeWeb UI template, it becomes a ready-to-use web app where users can:
- Add their own OpenAI API key
- Customize the prompt and define 6 content topics
- Edit the AI-generated topics
- Choose when to generate LinkedIn posts, complete with hashtags and an optional image

Who This Is For
Perfect for marketers, indie hackers, and solopreneurs who want to build their personal brand on LinkedIn while staying in control of what gets posted.

🧠 What Makes This Different
Unlike most AI agents, you stay fully in control:
- You define the tone and focus via the prompt.
- You choose which topics to keep or modify.
- You decide when to generate a post.
- You can build on top of this and create your own SaaS product.
It's also modular and extendable: hook it up to your backend, add user login, or feed AI improvements based on user input.

⚙️ How It Works
1. Triggering Events: The app includes 3 pre-configured triggers, ready to be hooked into your WeWeb frontend. Just update the webhook URLs after duplicating the n8n workflow.
2. Topic Generation: A call is made to OpenAI (GPT-4) to generate topic ideas based on your prompt.
3. Post Creation: Once topics are approved or edited, GPT-4 writes full posts with suggested hashtags.
4. Image Generation (Optional): If enabled, a DALL·E call generates a relevant image.
5. Everything Stays Local: All data and images are handled locally; no cloud storage setup is needed.

🧪 Requirements & Setup
No fancy infrastructure required. Here's what helps you get started:
- Free WeWeb account (recommended) to use the frontend UI template
- OpenAI account with API access (for GPT-4 and DALL·E)
- n8n account (self-hosted or cloud) to run the backend workflow
The template is completely free to use. Since each user adds their own OpenAI API key, you don't need to worry about usage costs or rate limits on your end.

🔧 Want to Go Further?
This setup is beginner-friendly, but developers can:
- Add user accounts
- Save post history
- Feed user feedback back into the prompt logic
- Launch their own branded version as a SaaS
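As an illustration of the topic-generation step, here is a minimal sketch of the kind of OpenAI call the workflow makes. The prompt wording, topic count, and function name are assumptions for demonstration, not the template's exact node configuration.

```python
# Minimal sketch of the topic-generation call (assumed prompt wording and settings).
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # each user supplies their own key

def generate_topics(brand_prompt: str, count: int = 6) -> list[str]:
    """Ask GPT-4 for LinkedIn content topics; expects one topic per line back."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You suggest concise LinkedIn content topics."},
            {"role": "user", "content": f"{brand_prompt}\n\nReturn exactly {count} topics, one per line."},
        ],
    )
    text = response.choices[0].message.content
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

print(generate_topics("I write about no-code automation for solo founders."))
```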
by David Roberts
This workflow allows you to ask questions about the data in a Google Sheet over a chat interface. It uses n8n's built-in chat, but could be modified to work with Slack, Teams, WhatsApp, etc. Behind the scenes, the workflow uses GPT-4, so you'll need an OpenAI API key that supports it.

How it works
The workflow uses an AI agent with custom tools that call a sub-workflow. That sub-workflow reads the Google Sheet and returns information from it. Because models have a context window (and therefore a maximum number of characters they can accept), we can't pass the whole Google Sheet to GPT, at least not for big sheets. So we provide three ways of querying less data, which can be used in combination to answer questions. Those three functions are:
- List all the columns in the sheet
- Get all values of a single column
- Get all values of a single row

Note that to use this template, you need to be on n8n version 1.19.4 or later.
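To show why these three narrow queries keep the context small, here is a rough sketch of what the sub-workflow's tools boil down to, assuming the sheet has already been fetched as a list of row dictionaries. The function names and sample rows are illustrative, not the template's actual node names.

```python
# Sketch of the three query tools over a sheet already fetched as rows of dicts.
from typing import Any

def list_columns(rows: list[dict[str, Any]]) -> list[str]:
    """Return only the header names, so the model learns the sheet's shape cheaply."""
    return list(rows[0].keys()) if rows else []

def get_column(rows: list[dict[str, Any]], column: str) -> list[Any]:
    """Return every value of one column instead of the whole sheet."""
    return [row.get(column) for row in rows]

def get_row(rows: list[dict[str, Any]], index: int) -> dict[str, Any]:
    """Return a single row by zero-based index."""
    return rows[index]

rows = [
    {"Name": "Alice", "Region": "EU", "Revenue": 1200},
    {"Name": "Bob", "Region": "US", "Revenue": 800},
]
print(list_columns(rows))           # ['Name', 'Region', 'Revenue']
print(get_column(rows, "Revenue"))  # [1200, 800]
print(get_row(rows, 1))             # {'Name': 'Bob', 'Region': 'US', 'Revenue': 800}
```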
by Darien Kindlund
If you have multiple users managing workflows, there may come a time when a user "accidentally" turns off a workflow. Or, if you have workflows that automatically turn off other workflows, that code might "accidentally" turn off the wrong one. In either case, here's a workflow that can attempt to "auto-start" accidentally disabled workflows.

How it works:
- Once activated, the workflow runs every 4 hours and searches all other workflows for the auto_resume:true tag.
- If any other workflow has auto_resume:true set but is currently turned off, this workflow turns it back on.

Of course, this watchdog won't work if the watchdog workflow itself is turned off. That said, we've found it useful in recovering from accidental actions that cause production workflows to be turned off.
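For readers who prefer to see the idea in code, here is a rough sketch of the same watchdog logic expressed against the n8n public REST API. The base URL, the API key header, and especially the tag-filter query parameter are assumptions to verify against the API documentation for your n8n version; the workflow itself uses n8n's built-in nodes rather than raw HTTP calls.

```python
# Rough sketch of the watchdog logic via the n8n public REST API.
# Assumptions: an API key with workflow permissions, and that your n8n version
# supports filtering GET /workflows by tag name; verify both in your API docs.
import requests

N8N_URL = "https://your-n8n-instance.example.com/api/v1"   # hypothetical base URL
HEADERS = {"X-N8N-API-KEY": "YOUR_N8N_API_KEY"}

def resume_tagged_workflows(tag: str = "auto_resume:true") -> None:
    # List workflows carrying the watchdog tag.
    resp = requests.get(f"{N8N_URL}/workflows", headers=HEADERS, params={"tags": tag})
    resp.raise_for_status()
    for wf in resp.json().get("data", []):
        if not wf.get("active"):
            # Turn the accidentally disabled workflow back on.
            requests.post(f"{N8N_URL}/workflows/{wf['id']}/activate", headers=HEADERS).raise_for_status()
            print(f"Re-activated workflow {wf['id']} ({wf.get('name')})")

resume_tagged_workflows()
```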
by Mario
Purpose
This workflow lets you listen to your recent favorites offline in very high quality without sacrificing all of your storage.

How it works
The workflow automatically creates a playlist in Spotify named "Downloads", which is periodically updated so it always contains only a defined number of the latest liked songs. You can then enable automatic downloads for the Downloads playlist only, and thus free up space on the device.

Setup
The workflow is ready to go. Just select your Spotify credentials and activate the workflow. In Spotify, enable automatic downloads on the automatically created Downloads playlist after the first workflow run.

Current limitations
This setup currently supports a maximum of 50 songs in the Downloads playlist. This is due to the payload limits defined by Spotify, encountered in the Get liked songs node. Implementing batching would solve the issue.
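Here is a hedged sketch of the sync step the workflow performs, expressed directly against the Spotify Web API: fetch the latest liked songs and overwrite the Downloads playlist with them. The access token and playlist ID are placeholders you would supply yourself; the 50-track cap mirrors the per-request limit of the saved-tracks endpoint mentioned above.

```python
# Sketch of the sync step: copy the latest liked songs into the "Downloads" playlist.
import requests

TOKEN = "YOUR_SPOTIFY_ACCESS_TOKEN"          # placeholder OAuth token
PLAYLIST_ID = "YOUR_DOWNLOADS_PLAYLIST_ID"   # placeholder playlist ID
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def sync_downloads(limit: int = 50) -> None:
    # Fetch the most recently liked tracks (the endpoint returns at most 50 per request).
    liked = requests.get(
        "https://api.spotify.com/v1/me/tracks",
        headers=HEADERS, params={"limit": limit},
    )
    liked.raise_for_status()
    uris = [item["track"]["uri"] for item in liked.json()["items"]]

    # Replace the playlist contents so it only ever holds the latest liked songs.
    requests.put(
        f"https://api.spotify.com/v1/playlists/{PLAYLIST_ID}/tracks",
        headers=HEADERS, json={"uris": uris},
    ).raise_for_status()

sync_downloads()
```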
by Arunava
This n8n workflow automates replying to Google Play Store reviews using AI. It analyzes each review's sentiment and tone and posts a human-like response, saving time for indie devs, founders, and PMs managing multiple apps.

💡 Use Cases
- Respond to reviews at scale without sounding robotic
- Prioritize negative-sentiment feedback
- Maintain consistent tone and support messaging
- Free up time for teams to focus on product instead of ops

🧠 How it works
1. Uses the Play Store API to fetch new app reviews
2. Filters out reviews that have already been replied to
3. Analyzes sentiment using OpenAI GPT-4o
4. Passes the sentiment and review context to an AI Agent node that crafts a reply
5. Posts the reply to the Play Store via the Google API
6. (Optional) Logs the reply to Slack for visibility

🛠️ Setup Instructions (sticky notes included in the workflow)
1. HTTP Request Node
   - Replace the package name with your app's package ID
   - Add Google Service Account credentials: create them in the Google Cloud Console with access to the Play Console, then add them to the n8n Credential Manager
2. OpenAI Node
   - Add your OpenAI API key (GPT-4o or GPT-4o mini supported)
   - Customize the model or instructions if needed
3. AI Agent Node
   - Modify the prompt to reflect your app name, tone, and feature set (e.g., polite, witty, casual, support-friendly)
   - You can add reply conditions or logic for different types of reviews
4. Slack Node (Optional)
   - Configure Slack Webhook or OAuth credentials if you want reply logs
   - Otherwise, delete the node to simplify the workflow

⚡ Requirements
- Google Play Developer Console access
- Google Cloud project with a service account
- OpenAI account (GPT-4o or mini)
- (Optional) Slack workspace & app for logging

🙌 Don't want to set this up yourself? I'll do it for you. Just drop me an email: imarunavadas@gmail.com
Let's automate the boring stuff so you can focus on growth. 🚀
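For orientation, here is a rough sketch of the fetch-and-reply calls against the Google Play Developer API (v3) that the workflow's HTTP nodes correspond to. It assumes you have already exchanged your service account credentials for an access token; the package name and reply text are placeholders, and exact request details should be checked against the Play Developer API docs.

```python
# Sketch of fetching recent reviews and posting a reply via the Play Developer API (v3).
import requests

ACCESS_TOKEN = "YOUR_SERVICE_ACCOUNT_ACCESS_TOKEN"  # placeholder token
PACKAGE = "com.example.yourapp"                     # placeholder package ID
BASE = f"https://androidpublisher.googleapis.com/androidpublisher/v3/applications/{PACKAGE}"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def fetch_reviews() -> list[dict]:
    """Return recent reviews for the app (the API only surfaces recent reviews)."""
    resp = requests.get(f"{BASE}/reviews", headers=HEADERS)
    resp.raise_for_status()
    return resp.json().get("reviews", [])

def reply_to_review(review_id: str, text: str) -> None:
    """Post a developer reply to a single review."""
    resp = requests.post(
        f"{BASE}/reviews/{review_id}:reply",
        headers=HEADERS, json={"replyText": text},
    )
    resp.raise_for_status()

for review in fetch_reviews():
    reply_to_review(review["reviewId"], "Thanks for the feedback! We're on it.")
```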
by MRJ
:car: Business Value Proposition
Accelerates ISO 26262 compliance for automotive/industrial systems by automating safety analysis while maintaining rigorous audit standards.

:gear: How It Works

```mermaid
graph TD
    A[Engineer uploads system description] --> B(LLM identifies hazards)
    B --> C(LLM scores risks per ISO 26262)
    C --> D(Generates mitigation strategies)
    D --> E(Produces audit-ready reports)
```

:chart_with_upwards_trend: Key Benefits
Time
- 50-70% faster than manual HAZOP/FMEA sessions
- Instant report generation vs. weeks of documentation

Risk Mitigation
- Pre-validated templates reduce human error
- Auto-generated traceability simplifies audits

:warning: Governance Controls
- Human-in-the-loop: all LLM outputs require engineer sign-off
- Version tracking: full history of modifications
- Audit mode: export all decision rationales

:computer: Technical Requirements
- Runs on existing n8n instances
- Docker deployment (<1 hr setup)
- Integrates with JAMA/DOORS (optional)

:wrench: Setup and Usage
Prerequisites
- Docker (Install Guide)
- Docker Compose (Install Guide)
- n8n instance (Free Self-Hosted or Cloud - Paid)
- OpenAI API key (Get Key)

Enterprise-ready deployment: when supported by IT infrastructure teams, this solution transforms into a scalable AI safety assistant, providing real-time HARA guidance akin to engineering co-pilot tools.

:arrow_down: Installation and :play_or_pause_button: Running the Workflow
For installation procedures and usage of the workflow, refer to the repository.

:warning: Validation & Limitations

AI-Assisted Analysis Considerations

| Advantage | Mitigation Strategy | Implementation Example |
|-----------|---------------------|------------------------|
| Rapid hazard identification | Human validation layer | Manual review nodes in workflow |
| Consistent S/E/C scoring | Rule-based validation | ASIL-D → Redundancy check |
| Edge case coverage | Cross-reference with historical data | Integration with incident databases |

Critical Validation Steps
1. AI Output Review node in n8n. Example (as code):

```json
{
  "type": "function",
  "parameters": {
    "functionCode": "if ($input.item.json.ASIL === 'D' && !$input.item.json.redundancy) throw new Error('ASIL D requires redundancy');"
  }
}
```

2. Version Control
   - Prompt versions tied to ISO standard editions (e.g., ISO26262:2018-v1.2)
   - Git-tracked changes to ai_models/training_data/

3. Audit Trails
   Log structure for audit trails:

```
/logs/
└── YYYY-MM-DD/
    ├── hazards_approved.log
    └── hazards_rejected.log
```
by n8n Team
This workflow creates/updates ClickUp tasks when Notion database pages are created/updated. All fields in the Notion database are mapped to a ClickUp property. The Notion database will require setup before the workflow can be used. See the list of fields available in the setup below.

Prerequisites
- Notion account and Notion credentials.
- ClickUp account and ClickUp credentials.

How it works
When a new database page is created in Notion, the workflow creates a new task in ClickUp with all required fields. The new ClickUp task's ID is saved in the Notion database page's "ClickUp ID" field. Then, when the database page is updated in Notion, the workflow updates the specific ClickUp task identified by the "ClickUp ID" field in Notion.

Setup
This workflow requires that you set up a Notion database. To do so, follow the steps below:
1. In Notion, create a new database.
2. Add the following columns to the database:
   - Task name (renamed from "Name")
   - Status (with type "Select" and the following options: "to do", "in progress", "review", "revision", "complete")
   - Deadline (with type "Date")
   - ClickUp ID (with type "Text")
   - Any other fields you require.
3. Share the database to n8n.
By default, the workflow will fill all the fields provided above, except for any other additional fields you add.
by Jimleuk
This n8n template is one of a 3-part series exploring use-cases for clustering vector embeddings:
- Survey Insights
- Customer Insights
- Community Insights

This template demonstrates the Survey Insights scenario, where survey participant responses can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, researchers can save days and even weeks of work breaking down cohorts of participants and identifying frequently mentioned positives and negatives.

Sample Output: https://docs.google.com/spreadsheets/d/e/2PACX-1vT6m8XH8JWJTUAfwojc68NAUGC7q0lO7iV738J7aO5fuVjiVzdTRRPkMmT1C4N8TwejaiT0XrmF1Q48/pubhtml#

How it works
1. All survey questions and responses are imported from a Google Sheet.
2. Responses are then inserted into a Qdrant collection, carefully tagged with the question and survey metadata.
3. For each question, all relevant responses are put through a clustering algorithm using the Python Code node. The Qdrant points are returned in clustered groups.
4. Each group is looped over to fetch the payloads of the points and feed them to the AI agent, which summarises them and generates insights.
5. The resulting insights and raw responses are then saved to the Google Spreadsheet for further analysis by the researcher.

Requirements
- Survey data and format as shown in the attached Google Sheet.
- Qdrant vector store for storing embeddings.
- OpenAI account for embeddings and LLM.

Customising the Template
- Adjust the clustering parameters so they make sense for your data.
- Use more clusters for open-ended questions and fewer clusters when responses are multiple choice.
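To illustrate the clustering step, here is a minimal sketch of the kind of grouping the Python Code node might perform, assuming the response embeddings have already been fetched from Qdrant as (point ID, vector) pairs. KMeans and the fixed cluster count are illustrative choices, not necessarily the template's exact algorithm or parameters.

```python
# Minimal clustering sketch over embeddings already fetched from Qdrant.
import numpy as np
from sklearn.cluster import KMeans

def cluster_points(points: list[tuple[str, list[float]]], n_clusters: int = 5) -> dict[int, list[str]]:
    """Group Qdrant point IDs by embedding similarity."""
    ids = [pid for pid, _ in points]
    vectors = np.array([vec for _, vec in points])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=42).fit_predict(vectors)
    groups: dict[int, list[str]] = {}
    for pid, label in zip(ids, labels):
        groups.setdefault(int(label), []).append(pid)
    return groups

# Each group of IDs can then be used to fetch payloads and prompt the AI agent
# to summarise that cluster's responses.
```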
by Yaron Been
This workflow provides automated access to the IBM Granite Speech 3.3 8B AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides a seamless integration for text generation tasks within your n8n automation workflows.

Overview
This workflow automatically handles the complete text generation process using the IBM Granite Speech 3.3 8B model. It manages API authentication, parameter configuration, request processing, and result retrieval with built-in error handling and retry logic for reliable automation.

Model Description: Granite-speech-3.3-8b is a compact and efficient speech-language model, specifically designed for automatic speech recognition (ASR) and automatic speech translation (AST).

Key Capabilities
- Advanced text generation and processing
- Natural language understanding and generation
- Intelligent text manipulation and analysis

Tools Used
- n8n: The automation platform that orchestrates the workflow
- Replicate API: Access to the Ibm Granite/granite-speech-3.3-8b AI model
- IBM Granite Speech 3.3 8B: The core AI model for text generation
- Built-in Error Handling: Automatic retry logic and comprehensive error management

How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Replicate API: Add your Replicate API token to the 'Set API Token' node
3. Customize Parameters: Adjust the model parameters in the 'Set Text Parameters' node
4. Test the Workflow: Run the workflow with your desired inputs
5. Integrate: Connect this workflow to your existing automation pipelines

Use Cases
- Content Writing: Generate articles, blogs, and marketing copy
- Code Generation: Assist with programming and code documentation
- Text Analysis: Process and analyze large volumes of text data
- Automated Communication: Generate responses and communication templates

Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Replicate API: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #textgeneration #nlp #aiwriting #textai #contentgeneration #aitext #machinelearning #artificialintelligence #aitools #automation #digitalart #contentcreation #productivity #innovation
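As a point of reference for what the workflow's Replicate calls boil down to, here is a minimal sketch using the official Replicate Python client. The model slug and the input keys are placeholders to verify on the model's Replicate page; the same pattern applies to the other Replicate model templates in this collection.

```python
# Minimal sketch of calling a Replicate-hosted model, equivalent to what the
# workflow's HTTP nodes do. Slug and input keys are placeholders; check the
# model's Replicate page for its real identifier and parameters.
import os
import replicate  # pip install replicate

os.environ["REPLICATE_API_TOKEN"] = "YOUR_REPLICATE_API_TOKEN"

output = replicate.run(
    "ibm-granite/granite-speech-3.3-8b",          # assumed slug; verify on replicate.com
    input={"audio": open("recording.wav", "rb")},  # placeholder input parameter
)
print(output)
```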
by Yaron Been
This workflow provides automated access to the Lucataco Seed X Ppo AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides a seamless integration for text generation tasks within your n8n automation workflows.

Overview
This workflow automatically handles the complete text generation process using the Lucataco Seed X Ppo model. It manages API authentication, parameter configuration, request processing, and result retrieval with built-in error handling and retry logic for reliable automation.

Model Description: Seed-X-PPO-7B by ByteDance-Seed, a powerful series of open-source multilingual translation language models.

Key Capabilities
- Advanced text generation and processing
- Natural language understanding and generation
- Intelligent text manipulation and analysis

Tools Used
- n8n: The automation platform that orchestrates the workflow
- Replicate API: Access to the Lucataco/seed-x-ppo AI model
- Lucataco Seed X Ppo: The core AI model for text generation
- Built-in Error Handling: Automatic retry logic and comprehensive error management

How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Replicate API: Add your Replicate API token to the 'Set API Token' node
3. Customize Parameters: Adjust the model parameters in the 'Set Text Parameters' node
4. Test the Workflow: Run the workflow with your desired inputs
5. Integrate: Connect this workflow to your existing automation pipelines

Use Cases
- Content Writing: Generate articles, blogs, and marketing copy
- Code Generation: Assist with programming and code documentation
- Text Analysis: Process and analyze large volumes of text data
- Automated Communication: Generate responses and communication templates

Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Replicate API: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #textgeneration #nlp #aiwriting #textai #contentgeneration #aitext #machinelearning #artificialintelligence #aitools #automation #digitalart #contentcreation #productivity #innovation
by Yaron Been
This workflow provides automated access to the Zsxkib Canary Qwen 2.5B AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides a seamless integration for text generation tasks within your n8n automation workflows.

Overview
This workflow automatically handles the complete text generation process using the Zsxkib Canary Qwen 2.5B model. It manages API authentication, parameter configuration, request processing, and result retrieval with built-in error handling and retry logic for reliable automation.

Model Description: 🎤 The best open-source speech-to-text model as of Jul 2025, transcribing audio with record 5.63% WER and enabling AI tasks like summarization directly from speech ✨

Key Capabilities
- Advanced text generation and processing
- Natural language understanding and generation
- Intelligent text manipulation and analysis

Tools Used
- n8n: The automation platform that orchestrates the workflow
- Replicate API: Access to the Zsxkib/canary-qwen-2.5b AI model
- Zsxkib Canary Qwen 2.5B: The core AI model for text generation
- Built-in Error Handling: Automatic retry logic and comprehensive error management

How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Replicate API: Add your Replicate API token to the 'Set API Token' node
3. Customize Parameters: Adjust the model parameters in the 'Set Text Parameters' node
4. Test the Workflow: Run the workflow with your desired inputs
5. Integrate: Connect this workflow to your existing automation pipelines

Use Cases
- Content Writing: Generate articles, blogs, and marketing copy
- Code Generation: Assist with programming and code documentation
- Text Analysis: Process and analyze large volumes of text data
- Automated Communication: Generate responses and communication templates

Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Replicate API: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #textgeneration #nlp #aiwriting #textai #contentgeneration #aitext #machinelearning #artificialintelligence #aitools #automation #digitalart #contentcreation #productivity #innovation