by Robert Breen
Chat to write or reword a blog post. The workflow stores each result in Google Sheets and uses a sub-workflow "Google tool" to count rows per session (your running context). If a session exceeds a row threshold, the flow can branch (e.g., stop or notify).

⚙️ Setup Instructions

1️⃣ Set Up OpenAI Connection
- Go to OpenAI Platform and navigate to OpenAI Billing
- Add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n

2️⃣ Prepare Your Google Sheet
- Connect your data in Google Sheets using this format: Sample Sheet
- Row 1 = column names (e.g., session, Rows, output)
- Data in rows 2–100 (or more if you prefer)
- In n8n, use Google Sheets OAuth2 → pick your Spreadsheet and Worksheet
- (Optional) You can adapt this to Airtable, Notion, or a database

🧠 How It Works
- **Chat Trigger**: Provide a topic (write) or paste existing text (reword).
- **Code Node ("Choose to Write or Edit Blog")**: Builds a system_prompt + user_prompt and instructs the agent to call the Google tool (sub-workflow) with only the sessionid to count existing rows (a minimal sketch of this node follows this entry).
- **Tool Workflow ("google")**: Fetches rows from the sheet → filters by session → summarizes the row count.
- **Agent ("Blog Writer & Editor")**: Returns structured JSON (items/rows, session, blog body).
- **Store (Google Sheets)**: Appends { session, Rows, output } to the sheet.
- **If Node**: Example rule: Rows > 3 → branch/limit/notify as needed.

💬 Example Prompts
- "Write a 600-word blog about n8n agents with 3 bullet takeaways. Session: abc123."
- "Reword this post into a concise LinkedIn article. Session: launchQ3:\n<your text here>"
- "Draft a blog intro and 5 SEO headlines on marketing automation. Session: mkt-01."

📬 Contact
Need help tailoring this to Airtable/Notion/DB, or adding auto-publishing?
📧 rbreen@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
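For orientation, here is a minimal sketch of what a Code node like "Choose to Write or Edit Blog" might return. The field names it reads (chatInput, sessionId) and the prompt wording are illustrative assumptions, not copied from the template; only the output fields (system_prompt, user_prompt, sessionid) follow the description above.

```javascript
// n8n Code node (Run Once for All Items) — illustrative sketch only.
const text = items[0]?.json.chatInput || '';          // assumed Chat Trigger field
const sessionid = items[0]?.json.sessionId || 'default'; // assumed Chat Trigger field

// Crude heuristic: long pasted text => reword, short topic => write.
const mode = text.length > 400 ? 'reword' : 'write';

const system_prompt =
  'You are a blog writer & editor. Before writing, call the "google" tool ' +
  'with only the sessionid to count existing rows for this session. ' +
  'Return structured JSON with: session, Rows, output.';

const user_prompt =
  mode === 'write'
    ? `Write a blog post about: ${text}`
    : `Reword the following post:\n${text}`;

return [{ json: { system_prompt, user_prompt, sessionid, mode } }];
```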
by achiya
How it works
- Courier sends an invoice photo to WhatsApp → AI extracts all details via Google Vision OCR
- Courier sends a payment photo (check, bank transfer, credit card voucher) → AI matches it to the invoice
- AI presents a summary and asks for confirmation
- Once approved, a receipt is created in Rivhit, the invoice is closed, and the PDF is sent back to WhatsApp

Supports cash, checks, credit cards, bank transfers, and split payments. Includes automatic customer lookup by tax ID and Israeli bank code recognition.

Set up steps
Takes about 10 minutes:
- Set up a WAHA instance and point its webhook to this workflow
- Add your Google Cloud Vision API key to the HTTP Request node (a hedged sketch of that request follows this entry)
- Add your Rivhit API token to the "api key" Set node
- Replace the WhatsApp group ID in the Filter node with yours
- Connect your OpenAI credentials
- Activate and start sending photos!

See the sticky notes inside the workflow for detailed instructions.
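As a reference for the OCR step, here is a hedged sketch of the kind of Google Cloud Vision call the HTTP Request node makes. The request shape follows the public images:annotate endpoint; the exact node configuration (and how the base64 image arrives from WAHA) is described in the workflow's sticky notes, so treat the surrounding code as an assumption.

```javascript
// Standalone Node.js 18+ sketch, not the exact node configuration.
const API_KEY = 'YOUR_GOOGLE_VISION_API_KEY';

async function extractInvoiceText(base64Image) {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        requests: [
          {
            image: { content: base64Image },        // the WhatsApp photo, base64-encoded
            features: [{ type: 'TEXT_DETECTION' }], // OCR feature
          },
        ],
      }),
    }
  );
  const data = await res.json();
  // fullTextAnnotation.text holds the complete OCR'd text of the invoice or payment photo.
  return data.responses?.[0]?.fullTextAnnotation?.text ?? '';
}
```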
by Ehsan
Analyze food ingredients from Telegram photos using Gemini and Airtable

🛡️ Personal Ingredient Bodyguard
Turn your Telegram bot into an intelligent food safety scanner. This workflow analyzes photos of ingredient labels sent via Telegram, extracts the text using AI, and cross-references it against your personal database of "Good" and "Bad" ingredients in Airtable. It solves the problem of manually reading tiny, complex labels for allergies or dietary restrictions. Whether you are Vegan, Halal, allergic to nuts, or just avoiding specific additives, this workflow acts as a strict, personalized bodyguard for your diet. It even features a customizable "Persona" (like a Sarcastic Bodyguard) to make safety checks fun.

🎯 Who is it for?
- People with specific dietary restrictions (Vegan, Gluten-free, Keto).
- Individuals with food allergies (Nuts, Dairy, Shellfish).
- Special dietary observers (Halal, Kosher).
- Health-conscious shoppers avoiding specific additives (e.g., E120, Aspartame).

🚀 How it works
- Trigger: You send a photo of a product label to your Telegram Bot.
- Fetch Rules: The workflow retrieves your active "Watchlist" (ingredients to avoid/prefer) and "Persona" settings from Airtable.
- Vision & Logic: It uses an AI Vision model to extract text from the image (OCR) and Google Gemini to analyze the text against your strict veto rules (e.g., "Safe" only if ZERO bad items are found; a minimal sketch of this veto check follows this entry).
- Response: The bot replies instantly on Telegram with a Safe/Unsafe verdict, highlighting detected ingredients using HTML formatting.
- Log: The result is saved back to Airtable for your records.

⚙️ How to set up
This workflow relies on a specific Airtable structure to function as the "Brain."

Set up Airtable
- Sign up for Airtable: Click here
- Copy the required Base: Click here to copy the "Ingredients Brain" base
- Connect Airtable to n8n (5-min guide): Watch Tutorial

Set up Telegram
- Message @BotFather on Telegram to create a new bot and get your API Token.
- Add your Telegram credentials in n8n.

Configure AI
- Add your Google Gemini API credentials.
- Note on OCR: This template is configured to use a local LLM for OCR to save costs (via the OpenAI-compatible node). If you do not have a local model running, simply swap the "OpenAI Chat Model" node for a standard GPT-4o or Gemini Vision node.

📋 Requirements
- **n8n** (Cloud or Self-hosted)
- **Airtable** account (Free tier works)
- **Telegram** account
- **Google Gemini** API Key
- **Local LLM** (Optional, for free OCR) OR **OpenAI/Gemini** Key (for standard Cloud Vision)

🎨 How to customize
- **Change the Persona:** Go to the "Preferences" table in Airtable to change the bot's personality (e.g., "Helpful Nutritionist") and output language.
- **Update Ingredients:** Add or remove items in the "Watchlist" table. Mark them as "Good Stuff" or "Bad Stuff" and set Status to "Active".
- **Adjust Sensitivity:** The AI prompt in the "AI Agent" node is set to strict "Veto" mode (Bad overrides Good). You can modify the system prompt to change this logic.

⚠️ Disclaimer
This tool is for informational purposes only.
- Not Medical Advice: Do not rely on this for life-threatening allergies.
- AI Limitations: OCR can misread text, and AI can hallucinate.
- Verify: Always double-check the physical product label.

Use at your own risk.
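To illustrate the "Veto" logic described above, here is a minimal sketch of the check the AI Agent's prompt enforces. The field names (type, status, name) mirror the Watchlist structure described in this entry, but the function itself is an assumption: in the template the decision is made by the LLM, not by plain code.

```javascript
// Illustrative veto check: the verdict is "Unsafe" if even one active "Bad Stuff"
// watchlist item appears in the OCR'd ingredient text, regardless of good items found.
function vetoCheck(ingredientsText, watchlist) {
  const text = ingredientsText.toLowerCase();
  const active = watchlist.filter(w => w.status === 'Active');
  const badHits = active
    .filter(w => w.type === 'Bad Stuff' && text.includes(w.name.toLowerCase()))
    .map(w => w.name);
  const goodHits = active
    .filter(w => w.type === 'Good Stuff' && text.includes(w.name.toLowerCase()))
    .map(w => w.name);

  return {
    verdict: badHits.length === 0 ? 'Safe' : 'Unsafe', // bad overrides good
    detectedBad: badHits,
    detectedGood: goodHits,
  };
}

// Example:
// vetoCheck('sugar, gelatin, E120', [{ name: 'E120', type: 'Bad Stuff', status: 'Active' }])
// => { verdict: 'Unsafe', detectedBad: ['E120'], detectedGood: [] }
```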
by JJ Tham
Generate AI Voiceovers from Scripts and Upload to Google Drive

This is the final piece of the AI content factory. This workflow takes your text-based video scripts and automatically generates high-quality audio voiceovers for each one, turning your text into ready-to-use audio assets for your video ads. Go from a spreadsheet of text to a folder of audio files, completely on autopilot.

⚠️ CRITICAL REQUIREMENTS (Read First!)
This is an advanced, self-hosted workflow that requires specific local setup:
- **Self-Hosted n8n Only:** This workflow uses the Execute Command and Read/Write Files nodes, which require you to run your own instance of n8n. It will not work on n8n Cloud.
- **FFmpeg Installation:** You must have FFmpeg installed on the same machine where your n8n instance is running. This is used to convert the audio files to a standard format.

👥 Who's it for?
- Video Creators & Marketers: Mass-produce voiceovers for video ads, tutorials, or social media content without hiring voice actors.
- Automation Power Users: A powerful example of how n8n can bridge cloud APIs with local machine commands.
- Agencies: Drastically speed up the production of audio assets for client campaigns.

What it does
This is Part 3 of the AI marketing series. It connects to the Google Sheet where you generated your video scripts (in Part 2). For each script that hasn't been processed, it:
1. Uses the Google Gemini Text-to-Speech (TTS) API to generate a voiceover.
2. Saves the audio file to your local computer.
3. Uses FFmpeg to convert the raw audio into a standard .wav file (a sketch of this conversion step follows this entry).
4. Uploads the final .wav file to your Google Drive.
5. Updates the original Google Sheet with a link to the audio file in Drive and marks the script as complete.

How to set up
IMPORTANT: This workflow is Part 3 of a series and requires the output from Part 2 ("Generate AI Video Ad Scripts"). If you need Part 1 or Part 2 of this workflow series, you can find them for free on my n8n Creator Profile.
1. Connect to Your Scripts Sheet: In the "Getting Video Scripts" node, connect your Google Sheets account and provide the URL to the sheet containing your generated video scripts from Part 2.
2. Configure AI Voice Generation (HTTP Request): In the "HTTP Request To Generate Voice" node, go to the Query Parameters and replace INSERT YOUR API KEY HERE with your Google Gemini API key. In the JSON Body, you can customize the voice prompt (e.g., change <INSERT YOUR DESIRED ACCENT HERE>).
3. Set Your Local File Path: In the first "Read/Write Files from Disk" node, update the File Name field to a valid directory on your local machine where n8n has permission to write files. Replace /Users/INSERT_YOUR_LOCAL_STORAGE_HERE/.
4. Connect Google Drive: In the "Uploading Wav File" node, connect your Google Drive account and choose the folder where your audio files will be saved.
5. Update Your Tracking Sheet: In the final "Uploading Google Drive Link..." node, ensure it's connected to the same Google Sheet from Step 1. This node will update your sheet with the results.
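A hedged sketch of the FFmpeg conversion step: an n8n Code node could build the shell command that the Execute Command node then runs. The input field name, file paths, sample rate, and bit depth below are assumptions, not the template's actual values; match them to what the Gemini TTS response contains and to the local path you configured.

```javascript
// n8n Code-node sketch (Run Once for Each Item) that assembles the conversion command.
// Assumption: the raw audio was saved as signed 16-bit little-endian PCM at 24 kHz, mono.
const inputPath = $json.localFilePath; // e.g. /Users/you/n8n-audio/script_01_raw.pcm (assumed field)
const outputPath = inputPath.replace(/\.[^.]+$/, '.wav');

const command = [
  'ffmpeg -y',   // overwrite the output file if it already exists
  '-f s16le',    // raw input format: signed 16-bit little-endian PCM
  '-ar 24000',   // assumed sample rate of the TTS output
  '-ac 1',       // mono
  `-i "${inputPath}"`,
  `"${outputPath}"`,
].join(' ');

return [{ json: { command, outputPath } }];
```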
by Simeon Penev
Who's it for
Content/SEO teams who want a fast, consistent, research-driven brief for copywriters from a single keyword, without manual review and analysis of the SERP (Google results).

How it works / What it does
- Form Trigger collects the keyword/topic and redirects to the Google Drive folder after the final node.
- FireCrawl Search & Scrape pulls the top 5 pages for the chosen keyword.
- AI Agent (with Think + OpenAI Chat Model) analyzes the sources and generates an original Markdown brief.
- Markdown to JSON converts the Markdown into Google Docs batchUpdate requests (H1/H2/H3, lists, links, spacing), which are then used in Update a document to fill the empty doc (a hedged sketch of such a request follows this entry).
- Create a document + Update a document write a Google Doc titled "SEO Brief for …" and update it in your target Drive folder.

How to set up
- Add credentials: Firecrawl (Authorization header), OpenAI (Chat), Google Docs OAuth2.
- Replace placeholders: {{APIKEY}}, {{googledrivefolderid}}, {{googledrivefolderurl}}.
- Publish and open the Form URL to test.

Requirements
Firecrawl API key • OpenAI API key • Google account with access to the target Drive folder.

Resources
- Google OAuth2 Credentials Setup - https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/
- Firecrawl API key - https://take.ms/lGcUp
- OpenAI API key - https://docs.n8n.io/integrations/builtin/credentials/openai/
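For orientation, here is a hedged sketch of the kind of documents.batchUpdate payload the "Markdown to JSON" step produces for a single H1 plus one paragraph. The request types (insertText, updateParagraphStyle) are the standard Google Docs API ones; the exact requests the template emits depend on the brief's Markdown, and the sample text is purely illustrative.

```javascript
// Illustrative only: batchUpdate requests for "# SEO Brief" followed by one paragraph.
// Index 1 is the start of the body in an empty Google Doc.
const title = 'SEO Brief';
const intro = 'Target keyword, search intent, and outline follow below.';

const requests = [
  { insertText: { location: { index: 1 }, text: `${title}\n${intro}\n` } },
  {
    updateParagraphStyle: {
      range: { startIndex: 1, endIndex: 1 + title.length + 1 }, // the heading paragraph
      paragraphStyle: { namedStyleType: 'HEADING_1' },
      fields: 'namedStyleType',
    },
  },
];

// This object becomes the body of
// POST https://docs.googleapis.com/v1/documents/{documentId}:batchUpdate
return [{ json: { requests } }];
```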
by Vadim Mubi
This workflow acts as an automated market analyst for educational purposes. It scans Binance Futures (Testnet) for high-volume pairs, applies custom technical analysis (RSI, Bollinger Bands, EMA, ATR) using JavaScript, and uses AI to validate trends against recent news sentiment. It is designed for paper trading to demonstrate how to build advanced financial logic and adaptive risk management systems in n8n without risking real funds.

💡 Why use this?
- **Smart Scanning:** Automatically filters the top 150 pairs by volume and excludes stablecoins to find active markets.
- **Dynamic Risk Management:** Uses **ATR (Average True Range)** to calculate adaptive Stop Loss and Take Profit levels based on current market volatility.
- **Custom Technical Analysis:** Demonstrates how to calculate complex indicators via a Function node, eliminating the need for paid TA APIs.
- **AI Sentiment Filter:** Scrapes recent news and uses an LLM (OpenAI) to "vet" the technical signal against potential FUD or risks.
- **Secure Execution:** Shows how to sign HMAC SHA256 requests manually to interact with the Binance Futures API (a sketch of the signing and ATR logic follows this entry).

⚙️ How it works
1. Filter: Runs every 15 minutes to find liquid assets on Binance.
2. Calculate: Computes indicators (EMA 200, BB, RSI) and defines Entry/Exit points using ATR logic.
3. Validate: If a technical signal matches, it fetches news and asks the AI: "Is there any breaking news that contradicts this trade?"
4. Execute: If the AI returns "CONFIRM", it posts the detailed analysis to Telegram and places a paper trade order on the Testnet.

🛠 Setup Steps
1. Binance Testnet: Create a free account on Binance Futures Testnet and generate API keys.
2. Configuration: Open the 📝 MAIN CONFIG node and enter your Testnet Keys and Telegram Channel ID.
3. Credentials: Add your OpenAI (or OpenRouter) credentials to the AI node.

> Disclaimer: This workflow connects to the Binance Testnet by default. It is intended for educational purposes only. The author and n8n are not responsible for financial decisions.
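Two of the more instructive pieces, sketched in plain JavaScript: ATR-based Stop Loss / Take Profit placement and the HMAC SHA256 signing that Binance requires on authenticated requests. The multipliers and the example order parameters are assumptions for illustration; check the MAIN CONFIG node and Binance's API documentation for the values and endpoints the workflow actually uses.

```javascript
const crypto = require('crypto');

// --- Adaptive SL/TP from ATR (multipliers are illustrative, not the template's values) ---
function riskLevels(entryPrice, atr, side) {
  const slMult = 1.5, tpMult = 3; // roughly a 1:2 risk:reward ratio
  return side === 'LONG'
    ? { stopLoss: entryPrice - slMult * atr, takeProfit: entryPrice + tpMult * atr }
    : { stopLoss: entryPrice + slMult * atr, takeProfit: entryPrice - tpMult * atr };
}

// --- HMAC SHA256 signing for a signed Binance Futures request ---
// The signature is an HMAC of the full query string, keyed with the API secret,
// and appended as &signature=... (Binance's standard signing scheme).
function signQuery(params, apiSecret) {
  const query = Object.entries({ ...params, timestamp: Date.now() })
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join('&');
  const signature = crypto.createHmac('sha256', apiSecret).update(query).digest('hex');
  return `${query}&signature=${signature}`;
}

// Example (paper order on the Testnet):
// const qs = signQuery({ symbol: 'BTCUSDT', side: 'BUY', type: 'MARKET', quantity: 0.01 }, API_SECRET);
// POST https://testnet.binancefuture.com/fapi/v1/order?<qs>  with header X-MBX-APIKEY: <API_KEY>
```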
by dirogar
Telegram Tasker Bot is an n8n workflow that accepts voice messages in Telegram, automatically transcribes them to text, extracts the key task fields from that text, and creates a card on the appropriate Trello board. The user simply dictates a task, and the bot formats it and replies with a link to the finished card.

To use it, you will need a Telegram bot, which you can create via the BotFather bot. You will also need access to the ChatGPT API; it is used only for transcribing the audio to text, and you can swap in any other transcription service of your choice. Finally, you need a Trello account with API access.

Note: the Trello board ID can be taken from the board URL. The list (column) ID on the Trello board can be obtained via the browser's developer tools (at least, that is how I retrieved it).
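For reference, a hedged sketch of the Trello call the final step makes: the Trello REST API creates a card with a POST to /1/cards, using the list ID obtained as described above. The shape of the extracted task object is a hypothetical example, not the workflow's exact field names.

```javascript
// Node.js 18+ sketch (illustrative). Run as an ES module or inside an async context.
const KEY = 'YOUR_TRELLO_KEY';
const TOKEN = 'YOUR_TRELLO_TOKEN';
const ID_LIST = 'YOUR_LIST_ID'; // the list/column id from the developer tools, see note above

async function createTaskCard(task) {
  const params = new URLSearchParams({
    idList: ID_LIST,
    name: task.title,   // e.g. "Call the supplier"      (assumed field)
    desc: task.details, // e.g. "Discuss delivery dates"  (assumed field)
    key: KEY,
    token: TOKEN,
  });
  const res = await fetch(`https://api.trello.com/1/cards?${params}`, { method: 'POST' });
  const card = await res.json();
  return card.shortUrl; // the link the bot sends back to the user in Telegram
}
```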
by Deborah
Want to learn the basics of n8n? Our comprehensive quickstart tutorial is here to guide you through the fundamentals, step by step. Designed with beginners in mind, this tutorial provides a hands-on approach to learning n8n's basic functionalities.
by Dataki
This is the first version of a template for a RAG/GenAI App using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!

How It Works
This template includes three workflows:
- **Workflow 1**: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
- **Workflow 2**: Handle upserts for WordPress content when edits are made.
- **Workflow 3**: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.

Why use this template?
This template can be applied to various use cases:
- Build a GenAI application that requires embedded documents from your website's content.
- Embed or create a chatbot page on your website to enhance user experience as visitors search for information.
- Gain insights into the types of questions visitors are asking on your website.
- Simplify content management by asking the AI for related content ideas or checking if similar content already exists. Useful for internal linking.

Prerequisites
- Access to Supabase for storing embeddings.
- Basic knowledge of Postgres and pgvector.
- A WordPress website with content to be embedded.
- An OpenAI API key.
- Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.

Workflow 1: Initial Embedding
This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.

Step 0: Create Supabase tables
Nodes:
- Postgres - Create Documents Table: This table is structured to support OpenAI embedding models with 1536 dimensions.
- Postgres - Create Workflow Execution History Table

These two nodes create tables in Supabase:
- The documents table, which stores embeddings of your website content.
- The n8n_website_embedding_histories table, which logs workflow executions for efficient management of upserts. This table tracks the workflow execution ID and execution timestamp.

Step 1: Retrieve and Merge WordPress Pages and Posts
Nodes:
- WordPress - Get All Posts
- WordPress - Get All Pages
- Merge WordPress Posts and Pages

These three nodes retrieve all content and metadata from your posts and pages and merge them.
**Important:** Apply filters to avoid generating embeddings for all site content.

Step 2: Set Fields, Apply Filter, and Transform HTML to Markdown
Nodes:
- Set Fields
- Filter - Only Published & Unprotected Content
- HTML to Markdown

These three nodes prepare the content for embedding by:
- Setting up the necessary fields for content embeddings and document metadata.
- Filtering to include only published and unprotected content (protected=false), ensuring private or unpublished content is excluded from your GenAI application.
- Converting HTML to Markdown, which enhances performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.

Step 3: Generate Embeddings, Store Documents in Supabase, and Log Workflow Execution
Nodes:
- Supabase Vector Store (sub-nodes: Embeddings OpenAI, Default Data Loader, Token Splitter)
- Aggregate
- Supabase - Store Workflow Execution

This step involves generating embeddings for the content and storing it in Supabase, followed by logging the workflow execution details.
- Generate Embeddings: The Embeddings OpenAI node generates vector embeddings for the content.
- Load Data: The Default Data Loader prepares the content for embedding storage.
- The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts.
  ⚠️ Important Note: Be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers.
- Token Management: The Token Splitter ensures that content is segmented into manageable sizes to comply with token limits.
- Aggregate: Ensure the last node is run only for 1 item.
- Store Execution Details: The Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed.

This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking. This workflow should be executed only once for the initial embedding. Workflow 2, described below, will handle all future upserts, ensuring that new or updated content is embedded as needed.

Workflow 2: Handle Document Upserts
Content on a website follows a lifecycle—it may be updated, new content might be added, or, at times, content may be deleted. In this first version of the template, the upsert workflow manages:
- **Newly added content**
- **Updated content**

Step 1: Retrieve WordPress Content with a Regular CRON
Nodes:
- CRON - Every 30 Seconds
- Postgres - Get Last Workflow Execution
- WordPress - Get Posts Modified After Last Workflow Execution
- WordPress - Get Pages Modified After Last Workflow Execution
- Merge Retrieved WordPress Posts and Pages

A CRON job (set to run every 30 seconds in this template, but you can adjust it as needed) initiates the workflow. A Postgres SQL query on the n8n_website_embedding_histories table retrieves the timestamp of the latest workflow execution. Next, the HTTP nodes use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date. This process captures both newly added and recently updated content. The retrieved content is then merged for further processing.

Step 2: Set Fields and Apply the Filter
Nodes:
- Set Fields2
- Filter - Only published and unprotected content

The same as Step 2 in Workflow 1, except that HTML to Markdown is applied in a later step.

Step 3: Loop Over Items to Identify and Route Updated vs. Newly Added Content
Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :)
Nodes:
- Loop Over Items
- Postgres - Filter on Existing Documents
- Switch
  - Route existing_documents (if documents with matching IDs are found in metadata):
    - Supabase - Delete Row if Document Exists: Removes any existing entry for the document, preparing for an update.
    - Aggregate2: Aggregates the Supabase documents by ID to ensure that Set Fields3 is executed only once for each piece of WordPress content, avoiding duplicate execution.
    - Set Fields3: Sets fields required for embedding updates.
  - Route new_documents (if no matching documents are found with IDs in metadata):
    - Set Fields4: Configures fields for embedding newly added content.

In this step, a loop processes each item, directing it based on whether the document already exists.
The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per piece of WordPress content, effectively avoiding duplicate execution and optimizing the update process.

Step 4: HTML to Markdown, Supabase Vector Store, Update Workflow Execution Table
The HTML to Markdown node mirrors Workflow 1 - Step 2. Refer to that section for a detailed explanation of how HTML content is converted to Markdown for improved embedding performance and relevance. Following this, the content is stored in the Supabase vector store to manage embeddings efficiently. Lastly, the workflow execution table is updated. These nodes mirror the Workflow 1 - Step 3 nodes.

Workflow 3: An Example GenAI App with WordPress Content: A Chatbot to Embed on Your Website

Step 1: Retrieve Supabase Documents, Aggregate, and Set Fields After a Chat Input
Nodes:
- When Chat Message Received
- Supabase - Retrieve Documents from Chat Input
- Embeddings OpenAI1
- Aggregate Documents
- Set Fields

When a user sends a message to the chat, the prompt (user question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1 - Step 0) retrieves documents relevant to the user's question, enabling a more accurate and relevant response.
In this step:
- The Supabase vector store retriever fetches documents that match the user's question, including metadata.
- The Aggregate Documents node consolidates the retrieved data.
- Finally, Set Fields organizes the data to create a more readable input for the AI agent.

Directly using the AI agent without these nodes would prevent metadata from being sent to the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response. By including metadata, the AI's answers can reference relevant document details, making the interaction more informative.

Step 2: Call AI Agent, Respond to User, and Store Chat Conversation History
Nodes:
- AI Agent (sub-nodes: OpenAI Chat Model, Postgres Chat Memories)
- Respond to Webhook

This step involves calling the AI agent to generate an answer, responding to the user, and storing the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
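As a small illustration of the "published & unprotected" filter used in Workflows 1 and 2, here is the kind of check it performs on WordPress REST API items; status and content.protected are standard WordPress REST fields, but the exact Filter-node conditions in the template may differ slightly.

```javascript
// n8n Code-node style sketch (illustrative). Keep only content that is safe to embed:
// published, and not password-protected.
const publishable = items.filter(item => {
  const post = item.json;
  return post.status === 'publish' && post.content?.protected === false;
});
return publishable;

// Workflow 2 only fetches recently changed content; its HTTP node URL looks roughly like
// (modified_after is a standard WP REST query parameter, the timestamp comes from Postgres):
// https://your-site.com/wp-json/wp/v2/posts?per_page=100&modified_after=<last_execution_timestamp>
```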
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically analyzes customer lifetime value (CLV) metrics to optimize customer acquisition and retention strategies. It saves you time by eliminating the need to manually calculate CLV and provides data-driven insights for maximizing customer profitability and improving business growth.

Overview
This workflow automatically scrapes customer data, purchase history, and engagement metrics to calculate and analyze customer lifetime value patterns. It uses Bright Data to access customer analytics platforms and AI to intelligently segment customers, predict CLV, and identify high-value customer characteristics.

Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping customer analytics and CRM platforms without being blocked
- **OpenAI**: AI agent for intelligent CLV analysis and customer segmentation
- **Google Sheets**: For storing CLV calculations and customer analysis data

How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your CLV analysis spreadsheet
5. Customize: Define customer data sources and CLV calculation parameters

Use Cases
- **Customer Success**: Focus retention efforts on high-value customers
- **Marketing Strategy**: Optimize customer acquisition costs based on projected CLV
- **Sales Teams**: Prioritize prospects with higher lifetime value potential
- **Business Strategy**: Make data-driven decisions about customer investments

Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #customerlifetimevalue #clv #customeranalytics #brightdata #webscraping #customerdata #n8nworkflow #workflow #nocode #customersegmentation #valueanalysis #customerinsights #revenueoptimization #customervalue #clvanalysis #customermetrics #customerprofitability #businessintelligence #customerretention #valueprediction #customeroptimization #revenueanalysis #customerstrategy #lifetimevalue #customerroi #valuedriven #customerworth #profitability
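As background for the analysis step, here is a minimal sketch of a classic CLV estimate; the formula, field names, and margin figure are generic illustrations, not the exact logic the AI agent applies in this workflow.

```javascript
// Simple projected CLV estimate (illustrative):
// CLV ≈ average order value × purchase frequency × expected customer lifespan × gross margin
function estimateClv(orders, expectedLifespanYears, grossMargin = 0.6) {
  const totalRevenue = orders.reduce((sum, o) => sum + o.amount, 0);
  const avgOrderValue = totalRevenue / orders.length;
  const yearsObserved = 1; // assumption: the orders array covers one year of history
  const purchaseFrequency = orders.length / yearsObserved;
  return avgOrderValue * purchaseFrequency * expectedLifespanYears * grossMargin;
}

// Example: 8 orders averaging $50 over one year, 3-year expected lifespan, 60% margin
// => 50 * 8 * 3 * 0.6 = 720
estimateClv(Array.from({ length: 8 }, () => ({ amount: 50 })), 3);
```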
by Ari Nakos
This n8n workflow automates lead generation by searching Reddit for relevant posts based on keywords, filtering them, using OpenRouter AI to analyze and summarize content, and logging the findings (link, summary, etc.) to Google Sheets.

Watch the full setup tutorial on how I set up this ETL pipeline using n8n: https://youtu.be/F3-fbU3UmYQ

Required Authentication
To run this workflow, you need to set up credentials in n8n for:
- Reddit: Uses OAuth 2.0. Requires creating an app on Reddit to get a Client ID & Secret. (YT tutorial for Reddit app creation: https://youtu.be/zlGXtW4LAK8)
- OpenRouter: Uses an API Key. Generate this key directly from your OpenRouter account settings. (YT tutorial: https://youtu.be/Cq5Y3zpEhlc)
- Google Sheets: Uses OAuth 2.0. Requires setup in Google Cloud Console (enable the Sheets API, create an OAuth Client ID with the n8n redirect URI) to get a Client ID & Secret.

Ensure these credentials are created and selected in the respective n8n nodes (Get Posts, OpenRouter Chat Model nodes, Output The Results).
by David Roberts
This AI agent can access data provided by another n8n workflow. Since that workflow can be used to retrieve any data from any service, this template can be used to give an agent access to any data. Note that to use this template, you need to be on n8n version 1.19.4 or later.