by 飯盛 正幹
# Analyze Furusato Nozei trends from Google News to Slack

This workflow acts as a specialized market analyst for Japan's "Furusato Nozei" (Hometown Tax) system. It automates the process of monitoring related news, validating keyword popularity via search trends, and delivering a concise, strategic report to Slack. By combining RSS feeds, AI agents, and real-time search data, this template helps marketers and municipal researchers stay on top of the highly competitive Hometown Tax market without manual searching.

## 👥 Who is this for?

- **Municipal Government Planners:** To track trending return gifts and competitor strategies.
- **E-commerce Marketers:** To identify high-demand keywords for Furusato Nozei portals.
- **Content Creators:** To find trending topics for blogs or social media regarding tax deductions.
- **Market Researchers:** To monitor the seasonality and shifting interests in the Hometown Tax sector.

## ⚙️ How it works

1. **News Ingestion:** The workflow triggers on a schedule and fetches the latest "Furusato Nozei" articles from Google News via RSS.
2. **AI Analysis & Extraction:** An AI Agent (using OpenRouter) summarizes the news cluster and identifies the most viable search keyword (e.g., "Scallops," "Travel Vouchers," or specific municipalities).
3. **Data Validation:** The workflow queries the Google Trends API (via SerpApi) to retrieve search volume history for the extracted keyword in Japan.
4. **Strategic Reporting:** A second AI Agent analyzes the search trend data alongside the keyword to generate a market insight report.
5. **Delivery:** The final report is formatted and sent directly to a Slack channel.

## 🛠️ Requirements

To use this workflow, you will need:

- **n8n** (version 1.0 or later recommended).
- **OpenRouter API key** (or you can swap the model nodes for OpenAI/Anthropic).
- **SerpApi key** (required to fetch Google Trends data programmatically).
- **Slack account** (with permissions to post to a channel).

## 🚀 How to set up

1. **Configure credentials:** Add your OpenRouter API key to the Chat Model nodes, add your SerpApi key to the Google Trends API node, and connect your Slack account in the Send a message node.
2. **Check the RSS feed:** The RSS Read node is pre-configured for "Furusato Nozei" (ふるさと納税). You can leave this as is.
3. **Regional settings:** The workflow is pre-set for Japan (jp / ja). If you need to change this, check the Workflow Configuration and Google Trends API nodes.
4. **Schedule:** Enable the Schedule Trigger node to run at your preferred time (default is 9:00 AM JST).

## 🎨 How to customize

- **Change the topic:** While this is optimized for Furusato Nozei, you can change the RSS feed URL to track other Japanese market trends (e.g., NISA, inbound tourism).
- **Swap AI models:** The template uses OpenRouter, but you can easily replace the Chat Model nodes with OpenAI (GPT-4) or Anthropic (Claude), depending on your preference.
- **Adjust AI prompts:** The AI prompts are currently in Japanese to match the content. You can modify the system instructions in the AI Agent nodes if you prefer English reports.
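As a rough illustration of the data-validation step, here is how the HTTP request to SerpApi's Google Trends engine might be assembled. The parameter names follow SerpApi's documented `google_trends` engine, but treat the exact keyword, key, and parameter set as assumptions, not the template's literal node configuration:

```javascript
// Sketch of the request the "Google Trends API" node sends to SerpApi.
// geo/hl mirror the workflow's pre-set Japan region (jp / ja).
function buildTrendsUrl(keyword, apiKey) {
  const params = new URLSearchParams({
    engine: "google_trends",
    q: keyword,              // keyword extracted by the first AI Agent
    geo: "JP",               // region: Japan
    hl: "ja",                // interface language: Japanese
    data_type: "TIMESERIES", // interest-over-time history
    api_key: apiKey,
  });
  return `https://serpapi.com/search.json?${params.toString()}`;
}

console.log(buildTrendsUrl("ホタテ", "YOUR_SERPAPI_KEY"));
```

The resulting JSON time series is what the second AI Agent reads when writing the market insight report.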
by Eugene
# Run a multi-agent SEO domain audit with SE Ranking and Claude

## Who is this for

- SEO agencies running competitor analysis for clients
- Content teams planning editorial strategies
- Marketing teams tracking competitive performance

## What this workflow does

Enter any domain and get a full SEO strategy report. Five AI agents analyze your technical health, backlinks, keywords, AI visibility, and competitors; a Strategy Director then builds a prioritized 90-day action plan.

## What you'll get

- Domain performance baseline (keywords, traffic, traffic value)
- Technical SEO audit with health score and Core Web Vitals
- Backlink profile with anchor text analysis
- Top competitors discovered by keyword overlap
- AI visibility across ChatGPT, Perplexity, Gemini, and AI Overviews
- Prioritized 90-day action plan from the Strategy Director
- Full report in Google Drive + metrics in Google Sheets

## How it works

1. You enter a domain, business description, and target market via a form
2. The workflow pulls domain overview, keywords, competitors, backlinks, and audit data in parallel
3. It checks AI search visibility across 4 engines
4. Four specialist agents analyze the data (Technical SEO, Links, Keywords, AI Visibility)
5. A Strategy Director agent reviews everything and builds a unified plan
6. The report is saved to Google Drive and the metrics to Google Sheets

## Requirements

- SE Ranking community node v1.3.5+ (install from npm)
- SE Ranking API token
- Anthropic API key (for Claude)
- Google Drive + Sheets accounts (optional)

## Setup

1. Install the SE Ranking community node v1.3.5+
2. Add your SE Ranking and Anthropic API credentials
3. Connect Google Drive and Google Sheets (optional)
4. Open the form, enter a domain, and run it

## Customization

- Change the target market dropdown to add more countries
- Swap Claude models for different cost/speed tradeoffs
- Add a Slack notification node at the end for team alerts
by Jitesh Dugar
# Turn WhatsApp into an interactive personal classroom

This workflow automates the entire learning cycle, from generating AI-powered quizzes to tracking student progress in real time, by combining WATI, OpenAI AI Agents, and Google Sheets.

## 🎯 What This Workflow Does

Provides a complete end-to-end educational experience through simple WhatsApp commands:

- **📝 Interactive Quiz Trigger:** A student sends a topic (e.g., `quiz photosynthesis`) via the WATI Trigger to start a session.
- **🚦 Intelligent Command Routing:** A Route Message switch node detects specific student intents:
  - `quiz`: Triggers the AI generation of new questions.
  - `answer`: Routes to the evaluation engine to grade student replies.
  - `progress`: Triggers the historical performance report branch.
- **👁️ AI-Powered Content Creation:** An AI Agent using GPT-4o generates 3 tailored MCQ questions based on the requested topic, formatted as structured JSON.
- **📊 Automated Grading & Logging:** Compares student replies (e.g., `answer 1a 2b 3c`) against stored correct answers fetched from Google Sheets, then saves the final score, topic, and date to the master database.
- **📈 Progress Visualization:** Fetches all historical scores to calculate average performance, identifies the "Best Topic," and generates a visual progress bar.

## ✨ Key Features

- **Dynamic AI Tutoring:** Quizzes are never repetitive; the AI generates fresh questions every time a topic is requested.
- **Session Management:** Uses a "Session Key" (phone + date) to ensure students can only take one specific quiz per day per topic.
- **Visual Performance Feedback:** Students receive formatted reports with emojis and visual bars (███░░) to track their improvement.
- **Easy Answer Format:** Simple shorthand for students (e.g., `1a 2b`) makes it accessible for mobile-first learning.
- **Centralized Database:** All academic data is logged in Google Sheets, making it easy for teachers or parents to monitor results.

## 💼 Perfect For

- **Students:** Quick self-testing on subjects before exams or competitive tests.
- **Teachers:** Providing an automated "homework bot" for students to practice curriculum topics.
- **Corporate Trainers:** Deploying bite-sized knowledge checks to employees via mobile.
- **Language Learners:** Testing vocabulary or grammar rules on the go.

## 🔧 What You'll Need

Required integrations:

- **WATI** – To handle WhatsApp message triggers and automated feedback delivery.
- **OpenAI API** – For the AI Agent and Chat Model (GPT-4o) to generate quizzes.
- **Google Sheets** – To act as the database for active quiz sessions and permanent score history.

Optional customizations:

- **Difficulty Levels:** Adjust the AI Agent's system message to generate "Beginner," "Intermediate," or "Advanced" quizzes.
- **Timed Challenges:** Use n8n Wait nodes to send "Time's Up" reminders if a student hasn't answered within an hour.

## 🚀 Quick Start

1. **Import Template** – Copy the JSON and import it into your n8n instance.
2. **Set Credentials** – Connect your WATI, OpenAI, and Google Sheets accounts.
3. **Set Up Spreadsheet** – Create two sheets:
   - Active Quizzes: headers for sessionKey, phone, topic, correctAnswers, questionCount, today.
   - Scores: headers for date, phone, senderName, topic, score, total, percentage.
4. **Start Learning** – Send `quiz solar system` to your WhatsApp bot to begin!

## 🎨 Customization Options

- **Question Count:** Modify the AI prompt and code nodes to generate 5 or 10 questions instead of 3.
- **Persona Tweak:** Change the System Message in the AI Agent to make the tutor sound like a "Fun Scientist" or a "Strict Professor."
- **Filtered Progress:** Edit the Build Progress Report code to show reports for only the last 30 days.

## 📈 Expected Results

- 100% automated quiz generation and grading, with no manual teacher intervention required.
- Immediate student feedback providing correct answers for missed questions.
- Increased engagement through a familiar, low-friction WhatsApp interface.
- Actionable insights into which topics students struggle with most.

## 🏆 Use Cases

- **Exam Preparation:** A high school student uses the bot to drill on "Biology" and "History" topics daily, tracking their score increase over time via the progress command.
- **Employee Onboarding:** A new hire receives a "Company Policy" quiz via WhatsApp; their score is automatically logged for HR compliance.
- **Community Education:** An NGO uses the bot to teach health and safety protocols in rural areas where WhatsApp is the primary digital tool.

## 💡 Pro Tips

- **Strict JSON:** The AI Agent is prompted to return only JSON to ensure the workflow doesn't crash on conversational filler.
- **Flexible Matching:** The Build Progress Report node uses String() casting to ensure phone numbers match correctly even if formatted differently in Sheets.
- **Topic Extraction:** Use specific topics for best results; "Photosynthesis" works better than just "Biology."

Ready to launch your AI classroom? Import this template and connect your OpenAI key to start generating quizzes today!
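To make the grading step concrete, here is a minimal sketch of how an n8n Code node could parse the `answer 1a 2b 3c` shorthand and render the visual bar. The function names and the a–d letter range are illustrative assumptions, not the template's exact code:

```javascript
// Parse "answer 1a 2b 3c" and grade it against the stored correct answers.
function gradeReply(replyText, correctAnswers) {
  // "answer 1a 2b 3c" -> { 1: "a", 2: "b", 3: "c" }
  const picks = {};
  for (const m of replyText.matchAll(/(\d+)\s*([a-d])/gi)) {
    picks[Number(m[1])] = m[2].toLowerCase();
  }
  const score = correctAnswers.filter((ans, i) => picks[i + 1] === ans).length;
  return { score, total: correctAnswers.length };
}

// Render the ███░░-style progress bar from a percentage.
function progressBar(percentage, width = 5) {
  const filled = Math.round((percentage / 100) * width);
  return "█".repeat(filled) + "░".repeat(width - filled);
}

const result = gradeReply("answer 1a 2b 3d", ["a", "b", "c"]);
console.log(result); // { score: 2, total: 3 }
console.log(progressBar((result.score / result.total) * 100)); // ███░░
```

The same percentage feeds both the Scores sheet and the visual feedback message.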
by Nitin Garg
## How it works

1. A Schedule Trigger runs every 6 hours (customizable)
2. An Apify scraper fetches Upwork jobs matching your criteria
3. Deduplication filters out jobs you've already seen
4. AI scoring (GPT-4) evaluates fit, client quality, and budget (0-100 score)
5. A Filter keeps only jobs scoring 60+
6. The Proposal Generator creates personalized proposals
7. Google Sheets logs all results
8. Telegram sends a summary notification

## Setup steps

Time: ~15 minutes

1. Create a Google Sheet with a "Job ID" column
2. Get an Apify account + Upwork scraper actor
3. Get an OpenAI API key
4. Set environment variables: GOOGLE_SHEETS_DOC_ID, APIFY_ACTOR_ID, TELEGRAM_CHAT_ID
5. Create credentials: Google Sheets, Apify (Header Auth), OpenAI, Telegram
6. Connect credentials to workflow nodes

## Who is this for?

- Freelancers actively applying to Upwork jobs
- Agencies monitoring multiple job categories
- Consultants prioritizing high-quality leads

## Estimated costs

- **Per run:** $0.50-3.00 (Apify + OpenAI)
- **Monthly (4x/day):** $50-200
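The deduplication and score-filter steps above can be sketched as a small piece of Code-node JavaScript. The `jobId`/`aiScore` field names and the shape of the "seen IDs" list (read from the Google Sheet's "Job ID" column) are assumptions for illustration:

```javascript
// Drop jobs we've already processed, then keep only strong fits (score >= 60).
function filterJobs(scrapedJobs, seenJobIds, minScore = 60) {
  const seen = new Set(seenJobIds);
  return scrapedJobs
    .filter((job) => !seen.has(job.jobId))     // deduplication
    .filter((job) => job.aiScore >= minScore); // AI-score threshold
}

const jobs = [
  { jobId: "a1", aiScore: 85 },
  { jobId: "b2", aiScore: 40 }, // below the 60+ threshold
  { jobId: "c3", aiScore: 72 }, // already seen
];
console.log(filterJobs(jobs, ["c3"])); // [ { jobId: "a1", aiScore: 85 } ]
```

Only the surviving jobs are sent on to the proposal generator, which keeps OpenAI costs bounded per run.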
by 長谷 真宏
## Who is this for

This template is perfect for sales professionals, account managers, and business development teams who want to make memorable impressions on their clients. It automates the tedious task of researching gift shops and preparation spots before important meetings.

## What it does

This workflow automatically prepares personalized recommendations for client visits by monitoring your Google Calendar, enriching data from Notion, and using AI to select the perfect options.

## How it works

1. **Trigger:** Activates when a calendar event containing keywords like "visit," "meeting," "client," or "dinner" is created or updated
2. **Extract:** Parses the company name from the event title
3. **Enrich:** Fetches customer preferences from your Notion database
4. **Search:** The Google Places API finds nearby gift shops and quiet cafes
5. **Analyze:** GPT-4 recommends the best options based on customer preferences
6. **Notify:** Sends a personalized message to Slack with recommendations

## Example Slack output

Here's what the final notification looks like:

> 🎁 Recommended Gift Shop
> Patisserie Sadaharu AOKI (★4.6)
> 3-5-2 Marunouchi, Chiyoda-ku
> 💡 Reason: The customer loves French desserts, so this patisserie's macarons would be perfect!
>
> ☕ Pre-Meeting Cafe
> Starbucks Reserve Roastery (★4.5)
> 5 min walk from meeting location

## Set up steps

Setup time: approximately 15 minutes

1. **Google Calendar:** Connect your Google Calendar account and select your calendar
2. **Notion database:** Create a customer database with "Company Name" (title) and "Preferences" (text) fields
3. **Google Places API:** Get an API key from Google Cloud Console and add it to the Configuration node
4. **OpenAI:** Connect your OpenAI account for AI-powered recommendations
5. **Slack:** Connect your Slack workspace and update the channel ID in the final node

## Requirements

- Google Calendar account
- Notion account with a customer database
- Google Places API key (requires a Google Cloud account)
- OpenAI API key
- Slack workspace with bot permissions

## How to customize

- **Search radius:** Adjust the searchRadius parameter in the Configuration node (default: 1000 meters)
- **Event keywords:** Modify the Filter node conditions to match your calendar naming conventions
- **Notification channel:** Change the Slack channel ID to your preferred channel
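For reference, the Places lookup in the search step might be built like this. The parameters follow the Google Places Web Service Nearby Search endpoint; the coordinates, keyword, and key are placeholders, and the exact node configuration may differ:

```javascript
// Sketch of the Nearby Search request used to find gift shops / quiet cafes
// around the meeting location, honoring the configurable searchRadius.
function buildPlacesUrl({ lat, lng, radius, keyword, apiKey }) {
  const params = new URLSearchParams({
    location: `${lat},${lng}`, // meeting coordinates
    radius: String(radius),    // e.g. the 1000 m default
    keyword,                   // e.g. "gift shop" or "quiet cafe"
    language: "ja",
    key: apiKey,
  });
  return `https://maps.googleapis.com/maps/api/place/nearbysearch/json?${params}`;
}

console.log(
  buildPlacesUrl({ lat: 35.681, lng: 139.767, radius: 1000, keyword: "gift shop", apiKey: "YOUR_KEY" })
);
```

The returned place names, ratings, and addresses are what GPT-4 ranks against the customer's Notion preferences.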
by Cheng Siong Chin
## How It Works

Every day at 8 AM, the workflow automatically retrieves the latest F1 data, including driver standings, qualifying results, race schedules, and circuit information. All sources are merged into a unified dataset, and driver performance metrics are computed using historical trends. An AI agent, enhanced with vectorized race history, evaluates patterns and generates race-winner predictions. When the confidence score exceeds the defined threshold, the system pushes an automated Slack alert and records the full analysis in the database and Google Sheets.

## Setup Steps

1. Update the workflow configuration with newsApiUrl, weatherApiUrl, historicalYears, and confidenceThreshold.
2. Connect PostgreSQL using the schema: prediction_date, predicted_winner, confidence_score, prediction_source, data_version, full_analysis.
3. Provide the Slack channel ID for sending high-confidence alerts.
4. Specify the Google Sheets document ID and sheet name for prediction logging.
5. Test connectivity to the Ergast API (no authentication required).

## Prerequisites

OpenAI account (GPT-4o access), Slack workspace admin access, PostgreSQL instance, Google Sheets account, and an n8n instance with LangChain community nodes enabled.

## Customization

Extend by adding constructor predictions (modify the AI prompt), or integrate Discord or Teams instead of Slack.

## Benefits

Saves time by automating data collection and improves accuracy using multiple performance metrics and historical patterns.
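A minimal sketch of two pieces described above: the Ergast fetch target and the confidence gate that decides whether a Slack alert fires. The endpoint path follows the public Ergast API convention; the field name `confidence_score` comes from the schema above, while the threshold value itself is just a configurable placeholder:

```javascript
// Build the season's driver-standings endpoint on the (unauthenticated) Ergast API.
const season = new Date().getFullYear();
const driverStandingsUrl = `https://ergast.com/api/f1/${season}/driverStandings.json`;

// Alert only when the prediction clears the configured confidenceThreshold.
function shouldAlert(prediction, confidenceThreshold = 0.75) {
  return prediction.confidence_score > confidenceThreshold;
}

console.log(driverStandingsUrl);
console.log(shouldAlert({ predicted_winner: "Verstappen", confidence_score: 0.82 })); // true
```

Predictions below the threshold are still logged to PostgreSQL and Google Sheets; only the Slack branch is gated.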
by Cheng Siong Chin
## How It Works

Automates monthly payroll processing and tax compliance by calculating employee payroll, applying accurate withholdings, generating comprehensive tax summaries, and producing compliance-ready documentation. The system fetches revenue and payroll data, performs detailed payroll calculations, applies AI-driven tax withholding rules, aggregates tax summary information, and verifies compliance using GPT-4 tax analysis. It generates structured HTML documents, converts them to PDF format, stores records in Google Sheets for audit trails, archives files to Google Drive, and sends summaries to tax agents. Designed for HR departments and payroll processing teams seeking automated, accurate, and fully compliant payroll management.

## Setup Steps

1. Connect the payroll data source and configure revenue fetch parameters.
2. Set up the OpenAI GPT-4 API for tax withholding logic and compliance analysis.
3. Configure Google Sheets for audit storage and Google Drive for long-term archiving.
4. Define tax withholding rules, compliance thresholds, and the tax agent recipient.

## Prerequisites

Payroll data source; OpenAI API key; Google Sheets and Drive accounts

## Use Cases

HR departments automating monthly payroll processing and tax compliance

## Customization

Adjust withholding rules by jurisdiction

## Benefits

Eliminates manual payroll calculations
by Lachlan
## Who's it for

This workflow is for:

- People who want to quickly launch simple landing pages without paying monthly fees to landing page builders. It's ideal for rapid prototyping, generating large batches of landing pages, testing campaign ideas, or creating quick web mockups with AI.
- People launching products that compete in some way with complete landing page solutions and want to understand the basic building blocks of landing page creators.

## How it works / What it does

1. Retrieves or creates session data from n8n Tables
2. Generates a vivid scene description for the hero image using GPT
3. Creates a custom AI-generated hero image (using Google Gemini or your preferred model)
4. Builds a responsive landing page layout with GPT-4o-mini
5. Saves the generated HTML to an n8n data table
6. Deploys the landing page to Vercel automatically
7. Returns the public live URL of the generated site

This workflow combines OpenAI, Google Gemini, Cloudinary, Vercel, and n8n Tables to create, store, and publish your webpage seamlessly from a single prompt.

## How to set up

1. Create an n8n Table with the following columns:
   - sessionID (text)
   - html (long text)
2. Add your credentials:
   - OpenAI (for text and image generation)
   - Google Gemini (PaLM), through the Google Cloud Platform (for text and image generation)
   - Cloudinary (for image upload)
   - Vercel (for live deployment)
3. Update the placeholders as noted inside the workflow:
   - Cloudinary cloud name and upload preset
   - OpenAI model and API key
   - n8n table name and column mapping (sessionID, html)
   - Vercel Header Auth token
4. Run the workflow. After configuration, it will generate, upload, deploy, and return the live landing page URL automatically.

Inline notes are included throughout the workflow indicating where you must update values such as credentials, table names, or API keys to make the flow work end to end.

## Requirements

- OpenAI API key
- Google Gemini API key
- Cloudinary account
- Vercel account
- n8n Table with sessionID and html columns

## How to customize the workflow

- Modify the OpenAI model or prompt to change the tone, layout, or visual style of the generated landing page.
- Replace Vercel deployment with your preferred hosting platform (e.g., Netlify or GitHub Pages) if desired.
- Add extra input fields (e.g., title, CTA, description) to collect richer context before generating the page.
- Integrate with databases to turn this into a full Lovable/Base44 competitor.

## Result

After setup, this workflow automatically converts any idea into a fully designed and live landing page within seconds. It generates the hero image, builds the HTML layout, deploys it to Vercel, and provides the final shareable URL instantly.

## Optional cleanup subflow

An additional utility subflow is included to help keep your Vercel project clean by deleting older deployments. It preserves the two most recent deployments and deletes the rest. Use with caution: only run it if you want to remove previous test pages and free up space in your Vercel account.
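The cleanup subflow's "keep the two most recent, delete the rest" selection can be sketched in a few lines. The `{ uid, createdAt }` shape loosely mirrors Vercel's deployments API response, but treat the field names as assumptions; the actual deletion calls are omitted:

```javascript
// Sort deployments newest-first, preserve the first `keep`, and return the
// uids that the subflow would delete.
function deploymentsToDelete(deployments, keep = 2) {
  return [...deployments]
    .sort((a, b) => b.createdAt - a.createdAt) // newest first
    .slice(keep)                               // keep the two most recent
    .map((d) => d.uid);
}

const deployments = [
  { uid: "dep_old", createdAt: 100 },
  { uid: "dep_new", createdAt: 300 },
  { uid: "dep_mid", createdAt: 200 },
];
console.log(deploymentsToDelete(deployments)); // [ "dep_old" ]
```

Because the sort is done on a copy, the original list stays intact for logging before any deletions run.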
by Takumi Oku
## Who is this for

- **Space Enthusiasts & Music Lovers:** Discover new music paired with stunning cosmic visuals.
- **Community Managers:** Keep specific Slack channels engaged with creative daily content.
- **n8n Learners:** Learn how to chain image analysis (vision), logic, and API integrations (Spotify/Slack).

## How it works

1. **Schedule:** The workflow runs every night at 10 PM.
2. **Mood Logic:** It checks the day of the week to adjust the energy level (e.g., higher energy for Friday nights, calmer vibes for Mondays).
3. **Visual Analysis:** OpenAI (GPT-4o) analyzes the NASA APOD image to determine its color palette, mood, and subject matter, converting these into musical parameters (valence, energy).
4. **Curation:** Spotify searches for a track that matches these specific parameters.
5. **Creative Writing:** OpenAI generates a short poem or caption linking the image to the song.
6. **Delivery:** The image, track link, and poem are posted to Slack, and the track is automatically saved to a designated Spotify playlist.

## Requirements

- **NASA API Key** (free)
- **OpenAI API Key** (must have access to the GPT-4o model)
- **Spotify Developer Credentials** (Client ID and Client Secret)
- **Slack** workspace and bot token

## How to set up

1. Set up your credentials for NASA, OpenAI, Spotify, and Slack in n8n.
2. Create a specific playlist in Spotify and copy its Playlist ID.
3. Copy the Channel ID from the Slack channel where you want to post.
4. Paste these IDs into the respective nodes (marked with <PLACEHOLDER>) or use the Set Fields node to manage them globally.
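The "Mood Logic" step might look like the following in a Code node. Only the Friday-high / Monday-calm idea comes from the description above; the specific energy and valence numbers are illustrative assumptions:

```javascript
// Map the day of the week to target audio-feature ranges for the Spotify search.
function moodForDay(dayIndex) { // 0 = Sunday ... 6 = Saturday
  if (dayIndex === 5) return { energy: 0.9, valence: 0.8 }; // high-energy Friday night
  if (dayIndex === 1) return { energy: 0.3, valence: 0.5 }; // calmer Monday vibes
  return { energy: 0.6, valence: 0.6 };                     // midweek default
}

console.log(moodForDay(new Date().getDay()));
```

GPT-4o's analysis of the APOD image then nudges these baseline values before the Spotify query runs.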
by Guillaume Duvernay
Go beyond basic Retrieval-Augmented Generation (RAG) with this advanced template. While a simple RAG setup can answer straightforward questions, it often fails when faced with complex queries and can be polluted by irrelevant information. This workflow introduces a sophisticated architecture that empowers your AI agent to think and act like a true research assistant.

By decoupling the agent from the knowledge base with a smart sub-workflow, this template enables multi-query decomposition, relevance-based filtering, and an intermediate reasoning step. The result is an AI agent that can handle complex questions, filter out noise, and synthesize high-quality, comprehensive answers based on your data in Supabase.

## Who is this for?

- **AI and automation developers:** Anyone building sophisticated Q&A bots, internal knowledge base assistants, or complex research agents.
- **n8n power users:** Users looking to push the boundaries of AI agents in n8n by implementing production-ready, robust architectural patterns.
- **Anyone building a RAG system:** This provides a superior architectural pattern that overcomes the common limitations of basic RAG setups, leading to dramatically better performance.

## What problem does this solve?

- **Handles complex questions:** A standard RAG agent sends one query and gets one set of results. This agent is designed to break down a complex question like "How does natural selection work at the molecular, organismal, and population levels?" into multiple, targeted sub-queries, ensuring all facets of the question are answered.
- **Prevents low-quality answers:** A simple RAG agent can be fed irrelevant information if the semantic search returns low-quality matches. This workflow includes a crucial relevance filtering step, discarding any data chunks that fall below a set similarity score, so the agent only reasons with high-quality context.
- **Improves answer quality and coherence:** By introducing a dedicated "Think" tool, the agent has a private scratchpad to synthesize the information it has gathered from multiple queries. This intermediate reasoning step allows it to connect the dots and structure a more comprehensive and logical final answer.
- **Gives you more control and flexibility:** By using a sub-workflow to handle data retrieval, you can add any custom logic you need (like filtering, formatting, or even calling other APIs) without complicating the main agent's design.

## How it works

This template consists of a main agent workflow and a smart sub-workflow that handles knowledge retrieval.

1. **Multi-query decomposition:** When you ask the AI Agent a complex question, its system prompt instructs it to first break it down into an array of multiple, simpler sub-queries.
2. **Decoupling with a sub-workflow:** The agent doesn't have direct access to the vector store. Instead, it calls a "Query knowledge base" tool, which is a sub-workflow. It sends the entire array of sub-queries to this sub-workflow in a single tool call.
3. **Iterative retrieval & filtering (in the sub-workflow):** The sub-workflow loops through each sub-query. For each one, it queries your Supabase Vector Store, then checks the similarity score of the returned data chunks and uses a Filter node to discard any that are not highly relevant (the default is a score > 0.4).
4. **Intermediate reasoning step:** The sub-workflow returns all the high-quality, filtered information to the main agent. The agent is then instructed to use its Think tool to review this information, synthesize the key points, and structure a plan for its final, comprehensive answer.

## Setup

1. **Connect your accounts:**
   - Supabase: In the sub-workflow ("RAG sub-workflow"), connect your Supabase account to the Supabase Vector Store node and select your table.
   - OpenAI: Connect your OpenAI account in two places: the Embeddings OpenAI node (in the sub-workflow) and the OpenAI Chat Model node (in the main workflow).
2. **Customize the agent's purpose:** In the main workflow, edit the AI Agent's system prompt. Change the context from a "biology course" to whatever your knowledge base is about.
3. **Adjust the relevance filter:** In the sub-workflow, you can change the 0.4 threshold in the Filter node to be more or less strict about the quality of the information you want the agent to use.
4. **Activate the workflow** and start asking complex questions!

## Taking it further

- **Integrate different vector stores:** The logic is decoupled, so you can easily swap the Supabase Vector Store node in the sub-workflow for a Pinecone, Weaviate, or any other vector store node without changing the main agent's logic.
- **Add more tools:** Give the main agent other capabilities, like a web search or a way to interact with your tech stack. The agent can then decide whether to use its internal knowledge base, search the web, or both, to answer a question.
- **Better prompting:** Refine the agent's system prompt to improve how well it leverages the provided chunks and produces high-quality answers.
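The sub-workflow's relevance gate is simple enough to sketch directly. The chunk shape (`text` plus a similarity `score`) mirrors what vector-store nodes typically return, but the field names here are assumptions:

```javascript
// Keep only chunks whose similarity score clears the threshold (0.4 by default),
// matching the Filter node described above.
function filterRelevantChunks(chunks, minScore = 0.4) {
  return chunks.filter((c) => c.score > minScore);
}

const chunks = [
  { text: "Natural selection at the molecular level...", score: 0.81 },
  { text: "Unrelated passage about citation formats", score: 0.22 }, // discarded
];
console.log(filterRelevantChunks(chunks).length); // 1
```

Raising `minScore` makes the agent stricter (fewer but more trustworthy chunks); lowering it trades precision for recall.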
by Guillaume Duvernay
Create truly authoritative articles that blend your unique, internal expertise with the latest, most relevant information from the web. This template orchestrates an advanced "hybrid research" content process that delivers unparalleled depth and credibility.

Instead of a simple prompt, this workflow first uses an AI planner to deconstruct your topic into key questions. Then, for each question, it performs a dual-source query: it searches your trusted Lookio knowledge base for internal facts and simultaneously uses Linkup to pull fresh insights and sources from the live web. This comprehensive "super-brief" is then handed to a powerful AI writer to compose a high-quality article, complete with citations from both your own documents and external web pages.

## 👥 Who is this for?

- **Content Marketers & SEO Specialists:** Scale the creation of authoritative content that is both grounded in your brand's facts and enriched with timely, external sources for maximum credibility.
- **Technical Writers & Subject Matter Experts:** Transform complex internal documentation into rich, public-facing articles by supplementing your core knowledge with external context and recent data.
- **Marketing Agencies:** Deliver exceptional, well-researched articles for clients by connecting the workflow to their internal materials (via Lookio) and the broader web (via Linkup) in one automated process.

## 💡 What problem does this solve?

- **The best of both worlds:** Combines the factual reliability of your own knowledge base with the timeliness and breadth of a web search, resulting in articles with unmatched depth.
- **Minimizes AI "hallucinations":** Grounds the AI writer in two distinct sets of factual, source-based information (your internal documents and credible web pages), dramatically reducing the risk of invented facts.
- **Maximizes credibility:** Automates the inclusion of source links from both your internal knowledge base and external websites, boosting reader trust and demonstrating thorough research.
- **Ensures comprehensive coverage:** The AI-powered topic breakdown ensures a logical structure, while the dual-source research for each point guarantees no stone is left unturned.
- **Fully automates an expert workflow:** Mimics the entire process of an expert research team (outline, internal review, external research, consolidation, writing) in a single, scalable workflow.

## ⚙️ How it works

This workflow orchestrates a sophisticated, multi-step "plan, dual-research, write" process:

1. **Plan (decomposition):** You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of logical sub-questions.
2. **Dual research (knowledge base + web search):** The workflow loops through each sub-question and performs two research actions in parallel:
   - It queries your Lookio assistant to retrieve relevant information and source links from your uploaded documents.
   - It queries Linkup to perform a targeted web search, gathering up-to-date insights and their source URLs.
3. **Consolidate (brief creation):** All the retrieved information, internal and external, is compiled into a single, comprehensive research brief for each sub-question.
4. **Write (final generation):** The complete, source-rich brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article based only on the provided research and integrate all source links as hyperlinks.

## 🛠️ Setup

1. **Set up your Lookio assistant:** Sign up at Lookio, upload your documents to create a knowledge base, and create a new assistant. In the Query Lookio Assistant node, paste your Assistant ID in the body and add your Lookio API key for authentication (we recommend a Bearer Token credential).
2. **Connect your Linkup account:** In the Query Linkup for AI web-search node, add your Linkup API key for authentication (we recommend a Bearer Token credential). Linkup's free plan is very generous.
3. **Connect your AI provider:** Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
4. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first hybrid-research article!

## 🚀 Taking it further

- **Automate publishing:** Connect the final Article result node to a Webflow or WordPress node to automatically create draft posts in your CMS.
- **Generate content in bulk:** Replace the Form Trigger with an Airtable or Google Sheets trigger to generate a batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final "New content - Generate the AI output" node to match your brand's tone of voice, prioritize internal vs. external sources, or add SEO keywords.
by Yusuke
# 🎯 Self-Learning X Content Engine (Creator RAG Booster)

Learn your voice. Generate posts that sound like you, not AI.

## 🧩 Overview

This n8n workflow builds a personal RAG (Retrieval-Augmented Generation) system for creators. It learns from your own past posts and generates new tweets, replies, and image prompts in your tone.

## ⚙️ How it works

**Step 1: Ingest**

- Use the "Add to KB" form to upload your past posts or notes.
- Text + metadata (topic, style) are stored in Supabase as vectors.

**Step 2: Generate**

- Use the "Generate Posts" form to create new post ideas.
- The Agent fetches the most relevant style snippets (via the Supabase VectorStore).
- Output includes: 📝 post, 💬 quote, 💭 reply, 🎨 image_prompt.

## 🔧 Setup (3–5 min)

1. Connect Supabase (URL + key). Make sure the table name is documents and the vector extension (pgvector) is enabled.
2. Connect your OpenAI API key.
3. Activate both forms and open their URLs to test. Optionally replace the forms with webhooks.

💡 Tip: RLS enabled? Ensure your API key allows insert/select on documents.

## 🧠 Tech Stack

- n8n (self-hosted)
- Supabase (vector store)
- OpenAI (gpt-4.1-mini)
- HTML-based completion form

## 🪄 Credits

Built by Yusuke | @yskautomation
License: MIT. View on GitHub.