Notify Users When Features Ship with Semantic Search from Tally to Gmail

Who is this for?

This workflow is for Product Managers, Indie Hackers, and Customer Success teams who collect feature requests but struggle to notify the specific users when those features actually ship. It helps you turn old feedback into customer loyalty and potential upsells.

What it does

This workflow creates a "Semantic Memory" of user requests. Instead of relying on exact keyword tags, it uses vector embeddings to understand the meaning of each request.

For example, if a user asks for "Night theme," and months later you release "Dark Mode," this workflow understands they are the same thing, finds that user, and drafts a personal email to them.
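As a rough illustration of that matching (a toy example with made-up 3-dimensional vectors rather than real embeddings), pgvector's cosine-distance operator `<=>` is what makes two requests comparable:

```sql
-- Toy example: two short vectors pointing in nearly the same direction.
-- 1 - cosine distance gives a similarity close to 1, which is how
-- "Night theme" and "Dark Mode" can match without sharing any keywords.
select 1 - ('[0.9, 0.1, 0.0]'::vector <=> '[0.8, 0.2, 0.0]'::vector) as similarity;
```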

How it works

- Listen: Receives new requests via Tally Forms, vectorizes the text using Nomic Embed Text (via Ollama or OpenAI), and stores them in Supabase.
- Watch: Monitors your Changelog (RSS) or waits for a manual trigger when you ship a new feature.
- Match: Performs a vector similarity search in Supabase to find users who requested semantically similar features in the past (a runnable sketch of this query appears after this list).
- Notify: An AI Agent drafts a hyper-personalized email connecting the user's specific past request to the new feature, saving it as a Gmail Draft (for safety).
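To make the Match step concrete, here is a sketch of the similarity search as you could run it yourself in the Supabase SQL Editor once the setup script below is in place and at least one request has been stored. In the live workflow the query embedding comes from the Embeddings node and is passed to the function by the HTTP Request node; here the most recently stored embedding is used as a stand-in, and the threshold and limit are illustrative values rather than the template's exact settings:

```sql
-- Sketch only: reuses the newest stored embedding as the "new feature" query.
select user_email, user_name, content, similarity
from match_feature_requests(
  (select embedding from feature_requests order by created_at desc limit 1),
  0.5,  -- match_threshold: minimum cosine similarity to count as a match
  10    -- match_count: maximum number of users to return
);
```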

Requirements

- **Supabase Project:** You need a project with the vector extension enabled.
- **AI Model:** This template is pre-configured for *Ollama (Local)* to keep it free, but works perfectly with OpenAI.
- **Tally Forms & Gmail:** For input and output.

Setup steps

1. Database Setup (Crucial): Copy the SQL script provided in the workflow's Red Sticky Note and run it in your Supabase SQL Editor. This creates the necessary tables and the vector search function (you can verify it with the queries shown after this list).
2. Credentials: Add your credentials for Tally, Supabase, and Gmail.
3. URL Config: Update the HTTP Request node with your specific Supabase Project URL.
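After running the script from the next section, two quick catalog queries (standard Postgres, nothing template-specific) can confirm that the extension and the search function exist:

```sql
-- Both queries should return exactly one row if the setup script ran cleanly.
select extname from pg_extension where extname = 'vector';
select proname from pg_proc where proname = 'match_feature_requests';
```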

SQL Script

Open your Supabase SQL Editor and paste this script to set up the tables and the search function:

```sql
-- 1. Enable Vector Extension
create extension if not exists vector;

-- 2. Create Request Table (Smart Columns)
create table feature_requests (
  id bigint generated by default as identity primary key,
  content text,
  metadata jsonb,
  embedding vector(768), -- 768 for Nomic, 1536 for OpenAI
  created_at timestamp with time zone default timezone('utc'::text, now()),
  user_email text generated always as (metadata->>'user_email') stored,
  user_name text generated always as (metadata->>'user_name') stored
);

-- 3. Create Search Function
create or replace function match_feature_requests (
  query_embedding vector(768),
  match_threshold float,
  match_count int
) returns table (
  id bigint,
  user_email text,
  user_name text,
  content text,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    feature_requests.id,
    feature_requests.user_email,
    feature_requests.user_name,
    feature_requests.content,
    1 - (feature_requests.embedding <=> query_embedding) as similarity
  from feature_requests
  where 1 - (feature_requests.embedding <=> query_embedding) > match_threshold
  order by feature_requests.embedding <=> query_embedding
  limit match_count;
end;
$$;
```
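If you want to see how stored requests look before wiring up Tally, you can insert a hypothetical row by hand. The e-mail and name below are made up, and the embedding is omitted because the workflow's embeddings step normally writes it; rows without an embedding are simply never returned by the search:

```sql
-- Hypothetical sample row; in normal operation the workflow writes
-- content, metadata and the 768-dimension embedding itself.
insert into feature_requests (content, metadata)
values (
  'Please add a night theme, the white background hurts my eyes',
  '{"user_email": "jane@example.com", "user_name": "Jane"}'::jsonb
);

-- The generated columns are filled automatically from the metadata:
select user_email, user_name, content from feature_requests;
```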

⚠️ Dimension Warning: This SQL is set up for 768 dimensions (compatible with the local nomic-embed-text model included in the template).

If you decide to switch the Embeddings node to use OpenAI's text-embedding-3-small, you must change all instances of 768 to 1536 in the SQL script above before running it.

How to customize

- **Change Input:** Swap the Tally node for Typeform, Intercom, or Google Sheets.
- **Change AI:** The template includes notes on how to swap the local Ollama nodes for OpenAI nodes if you prefer cloud hosting.
- **Change Output:** Swap Gmail for Slack, SendGrid, or HubSpot to notify your sales team instead of the user directly.
