by Anderson Adelino
# Voice Assistant Interface with n8n and OpenAI

This workflow creates a voice-activated AI assistant interface that runs directly in your browser. Users click a glowing orb to speak with the AI, which responds with voice using OpenAI's text-to-speech capabilities.

## Who is it for?

This template is perfect for:

- Developers looking to add voice interfaces to their applications
- Customer service teams wanting to create voice-enabled support systems
- Content creators building interactive voice experiences
- Anyone interested in creating their own "Alexa-like" assistant

## How it works

The workflow consists of two main parts:

- **Frontend Interface**: A beautiful animated orb that users click to activate voice recording
- **Backend Processing**: Receives the audio transcription, processes it through an AI agent with memory, and returns voice responses

The system uses:

- Web Speech API for voice recognition (browser-based)
- OpenAI GPT-4o-mini for intelligent responses
- OpenAI Text-to-Speech for voice synthesis
- Session memory to maintain conversation context

## Setup requirements

- n8n instance (self-hosted or cloud)
- OpenAI API key with access to the GPT-4o-mini model and the Text-to-Speech API
- Modern web browser with Web Speech API support (Chrome, Edge, Safari)

## How to set up

1. Import the workflow into your n8n instance
2. Add your OpenAI credentials to both OpenAI nodes
3. Copy the webhook URL from the "Audio Processing Endpoint" node
4. Edit the "Voice Assistant UI" node and replace YOUR_WEBHOOK_URL_HERE with your webhook URL
5. Access the "Voice Interface Endpoint" webhook URL in your browser
6. Click the orb and start talking!

## How to customize the workflow

- **Change the AI personality**: Edit the system message in the "Process User Query" node
- **Modify the visual style**: Customize the CSS in the "Voice Assistant UI" node
- **Add more capabilities**: Connect additional tools to the AI Agent
- **Change the voice**: Select a different voice in the "Generate Voice Response" node
- **Adjust memory**: Modify the context window length in the "Conversation Memory" node

## Demo

Watch the template in action: https://youtu.be/0bMdJcRMnZY
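To make the frontend/backend split concrete, here is a minimal sketch of the browser-side flow (orb click → Web Speech API transcription → POST to the n8n webhook). The actual "Voice Assistant UI" node ships its own HTML/JS; the payload field name (`transcript`) and the audio response handling are assumptions for illustration only.

```javascript
// Minimal sketch of the browser-side voice capture (assumed payload shape).
const WEBHOOK_URL = 'YOUR_WEBHOOK_URL_HERE'; // from the "Audio Processing Endpoint" node

const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.lang = 'en-US';

// Clicking the orb starts browser-based voice recognition.
document.getElementById('orb').addEventListener('click', () => recognition.start());

recognition.onresult = async (event) => {
  const transcript = event.results[0][0].transcript;

  // Send the transcription to n8n; the backend agent replies with synthesized speech.
  const response = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ transcript }),
  });

  // Play back the voice response returned by the workflow.
  const audioBlob = await response.blob();
  new Audio(URL.createObjectURL(audioBlob)).play();
};
```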
by Mantaka Mahir
## How it works

A complete AI-powered study assistant system that lets you chat naturally with your documents stored in Google Drive. The system has two connected workflows:

**1. Document Indexing Pipeline (Sub-workflow):**
- Accepts Google Drive folder URLs
- Automatically fetches all files from the folder
- Converts documents to plain text
- Generates 768-dimensional embeddings using Google Gemini
- Stores everything in a Supabase vector database for semantic search

**2. Study Chat Agent (Main workflow):**
- Provides a conversational chat interface
- Automatically detects and processes Google Drive links shared in chat
- Searches your indexed documents using semantic similarity
- Maintains conversation history across sessions
- Includes a calculator for math problems
- Responds naturally using Google Gemini 2.5 Pro

**Use cases:** Students studying for exams, researchers managing papers, professionals building knowledge bases, and anyone needing to query large document collections conversationally.

## Set up steps

**Prerequisites:**
- Google Drive OAuth2 credentials
- Google Gemini API key (free tier available)
- Supabase account with a Postgres connection
- ~15 minutes setup time

**Part 1: Document Indexing Workflow**
1. Add Google Drive OAuth2 credentials to the Drive nodes
2. Configure Supabase Postgres credentials in the SQL node
3. Add Supabase API credentials to the Vector Store node
4. Add the Google Gemini API key to the Embeddings node

**Part 2: Study Agent Workflow**
1. Import the Study Agent workflow
2. Verify the "Folder all file to vector" tool links to the indexing workflow
3. Add Google Gemini API credentials to both Gemini nodes
4. Configure Supabase API credentials in the Vector Store node
5. Add Postgres credentials for Chat Memory
6. Deploy and access the chat via the webhook URL

## How to use

1. Open the chat interface (webhook URL)
2. Paste a Google Drive folder link in the chat
3. Wait for indexing to complete (~1-2 minutes)
4. Start asking questions about your documents
5. The AI will search and answer from your materials

Note: The indexing workflow runs automatically when you share Drive links in chat, or you can run it manually to pre-load documents.

## System components

- **Main Agent:** Gemini 2.5 Pro with conversational AI
- **Vector Search:** Supabase with pgvector (768-dim embeddings)
- **Memory:** Postgres chat history (10-message context window)
- **Tools:** Document retrieval, Drive indexing, calculator
- **Embedding Model:** Google Gemini text-embedding-004
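The "detects Google Drive links shared in chat" step boils down to a simple check before the agent decides whether to index or answer. A minimal Code-node sketch is shown below; the field names (`chatInput`, `folderId`) and the regex are illustrative assumptions, not the template's exact implementation.

```javascript
// Rough sketch of Drive-link detection before handing a folder to the indexing sub-workflow.
const message = $json.chatInput || '';

// Match links like https://drive.google.com/drive/folders/<FOLDER_ID>
const driveFolderPattern = /https:\/\/drive\.google\.com\/drive\/folders\/([a-zA-Z0-9_-]+)/;
const match = message.match(driveFolderPattern);

if (match) {
  // A folder link was shared: pass the folder ID to the indexing pipeline.
  return [{ json: { action: 'index', folderId: match[1] } }];
}

// No link found: treat the message as a normal question for the agent.
return [{ json: { action: 'chat', question: message } }];
```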
by jellyfish
## Template Description

This description details the template's purpose, how it works, and its key features. You can copy and use it directly.

## Overview

This is a powerful n8n "meta-workflow" that acts as a Supervisor. Through a simple Telegram bot, you can dynamically create, manage, and delete countless independent, AI-driven market monitoring agents (Watchdogs). This template is a perfect implementation of the "Workflowception" (workflow managing workflows) concept in n8n, showcasing how to achieve ultimate automation by leveraging the n8n API.

## How It Works?

**Telegram Bot Interface:** Execute all operations by sending commands to your own Telegram bot, which controls the Watchdog workflows created below:

- `/add SYMBOL INTERVAL PROMPT`: Add a new monitoring task.
- `/delete SYMBOL`: Delete an existing monitoring task.
- `/list`: List all currently running monitoring tasks.
- `/help`: Get help information.

**Dynamic Workflow Management:** Upon receiving an /add command, the Supervisor system reads a "Watchdog" template, fills in your provided parameters (like trading pair and time interval), and then automatically creates a brand new, independent workflow via the n8n API and activates it.

**Persistent Storage:** All monitoring tasks are stored in a PostgreSQL database, ensuring your configurations are safe even if n8n restarts. The ID of each newly created workflow is also written back to the database to facilitate future deletion operations.

**AI-Powered Analysis:** Each created "Watchdog" workflow runs on a schedule. It fetches the latest candlestick chart by calling a self-hosted tradingview-snapshot service (https://github.com/0xcathiefish/tradingview-snapshot). This service works by simulating a login to your account and then using TradingView's official snapshot feature to generate an unrestricted, high-quality chart image; an example of a generated snapshot can be seen here: https://s3.tradingview.com/snapshots/u/uvxylM1Z.png. To use it, download the Docker image from the packages in the GitHub repository and run it as a container. The n8n workflow then communicates directly with this container via an HTTP API to request and receive the chart snapshot. After obtaining the image, the workflow calls a multimodal AI model (Gemini). It sends both the chart image and your custom text-based conditions (e.g., "breakout above previous high on high volume" or "break below 4-hour MA20") to the AI for analysis, enabling truly intelligent chart interpretation and alert triggering.

## Key Features

- **Workflowception:** A prime example of one workflow using an API to create, activate, and delete other workflows.
- **Full Control via Telegram:** Manage your monitoring bots from anywhere, anytime, without needing to log into the n8n interface.
- **AI Visual Analysis:** Move beyond simple price alerts. Let an AI "read" the charts for you to enable complex, pattern-based, and indicator-based intelligent alerts.
- **Persistent & Extensible:** Built on PostgreSQL for stability and reliability. You can easily add more custom commands.
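The Supervisor's first job is to turn an incoming Telegram command into parameters for the Watchdog template. A minimal Code-node sketch of that parsing step, assuming the standard Telegram update shape (`message.text`); the output structure is illustrative, and the actual workflow creation is handled afterwards by the n8n API call.

```javascript
// Minimal sketch of parsing the Telegram commands the Supervisor accepts.
const text = ($json.message?.text || '').trim();
const [command, ...args] = text.split(/\s+/);

let task;
switch (command) {
  case '/add': {
    // /add SYMBOL INTERVAL PROMPT  ->  everything after the interval is the AI prompt
    const [symbol, interval, ...promptParts] = args;
    task = { action: 'add', symbol, interval, prompt: promptParts.join(' ') };
    break;
  }
  case '/delete':
    task = { action: 'delete', symbol: args[0] };
    break;
  case '/list':
    task = { action: 'list' };
    break;
  default:
    task = { action: 'help' };
}

// Downstream nodes use this to fill the Watchdog template, call the n8n API,
// and store the created workflow ID in PostgreSQL for later deletion.
return [{ json: task }];
```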
by Dataki
# BigQuery RAG with OpenAI Embeddings

This workflow demonstrates how to use Retrieval-Augmented Generation (RAG) with BigQuery and OpenAI. By default, you cannot directly use OpenAI Cloud Models within BigQuery.

## Try it

This template comes with access to a **public BigQuery table** that stores part of the n8n documentation (about nodes and triggers), allowing you to try the workflow right away: `n8n-docs-rag.n8n_docs.n8n_docs_embeddings`

⚠️ **Important:** BigQuery uses the **requester pays** model. The table is small (~40 MB), and BigQuery provides **1 TB of free processing per month**. Running 3–4 queries for testing should remain within the free tier, unless your project has already consumed its quota. More info here: BigQuery Pricing

## Why this workflow?

Many organizations already use BigQuery to store enterprise data, and OpenAI for LLM use cases. When it comes to RAG, the common approach is to rely on dedicated vector databases such as Qdrant, Pinecone, Weaviate, or PostgreSQL with pgvector. Those are good choices, but when an organization already uses and is familiar with BigQuery, it can be more efficient to leverage its built-in vector capabilities for RAG. Then comes the question of the LLM. If OpenAI is the chosen provider, teams are often frustrated that it is not directly compatible with BigQuery. This workflow solves that limitation.

## Prerequisites

To use this workflow, you will need:

- A good understanding of BigQuery and its vector capabilities
- A BigQuery table containing documents and an embeddings column
  - The embeddings column must be of type FLOAT and mode REPEATED (to store arrays)
- A data pipeline that generates embeddings with the OpenAI API and stores them in BigQuery

This template comes with a public table that stores part of the n8n documentation (about nodes and triggers), so you can try it out: `n8n-docs-rag.n8n_docs.n8n_docs_embeddings`

## How it works

The system consists of two workflows:

- **Main workflow** → Hosts the AI Agent, which connects to a subworkflow for RAG
- **Subworkflow** → Queries the BigQuery vector table. The retrieved documents are then used by the AI Agent to generate an answer for the user.
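The subworkflow's retrieval step amounts to a similarity query against the REPEATED FLOAT embeddings column. A minimal Code-node sketch of how such a query could be assembled is shown below; the column names (`content`, `embeddings`) and the use of BigQuery's `ML.DISTANCE` with cosine distance are assumptions about the table schema, not necessarily the template's exact query.

```javascript
// Sketch of building the similarity query the RAG subworkflow runs against BigQuery.
const queryEmbedding = $json.embedding; // array of floats from the OpenAI Embeddings API
const table = '`n8n-docs-rag.n8n_docs.n8n_docs_embeddings`';
const topK = 5;

const sql = `
  SELECT
    content,
    ML.DISTANCE(embeddings, ${JSON.stringify(queryEmbedding)}, 'COSINE') AS distance
  FROM ${table}
  ORDER BY distance ASC
  LIMIT ${topK}
`;

// The BigQuery node executes this SQL and returns the most similar documents,
// which the AI Agent then uses to ground its answer.
return [{ json: { sql } }];
```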
by Thiago Vazzoler Loureiro
## Description

This workflow vectorizes the TUSS (Terminologia Unificada da Saúde Suplementar) table by transforming medical procedures into vector embeddings ready for semantic search. It automates the import of TUSS data, performs text preprocessing, and uses Google Gemini to generate vector embeddings. The resulting vectors can be stored in a vector database, such as PostgreSQL with pgvector, enabling efficient semantic queries across healthcare data.

## What Problem Does This Solve?

Searching for medical procedures using traditional keyword matching is often imprecise. This workflow enhances the search experience by enabling semantic similarity search, which can retrieve more relevant results based on the meaning of the query instead of exact word matches.

## How It Works

1. **Import TUSS data:** Load medical procedure entries from the TUSS table.
2. **Preprocess text:** Clean and prepare the text for embedding.
3. **Generate embeddings:** Use Google Gemini to convert each procedure into a semantic vector.
4. **Store vectors:** Save the output in a PostgreSQL database with the pgvector extension.

## Prerequisites

- An n8n instance (self-hosted).
- A PostgreSQL database with the pgvector extension enabled.
- Access to the Google Gemini API.
- TUSS data in a structured format (CSV, database, or API source).

## Customization Tips

- Adapt the preprocessing logic to your own language or domain-specific terms.
- Swap Google Gemini with another embedding model, such as OpenAI or Cohere.
- Adjust the chunking logic to control the granularity of semantic representation.

## Setup Instructions

1. Prepare a source (database or CSV) with TUSS data. You need at least two fields: `CD_ITEM` (medical procedure code) and `DS_ITEM` (medical procedure description).
2. Configure your Oracle or PostgreSQL database credentials in the Credentials section of n8n.
3. Make sure your PostgreSQL database has pgvector installed.
4. Replace the placeholder table and column names with your actual TUSS table.
5. Connect your Google Gemini credentials (via OpenAI proxy or official connector).
6. Run the workflow to vectorize all medical procedure descriptions.
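The preprocessing step can be as simple as normalizing each procedure description and pairing it with its code before embedding. A minimal Code-node sketch follows; the `CD_ITEM`/`DS_ITEM` field names come from the setup instructions, while the specific cleanup rules are illustrative assumptions.

```javascript
// Minimal preprocessing sketch for TUSS rows before embedding.
return $input.all().map((item) => {
  const code = String(item.json.CD_ITEM).trim();
  const description = String(item.json.DS_ITEM)
    .replace(/\s+/g, ' ') // collapse repeated whitespace
    .trim()
    .toLowerCase();

  // Combine code and description into the text that will be embedded,
  // so semantic search can surface the procedure code together with its meaning.
  return {
    json: {
      cd_item: code,
      text: `${code} - ${description}`,
    },
  };
});
```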
by Raz Hadas
This n8n template demonstrates how to automate stock market technical analysis to detect key trading signals and send real-time alerts to Discord. It's built to monitor for the Golden Cross (a bullish signal) and the Death Cross (a bearish signal) using simple moving averages.

Use cases are many: automate your personal trading strategy, monitor a portfolio for significant trend changes, or provide automated analysis highlights for a trading community or client group.

## 💡 Good to know

- This template relies on the Alpha Vantage API, which has a free tier with usage limits (e.g., API calls per minute and per day). Be mindful of these limits, especially if monitoring many tickers.
- The data provided by free APIs may have a slight delay and is intended for informational and analysis purposes.
- **Disclaimer**: This workflow is an informational tool and does not constitute financial advice. Always do your own research before making any investment decisions.

## ⚙️ How it works

1. The workflow triggers automatically every weekday at 5 PM, after the typical market close.
2. It fetches a list of user-defined stock tickers from the Set node.
3. For each stock, it gets the latest daily price data from Alpha Vantage via an HTTP Request and stores the new data in a PostgreSQL database to maintain a history.
4. The workflow then queries the database for the last 121 days of data for each stock.
5. A Code node calculates two Simple Moving Averages (SMAs): a short-term (60-day) and a long-term (120-day) average for both today and the previous day.
6. Using If nodes, it compares the SMAs to see if a Golden Cross (short-term crosses above long-term) or a Death Cross (short-term crosses below long-term) has just occurred.
7. Finally, a formatted alert message is sent to a specified Discord channel via a webhook.

## 🚀 How to use

1. Configure your credentials for PostgreSQL and select them in the two database nodes.
2. Get a free Alpha Vantage API key and add it to the "Fetch Daily History" node. For best practice, create a Header Auth credential for it.
3. Paste your Discord Webhook URL into the final "HTTP Request" node.
4. Update the list of stock symbols in the "Set - Ticker List" node to monitor the assets you care about.
5. The workflow is set to run on a schedule, but you can press "Test workflow" to trigger it manually at any time.

## ✅ Requirements

- An Alpha Vantage account for an API key.
- A PostgreSQL database to store historical price data.
- A Discord account and a server where you can create a webhook.

## 🎨 Customising this workflow

- Easily change the moving average periods (e.g., from 60/120 to 50/200) by adjusting the SMA_SHORT and SMA_LONG variables in the "Compute 60/120 SMAs" Code node.
- Modify the alert messages in the "Set - Golden Cross Msg" and "Set - Death Cross Msg" nodes.
- Swap out Discord for another notification service like Slack or Telegram by replacing the final HTTP Request node.
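The heart of the "Compute 60/120 SMAs" Code node is a pair of rolling averages computed for both today and yesterday. A minimal sketch, assuming the 121 daily closes arrive newest-first with a numeric `close` field (the template's actual column names may differ):

```javascript
// Sketch of the SMA and cross-detection logic.
const SMA_SHORT = 60;
const SMA_LONG = 120;

const closes = $input.all().map((item) => Number(item.json.close));

// Average of `period` closes starting at `offset` (0 = today, 1 = yesterday).
// 121 rows are needed so yesterday's 120-day SMA has a full window.
const sma = (period, offset) =>
  closes.slice(offset, offset + period).reduce((sum, c) => sum + c, 0) / period;

const shortToday = sma(SMA_SHORT, 0);
const longToday = sma(SMA_LONG, 0);
const shortYesterday = sma(SMA_SHORT, 1);
const longYesterday = sma(SMA_LONG, 1);

// A cross "just occurred" when the relative order of the two SMAs flipped today.
const goldenCross = shortYesterday <= longYesterday && shortToday > longToday;
const deathCross = shortYesterday >= longYesterday && shortToday < longToday;

return [{ json: { shortToday, longToday, goldenCross, deathCross } }];
```

Changing the periods (e.g., to 50/200) only requires editing the two constants, as noted in the customisation section.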
by Stephan Koning
Recruiter Mirror is a proof‑of‑concept ATS analysis tool for SDRs/BDRs. Compare your LinkedIn profile or CV to job descriptions and get recruiter‑ready insights. By comparing candidate profiles against job descriptions, it highlights strengths, flags missing keywords, and generates actionable optimization tips. Designed as a practical proof of concept for breaking into tech sales, it shows how automation and AI prompts can turn LinkedIn into a recruiter‑ready magnet.

Based on the workflow (Webhook → LinkedIn CV/JD fetch → GhostGenius API → n8n parsing/transform → Groq LLM → Output to Webhook), here are the tools and APIs required to set up the Recruiter Mirror proof of concept.

## 🔧 Tools & APIs Required

**1. n8n (Automation Platform)**
- Either n8n Cloud or a self‑hosted n8n instance.
- Used to orchestrate the workflow, manage nodes, and handle credentials securely.

**2. Webhook Node (Form Intake)**
- Captures the LinkedIn profile (LinkedIn_CV) and job posting (LinkedIn_JD) links submitted by the user.
- Acts as the starting point for the workflow.

**3. GhostGenius API**
- Endpoints used:
  - `/v2/profile` → Scrapes and returns structured CV/LinkedIn data.
  - `/v2/job` → Scrapes and returns structured job description data.
- **Auth**: Requires valid credentials (e.g., API key / header auth).

**4. Groq LLM API (via n8n node)**
- Model used: moonshotai/kimi-k2-instruct (via the Groq Chat Model node).
- Purpose: Runs the ATS Recruiter Check, comparing the CV JSON against the JD JSON, then outputs structured JSON per the ATS schema.
- **Auth**: Groq account + saved API credentials in n8n.

**5. Code Node (JavaScript Transformation)**
- Parses Groq's JSON output safely (JSON.parse).
- Generates clean, recruiter‑ready HTML summaries with structured sections: status, reasoning, recommendation, matched/missing keywords, and optimization tips.

**6. n8n Native Nodes**
- **Set & Aggregate nodes** → Rebuild structured CV & JD objects.
- **Merge node** → Combine CV data with the job description for comparison.
- **If node** → Validates the LinkedIn URL before processing (falls back to error messaging).
- **Respond to Webhook node** → Sends back the final recruiter‑ready insights as JSON (or HTML).

## ⚠️ Important Notes

- **Credentials**: Store API keys & auth headers securely inside the n8n Credentials Manager (never hardcode them inside nodes).
- **Proof of Concept**: This workflow demonstrates feasibility but is **not production‑ready** (scraping stability, LinkedIn terms of use, and API limits should be considered before real deployments).
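A minimal sketch of the Code node described in step 5: parse the model's JSON safely and render a recruiter‑ready HTML summary. The input field (`$json.output`) and the ATS schema keys (status, reasoning, recommendation, matched_keywords, missing_keywords, tips) are illustrative assumptions.

```javascript
// Sketch: safe JSON.parse of the Groq output, then an HTML summary.
let result;
try {
  result = JSON.parse($json.output);
} catch (error) {
  // Fall back to an error payload instead of crashing the workflow.
  return [{ json: { error: 'Model returned invalid JSON', raw: $json.output } }];
}

const list = (items = []) => items.map((i) => `<li>${i}</li>`).join('');

const html = `
  <h2>ATS Recruiter Check</h2>
  <p><strong>Status:</strong> ${result.status}</p>
  <p><strong>Reasoning:</strong> ${result.reasoning}</p>
  <p><strong>Recommendation:</strong> ${result.recommendation}</p>
  <h3>Matched keywords</h3><ul>${list(result.matched_keywords)}</ul>
  <h3>Missing keywords</h3><ul>${list(result.missing_keywords)}</ul>
  <h3>Optimization tips</h3><ul>${list(result.tips)}</ul>
`;

return [{ json: { html, ...result } }];
```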
by Zain Khan
# AI-Powered Quiz Generator for Instructors 📝🤖

Instantly turn any document into a shareable online quiz! This n8n workflow automates the entire quiz creation process: a new Jotform submission triggers the flow, the Google Gemini AI extracts key concepts and generates multiple-choice questions with correct answers, saves the questions to a Google Sheet for record-keeping, and finally creates a fully built, ready-to-share Jotform quiz using an HTTP request.

## How it Works

This workflow acts as a complete "document-to-quiz" automation tool, simplifying the process of creating educational or testing materials:

1. **Trigger & Input:** The process starts when a user fills out the main Jotform submission form, providing a document (PDF/file upload), the desired Quiz Title, and the Number of Questions to generate. Create a Jotform like this one: https://form.jotform.com/252856893250062, with fields for Quiz Name, File Upload, and Number of Questions.
2. **Document Processing:** The workflow retrieves the uploaded document via an HTTP request and uses the Extract from File node to parse and extract the raw text content from the file.
3. **AI Question Generation:** The extracted text, quiz title, and desired question count are passed to the Google Gemini AI Agent. Following strict instructions, the AI analyzes the content and generates the specified number of multiple-choice questions (with four options and the correct answer indicated) in a precise JSON format.
4. **Data Structuring:** The generated JSON is validated and formatted using a Structured Output Parser and split into individual items for each question.
5. **Record Keeping (Google Sheets):** Each generated question, along with all its options and the confirmed correct answer, is appended as a new row in a designated Google Sheet for centralized record-keeping and review.
6. **Jotform Quiz Creation (HTTP Request):** The workflow dynamically constructs the required API body, converting the AI-generated questions and options into the necessary fields for a new Jotform. It then uses an HTTP Request node to call the Jotform API, creating a brand-new, ready-to-use quiz form.
7. **Final Output:** The final output provides the link to the newly created quiz, which can be shared immediately for submissions.

## Requirements

To deploy this automated quiz generator, ensure you have the following accounts and credentials configured in your n8n instance:

- **Jotform credentials:** An API key is required for both the Jotform Trigger (to start the workflow) and the final HTTP Request (to create the new quiz form via the API). Sign up for Jotform here: https://www.jotform.com/?partner=zainurrehman
- **Google Gemini API key:** An API key for the Google Gemini Chat Model to power the AI Agent for question generation.
- **Google Sheets credentials:** An OAuth2 or API key credential for the Google Sheets node to save the generated questions.
- **Initial Jotform:** A source Jotform that accepts the user input: a File Upload field, a Text field for the Quiz Title, and a Number field for the Number of Questions.

**Pro Tip:** After the final HTTP Request, add an additional step (like an Email or Slack node) to automatically send the generated quiz link back to the user who submitted the initial request!
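A rough Code-node sketch of step 6, turning the AI's question JSON into form-creation parameters for the Jotform API call. Both the AI output shape (`question`, `options`, `answer`) and the exact Jotform parameter names are assumptions; check the Jotform API documentation for the create-form request your HTTP Request node actually sends.

```javascript
// Sketch: map AI-generated questions to Jotform form-creation parameters.
const quizTitle = $json.quizTitle;
const questions = $json.questions; // e.g. [{ question, options: [a, b, c, d], answer }]

const body = { 'properties[title]': quizTitle };

questions.forEach((q, i) => {
  body[`questions[${i}][type]`] = 'control_radio';        // multiple-choice field (assumed type)
  body[`questions[${i}][text]`] = q.question;
  body[`questions[${i}][order]`] = String(i + 1);
  body[`questions[${i}][options]`] = q.options.join('|'); // pipe-separated options (assumed format)
});

// The HTTP Request node sends `body` as form-encoded parameters to create the quiz,
// and the API response contains the link to the new form.
return [{ json: { body } }];
```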
by Rahul Joshi
## 📘 Description

This workflow turns raw product inputs into a complete, launch-ready AI-generated social media campaign package. It accepts product details via webhook, sanitizes messy fields, generates a strategic campaign blueprint, produces Instagram captions, creates discovery-optimized hashtags, generates photorealistic commercial images, computes optimal posting times, assembles all outputs into a unified JSON package, and finally delivers the entire campaign to Slack.

Multiple AI agents work in sequence to generate structured outputs, each parsed and validated using strict JSON schemas. Images produced by DALL·E 3 are uploaded to Cloudinary for hosting. A post-processing module then merges captions, images, hashtags, and schedules into a final payload. A robust error handler ensures every failure is captured and sent to Slack with diagnostic information.

This workflow replaces an entire marketing team's creative production pipeline, producing consistent, multi-asset campaign kits in minutes.

## ⚙️ What This Workflow Does (Step-by-Step)

1. 🟢 **Receive Product Details via Webhook:** Captures incoming product data including name, description, benefits, audience, and brand voice.
2. 🧹 **Clean & Normalize Product Input Fields:** Sanitizes escaped characters, trims whitespace, and prepares stable fields for AI consumption.
3. 🧠 **Generate Campaign Blueprint Using AI:** Creates a full strategic blueprint in structured JSON: article summary, insights, tone and target-audience mapping, and platform-specific post objects.
4. 🧠 **LLM Engine + Structured Parser for Blueprint:** Ensures the blueprint output is clean, validated JSON aligned with the schema.
5. ✍️ **Generate Instagram Captions Using AI:** Produces five short, conversion-ready captions + CTAs, based on blueprint insights.
6. 🧠 **Caption LLM + Structured Parser:** Validates the caption schema for downstream use.
7. #️⃣ **Generate Hashtag Set Using AI:** Creates 12–18 optimized hashtags using a discovery strategy (broad, mid, niche).
8. 🧠 **Hashtag LLM + Parser:** Validates and ensures hashtags follow the correct JSON structure.
9. 🎨 **Split Campaign Posts for Image Generation:** Breaks out each post's image prompt for independent asset creation.
10. 🖼️ **Generate Social Media Image Using AI:** Uses DALL·E 3 to create ultra-realistic, 8K-style commercial visuals tailored to the campaign.
11. ☁️ **Upload Generated Image to Cloudinary:** Uploads rendered images and retrieves secure public URLs.
12. 🕒 **Generate Optimal Posting Schedule Using AI:** Recommends the best posting time per platform (Asia/Kolkata timezone) plus reasoning.
13. 🧠 **Schedule LLM + Parser:** Ensures a structured schedule schema with platform, time, and rationale.
14. 🔀 **Combine All Campaign Assets:** Merges Cloudinary image URLs, captions + CTAs, the hashtag set, and the posting schedule into one final dataset (see the sketch after this list).
15. 🧩 **Prepare Final Campaign Package JSON:** Constructs a production-ready unified JSON: images, captions, hashtags, schedule.
16. 💬 **Send Final Campaign Package to Slack:** Delivers the formatted campaign output (image URLs, captions + CTAs, hashtags, posting times) for immediate creative review.
17. 🚨 **Error Handler Trigger → Slack Alert:** Captures workflow failures and sends structured debugging info to Slack.
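A minimal sketch of steps 14–15, merging the branch outputs into one campaign package. The node names follow the step titles above, while the field names (`captions`, `hashtags`, `schedule`, `secure_url`) are illustrative assumptions about each branch's parsed output.

```javascript
// Sketch: combine branch outputs into the final campaign package JSON.
const captions = $('Generate Instagram Captions Using AI').first().json.captions;
const hashtags = $('Generate Hashtag Set Using AI').first().json.hashtags;
const schedule = $('Generate Optimal Posting Schedule Using AI').first().json.schedule;

// One Cloudinary URL per generated post image.
const imageUrls = $('Upload Generated Image to Cloudinary')
  .all()
  .map((item) => item.json.secure_url);

const campaignPackage = {
  generatedAt: new Date().toISOString(),
  images: imageUrls,
  captions,  // five captions + CTAs
  hashtags,  // 12-18 discovery-optimized tags
  schedule,  // per-platform posting time + rationale
};

// This single object is what the Slack node formats and delivers.
return [{ json: campaignPackage }];
```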
## 🧩 Prerequisites

- OpenAI API (GPT-4o + DALL·E 3)
- Cloudinary account (image hosting)
- Slack bot token
- Valid webhook endpoint
- Clean product input JSON

## 💡 Key Benefits

- ✔ Full AI-generated multi-asset campaign in minutes
- ✔ Eliminates manual copywriting, design, and planning
- ✔ Ensures structured, reliable JSON at every stage
- ✔ Creates polished commercial visuals instantly
- ✔ Produces a posting strategy tailored to audience behavior
- ✔ Unified campaign delivery straight to Slack

## 👥 Perfect For

- Consumer brands launching fast cycles
- Agencies needing rapid campaign generation
- Teams without in-house designers/copywriters
- Influencers or D2C founders wanting automated content production
by Rahul Joshi
## 📘 Description

This workflow automates document understanding by accepting uploaded PDF or TXT files, extracting their text, generating a structured summary and question–answer set using GPT-4o, validating the AI output, and returning a clean JSON response to the requester. It also sends an internal Slack preview and logs malformed outputs for debugging.

It performs intelligent file-type detection, handles binary text extraction, enforces strict JSON formatting from the AI model, and ensures that the final response is clean, structured, and ready for use in downstream systems. All errors (missing text, invalid JSON, or malformed AI output) are captured automatically in Google Sheets.

The workflow is designed as a plug-and-play document-analysis engine that converts any uploaded document into meaningful insights instantly.

## ⚙️ What This Workflow Does (Step-by-Step)

1. 📥 **Receive Document Upload via Webhook:** Captures incoming files (PDF or TXT) posted to the webhook endpoint.
2. 🔍 **Check If Uploaded File Is PDF / TXT:** Detects the file extension and routes it for extraction: PDF goes to the PDF extractor, TXT to the text extractor; other file types are ignored.
3. 📝 **Extract Text from Document:** Extracts readable text from PDF binaries and reads raw plain text from TXT files. The extracted text becomes the input for the AI analysis.
4. 🤖 **Generate Summary & Q&A Using AI:** Uses GPT-4o to produce a 150–200 word summary and five structured Q&A pairs. The output must strictly follow the specified JSON schema.
5. 🧠 **LLM Engine + Memory Context:** GPT-4o provides the reasoning engine, a memory buffer maintains short context for stability, and an output parser ensures schema compliance.
6. ⚠️ **Validate AI Output Before Processing:** Checks whether the output is non-empty and correctly structured. Invalid output is logged to Google Sheets.
7. 📊 **Log Invalid AI Output to Google Sheet:** Records failures for audit, debugging, and retraining.
8. 🧹 **Unwrap AI Output Object:** Removes unnecessary array wrappers and normalizes the result.
9. 📤 **Prepare Final Response Payload:** Ensures the workflow responds with a single clean JSON object.
10. 🔁 **Send Final Summary & Q&A Response to Webhook:** Returns the final structured JSON to the requesting system.
11. 💬 **Send Summary Preview to Slack:** Shares a short preview (first 300 characters) for internal visibility.

## 🧩 Prerequisites

- Webhook endpoint configured for uploads
- Azure OpenAI GPT-4o credentials
- Google Sheets OAuth connection
- Slack bot token

## 💡 Key Benefits

- ✔ Fully automated PDF/TXT understanding
- ✔ AI-powered summary + structured Q&A
- ✔ Strict JSON compliance for downstream systems
- ✔ Error-proof: logs all failures for investigation
- ✔ Slack visibility for quick internal review
- ✔ Works with minimal human involvement

## 👥 Perfect For

- Research teams
- Documentation workflows
- Customer-support intelligence
- Interview screening document parsing
- Internal knowledge extraction systems
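A minimal Code-node sketch of the validate → unwrap → respond steps (6, 8, and 9 above). The expected shape (`summary` plus five `qa` pairs) follows the description; the exact field names produced by the output parser are assumptions.

```javascript
// Sketch: validate and unwrap the AI output before responding to the webhook.
const raw = $json.output;

// The structured parser sometimes wraps the result in a one-element array.
const result = Array.isArray(raw) ? raw[0] : raw;

const isValid =
  result &&
  typeof result.summary === 'string' &&
  result.summary.length > 0 &&
  Array.isArray(result.qa) &&
  result.qa.length === 5 &&
  result.qa.every((pair) => pair.question && pair.answer);

if (!isValid) {
  // Route to the Google Sheets logger instead of answering the requester.
  return [{ json: { valid: false, raw } }];
}

// Single clean JSON object for the webhook response,
// plus the 300-character preview used in the Slack message.
return [{ json: { valid: true, ...result, preview: result.summary.slice(0, 300) } }];
```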
by Budi SJ
# Automated Brand DNA Generator Using JotForm, Google Search, AI Extraction & Notion

The Brand DNA Generator workflow automatically scans and analyzes online content to build a company's Brand DNA profile. It starts with input from a form, then crawls the company's website and Google search results to gather relevant information. Using AI-powered extraction, the system identifies insights such as value propositions, ideal customer profiles (ICP), pain points, proof points, brand tone, and more. All results are neatly formatted and automatically saved to a Notion database as a structured Brand DNA report, eliminating the need for manual research.

## 🛠️ Key Features

- Automated data capture: collects company data directly from form submissions and Google search results.
- AI-powered insight extraction: uses LLMs to extract and summarize brand-related information from website content.
- Fetches clean text from multiple web pages using HTTP requests and a content extractor.
- Merges extracted data from multiple sources into a single Brand DNA JSON structure.
- Automatically creates a new page in Notion with formatted sections (headings, paragraphs, and bullet points).
- Handles parsing failures and processes multiple pages efficiently in batches.

## 🔧 Requirements

- JotForm API key, to capture company data from form submissions.
- SerpAPI key, to perform automated Google searches.
- OpenRouter / LLM API, for AI-based language understanding and information extraction.
- Notion integration token & database ID, to save the final Brand DNA report to Notion.

## 🧩 Setup Instructions

1. Connect your JotForm account and select the form containing the fields Company Name and Company Website.
2. Add your SerpAPI key.
3. Configure the AI model using OpenRouter or another LLM.
4. Enter your Notion credentials and specify the databaseId in the Create a Database Page node.
5. Customize the prompt in the Information Extractor node to modify the tone or structure of the AI analysis (optional).
6. Activate the workflow, then submit data through the JotForm to test automatic generation and Notion integration.

## 💡 Final Output

A complete Brand DNA report containing:

- Company Description
- Ideal Customer Profile
- Pain Points
- Value Proposition
- Proof Points
- Brand Tone
- Suggested Keywords

All generated automatically from the company's online presence and stored in Notion with no manual input required.
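A minimal sketch of the "merge extracted data from multiple sources into a single Brand DNA JSON structure" step. The keys mirror the sections listed under Final Output; the exact field names emitted by the Information Extractor node are assumptions.

```javascript
// Sketch: merge per-page AI extractions into one Brand DNA object.
const fields = [
  'companyDescription',
  'idealCustomerProfile',
  'painPoints',
  'valueProposition',
  'proofPoints',
  'brandTone',
  'suggestedKeywords',
];

const brandDna = Object.fromEntries(fields.map((f) => [f, []]));

// Each incoming item is the extraction result for one crawled page.
for (const item of $input.all()) {
  for (const field of fields) {
    const value = item.json[field];
    if (!value) continue;
    // Normalize strings vs. arrays, then deduplicate.
    const values = Array.isArray(value) ? value : [value];
    brandDna[field] = [...new Set([...brandDna[field], ...values])];
  }
}

// The Notion node turns this single object into headings and bullet lists.
return [{ json: brandDna }];
```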
by Cheng Siong Chin
## HOW IT WORKS

This workflow automates end-to-end data intelligence processing by ingesting structured data (CSV, JSON), enriching it through multiple AI analysis pathways, and generating actionable insights. Designed for business analysts, data scientists, and operations teams, it solves the problem of manual data enrichment and fragmented analysis by consolidating diverse AI models (GPT-4, LLM analysis, sentiment detection) into a unified pipeline.

Data flows from source ingestion → enrichment/validation → branching into three specialized analysis paths (Competitive Intelligence, Sentiment Analysis, Market Insights) → aggregation → result storage (Google Sheets) and notifications (Slack, Gmail). Each path applies distinct AI models for comprehensive intelligence gathering.

## SETUP STEPS

1. Configure the OpenAI API key in credentials
2. Set up the Google Sheets connection with a service account
3. Add a Slack webhook for notifications
4. Connect Gmail for automated report distribution
5. Configure the NVIDIA API (if using specialized models)
6. Map the input data source (CSV upload or API endpoint)
7. Test each branch independently before full deployment

## PREREQUISITES

OpenAI API key, Google Sheets access, Slack workspace, Gmail account, basic n8n familiarity.

## USE CASES

Market research automation, competitive intelligence monitoring, customer feedback analysis at scale.

## CUSTOMIZATION

Swap AI models (Claude, Gemini, Llama), add or remove analysis branches, modify output destinations.

## BENEFITS

Eliminates manual data processing (roughly 80% time savings) and enables simultaneous multi-perspective analysis.
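One way to picture the enrichment/validation → branching step is a fan-out that emits one copy of each validated record per analysis path. This is only an illustrative sketch of that pattern; the template may instead wire the branches directly in the canvas, and the `branch`, `id`, and `text` field names are assumptions.

```javascript
// Sketch: validate records, then fan them out to the three analysis paths.
const branches = ['competitive_intelligence', 'sentiment_analysis', 'market_insights'];

const output = [];
for (const item of $input.all()) {
  // Basic validation before enrichment: skip rows without an id or text payload.
  if (!item.json.id || !item.json.text) continue;

  for (const branch of branches) {
    output.push({ json: { ...item.json, branch } });
  }
}

// A Switch node keyed on `branch` would route each copy to its analysis path,
// and the results are aggregated before storage and notification.
return output;
```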