by System Admin
Updates the Zendesk ticket by adding the Jira issue key to the "Jira Issue Key" field, if the issue has already been created in Jira.
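For reference, here is a minimal sketch of the underlying Zendesk API call this step performs, assuming Basic auth with an API token; the subdomain, ticket ID, and custom field ID (123456) are placeholders you would look up under Admin → Ticket Fields.

```typescript
// Minimal sketch: set a custom "Jira Issue Key" field on a Zendesk ticket.
// Subdomain, API token, ticket ID, and the custom field ID are placeholders.
const subdomain = "your-subdomain";
const auth = Buffer.from("agent@example.com/token:YOUR_API_TOKEN").toString("base64");

async function setJiraIssueKey(ticketId: number, issueKey: string): Promise<void> {
  const res = await fetch(`https://${subdomain}.zendesk.com/api/v2/tickets/${ticketId}.json`, {
    method: "PUT",
    headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      ticket: { custom_fields: [{ id: 123456, value: issueKey }] },
    }),
  });
  if (!res.ok) throw new Error(`Zendesk update failed: ${res.status}`);
}

setJiraIssueKey(42, "PROJ-101").catch(console.error);
```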
by Mirza Ajmal
📍 Overview
This no-code workflow is built for creators, agencies, and operators who want to automate the repurposing of Instagram Reels. It runs end-to-end and outputs structured insights and content-ready scripts—without touching a single tool manually.

🧰 What It Does
- Triggered simply by sending an Instagram Reel URL via Telegram.
- Downloads the Reel automatically.
- Converts video to audio using the FreeConvert API.
- Transcribes speech to text using AssemblyAI (see the sketch after this description).
- Analyzes both transcript and description using a connected LLM (OpenAI or Mistral).
- Extracts: niche, core message, 3 viral content hooks, and 3 ready-to-use short-form video scripts.
- Saves all data to a Google Sheet for easy reuse by the creator or team.

🧪 APIs & Integrations
- Telegram Bot API (for triggering)
- FreeConvert API (MP4 to MP3 conversion)
- AssemblyAI (for transcription)
- OpenAI or Mistral (LLM for content analysis)
- Google Sheets API (for logging all outputs)

✅ Requirements
- An n8n instance (self-hosted or cloud)
- AssemblyAI API key
- FreeConvert API key
- Telegram Bot token
- Google service account credentials
- Your preferred LLM key (OpenAI or Mistral)

💡 Why Use This Workflow
- Runs entirely from Telegram—no dashboards required
- Helps you extract deep insights and reusable content from any Instagram Reel
- All tools used are free or very low cost
- Ideal for scaling personal brands or agency operations
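A minimal sketch of the AssemblyAI transcription step, assuming the FreeConvert step already produced a publicly reachable MP3 URL; the API key and audio URL below are placeholders.

```typescript
// Submit an audio URL to AssemblyAI for transcription and poll until it completes.
const ASSEMBLYAI_KEY = "YOUR_ASSEMBLYAI_KEY";

async function transcribe(audioUrl: string): Promise<string> {
  const headers = { authorization: ASSEMBLYAI_KEY, "content-type": "application/json" };

  // 1. Create the transcription job
  const create = await fetch("https://api.assemblyai.com/v2/transcript", {
    method: "POST",
    headers,
    body: JSON.stringify({ audio_url: audioUrl }),
  });
  const { id } = await create.json();

  // 2. Poll until the job finishes
  while (true) {
    const poll = await fetch(`https://api.assemblyai.com/v2/transcript/${id}`, { headers });
    const job = await poll.json();
    if (job.status === "completed") return job.text;
    if (job.status === "error") throw new Error(job.error);
    await new Promise((r) => setTimeout(r, 3000));
  }
}

transcribe("https://example.com/reel-audio.mp3").then(console.log).catch(console.error);
```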
by Axiomlab.dev
HubSpot Lead Refinement

🚀 How it works
- Triggers:
  - HubSpot Trigger: Fires when contacts are created/updated.
  - Manual Trigger: Run on demand for testing or batch checks.
- Get Recently Created/Updated Contacts: Pulls fresh contacts from HubSpot.
- Edit Fields (Set): Maps key fields (First Name, Last Name, Email) for the Agent.
- AI Agent: First reads your Google Doc (via the Google Docs tool) to learn the research steps and output format, then uses SerpAPI (Google engine) to locate the contact's likely LinkedIn profile and produce a concise result.
- Code – Remove Think Part: Cleans the model output (removes hidden "think" blocks / formatting) so only the final answer remains. A sketch of this cleanup step appears after this description.
- HubSpot Update: Writes the cleaned LinkedIn URL to the contact (via email match).

🔑 Required Credentials
- HubSpot App Token (Private App) — for Get/Update contact nodes.
- HubSpot Developer OAuth (optional) — if you use the HubSpot Trigger node for event-based runs.
- Google Service Account — for the Google Docs tool (share your playbook doc with this service account).
- OpenRouter — for the OpenRouter Chat Model used by the AI Agent.
- SerpAPI — for targeted Google searches from within the Agent.

🛠️ Setup Instructions
HubSpot
- Create a Private App and copy the Access Token.
- Add or confirm the contact property linkedinUrl (Text).
- Plug the token into the HubSpot nodes.
- If using the HubSpot Trigger, connect your Developer OAuth app and subscribe to contact create/update events.

Google Docs (Living Instructions)
- ➡️ Sample configuration doc file: copy the sample doc and modify it to your needs.
- Share the doc with your Google Service Account (Viewer is fine).
- In the Read Google Docs node, paste the Document URL.

OpenRouter & SerpAPI
- Add your OpenRouter key to the OpenRouter Chat Model credential.
- Add your SerpAPI key to the SerpAPI tool node.
- (Optional) In your Google Doc or Agent prompt, set sensible defaults for SerpAPI (engine=google, hl=en, gl=us, num=5, max 1–2 searches).

✨ What you get
- Auto-enriched contacts with a LinkedIn URL and profile insights (clean, validated output).
- A research process you can change anytime by editing the Google Doc—no workflow changes needed.
- Tight, low-noise searches via SerpAPI to keep costs down.

And that's it—publish and let the Agent enrich new leads automatically while you refine the rules in your doc. This makes it easy to hand the process off to a team that never needs to touch the automation nodes.
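A minimal sketch of the "Remove Think Part" code node, assuming the model wraps its hidden reasoning in <think>...</think> tags (common for reasoning-style models served via OpenRouter); the real node may target different markers.

```typescript
// Strip hidden reasoning blocks and stray formatting so only the final answer
// (ideally a bare LinkedIn URL) remains. The <think> tag is an assumption.
function cleanAgentOutput(raw: string): string {
  return raw
    .replace(/<think>[\s\S]*?<\/think>/gi, "") // drop hidden reasoning blocks
    .replace(/`{3}[\s\S]*?`{3}/g, "")          // drop stray code fences
    .replace(/\*\*/g, "")                      // drop bold markers
    .trim();
}

const example = "<think>Searching SerpAPI...</think>https://www.linkedin.com/in/jane-doe";
console.log(cleanAgentOutput(example)); // -> https://www.linkedin.com/in/jane-doe
```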
by Jason Foster
Fetches Google Calendar events for the day (the 12 hours following execution time) and filters out in-person meetings, Signal meetings, and meetings canceled by Calendly (marked "transparent").
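A minimal filtering sketch under stated assumptions: in-person and Signal meetings are identified by their location text, and Calendly-cancelled events by the Google Calendar "transparency" field. The actual filter node may use different criteria.

```typescript
// Keep only events that are neither transparent, nor Signal calls, nor in-person.
interface CalendarEvent {
  summary: string;
  location?: string;
  transparency?: string; // "transparent" = does not block time
}

function keepEvent(event: CalendarEvent): boolean {
  const location = (event.location ?? "").toLowerCase();
  if (event.transparency === "transparent") return false;     // cancelled via Calendly
  if (location.includes("signal")) return false;              // Signal meetings
  if (location && !location.startsWith("http")) return false; // in-person (physical address)
  return true;
}

const events: CalendarEvent[] = [
  { summary: "Team sync", location: "https://meet.google.com/abc" },
  { summary: "Coffee chat", location: "123 Main St" },
];
console.log(events.filter(keepEvent).map((e) => e.summary)); // -> ["Team sync"]
```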
by Zeinabsadat Mousavi Amin
Overview
When designing user interfaces, toolbar icons often get overlooked, even though their placement and grouping dramatically impact usability and user flow. This workflow leverages Gemini AI to automatically analyze UI screens, classify toolbar icons based on Apple's Human Interface Guidelines (HIG), and suggest optimal placements. By combining AI analysis with structured placement logic, this workflow helps designers build more consistent, efficient, and user-friendly interfaces—without spending hours manually arranging icons.

🚀 Features
- **AI Classification**: Uses Gemini AI to analyze screenshots and classify icons into roles like .primaryAction, .navigation, .confirmationAction, and more.
- **HIG-Based Placement**: Automatically assigns icons to the correct toolbar areas—Leading (Left), Trailing (Right), Center, Bottom, or System-decided.
- **Usage-Aware Reordering**: Reorders icons based on frequency of use so the most relevant actions appear where users expect them.
- **JSON Output**: Delivers structured results for seamless integration into design tools or documentation.

🔧 Setup Instructions
1. Install the Workflow: Import the workflow into your n8n instance.
2. Configure Input: Upload a screenshot of your UI and the set of icons you want to classify and place.
3. Set Up Gemini AI Node: Add your Gemini AI API key in the node's credentials.
4. Run the Workflow: Submit the inputs and let the AI classify and assign placements.
5. Export Results: Copy the JSON output or connect the workflow to your preferred design/documentation tools.

⚙️ How It Works
1. Form Submission – Capture screenshot + icons.
2. Gemini AI Agent – Interprets screen context and classifies each icon.
3. Placement Logic – Maps icons to the correct toolbar areas.
4. Reordering – Adjusts order based on relevance and HIG standards.
5. Structured Output – Produces clean JSON for further use (an illustrative output shape is sketched below).

🎨 Customization
- **Change AI Prompts**: Modify the Gemini AI node prompts to reflect your app's design language.
- **Adjust Placement Rules**: Update logic to follow custom guidelines beyond Apple HIG.
- **Integrate with Design Tools**: Send the JSON output directly to tools like Figma, Sketch, or internal systems.

💡 Why This Matters
- **Consistency**: Ensures toolbar designs always follow Apple's HIG.
- **Efficiency**: Saves designers hours of manual icon placement.
- **Scalability**: Works across multiple screens, flows, and apps.
- **AI-Assisted Design**: Augments designer decisions with structured insights instead of replacing them.
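Illustrative only: one possible shape for the structured JSON this workflow could return. The field names and values here are hypothetical placeholders; the actual schema is whatever the Gemini prompt and placement logic define.

```typescript
// Hypothetical output shape for the icon classification + placement result.
interface IconPlacement {
  icon: string;              // icon name as submitted in the form
  role: string;              // e.g. ".primaryAction", ".navigation", ".confirmationAction"
  placement: "Leading (Left)" | "Trailing (Right)" | "Center" | "Bottom" | "System-decided";
  order: number;             // position within its toolbar area after usage-aware reordering
}

const exampleOutput: IconPlacement[] = [
  { icon: "back-arrow", role: ".navigation", placement: "Leading (Left)", order: 0 },
  { icon: "share", role: ".primaryAction", placement: "Trailing (Right)", order: 0 },
  { icon: "done", role: ".confirmationAction", placement: "Trailing (Right)", order: 1 },
];

console.log(JSON.stringify(exampleOutput, null, 2));
```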
by Davide
This workflow implements a Retrieval-Augmented Generation (RAG) system that integrates Google Drive and Qdrant. This setup creates a powerful, self-updating knowledge base that provides accurate, context-aware answers to user queries.

Key Advantages
- **Automated Knowledge Base Updates**: No manual intervention is required—documents in Google Drive are automatically synchronized with Qdrant.
- **Efficient Search and Retrieval**: Vector embeddings enable fast and precise retrieval of relevant information.
- **Scalable and Flexible**: Works with multiple documents and supports continuous growth of your dataset.
- **Seamless AI Integration**: Combines OpenAI embeddings for vectorization and Google Gemini for high-quality natural language answers.
- **Metadata-Enhanced Storage**: Each document stores metadata (file ID and name), making it easy to manage and track document versions.
- **End-to-End RAG Pipeline**: From document ingestion to AI-powered Q&A, everything is handled inside one n8n workflow.

How It Works
This workflow automatically processes, stores, and retrieves document information for AI-powered question answering:

1. Document Processing & Vectorization: The system monitors a specified Google Drive folder for new or updated files. When a file is added or modified, it is downloaded and split into manageable chunks using a Recursive Character Text Splitter. Each chunk is converted into vector embeddings using OpenAI's embedding model. These vectors, along with metadata (file ID, file name), are stored in a Qdrant vector database.
2. Automatic Updates: The workflow deletes old vectors associated with an updated file before inserting the new ones, ensuring the knowledge base remains current.
3. Query Handling & Response Generation: When a user sends a chat message (via a chat trigger), the system retrieves the most relevant document chunks from Qdrant based on the query's semantic similarity, then uses a Google Gemini language model to generate a context-aware answer grounded in the retrieved documents. This provides accurate, source-based responses instead of relying solely on the AI's internal knowledge.
4. Initial Setup & Maintenance: The workflow can be triggered manually to create the Qdrant collection or clear all existing data. It processes all existing files in the Drive folder during initial setup, populating the vector store.

Set Up Steps

STEP 1: Create Qdrant Collection
- Replace QDRANTURL in the "Create collection" and "Clear collection" nodes with your Qdrant instance URL (e.g., http://your-qdrant-host:6333).
- Replace COLLECTION with your desired collection name.
- Ensure the Qdrant API credentials are correctly set in the respective HTTP Request nodes.
- A sketch of the underlying HTTP calls follows this description.

STEP 2: Configure Google Drive Access
- Set up OAuth credentials for Google Drive to allow the workflow to read files from a specific folder and download them for processing.
- Update the Folder ID in the "Search files" and "Update?" trigger nodes to point to your target Google Drive folder.

STEP 3: Set Up AI Models
- Configure the OpenAI API credentials in the Embeddings nodes for generating text embeddings.
- Configure the Google Gemini (PaLM) API credentials in the Google Gemini Chat Model node for generating answers.

STEP 4: Configure Metadata
- The system automatically attaches metadata (file_id, file_name) to each document chunk. This is set in the Default Data Loader nodes. This metadata is crucial for identifying the source of information and for the update mechanism.

STEP 5: Test the RAG System
- The workflow includes a chat trigger ("When chat message received") for testing. Send a query to test the retrieval and answer generation process.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
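As referenced in STEP 1, here is a minimal sketch of the Qdrant HTTP calls behind collection creation and the per-file cleanup used by the update mechanism. The vector size (1536 fits OpenAI text-embedding-3-small; text-embedding-3-large uses 3072) and the metadata payload path are assumptions to match to your embedding model and vector store configuration.

```typescript
// Qdrant REST calls: create the collection, then delete a file's old points by
// filtering on the file_id metadata attached by the Default Data Loader.
const QDRANT_URL = "http://your-qdrant-host:6333";
const COLLECTION = "drive_rag";
const headers = { "api-key": "YOUR_QDRANT_API_KEY", "Content-Type": "application/json" };

// "Create collection" node equivalent
async function createCollection(): Promise<void> {
  await fetch(`${QDRANT_URL}/collections/${COLLECTION}`, {
    method: "PUT",
    headers,
    body: JSON.stringify({ vectors: { size: 1536, distance: "Cosine" } }),
  });
}

// Update mechanism: drop the old vectors of one Drive file before re-inserting.
async function deleteFileVectors(fileId: string): Promise<void> {
  await fetch(`${QDRANT_URL}/collections/${COLLECTION}/points/delete`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      filter: { must: [{ key: "metadata.file_id", match: { value: fileId } }] },
    }),
  });
}

createCollection().then(() => deleteFileVectors("drive-file-id")).catch(console.error);
```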
by jun shou
🔧 How It Works
This n8n workflow leverages an agentic AI solution, where multiple AI agents collaborate to process and generate tailored job application assets.

✅ Features
- Agent-based AI Coordination: Utilizes multiple AI agents working in sequence to analyze the job description and generate results.
- Outputs:
  - A customized cover letter
  - An optimized resume (CV)
  - A list of interview preparation questions
- Automated Delivery: The final outputs are created as Google Docs and stored in your connected Google Drive folder.

🧾 Input Requirement
Simply provide a LinkedIn job URL as the input. Example: https://www.linkedin.com/jobs/view/4184156975

⚙️ Setup Instructions
To deploy and run this workflow, you'll need to configure the following credentials:
- Google Cloud Platform (GCP): Enable the Google Drive API and set up OAuth credentials for n8n integration.
- OpenAI API Key: Needed for generating the content (cover letter, CV, and questions); a minimal generation sketch follows this description.
- BrightData (formerly Luminati): Used to scrape and extract job details from the LinkedIn job link.

⚠️ Setup requires moderate technical familiarity with APIs and OAuth. A step-by-step configuration guide is recommended for beginners.
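A minimal sketch of the content-generation step: one OpenAI call that turns the scraped job description into a cover letter draft. The model name, prompt, and scraped text are placeholders; the workflow's agents chain several such calls for the CV and interview questions.

```typescript
// Draft a cover letter from a scraped job description via the OpenAI chat API.
async function draftCoverLetter(jobDescription: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "You write concise, tailored cover letters." },
        { role: "user", content: `Write a cover letter for this job posting:\n\n${jobDescription}` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

draftCoverLetter("Senior Data Engineer at Example Corp ...").then(console.log);
```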
by Mohamed Abdelwahab
1. Overview
The IngestionDocs workflow is a fully automated **document ingestion and knowledge management system** built with **n8n**. Its purpose is to continuously ingest organizational documents from Google Drive, transform them into vector embeddings using OpenAI, store them in Pinecone, and make them searchable and retrievable through an AI-powered Q&A interface. This ensures that employees always have access to the most up-to-date knowledge base without requiring manual intervention.

2. Key Objectives
- **Automated Ingestion** → Seamlessly process new and updated documents from Google Drive.
- **Change Detection** → Track and differentiate between new, updated, and previously processed documents.
- **Knowledge Base Construction** → Convert documents into embeddings for semantic search.
- **AI-Powered Assistance** → Provide an intelligent Q&A system for employees to query manuals.
- **Scalable & Maintainable** → Modular design using n8n, LangChain, and Pinecone.

3. Workflow Breakdown

A. Document Monitoring and Retrieval
- The workflow begins with two Google Drive triggers:
  - File Created Trigger → Fires when a new document is uploaded.
  - File Updated Trigger → Fires when an existing document is modified.
- A search operation lists the files in the designated Google Drive folder.
- Non-downloadable items (e.g., subfolders) are filtered out.
- For valid files: the file is downloaded and a SHA256 hash is generated to uniquely identify its content (a hash-generation sketch follows this description).

B. Record Management (Google Sheets Integration)
To keep track of ingestion states, the workflow uses a **Google Sheets-based Record Manager**. Each file entry contains:
- **Id** (Google Drive file ID)
- **Name** (file name)
- **hashId** (SHA256 checksum)

The workflow compares the current file's hash with the stored one:
- **New Document** → File not found in records → Inserted into the Record Manager.
- **Already Processed** → File exists and hash matches → Skipped.
- **Updated Document** → File exists but hash differs → Record is updated.

This guarantees that only new or modified content is processed, avoiding duplication.

C. Document Processing and Vectorization
Once a document is marked as new or updated:
- The Default Data Loader extracts its content (binary files supported).
- Pages are split into individual chunks, and metadata such as file ID and name are attached.
- A Recursive Character Text Splitter divides the content into manageable segments with overlap.
- OpenAI Embeddings (text-embedding-3-large) transform each text chunk into a semantic vector.
- The Pinecone Vector Store stores these vectors in the configured index:
  - For new documents, embeddings are inserted into a namespace based on the file name.
  - For updated documents, the namespace is cleared first, then re-ingested with fresh embeddings.

This process builds a scalable and queryable knowledge base.

D. Knowledge Base Q&A Interface
The workflow also provides an **interactive form-based user interface**:
- **Form Trigger** → Collects employee questions.
- **LangChain AI Agent** → Receives the question, retrieves relevant context from Pinecone using vector similarity search, and processes the response using the OpenAI Chat Model (gpt-4.1-mini).
- **Answer Formatting** → Responses are returned in HTML format for readability; a custom CSS theme ensures a modern, user-friendly design; answers may include references to page numbers when available.

This creates a self-service knowledge base assistant that employees can query in natural language.

4. Technologies Used
- **n8n** → Orchestration of the entire workflow.
- **Google Drive API** → File monitoring, listing, and downloading.
- **Google Sheets API** → Record manager for tracking file states.
- **OpenAI API** → text-embedding-3-large for semantic vector creation; gpt-4.1-mini for conversational Q&A.
- **Pinecone** → Vector database for embedding storage and retrieval.
- **LangChain** → Document loaders, text splitters, vector store connectors, and agent logic.
- **Crypto (SHA256)** → File hash generation for change detection.
- **Form Trigger + Form Node** → Employee-facing Q&A submission and answer display.
- **Custom CSS** → Provides a modern, responsive, styled UI for the knowledge base.

5. End-to-End Data Flow
1. Employee uploads or updates a document → Google Drive detects the change.
2. Workflow downloads and hashes the file → Ensures uniqueness and detects modifications.
3. Record Manager (Google Sheets) → Decides whether to skip, insert, or update the record.
4. Document Processing → Splitting + Embedding + Storing into Pinecone.
5. Knowledge Base Updated → The latest version of documents is indexed.
6. Employee asks a question via the web form.
7. AI Agent retrieves embeddings from Pinecone + uses gpt-4.1-mini → Generates a contextual answer.
8. Answer displayed in styled HTML → Delivered back to the employee through the form interface.

6. Benefits
- **Always Up-to-Date** → Automatically syncs documents when uploaded or changed.
- **No Duplicates** → Smart hashing ensures only relevant updates are reprocessed.
- **Searchable Knowledge Base** → Employees can query documents semantically, not just by keywords.
- **Enhanced Productivity** → Answers are immediate, reducing time spent browsing manuals.
- **Scalable** → New documents and users can be added without workflow redesign.

✅ In summary, IngestionDocs is a **robust AI-driven document ingestion and retrieval system** that integrates **Google Drive, Google Sheets, OpenAI, and Pinecone** within **n8n**. It continuously builds and maintains a knowledge base of manuals while offering employees an intelligent, user-friendly Q&A assistant for fast and accurate knowledge retrieval.
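A minimal sketch of the change-detection step described in section 3.A: compute the SHA256 checksum of the downloaded file and compare it with the hashId stored in the Google Sheets record manager. The file path and stored hash below are placeholders.

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Compute the SHA256 checksum of a downloaded file.
function sha256OfFile(path: string): string {
  const bytes = readFileSync(path);
  return createHash("sha256").update(bytes).digest("hex");
}

// Decide what the record manager should do with this file.
function classify(currentHash: string, storedHash?: string): "new" | "skip" | "update" {
  if (storedHash === undefined) return "new";    // not in the record manager yet
  if (storedHash === currentHash) return "skip"; // unchanged, already processed
  return "update";                               // content changed, re-ingest
}

const hash = sha256OfFile("./downloaded/manual.pdf");
console.log(classify(hash, "a3f1...previously-stored-hash"));
```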
by Robert Breen
This n8n workflow template automatically monitors your Google Sheets for new entries and uses AI to generate detailed descriptions for each topic. Perfect for content creators, researchers, project managers, or anyone who needs automatic content generation based on simple topic inputs.

What This Workflow Does
This automated workflow:
1. Monitors a Google Sheet for new rows added to the "data" tab
2. Takes the topic from each new row
3. Uses OpenAI GPT to generate a detailed description of that topic
4. Updates the same row with the AI-generated description
5. Logs all activity in a separate "actions" tab for tracking

The workflow runs every minute, checking for new entries and processing them automatically.

Tools & Services Used
- **n8n** - Workflow automation platform
- **OpenAI API** - AI-powered description generation (GPT-4.1-mini)
- **Google Sheets** - Data input, storage, and activity logging
- **Google Sheets Trigger** - Real-time monitoring for new rows

Prerequisites
Before implementing this workflow, you'll need:
- n8n Instance - Self-hosted or cloud version
- OpenAI API Account - For AI description generation
- Google Account - For Google Sheets integration
- Google Sheets API Access - For both reading and writing to sheets

Step-by-Step Setup Instructions

Step 1: Set Up OpenAI API Access
1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy and securely store your API key

Step 2: Set Up Your Google Sheets
Option 1: Use Our Pre-Made Template (Recommended)
1. Copy our template: AI Description Generator Template
2. Click "File" → "Make a copy" to create your own version
3. Rename it as desired (e.g., "My AI Content Generator")
4. Note your new sheet's URL - you'll need this for the workflow

Option 2: Create From Scratch
1. Go to Google Sheets and create a new spreadsheet
2. Set up the main "data" tab: rename "Sheet1" to "data" and set up column headers in row 1 (A1: topic, B1: description)
3. Create an "actions" tab: add a new sheet named "actions" with column header A1: Update
4. Copy your sheet's URL

Step 3: Configure Google API Access
Enable Google Sheets API
1. Go to Google Cloud Console
2. Create a new project or select an existing one
3. Enable the "Google Sheets API"
4. Enable the "Google Drive API"

Create Service Account (for n8n)
1. In Google Cloud Console, go to "IAM & Admin" → "Service Accounts"
2. Create a new service account
3. Download the JSON credentials file
4. Share your Google Sheet with the service account email address

Step 4: Import and Configure the n8n Workflow
Import the Workflow
1. Copy the workflow JSON from the template
2. In your n8n instance, go to Workflows → Import from JSON
3. Paste the JSON and import

Configure OpenAI Credentials
1. Click on the "OpenAI Chat Model" node
2. Set up credentials using your OpenAI API key
3. Test the connection to ensure it works

Configure Google Sheets Integration
For the Trigger Node:
1. Click on the "Row added - Google Sheet" node
2. Set up Google Sheets Trigger OAuth2 credentials
3. Select your spreadsheet from the dropdown and choose the "data" sheet
4. Set polling to "Every Minute" (already configured)

For the Update Node:
1. Click on the "Update row in sheet" node
2. Use the same Google Sheets credentials
3. Select your spreadsheet and the "data" sheet
4. Verify the column mapping (topic → topic, description → AI output)

For the Actions Log Node:
1. Click on the "Append row in sheet" node
2. Use the same Google Sheets credentials
3. Select your spreadsheet and the "actions" sheet

Step 5: Customize the AI Description Generator
The workflow uses a simple prompt that can be customized:
1. Click on the "Description Writer" node
2. Modify the system message to change the AI behavior: write a description of the topic. output like this. { "description": "description" }

A parsing sketch for this structured output follows at the end of this description.

Need Help with Implementation?
For professional setup, customization, or troubleshooting of this workflow, contact:
Robert - Ynteractive Solutions
- **Email**: robert@ynteractive.com
- **Website**: www.ynteractive.com
- **LinkedIn**: linkedin.com/in/robert-breen-29429625/

Specializing in AI-powered workflow automation, business process optimization, and custom integration solutions.
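A minimal sketch of handling the Description Writer output before writing it back to the sheet: the prompt asks the model for { "description": "..." }, so the JSON can be parsed defensively with a plain-text fallback. This is an illustrative helper, not the workflow's exact node logic.

```typescript
// Parse the model's { "description": "..." } output; fall back to the raw text.
function extractDescription(modelOutput: string): string {
  try {
    const parsed = JSON.parse(modelOutput);
    if (typeof parsed.description === "string") return parsed.description;
  } catch {
    // Model occasionally returns plain text or wraps JSON in extra prose.
  }
  return modelOutput.trim();
}

console.log(extractDescription('{ "description": "A short overview of solar energy." }'));
console.log(extractDescription("A short overview of solar energy."));
```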
by Risper
🤖 AI-Powered Appointment Scheduling with Google Calendar & Sheets Virtual Receptionist

Automate customer conversations with an AI-powered virtual receptionist. This workflow can chat naturally with clients, answer general business questions (like services, location, and hours), check availability in Google Calendar, book appointments, and save customer details in Google Sheets. Fully customizable for any business type — salons, clinics, agencies, consultants, and more.

📖 How It Works
1. Welcome the customer – when the customer says hi, the AI greets warmly: "Hello! I'm [AI name] from [Business name]."
2. Answer general questions – provides instant replies about services, pricing, business location, hours, and availability.
3. Understand their need – identifies the service requested and preferred time.
4. Check availability – queries Google Calendar for open slots (a free/busy sketch appears at the end of this listing).
5. Gather customer details – collects name, phone, and email (optional).
6. Confirm booking – creates the appointment in Google Calendar.
7. Save records – logs booking and customer info into Google Sheets.

⚙️ Setup Steps (Quick)
1. Connect your Google Calendar and Google Sheets accounts.
2. Add your business details (name, type, services, hours, policies) to the Business Info Sheet.
3. Configure your OpenAI API key (or use n8n free credits).
4. Optional: Connect Twilio WhatsApp for direct chat responses.

⏱ Setup usually takes 15–20 minutes if accounts are ready.

🏢 Example Business Info (Google Sheet)

| Field | Example value |
|---|---|
| business_id | 001 |
| business_name | Luxe Hair Studio |
| business_type | Hair & Beauty Salon |
| location | 123 Main Street, New York, NY 10001 |
| phone | 1 (XXX) XXX-XXXX |
| email | yourbusiness@email.com |
| services | Haircut & Styling (60 minutes, $3500…), Hair Coloring (120 minutes, $8000…), … |
| calendar_id | calendar-id-here |
| timezone | GMT -3 |
| currency | USD |
| working_hours | Mon–Sat: 9:00 AM – 7:00 PM, Sun: Closed |
| ai_name | bella |
| ai_personality | Friendly, Stylish, Professional |
| ai_role | Manages bookings, answers FAQs, recommends services, gives beauty tips, sends reminders, etc. |
| emergency_available | no |
| booking_advance_days | 10 |
| cancellation_hours | 24 |

✅ Purpose: Supplies context (services, pricing, hours, AI personality, booking policies).
💡 The AI uses this sheet to answer general business questions (e.g., "Where are you located?", "Do you do hair colouring?", "What are your working hours?").

📊 Appointments Sheet Example

| patient_number | patient_name | event_id | summary | services |
|---|---|---|---|---|
| 001 | Sarah Lee | evt-10293 | Appointment with Sarah Lee – Haircut & Styling | Haircut & Styling |
| 002 | John Smith | evt-10294 | Appointment with John Smith – Highlights | Highlights |

✅ Purpose: Logs confirmed bookings with service details and links back to Google Calendar.
💡 Features
✅ AI receptionist with conversation memory
✅ Answers FAQs – location, services, hours, pricing
✅ Google Calendar integration for real-time availability
✅ Google Sheets integration for customer records & reporting
✅ Customizable AI name, role, and personality

🔑 Who It's For
- **Salons & Spas** – Manage bookings and FAQs
- **Clinics & Health Services** – Automated scheduling + patient info
- **Agencies & Consultants** – Answer inquiries + schedule meetings
- **Any Service Business** – Save time, improve customer experience
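A minimal sketch of the availability check behind the "Check availability" step: ask the Google Calendar free/busy endpoint which slots are taken on the business calendar. The OAuth token, calendar ID, and time window are placeholders; in n8n the Google Calendar node's credentials handle authentication.

```typescript
// Query Google Calendar's free/busy endpoint for a single calendar.
async function getBusySlots(calendarId: string, accessToken: string) {
  const res = await fetch("https://www.googleapis.com/calendar/v3/freeBusy", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      timeMin: "2025-01-15T09:00:00-03:00",
      timeMax: "2025-01-15T19:00:00-03:00",
      items: [{ id: calendarId }],
    }),
  });
  const data = await res.json();
  return data.calendars[calendarId].busy; // [{ start, end }, ...]
}

getBusySlots("calendar-id-here", "ya29.placeholder-token").then(console.log);
```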
by Robert Breen
This n8n workflow template creates an efficient data analysis system that uses Google Gemini AI to interpret user questions about spreadsheet data and processes them through a specialized sub-workflow for optimized token usage and faster responses.

What This Workflow Does
- **Smart Query Parsing**: Uses Gemini AI to understand natural language questions about your data
- **Efficient Processing**: Routes calculations through a dedicated sub-workflow to minimize token consumption
- **Structured Output**: Automatically identifies the column, aggregation type, and grouping levels from user queries
- **Multiple Aggregation Types**: Supports sum, average, count, count distinct, min, and max operations
- **Flexible Grouping**: Can aggregate data by single or multiple dimensions
- **Token Optimization**: Processes large datasets without overwhelming AI context limits

Tools Used
- **Google Gemini Chat Model** - Natural language query understanding and response formatting
- **Google Sheets Tool** - Data access and column metadata extraction
- **Execute Workflow** - Sub-workflow processing for data calculations
- **Structured Output Parser** - Converts AI responses to actionable parameters
- **Memory Buffer Window** - Basic conversation context management
- **Switch Node** - Routes to the appropriate aggregation method
- **Summarize Nodes** - Perform various data aggregations

📋 MAIN WORKFLOW - Query Parser

What This Workflow Does
The main workflow receives natural language questions from users and converts them into structured parameters that the sub-workflow can process. It uses Google Gemini AI to understand the intent and extract the necessary information (an illustrative parameter shape is sketched at the end of this description).

Prerequisites for Main Workflow
- Google Cloud Platform account with Gemini API access
- Google account with access to Google Sheets
- n8n instance (cloud or self-hosted)

Main Workflow Setup Instructions

1. Import the Main Workflow
- Copy the main workflow JSON provided
- In your n8n instance, go to Workflows → Import from JSON
- Paste the JSON and click Import
- Save with the name "Gemini Data Query Parser"

2. Set Up Google Gemini Connection
- Go to Google AI Studio and sign in with your Google account
- Go to the Get API Key section
- Create a new API key or use an existing one, and copy it

Configure in n8n:
- Click on the Google Gemini Chat Model node
- Click Create New Credential and select Google PaLM API
- Paste your API key and save the credential

3. Set Up Google Sheets Connection for Main Workflow
- Go to Google Cloud Console and create a new project or select an existing one
- Enable the Google Sheets API
- Create OAuth 2.0 Client ID credentials
- In n8n, click on the Get Column Info node, create a Google Sheets OAuth2 API credential, and complete the OAuth flow

4. Configure Your Data Source
Option A: Use Sample Data
- The workflow is pre-configured for: Sample Marketing Data
- Make a copy to your Google Drive

Option B: Use Your Own Sheet
- Update the Get Column Info node with your Sheet ID
- Ensure you have a "Columns" sheet for metadata
- Update sheet references as needed

5. Set Up Workflow Trigger
- Configure how you want to trigger this workflow (webhook, manual, etc.)
- The workflow will output structured JSON for the sub-workflow

⚙️ SUB-WORKFLOW - Data Processor

What This Workflow Does
The sub-workflow receives structured parameters from the main workflow and performs the actual data calculations. It handles fetching data, routing to appropriate aggregation methods, and formatting results.

Sub-Workflow Setup Instructions

1. Import the Sub-Workflow
- Create a new workflow in n8n
- Copy the sub-workflow JSON (embedded in the Execute Workflow node) and import it as a separate workflow
- Save with the name "Data Processing Sub-Workflow"

2. Configure Google Sheets Connection for Sub-Workflow
- Apply the same Google Sheets OAuth2 credential you created for the main workflow
- Update the Get Data node with your Sheet ID
- Ensure it points to your data sheet (e.g., "Data" sheet)

3. Configure Google Gemini for Output Formatting
- Apply the same Gemini API credential to the Google Gemini Chat Model1 node
- This handles final result formatting

4. Link Workflows Together
- In the main workflow, find the Execute Workflow - Summarize Data node
- Update the workflow reference to point to your sub-workflow
- Ensure the sub-workflow is set to accept execution from other workflows

Sub-Workflow Components
- **When Executed by Another Workflow**: Trigger that receives parameters
- **Get Data**: Fetches all data from Google Sheets
- **Type of Aggregation**: Switch node that routes based on aggregation type
- **Multiple Summarize Nodes**: Handle different aggregation types (sum, avg, count, etc.)
- **Bring All Data Together**: Combines results from different aggregation paths
- **Write into Table Output**: Formats final results using Gemini AI

Example Usage
Once both workflows are set up, you can ask questions like:

Overall Metrics:
- "Show total Spend ($)"
- "Show total Clicks"
- "Show average Conversions"

Single Dimension:
- "Show total Spend ($) by Channel"
- "Show total Clicks by Campaign"

Two Dimensions:
- "Show total Spend ($) by Channel and Campaign"
- "Show average Clicks by Channel and Campaign"

Data Flow Between Workflows
1. Main Workflow: User question → Gemini AI → Structured JSON output
2. Sub-Workflow: Receives JSON → Fetches data → Performs calculations → Returns formatted table

Contact Information
For support, customization, or questions about this template:
- **Email**: robert@ynteractive.com
- **LinkedIn**: Robert Breen

Need help implementing these workflows, want to remove limitations, or require custom modifications? Reach out for professional n8n automation services and AI integration support.
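Illustrative only: one possible shape for the structured JSON the main workflow could hand to the sub-workflow, plus a tiny sum-by-group aggregation of the kind the Summarize nodes perform. The field names are hypothetical placeholders; the real schema is whatever the Structured Output Parser defines.

```typescript
// Hypothetical parameter shape plus a small grouped-sum example.
interface QueryParams {
  column: string;                         // e.g. "Spend ($)"
  aggregation: "sum" | "average" | "count" | "countDistinct" | "min" | "max";
  groupBy: string[];                      // e.g. ["Channel"] or ["Channel", "Campaign"]
}

const params: QueryParams = { column: "Spend ($)", aggregation: "sum", groupBy: ["Channel"] };

const rows = [
  { Channel: "Search", Campaign: "Brand", "Spend ($)": 120 },
  { Channel: "Search", Campaign: "Generic", "Spend ($)": 80 },
  { Channel: "Social", Campaign: "Retargeting", "Spend ($)": 50 },
];

// Group rows by the requested dimensions and sum the requested column.
const totals = new Map<string, number>();
for (const row of rows) {
  const key = params.groupBy.map((g) => String(row[g as keyof typeof row])).join(" / ");
  totals.set(key, (totals.get(key) ?? 0) + Number(row[params.column as keyof typeof row]));
}
console.log(Object.fromEntries(totals)); // -> { "Search": 200, "Social": 50 }
```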
by Robert Breen
This n8n workflow template creates an intelligent data analysis system that converts natural language questions into Google Sheets SQL queries using OpenAI's GPT-4o model. The system generates proper Google Sheets query URLs and executes them via HTTP requests for efficient data retrieval.

What This Workflow Does
- **Natural Language to SQL**: Converts user questions into Google Sheets SQL syntax
- **Direct HTTP Queries**: Bypasses API limits by using Google Sheets' built-in query functionality (an example query URL is sketched at the end of this description)
- **Column Letter Mapping**: Automatically maps column names to their corresponding letters (A, B, C, etc.)
- **Structured Query Generation**: Outputs properly formatted Google Sheets query URLs
- **Real-time Data Access**: Retrieves live data directly from Google Sheets
- **Memory Management**: Maintains conversation context for follow-up questions

Tools Used
- **OpenAI Chat Model (GPT-4o)** - SQL query generation and natural language understanding
- **OpenAI Chat Model (GPT-4.1 Mini)** - Result formatting and table output
- **Google Sheets Tool** - Column metadata extraction and schema understanding
- **HTTP Request Node** - Direct data retrieval via the Google Sheets query API
- **Structured Output Parser** - Formats AI responses into executable queries
- **Memory Buffer Window** - Conversation history management
- **Chat Trigger** - Webhook-based conversation interface

Step-by-Step Setup Instructions

1. Prerequisites
Before starting, ensure you have:
- An n8n instance (cloud or self-hosted)
- An OpenAI account with API access and billing set up
- A Google account with access to Google Sheets
- A target Google Sheet that is publicly accessible or shareable via link

2. Import the Workflow
- Copy the workflow JSON provided
- In your n8n instance, go to Workflows → Import from JSON
- Paste the JSON and click Import
- Save with a descriptive name like "Google Sheets SQL Query Generator"

3. Set Up OpenAI Connections
Get API Key:
- Go to OpenAI Platform and sign in or create an account
- Navigate to the API Keys section and click Create new secret key
- Copy the generated API key
- Important: Add billing information and credits to your OpenAI account

Configure Both OpenAI Nodes:
- OpenAI Chat Model1 (GPT-4o): Click on the node, click Create New Credential, select OpenAI API, paste your API key, and save the credential
- OpenAI Chat Model2 (GPT-4.1 Mini): Apply the same OpenAI API credential; this handles result formatting

4. Set Up Google Sheets Connection
Create OAuth2 Credentials:
- Go to Google Cloud Console and create a new project or select an existing one
- Enable the Google Sheets API
- Go to Credentials → Create Credentials → OAuth 2.0 Client IDs
- Set the application type to Web Application
- Add authorized redirect URIs (get this from the n8n credentials setup)
- Copy the Client ID and Client Secret

Configure in n8n:
- Click on the Get Column Info2 node and click Create New Credential
- Select Google Sheets OAuth2 API and enter your Client ID and Client Secret
- Complete the OAuth flow by clicking Connect my account and authorize the required permissions

5. Prepare Your Google Sheet
Option A: Use the Sample Data Sheet
- Access the pre-configured sheet: Sample Marketing Data
- Make a copy to your Google Drive
- Critical: Set sharing to "Anyone with the link can view" for HTTP access
- Copy the Sheet ID from the URL
- Update the Get Column Info2 node with your Sheet ID and column metadata sheet

6. Configure Sheet References
Get Column Info2 Node:
- Set Document ID to your Google Sheet ID
- Set Sheet Name to your columns metadata sheet (e.g., "Columns")
- This provides the AI with column letter mappings

HTTP Request Node:
- No configuration needed - it uses dynamic URLs from the AI agent
- Ensure your sheet has proper sharing permissions

7. Update System Prompt (If Using a Custom Sheet)
If using your own Google Sheet, update the system prompt in the AI Agent3 node:
- Replace the URL in the system message with your Google Sheet URL
- Update the GID (sheet ID) to match your data sheet
- Keep the same query structure format

Contact Information
Robert - Ynteractive
For support, customization, or questions about this template:
- **Email**: robert@ynteractive.com
- **LinkedIn**: Robert Breen

Need help implementing this workflow, want to add security features, or require custom modifications? Reach out for professional n8n automation services and AI integration support.
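A minimal sketch of the kind of query URL the AI agent can build for the HTTP Request node, using the Google Visualization (gviz) query endpoint. The Sheet ID, GID, and column letters are placeholders; the real letter mapping comes from the "Columns" metadata sheet, and "tqx=out:csv" requests CSV instead of the default JSONP wrapper.

```typescript
// Build a Google Sheets gviz query URL from a column-letter mapping.
const SHEET_ID = "your-google-sheet-id";
const GID = "0"; // gid of the data sheet

// Column letter mapping the agent is given, e.g. from the Columns metadata sheet.
const columns: Record<string, string> = { Channel: "B", "Spend ($)": "D" };

function buildQueryUrl(tq: string): string {
  return (
    `https://docs.google.com/spreadsheets/d/${SHEET_ID}/gviz/tq` +
    `?gid=${GID}&tqx=out:csv&tq=${encodeURIComponent(tq)}`
  );
}

// "Show total Spend ($) by Channel" -> SELECT B, SUM(D) GROUP BY B
const url = buildQueryUrl(
  `SELECT ${columns["Channel"]}, SUM(${columns["Spend ($)"]}) GROUP BY ${columns["Channel"]}`
);

fetch(url).then((r) => r.text()).then(console.log).catch(console.error);
```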