by Oneclick AI Squad
Simplify event planning with this automated n8n workflow. Triggered by incoming requests, it fetches speaker and audience data from Google Sheets, analyzes profiles and preferences, and generates optimized session recommendations. The workflow delivers formatted voice responses and updates tracking data, ensuring organizers receive real-time, tailored suggestions. 🎙️📊

**Key Features**
- Real-time analysis of speaker and audience data for personalized recommendations.
- Generates optimized session lineups based on profiles and preferences.
- Delivers responses via voice agent for a seamless experience.
- Maintains a detailed recommendation history in Google Sheets.

**Workflow Process**
- The **Webhook Trigger** node initiates the workflow upon receiving voice agent or external system requests.
- **Parse Voice Request** processes incoming voice data into actionable parameters.
- **Fetch Database** retrieves speaker ratings, past sessions, and audience ratings from Google Sheets.
- **Calculate & Analyze** combines voice request data with speaker profiles and audience insights for comprehensive matching.
- **AI Optimization Engine** analyzes speaker-audience fit and recommends optimal session lineups.
- **Format Recommendations** structures the recommendations for the voice agent response.
- **Voice Agent Response** returns formatted recommendations to the user with a natural-language summary and structured data.
- **Update Tracking Sheet** saves recommendation history and analytics to Google Sheets.

If errors occur, the **Check for Errors** node branches to:
- **Format Error Response** prepares an error message.
- **Send Error Response** delivers the error notification.

**Setup Instructions**
- Import the workflow into n8n and configure Google Sheets OAuth2 for data access.
- Set up the Webhook Trigger with your voice agent or external system's API credentials.
- Configure the AI Optimization Engine node with a suitable language model (e.g., Anthropic Chat Model).
- Test the workflow by sending sample voice requests and verifying recommendations.
- Adjust analysis parameters as needed for specific event requirements.

**Prerequisites**
- Google Sheets OAuth2 credentials
- Voice agent API or integration service
- AI/LLM service for optimization (e.g., Anthropic)
- Structured speaker and audience data in a Google Sheet

**Google Sheet Structure**
Create a sheet with the columns: Speaker Name, Rating, Past Sessions, Audience Rating, Preferences, Updated At.

**Modification Options**
- Customize the Calculate & Analyze node to include additional matching criteria (e.g., topic expertise).
- Adjust the AI Optimization Engine to prioritize specific session formats or durations.
- Modify voice response templates in the Voice Agent Response node with branded phrasing.
- Integrate with event management tools (e.g., Eventbrite) for live data feeds.
- Set custom error handling rules in the Check for Errors node.

Discover more workflows – Get in touch with us
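The Parse Voice Request step could be implemented as a small Code node. This is a hypothetical sketch only: the field names (`transcript`, `eventDate`, `audienceSize`) and the keyword heuristic are assumptions, not taken from the template.

```javascript
// Hypothetical sketch of a "Parse Voice Request" Code node.
// Field names and the "about <topic>" heuristic are illustrative assumptions.
function parseVoiceRequest(body) {
  const transcript = (body.transcript || '').trim();
  // Naive keyword extraction; a real node could call an NLU service instead.
  const topicMatch = transcript.match(/about ([a-z ]+)/i);
  return {
    topic: topicMatch ? topicMatch[1].trim() : null,
    eventDate: body.eventDate || null,
    audienceSize: Number(body.audienceSize) || 0,
  };
}

const params = parseVoiceRequest({
  transcript: 'Recommend speakers about machine learning',
  audienceSize: '120',
});
```

The downstream Calculate & Analyze node would then receive a clean parameter object instead of raw voice payload.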
by Jason Foster
Gets Google Calendar events for the day (12 hours from execution time), and filters out in-person meetings, Signal meetings, and meetings canceled by Calendly ("transparent").
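The filtering described above can be sketched in a Code node. The `transparency` field is part of the Google Calendar API (Calendly cancellations show up as `"transparent"`); the in-person and Signal heuristics below are illustrative guesses, not the template's actual logic.

```javascript
// Sketch of the event filter. Only the transparency check is a documented
// Calendar API behavior; the location heuristics are assumptions.
function keepEvent(event) {
  if (event.transparency === 'transparent') return false;  // canceled by Calendly
  const location = (event.location || '').toLowerCase();
  if (location.includes('signal')) return false;           // Signal meetings
  const hasVideoLink = /https?:\/\/(zoom|meet|teams)\./.test(location);
  if (location && !hasVideoLink) return false;             // physical address: in-person
  return true;
}

const events = [
  { summary: 'Standup', location: 'https://zoom.us/j/123' },
  { summary: 'Coffee', location: 'Blue Bottle, 5th Ave' },
  { summary: 'Canceled', transparency: 'transparent' },
];
const kept = events.filter(keepEvent);
```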
by Easy8.ai
Auto-Routing Nicereply Feedback to Microsoft Teams by Team and Sentiment

Automatically collect client feedback from Nicereply, analyze sentiment, and send it to the right Microsoft Teams channels — smartly split by team, tone, and comment presence.

**About this Workflow**
This workflow pulls customer satisfaction feedback from Nicereply, filters out irrelevant or test entries, and evaluates each item based on the team it belongs to and the sentiment of the response (Great, OK, Bad). It automatically routes the feedback to Microsoft Teams — either as a summary in a channel or a direct message — depending on the team's role and whether a comment is included. Perfect for support, delivery, consulting, and documentation teams that want to stay in the loop with customer sentiment. It ensures that positive feedback reaches the teams who earned it, and that negative feedback is escalated quickly to leads or management.

**Use Cases**
- Send daily customer feedback directly to the responsible teams in MS Teams
- Automatically escalate negative responses to leads or managers
- Avoid clutter by filtering out unimportant or test entries
- Keep internal teams motivated by sharing only the most relevant praise

**How it works**
- **Schedule Trigger**: Starts the workflow on a set schedule (e.g., daily at 7:00 AM)
- **Get Feedback**: Pulls customer feedback from Nicereply using the survey ID
- **Split Out**: Processes each feedback entry separately
- **Edit Feedbacks**: Renames or adjusts fields for easier filtering and readability
- **Change Survey ID**: Maps internal survey identifiers for accurate team routing (the survey ID can be found in Nicereply: Settings > Surveys > [Survey] > ID)
- **Filter**: Excludes old responses
- **Code Node**: Tags unknown clients
- **Change Happiness Value**: Converts the score into "Great", "OK", or "Bad" for routing logic
- **Without Comment**: Checks whether the feedback includes a text comment
- **Send Feedback Without Comment**: Routes simple feedback (no comment) to MS Teams based on team + score
- **Send Feedback With Comment**: Routes full feedback with comment to MS Teams for closer review

**Feedback Routing Logic**
Each team receives only what's most relevant:
- **Support, Docs, Consulting** get only **Great** feedback to boost morale
- **Team Leads** receive **OK and Bad** feedback so they can follow up
- **Management** is only alerted to **Bad** feedback for critical response

These rules can be freely customized. For example, you may want Support to receive all responses, or Management to be alerted only when multiple Bad entries are received. The structure is modular and easily adjustable.

**How to Use**
1. **Import the workflow**: Load the .json file into your Easy Redmine automation workspace
2. **Set up connections**: Nicereply API key or integration setup; Microsoft Teams integration (chat and/or channel posting)
3. **Insert your Survey ID(s)**: You'll find these in the Nicereply admin panel under Survey settings
4. **Customize team logic**: Adjust survey-to-team mappings and message routing as needed
5. **Edit Teams message templates**: Modify message text or formatting based on internal tone or content policies
6. **Test with real data**: Run manually and verify correct delivery to MS Teams
7. **Deploy and schedule**: Let it run on its own to automate the feedback cycle

**Requirements**
- Nicereply account with active surveys
- Microsoft Teams account with permissions to post to channels or send chats

**Optional Enhancements**
- Add AI to summarize long comments
- Store feedback history in an external DB
- Trigger follow-up tasks or alerts for repeated Bad scores
- Localize messages for multilingual feedback systems
- Integrate additional tools like Slack, Easy Redmine, etc.

**Tips for a Clean Setup**
- Keep team routing logic in one place for easy updates
- Rename all nodes clearly to reflect their function (e.g., Change Happiness Value)
- Add logging or alerting in case of failed delivery or an empty feedback pull
- Use environment variables for tokens and survey IDs where possible
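The Change Happiness Value node boils down to a score-to-label mapping. A minimal sketch, assuming a 0-10 Nicereply score scale — the thresholds are illustrative and should be adjusted to your survey's metric:

```javascript
// Sketch of the "Change Happiness Value" Code node.
// Thresholds assume a 0-10 score scale; tune them to your Nicereply survey.
function toHappiness(score) {
  if (score >= 9) return 'Great';
  if (score >= 7) return 'OK';
  return 'Bad';
}

const routed = [10, 8, 3].map(toHappiness);
```

The resulting label is what the downstream Switch/routing nodes branch on.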
by Axiomlab.dev
HubSpot Lead Refinement

🚀 **How it works**
- **Triggers**:
  - **HubSpot Trigger**: Fires when contacts are created/updated.
  - **Manual Trigger**: Run on demand for testing or batch checks.
- **Get Recently Created/Updated Contacts**: Pulls fresh contacts from HubSpot.
- **Edit Fields (Set)**: Maps key fields (First Name, Last Name, Email) for the Agent.
- **AI Agent**: First reads your Google Doc (via the Google Docs tool) to learn the research steps and output format. Then uses SerpAPI (Google engine) to locate the contact's likely LinkedIn profile and produce a concise result.
- **Code – Remove Think Part**: Cleans the model output (removes hidden "think" blocks / formatting) so only the final answer remains.
- **HubSpot Update**: Writes the cleaned LinkedIn URL to the contact (via email match).

🔑 **Required Credentials**
- **HubSpot App Token (Private App)**: for the Get/Update contact nodes.
- **HubSpot Developer OAuth (optional)**: if you use the HubSpot Trigger node for event-based runs.
- **Google Service Account**: for the Google Docs tool (share your playbook doc with this service account).
- **OpenRouter**: for the OpenRouter Chat Model used by the AI Agent.
- **SerpAPI**: for targeted Google searches from within the Agent.

🛠️ **Setup Instructions**

**HubSpot**
1. Create a Private App and copy the Access Token.
2. Add or confirm the contact property linkedinUrl (Text).
3. Plug the token into the HubSpot nodes.
4. If using the HubSpot Trigger, connect your Developer OAuth app and subscribe to contact create/update events.

**Google Docs (Living Instructions)**
➡️ Sample configuration doc file
1. Copy the sample doc file and modify it to your needs.
2. Share the doc with your Google Service Account (Viewer is fine).
3. In the Read Google Docs node, paste the Document URL.

**OpenRouter & SerpAPI**
1. Add your OpenRouter key to the OpenRouter Chat Model credential.
2. Add your SerpAPI key to the SerpAPI tool node.
3. (Optional) In your Google Doc or Agent prompt, set sensible defaults for SerpAPI (engine=google, hl=en, gl=us, num=5, max 1–2 searches).

✨ **What you get**
- Auto-enriched contacts with a LinkedIn URL and profile insights (clean, validated output).
- A research process you can change anytime by editing the Google Doc—no workflow changes needed.
- Tight, low-noise searches via SerpAPI to keep costs down.

And that's it—publish and let the Agent enrich new leads automatically while you refine the rules in your doc. It also allows handing off to a team that wouldn't necessarily tweak the automation nodes.
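The "Remove Think Part" step can be done with a regex in a Code node. A minimal sketch, assuming the model wraps its reasoning in `<think>...</think>` tags — match the tag name to whatever your model actually emits:

```javascript
// Sketch of the "Remove Think Part" Code node: strips <think>...</think>
// blocks some reasoning models emit, then collapses leftover blank lines.
// The tag name is an assumption; adjust it to your model's output.
function stripThink(text) {
  return text
    .replace(/<think>[\s\S]*?<\/think>/gi, '')
    .replace(/\n{3,}/g, '\n\n')
    .trim();
}

const cleaned = stripThink(
  '<think>The user asked for a URL...</think>\n\nhttps://www.linkedin.com/in/example'
);
```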
by jun shou
🔧 **How It Works**
This n8n workflow leverages an agentic AI solution, where multiple AI agents collaborate to process and generate tailored job application assets.

✅ **Features**
- **Agent-based AI Coordination**: Utilizes multiple AI agents working in sequence to analyze the job description and generate results.
- **Outputs**:
  - A customized cover letter
  - An optimized resume (CV)
  - A list of interview preparation questions
- **Automated Delivery**: The final outputs are created as Google Docs and stored in your connected Google Drive folder.

🧾 **Input Requirement**
Simply provide a LinkedIn job URL as the input.
Example: https://www.linkedin.com/jobs/view/4184156975

⚙️ **Setup Instructions**
To deploy and run this workflow, you'll need to configure the following credentials:
- **Google Cloud Platform (GCP)**: Enable the Google Drive API and set up OAuth credentials for n8n integration
- **OpenAI API Key**: Needed for generating the content (cover letter, CV, and questions)
- **BrightData (formerly Luminati)**: Used to scrape and extract job details from the LinkedIn job link

⚠️ Setup requires moderate technical familiarity with APIs and OAuth. A step-by-step configuration guide is recommended for beginners.
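An illustrative helper (not part of the template itself): extracting the numeric job ID from a LinkedIn job URL of the form shown above, before handing it to the scraper.

```javascript
// Pull the numeric job ID out of a LinkedIn job URL.
// Works for URLs shaped like https://www.linkedin.com/jobs/view/<digits>.
function extractJobId(url) {
  const match = url.match(/\/jobs\/view\/(\d+)/);
  return match ? match[1] : null;
}

const jobId = extractJobId('https://www.linkedin.com/jobs/view/4184156975');
```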
by Iternal Technologies
Blockify® Technical Manual Data Optimization Workflow

Blockify optimizes data for technical-manual RAG and agents, giving structure to unstructured data for ~78X accuracy when pairing Blockify Ingest and Blockify Distill.

Learn more at https://iternal.ai/blockify
Get free demo API access here: https://console.blockify.ai/signup
Read the technical whitepaper here: https://iternal.ai/blockify-results
See an example accuracy comparison here: https://iternal.ai/case-studies/medical-accuracy/

Blockify is a data optimization tool that takes messy, unstructured text, like hundreds of sales-meeting transcripts or long proposals, and intelligently optimizes the data into small, easy-to-understand "IdeaBlocks." Each IdeaBlock is just a couple of sentences in length, capturing one clear idea plus a built-in contextualized question and answer. With this approach, Blockify improves the accuracy of LLMs (Large Language Models) by an average aggregate of 78X, while shrinking the original mountain of text to about 2.5% of its size and keeping (even improving) the important information.

When Blockify's IdeaBlocks are compared with the usual method of breaking text into equal-sized chunks, the results are dramatic. Answers pulled from the distilled IdeaBlocks are roughly 40X more accurate, and user searches return the right information with about 52% greater accuracy. In short, Blockify lets you store less data, spend less on computing, and still get better answers, turning huge documents into a concise, high-quality knowledge base that anyone can search quickly.

Blockify works by processing chunks of text to create structured data from an unstructured data source. Blockify® replaces the traditional "dump-and-chunk" approach with an end-to-end pipeline that cleans and organizes content before it ever hits a vector store. Admins first define who should see what; then the system ingests any file type—Word, PDF, slides, images—inside public cloud, private cloud, or on-prem. A context-aware splitter finds natural breaks, and a series of specially developed Blockify LLM models turns each segment into a draft IdeaBlock. GenAI systems fed with this curated data return sharper answers, hallucinate far less, and comply with security policies out of the box. The result: higher trust, lower operating cost, and a clear path to enterprise-scale RAG without the cleanup headaches that stall most AI rollouts.
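To make the contrast with "dump-and-chunk" concrete, here is a minimal sketch of context-aware splitting — not Blockify's actual (proprietary) algorithm, just an illustration of the idea: break on paragraph boundaries rather than every N characters, so each chunk stays a self-contained unit of meaning.

```javascript
// Illustrative context-aware splitter: accumulate whole paragraphs into a
// chunk until adding the next one would exceed maxChars, then start a new
// chunk. Contrast with fixed-size chunking, which cuts mid-sentence.
function splitOnNaturalBreaks(text, maxChars) {
  const chunks = [];
  let current = '';
  for (const para of text.split(/\n\s*\n/)) {
    if (current && current.length + para.length > maxChars) {
      chunks.push(current.trim());
      current = '';
    }
    current += para + '\n\n';
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

const chunks = splitOnNaturalBreaks('First idea.\n\nSecond idea.\n\nThird idea.', 25);
```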
by Rahul Joshi
Description: Keep your customer knowledge base up to date with this n8n automation template. The workflow connects Zendesk with Google Sheets, automatically fetching tickets tagged "howto," enriching them with requester details, and saving them into a structured spreadsheet. This ensures your internal or public knowledge base reflects the latest customer how-to queries—without manual copy-pasting. Perfect for customer support teams, SaaS companies, and service providers who want to streamline documentation workflows.

**What This Template Does (Step-by-Step)**
- ⚡ **Manual Trigger or Scheduling**: Run the workflow manually for testing/troubleshooting, or configure a schedule trigger for daily/weekly updates.
- 📥 **Fetch All Zendesk Tickets**: Connects to your Zendesk account and retrieves all available tickets.
- 🔍 **Filter for "howto" Tickets Only**: Processes only tickets that contain the "howto" tag, ensuring relevance.
- 👤 **Enrich User Data**: Fetches requester details (name, email, profile info) to provide context.
- 📊 **Update Google Sheets Knowledge Base**: Saves ticket data, including Ticket No., Description, Status, Tag, Owner Name, and Email. ✔️ Smart update prevents duplicates by matching on description.
- 🔁 **Continuous Sync**: Each new or updated "howto" ticket is synced automatically into your knowledge base sheet.

**Key Features**
- 🔍 Tag-based filtering for precise categorization
- 📊 Smart append-or-update logic in Google Sheets
- ⚡ Zendesk + Google Sheets integration with OAuth2
- ♻️ Keeps the knowledge base fresh without manual effort
- 🔐 Secure API credential handling

**Use Cases**
- 📖 Maintain a live "how-to" guide from real customer queries
- 🎓 Build self-service documentation for support teams
- 📩 Monitor and track recurring help topics
- 💼 Equip knowledge managers with a ready-to-export dataset

**Required Integrations**
- Zendesk API (for ticket fetch + user info)
- Google Sheets (for storing/updating records)

**Why Use This Template?**
✅ Automates repetitive data entry
✅ Ensures knowledge base accuracy & freshness
✅ Reduces support team workload
✅ Easy to extend with more tags, filters, or sheet logic
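The tag filter above is a simple membership test: Zendesk tickets carry a `tags` array. A minimal sketch (the sample tickets are made up):

```javascript
// Sketch of the "howto" tag filter applied to Zendesk tickets.
function isHowTo(ticket) {
  return Array.isArray(ticket.tags) && ticket.tags.includes('howto');
}

const tickets = [
  { id: 101, tags: ['howto', 'billing'] },
  { id: 102, tags: ['bug'] },
];
const howtoTickets = tickets.filter(isHowTo);
```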
by DIGITAL BIZ TECH
SharePoint → Supabase → Google Drive Sync Workflow

**Overview**
This workflow is a multi-system document synchronization pipeline built in n8n, designed to automatically sync and back up files between Microsoft SharePoint, Supabase/Postgres, and Google Drive. It runs on a scheduled trigger, compares SharePoint file metadata against your Supabase table, downloads new or updated files, uploads them to Google Drive, and marks records as completed — keeping your databases and storage systems perfectly in sync.

**Workflow Structure**
- **Data Source**: SharePoint REST API for recursive folder and file discovery.
- **Processing Layer**: n8n logic for filtering, comparison, and metadata normalization.
- **Destination Systems**: Supabase/Postgres for metadata, Google Drive for file backup.

**SharePoint Sync Flow (Frontend Flow)**
- **Trigger** (Schedule Trigger): Runs at fixed intervals (customizable) to start synchronization.
- **Fetch Files** (Microsoft SharePoint HTTP Request): Recursively retrieves folders and files using SharePoint's REST API: /GetFolderByServerRelativeUrl(...)?$expand=Files,Folders,Folders/Files,Folders/Folders/Folders/Files
- **Filter Files** (filter files): A Code node that flattens nested folders and filters unwanted file types: excludes system or temporary files (~$) and the extensions .db, .msg, .xlsx, .xlsm, .pptx.
- **Normalize Metadata** (normalize last modified date): Ensures a consistent Last_modified_date format for accurate comparison.
- **Fetch Existing Records** (Supabase Get): Retrieves current entries from n8n_metadata to compare against SharePoint files.
- **Compare Datasets** (Compare Datasets): Detects new or modified files based on UniqueId, Last_modified_date, and Exists; routes only changed entries forward for processing.

**File Processing Engine (Backend Flow)**
- **Loop** (Loop Over Items2): Iterates through each new or updated file detected.
- **Build Metadata** (get metadata and Set metadata): Constructs the final metadata fields (file_id, file_title, file_url, file_type, foldername, last_modified_date) and generates fileUrl from UniqueId and ServerRelativeUrl if missing.
- **Upsert Metadata** (Insert Document Metadata): Inserts or updates file records in Supabase/Postgres (n8n_metadata table). Operation: upsert with id as the primary matching key.
- **Download File** (Microsoft SharePoint HTTP Request1): Fetches the binary file directly from SharePoint using its ServerRelativeUrl.
- **Rename File** (rename files): Renames each downloaded binary file to its original file_title before upload.
- **Upload File** (Upload file): Uploads the renamed file to Google Drive (My Drive → root folder).
- **Mark Complete** (Postgres): Updates the Supabase/Postgres record, setting Loading Done = true.
- **Optional Cleanup** (Supabase1): Deletes obsolete or invalid metadata entries when required.

**Integrations Used**

| Service | Purpose | Credential |
|----------|----------|-------------|
| Microsoft SharePoint | File retrieval and download | microsoftSharePointOAuth2Api |
| Supabase / Postgres | Metadata storage and synchronization | Supabase account 6 ayan |
| Google Drive | File backup and redundancy | Google Drive account 6 rn dbt |
| n8n Core | Flow control, dataset comparison, batch looping | Native |

**System Prompt Summary**
> "You are a SharePoint document synchronization workflow. Fetch all files, compare them to database entries, and only process new or modified files. Download files, rename correctly, upload to Google Drive, and mark as completed in Supabase."

Workflow rule summary:
> "Maintain data integrity, prevent duplicates, handle retries gracefully, and continue on errors. Skip excluded file types and ensure reliable backups between all connected systems."

**Key Features**
- Scheduled automatic sync across SharePoint, Supabase, and Google Drive
- Intelligent comparison to detect only new or modified files
- Idempotent upsert for consistent metadata updates
- Configurable file exclusion filters
- Safe rename + upload pipeline for clean backups
- Error-tolerant and fully automated operation

**Summary**
> A reliable SharePoint-to-Google Drive synchronization workflow built with n8n, integrating Supabase/Postgres for metadata management. It automates file fetching, filtering, downloading, uploading, and marking as completed — ensuring your data stays mirrored across platforms. Perfect for enterprises managing document automation, backup systems, or cross-cloud data synchronization.

**Need Help or More Workflows?**
Want to customize this workflow for your organization? Our team at Digital Biz Tech can extend it for enterprise-scale document automation, RAG, and social media automation. We can help you set it up for free — from connecting credentials to deploying it live.
Contact: shilpa.raju@digitalbiz.tech
Website: https://www.digitalbiz.tech
LinkedIn: https://www.linkedin.com/company/digital-biz-tech/
You can also DM us on LinkedIn for any help.
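The exclusion rules in the "filter files" Code node can be sketched as follows, matching the rules listed above (~$ temp files and the blocked extensions):

```javascript
// Sketch of the file-exclusion logic used by the "filter files" Code node.
const BLOCKED_EXTENSIONS = ['.db', '.msg', '.xlsx', '.xlsm', '.pptx'];

function shouldSync(fileName) {
  if (fileName.startsWith('~$')) return false; // Office lock/temp files
  const ext = fileName.slice(fileName.lastIndexOf('.')).toLowerCase();
  return !BLOCKED_EXTENSIONS.includes(ext);
}

const names = ['report.pdf', '~$draft.docx', 'stats.xlsx'];
const synced = names.filter(shouldSync);
```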
by Mohamed Abdelwahab
1. Overview

The IngestionDocs workflow is a fully automated **document ingestion and knowledge management system** built with **n8n**. Its purpose is to continuously ingest organizational documents from Google Drive, transform them into vector embeddings using OpenAI, store them in Pinecone, and make them searchable and retrievable through an AI-powered Q&A interface. This ensures that employees always have access to the most up-to-date knowledge base without requiring manual intervention.

2. Key Objectives

- **Automated Ingestion** → Seamlessly process new and updated documents from Google Drive.
- **Change Detection** → Track and differentiate between new, updated, and previously processed documents.
- **Knowledge Base Construction** → Convert documents into embeddings for semantic search.
- **AI-Powered Assistance** → Provide an intelligent Q&A system for employees to query manuals.
- **Scalable & Maintainable** → Modular design using n8n, LangChain, and Pinecone.

3. Workflow Breakdown

A. Document Monitoring and Retrieval
The workflow begins with two Google Drive triggers:
- **File Created Trigger** → Fires when a new document is uploaded.
- **File Updated Trigger** → Fires when an existing document is modified.
A search operation lists the files in the designated Google Drive folder, and non-downloadable items (e.g., subfolders) are filtered out. Each valid file is downloaded, and a SHA256 hash is generated to uniquely identify the file's content.

B. Record Management (Google Sheets Integration)
To keep track of ingestion states, the workflow uses a **Google Sheets-based Record Manager**. Each file entry contains:
- **Id** (Google Drive file ID)
- **Name** (file name)
- **hashId** (SHA256 checksum)
The workflow compares the current file's hash with the stored one:
- **New Document** → File not found in records → inserted into the Record Manager.
- **Already Processed** → File exists and the hash matches → skipped.
- **Updated Document** → File exists but the hash differs → record is updated.
This guarantees that only new or modified content is processed, avoiding duplication.

C. Document Processing and Vectorization
Once a document is marked as new or updated:
- The Default Data Loader extracts its content (binary files supported); pages are split into individual chunks, and metadata such as file ID and name are attached.
- A Recursive Character Text Splitter divides the content into manageable segments with overlap.
- OpenAI Embeddings (text-embedding-3-large) transform each text chunk into a semantic vector.
- The Pinecone Vector Store stores these vectors in the configured index. For new documents, embeddings are inserted into a namespace based on the file name; for updated documents, the namespace is cleared first, then re-ingested with fresh embeddings.
This process builds a scalable and queryable knowledge base.

D. Knowledge Base Q&A Interface
The workflow also provides an **interactive form-based user interface**:
- **Form Trigger** → Collects employee questions.
- **LangChain AI Agent** → Receives the question, retrieves relevant context from Pinecone using vector similarity search, and processes the response using the OpenAI Chat Model (gpt-4.1-mini).
- **Answer Formatting** → Responses are returned in HTML format for readability; a custom CSS theme ensures a modern, user-friendly design, and answers may include references to page numbers when available.
This creates a self-service knowledge base assistant that employees can query in natural language.

4. Technologies Used

- **n8n** → Orchestration of the entire workflow.
- **Google Drive API** → File monitoring, listing, and downloading.
- **Google Sheets API** → Record manager for tracking file states.
- **OpenAI API** → text-embedding-3-large for semantic vector creation; gpt-4.1-mini for conversational Q&A.
- **Pinecone** → Vector database for embedding storage and retrieval.
- **LangChain** → Document loaders, text splitters, vector store connectors, and agent logic.
- **Crypto (SHA256)** → File hash generation for change detection.
- **Form Trigger + Form Node** → Employee-facing Q&A submission and answer display.
- **Custom CSS** → Provides a modern, responsive, styled UI for the knowledge base.

5. End-to-End Data Flow

1. Employee uploads or updates a document → Google Drive detects the change.
2. The workflow downloads and hashes the file → ensures uniqueness and detects modifications.
3. The Record Manager (Google Sheets) decides whether to skip, insert, or update the record.
4. Document processing → splitting + embedding + storing into Pinecone.
5. Knowledge base updated → the latest version of the documents is indexed.
6. Employee asks a question via the web form.
7. The AI Agent retrieves embeddings from Pinecone and uses GPT-4.1-mini → generates a contextual answer.
8. The answer is displayed in styled HTML → delivered back to the employee through the form interface.

6. Benefits

- **Always Up-to-Date** → Automatically syncs documents when uploaded or changed.
- **No Duplicates** → Smart hashing ensures only relevant updates are reprocessed.
- **Searchable Knowledge Base** → Employees can query documents semantically, not just by keywords.
- **Enhanced Productivity** → Answers are immediate, reducing time spent browsing manuals.
- **Scalable** → New documents and users can be added without workflow redesign.

✅ In summary, IngestionDocs is a **robust AI-driven document ingestion and retrieval system** that integrates **Google Drive, Google Sheets, OpenAI, and Pinecone** within **n8n**. It continuously builds and maintains a knowledge base of manuals while offering employees an intelligent, user-friendly Q&A assistant for fast and accurate knowledge retrieval.
by Davide
🤖🎵 This workflow automates the creation, storage, and cataloging of AI-generated music using the Eleven Music API, Google Sheets, and Google Drive.

**Key Advantages**

✅ **Fully Automated Music Generation Pipeline**
Once started, the workflow automatically reads track parameters, generates music via the API, uploads the file, and updates your spreadsheet. No manual steps are needed after initialization.

✅ **Centralized Track Management**
A single Google Sheet acts as your project control center, letting you organize prompts, durations, and generated URLs. This avoids losing track of files and creates a ready-to-share catalog.

✅ **Seamless Integration with Google Services**
The workflow reads instructions from Google Sheets, saves the MP3 to Google Drive, and updates the same Sheet with the final link, so everything stays synchronized and easy to access.

✅ **Scalable and Reliable Processing**
The loop-with-delay mechanism processes tracks sequentially, prevents API overload, and ensures stable execution. This is especially helpful when generating multiple long tracks.

✅ **Easy Customization**
Because the prompts and durations come from Google Sheets, you can edit prompts at any time, add more tracks without modifying the workflow, and clone the Sheet for different projects.

✅ **Ideal for Creators and Businesses**
This workflow is perfect for content creators generating background music, agencies designing custom soundtracks, businesses needing AI-generated audio assets, and automated production pipelines.

**How It Works**
1. The workflow starts manually via the "Execute workflow" trigger.
2. It retrieves a list of music track requests from a Google Sheets spreadsheet containing track titles, text prompts, and duration specifications.
3. The system processes each track request individually through a batch loop.
4. For each track, it sends the text prompt and duration to the ElevenLabs Music API to generate studio-quality music.
5. The generated MP3 file (44100 Hz, 128 kbps) is automatically uploaded to a designated Google Drive folder.
6. Once uploaded, the workflow updates the original Google Sheet with the direct URL to the generated music file.
7. A 1-minute wait period between each track generation prevents API rate limiting.
8. The process continues until all track requests in the spreadsheet have been processed.

**Set Up Steps**

Prerequisites:
- ElevenLabs paid account with Music API access enabled
- Google Sheets spreadsheet with the columns: TITLE, PROMPT, DURATION (ms), URL
- Google Drive folder for storing generated music files

Configuration steps:
1. **ElevenLabs API Setup**: Enable Music Generation access in your ElevenLabs account, generate an API key from the ElevenLabs developer dashboard, and configure HTTP Header authentication in n8n with the name "xi-api-key" and your API key as the value.
2. **Google Sheets Preparation**: Create or clone the music tracking spreadsheet with the required columns, fill in track titles, detailed text prompts, and durations in milliseconds (10,000–300,000 ms), configure Google Sheets OAuth credentials in n8n, and update the document ID in the Google Sheets nodes.
3. **Google Drive Configuration**: Create a dedicated folder for music uploads, set up Google Drive OAuth credentials in n8n, and update the folder ID in the upload node.
4. **Workflow Activation**: Ensure all API credentials are properly configured, test with a single track entry in the spreadsheet, verify that music generation, upload, and spreadsheet update work correctly, then execute the workflow to process all pending track requests.

The workflow automatically names files with timestamp prefixes (song_yyyyMMdd) and handles the complete lifecycle from prompt to downloadable music file.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
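An illustrative helper matching the naming scheme described above (the song_yyyyMMdd prefix); appending the track title to the prefix is an assumption for the example, not the template's exact format.

```javascript
// Build a file name with the song_yyyyMMdd timestamp prefix described above.
function makeFileName(title, date) {
  const yyyy = date.getFullYear();
  const mm = String(date.getMonth() + 1).padStart(2, '0');
  const dd = String(date.getDate()).padStart(2, '0');
  return `song_${yyyy}${mm}${dd}_${title.replace(/\s+/g, '-')}.mp3`;
}

const name = makeFileName('Morning Vibes', new Date(2025, 0, 15));
```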
by Margo Rey
Generate and send MadKudu Account Brief into Outreach

This workflow generates an account brief tailored to your company using MadKudu MCP and OpenAI, and syncs it to a custom field in Outreach. It's for sales teams who want to give reps rich account context right inside Outreach, and to draft Outreach emails with the Outreach Revenue Agent based on the MadKudu account brief.

✨ **Who it's for**
- RevOps or GTM teams using MadKudu + Salesforce + Outreach
- Sales teams needing dynamic, AI-generated context for target accounts

🔧 **How it works**
1. **Select Accounts**: Use a Salesforce node to define which accounts to brief. The filter logic can be updated to match ICP or scoring rules (e.g., MadKudu Fit + LTB).
2. **Generate Brief with MadKudu MCP & AI**: MadKudu MCP provides the account brief instructions, researches recent company news online, and provides structured account context from your integrations connected to MadKudu plus external signals (firmographics, past opportunities, active contacts, job openings...). The AI agent (OpenAI model) turns this into a readable account brief.
3. **Send to Outreach**: Match the account in Outreach via domain, then update a custom field (e.g., custom49) with the brief text.

📋 **How to set up**
1. **Connect your Salesforce account**: Used to pull accounts that need a brief.
2. **Set your OpenAI credentials**: Required for the AI Agent to generate the brief.
3. **Create an n8n Variable named madkudu_api_key** to store your MadKudu API key, used for the MadKudu MCP tool. The AI Agent pulls the account brief instructions and all the context necessary to generate the briefs.
4. **Create an OAuth2 API credential to connect your Outreach account**: Used to sync the brief to Outreach.
5. **Customize the Salesforce filter**: In the "Get accounts" node, define which accounts should get a brief (e.g., Fit > 90).
6. **Map your Outreach custom field**: Update the JSON body of the request with your actual custom field ID (e.g., custom49).

🔑 **How to connect Outreach**
1. In n8n, add a new OAuth2 API credential and copy the callback URL.
2. Go to the Outreach developer portal and click "Add" to create a new app.
3. In Feature selection, add Outreach API (OAuth).
4. In API Access (OAuth), set the redirect URI to the n8n callback URL.
5. Select the following scopes: accounts.read, accounts.write.
6. Save in Outreach.
7. Enter the Outreach Application ID into the n8n Client ID and the Outreach Application Secret into the n8n Client Secret.
8. Save in n8n and connect your Outreach account via OAuth.

✅ **Requirements**
- MadKudu account with access to an API Key
- Salesforce OAuth
- Outreach Admin permissions to create an app
- OpenAI API Key

🛠 **How to customize the workflow**
- **Change the targeting logic**: Edit the Salesforce filter to control which accounts are eligible.
- **Rewrite the prompt**: Tweak the prompt in the AI Agent node to adjust the format, tone, or insights included in the brief.
- **Change the Outreach account field**: Update the Outreach field the brief is synced to if you're using a different custom field (e.g., custom48, custom32, etc.).
- **Use a different trigger**: Swap the manual trigger for a Schedule or Webhook trigger to automate the flow end-to-end.
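For the "Map your Outreach custom field" step, the update request's JSON body would look roughly like this. Outreach's API follows the JSON:API convention, but verify the exact shape against Outreach's API documentation before relying on it; the sample values are made up.

```javascript
// Sketch of the JSON body for updating an Outreach account's custom field.
// Swap custom49 for your own field ID, per the mapping step above.
function buildOutreachBody(accountId, brief) {
  return {
    data: {
      type: 'account',
      id: accountId,
      attributes: { custom49: brief },
    },
  };
}

const body = buildOutreachBody(42, 'Acme Corp: strong fit, 2 open opportunities...');
```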
by Robert Breen
This n8n workflow template automatically monitors your Google Sheets for new entries and uses AI to generate detailed descriptions for each topic. It is perfect for content creators, researchers, project managers, or anyone who needs automatic content generation from simple topic inputs.

## What This Workflow Does

This automated workflow:

- Monitors a Google Sheet for new rows added to the "data" tab
- Takes the topic from each new row
- Uses OpenAI GPT to generate a detailed description of that topic
- Updates the same row with the AI-generated description
- Logs all activity in a separate "actions" tab for tracking

The workflow runs every minute, checking for new entries and processing them automatically.

## Tools & Services Used

- **n8n** - workflow automation platform
- **OpenAI API** - AI-powered description generation (GPT-4.1-mini)
- **Google Sheets** - data input, storage, and activity logging
- **Google Sheets Trigger** - real-time monitoring for new rows

## Prerequisites

Before implementing this workflow, you'll need:

- **n8n instance** - self-hosted or cloud version
- **OpenAI API account** - for AI description generation
- **Google account** - for Google Sheets integration
- **Google Sheets API access** - for both reading and writing to sheets

## Step-by-Step Setup Instructions

### Step 1: Set Up OpenAI API Access

1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy and securely store your API key

### Step 2: Set Up Your Google Sheets

**Option 1: Use Our Pre-Made Template (Recommended)**

1. Copy our template: AI Description Generator Template
2. Click "File" → "Make a copy" to create your own version
3. Rename it as desired (e.g., "My AI Content Generator")
4. Note your new sheet's URL - you'll need it for the workflow

**Option 2: Create From Scratch**

1. Go to Google Sheets and create a new spreadsheet
2. Set up the main "data" tab:
   - Rename "Sheet1" to "data"
   - Set up column headers in row 1: A1: `topic`, B1: `description`
3. Create an "actions" tab:
   - Add a new sheet and name it "actions"
   - Set up column
     headers: A1: `Update`
4. Copy your sheet's URL

### Step 3: Configure Google API Access

1. **Enable the APIs**
   - Go to the Google Cloud Console
   - Create a new project or select an existing one
   - Enable the "Google Sheets API"
   - Enable the "Google Drive API"
2. **Create a Service Account (for n8n)**
   - In the Google Cloud Console, go to "IAM & Admin" → "Service Accounts"
   - Create a new service account
   - Download the JSON credentials file
   - Share your Google Sheet with the service account's email address

### Step 4: Import and Configure the n8n Workflow

1. **Import the Workflow**
   - Copy the workflow JSON from the template
   - In your n8n instance, go to Workflows → Import from JSON
   - Paste the JSON and import
2. **Configure OpenAI Credentials**
   - Click on the "OpenAI Chat Model" node
   - Set up credentials using your OpenAI API key
   - Test the connection to ensure it works
3. **Configure Google Sheets Integration**
   - For the trigger node: click on the "Row added - Google Sheet" node, set up Google Sheets Trigger OAuth2 credentials, select your spreadsheet from the dropdown, choose the "data" sheet, and set polling to "Every Minute" (already configured)
   - For the update node: click on the "Update row in sheet" node, use the same Google Sheets credentials, select your spreadsheet and the "data" sheet, and verify the column mapping (topic → topic, description → AI output)
   - For the actions log node: click on the "Append row in sheet" node, use the same Google Sheets credentials, and select your spreadsheet and the "actions" sheet

### Step 5: Customize the AI Description Generator

The workflow uses a simple prompt that can be customized:

1. Click on the "Description Writer" node
2. Modify the system message to change the AI behavior:

       write a description of the topic. output like this. { "description": "description" }

## Need Help with Implementation?
For professional setup, customization, or troubleshooting of this workflow, contact:

**Robert - Ynteractive Solutions**

- **Email**: robert@ynteractive.com
- **Website**: www.ynteractive.com
- **LinkedIn**: linkedin.com/in/robert-breen-29429625/

Specializing in AI-powered workflow automation, business process optimization, and custom integration solutions.
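As a closing illustration, the per-row logic of the description-generator template can be sketched in plain Python. The `{"description": "..."}` output format and the "data"/"actions" tab names come from the workflow; the `generate` callable is a hypothetical stand-in for the OpenAI Chat Model node.

```python
import json

def parse_description(ai_output: str) -> str:
    """Parse the Description Writer's JSON reply ({"description": "..."});
    fall back to the raw text if the model didn't return valid JSON."""
    try:
        return json.loads(ai_output)["description"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return ai_output.strip()

def process_new_row(row: dict, generate) -> tuple[dict, dict]:
    """Mirror the workflow for one new row: generate a description for the
    topic, return the updated 'data' row and an entry for the 'actions' log tab."""
    description = parse_description(generate(row["topic"]))
    updated = {"topic": row["topic"], "description": description}
    action = {"Update": f"Generated description for '{row['topic']}'"}
    return updated, action
```

The fallback in `parse_description` matters in practice: models occasionally return prose instead of the requested JSON, and logging the raw text beats dropping the row.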