by Milo Bravo
Automated Contract Signing: Tally, Airtable & DocuSign

Who is this for?
Businesses that manually prep and route DocuSign envelopes and want zero-touch contract signing from form submission.

What problem is this workflow solving?
Contract chaos kills velocity:
- Manual DocuSign prep (30 min per envelope)
- Signer routing errors
- Data re-entry across tools
- No audit trail

This workflow auto-generates and routes DocuSign envelopes from Tally form submissions, and retrieves and updates records in Airtable.

What this workflow does
- Normalizes the Tally payload and looks up the service provider
- Routes smartly: both signers / primary only / secondary only
- Pre-fills DocuSign from Airtable data
- Tracks everything: status, signers, and timestamps in a dashboard

3 Signing Modes
- **Both**: dual-signer envelopes
- **Primary**: client-only signing
- **Secondary**: provider-only signing

Setup (10 minutes)
1. Airtable: 3 tables (Contracts / Providers / Logs)
2. DocuSign: OAuth2 + 3 envelope templates
3. Tally: form webhook → this workflow URL
4. Config: replace BASE_ID / TABLE_IDs / ACCOUNT_ID
5. Test: submit a Tally form → watch the DocuSign magic

Airtable Schema
- **Contracts**: ID, Client, Provider, Status, EnvelopeID, Signers
- **Providers**: Name, DocuSignEmail, Role
- **Logs**: Timestamp, Action, Details

How to customize to your needs
Signing flows:
- Agency → Client NDA (Primary only)
- Partner → Mutual MSA (Both signers)
- Internal → Approval (Secondary only)

Scale up:
- **CRM sync**: HubSpot/Salesforce status updates
- **Payments**: Stripe link post-signing
- **Multi-language**: one template per locale
- **Notifications**: Slack/Teams on completion

Pro features:
- Sequential signing order
- Void/correct envelopes
- Audit log dashboard
- Field validation

ROI
- 30 min → 30 sec per contract
- $0 (vs. HelloSign at $15+/mo)
- 100% tracked (no lost envelopes)
- Audit-ready (logs + timestamps)
- GDPR compliant (data mapping)

Proven: 2k+ contracts signed, 98% completion rate.
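The routing step above can be sketched as a small function, as it might appear in an n8n Code node. The field names (`signingMode`, `clientEmail`, `DocuSignEmail`) are assumptions for illustration, not the workflow's actual schema:

```javascript
// Hypothetical sketch of the signing-mode routing logic.
// Field names on `submission` and `provider` are assumed, not the real schema.
function buildSignerList(submission, provider) {
  const mode = (submission.signingMode || 'both').toLowerCase();
  const client = { name: submission.clientName, email: submission.clientEmail, routingOrder: 1 };
  const secondary = { name: provider.Name, email: provider.DocuSignEmail, routingOrder: 2 };
  switch (mode) {
    case 'primary':   return [client];            // client-only signing
    case 'secondary': return [secondary];         // provider-only signing
    default:          return [client, secondary]; // dual-signer envelope
  }
}
```

The returned array would feed the DocuSign envelope's recipient list; in the real workflow this branching is what the Switch node decides.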
Need help customizing? Contact me for consulting and support: LinkedIn / Message

Keywords: DocuSign automation, contract signing automation, e-signature workflow, sales contract automation
by Hugo Le Poole
AI-powered "Second Brain" that can answer questions about any YouTube channel's content using a Neo4j graph database and RAG. Turn any YouTube channel into a searchable knowledge base. The AI agent understands relationships between videos, topics, tools, and concepts, enabling powerful queries like "Which videos talk about automation AND mention n8n?" or "What are the most discussed topics?"

Good to know
- Neo4j Aura's free tier is sufficient for most channels (up to 200k nodes)
- Apify credits are required for YouTube scraping (~$5 for 500 videos)
- LLM costs are minimal (~$0.01 per video for entity extraction)

How it works
- **Ingestion flow**: scrapes YouTube videos via Apify, including titles, descriptions, and transcripts.
- **Entity extraction**: GPT-4o-mini analyzes each video and extracts the Topics, Tools, and Concepts mentioned.
- **Graph storage**: data is stored in Neo4j with relationships: Video → COVERS → Topic, Video → USES → Tool, Video → EXPLAINS → Concept.
- **AI agent**: receives user questions, generates Cypher queries to search the graph, and returns natural-language responses with relevant video links.

How to use
1. Set up a free Neo4j Aura instance and save the credentials
2. Convert your Neo4j username:password to Base64 for authentication
3. Configure Apify with your target YouTube channel URL
4. Run the ingestion workflow to populate the database
5. Chat with the AI agent to query your video knowledge base

Requirements
- Neo4j Aura account (free tier available)
- Apify account for YouTube scraping
- OpenAI API key (GPT-4o-mini) for entity extraction
- Anthropic API key (Claude) or OpenAI for the AI agent

Customization
- Modify the system prompt to change the response style or language
- Add more entity types (e.g., People, Companies, Frameworks)
- Connect multiple YouTube channels into one knowledge base
- Extend to other content sources (blogs, podcasts, Notion docs)
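Step 2 of "How to use" (the Base64 conversion) can be done in a few lines. This is a minimal sketch with placeholder credentials; in an n8n Code node this runs on Node.js, so `Buffer` is available:

```javascript
// Build the Basic-auth token for Neo4j Aura's HTTP endpoint.
// Username and password here are placeholders; use your Aura credentials.
const username = 'neo4j';
const password = 'your-aura-password';

// Base64-encode "username:password" for the Authorization header.
const token = Buffer.from(`${username}:${password}`).toString('base64');
const headers = { Authorization: `Basic ${token}` };

console.log(headers.Authorization); // e.g. "Basic bmVvNGo6..."
```

Paste the resulting string (without the `Basic ` prefix, depending on how the credential field is set up) into the workflow's authentication configuration.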
by Calistus Christian
How it works
- Webhook → urlscan.io → GPT-4o mini → Gmail
- Payload example: `{ "url": "https://example.com" }`
- urlscan.io returns a Scan ID and the raw JSON.
- The AI node classifies the scan as malicious / suspicious / benign, assigns a 1-10 risk score, and writes a two-sentence summary.
- Gmail sends an alert that includes the URL, Scan ID, AI verdict, screenshot link, and full-report link.

Set-up steps (~5 min)
1. Create three credentials in n8n:
   - urlscan.io API key
   - OpenAI API key (GPT-4o mini access)
   - Gmail OAuth (or SMTP)
2. Replace those fields in the nodes, or reference env vars like `{{ $env.OPENAI_API_KEY }}`.
3. Switch the Webhook to Production and copy the live URL.
4. Test with:

```
curl -X POST <your-webhook-url> \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com" }'
```
by Robert Breen
This workflow automates writing tailored cover letters for job applications. It:
- Uses Apify's Indeed Scraper to pull live job postings based on your chosen search term.
- Sends each job description, along with your resume, to OpenAI, which writes an optimized cover letter (one paragraph plus bullet points) using only details from your resume.

Perfect for quickly generating professional, customized cover letters for each role you want to apply to.

⚙️ Setup Instructions

1️⃣ Set Up OpenAI Connection
- Go to the OpenAI Platform
- Navigate to OpenAI Billing and add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n

2️⃣ Set Up Apify Connection
- Go to the Apify Console and sign up or log in
- Get your API token from the Apify API Keys page
- Set up the Indeed Scraper in your Apify account
- In n8n, create an HTTP Query Auth credential:
  - Query key: `token`
  - Value: `YOUR_APIFY_API_KEY`
- Attach this credential to the HTTP Request node (Search Indeed)

📬 Contact Information
Need help customizing this workflow or building similar automations?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
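The HTTP Query Auth credential above simply appends the token to the request URL as a query parameter. A sketch of what the resulting request URL looks like; the actor path here is a placeholder, not Apify's exact endpoint:

```javascript
// Illustrative only: how a query-auth token ends up on the request URL.
// The base endpoint path is a placeholder, not the real Apify actor URL.
const APIFY_TOKEN = 'YOUR_APIFY_API_KEY';
const base = 'https://api.apify.com/v2/acts/your-indeed-scraper/runs';

const url = new URL(base);
url.searchParams.set('token', APIFY_TOKEN); // query key "token", value = your API key

console.log(url.toString());
```

In n8n you never build this URL by hand; attaching the credential to the HTTP Request node does it for you.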
by Robert Breen
This workflow creates a multilingual eCommerce chatbot that automatically detects the customer's language and provides tailored responses. It is designed for online shops to improve customer support in English, Spanish, and French. The chatbot is powered by OpenAI's GPT-5 Nano and runs entirely inside n8n, with built-in memory to keep conversations contextual and helpful.

🔑 Key Features
- **Language detection**: identifies the customer's language automatically (English, Spanish, or French).
- **Localized responses**: uses pre-defined system prompts for each language.
- **Customer-support ready**: handles product questions, order tracking, returns, and general inquiries.
- **Human handoff**: if details are missing, it guides the user to contact human support.
- **Conversation memory**: tracks sessions for smoother, contextual replies.

⚙️ Setup Instructions

1️⃣ Set Up OpenAI Connection
- Get an API key from the OpenAI Platform
- Go to OpenAI Billing and add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n

2️⃣ Configure Your Languages & Prompts
- Open the Set node named "Ecommerce Language Prompts".
- Update or expand the list of languages and their system prompts. The example already includes English, Spanish, and French.

That's it! Your chatbot is ready to run 🎉

📬 Contact Information
Need help customizing this workflow or building similar automations?
📧 Email: robert@ynteractive.com
🔗 LinkedIn: Robert Breen
🌐 Website: ynteractive.com
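The "Ecommerce Language Prompts" idea boils down to a lookup from a detected language code to a system prompt, with a sensible fallback. A minimal sketch; the prompt texts are placeholders, not the workflow's actual prompts:

```javascript
// Placeholder system prompts keyed by language code; extend as needed.
const prompts = {
  en: 'You are a helpful eCommerce support agent. Reply in English.',
  es: 'Eres un agente de soporte de eCommerce. Responde en español.',
  fr: 'Vous êtes un agent de support eCommerce. Répondez en français.',
};

// Fall back to English when the detected language is not configured.
function systemPromptFor(langCode) {
  return prompts[langCode] || prompts.en;
}
```

Adding a fourth language is then just a matter of adding one more key to the object and teaching the detection step about it.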
by Recrutei Automações
Overview: Recrutei ATS - Ultimate Internal AI Agent

This workflow transforms Slack into a powerful command center for recruitment. Using an AI Agent (LangChain) integrated with the Recrutei ATS API and MCP, your team can manage candidates, vacancies, tags, and much more directly through Slack messages.

Key Features
- **Natural language processing**: uses GPT-4 to interpret complex requests like "Find candidates for the Python role and tag them as 'Senior'".
- **Candidate management**: create prospects, disqualify candidates, or move them between pipeline stages.
- **Vacancy insights**: add and read comments on vacancies directly from the chat.
- **Tagging system**: create, list, and delete tags dynamically.

Setup Instructions
1. Slack Trigger: connect your Slack account and invite the bot to your desired channel.
2. OpenAI: configure your API key. This agent uses GPT-4o-mini (or GPT-4) for high-reasoning capabilities.
3. HTTP Request Tools: every "Tool" node (the pink nodes) needs your Recrutei API token. Replace the Authorization header with `Bearer YOUR_TOKEN`, and update the base URL if necessary.
4. Slack Post: ensure the bot has permission to write in the channel.
by InfyOm Technologies
✅ What problem does this workflow solve?
Call centers often record conversations for quality control and training, but reviewing every transcript manually is tedious and inefficient. This workflow automates sentiment analysis for each call, providing structured feedback across multiple key categories, so managers can focus on improving performance and training.

⚙️ What does this workflow do?
- Accepts a Google Sheet containing:
  - Call transcript
  - Agent name
  - Customer name
- Analyzes each call transcript across multiple sentiment dimensions:
  - 👋 Greeting sentiment
  - 🧑💼 Agent friendliness
  - ❓ Problem-solving sentiment
  - 🙂 Customer sentiment
  - 👋 Closing sentiment
  - ✅ Issue resolved (Yes/No)
- Adds the conversation topics discussed in the call
- Calculates an overall call rating from the combined analysis
- Updates the Google Sheet with:
  - Individual sentiment scores
  - Issue-resolution status
  - Final call rating

🔧 Setup Instructions

📄 Google Sheets
Prepare a sheet with the following columns:
- Transcript
- Agent Name
- Customer Name

The workflow will append results in new columns automatically:
- Greeting Sentiment
- Closing Sentiment
- Agent Friendliness
- Problem Solving
- Customer Sentiment
- Issue Resolved
- Overall Call Rating (out of 5 or 10)

🧠 OpenAI Setup
Connect the OpenAI API to perform NLP-based sentiment classification. For each transcript, structured prompts analyze the individual components.

🧠 How it Works – Step by Step
1. Sheet scan: the workflow reads rows from the provided Google Sheet.
2. Loop through calls: for each transcript, it sends prompts to OpenAI to analyze:
   - Greeting tone (friendly/neutral/rude)
   - Problem-solving quality (clear/confusing/helpful)
   - Closing sentiment
   - Agent attitude
   - Customer satisfaction
   - Whether the issue was resolved
   It then calculates a composite rating from all factors.
3. Update sheet: all analyzed data is written back into the Google Sheet.

📊 Example Output
https://docs.google.com/spreadsheets/d/1aWU28D_73nvkDMPfTkPkaV53kHgX7cg0W4NwLzGFEGU/edit?gid=0#gid=0

👤 Who can use this?
This workflow is ideal for:
- ☎️ Call centers
- 🎧 Customer support teams
- 🧠 Training & QA departments
- 🏢 BPOs or support vendors

If you want deeper insight into every customer interaction, this workflow delivers quantified, actionable sentiment metrics automatically.

🛠 Customization Ideas
- 📅 Add scheduled runs (daily/weekly) to auto-analyze new calls.
- 📝 Export flagged or low-rated calls into a review dashboard.
- 🧩 Integrate with Slack or email to send alerts for low-score calls.
- 🗂 Filter by agent, category, or score to track performance trends.

🚀 Ready to Use?
Just connect:
- ✅ Google Sheets (with transcript data)
- ✅ OpenAI API

…and this workflow will automatically turn your raw call transcripts into actionable sentiment insights.
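The composite-rating step can be sketched as follows. The label-to-score mapping and the "unresolved issues cap the rating" rule are assumptions for illustration; the actual workflow's scoring rules may differ:

```javascript
// Assumed mapping from sentiment labels to numeric scores (out of 5).
const LABEL_SCORES = { positive: 5, friendly: 5, helpful: 5, clear: 4, neutral: 3, confusing: 2, rude: 1, negative: 1 };

function overallRating(analysis) {
  // analysis: { greeting, problemSolving, closing, agentAttitude, customerSatisfaction, issueResolved }
  const labels = [analysis.greeting, analysis.problemSolving, analysis.closing,
                  analysis.agentAttitude, analysis.customerSatisfaction];
  // Unknown labels default to neutral (3).
  const base = labels.reduce((sum, l) => sum + (LABEL_SCORES[l] ?? 3), 0) / labels.length;
  // Illustrative rule: an unresolved issue caps the rating at 3.
  return analysis.issueResolved ? Math.round(base * 10) / 10 : Math.min(base, 3);
}
```

Writing the result back to the "Overall Call Rating" column is then a plain Google Sheets update per row.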
by Robert Breen
💬 Chat with Your Trello Board (n8n + OpenAI)

📖 Description
Turn your Trello board into a conversational assistant. This workflow pulls your board → lists → cards, aggregates the context, and lets you ask natural-language questions ("what's overdue?", "summarize In Progress", "what changed this week?"). OpenAI reasons over the live board data and replies with concise answers or summaries. Great for standups, planning, and quick status checks, without opening Trello.

> Setup steps are already embedded in the workflow (Trello API + OpenAI + board URL). Just follow the sticky notes inside the canvas.

🧪 Example prompts
- "Give me a one-paragraph summary of the board."
- "List all cards due this week with their lists."
- "What's blocking items in 'In Progress'?"
- "Show new cards added in the last 2 days."

⚙️ Setup Instructions

1️⃣ Connect Trello (Developer API)
- Get your API key: https://trello.com/app-key
- Generate a token (from the same page → Token)
- In n8n → Credentials → New → Trello API, paste the API key and token, then save.
- Open each Trello node (Get Board, Get Lists, Get Cards) and select your Trello credential.

2️⃣ Set Up OpenAI Connection
- Go to the OpenAI Platform
- Navigate to OpenAI Billing and add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n

3️⃣ Add Your Board URL to "Get Board"
- Copy your Trello board URL (e.g., https://trello.com/b/DCpuJbnd/administrative-tasks).
- Open the Get Board node → Resource: Board, Operation: Get.
- In ID, choose URL mode and paste the board URL.
- The node resolves the board and outputs its id, which is used by Get Lists / Get Cards.

📬 Contact
Need help customizing this or adding Slack/email outputs?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
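The aggregation step (board → lists → cards into one text context) might look like this in a Code node. The data shapes loosely mirror Trello's API (`id`, `idList`, `name`, `due`), but treat this as an illustrative sketch rather than the workflow's actual code:

```javascript
// Flatten a board, its lists, and its cards into one markdown-style context
// string that the LLM can reason over. Field names mirror Trello's API loosely.
function boardToContext(board, lists, cards) {
  const byList = new Map(lists.map((l) => [l.id, []]));
  for (const card of cards) {
    const bucket = byList.get(card.idList);
    if (bucket) bucket.push(card); // cards on unknown lists are skipped
  }
  const sections = lists.map((l) => {
    const items = byList.get(l.id)
      .map((c) => `- ${c.name}${c.due ? ` (due ${c.due})` : ''}`)
      .join('\n');
    return `## ${l.name}\n${items || '(empty)'}`;
  });
  return `# ${board.name}\n\n${sections.join('\n\n')}`;
}
```

Feeding the model one compact text blob like this keeps the prompt small and lets questions like "what's overdue?" be answered from the `due` annotations.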
by Alex Berman
Who is this for
Sales development reps, growth marketers, and recruiters who need to find verified business email addresses at scale from a list of contacts, without manual lookups or guesswork.

How it works
1. A Set node holds your list of contacts (first name, last name, and company domain).
2. An HTTP Request node POSTs the contacts to the ScraperCity email-finder API, which returns a runId.
3. A second Set node stores the runId for use in subsequent requests.
4. The workflow waits 30 seconds, then polls the ScraperCity status endpoint in a loop until the job status is SUCCEEDED.
5. Once complete, the results are downloaded via the ScraperCity download endpoint.
6. A Code node parses the response and formats each contact row.
7. Results are written to Google Sheets, giving you a clean, ready-to-use email list.

How to set up
1. Create a ScraperCity account at scrapercity.com and copy your API key.
2. In n8n, create an HTTP Header Auth credential named "ScraperCity API Key" with header `Authorization` and value `Bearer YOUR_KEY`.
3. Set your Google Sheets document ID and sheet name in the Google Sheets node.
4. Update the contacts list in the Set Contact List node with your real contacts.

Requirements
- ScraperCity account with email-finder credits
- Google Sheets OAuth2 credential configured in n8n

How to customize the workflow
- Replace the manual contact list with a Google Sheets Get Rows node to process a dynamic list.
- Add a Slack or email notification node after the results are written to alert your team.
- Add a Filter node to keep only contacts where an email was successfully found.
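The wait-then-poll pattern in step 4 can be sketched as below. `checkStatus` stands in for the HTTP call to the status endpoint; the response shape (`{ status }`) and the `SUCCEEDED`/`FAILED` values are assumptions based on the description:

```javascript
// Poll until the job reports SUCCEEDED, with a bounded number of attempts.
// checkStatus() is a stand-in for the HTTP request to the status endpoint.
async function pollUntilDone(checkStatus, { intervalMs = 30000, maxAttempts = 20 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await checkStatus();
    if (status === 'SUCCEEDED') return true;
    if (status === 'FAILED') throw new Error('ScraperCity run failed');
    // Sleep between polls to avoid hammering the API.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for run to finish');
}
```

In n8n this loop is built from a Wait node plus an If node feeding back on itself; the sketch just makes the control flow explicit.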
by Guillaume Duvernay
Create a Telegram bot that answers questions using AI-powered web search from Linkup and an LLM agent (GPT-4.1). This template handles both text and voice messages (voice is transcribed via a Mistral model by default), routes queries through an agent that can call a Linkup tool to fetch up-to-date information from the web, and returns concise, Telegram-friendly replies. A security switch lets you restrict use to a single Telegram username for private testing, or remove the filter to make the bot public.

Who is this for?
- **Anyone needing quick answers**: build a personal assistant that can look up current events, facts, and general knowledge on the web.
- **Support & ops teams**: provide quick, web-sourced answers to user questions without leaving Telegram.
- **Developers & automation engineers**: use this as a reference for integrating agents, transcription, and web-search tools inside n8n.
- **No-code builders**: quickly deploy a chat interface that uses Linkup for accurate, source-backed answers from the web.

What it does / What problem does this solve?
- **Provides accurate, source-backed answers**: routes queries to **Linkup** so replies are grounded in up-to-date web-search results instead of the LLM's static knowledge.
- **Handles voice & text transparently**: accepts Telegram voice messages, transcribes them (via the **Mistral** API node by default), and treats transcripts the same as typed text.
- **Simple agent + tool architecture**: uses a **LangChain AI Agent** with a **Web search** tool to separate reasoning from information retrieval.
- **Privacy control**: includes a **Myself?** filter to restrict access to a specific Telegram username for safe testing.

How it works
1. Trigger: the Telegram Trigger receives incoming messages (text or voice).
2. Route: the Message Router detects voice vs. text. Voice files are fetched with Get Audio File.
3. Transcribe: Mistral transcribe receives the audio file and returns a transcript; the transcript or text is normalized into preset_user_message and consolidated in Consolidate user message.
4. Agent: the AI Agent (configured with GPT-4.1-mini) runs with a system prompt that instructs it to call the Web search tool when up-to-date knowledge is required.
5. Respond: the agent output is sent back to the user via Telegram answer.

How to set up
1. Create a Linkup account: sign up at https://linkup.so to get your API key. They offer a free tier with monthly credits.
2. Add credentials in n8n: configure Telegram API, OpenAI (or your LLM provider), and Mistral Cloud credentials.
3. Configure the Linkup tool: in the Web search node, find the "Headers" section. In the Authorization header, replace `Bearer <your-linkup-api-key>` with your actual Linkup API key.
4. Set Telegram privacy (optional): edit the Myself? If node and replace `<Replace with your Telegram username>` with your username to restrict access. Remove the node to allow public use.
5. Adjust transcription (optional): swap the Mistral transcribe HTTP node for another provider (OpenAI Whisper, etc.).
6. Connect the LLM: in the OpenAI Chat Model node, add your OpenAI API key (or configure another LLM node) and ensure the AI Agent node references this model.
7. Activate the workflow and test by messaging your bot in Telegram.

Requirements
- An n8n instance (cloud or self-hosted)
- A Telegram bot token added in n8n credentials
- A Linkup account and API key
- An LLM provider account (OpenAI or equivalent) for the OpenAI Chat Model node
- A Mistral API key (or other transcription provider) for voice transcription

How to take it further
- **Add provenance & sources**: parse Linkup responses and include short citations or source links in the agent replies.
- **Rich replies**: use Telegram media (images, files) or inline keyboards to create follow-up actions (open web pages, request feedback, escalate to humans).
- **Multi-user access control**: replace the single-username filter with a list or role-based access system (an Airtable or Google Sheets lookup) to allow multiple trusted users.
- **Logging & analytics**: save queries and agent responses to **Airtable** or **Google Sheets** for monitoring, quality checks, and prompt improvement.
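The Myself? gate boils down to a single username comparison. A minimal sketch, with the allowed username as a placeholder mirroring the node's `<Replace with your Telegram username>` token:

```javascript
// Placeholder: replace with your own Telegram username.
const ALLOWED_USERNAME = 'your_telegram_username';

// Returns true only when the incoming Telegram message is from the allowed user.
// Telegram usernames are case-insensitive, so compare in lowercase.
function isAllowed(message) {
  const from = message?.from?.username || '';
  return from.toLowerCase() === ALLOWED_USERNAME.toLowerCase();
}
```

Swapping this for a multi-user list (the "Multi-user access control" idea above) would mean checking membership in an array or a sheet lookup instead of a single string.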
by Lucas Hideki
How it works
1. A webhook receives a job ID and a list of candidate IDs from your database.
2. If the job has no template yet, Prompt 0 reads the job description and automatically extracts mandatory requirements, differentials, and behavioral competencies, and sets the weight of each criterion.
3. For each candidate, three prompts run sequentially with accumulated context:
   - Prompt 1 scores the candidate (0–100) against the job template using calibration anchors to avoid score inflation, plus a breakdown score per criterion.
   - Prompt 2 receives the score as context and identifies strengths with concrete resume evidence, separating critical gaps (missing mandatory requirements) from secondary gaps (missing differentials).
   - Prompt 3 receives the gaps as context and generates personalized interview questions for that specific candidate, not generic HR templates.
4. Results are saved directly to PostgreSQL after each candidate.
5. When all candidates are processed, Prompt 4 automatically generates an executive summary of the entire pool with recommendations on who to interview.

Set up steps
1. Add your OpenAI credentials to all AI nodes (~2 min)
2. Add your PostgreSQL credentials to all Postgres nodes (~2 min)
3. Create the required tables using the SQL schema provided in the workflow sticky note (~5 min)
4. Trigger via POST /webhook/cv-analyze with `{ "job_id": 1, "candidate_ids": [1, 2, 3] }`
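The "accumulated context" idea above can be sketched as a small pipeline: each prompt's output is folded into the context handed to the next prompt. The prompt names, context fields, and `runPrompt` helper are invented for illustration; `runPrompt` stands in for the actual LLM call:

```javascript
// Sequential per-candidate analysis with accumulated context.
// runPrompt(name, ctx) is a stand-in for the LLM call for that prompt.
async function analyzeCandidate(candidate, template, runPrompt) {
  const ctx = { candidate, template };

  ctx.score = await runPrompt('prompt1-score', ctx);         // 0-100 + per-criterion breakdown
  ctx.gaps = await runPrompt('prompt2-gaps', ctx);           // sees the score from prompt 1
  ctx.questions = await runPrompt('prompt3-questions', ctx); // sees both score and gaps

  return ctx; // persisted to PostgreSQL after each candidate
}
```

Because each call receives the growing `ctx` object, later prompts can reference earlier conclusions instead of re-deriving them from the raw resume.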
by Lucas Hideki
How it works
- Any external system triggers a reminder via webhook with a tenant token. The workflow validates the token, fetches the tenant's channel config and message template from PostgreSQL, renders the message with event variables, and sends it immediately.
- A schedule trigger runs every minute and queries events approaching their deadline window per tenant. Idempotency via a reminders_sent table ensures the same reminder is never sent twice.
- A built-in n8n form lets you register new tenants with their channel, message template, and timing rules; no external backend is needed.
- Every send attempt is logged to the database with its status, the message sent, and error details.

Set up steps
1. Add your PostgreSQL credentials to all Postgres nodes (~2 min)
2. Add your Telegram credentials to the Send Message node (~2 min)
3. Create the required tables using the SQL schema provided in the workflow sticky note (~10 min)
4. Register your first tenant at /form/multi-tenant-register
5. Send events via POST /webhook/multi-tenant-webhook with an x-tenant-token header
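The idempotency idea behind the reminders_sent table can be sketched in a few lines: build a deterministic key per (tenant, event, reminder window) and skip any send whose key was already recorded. The key components and the in-memory `Set` (which stands in for the SELECT/INSERT on reminders_sent) are assumptions for illustration:

```javascript
// In-memory stand-in for the reminders_sent table.
const sent = new Set();

// Deterministic dedupe key per tenant, event, and reminder window.
function reminderKey(tenantId, eventId, windowMinutes) {
  return `${tenantId}:${eventId}:${windowMinutes}`;
}

// Returns true on the first call for a given key, false on repeats,
// so the every-minute schedule never double-sends the same reminder.
function shouldSend(tenantId, eventId, windowMinutes) {
  const key = reminderKey(tenantId, eventId, windowMinutes);
  if (sent.has(key)) return false;
  sent.add(key);
  return true;
}
```

In the real workflow the same effect is typically achieved with a unique constraint on the table and an insert-before-send check, which also survives restarts.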