by iamvaar
This workflow automates the process of analyzing a contract submitted via a web form. It extracts the text from an uploaded PDF, uses AI to identify potential red flags, and sends a summary report to a Telegram chat.

## Prerequisites

Before you can use this workflow, you'll need a few things set up.

### 1. JotForm Form
You need to create a form in JotForm with at least two specific fields:
- **Email Address**: A standard field to collect the user's email.
- **File Upload**: This field will be used to upload the contract or NDA. Make sure to configure it to allow .pdf files.

### 2. API Keys and IDs
- **JotForm API Key**: You can generate this from your JotForm account settings under the "API" section.
- **Gemini API Key**: You'll need an API key from Google AI Studio to use the Gemini model.
- **Telegram Bot Token**: Create a new bot by talking to @BotFather on Telegram. It will give you a unique token.
- **Telegram Chat ID**: This is the ID of the user, group, or channel you want the bot to send messages to. You can get this by using a bot like @userinfobot.

## Node-by-Node Explanation

Here is a breakdown of what each node in the workflow does, in the order they execute.

### 1. JotForm Trigger
- **What it does**: This node kicks off the entire workflow. It actively listens for new submissions on the specific JotForm you select.
- **How it works**: When someone fills out your form and hits "Submit," JotForm sends the submission data (including the email and a link to the uploaded file) to this node.

### 2. Grab Attachment Details (HTTP Request)
- **What it does**: The initial data from JotForm doesn't contain a direct download link for the file. This node takes the submissionID from the trigger and makes a request to the JotForm API to get the full details of that submission.
- **How it works**: It constructs a URL using the submissionID and your JotForm API key to fetch the submission data, which includes the proper download URL for the uploaded contract (see the sketch at the end of this section).

### 3. Grab the Attached Contract (HTTP Request)
- **What it does**: Now that it has the direct download link, this node fetches the actual PDF file.
- **How it works**: It uses the file URL obtained from the previous node to download the contract. The node is set to expect a "file" as the response, so it saves the PDF data in binary format for the next step.

### 4. Extract Text from PDF File
- **What it does**: This node takes the binary PDF data from the previous step and extracts all the readable text from it.
- **How it works**: It processes the PDF and outputs plain text, stripping away any formatting or images. This raw text is now ready to be analyzed by the AI.

### 5. AI Agent (with Google Gemini Chat Model)
- **What it does**: This is the core analysis engine of the workflow. It takes the extracted text from the PDF and uses a powerful prompt to analyze it. The "Google Gemini Chat Model" node is connected as its "brain."
- **How it works**: It sends the contract text to the Gemini model. The prompt instructs Gemini to act as an expert contract analyst. It specifically asks the AI to identify major red flags and hidden/unfair clauses. It also tells the AI to format the output as a clean report using Telegram's MarkdownV2 style and to keep the response under 1500 characters.

### 6. Send a text message (Telegram)
- **What it does**: This is the final step. It takes the formatted analysis report generated by the AI Agent and sends it to your specified Telegram chat.
- **How it works**: It connects to your Telegram bot using your Bot Token and sends the AI's output ($json.output) to the Chat ID you've provided.
Because the AI was instructed to format the text in MarkdownV2, the message will appear well-structured in Telegram with bolding and bullet points.
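For reference, here is a minimal sketch of what steps 2 and 3 do, written as a single n8n Code node. It assumes the JotForm REST API's submission endpoint; the file-field lookup below is a hypothetical example, since the exact answer key depends on your form layout.

```javascript
// n8n Code node: fetch the full submission from the JotForm API to
// recover the direct download URL for the uploaded contract.
const submissionId = $json.submissionID;  // from the JotForm Trigger
const apiKey = 'YOUR_JOTFORM_API_KEY';    // placeholder: use a credential instead

const res = await fetch(
  `https://api.jotform.com/submission/${submissionId}?apiKey=${apiKey}`
);
const data = await res.json();

// The uploaded file URL lives in the submission's answers. Finding it by
// field type is a hypothetical shortcut; verify against your form.
const fileUrl = Object.values(data.content.answers)
  .find((a) => a.type === 'control_fileupload')?.answer?.[0];

return [{ json: { fileUrl } }];
```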
by Anderson Adelino
# Voice Assistant Interface with n8n and OpenAI

This workflow creates a voice-activated AI assistant interface that runs directly in your browser. Users can click on a glowing orb to speak with the AI, which responds with voice using OpenAI's text-to-speech capabilities.

## Who is it for?
This template is perfect for:
- Developers looking to add voice interfaces to their applications
- Customer service teams wanting to create voice-enabled support systems
- Content creators building interactive voice experiences
- Anyone interested in creating their own "Alexa-like" assistant

## How it works
The workflow consists of two main parts:
- **Frontend Interface**: A beautiful animated orb that users click to activate voice recording
- **Backend Processing**: Receives the audio transcription, processes it through an AI agent with memory, and returns voice responses

The system uses:
- Web Speech API for voice recognition (browser-based)
- OpenAI GPT-4o-mini for intelligent responses
- OpenAI Text-to-Speech for voice synthesis
- Session memory to maintain conversation context

## Setup requirements
- n8n instance (self-hosted or cloud)
- OpenAI API key with access to the GPT-4o-mini model and the Text-to-Speech API
- Modern web browser with Web Speech API support (Chrome, Edge, Safari)

## How to set up
1. Import the workflow into your n8n instance
2. Add your OpenAI credentials to both OpenAI nodes
3. Copy the webhook URL from the "Audio Processing Endpoint" node
4. Edit the "Voice Assistant UI" node and replace YOUR_WEBHOOK_URL_HERE with your webhook URL
5. Access the "Voice Interface Endpoint" webhook URL in your browser
6. Click the orb and start talking!

## How to customize the workflow
- **Change the AI personality**: Edit the system message in the "Process User Query" node
- **Modify the visual style**: Customize the CSS in the "Voice Assistant UI" node
- **Add more capabilities**: Connect additional tools to the AI Agent
- **Change the voice**: Select a different voice in the "Generate Voice Response" node
- **Adjust memory**: Modify the context window length in the "Conversation Memory" node

## Demo
Watch the template in action: https://youtu.be/0bMdJcRMnZY
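For orientation, here is a minimal browser-side sketch of how the orb frontend might wire the Web Speech API to the webhook. The webhook URL, the request payload, and the assumption that the workflow responds with binary audio are all placeholders to verify against the actual "Voice Assistant UI" node.

```javascript
// Browser-side sketch: capture speech, POST the transcript to the n8n
// webhook, and play the returned audio.
const WEBHOOK_URL = 'YOUR_WEBHOOK_URL_HERE'; // from the Audio Processing Endpoint node

const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.lang = 'en-US';

recognition.onresult = async (event) => {
  const transcript = event.results[0][0].transcript;
  const res = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ transcript, sessionId: 'demo-session' }),
  });
  // Assumption: the workflow responds with the TTS audio as a binary body.
  const audio = new Audio(URL.createObjectURL(await res.blob()));
  audio.play();
};

// Wire the orb: click to start listening ("orb" is a hypothetical element id).
document.getElementById('orb').addEventListener('click', () => recognition.start());
```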
by Shashwat Singh
Never lose an inbound lead because someone missed the phone. This workflow captures missed inbound calls from Twilio, logs them, notifies your team, and instantly sends a context-aware SMS to the caller. It automatically adapts messaging based on business hours, ensuring fast response during working hours and clear expectations after hours. Built for small service businesses, agencies, clinics, and local operators who rely heavily on phone calls.

## How it works
1. Receives inbound call events from Twilio via webhook
2. Filters for failed, busy, or no-answer call statuses
3. Checks whether the call happened during business hours
4. Logs call details to Google Sheets for tracking and follow-up
5. Sends an automatic SMS reply to the caller:
   - During business hours, promises a quick callback
   - After hours, sets next-business-day expectations
6. Notifies your team in Slack with caller details

This creates instant acknowledgment, internal visibility, and a structured follow-up trail.

## Setup steps
1. Connect your Twilio account and configure the webhook URL
2. Connect Google Sheets and select your tracking sheet
3. Connect Slack and choose a notification channel
4. Adjust business hours and timezone inside the code node (see the sketch below)
5. Customize SMS copy to match your brand voice

Setup typically takes 10 to 15 minutes.
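A minimal sketch of what the business-hours check inside the code node might look like, assuming a 9-to-5 weekday schedule. The hours, timezone, and output field name are placeholders; adjust them to match your business.

```javascript
// n8n Code node sketch: flag whether the call arrived during business hours.
const TIMEZONE = 'America/New_York'; // placeholder: set your own timezone
const OPEN_HOUR = 9;                 // 9 AM
const CLOSE_HOUR = 17;               // 5 PM

// Resolve the current hour and weekday in the business timezone.
const parts = new Intl.DateTimeFormat('en-US', {
  timeZone: TIMEZONE,
  hour: 'numeric',
  hour12: false,
  weekday: 'short',
}).formatToParts(new Date());

const hour = Number(parts.find((p) => p.type === 'hour').value);
const weekday = parts.find((p) => p.type === 'weekday').value;

const isWeekday = !['Sat', 'Sun'].includes(weekday);
const isBusinessHours = isWeekday && hour >= OPEN_HOUR && hour < CLOSE_HOUR;

return [{ json: { ...$json, isBusinessHours } }];
```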
by BytezTech
# Generate 360° product videos from a single photo using Google Veo 3 and Telegram

## 📌 Overview
This workflow turns any product photo into a cinematic 360° orbit video using Google Vertex AI (Veo 3) — fully automated and delivered straight to Telegram. Send a product image to your Telegram bot and the workflow handles everything: image validation, Google Cloud authentication, AI video generation, and delivery. No manual steps, no dashboard — just send a photo and receive a professional video.

Built for e-commerce sellers, product photographers, and marketers who want studio-quality 360° product videos without expensive equipment or editing software.

## ⚙️ How it works
1. User sends a product photo to the Telegram bot
2. The workflow validates the image (minimum 480px resolution)
3. A Service Account stored in Google Sheets is used to authenticate with Google Cloud and generate a short-lived OAuth token (see the sketch at the end of this section)
4. The image is sent to Vertex AI Veo 3 with a cinematic 360° orbit camera prompt
5. The workflow polls every 2 minutes until the video is ready (up to 10 minutes)
6. The finished video is delivered back to the user in Telegram

## 🛠️ Setup steps
1. Create a Telegram bot via @BotFather and add the bot credentials in n8n
2. Enable the Vertex AI API in your Google Cloud project
3. Request access to the Veo 3 preview model in Google Cloud Console
4. Create a Google Service Account with the role roles/aiplatform.user
5. Download the Service Account JSON key
6. Create a Google Sheet (Sheet1) with these columns: client_email | private_key | project_id | scope
7. Paste your Service Account JSON values into the sheet
8. Update the "1. Get Service Account Details" node with your Google Sheet ID
9. Connect your Google Sheets and Telegram credentials in n8n
10. Activate the workflow and send a product photo to your bot

## 🚀 Features
**AI-powered video generation**
- Generates cinematic 360° orbit product videos from a single photo
- Uses Google Veo 3 (latest AI video generation model)
- Adds studio lighting and clean white background automatically
- Supports optional product caption as additional AI context
- Audio generation included by default

**Smart error handling**
- Validates image resolution before processing (minimum 480px)
- Catches and reports image conversion failures
- Timeout protection after 10 minutes with a user-friendly error message
- All errors are sent back to the user as Telegram messages

**Secure authentication**
- Service Account credentials stored safely in Google Sheets
- JWT signed locally — no third-party auth services required
- Fresh OAuth token generated on every request

## 📋 Requirements
- n8n (self-hosted or cloud)
- Telegram Bot (via @BotFather)
- Google Cloud project with Vertex AI API enabled
- Google Veo 3 preview access (request via Google Cloud Console)
- Google Service Account with roles/aiplatform.user
- Google Sheets (to store Service Account credentials)

## 🎯 Benefits
- No expensive equipment or video editing software needed
- Fully automated — send a photo, receive a video
- Works for any physical product
- Scales to multiple users via Telegram
- Videos ready in 3–5 minutes on average

## 👨‍💻 Author
BytezTech Pvt Ltd
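For reference, here is a hedged sketch of the local JWT signing and token exchange described under "Secure authentication", written as an n8n Code node. The field names mirror the Google Sheet columns above; the flow itself is Google's standard service-account OAuth exchange.

```javascript
// n8n Code node sketch: sign a JWT with the Service Account key and exchange
// it for a short-lived OAuth access token. On self-hosted n8n, built-in
// modules may need NODE_FUNCTION_ALLOW_BUILTIN=crypto to be importable.
const crypto = require('crypto');

const { client_email, private_key, scope } = $json; // from the Sheets node
const now = Math.floor(Date.now() / 1000);

const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');

const unsigned =
  b64url({ alg: 'RS256', typ: 'JWT' }) +
  '.' +
  b64url({
    iss: client_email,
    scope, // e.g. https://www.googleapis.com/auth/cloud-platform
    aud: 'https://oauth2.googleapis.com/token',
    iat: now,
    exp: now + 3600, // short-lived: one hour
  });

// Sheets often store the key with literal "\n" sequences; restore real newlines.
const signature = crypto
  .createSign('RSA-SHA256')
  .update(unsigned)
  .sign(private_key.replace(/\\n/g, '\n'), 'base64url');

// Exchange the signed JWT for an OAuth access token.
const res = await fetch('https://oauth2.googleapis.com/token', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({
    grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    assertion: `${unsigned}.${signature}`,
  }),
});
const { access_token } = await res.json();

return [{ json: { access_token } }];
```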
by Thesys
# Build your own Shopify Store Agent with Shopify MCP and C1 by Thesys

This n8n template sets up an embeddable web chat widget for your Shopify store. Check out a working demo of this template here.

## What this workflow does
1. A user sends a message in the n8n Chat UI (public chat trigger).
2. The AI Agent interprets the request.
3. The agent calls your Shopify Storefront MCP to fetch live catalog data (products, cart, checkout, etc.).
4. The model responds through C1 by Thesys with a streaming UI answer.

## Example prompts you can try right away
Copy/paste any of these into the chat:
- "What are the products in the catalog?"
- "Purchase white shirt"
- "Checkout my cart"

## How it works
1. User sends a prompt
2. Based on the prompt, the C1 model uses the Shopify MCP to fetch the live catalog
3. The C1 model generates a UI schema response
4. The schema is rendered as UI using the Thesys GenUI SDK on the frontend

## Setup
Make sure you have the following:

1. **Thesys API Key**: You'll need an API key to authenticate and use Thesys services. Get your key here.
2. **Shopify MCP URL**: You'll need the URL of your Shopify Storefront MCP to access the catalog. For more information, please refer to the Shopify MCP Docs.

## What is C1 by Thesys?
C1 by Thesys is an API middleware that augments LLMs to respond with interactive UI (charts, buttons, forms) in real time instead of text.

## Facing setup issues?
If you get stuck or have questions:
- 💬 Join the Thesys Community
- 📧 Email support: support@thesys.dev
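If you want to probe your store's MCP endpoint before wiring it into the agent, here is a hedged sketch of a JSON-RPC tools/list call. The URL pattern and the expected tool set are assumptions; confirm both against the Shopify MCP Docs for your store.

```javascript
// Hedged sketch: ask a Shopify Storefront MCP endpoint which tools it
// exposes, using a plain JSON-RPC 2.0 request.
const MCP_URL = 'https://your-store.myshopify.com/api/mcp'; // assumed URL pattern

const res = await fetch(MCP_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'tools/list', // standard MCP method for tool discovery
  }),
});

// Expect catalog and cart tools in the result (exact names vary by store).
console.log(JSON.stringify(await res.json(), null, 2));
```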
by 3D Measure Up
## Description
This workflow helps developers and automation teams route measurement data from a webhook to multiple destinations using n8n. It is designed for measurement engines, 3D scanning systems, QA tools, and API-based platforms that send structured JSON data and need a fast, flexible way to store, display, or process it without building a custom backend.

The workflow starts with a Webhook Trigger that receives incoming measurement data and validates the payload. A processing step normalizes the fields into a clean, consistent format so the data can be reused across different outputs. A routing node then allows users to choose how they want to handle the data. You can append measurements to Google Sheets for reporting, display them as a formatted HTML page for quick viewing, export them as a CSV and send them by email, or run custom JavaScript logic to forward the data to APIs, cloud storage, or other n8n workflows. Clear sticky notes inside the workflow guide users through setup, credential configuration, and customization, making this template beginner-friendly and production-ready.

## How it works
1. A Webhook Trigger receives measurement data as a JSON payload.
2. The workflow validates and normalizes the data structure (see the sketch at the end of this section).
3. A Switch node routes the data to the selected output path.
4. The data is sent to Google Sheets, an HTML response, a CSV email, or a custom JavaScript step.

## How to set up
1. Import and activate the workflow in n8n.
2. Copy the Webhook URL from the Webhook Trigger node.
3. Open the 3D Measure Up Web Application: https://3dmeasureup.ai/
4. Log in and navigate to Settings → Webhooks.
5. Paste the webhook URL into the Webhook URL field and click Save.
6. Connect Google Sheets and email credentials if needed.
7. Use the configuration fields and sticky notes inside the workflow to select and customize your output path.

Setup usually takes 5–10 minutes.

## Requirements
- n8n (cloud or self-hosted)
- A webhook URL generated by this workflow
- **3D Measure Up** (sends measurement data as JSON to the webhook URL)
- Google account (optional, for Google Sheets output)
- Email credentials (optional, for CSV email delivery)

## How to update the Webhook URL in 3D Measure Up
1. Open the 3D Measure Up Web Application: https://3dmeasureup.ai/
2. Log in to your account.
3. Navigate to Settings → Webhooks.
4. Copy the Webhook Trigger URL from this n8n workflow.
5. Paste the URL into the Webhook URL field in 3D Measure Up.
6. Click Save to apply the configuration.

Once saved, the webhook becomes active immediately. From now on, whenever new measurements are generated, the data will be sent automatically to your n8n workflow.

## How to customize the workflow
- Adjust the processing node to match your measurement schema.
- Enable or disable outputs using the Switch node.
- Edit the HTML template for branding or layout.
- Add API calls or storage logic inside the JavaScript node.
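As a starting point, here is a hedged sketch of the normalization step as an n8n Code node. The incoming field names are hypothetical; match them to the actual 3D Measure Up payload.

```javascript
// n8n Code node sketch: normalize the incoming webhook payload into a
// consistent shape for the downstream outputs. Field names are hypothetical.
const body = $json.body ?? $json; // webhook data usually arrives under "body"

const normalized = {
  subjectId: body.subjectId ?? body.id ?? 'unknown',
  receivedAt: new Date().toISOString(),
  // Coerce every measurement to a number with a consistent unit label.
  measurements: Object.entries(body.measurements ?? {}).map(([name, value]) => ({
    name,
    value: Number(value),
    unit: body.unit ?? 'cm',
  })),
};

return [{ json: normalized }];
```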
by Eugene
# Run a multi-agent SEO domain audit with SE Ranking and Claude

## Who is this for
- SEO agencies running competitor analysis for clients
- Content teams planning editorial strategies
- Marketing teams tracking competitive performance

## What this workflow does
Enter any domain and get a full SEO strategy report. Five AI agents analyze your technical health, backlinks, keywords, AI visibility, and competitors — then a Strategy Director builds a prioritized 90-day action plan.

## What you'll get
- Domain performance baseline (keywords, traffic, traffic value)
- Technical SEO audit with health score and Core Web Vitals
- Backlink profile with anchor text analysis
- Top competitors discovered by keyword overlap
- AI visibility across ChatGPT, Perplexity, Gemini, and AI Overviews
- Prioritized 90-day action plan from the Strategy Director
- Full report in Google Drive + metrics in Google Sheets

## How it works
1. You enter a domain, business description, and target market via a form
2. The workflow pulls domain overview, keywords, competitors, backlinks, and audit data in parallel
3. It checks AI search visibility across 4 engines
4. Four specialist agents analyze the data (Technical SEO, Links, Keywords, AI Visibility)
5. A Strategy Director agent reviews everything and builds a unified plan
6. The report is saved to Google Drive and metrics to Google Sheets

## Requirements
- SE Ranking community node v1.3.5+ (install from npm)
- SE Ranking API token (get one here)
- Anthropic API key (for Claude)
- Google Drive + Sheets accounts (optional)

## Setup
1. Install the SE Ranking community node v1.3.5+
2. Add your SE Ranking and Anthropic API credentials
3. Connect Google Drive and Google Sheets (optional)
4. Open the form, enter a domain, and run it

## Customization
- Change the target market dropdown to add more countries
- Swap Claude models for different cost/speed tradeoffs
- Add a Slack notification node at the end for team alerts
by Masaki Go
# Slack Bot for n8n Template Search with AI Tips, Cache and Analytics

Search n8n workflow templates directly from Slack with AI-powered suggestions. Mention the bot with what you need in English, Spanish or Japanese and get matching templates plus actionable tips to improve your automation.

## Who is this for
Teams using n8n who want to find workflow templates without leaving Slack. Great for multilingual teams and onboarding new members.

## What this workflow does
- Detects user intent (search, help, or browse categories) and routes accordingly
- Extracts keywords from 200+ known services and translates 150+ Japanese business terms to English (see the sketch at the end of this section)
- Checks a Google Sheets cache before calling the n8n Templates API
- Uses OpenAI (gpt-4o-mini) to generate contextual tips based on the search results and use case
- When no templates are found, the AI suggests alternative keywords and how to build the workflow from scratch
- Logs every search to Google Sheets and posts a weekly usage report to Slack

## Setup
1. Create a Slack App with app_mentions:read and chat:write scopes
2. Set Slack credentials in n8n
3. Create an HTTP Header Auth credential for OpenAI (name: Authorization, value: Bearer sk-your-key)
4. Create a Google Sheet with two tabs: Cache (SearchQuery, CachedResponse, ResultCount, Timestamp) and Analytics (Timestamp, User, Query, Keywords, ResultCount, Intent, FromCache)
5. Connect the Google Sheet in all four Sheets nodes
6. Select your Slack channel in the Trigger, Error Reply and Weekly Summary nodes
7. Activate and test with a mention

## How to customize
- Add services to knownServices or extend jaToEn for more languages
- Edit the AI system prompts to change tone or tip style
- Adjust the weekly report schedule in the Schedule Trigger node
- Replace Google Sheets cache with Redis for better performance at scale
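To illustrate the keyword step, here is a hedged sketch with trimmed-down stand-ins for the workflow's knownServices and jaToEn dictionaries. The Slack text path is an assumption; check your Slack Trigger's actual output.

```javascript
// n8n Code node sketch: translate known Japanese terms, then match services.
// Both dictionaries are tiny stand-ins for the workflow's full lists.
const knownServices = ['slack', 'gmail', 'notion', 'airtable', 'openai'];
const jaToEn = { '請求書': 'invoice', '勤怠': 'attendance', '顧客': 'customer' };

// Assumption: the mention text sits at $json.event.text; the exact path
// depends on how the Slack Trigger is configured.
let query = ($json.event?.text ?? '').toLowerCase();

for (const [ja, en] of Object.entries(jaToEn)) {
  query = query.replaceAll(ja, en);
}

const keywords = knownServices.filter((s) => query.includes(s));

return [{ json: { query, keywords: keywords.length ? keywords : [query] } }];
```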
by Naveen Choudhary
Automatically gather hundreds of real customer reviews from five major platforms in one run using the Thordata API and proxy — Trustpilot, Capterra, Chrome Web Store, TrustRadius, and Product Hunt — then let GPT-4.1 perform deep collective sentiment analysis, uncover common praises and complaints, flag critical issues, assess churn risk, and deliver actionable recommendations straight to your inbox as a stunning executive HTML report.

## Who's it for
- Product managers & founders
- Growth and marketing teams
- Customer success & support leads
- Agencies delivering competitor or product review reports

## How it works
1. Submit product URLs via form or webhook, or use the defaults
2. Smart, Cloudflare-safe scraping with automatic pagination
3. A universal parser standardizes every review format
4. Global deduplication using deterministic unique IDs (see the sketch at the end of this section)
5. GPT-4.1 analyzes all reviews collectively (not one-by-one)
6. A beautiful responsive HTML email with sentiment badges, stats, and recommendations

## Requirements
- Thordata API key (free tier works) → set as HTTP Header Auth credential
- OpenAI API key
- Gmail account (or replace with any email node)

## How to set up
1. Add your Thordata and OpenAI credentials
2. Connect Gmail
3. Click "Execute Workflow" — it instantly tests with Thordata's own reviews

## How to customize
- Edit the default product in the "Prepare Review Sources" node
- Modify the AI prompt or email design anytime
- Add more sources or change the output format easily

Zero browser automation · Rate-limit safe · Fully deduplicated · Plug-and-play in minutes.
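As an illustration of the deduplication step, here is a hedged sketch of deriving deterministic unique IDs in an n8n Code node. The choice of identifying fields is an assumption; the real workflow may hash a different combination.

```javascript
// n8n Code node sketch: derive a deterministic ID from stable review fields
// and keep only the first occurrence of each ID. (On self-hosted n8n,
// require('crypto') may need NODE_FUNCTION_ALLOW_BUILTIN=crypto.)
const crypto = require('crypto');

const seen = new Set();
const unique = [];

for (const item of $input.all()) {
  const r = item.json;
  // Assumption: platform + author + date + text uniquely identify a review.
  const id = crypto
    .createHash('sha256')
    .update(`${r.platform}|${r.author}|${r.date}|${r.text}`)
    .digest('hex');
  if (!seen.has(id)) {
    seen.add(id);
    unique.push({ json: { ...r, id } });
  }
}

return unique;
```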
by Jinash Rouniyar
## Problem
Evaluating and comparing responses from multiple LLMs (OpenAI, Claude, Gemini) can be challenging when done manually. Each model produces outputs that differ in clarity, tone, and reasoning structure. Traditional evaluation metrics like ROUGE or BLEU fail to capture nuanced quality differences. Human evaluations are inconsistent, slow, and difficult to scale.

This workflow automates LLM response quality evaluation using Contextual AI's LMUnit, a natural language unit testing framework that provides systematic, fine-grained feedback on response clarity and conciseness.

> Note: LMUnit offers natural language-based evaluation with a 1–5 scoring scale, enabling consistent and interpretable results across different model outputs.

## How it works
1. A chat trigger node collects responses from multiple LLMs such as **OpenAI GPT-4.1**, **Claude 4.5 Sonnet**, and **Gemini 2.5 Flash**. Each model receives the same input prompt to ensure a fair comparison; the responses are then aggregated and associated with each test case.
2. Contextual AI's **LMUnit** node evaluates each response using predefined quality criteria:
   - "Is the response clear and easy to understand?" (Clarity)
   - "Is the response concise and free from redundancy?" (Conciseness)
3. **LMUnit** then produces evaluation scores (1–5) for each test.
4. Results are aggregated and formatted into a structured summary showing model-wise performance and overall averages (see the sketch at the end of this section).

## How to set up
1. Create a free Contextual AI account and obtain your CONTEXTUALAI_API_KEY.
2. In your n8n instance, add this key as a credential under "Contextual AI."
3. Obtain and add credentials for each model provider you wish to test:
   - OpenAI API Key: platform.openai.com/account/api-keys
   - Anthropic API Key: console.anthropic.com/settings/keys
   - Gemini API Key: ai.google.dev/gemini-api/docs/api-key
4. Start sending prompts via the chat interface to automatically generate model outputs and evaluations.

## How to customize the workflow
- Add more evaluation criteria (e.g., factual accuracy, tone, completeness) in the LMUnit test configuration.
- Include additional LLM providers by duplicating the response generation nodes.
- Adjust thresholds and aggregation logic to suit your evaluation goals.
- Enhance the final summary formatting for dashboards, tables, or JSON exports.

For detailed API parameters, refer to the LMUnit API reference. If you have feedback or need support, please email feedback@contextual.ai.
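To make the aggregation step concrete, here is a hedged sketch of averaging LMUnit scores per model in an n8n Code node. The input field names (model, score) are assumptions about the upstream data shape.

```javascript
// n8n Code node sketch: average LMUnit scores (1-5) per model across all
// criteria. The model/score field names are assumptions.
const byModel = {};

for (const { json } of $input.all()) {
  const { model, score } = json;
  (byModel[model] ??= []).push(score);
}

const summary = Object.entries(byModel).map(([model, scores]) => ({
  model,
  tests: scores.length,
  average: +(scores.reduce((a, b) => a + b, 0) / scores.length).toFixed(2),
}));

return summary.map((s) => ({ json: s }));
```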
by Cheng Siong Chin
## How It Works
This workflow automates comprehensive enterprise risk assessment and mitigation planning for organizations managing complex operational, financial, and compliance risks. Designed for risk managers, internal audit teams, and executive leadership, it solves the challenge of continuously evaluating multi-dimensional risks, validating threat severity, and coordinating appropriate mitigation strategies across diverse business functions.

The system:
1. Triggers on-demand or scheduled assessments
2. Generates sample credential data for testing
3. Deploys a Coordination Agent to orchestrate specialized risk evaluations through parallel AI agents (Credential Validation verifies identity risks, Credential Verification confirms data accuracy, Risk Assessment evaluates threat levels)
4. Routes findings by severity (critical/high/medium/low), as shown in the sketch at the end of this section
5. Merges outputs into consolidated reports

By combining multi-agent risk analysis with intelligent prioritization and unified reporting, organizations achieve 360-degree risk visibility, reduce assessment cycles from weeks to hours, ensure consistent evaluation frameworks, and enable proactive mitigation before risks materialize into losses.

## Setup Steps
1. Connect the Manual Trigger for on-demand assessments or configure the Schedule Trigger for routine evaluations
2. Configure risk data sources
3. Add AI model API keys to the Coordination Agent and all specialized agents
4. Define risk scoring criteria and severity thresholds in agent prompts, aligned with company risk appetite
5. Configure routing conditions for each risk level with appropriate handling workflows
6. Set up the reporting output format and distribution channels for consolidated risk reports

## Prerequisites
Enterprise risk management system access, AI service accounts

## Use Cases
Cybersecurity risk assessments, fraud risk evaluations, third-party vendor risk reviews

## Customization
Modify agent prompts for industry-specific risk frameworks (NIST, ISO 31000, COSO)

## Benefits
Reduces risk assessment time from weeks to hours; provides 360-degree risk visibility
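As a concrete example of step 4, here is a hedged sketch of the severity routing expressed as code (the workflow may use a Switch node instead). The 0-100 scale, thresholds, and riskScore field are assumptions; align them with your scoring criteria.

```javascript
// n8n Code node sketch: map a numeric risk score to a severity bucket.
const score = $json.riskScore; // assumed 0-100, from the Risk Assessment agent

let severity;
if (score >= 90) severity = 'critical';
else if (score >= 70) severity = 'high';
else if (score >= 40) severity = 'medium';
else severity = 'low';

return [{ json: { ...$json, severity } }];
```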
by Daniel Turgeman
## How it works
1. Triggers when a contact property changes in HubSpot (e.g., added to a sequence)
2. Lusha enriches the contact with verified email, direct phone, and seniority
3. A prospect record is built and validated — contacts with an email are sent to your outreach tool and updated in HubSpot (see the sketch below)
4. Contacts missing an email are logged, and a Slack notification alerts the team

## Set up steps
1. Install the Lusha community node
2. Add your Lusha API, HubSpot, and Slack credentials
3. Configure the HubSpot trigger to listen for the property change that signals sequence enrollment
4. Update the outreach HTTP node with your engagement platform's API endpoint
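For orientation, here is a hedged sketch of the build-and-validate step before the outreach call, written as an n8n Code node. The outreach endpoint and every field name are placeholders; swap in your engagement platform's actual API.

```javascript
// n8n Code node sketch: assemble the prospect record and branch on email.
const prospect = {
  email: $json.email,          // verified by Lusha
  phone: $json.phone,
  seniority: $json.seniority,
  hubspotContactId: $json.vid, // hypothetical HubSpot id field
};

if (!prospect.email) {
  // Missing email: route to logging and the Slack alert instead of outreach.
  return [{ json: { ...prospect, skipOutreach: true } }];
}

await fetch('https://api.your-outreach-tool.example/v1/prospects', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(prospect),
});

return [{ json: { ...prospect, skipOutreach: false } }];
```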