by Angel Menendez
Enhance Security Operations with the Qualys Slack Shortcut Bot! Our Qualys Slack Shortcut Bot is designed to facilitate immediate security operations directly from Slack. This powerful tool allows users to initiate vulnerability scans and generate detailed reports through simple Slack interactions, streamlining the process of managing security assessments.

## Workflow Highlights
- **Interactive Modals**: Uses Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
- **Dynamic Workflow Execution**: Integrates seamlessly with Qualys to execute vulnerability scans and create reports based on user-specified parameters.
- **Real-Time Feedback**: Offers instant feedback within Slack, updating users on the status of their requests and delivering reports directly through Slack channels.

## Operational Flow
- **Parse Webhook Data**: Captures and parses incoming data from Slack to interpret user commands accurately. (A sketch of this parsing step appears at the end of this template description.)
- **Execute Actions**: Depending on the user's selection, the workflow triggers sub-workflows such as "Qualys Start Vulnerability Scan" or "Qualys Create Report" for detailed processing.
- **Respond to Slack**: Acknowledges every interaction, maintaining a smooth user experience by managing modal popups and sending appropriate responses.

## Setup Instructions
1. Verify that the Slack and Qualys API integrations are correctly configured for seamless interaction.
2. Customize the modal interfaces to align with your organization's operational protocols and security policies.
3. Test the workflow to ensure that it responds accurately to Slack commands and that the integration with Qualys is functioning as expected.

## Need Assistance?
Explore our Documentation or get help from the n8n Community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive management of vulnerabilities and streamlined reporting.

To handle the actual processing of requests, you will also need to deploy these two sub-workflows:
- Qualys Start Vulnerability Scan
- Qualys Create Report

To simplify deployment, use this Slack App manifest to quickly create an app with the correct permissions. Replace the `request_url` and `message_menu_options_url` values with your own workflow webhook URLs:

```json
{
  "display_information": {
    "name": "Qualys n8n Bot",
    "description": "n8n Integration for Qualys",
    "background_color": "#2a2b2e"
  },
  "features": {
    "bot_user": {
      "display_name": "Qualys n8n Bot",
      "always_online": false
    },
    "shortcuts": [
      {
        "name": "Scan Report Generator",
        "type": "global",
        "callback_id": "qualys-scan-report",
        "description": "Generate a report from the latest scan to review vulnerabilities and compliance."
      },
      {
        "name": "Launch Qualys VM Scan",
        "type": "global",
        "callback_id": "trigger-qualys-vmscan",
        "description": "Start a Qualys vulnerability scan from the comfort of your Slack workspace"
      }
    ]
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "commands",
        "channels:join",
        "channels:history",
        "channels:read",
        "chat:write",
        "chat:write.customize",
        "files:read",
        "files:write"
      ]
    }
  },
  "settings": {
    "interactivity": {
      "is_enabled": true,
      "request_url": "https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
      "message_menu_options_url": "https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad"
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
```
by Mark Shcherbakov
## Video Guide
I prepared a detailed guide showing the whole process of building a call analyzer.

## Who is this for?
This workflow is ideal for sales teams, customer support managers, and online education services that conduct follow-up calls with clients. It's designed for those who want to leverage AI to gain deeper insights into client needs and upsell opportunities from recorded calls.

## What problem does this workflow solve?
Many follow-up sales calls lack structured analysis, making it challenging to identify client needs, gauge interest levels, or uncover upsell opportunities. This workflow enables automated call transcription and AI-driven analysis to generate actionable insights, helping teams improve sales performance, refine client communication, and streamline upselling strategies.

## What this workflow does
This workflow transcribes and analyzes sales calls using AssemblyAI and OpenAI, and stores the structured results in Supabase. Recorded calls are processed as follows:
1. **Transcribe the call with AssemblyAI**: Converts audio into text with speaker labels for clarity.
2. **Analyze the transcription with OpenAI**: Using a predefined JSON schema, OpenAI analyzes the transcription to extract metrics like client intent, interest score, upsell opportunities, and more.
3. **Store and access results in Supabase**: Stores both transcription and analysis data in a Supabase database for further use and display in interfaces.

## Setup
### Preparation
1. **Create accounts**: Set up accounts for n8n, Supabase, AssemblyAI, and OpenAI.
2. **Get a call link**: Upload audio files to public Supabase storage or Dropbox to generate a direct link for transcription.
3. **Prepare artifacts for OpenAI**:
   - **Define metrics**: Identify the business metrics you want to track from call analysis, such as client needs, interest score, and upsell potential.
   - **Generate a JSON schema**: Use GPT to design a JSON schema for structuring OpenAI's responses, enabling efficient storage, analysis, and display.
   - **Create an analysis prompt**: Write a detailed prompt for GPT to analyze calls based on your metrics and JSON schema.

### Scenario 1: Transcribe the call with AssemblyAI
Set up the request (a hedged sketch of it appears at the end of this template description):
- **Header authentication**: Set `Authorization` with your AssemblyAI API key.
- **URL**: POST to `https://api.assemblyai.com/v2/transcript/`.
- **Parameters**:
  - `audio_url`: Direct URL of the audio file.
  - `webhook_url`: URL of an n8n webhook that receives the transcription result.
- **Additional settings**:
  - `speaker_labels` (true/false): Enables speaker diarization.
  - `speakers_expected`: The expected number of speakers.
  - `language_code`: The language (default: `en_us`).

### Scenario 2: Process the transcription with OpenAI
- **Webhook configuration**: Set up a POST webhook to receive AssemblyAI's transcription data.
- **Get the transcription**:
  - **Header authentication**: Set `Authorization` with your AssemblyAI API key.
  - **URL**: GET `https://api.assemblyai.com/v2/transcript/<transcript_id>`.
- **Send to OpenAI**:
  - **URL**: POST to `https://api.openai.com/v1/chat/completions`.
  - **Header authentication**: Set `Authorization` with your OpenAI API key.
  - **Body parameters**:
    - Model: `gpt-4o-2024-08-06` for JSON Schema support, or `gpt-4o-mini` as a less costly option.
    - Messages: `system` contains the main analysis prompt; `user` contains the combined speakers' utterances in text form.
    - Response format: `type: json_schema`, with your JSON schema for structured responses.
- **Save results in Supabase**:
  - Operation: create a new record.
  - Table name: `demo_calls`.
  - Fields:
    - Input: transcription text, audio URL, and transcription ID.
    - Output: parsed JSON response from OpenAI's analysis.
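As referenced in Scenario 1, here is a minimal sketch of the transcription request written as a standalone script (the workflow itself uses an n8n HTTP Request node); the audio URL and environment variable names are placeholders:

```typescript
// Hedged sketch of the Scenario 1 request against AssemblyAI's v2 API.
const res = await fetch("https://api.assemblyai.com/v2/transcript", {
  method: "POST",
  headers: {
    authorization: process.env.ASSEMBLYAI_API_KEY!, // plain key, no "Bearer" prefix
    "content-type": "application/json",
  },
  body: JSON.stringify({
    audio_url: "https://example.com/call-recording.mp3", // direct Supabase/Dropbox link
    webhook_url: process.env.N8N_WEBHOOK_URL,            // n8n webhook receiving the result
    speaker_labels: true,                                // enable speaker diarization
    speakers_expected: 2,
    language_code: "en_us",
  }),
});

const { id } = await res.json();
console.log(id); // transcript ID, used later in the GET request of Scenario 2
```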
by Agent Studio
## Overview
This workflow adds data visualization capabilities to a native SQL Agent. Together, they can help foster data analysis and data visualization within a team.

It uses the native SQL Agent, which works well on its own, and adds visualization capabilities thanks to OpenAI's Structured Output and Quickchart.io.

## How it works
- **Information extraction**: The Information Extractor identifies and extracts the user's question. If the question includes a visualization aspect, the SQL Agent alone may not respond accurately.
- **SQL querying**: It leverages a regular SQL Agent: it connects to a database, queries it, and translates the response into a human-readable format.
- **Chart decision**: The Text Classifier determines whether the user would benefit from a chart to support the SQL Agent's response.
- **Chart generation**: If a chart is needed, the sub-workflow dynamically generates a chart and appends it to the SQL Agent's response. If not, the SQL Agent's response is output as is.
- **Calling OpenAI for the chart definition**: The sub-workflow calls OpenAI via the HTTP Request node to retrieve a chart definition.
- **Building and returning the chart**: In the "Set Response" node, the chart definition is appended to a Quickchart.io URL, generating the final chart image. The AI Agent returns the response along with the chart. (A sketch of this URL-building step appears at the end of this template description.)

## How to use it
1. Use an existing database or create a new one. For example, I've used this Kaggle dataset and uploaded it to a Supabase DB.
2. Add the PostgreSQL or MySQL credentials. Alternatively, you can use SQLite binary files (check this template).
3. Activate the workflow.
4. Start chatting with the AI SQL Agent. If the Text Classifier determines a chart would be useful, it will generate one in addition to the SQL Agent's response.

## Notes
The full Quickchart.io specification has not been fully integrated, so there may be some glitches (e.g., radar graphs may not display properly due to size limitations).
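As referenced above, the "Set Response" step reduces to URL-encoding a Chart.js-style definition into a Quickchart.io link. A minimal sketch, with an illustrative chart definition standing in for OpenAI's structured output:

```typescript
// Hedged sketch of the "Set Response" step. QuickChart renders a chart
// from a URL-encoded Chart.js config passed to /chart?c=...
const chartDefinition = {
  type: "bar",
  data: {
    labels: ["Q1", "Q2", "Q3", "Q4"], // illustrative values only
    datasets: [{ label: "Revenue", data: [120, 90, 150, 180] }],
  },
};

const chartUrl =
  "https://quickchart.io/chart?c=" +
  encodeURIComponent(JSON.stringify(chartDefinition));

console.log(chartUrl); // append this image URL to the SQL Agent's response
```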
by Dominik Baranowski
# N8N for Beginners: Looping Over Items

## Description
This workflow is designed for n8n beginners to understand how n8n handles looping (iteration) over multiple items. It highlights two key behaviors:
- **Built-in looping**: By default, most n8n nodes iterate over each item in an input array.
- **Explicit looping**: The **Loop Over Items** node allows controlled iteration, enabling **custom batch processing** and multi-step workflows.

This workflow demonstrates the difference between processing an unsplit array of strings (a single item) and a split array (multiple items).

## Setup
### 1. Input data
To begin, paste the following JSON into the Manual Trigger node:

```json
{
  "urls": [
    "https://www.reddit.com",
    "https://www.n8n.io/",
    "https://n8n.io/",
    "https://supabase.com/",
    "https://duckduckgo.com/"
  ]
}
```

📌 Steps to paste data:
1. **Double-click** the "Manual Trigger" node.
2. Click "Edit Output" (top-right corner).
3. Paste the JSON and save. The node turns purple, indicating that test data is pinned.

### 2. Run the workflow
Click the "Test Workflow" button at the bottom of the canvas.

## Explanation of the n8n nodes in the workflow
| Node Name | Purpose | Documentation Link |
|-----------|---------|--------------------|
| Manual Trigger | Starts the workflow manually and sends test data | Docs |
| Split Out | Converts an array of strings into separate JSON objects | Docs |
| Loop Over Items (Loop Over Items 1) | Demonstrates how an unsplit array is treated as one item | Docs |
| Loop Over Items (Loop Over Items 2) | Iterates over each item separately | Docs |
| Wait | Introduces a delay per iteration (set to 1 second) | Docs |
| Code | Adds a constant parameter (`param1`) to each item | Docs |
| NoOp (Result Nodes) | Displays output for inspection | Docs |

## Execution details
### 1. How the workflow runs
- **Manual Trigger starts execution** with the pasted JSON data.
- The workflow follows two paths:
  - **Unsplit array path → Loop Over Items 1**
    - Processes the entire array as a single item.
    - Result1 & Result5 show that the array was not split.
  - **Split array path → Split Out → Loop Over Items 2**
    - Splits the array into separate objects.
    - Result2, Result3, and Result4 show that each item is processed individually.
    - A Wait node (1-second delay) demonstrates controlled execution.
    - Code nodes modify the JSON, adding a parameter (`param1`); a sketch of this node appears at the end of this template description.

### 2. What you will see
| Node | Expected Output |
|------|-----------------|
| Result1 & Result5 | The entire array is processed as one item. |
| Result2, Result3, Result4 | The array is split and processed as individual items. |
| Wait Node | Adds a 1-second delay per item in Loop Over Items 2. |

## Use cases
This workflow is useful for:
- ✅ **API data processing**: Loop through API responses containing arrays.
- ✅ **Web scraping**: Process multiple URLs individually.
- ✅ **Task automation**: Execute a sequence of actions per item.
- ✅ **Workflow optimization**: Control execution order, delays, and dependencies.

## Notes
- Sticky notes are included in the workflow for easy reference.
- The Wait node is optional; remove it for faster execution.
- This template is structured for beginners but serves as a building block for more advanced automations.
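As referenced in the execution details, the Code node's job in this template is tiny: append a constant field to every incoming item. A minimal sketch of the Code node body ("Run Once for All Items" mode), with an illustrative value for `param1`:

```typescript
// Body of an n8n Code node ("Run Once for All Items" mode).
// $input.all() returns every incoming item; we return new items
// carrying the original JSON plus a constant param1 field.
const items = $input.all();

return items.map((item) => ({
  json: {
    ...item.json,
    param1: "constant-value", // illustrative; use whatever constant you need
  },
}));
```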
by Mark Shcherbakov
## Video Guide
I prepared a detailed guide showing the whole process of building a resume analyzer.

## Who is this for?
This workflow is ideal for recruitment agencies, HR professionals, and hiring managers looking to automate the initial screening of CVs. It is especially useful for organizations handling large volumes of applications and seeking to streamline their recruitment process.

## What problem does this workflow solve?
Manually screening resumes is time-consuming and prone to human error. This workflow automates the process, providing consistent and objective analysis of CVs against job descriptions. It helps filter out unsuitable candidates early, reducing workload and improving the overall efficiency of the recruitment process.

## What this workflow does
This workflow automates the resume screening process using OpenAI for analysis. It provides a matching score, a summary of candidate suitability, and key insights into why the candidate fits (or doesn't fit) the job.
1. **Retrieve the resume**: The workflow downloads the CV from a direct link (e.g., Supabase storage or Dropbox).
2. **Extract data**: Extracts text from PDF or DOC files for analysis.
3. **Analyze with OpenAI**: Sends the extracted data and the job description to OpenAI to:
   - Generate a matching score.
   - Summarize candidate strengths and weaknesses.
   - Provide actionable insights into their suitability for the job.

## Setup
### Preparation
1. **Create accounts**:
   - n8n: for workflow automation.
   - OpenAI: for AI-powered CV analysis.
2. **Get a CV link**: Upload CV files to Supabase storage or Dropbox to generate a direct link for processing.
3. **Prepare artifacts for OpenAI**:
   - **Define metrics**: Identify the metrics you want from the analysis (e.g., matching percentage, strengths, weaknesses).
   - **Generate a JSON schema**: Use OpenAI to structure responses, ensuring compatibility with your database.
   - **Write a prompt**: Provide OpenAI with a clear and detailed prompt to ensure accurate analysis.

### n8n scenario
1. **Download file**: Fetch the CV using its direct URL.
2. **Extract data**: Use n8n's PDF or text extraction nodes to retrieve text from the CV.
3. **Send to OpenAI**: POST to OpenAI's API with the extracted CV data and the job description, using a JSON schema to structure the response. (A sketch of this request appears at the end of this template description.)

## Summary
This workflow provides a seamless, automated solution for CV screening, helping recruitment agencies and HR teams save time while maintaining consistency in candidate evaluation. It enables organizations to focus on the most suitable candidates, improving the overall hiring process.
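As referenced in the scenario, a minimal sketch of the structured-output call; the schema below is an illustrative guess at the metrics described above, and `jobDescription`/`cvText` stand in for values produced earlier in the workflow:

```typescript
// Hedged sketch of the "Send to OpenAI" step using JSON Schema output.
const jobDescription = "Paste the job description here";
const cvText = "Paste the extracted CV text here";

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "content-type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-2024-08-06", // a model with JSON Schema support
    messages: [
      { role: "system", content: "Score this CV against the job description." },
      { role: "user", content: `Job description:\n${jobDescription}\n\nCV:\n${cvText}` },
    ],
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "cv_analysis",
        strict: true,
        schema: {
          type: "object",
          properties: {
            matching_score: { type: "number" },
            strengths: { type: "array", items: { type: "string" } },
            weaknesses: { type: "array", items: { type: "string" } },
          },
          required: ["matching_score", "strengths", "weaknesses"],
          additionalProperties: false,
        },
      },
    },
  }),
});

const analysis = JSON.parse((await res.json()).choices[0].message.content);
console.log(analysis.matching_score, analysis.strengths);
```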
by KumoHQ
## Who is this template for?
This workflow template is designed for any professional seeking relevant data from a database using natural language.

## How it works
Each time a user asks a question through the n8n chat interface, the workflow runs. The message is processed by an AI Agent using the relevant tools: Execute SQL Query, Get DB Schema and Tables List, and Get Table Definition, as required. The agent uses these tools to form and run the SQL queries necessary to answer the question. Once the AI Agent has the data, it uses it to form an answer and returns it to the user. (A sketch of a typical schema-discovery query appears at the end of this template description.)

## Set up instructions
Complete the "Set up credentials" step when you first open the workflow. You'll need PostgreSQL credentials and an OpenAI API key.

Template was created in n8n v1.77.0.
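The template leaves the tool internals to n8n's built-in nodes, but a "Get DB Schema and Tables List" tool typically reduces to an `information_schema` query. A sketch under that assumption, using node-postgres and a hypothetical `DATABASE_URL` env var (neither is prescribed by the template):

```typescript
import { Client } from "pg";

// Minimal sketch of what a schema/tables-listing tool can run.
const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

const { rows } = await client.query(`
  SELECT table_schema, table_name
    FROM information_schema.tables
   WHERE table_type = 'BASE TABLE'
     AND table_schema NOT IN ('pg_catalog', 'information_schema')
   ORDER BY table_schema, table_name
`);

console.log(rows); // e.g. [{ table_schema: 'public', table_name: 'orders' }, ...]
await client.end();
```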
by Agent Studio
## Overview
This workflow helps you compare Claude 3.5 Sonnet and Gemini 2.0 Flash when extracting data from a PDF. It extracts and processes the data within the PDF in a single step, instead of calling an OCR service and then an LLM.

## How it works
- The initial two steps download the PDF and convert it to base64.
- The base64 string is then sent to both Claude 3.5 Sonnet and Gemini 2.0 Flash to extract information. (A sketch of the Claude call appears at the end of this template description.)
- This workflow is made to let you compare results, latency, and cost (in each provider's dashboard).

## How to use it
1. Set up your Google Drive if not already done.
2. Select a document on your Google Drive.
3. Modify the prompt in "Define Prompt" to extract the information you need and transform it as wanted.
4. Get a Claude API key and/or a Gemini API key. Note that you can deactivate one of the two API calls if you don't want to try both.
5. Test the workflow.
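To make the "send base64 to the model" step concrete, here is a hedged sketch of the Claude half using Anthropic's Messages API (the n8n template does this via HTTP Request nodes; the file name, model tag, and prompt here are placeholders):

```typescript
import { readFile } from "node:fs/promises";

// Hedged sketch: send a base64-encoded PDF to Claude in one call.
const pdfBase64 = (await readFile("document.pdf")).toString("base64");

const res = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "x-api-key": process.env.ANTHROPIC_API_KEY!,
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
  },
  body: JSON.stringify({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    messages: [{
      role: "user",
      content: [
        { type: "document", source: { type: "base64", media_type: "application/pdf", data: pdfBase64 } },
        { type: "text", text: "Extract the fields described in the prompt and return them as JSON." },
      ],
    }],
  }),
});

console.log((await res.json()).content?.[0]?.text);
```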
by Jaruphat J.
## Who is this for?
This workflow is perfect for digital content creators, marketers, and social media managers who regularly create engaging short-form videos featuring inspirational or motivational quotes. While the workflow is universally applicable, it uses Thai as the example language to demonstrate effective language and font integration.

## What problem is this workflow solving?
Manually creating consistent, engaging multilingual video content, with attractive fonts and proper video formatting, is time-consuming and repetitive. Managing files and background music and updating statuses by hand is also tedious and prone to errors.

## What this workflow does
- Automatically fetches background video and music files stored on Google Drive.
- Randomly selects a quote (demonstrated with Thai) and author information from Google Sheets.
- Dynamically renders the selected quote and author text in an appealing font, such as the Thai font "Kanit", onto the video using FFmpeg in your local n8n environment. (A sketch of such an FFmpeg invocation appears at the end of this template description.)
- Creates visually engaging videos with a 9:16 aspect ratio, optimized for YouTube Shorts and other vertical video platforms.
- Automatically uploads the finalized video to YouTube.
- Updates the status and YouTube URL back into your Google Sheet, ensuring you have up-to-date records.

## Setup
### Requirements
This workflow requires a self-hosted n8n instance, as executing FFmpeg commands is not supported on n8n Cloud. Ensure FFmpeg is installed in your self-hosted environment.

### Google Sheets setup
Your Google Sheet must include at least these columns:
- **Index**: Unique identifier for each quote.
- **Quote**: Text of the quote.
- **Author**: Author of the quote.
- **CreateStatus**: Tracks video creation status (values like "DONE", or blank for pending).
- **YoutubeURL**: Automatically updated after upload.

To help you get started quickly, you can use this template spreadsheet.

### Next steps
- Organize your video and music files in separate folders in Google Drive.
- Authenticate your Google Sheets, Google Drive, and YouTube accounts in n8n.
- Ensure fonts compatible with your target languages (such as Kanit for Thai) are available to your FFmpeg installation.

## How to customize this workflow to your needs
- **Fonts**: Adjust font styles and sizes within the workflow's Code node. Ensure the fonts you choose fully support the language you wish to use.
- **Quote management**: Easily add or remove quotes and authors in your Google Sheets document.
- **Media files**: Change or update background videos and music by modifying the files in your Google Drive folders.
- **Video specifications**: Customize video dimensions, text positioning, opacity, and music volume directly in the provided FFmpeg commands.

## Benefits of using localized fonts and quotes
Using fonts specific to your target language, as demonstrated with Thai, significantly increases audience engagement by making your content more relatable, shareable, and visually appealing. Make sure to select fonts that properly support the language you're targeting.
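As referenced above, a hedged sketch of the kind of FFmpeg invocation this workflow assembles, wrapped in a small Node script since the template runs FFmpeg from its own Code/command step. The file paths, font location, and layout numbers are placeholder assumptions; the template's actual command lives in its Code node and may differ:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

const quote = "ความพยายามอยู่ที่ไหน ความสำเร็จอยู่ที่นั่น"; // illustrative Thai quote
const author = "- Thai proverb";
const font = "/usr/share/fonts/truetype/kanit/Kanit-Regular.ttf"; // assumed font path

await run("ffmpeg", [
  "-i", "background.mp4",
  "-i", "music.mp3",
  "-vf",
  // Fit the source into a 9:16 frame, then draw the quote and author.
  "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920," +
    `drawtext=fontfile=${font}:text='${quote}':fontcolor=white:fontsize=64:` +
    "x=(w-text_w)/2:y=(h-text_h)/2," +
    `drawtext=fontfile=${font}:text='${author}':fontcolor=white:fontsize=40:` +
    "x=(w-text_w)/2:y=(h+text_h)/2+80",
  "-map", "0:v", "-map", "1:a",
  "-shortest",           // end when the shorter input (video or music) ends
  "output.mp4",
]);
```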
by Jimleuk
This n8n template monitors an Outlook mailbox for invoices, automatically parses and extracts data from them, and then uploads the output to an Excel workbook.

One of my top workflow requests, this template can save many hours of manual labour for you or your finance/accounts team.

## How it works
- A scheduled trigger fetches recent Outlook messages sent to the accounts receivable mailbox.
- Each message is analysed to determine whether it is from a supplier and contains an invoice.
- For each valid message, the attachments are downloaded and non-invoice documents are filtered out via AI vision classification.
- Invoices are then processed through an AI vision model again to extract the details.
- The extracted data can then be used for reconciliation or otherwise. For this demonstration, we'll just append the row to an Excel sheet.

## How to use
- Ensure your Microsoft 365 credential points to the correct mailbox. If a shared folder is used, toggle the "shared folder" option to "on" and use the email address as the principal ID.
- If you receive lots of other types of messages, such as replies and forwards, you may want to implement additional checks to avoid processing an invoice twice. The "Remove Duplicates" node can help with this.

## Requirements
- Outlook for the mailbox
- Google Gemini for document understanding and invoice extraction
- Excel for data storage

## Customising this workflow
- Note that this template assumes all invoices arrive as PDF attachments. In real life, this is rarely the case! Consider adding document conversion to cover all invoice formats.
- Human feedback is also an important factor in AI workflows. Try tagging emails as a way to notify team members that the invoice was processed.
by irfan saeed
# Auto-Generate YouTube Chapters with AI-Powered Transcript Analysis

## Overview
This workflow uses the YouTube Data API v3 and Google Gemini 1.5 Flash to automatically generate timestamped chapters for videos by analyzing SRT captions. It enhances viewer navigation, improves SEO, and saves creators time by automating a manual task.

## Prerequisites
### YouTube API setup
1. **Create a Google Cloud project**: Go to the Google Cloud Console, click Select a project > New Project, and name it (e.g., "YouTube Chapters Automation").
2. **Enable YouTube Data API v3**: Navigate to APIs & Services > Library, search for "YouTube Data API v3", and click Enable.
3. **Configure the OAuth consent screen**: Go to APIs & Services > OAuth consent screen, select External (public) or Internal (testing), then add the required details (app name, support email).
4. **Generate OAuth 2.0 credentials**: Under Credentials, click Create Credentials > OAuth client ID, choose Web app, then download the JSON key file.
5. **Add the credentials to n8n.**

### Other requirements
- **Google Gemini API**: Configure access for the `gemini-1.5-flash-8b-exp-0924` model by obtaining an API key.

## Workflow steps
1. **Set the video ID**: Input the target video ID (e.g., `r1wqsrW2vmE`) using the Set Video ID node.
2. **Fetch video metadata**: Use the YouTube API node to retrieve the video's title, category, and existing description.
3. **Download SRT captions** (a sketch of these two calls appears at the end of this template description):
   - **Get the caption ID**: Call `https://www.googleapis.com/youtube/v3/captions` to fetch the caption track ID.
   - **Download the transcript**: Use the ID to retrieve SRT data via `https://www.googleapis.com/youtube/v3/captions/{{ID}}?tfmt=srt`.
4. **Analyze the transcript with Gemini AI**: Process the SRT file with Google Gemini to identify chapters, using a prompt like: "Classify this transcript into timestamped chapters (e.g., 00:00 - Introduction)." Validate the output with a structured parser (e.g., the Structured Captions node).
5. **Update the video description**: Append the chapters to the description using the YouTube API's `videos.update` method.

## Value proposition
- **Viewer experience**: Chapters improve navigation and reduce drop-off rates.
- **SEO benefits**: Structured descriptions enhance search visibility.
- **Time savings**: Eliminates manual chapter creation.
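As referenced in step 3, a hedged sketch of the two caption calls; OAuth token acquisition is omitted, the token and video ID are placeholders, and note that `captions.download` generally requires the authorized user to own the video:

```typescript
// Hedged sketch of the caption steps against the YouTube Data API v3.
const VIDEO_ID = "r1wqsrW2vmE";
const headers = { authorization: `Bearer ${process.env.ACCESS_TOKEN}` };

// 1. List caption tracks for the video to find the track ID.
const list = await (await fetch(
  `https://www.googleapis.com/youtube/v3/captions?part=id&videoId=${VIDEO_ID}`,
  { headers },
)).json();
const captionId = list.items[0].id;

// 2. Download that track as SRT.
const srt = await (await fetch(
  `https://www.googleapis.com/youtube/v3/captions/${captionId}?tfmt=srt`,
  { headers },
)).text();

console.log(srt.slice(0, 200)); // this transcript feeds the Gemini chapter prompt
```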
by Ranjan Dailata
## Who this is for?
This workflow is designed for professionals and teams who need real-time, structured insights from Google Search results without manual effort.

## What problem is this workflow solving?
This n8n workflow solves the problem of automating Google Search result extraction, cleanup, summarization, and AI-enhanced formatting for downstream use, such as sending the results to a webhook or another system.

## What this workflow does
- **Automates Google Search via Bright Data**: Uses Bright Data's proxy-based SERP API to run a Google Search query programmatically, making the process repeatable and scriptable with different search terms and regions/zones. (A sketch of this request appears at the end of this template description.)
- **Cleans and extracts useful content**: The Google Search Data Extractor uses LLM-based cleaning to remove HTML/CSS/JS from the response and extract pure text, converting messy, unstructured web content into a structured, machine-readable format.
- **Summarizes search results**: Through the Gemini Flash + summarization chain, it generates a concise summary of the search results, which is ideal for users who don't have time to read full pages of results.
- **Formats data using an AI Agent**: The AI Agent acts like a virtual assistant that understands the search results, formats them in a readable, JSON-compatible form, and prepares them for webhook delivery.
- **Delivers results to a webhook**: Sends the final summary and structured search results to a webhook (your app, a Slack bot, Google Sheets, a CRM, etc.).

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to `Bearer XXXXXXXXXXXXXX`, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Google Search query as you wish by navigating to the Set Google Search Query node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
1. **Change the search input.** Default: a fixed query or dataset. Customize: accept input from a Google Sheet, Airtable, or a form; auto-trigger searches based on keywords or schedules.
2. **Customize the summarization style (LLM output).** Default: a general summary using Google Gemini or OpenAI. Customize: set a tone (formal, casual, technical, executive summary, etc.); focus on specific sections (pricing, competitors, FAQs, etc.); translate the summaries into multiple languages; add bullet points, pros/cons, or insight tags.
3. **Choose where the results go.** Options: email, Slack, Notion, Airtable, Google Docs, or a dashboard; auto-create content drafts for WordPress or newsletters; feed into CRM notes or attach to Salesforce leads.
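As referenced above, a hedged sketch of the SERP request via Bright Data's Web Unlocker API; the zone name and query are placeholders, and the workflow performs the equivalent call through an n8n HTTP Request node:

```typescript
// Hedged sketch of the Bright Data Web Unlocker call behind this workflow.
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`, // Web Unlocker token
    "content-type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",                                    // your Web Unlocker zone name
    url: "https://www.google.com/search?q=n8n+workflow+automation",
    format: "raw",                                            // return the raw HTML
  }),
});

const html = await res.text(); // handed to the LLM-based extractor for cleaning
```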
by Ranjan Dailata
## Who this is for?
This workflow automates the process of Wikipedia data extraction using the Bright Data Web Unlocker, parses and cleans the data, and then sends the results to a specified webhook URL for downstream processing, reporting, or integration. It is useful for:
- **Researchers** who regularly need structured information from Wikipedia pages.
- **Data engineers** building knowledge bases or enriching datasets with factual data.
- **Digital marketers or content writers** automating fact-checking or content sourcing.
- **Automation enthusiasts** who want to trigger external systems with rich context from Wikipedia.

## What problem is this workflow solving?
It addresses the challenges of manually retrieving, structuring, and using data from Wikipedia at scale.

## What this workflow does
- **Trigger (scheduled or manual)**: Starts the workflow either on a fixed schedule (e.g., daily) or on demand via a manual trigger or incoming webhook.
- **Bright Data Wikipedia scraping**: Uses the Bright Data Web Unlocker to scrape the HTML content of one or more Wikipedia article URLs.
- **Parse and extract structured data**: The Basic LLM Chain node produces human-readable content.
- **Summarization**: Summarizes the Wikipedia content using the Summarization Chain node.
- **Send to webhook**: Sends a webhook notification to the specified URL via the "Summary Webhook Notifier" node.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to `Bearer XXXXXXXXXXXXXX`, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set Wikipedia URL with Bright Data Zone node with the Wikipedia URL and your Bright Data zone.
6. Update the Summary Webhook Notifier node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
- **Update the Wikipedia URL**: Replace it with a Wikipedia URL of your own interest in the "Set Wikipedia URL with Bright Data Zone" node.
- **Modify the data extraction logic**: Extract the entire article content or just specific sections by extending the "LLM Data Extractor" node prompt.
- **Extend the AI summarization**: Extract key bullet points or entities, or create short-form summaries, by extending the "Concise Summary Generator" node.
- **Extend the Summary Webhook Notifier**: Send results to Slack, Discord, Telegram, or MS Teams via the webhook notification mechanism, or connect it to your internal database/API.