by Mutasem
Use Case
This workflow enriches new contacts in HubSpot. The more complete the HubSpot profile, the more useful it is. Once active, this n8n workflow updates the contact's social profiles, contact data (phone, email), and location data using ExactBuyer.

Setup
1. Add the HubSpot trigger credential (be careful: scopes must match the n8n docs exactly)
2. Add your ExactBuyer API key
3. Add the HubSpot credential for the update node (again, scopes must match the n8n docs; this credential is different from the trigger credential)
4. Activate the workflow

How to adjust this template
ExactBuyer returns plenty of other interesting information that could be helpful. Take a look at the response and extend this workflow with the fields you need.
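As a rough, hypothetical sketch of the enrichment lookup behind this template: the endpoint path, header name, and response fields below are assumptions for illustration only, not ExactBuyer's documented API, so check the ExactBuyer API reference before reusing any of it.

```typescript
// Hypothetical sketch of a contact-enrichment lookup.
// NOTE: the endpoint path, header name and response shape are assumptions
// for illustration only -- consult the ExactBuyer API docs for the real ones.
const EXACTBUYER_API_KEY = process.env.EXACTBUYER_API_KEY ?? "";

async function enrichContact(email: string): Promise<unknown> {
  // Hypothetical path and query parameter.
  const url = `https://api.exactbuyer.com/v1/enrich?email=${encodeURIComponent(email)}`;
  const res = await fetch(url, {
    headers: { "X-API-Key": EXACTBUYER_API_KEY }, // header name is an assumption
  });
  if (!res.ok) throw new Error(`Enrichment failed: ${res.status}`);
  // Expected to contain social profiles, phone and location fields.
  return res.json();
}
```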
by Joachim Brindeau
What it does
This workflow is a simple yet efficient way to automate indexing your website on Google using the Google Indexing API.

How it works
It extracts the URLs from your sitemap, converts the sitemap into JSON, and loops through each URL to submit it for indexing. Here's a brief rundown of the workflow:
- The workflow can be triggered manually via the "Execute Workflow" button or scheduled to run at a specific time using the "Schedule Trigger" node.
- The sitemap of your website is fetched by the "sitemap_set" node with an HTTP Request to the sitemap URL.
- The XML sitemap is converted into JSON by the "sitemap_convert" node.
- The "sitemap_parse" node splits the JSON into individual URLs.
- The "url_set" node prepares each URL to be sent to the Google Indexing API.
- The "loop" node processes each URL individually and makes a POST request to the Google Indexing API indicating that the URL has been updated (see the sketch after this section).
- If the POST request succeeds, the workflow waits for 2 seconds before moving on to the next URL.
- If the daily quota for the Google Indexing API is reached (200 requests/day by default), an error is raised by the "Stop and Error" node.

Before you use the workflow

Activate the Indexing API
- Create an account with Google Cloud Platform > Console, then create a new project.
- Search for the Indexing API in the Library.
- Activate the API.

Create a Service Account and get credentials
- Open the Service Accounts page. If prompted, select a project.
- Click Create Service Account, then enter a name and description for the service account. You can use the default service account ID or choose a different, unique one. When done, click Create.
- On the "Grant users access to this service account" screen, scroll down to the Create key section.
- Click Create key. In the side panel that appears, select the JSON format and click Create. Your new public/private key pair is generated and downloaded to your machine.
- Open the file and copy the private key.
- Add the credentials in the url_index node.

Add the user as owner of the site
Beware: for each site, you need to add the service account user as an owner of the site (e.g., in Google Search Console).

Set your sitemap
Open the sitemap_set node and add the URL of your sitemap.

Now Google should always be up to date with the latest content on your website, improving your website's visibility and SEO rankings. Have fun!
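For reference, the POST sent for each URL follows the Indexing API's urlNotifications:publish method. Below is a minimal TypeScript sketch; it assumes you already have a valid OAuth2 access token for the service account (n8n's Google credentials handle that part for you).

```typescript
// Minimal sketch: notify the Google Indexing API that a URL was updated.
// Assumes `accessToken` is a valid OAuth2 token for a service account
// with the Indexing API enabled and site ownership verified.
async function publishUrlUpdate(pageUrl: string, accessToken: string): Promise<void> {
  const res = await fetch(
    "https://indexing.googleapis.com/v3/urlNotifications:publish",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: pageUrl, type: "URL_UPDATED" }),
    }
  );
  if (res.status === 429) throw new Error("Daily Indexing API quota reached");
  if (!res.ok) throw new Error(`Indexing request failed: ${res.status}`);
}
```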
by Milorad Filipovic
This workflow translates all PDF documents from a specified Google Drive folder into the desired language. Translated files are automatically uploaded back to the original folder.

Required accounts
1️⃣ Google Drive account
2️⃣ DeepL developer account and API key

How to set it up
1️⃣ Set your Google Drive folder URL, target language, and source language in the configuration node
2️⃣ Connect your Google Drive account in all Google Drive nodes
3️⃣ Set up the HTTP header credential used by the HTTP nodes in the template (replace yourAuthKey with your DeepL API key)
4️⃣ Select your DeepL header credential in all HTTP nodes
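For context, DeepL expects its API key in an Authorization header of the form DeepL-Auth-Key <yourAuthKey>. The sketch below illustrates that header together with a document upload to DeepL's document-translation endpoint; the host depends on your plan (api-free.deepl.com for the free tier, api.deepl.com for Pro), the template's HTTP nodes may be configured differently, and the polling/download steps are omitted.

```typescript
// Sketch: submit a PDF to DeepL's document translation endpoint.
// Assumes a free-tier key; Pro accounts use https://api.deepl.com instead.
import { readFile } from "node:fs/promises";

async function submitPdfForTranslation(path: string, targetLang: string, authKey: string) {
  const form = new FormData();
  form.append("target_lang", targetLang); // e.g. "DE", "FR"
  form.append(
    "file",
    new Blob([await readFile(path)], { type: "application/pdf" }),
    "document.pdf"
  );

  const res = await fetch("https://api-free.deepl.com/v2/document", {
    method: "POST",
    headers: { Authorization: `DeepL-Auth-Key ${authKey}` },
    body: form,
  });
  if (!res.ok) throw new Error(`DeepL upload failed: ${res.status}`);
  // Returns { document_id, document_key }, used later to poll status and download the result.
  return res.json();
}
```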
by Jonathan
How it works
This workflow watches a selected Google Drive folder for any images added to it. It then takes each image and sends it to the tinypng.com service, which optimises and reduces its size (where possible). TinyPNG returns the optimised image, which is then automatically saved to your chosen Google Drive folder.

Setting things up
It's pretty simple to configure and should only take around 5 to 10 minutes. You only need to set up credentials for Google Drive and tinypng.com. For tinypng.com you can sign up for their free tier API access, which gives you 500 optimisations per month. Once you have those two things, you just need to choose your 'input' folder to watch for images and your 'output' folder where the optimised images should be stored. There are a few more optional things you can do, such as naming the final image, and there's lots more you could do with the TinyPNG API for more advanced image optimisation.
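For orientation, the TinyPNG (Tinify) API takes the raw image bytes with HTTP Basic auth (username "api", password set to your API key) and responds with a URL to the compressed result. A minimal sketch, assuming the standard shrink endpoint:

```typescript
// Sketch: compress an image through the TinyPNG/Tinify API.
// Basic auth uses the literal username "api" and your API key as the password.
import { readFile, writeFile } from "node:fs/promises";

async function compressImage(inputPath: string, outputPath: string, apiKey: string): Promise<void> {
  const auth = Buffer.from(`api:${apiKey}`).toString("base64");

  // 1. Upload the original image bytes.
  const shrink = await fetch("https://api.tinify.com/shrink", {
    method: "POST",
    headers: { Authorization: `Basic ${auth}` },
    body: await readFile(inputPath),
  });
  if (!shrink.ok) throw new Error(`TinyPNG error: ${shrink.status}`);

  // 2. Download the compressed result from the returned location.
  const { output } = (await shrink.json()) as { output: { url: string } };
  const compressed = await fetch(output.url, {
    headers: { Authorization: `Basic ${auth}` },
  });
  await writeFile(outputPath, Buffer.from(await compressed.arrayBuffer()));
}
```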
by Mihai Farcas
How it works:
- The workflow starts by sending a request to a website to retrieve its HTML content.
- It then parses the HTML, extracting the relevant information.
- The extracted data is stored and converted into a CSV file.
- The CSV file is attached to an email and sent to your specified address.
- The data is simultaneously saved to both Google Sheets and Microsoft Excel for further analysis or use.

Set-up steps:
- Change the website to scrape in the "Fetch website content" node
- Configure Microsoft Azure credentials with Microsoft Graph permissions (required for the Save to Microsoft Excel 365 node)
- Configure Google Cloud credentials with access to the Google Drive, Google Sheets, and Gmail APIs (the latter is required for the Send CSV via e-mail node)
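If you want to adapt the extraction step, the core idea is simply: pull the fields you care about out of the fetched HTML and assemble CSV rows from them. A minimal sketch follows; the URL, regex, and column name are placeholders, and the actual template uses n8n's HTML and file-conversion nodes rather than custom code.

```typescript
// Sketch: fetch a page, pull out some fields, and build CSV text.
// The URL, regex, and column name are placeholders for illustration.
async function scrapeToCsv(url: string): Promise<string> {
  const html = await (await fetch(url)).text();

  // Example extraction: grab every <h2>…</h2> as a "title" column.
  const titles = [...html.matchAll(/<h2[^>]*>(.*?)<\/h2>/gis)].map((m) =>
    m[1].replace(/<[^>]+>/g, "").trim()
  );

  // Assemble CSV, quoting each cell so commas and quotes don't break rows.
  const escape = (v: string) => `"${v.replace(/"/g, '""')}"`;
  const rows = [["title"], ...titles.map((t) => [t])];
  return rows.map((r) => r.map(escape).join(",")).join("\n");
}
```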
by Jay Hartley
What this workflow does
This workflow allows you to monitor multiple GitHub repos simultaneously without polling, thanks to webhooks. It programmatically adds and deletes repos from your watchlist to make management convenient.

Description
- Monitors multiple repos simultaneously.
- Programmatically registers or unregisters repos from a list. No manual work needed.
- Webhook notifications mean no constant polling is necessary.

Setup

1. Creating credentials on GitHub
Generate a personal access token on GitHub by following these steps:
Right-hand side of the page -> Settings -> scroll to the bottom -> Developer Settings -> Personal Access Tokens -> Tokens (classic) -> Generate New Token
Give it these scopes:
- admin:repo_hook
- repo (if you want to use it for your own private repos)
If you need more help, see here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens

2. Setting credentials in n8n
In Register Github Webhook, set Authentication > Generic Credential Type and Generic Auth Type > Header Auth, then create a new Header Auth credential with Name set to 'Authorization' and Value set to 'Bearer <your personal access token>'. (You can reuse this credential for Delete Github Webhook and Get Existing Webhooks.)
Now in Register Github Webhook, scroll down to Send Body > JSON and, inside the JSON, change the value of "url" to the webhook address given as the Production URL in the Webhook Trigger node.

3. Notification settings
In the third row, link up the Webhook Trigger to any API of your choice. Slack and Telegram are given as examples. You can also format the notification message as you wish.

Setup time: roughly 10 minutes.
Instructions video: https://vimeo.com/1013473758

Test

1. Register webhooks
In Repos to Monitor, add any repo you want to monitor changes for. Disable Webhook Trigger, click Test Workflow, and if your GitHub credentials were set correctly, the webhooks will be registered automatically. You can verify this by running the single node Get Existing Webhook and confirming it outputs the repo addresses.

2. Handle GitHub events
Now that you have registered the webhooks, re-enable Webhook Trigger and activate the workflow. Make a commit to any of the registered repos and confirm that the notification went through. That's it!
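Under the hood, Register Github Webhook calls GitHub's create-webhook endpoint. A minimal sketch of that request, assuming push events and a classic personal access token:

```typescript
// Sketch: register a webhook on a repo via the GitHub REST API.
// `repo` is in "owner/name" form; `token` is a classic PAT with the admin:repo_hook scope.
async function registerWebhook(repo: string, webhookUrl: string, token: string) {
  const res = await fetch(`https://api.github.com/repos/${repo}/hooks`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "web",
      active: true,
      events: ["push"], // add e.g. "pull_request" or "issues" as needed
      config: { url: webhookUrl, content_type: "json" },
    }),
  });
  if (!res.ok) throw new Error(`Failed to register webhook on ${repo}: ${res.status}`);
  return res.json();
}
```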
by Angel Menendez
Enhance Security Operations with the Qualys Slack Shortcut Bot!
Our Qualys Slack Shortcut Bot is designed to facilitate immediate security operations directly from Slack. This powerful tool allows users to initiate vulnerability scans and generate detailed reports through simple Slack interactions, streamlining the process of managing security assessments.

Workflow Highlights:
- **Interactive Modals**: Utilizes Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
- **Dynamic Workflow Execution**: Integrates seamlessly with Qualys to execute vulnerability scans and create reports based on user-specified parameters.
- **Real-Time Feedback**: Offers instant feedback within Slack, updating users about the status of their requests and delivering reports directly through Slack channels.

Operational Flow:
- **Parse Webhook Data**: Captures and parses incoming data from Slack to understand user commands accurately.
- **Execute Actions**: Depending on the user's selection, the workflow triggers other sub-workflows like 'Qualys Start Vulnerability Scan' or 'Qualys Create Report' for detailed processing.
- **Respond to Slack**: Ensures that every interaction is acknowledged, maintaining a smooth user experience by managing modal popups and sending appropriate responses.

Setup Instructions:
- Verify that the Slack and Qualys API integrations are correctly configured for seamless interaction.
- Customize the modal interfaces to align with your organization's operational protocols and security policies.
- Test the workflow to ensure that it responds accurately to Slack commands and that the integration with Qualys is functioning as expected.

Need Assistance?
Explore our Documentation or get help from the n8n Community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive management of vulnerabilities and streamlined reporting.

To handle the actual processing of requests, you will also need to deploy these two subworkflows:
- Qualys Start Vulnerability Scan
- Qualys Create Report

To simplify deployment, use this Slack App manifest to quickly create an app with the correct permissions:

```json
{
  "display_information": {
    "name": "Qualys n8n Bot",
    "description": "n8n Integration for Qualys",
    "background_color": "#2a2b2e"
  },
  "features": {
    "bot_user": {
      "display_name": "Qualys n8n Bot",
      "always_online": false
    },
    "shortcuts": [
      {
        "name": "Scan Report Generator",
        "type": "global",
        "callback_id": "qualys-scan-report",
        "description": "Generate a report from the latest scan to review vulnerabilities and compliance."
      },
      {
        "name": "Launch Qualys VM Scan",
        "type": "global",
        "callback_id": "trigger-qualys-vmscan",
        "description": "Start a Qualys Vulnerability scan from the comfort of your Slack Workspace"
      }
    ]
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "commands",
        "channels:join",
        "channels:history",
        "channels:read",
        "chat:write",
        "chat:write.customize",
        "files:read",
        "files:write"
      ]
    }
  },
  "settings": {
    "interactivity": {
      "is_enabled": true,
      "request_url": "Replace everything inside the double quotes with your workflow webhook url, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
      "message_menu_options_url": "Replace everything inside the double quotes with your workflow message options webhook url, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad"
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
```
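For orientation, Slack delivers shortcut and modal interactions to the request_url as a form-encoded body whose payload field contains JSON; the Parse Webhook Data step essentially decodes that and routes on callback_id. A minimal sketch of that parsing logic (not the exact code used in the workflow):

```typescript
// Sketch: decode a Slack interactivity payload and route by callback_id.
// Slack POSTs application/x-www-form-urlencoded data with a "payload" field of JSON.
interface SlackInteraction {
  type: string;               // e.g. "shortcut", "view_submission"
  callback_id?: string;       // set for global shortcuts
  trigger_id?: string;        // needed to open a modal via views.open
  view?: { callback_id?: string };
}

function routeSlackInteraction(rawBody: string): string {
  const params = new URLSearchParams(rawBody);
  const interaction = JSON.parse(params.get("payload") ?? "{}") as SlackInteraction;
  const callbackId = interaction.callback_id ?? interaction.view?.callback_id;

  switch (callbackId) {
    case "trigger-qualys-vmscan":
      return "start-scan";     // hand off to the 'Qualys Start Vulnerability Scan' sub-workflow
    case "qualys-scan-report":
      return "create-report";  // hand off to the 'Qualys Create Report' sub-workflow
    default:
      return "ignore";
  }
}
```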
by Mark Shcherbakov
Video Guide
I prepared a detailed guide that shows the whole process of building a call analyzer.

Who is this for?
This workflow is ideal for sales teams, customer support managers, and online education services that conduct follow-up calls with clients. It's designed for those who want to leverage AI to gain deeper insights into client needs and upsell opportunities from recorded calls.

What problem does this workflow solve?
Many follow-up sales calls lack structured analysis, making it challenging to identify client needs, gauge interest levels, or uncover upsell opportunities. This workflow enables automated call transcription and AI-driven analysis to generate actionable insights, helping teams improve sales performance, refine client communication, and streamline upselling strategies.

What this workflow does
This workflow transcribes and analyzes sales calls using AssemblyAI and OpenAI, and stores the structured data in Supabase. Recorded calls are processed as follows:
1. Transcribe Call with AssemblyAI: Converts audio into text with speaker labels for clarity.
2. Analyze Transcription with OpenAI: Using a predefined JSON schema, OpenAI analyzes the transcription to extract metrics like client intent, interest score, upsell opportunities, and more.
3. Store and Access Results in Supabase: Stores both the transcription and the analysis in a Supabase database for further use and display in interfaces.

Setup

Preparation
- Create Accounts: Set up accounts for n8n, Supabase, AssemblyAI, and OpenAI.
- Get Call Link: Upload audio files to public Supabase storage or Dropbox to generate a direct link for transcription.
- Prepare Artifacts for OpenAI:
  - Define Metrics: Identify the business metrics you want to track from call analysis, such as client needs, interest score, and upsell potential.
  - Generate JSON Schema: Use GPT to design a JSON schema for structuring OpenAI's responses, enabling efficient storage, analysis, and display.
  - Create Analysis Prompt: Write a detailed prompt for GPT to analyze calls based on your metrics and JSON schema.

Scenario 1: Transcribe Call with AssemblyAI
Set Up Request:
- Header Authentication: Set Authorization with your AssemblyAI API key.
- URL: POST to https://api.assemblyai.com/v2/transcript/.
- Parameters:
  - audio_url: Direct URL of the audio file.
  - webhook_url: URL of an n8n webhook to receive the transcription result.
- Additional Settings:
  - speaker_labels (true/false): Enables speaker diarization.
  - speakers_expected: Expected number of speakers.
  - language_code: Language (default: en_us).

Scenario 2: Process Transcription with OpenAI
- Webhook Configuration: Set up a POST webhook to receive AssemblyAI's transcription data.
- Get Transcription:
  - Header Authentication: Set Authorization with your AssemblyAI API key.
  - URL: GET https://api.assemblyai.com/v2/transcript/<transcript_id>.
- Send to OpenAI:
  - URL: POST to https://api.openai.com/v1/chat/completions.
  - Header Authentication: Set Authorization with your OpenAI API key.
  - Body Parameters:
    - Model: Use gpt-4o-2024-08-06 for JSON Schema support, or gpt-4o-mini for a less costly option.
    - Messages: system contains the main analysis prompt; user contains the combined speakers' utterances as text.
    - Response Format: type: json_schema, with your JSON schema for structured responses.
- Save Results in Supabase:
  - Operation: Create a new record.
  - Table Name: demo_calls.
  - Fields:
    - Input: Transcription text, audio URL, and transcription ID.
    - Output: Parsed JSON response from OpenAI's analysis.
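As a rough sketch of Scenario 1, the transcription request boils down to a single POST mirroring the parameters listed above; the webhook URL below is a placeholder for your n8n webhook's production URL.

```typescript
// Sketch: submit an audio file to AssemblyAI for transcription with speaker labels.
// AssemblyAI takes the API key directly in the Authorization header (no "Bearer" prefix).
async function submitTranscription(audioUrl: string, apiKey: string) {
  const res = await fetch("https://api.assemblyai.com/v2/transcript", {
    method: "POST",
    headers: {
      Authorization: apiKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      audio_url: audioUrl,
      webhook_url: "https://your-n8n-instance/webhook/assemblyai-result", // placeholder
      speaker_labels: true,
      speakers_expected: 2,
      language_code: "en_us",
    }),
  });
  if (!res.ok) throw new Error(`AssemblyAI request failed: ${res.status}`);
  return res.json(); // contains the transcript id used later for GET /v2/transcript/{id}
}
```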
by Agent Studio
Overview
This workflow aims to provide data visualization capabilities to a native SQL Agent. Together, they can help foster data analysis and data visualization within a team. It uses the native SQL Agent, which works well, and adds visualization capabilities thanks to OpenAI's Structured Output and Quickchart.io.

How it works
- Information Extraction: The Information Extractor identifies and extracts the user's question. If the question includes a visualization aspect, the SQL Agent alone may not respond accurately.
- SQL Querying: It leverages a regular SQL Agent: it connects to a database, queries it, and translates the response into a human-readable format.
- Chart Decision: The Text Classifier determines whether the user would benefit from a chart to support the SQL Agent's response.
- Chart Generation: If a chart is needed, the sub-workflow dynamically generates a chart and appends it to the SQL Agent's response. If not, the SQL Agent's response is output as is.
- Calling OpenAI for Chart Definition: The sub-workflow calls OpenAI via the HTTP Request node to retrieve a chart definition.
- Building and Returning the Chart: In the "Set Response" node, the chart definition is appended to a Quickchart.io URL, generating the final chart image. The AI Agent returns the response along with the chart.

How to use it
- Use an existing database or create a new one. For example, I've used this Kaggle dataset and uploaded it to a Supabase DB.
- Add the PostgreSQL or MySQL credentials. Alternatively, you can use SQLite binary files (check this template).
- Activate the workflow.
- Start chatting with the AI SQL Agent. If the Text Classifier determines a chart would be useful, it will generate one in addition to the SQL Agent's response.

Notes
The full Quickchart.io specification has not been fully integrated, so there may be some glitches (e.g., radar graphs may not display properly due to size limitations).
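The "Set Response" step essentially URL-encodes a Chart.js-style configuration and appends it to the Quickchart endpoint. A minimal sketch of that assembly; the chart definition here is a stand-in for whatever OpenAI's structured output returns.

```typescript
// Sketch: turn a Chart.js-style definition into a Quickchart.io image URL.
// The sample definition stands in for the one returned by OpenAI's structured output.
const chartDefinition = {
  type: "bar",
  data: {
    labels: ["Q1", "Q2", "Q3", "Q4"],
    datasets: [{ label: "Revenue", data: [120, 90, 150, 180] }],
  },
};

const chartUrl =
  "https://quickchart.io/chart?c=" +
  encodeURIComponent(JSON.stringify(chartDefinition));

// Append the image to the agent's text answer, e.g. as a markdown image.
const response = `Here is the breakdown you asked for:\n\n![chart](${chartUrl})`;
console.log(response);
```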
by Mark Shcherbakov
Video Guide
I prepared a detailed guide that shows the whole process of building a resume analyzer.

Who is this for?
This workflow is ideal for recruitment agencies, HR professionals, and hiring managers looking to automate the initial screening of CVs. It is especially useful for organizations handling large volumes of applications and seeking to streamline their recruitment process.

What problem does this workflow solve?
Manually screening resumes is time-consuming and prone to human error. This workflow automates the process, providing consistent and objective analysis of CVs against job descriptions. It helps filter out unsuitable candidates early, reducing workload and improving the overall efficiency of the recruitment process.

What this workflow does
This workflow automates the resume screening process using OpenAI for analysis. It provides a matching score, a summary of candidate suitability, and key insights into why the candidate fits (or doesn't fit) the job.
1. Retrieve Resume: The workflow downloads CVs from a direct link (e.g., Supabase storage or Dropbox).
2. Extract Data: Extracts text data from PDF or DOC files for analysis.
3. Analyze with OpenAI: Sends the extracted data and job description to OpenAI to generate a matching score, summarize candidate strengths and weaknesses, and provide actionable insights into their suitability for the job.

Setup

Preparation
- Create Accounts: n8n for workflow automation, OpenAI for AI-powered CV analysis.
- Get CV Link: Upload CV files to Supabase storage or Dropbox to generate a direct link for processing.
- Prepare Artifacts for OpenAI:
  - Define Metrics: Identify the metrics you want from the analysis (e.g., matching percentage, strengths, weaknesses).
  - Generate JSON Schema: Use OpenAI to structure responses, ensuring compatibility with your database.
  - Write a Prompt: Provide OpenAI with a clear and detailed prompt to ensure accurate analysis.

n8n Scenario
- Download File: Fetch the CV using its direct URL.
- Extract Data: Use n8n's PDF or text extraction nodes to retrieve text from the CV.
- Send to OpenAI: POST to OpenAI's API for analysis, including the extracted CV data and the job description, and use a JSON Schema to structure the response.

Summary
This workflow provides a seamless, automated solution for CV screening, helping recruitment agencies and HR teams save time while maintaining consistency in candidate evaluation. It enables organizations to focus on the most suitable candidates, improving the overall hiring process.
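As a sketch of the "Send to OpenAI" step, here is what a structured-output request could look like; the schema fields (match_score, strengths, weaknesses) are illustrative, not necessarily the exact ones used in the template.

```typescript
// Sketch: ask OpenAI to score a CV against a job description with a structured response.
// The schema below (match_score, strengths, weaknesses) is illustrative only.
async function analyzeCv(cvText: string, jobDescription: string, apiKey: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-2024-08-06",
      messages: [
        { role: "system", content: "Score how well the CV matches the job description." },
        { role: "user", content: `Job description:\n${jobDescription}\n\nCV:\n${cvText}` },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "cv_analysis",
          schema: {
            type: "object",
            properties: {
              match_score: { type: "number" },
              strengths: { type: "array", items: { type: "string" } },
              weaknesses: { type: "array", items: { type: "string" } },
            },
            required: ["match_score", "strengths", "weaknesses"],
            additionalProperties: false,
          },
          strict: true,
        },
      },
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content);
}
```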
by KumoHQ
Who is this template for?
This workflow template is designed for any professional seeking relevant data from a database using natural language.

How it works
Each time a user asks a question through the n8n chat interface, the workflow runs. The message is processed by the AI Agent using the relevant tools - Execute SQL Query, Get DB Schema and Tables List, and Get Table Definition - as required. The Agent uses these tools to form and run the SQL queries necessary to answer the question. Once the AI Agent has the data, it uses it to form an answer and returns it to the user.

Set up instructions
Complete the Set up credentials step when you first open the workflow. You'll need PostgreSQL credentials and an OpenAI API key.
Template was created in n8n v1.77.0
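For reference, the schema-inspection tools typically boil down to standard information_schema queries against PostgreSQL. A hedged sketch of the kind of SQL those tools might run (not necessarily the exact queries in the template):

```typescript
// Sketch: the kind of introspection queries the agent's tools could run on PostgreSQL.
// These are standard information_schema queries, not necessarily the template's exact ones.
const listTablesSql = `
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
  ORDER BY table_schema, table_name;
`;

// Table definition: columns, types, and nullability for one table.
// In real use, pass the table name as a bound parameter rather than interpolating it.
const tableDefinitionSql = (tableName: string) => `
  SELECT column_name, data_type, is_nullable
  FROM information_schema.columns
  WHERE table_name = '${tableName}'
  ORDER BY ordinal_position;
`;

console.log(listTablesSql, tableDefinitionSql("orders"));
```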
by Joseph LePage
Transform your local n8n instance into a powerful chat interface using any local & private Ollama model, with zero cloud dependencies ☁️. This workflow creates a structured chat experience that processes messages locally through a language model chain and returns formatted responses 💬.

How it works 🔄
💭 Chat messages trigger the workflow
🧠 Messages are processed through Llama 3.2 via Ollama (or any other Ollama-compatible model)
📊 Responses are formatted as structured JSON
⚡ Error handling ensures robust operation

Set up steps 🛠️
📥 Install n8n and Ollama
⚙️ Download the Llama 3.2 model (or another model)
🔑 Configure Ollama API credentials
✨ Import and activate the workflow

This template provides a foundation for building AI-powered chat applications while maintaining full control over your data and infrastructure 🚀.
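Under the hood, Ollama exposes a local HTTP API on port 11434. A minimal sketch of the kind of call the model chain makes, asking for JSON-formatted output; the model name and prompt are examples, not the template's exact configuration.

```typescript
// Sketch: chat with a local Ollama model and request JSON-formatted output.
// Assumes Ollama is running locally and the llama3.2 model has been pulled.
async function chatLocally(userMessage: string) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      messages: [
        { role: "system", content: "Answer in JSON with fields 'answer' and 'confidence'." },
        { role: "user", content: userMessage },
      ],
      format: "json", // ask Ollama to constrain output to valid JSON
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return JSON.parse(data.message.content);
}
```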