by Bela
This automation first takes a screenshot of a website with the URLbox screenshot API, then sends that screenshot to the OpenAI API for analysis. You can extend it by changing how the website URLs & names are ingested into the workflow. Options as data source:

- Postgres
- Google Sheets
- Your CRM
- ...

**Setup:**

- Replace the website & URL in the Setup node
- Put in your URLbox API key
- Put in your OpenAI credentials

A sketch of the two API calls is shown below. Click here for a blog article with more information on the automation.
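For orientation, here is a minimal standalone sketch of the screenshot-then-analyze sequence, assuming URLbox's simple GET render endpoint and OpenAI's vision-capable chat API; the target URL, prompt text, and environment variable names are placeholders.

```typescript
const URLBOX_KEY = process.env.URLBOX_KEY!; // your URLbox API key
const OPENAI_KEY = process.env.OPENAI_KEY!; // your OpenAI API key
const target = "https://example.com";       // website to analyze

async function analyzeSite(): Promise<void> {
  // URLbox renders a page to PNG via GET /v1/<key>/png?url=...
  const shot = await fetch(
    `https://api.urlbox.io/v1/${URLBOX_KEY}/png?url=${encodeURIComponent(target)}`
  );
  const b64 = Buffer.from(await shot.arrayBuffer()).toString("base64");

  // Pass the screenshot to OpenAI as a base64 data URL
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{
        role: "user",
        content: [
          { type: "text", text: "Describe this website and its main offer." },
          { type: "image_url", image_url: { url: `data:image/png;base64,${b64}` } },
        ],
      }],
    }),
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

analyzeSite();
```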
by Deborah
**How it works**

This workflow shows how to set credentials dynamically using expressions. It accepts an API key via a form, then uses it in the NASA node to authenticate a request.

**Setup steps**

First, set up your NASA credential:

1. Create a new NASA credential.
2. Hover over **API Key**.
3. Toggle **Expression** on.
4. In the **API Key** field, enter `{{ $json["Enter your NASA API key"] }}`.

Then, test the workflow:

1. Get an API key from NASA.
2. Select **Test workflow**.
3. Enter your key using the form.

The workflow runs and sends you to the NASA picture of the day. For more information on expressions, refer to n8n documentation | Expressions.
by Nicolas Chourrout
This workflow automatically generates draft replies in Gmail. It's designed for anyone who manages a high volume of emails or often faces writer's block when crafting responses. Since it doesn't send the generated message directly, you're still in charge of editing and approving emails before they go out.

**How it works:**

1. **Email trigger:** activates when new emails reach the Gmail inbox.
2. **Assessment:** uses OpenAI gpt-4o and a JSON parser to determine whether a response is necessary.
3. **Reply generation:** crafts a reply with OpenAI GPT-4 Turbo.
4. **Draft integration:** after converting the text to HTML, it places the draft into the Gmail thread as a reply to the first message (see the sketch after this section).

**Setup overview (~10 minutes):**

1. **OAuth configuration** (follow the n8n instructions here):
   - Set up Google OAuth in the Google Cloud console. Make sure to add the Gmail API with the modify scope.
   - Add the Google OAuth credentials in n8n. Make sure to add the n8n redirect URI to the Google Cloud Console consent screen settings.
2. **OpenAI configuration:** add an OpenAI API key in the credentials.
3. **Tweaking the prompt:** edit the system prompt in the "Generate email reply" node to suit your needs.

**Detailed walkthrough**

Check out this blog post where I go into more detail on how I built this workflow. Reach out to me here if you need help building automations for your business.
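The draft-integration step boils down to a single Gmail API call. Here is a hedged sketch, assuming an OAuth access token with the `gmail.modify` scope; the helper name, addresses, and minimal header set are illustrative, not taken from the workflow itself.

```typescript
// Create a draft attached to an existing Gmail thread.
async function createDraftReply(
  accessToken: string,
  threadId: string,
  to: string,
  subject: string,
  html: string
) {
  // Gmail expects the full RFC 2822 message, base64url-encoded, in `raw`.
  const mime = [
    `To: ${to}`,
    `Subject: Re: ${subject}`,
    "Content-Type: text/html; charset=utf-8",
    "",
    html,
  ].join("\r\n");

  const res = await fetch("https://gmail.googleapis.com/gmail/v1/users/me/drafts", {
    method: "POST",
    headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
    // Passing threadId files the draft as a reply inside the conversation.
    body: JSON.stringify({
      message: { raw: Buffer.from(mime).toString("base64url"), threadId },
    }),
  });
  return res.json();
}
```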
by Mihai Farcas
**How it works:**

1. The workflow starts by sending a request to a website to retrieve its HTML content.
2. It then parses the HTML, extracting the relevant information.
3. The extracted data is sorted and converted into a CSV file.
4. The CSV file is attached to an email and sent to your specified address.
5. The data is simultaneously saved to both Google Sheets and Microsoft Excel for further analysis or use.

A sketch of the fetch-parse-CSV steps is shown below.

**Set-up steps:**

1. Change the website to scrape in the "Fetch website content" node.
2. Configure Microsoft Azure credentials with Microsoft Graph permissions (required for the "Save to Microsoft Excel 365" node).
3. Configure Google Cloud credentials with access to the Google Drive, Google Sheets, and Gmail APIs (the latter is required for the "Send CSV via e-mail" node).
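For reference, here is a minimal standalone sketch of steps 1-3, assuming the cheerio HTML parser and a made-up `.title` selector; swap in whatever selectors match your target site.

```typescript
import * as cheerio from "cheerio"; // npm install cheerio

async function scrapeToCsv(url: string): Promise<string> {
  const html = await (await fetch(url)).text(); // 1. retrieve the HTML
  const $ = cheerio.load(html);                 // 2. parse it

  // 3. extract matching elements, sort them, and build a CSV
  //    (header row + one quoted row per match)
  const rows = $(".title")
    .map((_, el) => $(el).text().trim())
    .get()
    .sort()
    .map((title) => `"${title.replace(/"/g, '""')}"`);
  return ["title", ...rows].join("\n");
}

scrapeToCsv("https://example.com").then(console.log);
```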
by David Roberts
The built-in Gmail node doesn't yet support embedding images within the body of the email, but you can pull this off using the HTTP node, and this template shows you how.

**Requirements**

A Gmail account

**How it works**

The workflow downloads an image, converts it into the format that the Gmail API expects (base64), packages it into a multipart MIME email, and uses the HTTP node to send it. A sketch of the MIME assembly is shown below.
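This is roughly what the template assembles and sends, written as a standalone sketch; it assumes an OAuth token with the `gmail.send` scope, and the boundary string, addresses, and Content-ID are placeholders.

```typescript
async function sendInlineImage(accessToken: string, imageUrl: string) {
  // Download the image and base64-encode it for the MIME part
  const img = Buffer.from(await (await fetch(imageUrl)).arrayBuffer());
  const boundary = "n8n-boundary";

  // multipart/related lets the HTML body reference the image part by Content-ID
  const mime = [
    "To: recipient@example.com",
    "Subject: Inline image demo",
    `Content-Type: multipart/related; boundary="${boundary}"`,
    "",
    `--${boundary}`,
    "Content-Type: text/html; charset=utf-8",
    "",
    '<p>Hello!</p><img src="cid:pic1">',
    `--${boundary}`,
    "Content-Type: image/png",
    "Content-Transfer-Encoding: base64",
    "Content-ID: <pic1>",
    "",
    img.toString("base64"),
    `--${boundary}--`,
  ].join("\r\n");

  // Gmail's REST API takes the whole message base64url-encoded in `raw`
  await fetch("https://gmail.googleapis.com/gmail/v1/users/me/messages/send", {
    method: "POST",
    headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
    body: JSON.stringify({ raw: Buffer.from(mime).toString("base64url") }),
  });
}
```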
by Jimleuk
This n8n workflow demonstrates how to automate image captioning tasks using Gemini 1.5 Pro - a multimodal LLM which can accept and analyse images. This is a really simple example of how easy it is to build and leverage powerful AI models in your repetitive tasks.

**How it works**

- For this demo, we'll import a public image from a popular stock photography website, Pexels.com, into our workflow using the HTTP Request node.
- With multimodal LLMs, there is little to preprocess other than ensuring the image dimensions fit within the LLM's accepted limits. Though not essential, we'll resize the image using the Edit Image node to achieve faster processing.
- The image is used as an input to the Basic LLM node by defining a "user message" entry with the binary (data) type.
- The LLM node has the Gemini 1.5 Pro language model attached, and we'll prompt it to generate a caption title and text appropriate for the image it sees.
- Once generated, the caption text is positioned over the original image to complete the task. We can calculate the positioning relative to the number of characters produced using the Code node (see the sketch after this section).

An example of the combined image and caption can be found here: https://res.cloudinary.com/daglih2g8/image/upload/f_auto,q_auto/v1/n8n-workflows/l5xbb4ze4wyxwwefqmnc

**Requirements**

- Google Gemini API key.
- Access to Google Drive.

**Customising the workflow**

- Not using Google Gemini? n8n's Basic LLM node supports the standard syntax for image content for models that support it - try using GPT-4o, Claude or LLaVA (via Ollama).
- Google Drive is only used for demonstration purposes. Feel free to swap this out for other triggers such as webhooks to fit your use case.
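The positioning math in the Code node could look something like this sketch; the font size, character-width ratio, and margins are invented constants for illustration, not the template's actual values.

```typescript
// Runs inside an n8n Code node: derive overlay coordinates from caption length.
const caption = $json.caption;         // text returned by Gemini
const imageWidth = 1024;               // px, known from the resize step
const fontSize = 32;                   // px, whatever the overlay text uses
const avgCharWidth = fontSize * 0.55;  // rough average glyph width

// Wrap the caption to the image width, then size the text block accordingly
const charsPerLine = Math.floor(imageWidth / avgCharWidth);
const lineCount = Math.ceil(caption.length / charsPerLine);
const boxHeight = lineCount * fontSize * 1.4; // 1.4 = line spacing

// Anchor the block near the bottom-left, with a 16 px margin
return [{ json: { x: 16, y: 1024 - boxHeight - 16, charsPerLine, lineCount } }];
```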
by Yaron Been
Transform YouTube comments into actionable insights with automated AI analysis and professional email reports. This intelligent workflow monitors your Google Sheets for YouTube video IDs, fetches comments using the YouTube API, performs comprehensive AI sentiment analysis, and delivers formatted email reports with viewer insights - helping content creators understand their audience and improve engagement.

🚀 What It Does

- **Smart Video Monitoring:** watches Google Sheets for new YouTube video IDs marked as "Pending" and triggers automated analysis
- **Complete Comment Collection:** fetches up to 100 top comments per video using the YouTube API with relevance-based ordering (see the sketch after this section)
- **AI-Powered Analysis:** uses GPT-4 to analyze comments for sentiment, themes, questions, feedback, and actionable insights
- **Professional Email Reports:** generates detailed HTML reports with statistics, sentiment breakdown, and improvement recommendations
- **Automated Status Tracking:** updates the spreadsheet status to prevent duplicate processing and maintain an organized workflow

🎯 Key Benefits

- ✅ **Deep Audience Insights:** understand what viewers really think about your content
- ✅ **Save Hours of Manual Work:** automated comment analysis vs reading hundreds of comments
- ✅ **Improve Content Strategy:** get actionable feedback for better video performance
- ✅ **Track Sentiment Trends:** monitor positive/negative feedback patterns
- ✅ **Professional Reporting:** receive formatted analysis reports via email
- ✅ **Scalable Analysis:** process multiple videos automatically

🏢 Perfect For

**Content Creators & YouTubers**
- Individual creators tracking audience engagement
- Educational channels analyzing learning feedback
- Entertainment creators understanding viewer preferences
- Business channels monitoring brand sentiment

**Marketing & Business Applications**
- **Brand Monitoring:** track sentiment on branded content and partnerships
- **Audience Research:** understand viewer demographics and preferences
- **Content Optimization:** identify what resonates with your audience
- **Competitor Analysis:** analyze comments on competitor videos (where allowed)

⚙️ What's Included

- **Complete Analytics Workflow:** ready-to-deploy YouTube comment analysis system
- **Google Sheets Integration:** simple spreadsheet-based video management
- **YouTube API Integration:** automated comment fetching with proper authentication
- **AI Analysis Engine:** GPT-4 powered sentiment and insight generation
- **Email Reporting System:** professional HTML-formatted reports
- **Status Management:** automatic processing tracking and duplicate prevention

🔧 Setup Requirements

- **n8n Platform:** cloud or self-hosted instance
- **YouTube API Credentials:** Google Cloud Console API access
- **OpenAI API:** GPT-4 access for comment analysis
- **Google Sheets:** video ID management and status tracking
- **Gmail Account:** for receiving analysis reports

📊 Required Google Sheets Structure

| ID | Video Title  | YouTube Video ID | Status    |
|----|--------------|------------------|-----------|
| 1  | My Tutorial  | dQw4w9WgXcQ      | Pending   |
| 2  | Product Demo | abc123def456     | Mail Sent |
| 3  | Weekly Vlog  | xyz789uvw012     | Draft     |

Status options: Draft → Pending → Mail Sent

📧 Sample Analysis Report

📺 YouTube Comments Analysis Report
Video: "How to Build Your First Website"

📊 Quick Statistics:
• Total Comments Analyzed: 87
• Average Likes per Comment: 3.2
• Total Replies: 156
• Sentiment Summary: Positive: 65%, Negative: 10%, Neutral: 25%

❓ Common Questions:
• "What hosting service do you recommend?"
• "Can I do this without coding experience?"
• "How much does domain registration cost?"
💡 Key Feedback Points:
• Tutorial pace is perfect for beginners
• More examples of finished websites requested
• Viewers want follow-up video on advanced features

🎯 Actionable Insights:
• Create hosting comparison video
• Add timestamps for different skill levels
• Consider beginner-friendly series expansion

🎨 Customization Options

- **Analysis Depth:** adjust the AI prompts for different analysis focuses (engagement, education, entertainment)
- **Comment Limits:** modify the maximum comments processed (default: 100; AI analysis: 50)
- **Report Recipients:** send reports to multiple team members or clients
- **Custom Metrics:** add specific analysis criteria for your content niche
- **Multi-Channel:** process videos from multiple YouTube channels
- **Scheduling:** set up regular analysis of your latest videos

🏷️ Tags & Categories

#youtube-analytics #comment-analysis #content-creator-tools #ai-sentiment-analysis #video-insights #audience-research #youtube-api #content-optimization #social-media-analytics #creator-economy #video-marketing #engagement-analysis #content-strategy #ai-reporting #youtube-automation

💡 Use Case Examples

- **Educational Channel:** analyze tutorial comments to identify confusing concepts and improve teaching methods
- **Product Reviews:** monitor sentiment on review videos to understand customer satisfaction trends
- **Entertainment Creator:** track audience reactions to different content formats and optimize future videos
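For the comment-collection step, the underlying YouTube Data API v3 call looks like this sketch; it assumes a plain API key, and the flattened output fields are simply the ones an AI analysis prompt is likely to need.

```typescript
async function fetchTopComments(videoId: string, apiKey: string) {
  const params = new URLSearchParams({
    part: "snippet",
    videoId,
    maxResults: "100",  // the API caps a single page at 100 threads
    order: "relevance", // top comments first
    key: apiKey,
  });
  const res = await fetch(
    `https://www.googleapis.com/youtube/v3/commentThreads?${params}`
  );
  const data = await res.json();

  // Keep only the fields the sentiment analysis needs
  return data.items.map((item: any) => ({
    text: item.snippet.topLevelComment.snippet.textDisplay,
    likes: item.snippet.topLevelComment.snippet.likeCount,
    replies: item.snippet.totalReplyCount,
  }));
}
```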
by Jay Hartley
**What this workflow does**

This workflow allows you to monitor multiple GitHub repos simultaneously without polling, thanks to webhooks. It lets you programmatically add and delete repos on your watchlist to make management convenient.

**Description**

- Can monitor multiple repos simultaneously.
- Programmatically register or unregister repos from a list. No need for manual work.
- Webhook notifications mean no constant polling is necessary.

**Setup**

*1. Creating credentials on GitHub*

Generate a personal access token on GitHub by following these steps:
Right-hand side of page -> Settings -> scroll to bottom -> Developer Settings -> Personal Access Token -> Tokens (classic) -> Generate New Token.

Give it these scopes:
- admin:repo_hook
- repo (if you want to use it for your own private repo)

If you need more help, see here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens

*2. Setting credentials in n8n*

In **Register Github Webhook**:
- Authentication -> Generic Credential Type
- Generic Auth Type -> Header Auth
- Header Auth -> Create New Credential with Name set to 'Authorization' and Value set to 'Bearer ' followed by your personal access token. (You can reuse this for **Delete Github Webhook** and **Get Existing Webhooks**.)

Now in **Register Github Webhook**, scroll down to Send Body -> JSON and, inside the JSON, change the value of "url" to the webhook address given as Production URL in the **Webhook Trigger** node. (A sketch of the underlying API call appears at the end of this section.)

*3. Notification settings*

In the third row, link up the **Webhook Trigger** to any API of your choice. Slack and Telegram are given as examples. You can also format the notification message as you wish.

Setup time: roughly 10 minutes.
Instructions video: https://vimeo.com/1013473758

**Test**

*1. Register webhooks*

In **Repos to Monitor**, add any repo you want to monitor changes for. Disable **Webhook Trigger**, click **Test Workflow**, and if your GitHub credentials were set correctly, it will automatically register the webhooks. You can test this by running the single node **Get Existing Webhooks** and confirming it outputs the repo addresses.

*2. Handle GitHub events*

Now that you have registered the webhooks, re-enable **Webhook Trigger** and activate the workflow. Make a commit to any of the registered repos. Confirm that the notification went through. That's it!
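Under the hood, the Register Github Webhook node performs a call like this sketch; it assumes a classic personal access token with the admin:repo_hook scope, and owner/repo plus the n8n Production URL are placeholders.

```typescript
async function registerWebhook(token: string, owner: string, repo: string, n8nUrl: string) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/hooks`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "web",      // GitHub's generic webhook type
      active: true,
      events: ["push"], // fire on commits
      config: { url: n8nUrl, content_type: "json" },
    }),
  });
  return res.json();    // includes the hook id needed to unregister later
}
```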
by simonscrapes
**What this workflow does:**

This flow uses an AI node to generate seed keywords to focus your SEO efforts on, based on your ideal customer profile. You can use these keywords to form part of your SEO strategy.

**Outputs:**

- List of 20 seed keywords

**Setup**

- Fill in the Set Ideal Customer Profile (ICP) node
- Connect your credentials
- Replace the "Connect to your own database" node with your own database

A sketch of the AI step is shown below.

**Pre-requisites / Dependencies**

- You know your ideal customer profile (ICP)
- An AI API account (either OpenAI or Anthropic recommended)

More templates and n8n workflows >>> @simonscrapes
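Conceptually, the AI node's job reduces to one chat-completion call like this sketch; the prompt wording is invented for illustration rather than copied from the template, and OpenAI is assumed as the provider.

```typescript
async function seedKeywords(apiKey: string, icp: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content: "You are an SEO strategist. Return exactly 20 seed keywords, one per line, no numbering.",
        },
        { role: "user", content: `Ideal customer profile: ${icp}` },
      ],
    }),
  });
  const data = await res.json();
  // One keyword per line -> string array
  return data.choices[0].message.content.split("\n").filter(Boolean);
}
```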
by Angel Menendez
**Enhance Security Operations with the Qualys Slack Shortcut Bot!**

Our Qualys Slack Shortcut Bot is strategically designed to facilitate immediate security operations directly from Slack. This powerful tool allows users to initiate vulnerability scans and generate detailed reports through simple Slack interactions, streamlining the process of managing security assessments.

**Workflow Highlights:**

- **Interactive Modals:** utilizes Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
- **Dynamic Workflow Execution:** integrates seamlessly with Qualys to execute vulnerability scans and create reports based on user-specified parameters.
- **Real-Time Feedback:** offers instant feedback within Slack, updating users about the status of their requests and delivering reports directly through Slack channels.

**Operational Flow:**

- **Parse Webhook Data:** captures and parses incoming data from Slack to understand user commands accurately (see the payload-parsing sketch at the end of this section).
- **Execute Actions:** depending on the user's selection, the workflow triggers other sub-workflows like 'Qualys Start Vulnerability Scan' or 'Qualys Create Report' for detailed processing.
- **Respond to Slack:** ensures that every interaction is acknowledged, maintaining a smooth user experience by managing modal popups and sending appropriate responses.

**Setup Instructions:**

1. Verify that the Slack and Qualys API integrations are correctly configured for seamless interaction.
2. Customize the modal interfaces to align with your organization's operational protocols and security policies.
3. Test the workflow to ensure that it responds accurately to Slack commands and that the integration with Qualys is functioning as expected.

**Need Assistance?**

Explore our Documentation or get help from the n8n Community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive management of vulnerabilities and streamlined reporting.

To handle the actual processing of requests, you will also need to deploy these two subworkflows:

- Qualys Start Vulnerability Scan
- Qualys Create Report

To simplify deployment, use this Slack App manifest to quickly create an app with the correct permissions:

```json
{
  "display_information": {
    "name": "Qualys n8n Bot",
    "description": "n8n Integration for Qualys",
    "background_color": "#2a2b2e"
  },
  "features": {
    "bot_user": {
      "display_name": "Qualys n8n Bot",
      "always_online": false
    },
    "shortcuts": [
      {
        "name": "Scan Report Generator",
        "type": "global",
        "callback_id": "qualys-scan-report",
        "description": "Generate a report from the latest scan to review vulnerabilities and compliance."
      },
      {
        "name": "Launch Qualys VM Scan",
        "type": "global",
        "callback_id": "trigger-qualys-vmscan",
        "description": "Start a Qualys Vulnerability scan from the comfort of your Slack Workspace"
      }
    ]
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "commands",
        "channels:join",
        "channels:history",
        "channels:read",
        "chat:write",
        "chat:write.customize",
        "files:read",
        "files:write"
      ]
    }
  },
  "settings": {
    "interactivity": {
      "is_enabled": true,
      "request_url": "Replace everything inside the double quotes with your workflow webhook url, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
      "message_menu_options_url": "Replace everything inside the double quotes with your workflow message options webhook url, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad"
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
```
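On the n8n side, the Parse Webhook Data step deals with Slack's interactivity format: Slack posts form-encoded data whose `payload` field is a JSON string. A minimal parsing sketch, assuming the documented shortcut payload shape:

```typescript
function parseSlackInteraction(body: string) {
  // Slack sends application/x-www-form-urlencoded with the JSON under `payload`
  const payload = JSON.parse(new URLSearchParams(body).get("payload") ?? "{}");

  return {
    type: payload.type,              // e.g. "shortcut" or "view_submission"
    callbackId: payload.callback_id, // "qualys-scan-report" | "trigger-qualys-vmscan"
    triggerId: payload.trigger_id,   // required to open a modal (valid ~3 seconds)
    user: payload.user?.username,
  };
}
```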
by Mark Shcherbakov
**Video Guide**

I prepared a detailed guide that shows the whole process of building a call analyzer.

**Who is this for?**

This workflow is ideal for sales teams, customer support managers, and online education services that conduct follow-up calls with clients. It's designed for those who want to leverage AI to gain deeper insights into client needs and upsell opportunities from recorded calls.

**What problem does this workflow solve?**

Many follow-up sales calls lack structured analysis, making it challenging to identify client needs, gauge interest levels, or uncover upsell opportunities. This workflow enables automated call transcription and AI-driven analysis to generate actionable insights, helping teams improve sales performance, refine client communication, and streamline upselling strategies.

**What this workflow does**

This workflow transcribes and analyzes sales calls using AssemblyAI, OpenAI, and Supabase to store structured data. The workflow processes recorded calls as follows:

1. **Transcribe call with AssemblyAI:** converts audio into text with speaker labels for clarity.
2. **Analyze transcription with OpenAI:** using a predefined JSON schema, OpenAI analyzes the transcription to extract metrics like client intent, interest score, upsell opportunities, and more.
3. **Store and access results in Supabase:** stores both transcription and analysis data in a Supabase database for further use and display in interfaces.

**Setup**

*Preparation*

1. **Create accounts:** set up accounts for n8n, Supabase, AssemblyAI, and OpenAI.
2. **Get call link:** upload audio files to public Supabase storage or Dropbox to generate a direct link for transcription.
3. **Prepare artifacts for OpenAI:**
   - **Define metrics:** identify the business metrics you want to track from call analysis, such as client needs, interest score, and upsell potential.
   - **Generate JSON schema:** use GPT to design a JSON schema for structuring OpenAI's responses, enabling efficient storage, analysis, and display.
   - **Create analysis prompt:** write a detailed prompt for GPT to analyze calls based on your metrics and JSON schema.

*Scenario 1: Transcribe call with AssemblyAI*

- Header authentication: set Authorization with your AssemblyAI API key.
- URL: POST to https://api.assemblyai.com/v2/transcript/.
- Parameters:
  - audio_url: direct URL of the audio file.
  - webhook_url: URL for an n8n webhook to receive the transcription result.
- Additional settings:
  - speaker_labels (true/false): enables speaker diarization.
  - speakers_expected: specify the expected number of speakers.
  - language_code: set the language (default: en_us).

*Scenario 2: Process transcription with OpenAI*

- **Webhook configuration:** set up a POST webhook to receive AssemblyAI's transcription data.
- **Get transcription:**
  - Header authentication: set Authorization with your AssemblyAI API key.
  - URL: GET https://api.assemblyai.com/v2/transcript/<transcript_id>.
- **Send to OpenAI:**
  - URL: POST to https://api.openai.com/v1/chat/completions.
  - Header authentication: set Authorization with your OpenAI API key.
  - Body parameters:
    - Model: use gpt-4o-2024-08-06 for JSON Schema support, or gpt-4o-mini for a less costly option.
    - Messages: system contains the main analysis prompt; user contains the combined speakers' utterances to analyze, in text format.
    - Response format: type json_schema, with your JSON schema for structured responses.
- **Save results in Supabase:**
  - Operation: create a new record.
  - Table name: demo_calls.
  - Fields: Input (transcription text, audio URL, and transcription ID) and Output (parsed JSON response from OpenAI's analysis).

A condensed sketch of the two API calls is shown below.
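Here is a hedged sketch of Scenario 1's transcription request and Scenario 2's OpenAI call, assuming API keys in environment variables; the analysis prompt and the JSON schema are stubs, not the template's real artifacts.

```typescript
async function transcribe(audioUrl: string, n8nWebhook: string) {
  const res = await fetch("https://api.assemblyai.com/v2/transcript", {
    method: "POST",
    headers: { Authorization: process.env.ASSEMBLYAI_KEY!, "Content-Type": "application/json" },
    body: JSON.stringify({
      audio_url: audioUrl,
      webhook_url: n8nWebhook, // AssemblyAI calls back here when done
      speaker_labels: true,    // diarization, so utterances carry speakers
    }),
  });
  return res.json();           // contains the transcript id
}

async function analyze(utterances: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-2024-08-06",
      messages: [
        { role: "system", content: "Analyze this sales call against the defined metrics." },
        { role: "user", content: utterances },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {           // stub schema: replace with your generated one
          name: "call_analysis",
          schema: {
            type: "object",
            properties: { interest_score: { type: "number" } },
            required: ["interest_score"],
            additionalProperties: false,
          },
          strict: true,
        },
      },
    }),
  });
  return (await res.json()).choices[0].message.content;
}
```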
by Tom Sekula
**Who is this workflow for?**

This workflow is the baseline workflow for anyone who needs to automate the process of posting 1-4 images to Bluesky using the Bluesky API. It is ideal for anyone looking to streamline their social media posting process, saving time and ensuring consistent content delivery.

**Use Case / Problem Solved**

Manually posting images and captions on Bluesky can be time-consuming, especially for businesses and content creators managing multiple accounts. This workflow automates the process from image preparation to publishing, reducing manual effort and increasing efficiency.

**What this workflow does**

1. **Trigger initialization:** the workflow starts with a manual trigger that can be adapted to other triggers (e.g., HTTP webhook or schedule).
2. **Set parameters:** a node sets essential parameters, such as the Bluesky account ID, image URLs, and caption.
3. **Prepare Bluesky session:** a node creates the authenticated session data used by the upload and post operations later in the workflow.
4. **Publish media:** nodes retrieve image files from the specified URLs and upload them as blobs to Bluesky, a necessary prerequisite for a Bluesky post to have images attached.
5. **Post text caption + images:** a node makes the final call to the Bluesky API, including the text caption and the relevant image references. (A sketch of the session/upload/post sequence is shown below.)

**Setup**

1. Sign in to Bluesky and create an App Password.
2. Add your username and App Password to the Define Credentials node.
3. Set the caption (text content) of your post in the Set Caption node.
4. Set 1-4 image URLs in the Set Images node.
5. Adapt the initial trigger as needed to fit your workflow's requirements (e.g., schedule, webhook).
6. Adapt the caption and images nodes to accept dynamic parameters.

**Limitations**

- This workflow assumes a minimum of 1 image URL to function. If you want a text-only post, remove the whole embed section from the JSON in the last Post to Bluesky node, as well as the relevant image attachment nodes.
- The 300-character limit in Bluesky includes the caption + hashtags + image alt text. Going over 300 will return a `Record/description must not be longer than 300 graphemes` error.
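A minimal sketch of the session -> uploadBlob -> createRecord sequence the workflow performs, assuming the public bsky.social PDS, a single JPEG image, and an App Password; the alt text is left empty here but counts toward the 300-grapheme limit.

```typescript
const PDS = "https://bsky.social";

async function postWithImage(handle: string, appPassword: string, imageUrl: string, caption: string) {
  // 1. Create a session (the "Prepare Bluesky Session" step)
  const session = await (await fetch(`${PDS}/xrpc/com.atproto.server.createSession`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ identifier: handle, password: appPassword }),
  })).json();

  // 2. Upload the raw image bytes as a blob (the "Publish Media" step)
  const img = await fetch(imageUrl);
  const blobRes = await (await fetch(`${PDS}/xrpc/com.atproto.repo.uploadBlob`, {
    method: "POST",
    headers: { Authorization: `Bearer ${session.accessJwt}`, "Content-Type": "image/jpeg" },
    body: await img.arrayBuffer(),
  })).json();

  // 3. Create the post record, embedding the returned blob reference
  return (await fetch(`${PDS}/xrpc/com.atproto.repo.createRecord`, {
    method: "POST",
    headers: { Authorization: `Bearer ${session.accessJwt}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      repo: session.did,
      collection: "app.bsky.feed.post",
      record: {
        $type: "app.bsky.feed.post",
        text: caption,
        createdAt: new Date().toISOString(),
        embed: {
          $type: "app.bsky.embed.images",
          images: [{ alt: "", image: blobRes.blob }],
        },
      },
    }),
  })).json();
}
```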