by Angel Menendez
Enhance Security Operations with the Qualys Slack Shortcut Bot! Our Qualys Slack Shortcut Bot is designed to facilitate immediate security operations directly from Slack. This tool lets users initiate vulnerability scans and generate detailed reports through simple Slack interactions, streamlining the management of security assessments.

**Workflow Highlights:**
- **Interactive Modals**: Uses Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
- **Dynamic Workflow Execution**: Integrates seamlessly with Qualys to execute vulnerability scans and create reports based on user-specified parameters.
- **Real-Time Feedback**: Offers instant feedback within Slack, updating users on the status of their requests and delivering reports directly through Slack channels.

**Operational Flow:**
- **Parse Webhook Data**: Captures and parses incoming data from Slack to interpret user commands accurately.
- **Execute Actions**: Depending on the user's selection, the workflow triggers sub-workflows such as 'Qualys Start Vulnerability Scan' or 'Qualys Create Report' for detailed processing.
- **Respond to Slack**: Acknowledges every interaction, managing modal popups and sending appropriate responses to maintain a smooth user experience.

**Setup Instructions:**
1. Verify that the Slack and Qualys API integrations are correctly configured for seamless interaction.
2. Customize the modal interfaces to align with your organization's operational protocols and security policies.
3. Test the workflow to ensure it responds accurately to Slack commands and that the Qualys integration functions as expected.

**Need Assistance?**
Explore our Documentation or get help from the n8n Community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive vulnerability management and streamlined reporting.

To handle the actual processing of requests, you will also need to deploy these two sub-workflows:
- Qualys Start Vulnerability Scan
- Qualys Create Report

To simplify deployment, use this Slack app manifest to quickly create an app with the correct permissions:

{
  "display_information": {
    "name": "Qualys n8n Bot",
    "description": "n8n Integration for Qualys",
    "background_color": "#2a2b2e"
  },
  "features": {
    "bot_user": {
      "display_name": "Qualys n8n Bot",
      "always_online": false
    },
    "shortcuts": [
      {
        "name": "Scan Report Generator",
        "type": "global",
        "callback_id": "qualys-scan-report",
        "description": "Generate a report from the latest scan to review vulnerabilities and compliance."
      },
      {
        "name": "Launch Qualys VM Scan",
        "type": "global",
        "callback_id": "trigger-qualys-vmscan",
        "description": "Start a Qualys Vulnerability scan from the comfort of your Slack Workspace"
      }
    ]
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "commands",
        "channels:join",
        "channels:history",
        "channels:read",
        "chat:write",
        "chat:write.customize",
        "files:read",
        "files:write"
      ]
    }
  },
  "settings": {
    "interactivity": {
      "is_enabled": true,
      "request_url": "Replace with your workflow webhook URL, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
      "message_menu_options_url": "Replace with your workflow message options webhook URL, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad"
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
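When a user runs one of these shortcuts, Slack POSTs a URL-encoded `payload` form field to the configured request_url. Decoded, it looks roughly like the trimmed sketch below (values are illustrative); the `trigger_id` is what a workflow needs in order to open a modal via Slack's `views.open` method:

```json
{
  "type": "shortcut",
  "callback_id": "trigger-qualys-vmscan",
  "trigger_id": "1234567890.123456789012.abcdef1234567890abcdef1234567890",
  "user": { "id": "U0123456789", "username": "analyst" },
  "team": { "id": "T0123456789", "domain": "your-workspace" }
}
```

The Parse Webhook Data step reads the `callback_id` to decide which sub-workflow to run.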
by Yang
👥 Who is this for?
This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate the extraction and delivery of clean, sorted data.

🧩 What problem is this solving?
Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content using Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email—all triggered automatically when a new URL is added to Google Sheets.

⚙️ What this workflow does
This template automates the entire content scraping and delivery process:
1. Watches a Google Sheet for new URLs
2. Scrapes the HTML content of the given webpage using Dumpling AI
3. Uses CSS selectors in the HTML node to extract each book from the page
4. Splits the HTML array into individual items
5. Extracts the book title and price from each HTML block
6. Sorts the books in descending order based on price
7. Converts the sorted data to a CSV file
8. Sends the CSV via email using Gmail

🛠️ Setup
**Google Sheets**
- Create a sheet titled something like URLs
- Add your product listing URLs (e.g., http://books.toscrape.com)
- Connect the Google Sheets trigger node to your sheet
- Ensure you have proper credentials connected

**Dumpling AI**
- Create an account at Dumpling AI and generate your API key
- Set the HTTP method to POST and pass the URL dynamically from the Google Sheet
- Use Header Auth to include your API key in the request header
- Make sure "cleaned": "True" is included in the body for optimized HTML output (see the request sketch at the end of this section)

**HTML Node**
- The first HTML node extracts the main book container blocks using: .row > li
- The second HTML node parses out the individual fields:
  - title: h3 > a (via the title attribute)
  - price: .price_color

**Sort Node**
- Sorts books by price in descending order
- Note: the price is extracted as a string; ensure it's parsable if you plan to use numeric filtering later

**Convert to CSV**
- The JSON data is passed into a Convert node and transformed into a CSV file

**Gmail**
- Sends the CSV as an attachment to a designated email

🔄 How to customize this workflow
- **Extract more data**: Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links
- **Switch destinations**: Replace Gmail with Slack, Google Drive, Dropbox, or another platform
- **Adjust sorting**: Sort alphabetically or based on another extracted value
- **Use a different source**: As long as the site structure is consistent, this can scrape any listing-like page
- **Trigger differently**: Use a webhook, form submission, or schedule trigger instead of Google Sheets

⚠️ Dependencies and Notes
- This workflow uses Dumpling AI to perform the web scraping. It requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
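As a reference for the Dumpling AI setup above, the HTTP Request node's JSON body can be as small as the sketch below. The exact endpoint path is an assumption here (check your Dumpling AI dashboard for the scrape endpoint), and the expression assumes the URL column name from your Google Sheets trigger:

```json
{
  "url": "{{ $json.URL }}",
  "cleaned": "True"
}
```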
by mariskarthick
QuantumDefender AI is a next-generation intelligent cybersecurity assistant designed to harness the symbolic promise of quantum computing alongside cutting-edge AI capabilities. This sophisticated agent empowers SOC analysts, red teamers, and security researchers with rapid threat investigation, operational automation, and intelligent command execution—all driven by GPT-4 and integrated tools, accessible through Telegram or any other medium.

🔑 Key Features:
- **Expert-Level Cybersecurity Research & Analysis**: Leverages powerful AI models to deliver clean, detailed, domain-specific insights across detection, remediation, and offensive security.
- **Command & Control**: Executes Linux shell commands, autonomous scripts, and system operations securely in isolated environments.
- **Real-Time Web Intelligence**: Uses the integrated Langsearch API to provide timely internet research with contextual relevance.
- **Calendar & Scheduling Automation**: Manages Google Calendar events, or any similar application's events (create, update, delete, retrieve), dynamically from chat.
- **Multi-Tool Orchestration**: Combines calculator functions, internet searches, command execution, and messaging for comprehensive operational support.
- **Telegram-Native Chatbot**: Delivers an adaptive, memory-informed, and interactive conversational experience with immediate typing indicators and high responsiveness.
- **Conversation & Session Management**:
  - Maintains context-aware, session-based memory to enable smooth, multi-turn dialogues with individual users.
  - Sends "typing…" indicators during processing to ensure an interactive, user-friendly chat experience.
  - Operates exclusively within Telegram, delivering rich, timely responses and leveraging all Telegram bot capabilities.
- **Execution Intelligence & Safety**:
  - Fully autonomous in deciding which tools to invoke, how frequently, and in what sequence to fulfill user requests comprehensively and responsibly.
  - Operates within a secure temporary folder environment to contain all command executions safely and avoid persistent or harmful side effects.
  - Enforces strict safety protocols to avoid running malicious or destructive commands, maintaining ethical standards and compliance.

Use Cases:
- Cybersecurity researchers and operators seeking an intelligent assistant to accelerate investigations and automate routine tasks.
- Red team professionals requiring on-the-fly command execution and information gathering integrated with tactical chat interactions.
- SOC teams aiming to augment their alert triage and incident handling workflows with AI-powered analysis and action.
- Anyone looking for a robust multi-tool AI chatbot integrated with real-world operational capabilities.

Setup Requirements:
- OpenAI API key for GPT-4.1-nano language processing.
- Telegram Bot API credentials with a proper webhook setup to receive and respond to messages.
- Google OAuth credentials for Calendar integration, if calendar features are used.
- SSH access credentials for executing commands on remote hosts, if remote execution is enabled.
- Internet connectivity for the Langsearch web search API.

Customization & Extensibility:
The workflow is built modularly with n8n's flexible node system. Users can extend it by adding more tools, integrating other services (ticketing, threat intel, scanning tools), or modifying the interaction logic to suit specialized operational needs and environments.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
by Mark Shcherbakov
Video Guide
I prepared a detailed video guide that shows the whole process of building a call analyzer.

Who is this for?
This workflow is ideal for sales teams, customer support managers, and online education services that conduct follow-up calls with clients. It's designed for those who want to leverage AI to gain deeper insights into client needs and upsell opportunities from recorded calls.

What problem does this workflow solve?
Many follow-up sales calls lack structured analysis, making it challenging to identify client needs, gauge interest levels, or uncover upsell opportunities. This workflow enables automated call transcription and AI-driven analysis to generate actionable insights, helping teams improve sales performance, refine client communication, and streamline upselling strategies.

What this workflow does
This workflow transcribes and analyzes sales calls using AssemblyAI and OpenAI, with Supabase for structured storage. Recorded calls are processed as follows:
1. **Transcribe Call with AssemblyAI**: Converts audio into text with speaker labels for clarity.
2. **Analyze Transcription with OpenAI**: Using a predefined JSON schema, OpenAI analyzes the transcription to extract metrics like client intent, interest score, upsell opportunities, and more.
3. **Store and Access Results in Supabase**: Stores both transcription and analysis data in a Supabase database for further use and display in interfaces.

Setup
**Preparation**
1. Create accounts: Set up accounts for n8n, Supabase, AssemblyAI, and OpenAI.
2. Get a call link: Upload audio files to public Supabase storage or Dropbox to generate a direct link for transcription.
3. Prepare artifacts for OpenAI:
   - Define metrics: Identify the business metrics you want to track from call analysis, such as client needs, interest score, and upsell potential.
   - Generate a JSON schema: Use GPT to design a JSON schema for structuring OpenAI's responses, enabling efficient storage, analysis, and display.
   - Create an analysis prompt: Write a detailed prompt for GPT to analyze calls based on your metrics and JSON schema.

**Scenario 1: Transcribe Call with AssemblyAI**
Set up the request:
- Header authentication: Set Authorization with your AssemblyAI API key.
- URL: POST to https://api.assemblyai.com/v2/transcript/.
- Parameters:
  - audio_url: Direct URL of the audio file.
  - webhook_url: URL of an n8n webhook to receive the transcription result.
- Additional settings:
  - speaker_labels (true/false): Enables speaker diarization.
  - speakers_expected: Specify the expected number of speakers.
  - language_code: Set the language (default: en_us).

**Scenario 2: Process Transcription with OpenAI**
1. Webhook configuration: Set up a POST webhook to receive AssemblyAI's transcription data.
2. Get the transcription:
   - Header authentication: Set Authorization with your AssemblyAI API key.
   - URL: GET https://api.assemblyai.com/v2/transcript/<transcript_id>.
3. Send to OpenAI:
   - URL: POST to https://api.openai.com/v1/chat/completions.
   - Header authentication: Set Authorization with your OpenAI API key.
   - Body parameters:
     - Model: Use gpt-4o-2024-08-06 for JSON Schema support, or gpt-4o-mini for a less costly option.
     - Messages: system contains the main analysis prompt; user contains the combined speaker utterances to analyze as text.
     - Response format: type set to json_schema, with your JSON schema for structured responses.
4. Save the results in Supabase:
   - Operation: Create a new record.
   - Table name: demo_calls.
   - Fields:
     - Input: Transcription text, audio URL, and transcription ID.
     - Output: Parsed JSON response from OpenAI's analysis.
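Putting the Scenario 1 parameters together, the JSON body of the POST to https://api.assemblyai.com/v2/transcript/ looks like this (URLs are illustrative; the webhook_url is whatever your n8n webhook node exposes):

```json
{
  "audio_url": "https://your-project.supabase.co/storage/v1/object/public/calls/call-001.mp3",
  "webhook_url": "https://n8n.yourdomain.com/webhook/assemblyai-result",
  "speaker_labels": true,
  "speakers_expected": 2,
  "language_code": "en_us"
}
```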
by ist00dent
This n8n template allows you to automatically create shortened URLs using the TinyURL API by simply sending a webhook request. It's a quick and efficient way to integrate URL shortening into your automated workflows, ideal for sharing long links in social media, emails, or other applications.

🔧 How it works
1. **Receive Link Webhook**: This node acts as the entry point for the workflow. It listens for incoming POST requests and expects a JSON body containing the url to be shortened and your api_key for TinyURL.
2. **Create TinyURL**: This node sends a POST request to the TinyURL API, passing the long URL and your API key. It can also accept optional parameters like domain, alias, and description to customize the shortened link.
3. **Respond with Shortened URL**: This node sends the response from the TinyURL API (which includes the new shortened URL) back to the service that initiated the webhook.

👤 Who is it for?
This workflow is ideal for:
- **Content Managers & Marketers**: Quickly shorten links for campaigns, social media posts, or tracking.
- **Developers**: Automate link shortening within applications or scripts.
- **Automation Enthusiasts**: Integrate a URL shortener into various n8n workflows (e.g., after generating a report, before sending a notification).
- **Anyone needing on-demand short links**: A flexible solution for ad-hoc link shortening.

📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows:

{
  "api_key": "YOUR_TINYURL_API_KEY",
  "url": "https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2",
  "domain": "tinyurl.com",          // Optional: defaults to tinyurl.com
  "alias": "myCustomAlias",         // Optional: desired custom alias for the link
  "description": "My project link"  // Optional: description for the link
}

The workflow returns the JSON response directly from the TinyURL API, which includes the short_url and other details about the newly created link.

⚙️ Setup Instructions
1. **Obtain a TinyURL API key**: Before importing, make sure you have an API key from TinyURL. You can typically get one by signing up for an account on their website.
2. **Import the workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure the webhook path**: Double-click the Receive Link Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /shorten-link).
4. **Activate the workflow**: Save and activate the workflow.

📝 Tips
- **Dynamic inputs**: The workflow dynamically uses the url, api_key, alias, and description from the incoming webhook data, making it highly flexible.
- **Error handling**: Add an Error Trigger node to catch any issues (e.g., invalid API key, malformed URL) during TinyURL creation. Configure it to send notifications or log errors for easy troubleshooting.
- **Post-shortening actions**: After generating the shortened URL, you can insert additional nodes before the Respond with Shortened URL node to perform other actions. For example, you could:
  - Save to a database: Store the original and shortened URLs in a database like Airtable, Google Sheets, or PostgreSQL.
  - Send a message: Automatically send the shortened URL via Slack, Discord, email, or SMS.
  - Update a record: Update a CRM record or project management task with the new shortened link.
- **Custom domains**: If you have a custom domain configured with your TinyURL account, change the domain parameter in the Create TinyURL node to use it.
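For reference, the Create TinyURL node maps the webhook input onto TinyURL's create endpoint (POST https://api.tinyurl.com/create, with the api_key sent as a Bearer token; treat the exact path and auth style as an assumption and confirm against TinyURL's current API docs). The outgoing body is simply:

```json
{
  "url": "https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2",
  "domain": "tinyurl.com",
  "alias": "myCustomAlias",
  "description": "My project link"
}
```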
by Niranjan G
Automated GitHub Scanner for Exposed AWS IAM Keys

Overview
This n8n workflow automatically scans GitHub for exposed AWS IAM access keys associated with your AWS account, helping security teams quickly identify and respond to potential security breaches. When compromised keys are found, the workflow generates detailed security reports and sends Slack notifications with actionable remediation steps.

🔑 Key Features
- **Automated AWS IAM Key Scanning**: Regularly checks for exposed AWS access keys on GitHub
- **Real-time Security Alerts**: Sends immediate Slack notifications when compromised keys are detected
- **Comprehensive Security Reports**: Generates detailed reports with exposure information and risk assessment
- **Actionable Remediation Steps**: Provides clear instructions for securing compromised credentials
- **Continuous Monitoring**: Maintains ongoing surveillance of your AWS environment

📋 Workflow Steps
1. **List AWS Users**: Retrieves all users from your AWS account
2. **Split Users for Processing**: Processes each user individually
3. **Get User Access Keys**: Retrieves access keys for each user
4. **Filter Active Keys Only**: Focuses only on currently active access keys
5. **Search GitHub for Exposed Keys**: Scans GitHub repositories for exposed access keys
6. **Aggregate Search Results**: Consolidates and deduplicates search findings
7. **Check For Compromised Keys**: Determines if any keys have been exposed
8. **Generate Security Report**: Creates detailed security reports for compromised keys
9. **Extract AWS Usernames**: Extracts usernames from the AWS response for notification
10. **Format Slack Alert**: Prepares comprehensive Slack notifications
11. **Send Slack Notification**: Delivers alerts with actionable information
12. **Continue Scanning**: Maintains the continuous monitoring cycle

🛠️ Setup Requirements
**Prerequisites**
- Active n8n instance
- AWS account with IAM permissions
- GitHub account/token for searching repositories
- Slack workspace for notifications

**Required Credentials**
- AWS credentials: an IAM user with permissions to list users and access keys, plus its Access Key ID and Secret Access Key
- GitHub credentials: a Personal Access Token with search permissions
- Slack credentials: a webhook URL for your notification channel

⚙️ Configuration
1. **AWS configuration**: Configure the "List AWS Users" node with your AWS credentials and ensure proper IAM permissions for listing users and access keys.
2. **GitHub configuration**: Set up the "Search GitHub for Exposed Keys" node with your GitHub token and adjust search parameters if needed.
3. **Slack configuration**: Configure the Slack node with your webhook URL and customize the notification format if desired.

🚀 Usage
**Running the Workflow**
- Manual execution: Click "Execute Workflow" to run an immediate scan
- Scheduled execution: Set up a schedule to run periodic scans (daily or weekly recommended)

**Repository Compatibility**
This workflow is compatible with both public and private GitHub repositories to which you have access. It will scan all repositories you have permission to view based on your GitHub credentials.

**Handling Alerts**
When a compromised key is detected:
1. Review the Slack notification for details about the exposure
2. Follow the recommended remediation steps:
   - Deactivate the compromised key immediately
   - Create a new key if needed
   - Investigate the exposure source
   - Update any services using the compromised key

⚠️ Disclaimer
This workflow template is provided for reference purposes only, to demonstrate how to automate AWS IAM key exposure scanning.

Please note:
- The scanning process may produce false positives, as it only matches potential AWS access key patterns
- Always verify any reported exposures manually before taking action
- Disabling or deleting access keys without proper verification could have significant negative impacts on your environment
- Understand which systems and applications rely on identified access keys before deactivating them
- This template should be customized to fit your specific environment and security policies

IMPORTANT: Use this workflow with caution and only after thoroughly understanding your AWS environment. The authors of this template are not responsible for any disruptions or damages resulting from its use.

🔒 Security Considerations
- This workflow requires access to sensitive AWS credentials
- Store all credentials securely within n8n
- Review and rotate access keys regularly

📝 Customization Options
- Adjust GitHub search parameters for more targeted scanning
- Customize the Slack notification format and content
- Modify security report generation for your specific needs
- Integrate with additional notification channels (email, MS Teams, etc.)

**Optional: Enabling Interactive Slack Buttons**
The Slack Block Kit notification format supports interactive buttons that can be implemented if you want to perform actions directly from Slack:
- **Disable Key**: can be configured to automatically disable the compromised AWS IAM access key
- **View Details**: can be set up to show additional information about the exposure
- **Acknowledge**: can be used to mark the alert as acknowledged

To make these buttons functional:
1. **Set up a Slack Socket Mode app**:
   - Create a Slack app in the Slack API Console
   - Enable Socket Mode and Interactive Components
   - Subscribe to the block_actions event to capture button clicks
2. **Create an n8n webhook endpoint**:
   - Add a new webhook node to receive Slack button click events
   - Create separate workflows for each button action
3. **Implement AWS key disabling**: For the "Disable Key" button, create a workflow that uses the n8n HTTP Request node to call the AWS IAM UpdateAccessKey API. An example HTTP request that can be implemented in n8n:
   - Method: POST
   - URL: https://iam.amazonaws.com/
   - Query parameters:
     - Action: UpdateAccessKey
     - AccessKeyId: AKIAIOSFODNN7EXAMPLE
     - Status: Inactive
     - UserName: {{$json.username}}
     - Version: 2010-05-08
4. **Update the Slack message format**:
   - Modify the Format Slack Alert node to include your webhook URL in the button action values
   - Add callback_id and action_id values to identify which button was clicked

This implementation allows for an immediate response to security incidents directly from the Slack interface, reducing response time and improving security posture.
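As a sketch of what the Format Slack Alert node can emit for those three buttons (a Block Kit actions block; the action_id and value fields are illustrative assumptions that your button-handling workflows would match on):

```json
{
  "type": "actions",
  "elements": [
    {
      "type": "button",
      "text": { "type": "plain_text", "text": "Disable Key" },
      "style": "danger",
      "action_id": "disable_key",
      "value": "AKIAIOSFODNN7EXAMPLE"
    },
    {
      "type": "button",
      "text": { "type": "plain_text", "text": "View Details" },
      "action_id": "view_details",
      "value": "report-001"
    },
    {
      "type": "button",
      "text": { "type": "plain_text", "text": "Acknowledge" },
      "action_id": "acknowledge_alert",
      "value": "alert-001"
    }
  ]
}
```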
by Stathis Askaridis
Integrate Xero with FileMaker using Webhooks

Workflow Description
This n8n workflow automates the integration between Xero and FileMaker, allowing for seamless data transfer between the two platforms. By listening for webhooks from Xero (e.g., new invoices, payments, or contacts), this workflow ensures that data is automatically sent and recorded in a FileMaker database.

Who is This For?
This workflow template is ideal for:
- **Accountants** who need a streamlined process to sync financial data between Xero and FileMaker.
- **Business Owners** looking to automate data entry and improve accuracy across their systems.
- **Developers** building solutions for clients that require integration between accounting software and databases.
- **Operations Teams** focused on minimizing manual work and improving efficiency.

Key Steps
1. **Xero Webhook Trigger**: The workflow starts by capturing events from Xero via a webhook.
2. **Data Processing**: Transforms and maps the incoming data to match FileMaker's required format.
3. **FileMaker Node**: Uses the FileMaker node to create or update records directly in the FileMaker database.
4. **Logging & Error Handling**: Tracks successful entries and manages any errors with automated alerts.

Setup Instructions
1. **Set up the Xero webhook**:
   - Create a webhook in Xero and point it to your n8n webhook node URL.
   - Configure the types of events that trigger the workflow (e.g., new invoices or payments).
   - Xero will then send test calls to verify that you perform the proper hash check (see the sketch at the end of this section).
2. **Connect the FileMaker node**:
   - Set up your FileMaker node with the appropriate credentials and database configuration.
   - Map the fields between the incoming Xero data and your FileMaker database structure.
3. **Customize data processing**: Adjust data transformations as needed to ensure compatibility with your FileMaker schema.
4. **Test and deploy**:
   - Run the workflow with sample data to ensure everything is functioning correctly.
   - Monitor the execution log to verify data transfer and make any adjustments as needed.
5. **Error handling configuration**: Configure error-handling nodes or alerts to notify you of any issues during data processing.

Benefits
This setup facilitates real-time data synchronization between Xero and FileMaker, reducing the need for manual data entry and improving overall operational efficiency.
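For context on the hash check: Xero signs every delivery by computing a base64-encoded HMAC-SHA256 of the raw request body with your webhook signing key and sending it in the x-xero-signature header; respond 200 only when your own computed hash matches, and 401 otherwise. The event payload itself looks roughly like this sketch (IDs and dates are illustrative):

```json
{
  "events": [
    {
      "resourceUrl": "https://api.xero.com/api.xro/2.0/Invoices/6f7c972f-1a2b-4c3d-9e8f-0a1b2c3d4e5f",
      "resourceId": "6f7c972f-1a2b-4c3d-9e8f-0a1b2c3d4e5f",
      "tenantId": "your-tenant-id",
      "tenantType": "ORGANISATION",
      "eventCategory": "INVOICE",
      "eventType": "UPDATE",
      "eventDateUtc": "2024-01-15T10:30:00.000"
    }
  ],
  "firstEventSequence": 1,
  "lastEventSequence": 1,
  "entropy": "S0m3R4nd0mStr1ng"
}
```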
by Jay Hartley
What this workflow does
This workflow allows you to monitor multiple GitHub repos simultaneously without polling, thanks to webhooks. It lets you programmatically add and delete repos from your watchlist, making management convenient.

Description
- Monitors multiple repos simultaneously.
- Programmatically registers or unregisters repos from a list; no manual work needed.
- Webhook notifications mean no constant polling is necessary.

Setup
**1. Creating credentials on GitHub**
Generate a personal access token on GitHub by following these steps:
- Right-hand side of page → Settings → scroll to bottom → Developer Settings → Personal Access Token → Tokens (classic) → Generate New Token
- Give it these scopes:
  - admin:repo_hook
  - repo (if you want to use it for your own private repo)
- If you need more help, see here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens

**2. Setting credentials in n8n**
In Register Github Webhook:
- Authentication → Generic Credential Type
- Generic Auth Type → Header Auth
- Header Auth → Create New Credential, with Name set to 'Authorization' and Value set to 'Bearer ' followed by your personal access token. (You can reuse this for Delete Github Webhook and Get Existing Webhooks.)
Now in Register Github Webhook, scroll down to Send Body → JSON, and inside the JSON change the value of "url" to the webhook address given as the Production URL in the Webhook Trigger node.

**3. Notification settings**
In the third row, link up the Webhook Trigger to any API of your choice. Slack and Telegram are given as examples. You can also format the notification message as you wish.

Setup time: roughly 10 minutes.
Instructions video: https://vimeo.com/1013473758

Test
**1. Register webhooks**
- In Repos to Monitor, add any repo you want to monitor changes for.
- Disable Webhook Trigger, click Test Workflow, and if your GitHub credentials were set correctly, it will automatically register the webhooks.
- You can verify this by running the single node Get Existing Webhooks and confirming it outputs the repo addresses.

**2. Handle GitHub events**
- Now that you have registered the webhooks, re-enable Webhook Trigger and activate the workflow.
- Make a commit to any of the registered repos.
- Confirm that the notification went through. That's it!
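For reference, registration goes through GitHub's create-hook endpoint (POST https://api.github.com/repos/{owner}/{repo}/hooks), with a JSON body along these lines; the events list shown here is illustrative, and the template's own body may subscribe to more than pushes:

```json
{
  "name": "web",
  "active": true,
  "events": ["push"],
  "config": {
    "url": "https://n8n.yourdomain.com/webhook/your-production-url",
    "content_type": "json"
  }
}
```

The "url" value in config is the one you point at the Webhook Trigger's Production URL in step 2.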
by ist00dent
This n8n template allows you to instantly generate QR codes from any text or URL by simply sending a webhook request. It's a versatile tool for creating dynamic QR codes for various purposes, from marketing campaigns to event registrations, directly integrated into your automated workflows.

🔧 How it works
1. **Receive Data Webhook**: This node acts as the entry point for the workflow. It listens for incoming POST requests and expects a JSON body with a data property containing the text or URL you want to encode into the QR code.
2. **Generate QR Code**: This node makes an HTTP GET request to the QR Server API (api.qrserver.com) to generate the QR code image. The content from your webhook is passed as the data parameter to the API.
3. **Respond with QR Code**: This node sends the response from the QR Server API back to the service that initiated the webhook. The QR Server API directly returns the image data, so your webhook response will be the QR code image itself.

👤 Who is it for?
This workflow is ideal for:
- **Marketers**: Generate QR codes for product links, event registrations, or promotional materials on the fly.
- **Developers**: Integrate QR code generation into applications, websites, or internal tools.
- **Event Organizers**: Create dynamic QR codes for ticketing, information access, or check-ins.
- **Businesses**: Streamline processes requiring physical-to-digital transitions, like menu access or contact sharing.
- **Automation Enthusiasts**: Add QR code generation capabilities to any workflow.

📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows:

{
  "data": "https://www.yourwebsite.com/your-specific-page-or-text-to-encode"
}

The workflow returns the QR code image directly in the response.

⚙️ Setup Instructions
1. **Import the workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. **Configure the webhook path**: Double-click the Receive Data Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /generate-qr).
3. **Customize the QR code (optional)**: Double-click the Generate QR Code node. You can adjust the size parameter in the URL (e.g., size=200x200 for a larger QR code) or add other parameters supported by the QR Server API (e.g., bgcolor, color, qzone).
4. **Activate the workflow**: Save and activate the workflow.

📝 Tips
- **Handling the image output**: Since the QR Server API directly returns the image, the webhook response will be the image data. Depending on your use case, you might want to:
  - Save to file/cloud: Insert a node (e.g., Write Binary File, Amazon S3, Google Drive) after Generate QR Code to save the image to a file system or cloud storage.
  - Embed in HTML/email: If you're building an HTML response or sending an email, you might need to convert the image data to a Base64 string or provide a URL to a saved image.
- **Error handling**: Enhance workflow robustness by adding an Error Trigger node. This allows you to catch any issues during QR code generation and set up notifications or logging.
- **Dynamic size/color**: You can extend the Receive Data Webhook to accept parameters for size, color, or bgcolor in the incoming JSON, then dynamically pass these to the url of the Generate QR Code node to create highly customizable QR codes.
- **Input validation**: For more advanced use cases, add a Function node after the webhook to validate the incoming data and ensure it's in a valid format (e.g., a URL).
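If you extend the webhook as described in the dynamic size/color tip, an incoming body might look like the sketch below; each field is then passed through as a query parameter to https://api.qrserver.com/v1/create-qr-code/ (the color values use the decimal R-G-B form the QR Server API accepts; verify the exact parameter set against the API docs):

```json
{
  "data": "https://www.yourwebsite.com/event-checkin",
  "size": "300x300",
  "color": "0-0-139",
  "bgcolor": "255-255-255",
  "qzone": 2
}
```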
by ankitkansaldev
🎬 TikTok Influencer Scraper (URL Input) via Bright Data + n8n & Sheets

A comprehensive n8n automation that scrapes TikTok influencer profiles using Bright Data's TikTok dataset and automatically saves detailed profile information to Google Sheets.

📋 Overview
This workflow provides an automated TikTok influencer data collection solution that scrapes comprehensive profile information and saves it to Google Sheets. Perfect for influencer marketing research, competitor analysis, social media monitoring, and marketing campaign planning.

✨ Key Features
- 📝 **Form-Based Input**: Simple web form to submit TikTok profile URLs
- 🤖 **Bright Data Integration**: Uses Bright Data's TikTok dataset for reliable scraping
- ⏳ **Status Monitoring**: Intelligent polling system to check scraping progress
- 🔄 **Retry Logic**: Automatic retry mechanism with 30-second intervals
- 📊 **Data Extraction**: Comprehensive profile data including engagement metrics
- 📈 **Google Sheets Storage**: Automatic data storage and organization
- ⚡ **Error Handling**: Built-in error handling and status reporting
- 🎯 **Custom Fields**: Configurable output fields for specific data needs

🎯 What This Workflow Does

**Input**
- **Profile URLs**: TikTok profile URLs submitted through the web form
- **Custom Fields**: Configurable data fields for extraction
- **Country Settings**: Geo-targeting for accurate data collection

**Processing**
1. **Form Submission**: User submits a TikTok profile URL through the web form
2. **API Trigger**: Sends profile data to Bright Data for scraping
3. **Status Polling**: Continuously checks scraping progress
4. **Wait & Retry**: Implements 30-second delays between status checks
5. **Data Retrieval**: Fetches complete profile data when ready
6. **Sheet Update**: Saves extracted data to Google Sheets
7. **Status Reporting**: Provides completion status and messages

**Output Data Points**

| Field | Description | Example |
|-------|-------------|---------|
| Account ID | Unique TikTok account identifier | @username123 |
| Nickname | Display name on profile | "John Doe" |
| Biography | Profile bio/description | "Content creator & influencer" |
| Followers | Number of followers | 1,250,000 |
| Following | Number of accounts followed | 500 |
| Likes | Total likes across all videos | 50,000,000 |
| Videos Count | Total number of videos posted | 1,200 |
| Profile URL | Direct link to TikTok profile | https://www.tiktok.com/@username |
| Profile Picture | Profile image URL | https://p16-sign-sg.tiktokcdn.com/... |
| Profile Picture HD | High-definition profile image | https://p16-sign-sg.tiktokcdn.com/... |
| Is Verified | Verification status | true/false |
| Bio Link | External link in bio | https://linktr.ee/username |
| Like Engagement Rate | Engagement rate based on likes | 5.2% |
| Comment Engagement Rate | Engagement rate based on comments | 2.1% |
| Top Videos | List of top-performing videos | [video_objects] |
| Region | Geographic region | "US" |
| Is Under Age 18 | Age status indicator | true/false |

🚀 Setup Instructions

**Prerequisites**
- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with TikTok dataset access
- Valid TikTok profile URLs for testing
- 10-15 minutes for setup

**Step 1: Import the Workflow**
1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

**Step 2: Configure Bright Data**
1. Set up Bright Data credentials:
   - In n8n: Credentials → + Add credential → HTTP Request Generic Credential
   - Name: "Bright Data API"
   - Authentication: Bearer Token
   - Token: your Bright Data API key
   - Test the connection
2. Configure the dataset:
   - Ensure you have access to the TikTok dataset (gd_l1villgoiiidt09ci)
   - Verify dataset permissions in the Bright Data dashboard
   - Check dataset limits and pricing

**Step 3: Configure Google Sheets Integration**
1. Create a Google Sheet:
   - Go to Google Sheets
   - Create a new spreadsheet named "TikTok Influencer Data"
   - Create a sheet tab named "TikTok profile by url"
   - Copy the Sheet ID from the URL: https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit
2. Set up Google Sheets credentials:
   - In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
   - Complete the OAuth setup and test the connection
3. Prepare your data sheet with columns:
   - Column A: Account ID
   - Column B: Nickname
   - Column C: Biography
   - Column D: Followers
   - Column E: Following
   - Column F: Likes
   - Column G: Videos Count
   - Column H: Profile URL
   - Column I: Is Verified
   - Column J: Bio Link
   - Column K: Like Engagement Rate
   - Column L: Comment Engagement Rate
   - Column M: Region
   - Column N: Status
   - Column O: Message

**Step 4: Update Workflow Settings**
1. Update API credentials:
   - Open the "Sends profile URLs to Bright Data to trigger scraping" node
   - Replace BRIGHT_DATA_API_KEY with your actual API key
   - Update the dataset ID if different
2. Update the Google Sheets nodes:
   - Open the "Google Sheets" node
   - Replace the document ID: 1OeqtCFm4Wek9DI5YFOWQXTpQJS-SJxC10iAPKEKkmiY
   - Select your Google Sheets credential
   - Choose the correct sheet/tab name
3. Configure form settings:
   - Open the "Search by Profile URL" node
   - Customize the form title and field labels as needed
   - Note the webhook URL for form access

**Step 5: Test & Activate**
1. Add test profiles:
   - Access the form using the webhook URL
   - Submit 1-2 TikTok profile URLs for testing
   - Use full URLs (e.g., https://www.tiktok.com/@username)
2. Test the workflow:
   - Submit a test profile through the form
   - Monitor execution in n8n
   - Verify the data appears in the Google Sheet
   - Check for any error messages

📖 Usage Guide

**Submitting TikTok Profiles**
1. Navigate to your form URL (found in the Form Trigger node)
2. Enter a TikTok profile URL in the format: https://www.tiktok.com/@username
3. Click Submit to start the scraping process
4. Wait for processing (typically 1-3 minutes)

**Understanding the Process**
The workflow follows this sequence:
1. Form Submission → Profile URL captured
2. API Trigger → Scraping job submitted to Bright Data
3. Status Polling → Checks every 30 seconds whether the data is ready
4. Data Retrieval → Fetches complete profile information
5. Sheet Update → Saves data to Google Sheets

**Monitoring Progress**
- Check n8n execution logs for real-time status
- The Bright Data dashboard shows scraping progress
- Google Sheets will populate when the data is ready
- The Status column shows "ready" when complete
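For reference, Step 2's trigger node sends a body roughly like the following to Bright Data's dataset trigger endpoint (a sketch: the input and custom_output_fields shapes mirror the parameters this template mentions, but confirm the exact request format in your Bright Data dashboard):

```json
{
  "input": [
    {
      "url": "https://www.tiktok.com/@username",
      "country": "US"
    }
  ],
  "custom_output_fields": [
    "account_id",
    "nickname",
    "followers",
    "likes",
    "videos_count",
    "is_verified"
  ]
}
```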
**Reading the Results**
Your Google Sheet will show:
- Complete TikTok profile information
- Engagement metrics and statistics
- Profile verification status
- Bio links and external connections
- Timestamp of data collection

🔧 Customization Options

**Adding More Data Points**
Edit the JSON body in the "Sends profile URLs to Bright Data" node to include additional fields:

"custom_output_fields": [
  "account_id",
  "nickname",
  "biography",
  "followers",
  "following",
  "likes",
  "videos_count",
  "language",
  "creation_time",
  "last_post_time",
  "avg_video_duration",
  "hashtags_used",
  "music_used"
]

**Modifying Input Parameters**
Customize the scraping parameters:
- **Country targeting**: Change the "country" field in the input
- **Search limits**: Adjust the "limit_per_input" value
- **Discovery method**: Modify the "discover_by" parameter
- **Error handling**: Toggle the "include_errors" setting

**Batch Processing Multiple Profiles**
To process multiple profiles simultaneously:
1. Modify the input array in the API call
2. Add multiple profile URLs in a single request
3. Implement loop logic for processing results
4. Add rate limiting between requests

**Custom Form Fields**
Enhance the form with additional inputs:
1. Open the "Search by Profile URL" node
2. Add form fields for:
   - Country selection
   - Number of videos to analyze
   - Specific date ranges
   - Custom tags or categories

🚨 Troubleshooting

**Common Issues & Solutions**
- **"Bright Data connection failed"**
  - Cause: Invalid API credentials or dataset access
  - Solution: Verify the API key in the Bright Data dashboard and check dataset permissions
- **"Profile not found or private"**
  - Cause: Invalid TikTok URL or private profile
  - Solution: Verify the profile URL format and ensure the profile is public
- **"Google Sheets permission denied"**
  - Cause: Incorrect credentials or sheet permissions
  - Solution: Re-authenticate Google Sheets and check sheet sharing settings
- **"Scraping timeout"**
  - Cause: Profile data taking too long to process
  - Solution: Increase the wait time or implement longer polling intervals
- **"Invalid dataset ID"**
  - Cause: Incorrect or expired dataset configuration
  - Solution: Check the Bright Data dashboard for the correct dataset ID
- **"Form submission failed"**
  - Cause: Webhook configuration issues
  - Solution: Verify the webhook URL and form trigger settings

**Advanced Troubleshooting**
- **Check execution logs** in n8n for detailed error messages
- **Test individual nodes** by running them separately
- **Verify data formats**: ensure URLs are properly formatted
- **Monitor API limits**: check Bright Data usage quotas
- **Add error handling**: implement try-catch logic for robust operation

📊 Use Cases & Examples

**1. Influencer Marketing Research**
Goal: Identify and analyze potential influencers for campaigns
- Research influencers in specific niches
- Analyze engagement rates and audience size
- Compare multiple influencers for campaign selection
- Track influencer growth over time

**2. Competitive Analysis**
Goal: Monitor competitors' TikTok presence and performance
- Track competitor follower growth
- Analyze content strategies and engagement
- Monitor posting frequency and timing
- Identify trending content themes

**3. Social Media Monitoring**
Goal: Track brand mentions and user-generated content
- Monitor branded hashtag usage
- Track brand advocates and micro-influencers
- Analyze sentiment and engagement patterns
- Identify trending topics in your industry

**4. Market Research Pipeline**
Goal: Gather social media intelligence for business decisions
- Analyze target audience behavior
- Study content preferences and trends
- Generate reports for stakeholders
- Support marketing strategy development

⚙ Advanced Configuration

**Rate Limiting and Performance**
To optimize for large-scale scraping:
- Adjust wait times between status checks
- Implement exponential backoff for retries
- Add batch processing for multiple profiles
- Monitor API usage to avoid limits

**Data Validation and Cleaning**
Enhance data quality with validation:
- Add data type validation for numeric fields
- Implement URL format checking
- Clean and standardize text fields
- Add data completeness checks

**Integration with Business Tools**
Connect the workflow to your existing systems:
- **CRM Integration**: Update customer records with influencer data
- **Slack Notifications**: Send alerts when new data is available
- **Database Storage**: Store data in PostgreSQL/MySQL for analysis
- **BI Tools**: Connect to Tableau/Power BI for visualization

**Webhook Integration**
For real-time updates:
- Add webhook triggers for immediate profile checks
- Integrate with external systems via webhooks
- Create API endpoints for programmatic access
- Implement authentication for secure access

📈 Performance & Limits

**Expected Performance**
- **Single Profile**: 30-60 seconds average processing time
- **Concurrent Requests**: 5-10 simultaneous (depends on your Bright Data plan)
- **Data Accuracy**: 95%+ for public TikTok profiles
- **Success Rate**: 90%+ for accessible profiles
- **Daily Capacity**: 100-1,000 profiles (depends on rate limits)

**Resource Usage**
- **Memory**: ~50 MB per execution
- **Storage**: Minimal (data stored in Google Sheets)
- **API Calls**: 3-5 Bright Data calls per profile (including status checks)
- **Bandwidth**: ~1-2 MB per profile scraped
- **Execution Time**: 1-2 minutes per profile

**Scaling Considerations**
- **Rate Limiting**: Add delays for high-volume scraping
- **Error Handling**: Implement retry logic for failed requests
- **Data Validation**: Add checks for malformed profile data
- **Monitoring**: Track success/failure rates over time
- **Cost Optimization**: Monitor API usage to control costs

🤝 Support & Community

**Getting Help**
- **n8n Community Forum**: community.n8n.io
- **Documentation**: docs.n8n.io
- **Bright Data Support**: Contact through your dashboard
- **GitHub Issues**: Report bugs and feature requests

**Contributing**
- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

📋 Quick Setup Checklist

**Before You Start**
- ☐ n8n instance running (self-hosted or cloud)
- ☐ Google account with Sheets access
- ☐ Bright Data account with TikTok dataset access
- ☐ Valid TikTok profile URLs for testing
- ☐ 15 minutes for setup

**Setup Steps**
- ☐ Import Workflow: copy the JSON and import it into n8n
- ☐ Configure Bright Data: set up API credentials and test
- ☐ Create Google Sheet: new sheet with the proper column structure
- ☐ Set up Google Sheets credentials: OAuth setup and test
- ☐ Update workflow settings: replace the sheet ID and API keys
- ☐ Test with sample profiles: submit 1-2 URLs and verify the results
- ☐ Activate workflow: enable the form trigger for production use

**Ready to Use!** 🎉
Your form URL: https://your-n8n-instance.com/form/[webhook-id]

🎯 **Happy TikTok Scraping!** This workflow provides a solid foundation for automated TikTok influencer data collection. Customize it to fit your specific needs and use cases for influencer marketing, competitive analysis, and social media research.
by assert
Who this template is for
This template is for every engineer who wants to automate their code reviews or just get a second opinion on their PR.

How it works
This workflow automatically reviews the changes in a GitLab PR using the power of AI. It triggers whenever you comment "+0" on a GitLab PR, fetches the code changes, analyzes them with GPT, and replies to the PR discussion.

Set up Steps
1. Set up a webhook for note_events in your GitLab repository (see here for how to do it)
2. Configure ChatGPT credentials
3. Comment "+0" on a merge request to trigger an automatic review by ChatGPT
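For reference, the note_events webhook delivers a payload along these lines; the workflow's trigger condition only needs the note text and the noteable type (a trimmed sketch with illustrative values):

```json
{
  "object_kind": "note",
  "project": { "id": 123, "path_with_namespace": "group/repo" },
  "object_attributes": {
    "note": "+0",
    "noteable_type": "MergeRequest"
  },
  "merge_request": {
    "iid": 42,
    "source_branch": "feature/my-change",
    "target_branch": "main"
  }
}
```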
by Don Jayamaha Jr
🕒 Evaluate Tesla (TSLA) price action and market structure on the 1-hour timeframe using 6 real-time indicators.

This sub-agent is designed to feed mid-term technical insights into the Tesla Financial Market Data Analyst Tool. It uses GPT-4.1 to interpret Alpha Vantage indicator data delivered via secure webhooks.

⚠️ This workflow is not standalone and is executed via Execute Workflow.
🔌 Requires:
- Tesla Quant Technical Indicators Webhooks Tool
- Alpha Vantage Premium API Key

🔧 Connected Indicators
This tool fetches and analyzes the latest 20 datapoints for:
- **RSI (Relative Strength Index)**
- **MACD (Moving Average Convergence Divergence)**
- **BBANDS (Bollinger Bands)**
- **SMA (Simple Moving Average)**
- **EMA (Exponential Moving Average)**
- **ADX (Average Directional Index)**

📋 Sample Output

{
  "summary": "TSLA is gaining strength on the 1-hour chart. RSI is rising, MACD has crossed bullish, and BBANDS are widening.",
  "timeframe": "1h",
  "indicators": {
    "RSI": 62.1,
    "BBANDS": {
      "upper": 176.90,
      "lower": 169.70,
      "middle": 173.30,
      "close": 176.30
    },
    "SMA": 174.20,
    "EMA": 175.60,
    "ADX": 27.5,
    "MACD": {
      "macd": 0.84,
      "signal": 0.65,
      "histogram": 0.19
    }
  }
}

🧠 Agent Components

| Component | Role |
|-----------|------|
| 1hour Data | Pulls Alpha Vantage indicator data via webhook |
| Tesla 1hour Indicators Agent | Interprets signals using a structured GPT-4.1 prompt |
| OpenAI Chat Model | GPT-4.1 LLM performs the analysis |
| Simple Memory | Maintains session context |

🛠️ Setup Instructions
1. Import the workflow into n8n and name it: Tesla_1hour_Indicators_Tool
2. Install the webhook fetcher tool
   👉 Required: Tesla_Quant_Technical_Indicators_Webhooks_Tool
   This agent expects the /1hourData webhook to return pre-cleaned data
3. Add credentials:
   - Alpha Vantage Premium API Key (via HTTP Query Auth)
   - OpenAI GPT-4.1 credentials
4. Configure for sub-agent use
   Triggered only via Execute Workflow from:
   👉 Tesla Financial Market Data Analyst Tool
   Inputs:
   - message (optional)
   - sessionId (required for memory linkage)

📌 Sticky Notes Overview
- 🟢 Trigger Setup – Activated only by the parent agent
- 📊 1h Webhook Fetcher – Calls Alpha Vantage via a secured endpoint
- 🧠 AI Agent Summary – Interprets trend/momentum from indicator data
- 🔗 GPT Model Notes – GPT-4.1 parses and explains technical alignment
- 📘 Documentation Sticky – Embedded in the canvas with a full walkthrough

🔐 Licensing & Support
© 2025 Treasurium Capital Limited Company
This tool is part of a proprietary multi-agent AI architecture. No commercial reuse or redistribution is permitted.
🔗 Author: Don Jayamaha
🔗 Templates: https://n8n.io/creators/don-the-gem-dealer/

🚀 Detect TSLA trend shifts and validate setups with 1-hour technical clarity—powered by Alpha Vantage + GPT-4.1. This tool is required by the Tesla Financial Market Data Analyst Tool.
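For context, each indicator behind the /1hourData webhook maps to a standard Alpha Vantage query such as https://www.alphavantage.co/query?function=RSI&symbol=TSLA&interval=60min&time_period=14&series_type=close&apikey=YOUR_KEY (the time period and series type here are illustrative defaults, not values confirmed by this template). The raw response that the webhook tool pre-cleans into the latest 20 datapoints looks roughly like this sketch:

```json
{
  "Meta Data": {
    "1: Symbol": "TSLA",
    "2: Indicator": "Relative Strength Index (RSI)",
    "4: Interval": "60min"
  },
  "Technical Analysis: RSI": {
    "2025-06-20 16:00": { "RSI": "62.1000" },
    "2025-06-20 15:00": { "RSI": "60.4000" }
  }
}
```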