by Harshil Agrawal
This workflow allows you to create, update, and get a webinar in GoToWebinar.

- **GoToWebinar node**: Creates a new webinar in GoToWebinar.
- **GoToWebinar1 node**: Updates the description of the webinar created in the previous node.
- **GoToWebinar2 node**: Gets the information about the webinar created earlier.
by vinci-king-01
**Amazon Keyboard Product Scraper with AI and Google Sheets Integration**

**🎯 Target Audience**
- E-commerce analysts and researchers
- Product managers tracking competitor keyboards
- Data analysts monitoring Amazon keyboard market trends
- Business owners conducting market research
- Developers building product comparison tools

**🚀 Problem Statement**
Manual monitoring of Amazon keyboard products is time-consuming and error-prone. This template solves the challenge of automatically collecting, structuring, and storing keyboard product data for analysis, enabling data-driven decision making in the competitive keyboard market.

**🔧 How it Works**
This workflow automatically scrapes Amazon keyboard products using AI-powered web scraping and stores them in Google Sheets for comprehensive analysis and tracking.

**Key Components**
- **Scheduled Trigger** - Runs the workflow at specified intervals to keep data fresh and up-to-date
- **AI-Powered Scraping** - Uses ScrapeGraphAI to intelligently extract product information from Amazon search results with natural language processing
- **Data Processing** - Transforms and structures the scraped data for optimal spreadsheet compatibility (see the sketch at the end of this template)
- **Google Sheets Integration** - Automatically saves product data to your spreadsheet with proper column mapping

**📊 Google Sheets Column Specifications**
The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| title | String | Product name and model | "Logitech MX Keys Advanced Wireless Illuminated Keyboard" |
| url | URL | Direct link to Amazon product page | "https://www.amazon.com/dp/B07S92QBCX" |
| category | String | Product category classification | "Electronics" |

**🛠️ Setup Instructions**
Estimated setup time: 10-15 minutes

**Prerequisites**
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access

**Step-by-Step Configuration**
1. **Install Community Nodes**
   - Install the ScrapeGraphAI community node: `npm install n8n-nodes-scrapegraphai`
2. **Configure ScrapeGraphAI Credentials**
   - Navigate to Credentials in your n8n instance
   - Add new ScrapeGraphAI API credentials
   - Enter your API key from the ScrapeGraphAI dashboard
   - Test the connection to ensure it's working
3. **Set up Google Sheets Connection**
   - Add Google Sheets OAuth2 credentials
   - Grant the necessary permissions for spreadsheet access
   - Select or create a target spreadsheet for data storage
   - Configure the sheet name (default: "Sheet1")
4. **Customize Amazon Search Parameters**
   - Update the websiteUrl parameter in the ScrapeGraphAI node
   - Modify search terms, filters, or categories as needed
   - Adjust the user prompt to extract additional fields if required
5. **Configure Schedule Trigger**
   - Set your preferred execution frequency (daily, weekly, etc.)
   - Choose appropriate time zones for your business hours
   - Consider Amazon's rate limits when setting frequency
6. **Test and Validate**
   - Run the workflow manually to verify all connections
   - Check Google Sheets for proper data formatting
   - Validate that all required fields are being captured

**🔄 Workflow Customization Options**

**Modify Search Criteria**
- Change the Amazon URL to target specific keyboard categories
- Add price filters, brand filters, or rating requirements
- Update search terms for different product types

**Extend Data Collection**
- Modify the user prompt to extract additional fields (price, rating, reviews)
- Add data processing nodes for advanced analytics
- Integrate with other data sources for comprehensive market analysis

**Output Customization**
- Change the Google Sheets operation from "append" to "upsert" for deduplication
- Add data validation and cleaning steps
- Implement error handling and retry logic

**📈 Use Cases**
- **Competitive Analysis**: Track competitor keyboard pricing and features
- **Market Research**: Monitor trending keyboard products and categories
- **Inventory Management**: Keep track of available keyboard options
- **Price Monitoring**: Track price changes over time
- **Product Development**: Research market gaps and opportunities

**🚨 Important Notes**
- Respect Amazon's terms of service and rate limits
- Consider implementing delays between requests for large datasets
- Regularly review and update your scraping parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly

**🔧 Troubleshooting**

Common Issues:
- **ScrapeGraphAI connection errors**: Verify API key and account status
- **Google Sheets permission errors**: Check OAuth2 scope and permissions
- **Data formatting issues**: Review the Code node's JavaScript logic
- **Rate limiting**: Adjust schedule frequency and implement delays

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
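As a rough illustration of the Data Processing step, here is a minimal sketch of what the Code node might contain. It assumes the ScrapeGraphAI node returns an array under `result.products`; the actual path depends on your user prompt, so adjust it to match your own output.

```javascript
// n8n Code node (Run Once for All Items) - a minimal sketch.
// Assumption: the ScrapeGraphAI node returns products under `result.products`;
// adjust the path to match what your user prompt actually produces.
const products = $input.first().json.result?.products ?? [];

// Emit one item per product, shaped to match the sheet columns above.
return products.map((product) => ({
  json: {
    title: product.title ?? '',
    url: product.url ?? '',
    category: product.category ?? '',
  },
}));
```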
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**News Article Scraping and Analysis with AI and Google Sheets Integration**

**🎯 Target Audience**
- News aggregators and content curators
- Media monitoring professionals
- Market researchers tracking industry news
- PR professionals monitoring brand mentions
- Journalists and content creators
- Business analysts tracking competitor news
- Academic researchers collecting news data

**🚀 Problem Statement**
Manual news monitoring is time-consuming and often misses important articles. This template solves the challenge of automatically collecting, structuring, and storing news articles from any website for comprehensive analysis and tracking.

**🔧 How it Works**
This workflow automatically scrapes news articles from websites using AI-powered extraction and stores them in Google Sheets for analysis and tracking.

**Key Components**
- **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh content
- **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract article titles, URLs, and categories from any news website
- **Data Processing**: Formats extracted data for optimal spreadsheet compatibility
- **Automated Storage**: Saves all articles to Google Sheets with metadata for easy filtering and analysis

**📊 Google Sheets Column Specifications**
The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| title | String | Article headline and title | "'My friend died right in front of me' - Student describes moment air force jet crashed into school" |
| url | URL | Direct link to the article | "https://www.bbc.com/news/articles/cglzw8y5wy5o" |
| category | String | Article category or section | "Asia" |

**🛠️ Setup Instructions**
Estimated setup time: 10-15 minutes

**Prerequisites**
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access

**Step-by-Step Configuration**
1. **Install Community Nodes**
   - Install the ScrapeGraphAI community node: `npm install n8n-nodes-scrapegraphai`
2. **Configure ScrapeGraphAI Credentials**
   - Navigate to Credentials in your n8n instance
   - Add new ScrapeGraphAI API credentials
   - Enter your API key from the ScrapeGraphAI dashboard
   - Test the connection to ensure it's working
3. **Set up Google Sheets Connection**
   - Add Google Sheets OAuth2 credentials
   - Grant the necessary permissions for spreadsheet access
   - Select or create a target spreadsheet for data storage
   - Configure the sheet name (default: "Sheet1")
4. **Customize News Source Parameters**
   - Update the websiteUrl parameter in the ScrapeGraphAI node
   - Modify the target news website URL as needed
   - Adjust the user prompt to extract additional fields if required
   - Test with a small website first before scaling to larger news sites
5. **Configure Schedule Trigger**
   - Set your preferred execution frequency (daily, hourly, etc.)
   - Choose appropriate time zones for your business hours
   - Consider the news website's update frequency when setting intervals
6. **Test and Validate**
   - Run the workflow manually to verify all connections
   - Check Google Sheets for proper data formatting
   - Validate that all required fields are being captured

**🔄 Workflow Customization Options**

**Modify News Sources**
- Change the website URL to target different news sources
- Add multiple news websites for comprehensive coverage
- Implement filters for specific topics or categories

**Extend Data Collection**
- Modify the user prompt to extract additional fields (author, date, summary)
- Add sentiment analysis for article content
- Integrate with other data sources for comprehensive analysis

**Output Customization**
- Change the Google Sheets operation from "append" to "upsert" for deduplication
- Add data validation and cleaning steps
- Implement error handling and retry logic

**📈 Use Cases**
- **Media Monitoring**: Track mentions of your brand, competitors, or industry keywords
- **Content Curation**: Automatically collect articles for newsletters or content aggregation
- **Market Research**: Monitor industry trends and competitor activities
- **News Aggregation**: Build custom news feeds for specific topics or sources
- **Academic Research**: Collect news data for research projects and analysis
- **Crisis Management**: Monitor breaking news and emerging stories

**🚨 Important Notes**
- Respect the target website's terms of service and robots.txt
- Consider implementing delays between requests for large datasets
- Regularly review and update your scraping parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly

**🔧 Troubleshooting**

Common Issues:
- **ScrapeGraphAI connection errors**: Verify API key and account status
- **Google Sheets permission errors**: Check OAuth2 scope and permissions
- **Data formatting issues**: Review the Code node's JavaScript logic
- **Rate limiting**: Adjust schedule frequency and implement delays

Pro Tips:
- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a small website first before scaling to larger news sites
- Consider adding filters in the Code node to exclude certain article types or categories (see the sketch below)
- Monitor execution logs for any issues and adjust parameters accordingly

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
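Building on the Pro Tip about filtering in the Code node, a minimal sketch follows. The `category` field matches the column specification above, while the excluded values are purely illustrative.

```javascript
// n8n Code node - drops articles whose category is on a block list
// before they reach Google Sheets. The excluded values are illustrative.
const excludedCategories = ['Sport', 'Entertainment'];

return $input.all().filter(
  (item) => !excludedCategories.includes(item.json.category)
);
```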
by PDF Vector
**Overview**
Organizations struggle to make their document repositories searchable and accessible. Users waste time searching through lengthy PDFs, manuals, and documentation to find specific answers. This workflow creates a powerful API service that instantly answers questions about any document or image, perfect for building customer support chatbots, internal knowledge bases, or interactive documentation systems.

**What You Can Do**
This workflow creates a RESTful webhook API that accepts questions about documents and returns intelligent, contextual answers. It processes various document formats, including PDFs, Word documents, text files, and images, using OCR when needed. The system maintains conversation context through session management, caches responses for performance, provides source references with page numbers, handles multiple concurrent requests, and integrates seamlessly with chatbots, support systems, or custom applications.

**Who It's For**
Perfect for developer teams building conversational interfaces, customer support departments creating self-service solutions, technical writers making documentation interactive, organizations with extensive knowledge bases, and SaaS companies wanting to add document Q&A features. Ideal for anyone who needs to make large document repositories instantly searchable through natural language queries.

**The Problem It Solves**
Traditional document search returns entire pages or sections, forcing users to read through irrelevant content to find answers. Support teams repeatedly answer the same questions that are already documented. This template creates an intelligent Q&A system that provides precise, contextual answers to specific questions, reducing support tickets by up to 60% and improving user satisfaction.

**Setup Instructions**
1. Install the PDF Vector community node from the n8n marketplace
2. Configure your PDF Vector API key
3. Set up the webhook URL for your API endpoint
4. Configure Redis or a database for session management
5. Set response caching parameters
6. Test the API with sample documents and questions

**Key Features**
- **RESTful API Interface**: Easy integration with any application
- **Multi-Format Support**: Handle PDFs, Word docs, text files, and images
- **OCR Processing**: Extract text from scanned documents and screenshots
- **Contextual Answers**: Provide relevant responses with source citations
- **Session Management**: Enable conversational follow-up questions
- **Response Caching**: Improve performance for frequently asked questions
- **Analytics Tracking**: Monitor usage patterns and popular queries
- **Error Handling**: Graceful fallbacks for unsupported documents

**API Usage Example**

```
POST https://your-n8n-instance.com/webhook/doc-qa
Content-Type: application/json

{
  "documentUrl": "https://example.com/user-manual.pdf",
  "question": "How do I reset my password?",
  "sessionId": "user-123",
  "includePageNumbers": true
}
```

**Customization Options**
- Add authentication and rate limiting for production use
- Implement multi-document search across entire repositories
- Create specialized prompts for technical documentation or legal documents
- Add automatic language detection and translation
- Build conversation history tracking for better context
- Integrate with Zendesk, Intercom, or other support systems
- Enable direct file upload support for local documents

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
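For client code, here is a hypothetical call to the endpoint shown in the API Usage Example above. The response fields in the trailing comment are illustrative only, since the real shape depends on how you configure the workflow's respond step.

```javascript
// Hypothetical client call to the webhook endpoint shown above.
const res = await fetch('https://your-n8n-instance.com/webhook/doc-qa', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    documentUrl: 'https://example.com/user-manual.pdf',
    question: 'How do I reset my password?',
    sessionId: 'user-123',
    includePageNumbers: true,
  }),
});

const data = await res.json();
// Illustrative response shape, depending on your respond configuration:
// { "answer": "...", "sources": [{ "page": 12 }], "sessionId": "user-123" }
console.log(data);
```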
by vinci-king-01
**How it works**
This workflow automatically analyzes real estate market sentiment by scraping investment forums and news sources, then provides AI-powered market predictions and investment recommendations.

**Key Steps**
1. **Scheduled Trigger** - Runs on a cron schedule to regularly monitor market sentiment.
2. **Multi-Source Scraping** - Uses ScrapeGraphAI to extract discussions from BiggerPockets forums and real estate news articles.
3. **Sentiment Analysis** - JavaScript nodes analyze text content for bullish/bearish keywords and calculate sentiment scores (see the sketch below).
4. **Market Prediction** - Generates investment recommendations (buy/sell/hold) based on sentiment analysis with confidence levels.
5. **Timing Optimization** - Provides optimal timing recommendations considering seasonal factors and market urgency.
6. **Investment Advisor Alerts** - Formats comprehensive reports with actionable investment advice.
7. **Telegram Notifications** - Sends formatted alerts directly to your Telegram channel for instant access.

**Set up steps**
Setup time: 10-15 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key for web scraping.
2. **Set up Telegram bot** - Create a bot via @BotFather and add your bot token and chat ID.
3. **Customize data sources** - Update the URLs to target specific real estate forums or news sources.
4. **Adjust schedule frequency** - Modify the cron expression based on how often you want sentiment updates.
5. **Test sentiment analysis** - Run manually first to ensure the analysis logic works for your market.
6. **Configure alert preferences** - Customize the alert formatting and priority levels as needed.

**Technologies Used**
- **ScrapeGraphAI** - For extracting structured data from real estate forums and news sites
- **JavaScript Code Nodes** - For sentiment analysis, market prediction, and timing optimization
- **Schedule Trigger** - For automated execution using cron expressions
- **Telegram Integration** - For instant mobile notifications and team alerts
- **JSON Data Processing** - For structured sentiment analysis and market intelligence
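A minimal sketch of the keyword-based scoring such a JavaScript node might perform follows. The word lists, the `text` field name, and the scoring formula are illustrative, not the template's exact logic.

```javascript
// n8n Code node sketch - keyword-based sentiment scoring.
const bullishWords = ['growth', 'demand', 'appreciation', 'buy', 'rally'];
const bearishWords = ['decline', 'bubble', 'correction', 'sell', 'crash'];

function sentimentScore(text) {
  const lower = text.toLowerCase();
  const pos = bullishWords.filter((w) => lower.includes(w)).length;
  const neg = bearishWords.filter((w) => lower.includes(w)).length;
  // Score in [-1, 1]; 0 when no keywords match at all.
  return pos + neg === 0 ? 0 : (pos - neg) / (pos + neg);
}

return $input.all().map((item) => ({
  json: { ...item.json, sentiment: sentimentScore(item.json.text ?? '') },
}));
```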
by Beex
**Summary**
Automatically sync your Beex leads to HubSpot by handling both creation and update events in real time.

**How It Works**
1. **Trigger Activation**: The workflow is triggered when a lead is created or updated in Beex.
2. **Data Transformation**: The nested data structure from the Beex Trigger is flattened into a simple JSON format for easier processing.
3. **Email Validation**: The workflow verifies that the lead contains a valid email address (non-null), as this field serves as the unique identifier in HubSpot.
4. **Field Mapping**: Configure the fields (via drag and drop) that will be used to create or update a contact in HubSpot. ⚠️ Important: Field names must exactly match the contact property names defined in HubSpot.
5. **Event Routing**: The workflow routes the action based on the event type received: contact_create or contact_update.
6. **Branch Selection**: If the event is contact_create, the workflow follows the upper branch; otherwise, it continues through the lower branch.
7. **API Request Execution**: The corresponding HTTP request is executed: POST to create a new contact, or PUT to update an existing one, both using the same JSON body structure (see the sketch below).

**Setup Instructions**
1. **Install Beex Nodes**: Before importing the template, install the Beex trigger and node using the following package name: n8n-nodes-beex
2. **Configure HubSpot Credentials**: Set up your HubSpot credentials with:
   - Access Token (typically from a private app)
   - Read/Write permissions for Contacts objects
3. **Configure Beex Credentials**: For Beex users with platform access (for trial requests, contact frank@beexcc.com):
   - Navigate to Platform Settings → API Key & Callback
   - Copy your API key and paste it into the Beex Trigger node in n8n
4. **Set Up Webhook URL**: Copy the Webhook URL (Test/Production) from the Beex Trigger node and paste it into the Callback Integration section in Beex. Save your changes.

**Requirements**
- **HubSpot**: An account with a Private App Token and Read/Write permissions for **Contacts** objects.
- **Beex**: An account with lead generation permissions and a Bearer Token configured in the Trigger Node.
- **Event Configuration**: In the Beex platform's **API Key & Callback** section, enable the following events:
  - "Update general and custom contact data"
  - "Networking"

**Customization Options**
- **Contact Filtering**: Add filters to control which Beex leads should sync to HubSpot.
- **Identifier Configuration**: By default, only leads with a valid email address are processed to ensure accurate matching in HubSpot CRM. You can modify this logic to apply additional restrictions.
- **Field Mapping**: The "Set Fields Update" node is the primary customization point. Here you can map Beex fields to HubSpot properties for both creation and update operations (see Step 4 in How It Works).
- **Field Compatibility**: Ensure that Beex custom fields are compatible with HubSpot's default or custom properties; otherwise, API calls will fail due to schema mismatches.
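The two branches differ only in the HTTP verb; the body they send is identical. A minimal sketch of that shared body follows, assuming the flattened Beex lead exposes `email`, `first_name`, and `phone` fields (hypothetical names) and that your HubSpot portal uses the default `email`, `firstname`, and `phone` contact properties.

```javascript
// Shared request body for both branches - a minimal sketch.
// `$json` holds the flattened Beex lead; the Beex field names on the right
// are hypothetical, and the HubSpot property names on the left must match
// your portal's contact properties exactly.
return [{
  json: {
    properties: {
      email: $json.email,          // unique identifier, validated earlier
      firstname: $json.first_name, // example Beex-to-HubSpot mapping
      phone: $json.phone,
    },
  },
}];
```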
by Hàn Thiên Hải
This workflow helps SEO professionals and website owners automate the tedious process of monitoring and indexing URLs. It fetches your XML sitemap, filters for recent content, checks the current indexing status via Google Search Console (GSC), and automatically submits non-indexed URLs to the Google Indexing API. By integrating a local submission state (Static Data), it ensures that your API quota is used efficiently by preventing redundant submissions within a 7-day window.

**Key Features**
- **Smart Sitemap Support**: Handles both standard sitemaps and Sitemap Indexes (nested sitemaps).
- **Status Check**: Uses GSC URL Inspection to verify if a page is truly missing from Google's index before taking action.
- **Quota Optimization**: Filters content by lastmod date and tracks submission history to stay within Google's API limits.
- **Rate Limiting**: Includes built-in batching and delays to comply with Google's API throughput requirements.

**Prerequisites**
- **Google Search Console API**: Enabled in your Google Cloud Console.
- **Google Indexing API**: Enabled for instant indexing.
- **Service Account**: You need a Service Account JSON key with "Owner" permissions in your GSC property.

**Setup steps**

1. Configure Google Cloud Console
   - **Create Project**: Go to the Google Cloud Console and create a new project.
   - **Enable APIs**: Search for and enable both the Google Search Console API and the Web Search Indexing API.
   - **Create Service Account**: Navigate to APIs & Services > Credentials > Create Credentials > Service Account. Fill in the details and click Done.
   - **Generate Key**: Select your service account > Edit service account > Keys > Add Key > Create new key > JSON. Save the downloaded file securely.
2. Set Up Credentials in n8n
   - **New Credential**: In n8n, go to Create credentials > Google Service Account API.
   - **Input Data**: Paste the Service Account Email and Private Key from your downloaded JSON file. In the JSON file, the Service Account Email is `client_email` and the Private Key is `private_key`.
   - **HTTP Request Setup**: Enable "Set up for use in HTTP Request node".
   - **Scopes**: Enter exactly https://www.googleapis.com/auth/indexing https://www.googleapis.com/auth/webmasters.readonly into the Scopes field.
   - **GSC Permission**: Add the Service Account email to your Google Search Console property as an Owner via Settings > Users and permissions > Add user.
3. Workflow Configuration
   - **Configuration Node**: Open the Configuration node and enter your Sitemap URL and GSC Property URL. If your property type is URL prefix, the URL must end with a forward slash /. Example: https://hanthienhai.com/
   - **Link Credentials**: Update the credentials in both the "GSC: Inspect URL Status" and "GSC: Request Indexing" nodes with the service account created in Step 2 (a sketch of the indexing request follows below).
4. Schedule & Activate
   - **Set Schedule**: Adjust the Schedule Trigger node to your preferred execution frequency.
   - **Activate**: Toggle the workflow to Active to start the automation.

**Questions or Need Help?**
For setup assistance, customization, or workflow support, feel free to contact me at admin@hanthienhai.com
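For reference, the request the "GSC: Request Indexing" node ends up sending follows the Indexing API's `urlNotifications:publish` shape. A minimal standalone sketch, where the access token and URL are placeholders; inside the workflow, the Service Account credential handles the token for you.

```javascript
// Minimal sketch of the Indexing API call; placeholders are marked.
const accessToken = process.env.GOOGLE_ACCESS_TOKEN; // placeholder token

const res = await fetch(
  'https://indexing.googleapis.com/v3/urlNotifications:publish',
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      url: 'https://hanthienhai.com/example-page/', // a non-indexed URL
      type: 'URL_UPDATED', // signals that the page is new or updated
    }),
  }
);

console.log(res.status); // 200 on success; 429 usually means quota exhausted
```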
by EoCi - Mr.Eo
**🎯 What This Does**
Automatically finds PDF files in Google Drive, extracts their text content, and formats the output into a clean JSON object.

**🔄 How It Works**
1. **Manual Trigger** starts the process.
2. 🔎 **Find File**: The Google Drive node finds the PDF file(s) in a specified folder and downloads them.
3. 📝 **Extract Raw Text**: The Extract From File node pulls the text content from the retrieved file(s).
4. ✅ **Output Clean Data**: The Code node refines the extracted content and runs custom code for cleaning and final formatting (see the sketch below).

**🚀 Setup Guidelines**

**Setup Requirements**
- **Google Drive Account**: A Google Drive with an empty folder or a folder containing the PDF file(s) you want to process.
- **API Keys**: Gemini, Google Drive.

**Set up steps**
Setup time: < 5 minutes

1. **Add Credentials in n8n**: Ensure your Google Drive OAuth2 and Google Gemini (PaLM) API credentials are created and connected. Go to Credentials > New to add them if you haven't created them yet.
2. **Configure the Search Node (Get PDF Files/File)**: Open the node and select your Google Drive credential. In the "Resource" field, choose File/Folder. In the "Search Method" field, select "Search File/Folder Name", and in "Search Query" type *.pdf. Add two filters: in the "Folder" filter, click the dropdown, choose "From List", and connect it to the folder you created on your Google Drive; in the "What to Search" filter, select file. Optionally, under "Options", click "Add option" and choose "ID" and "Name".
3. **Define Extraction Rules (Extract Files/File's Data)**: Open the node and, in the dropdown below the "Operation" section, choose "Extract From PDF". In the "Input Binary Field" section, keep the default "data".
4. **Clean & Format Data (Optional)**: Adjust the "Get PDF Data Only" node to keep only the fields you need and give them friendly names. Modify the "Data Parser & Cleaner" node if you need to perform custom transformation.
5. **Activate and Run**: Save and Activate the workflow. Click "Execute Workflow" to run it manually and check the output.

That's it! Once configured, this workflow becomes your personal data assistant. Run it anytime you need to extract information quickly and accurately, saving you hours of manual work and ensuring your data is always ready to use.
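A minimal sketch of what the cleaning code might do, assuming the Extract From File node emits the extracted text in a `text` field; check your node's actual output field and adapt the names accordingly.

```javascript
// n8n Code node sketch - normalizes the raw PDF text.
// Assumption: the upstream node emits the extracted text in `item.json.text`.
return $input.all().map((item) => {
  const raw = item.json.text ?? '';
  const clean = raw
    .replace(/\r\n/g, '\n')     // normalize line endings
    .replace(/-\n(?=\w)/g, '')  // re-join words hyphenated across line breaks
    .replace(/[ \t]+/g, ' ')    // collapse runs of spaces and tabs
    .trim();
  return { json: { fileName: item.json.fileName, text: clean } };
});
```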
by Wessel Bulte
**TenderNed Public Procurement**

**What This Workflow Does**
This workflow automates the collection of public procurement data from TenderNed (the official Dutch tender platform). It:
- Fetches the latest tender publications from the TenderNed API
- Retrieves detailed information in both XML and JSON formats for each tender
- Parses and extracts key information like organization names, titles, descriptions, and reference numbers
- Filters results based on your custom criteria
- Stores the data in a database for easy querying and analysis

**Setup Instructions**
This template comes with sticky notes providing step-by-step instructions in Dutch and various query options you can customize.

**Prerequisites**
- **TenderNed API Access** - Register at TenderNed for API credentials

**Configuration Steps**
1. Set up TenderNed credentials:
   - Add HTTP Basic Auth credentials with your TenderNed API username and password
   - Apply these credentials to the three HTTP Request nodes: "Tenderned Publicaties", "Haal XML Details", and "Haal JSON Details"
2. Customize filters:
   - Modify the "Filter op ..." node to match your specific requirements
   - Examples: specific organizations, contract values, regions, etc.

**How It Works**
- **Step 1: Trigger** - The workflow can be triggered either manually for testing or automatically on a daily schedule.
- **Step 2: Fetch Publications** - Makes an API call to TenderNed to retrieve a list of recent publications (up to 100 per request).
- **Step 3: Process & Split** - Extracts the tender array from the response and splits it into individual items for processing.
- **Step 4: Fetch Details** - For each tender, the workflow makes two parallel API calls: the **XML endpoint** retrieves the complete tender documentation in XML format, and the **JSON endpoint** fetches metadata including reference numbers and keywords.
- **Step 5: Parse & Merge** - Parses the XML data and merges it with the JSON metadata and batch information into a single data structure.
- **Step 6: Extract Fields** - Maps the raw API data to clean, structured fields including publication ID and date, organization name, tender title and description, and reference numbers (kenmerk, TED number).
- **Step 7: Filter** - Applies your custom filter criteria to focus on relevant tenders only.
- **Step 8: Store** - Inserts the processed data into your database for storage and future analysis.

**Customization Tips**

Modify API Parameters - In the "Tenderned Publicaties" node, you can adjust:
- offset: Starting position for pagination
- size: Number of results per request (max 100)
- Additional query parameters for date ranges, status filters, etc.

Add More Fields - Extend the "Splits Alle Velden" node to extract additional fields from the XML/JSON data, such as:
- Contract value estimates
- Deadline dates
- CPV codes (procurement classification)
- Contact information

Integrate Notifications - Add a Slack, Email, or Discord node after the filter to get notified about new matching tenders.

Incremental Updates - Modify the workflow to only fetch new tenders by:
- Storing the last execution timestamp
- Adding date filters to the API query
- Only processing publications newer than the last run
A sketch of this approach appears at the end of this template.

**Troubleshooting**
No data returned?
- Verify your TenderNed API credentials are correct
- Check that you have set up your filter properly

Need help setting this up or interested in a complete tender analysis solution? Get in touch 🔗 LinkedIn – Wessel Bulte
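A minimal sketch of the incremental-update idea using n8n's workflow static data follows. The `publicatieDatum` field name is an assumption, so use whatever date field your parse step emits; note that static data only persists when the workflow runs from a trigger, not from manual executions.

```javascript
// n8n Code node sketch - pass through only tenders newer than the last run.
const staticData = $getWorkflowStaticData('global');
const lastRun = staticData.lastRun ? new Date(staticData.lastRun) : new Date(0);

const fresh = $input.all().filter(
  (item) => new Date(item.json.publicatieDatum) > lastRun // assumed field name
);

staticData.lastRun = new Date().toISOString(); // remember this run
return fresh;
```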
by Frederik Duchi
This n8n template demonstrates how to automatically generate personalized calendar views in Baserow, based on a chosen date field and a filter. Having a personalized view with only information that is relevant to you makes it easy to integrate with external calendar tools like Outlook or Google Calendar.

Use cases are many:
- Task management (deadlines per staff member)
- Customer management (appointments per customer)
- Inventory management (delivery dates per supplier)

**Good to know**
- You only need a Date field (e.g., a task deadline, due date, appointment date) and a Link to table field (e.g., a customer, employee, product) to make this work.
- The generated calendar views can be shared as .ics files and imported into any external calendar application.
- Authentication is done through a JWT token constructed from your Baserow username and password.

**How it works**
- **Set Baserow credentials**: Allows you to enter your Baserow credentials (username + password) and the API host path. The host is https://api.baserow.io by default, but you can change this in case you are self-hosting. This information is required to generate a JWT token that authenticates all subsequent HTTP Request nodes that create and configure the view.
- **Create a token**: Generates a JWT token based on the information provided in the previous node.
- **Set table and field ids**: Stores the generated JWT token and allows you to enter the ids of the tables and fields required to run the automation.
- **Get all records from filter table**: Gets all the records from the table you want to filter on. This is the table that has a Link to table field referencing the table with the Date field. Each record from this table will get its own view. Some examples: Customers, Employees, and Products.
- **Create new calendar view**: Calls the API endpoint /api/database/views/table/{table_id} to create a new view. Check the Baserow API documentation for further details. The body of this request configures the new view by setting, among other things, a name and the date field.
- **Create filter**: Calls the API endpoint /api/database/views/{view_id}/filters/ to set a filter on the view so that it only shows the records that are relevant. This filter is based on the Link to table field that is set in earlier steps (see the sketch at the end of this template). Check the Baserow API documentation for further details.
- **Set background color**: Calls the API endpoint /api/database/views/{view_id}/decorations/ to set a color on the background or left side of each item. By default, the color is based on a single select field, but it is also possible to use a condition. Check the Baserow API documentation for further details.
- **Share the view**: Calls the API endpoint /api/database/views/{view_id} to update the current view. It updates the ical_public property to true so that an ics link is created. Check the Baserow API documentation for further details.
- **Update the url's**: Updates all the records in the table you want to filter on to fill in the url to the newly generated view and the url to the ics file. This can be useful if you want to build an application on top of your database.

**How to use**
- The Manual Trigger node is provided as an example, but you can replace it with other triggers such as a webhook.
- The included Baserow SOP template works perfectly as a base schema to try out this workflow.

**Requirements**
- Baserow account (cloud or self-hosted)
- A Baserow database with a table that has a Date field and a Link to table field

**Customising this workflow**
- Change the date field used to generate the calendars (e.g., deadline → appointment date).
- Adjust the filters to match your context (staff, customer, product, etc.).
- Configure which fields are shown using the /api/database/views/{view_id}/field-options/ endpoint. Check the Baserow API documentation for further details.
- Add or remove optional steps such as coloring by status or sharing the ics feed.
- Extend the workflow to notify staff when a new view has been created for them.
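As a concrete illustration of the "Create filter" step, here is a minimal sketch of the request. The `link_row_has` filter type is an assumption, the usual choice for Link to table fields; verify the exact type for your field in the Baserow API documentation. All ids and the token are placeholders taken from the "Set table and field ids" step.

```javascript
// Minimal sketch of the "Create filter" request; ids and token are placeholders.
const apiHost = 'https://api.baserow.io';
const jwtToken = 'YOUR_JWT_TOKEN'; // from the "Create a token" step
const viewId = 123;                // the freshly created calendar view
const linkFieldId = 456;           // the Link to table field
const recordId = 789;              // the record this personal view belongs to

const res = await fetch(`${apiHost}/api/database/views/${viewId}/filters/`, {
  method: 'POST',
  headers: {
    Authorization: `JWT ${jwtToken}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    field: linkFieldId,
    type: 'link_row_has',    // assumed filter type for Link to table fields
    value: String(recordId), // only show rows linked to this record
  }),
});

console.log(res.status); // 200 on success
```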
by Marker.io
**Marker.io to ServiceNow Integration**
Automatically create ServiceNow incidents with full technical context when bugs are reported through Marker.io.

**🎯 What this template does**
This workflow creates a seamless bridge between Marker.io and ServiceNow, your IT service management platform. Every issue submitted through Marker.io's widget automatically becomes a trackable incident in ServiceNow, complete with technical details and visual context. This ensures your IT team can track, prioritize, and resolve bugs efficiently within their existing ITSM workflow.

When a bug is reported, the workflow:
- Captures the complete Marker.io webhook payload
- Formats all technical details and metadata
- Creates a new incident in ServiceNow with the reporter information
- Includes comprehensive technical context and Marker.io links
- Preserves screenshots, browser info, and custom data

**✨ Benefits**
- **Automated ticket creation** - No manual data entry required
- **Complete context** - All bug details transfer automatically
- **Faster triage** - IT teams see issues immediately in ServiceNow
- **Better tracking** - Leverage ServiceNow's incident management capabilities
- **Rich debugging info** - Browser, OS, and screenshot details preserved

**💡 Use Cases**
- **IT Service Desks**: Streamline bug reporting from end users
- **Development Teams**: Track production issues with full technical context
- **QA Teams**: Convert test findings directly into trackable incidents
- **Support Teams**: Escalate customer-reported bugs to IT with complete details

**🔧 How it works**
1. An n8n Webhook receives the Marker.io bug report data
2. A JavaScript node formats and extracts the relevant information
3. The ServiceNow node creates an incident with the formatted details
4. The incident includes the title, description, reporter info, and technical metadata
5. Links are preserved to both the public and private Marker.io views

The result is a fully documented ServiceNow incident that your IT team can immediately action, with all the context needed to reproduce and resolve the issue.

**📋 Prerequisites**
- **Marker.io account** with webhook capabilities
- **ServiceNow instance** with API access enabled
- **ServiceNow credentials** (username/password or OAuth)
- **Appropriate ServiceNow permissions** to create incidents

**🚀 Setup Instructions**
1. Import this workflow into your n8n instance
2. Configure the Webhook:
   - Copy the production webhook URL after saving
   - Add it to Marker.io: Workspace Settings → Webhooks → Create webhook
   - Select "Issue Created" as the trigger event
3. Set up ServiceNow credentials:
   - In n8n, create new ServiceNow credentials
   - Enter your ServiceNow instance URL
   - Add the username and password for a service account
   - Test the connection
4. Customize field mappings (optional):
   - Modify the JavaScript code to map additional fields (see the sketch after this section)
   - Adjust priority mappings to match your ServiceNow setup
   - Add custom field mappings as needed
5. Test the integration:
   - Create a test issue in Marker.io
   - Verify the incident appears in ServiceNow
   - Check that all data transfers correctly

**📊 Data Captured**
The ServiceNow incident includes:
- **Short Description**: Issue title from Marker.io
- **Description** containing:
  - 🐛 Issue title and ID
  - 📊 Priority level and issue type
  - 📅 Due date (if set)
  - 📝 Full issue description
  - 🖥️ Browser version and details
  - 💻 Operating system information
  - 🌐 Website URL where the issue occurred
  - 🔗 Direct links to the Marker.io issue (public and private)
  - 📦 Any custom data fields
  - 📷 Screenshot URL with proper formatting

**🔄 Workflow Components**
- **Webhook Node**: Receives Marker.io POST requests
- **Code Node**: Processes and formats the data using JavaScript
- **ServiceNow Node**: Creates the incident using the ServiceNow API

→ Read more about Marker.io webhook events

**🚨 Troubleshooting**

Webhook not triggering:
- Verify the webhook URL is correctly copied from n8n to Marker.io
- Check that the "Issue Created" event is selected in Marker.io webhook settings
- Ensure the webhook is set to "Active" status in Marker.io
- Test with Marker.io's webhook tester feature
- Check the n8n workflow is active and not in testing mode

ServiceNow incident not created:
- Verify ServiceNow credentials are correct and have not expired
- Check that the service account has permissions to create incidents
- Ensure the ServiceNow instance URL is correct (include https://)
- Test the ServiceNow connection directly in n8n credentials settings
- Check ServiceNow API rate limits haven't been exceeded

Missing or incorrect data:
- Screenshot URL broken: The workflow already handles URL formatting, but verify Marker.io is generating screenshots
- Custom data missing: Ensure custom fields exist in Marker.io before sending
- Due date formatting issues: Check your ServiceNow date format requirements

JavaScript errors in the Format node:
- Check the webhook payload structure hasn't changed in Marker.io updates
- Verify all field paths match the current Marker.io webhook schema
- Use n8n's data pinning to debug with actual webhook data
- Check for undefined values when optional fields are missing

Connection issues:
- ServiceNow timeout: Increase the timeout in node settings if needed
- SSL/Certificate errors: Check the ServiceNow instance SSL configuration
- Network restrictions: Ensure n8n can reach your ServiceNow instance
- Authentication failures: Regenerate ServiceNow credentials if needed

Testing tips:
- Use n8n's "Execute Workflow" with pinned test data
- Enable webhook test mode in Marker.io for safe testing
- Check ServiceNow incident logs for detailed error messages
- Monitor n8n execution logs for specific failure points
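As a starting point for customizing the Code node, here is a minimal sketch of the formatting step. All field paths (`body.data`, `browser`, `os`, `pageUrl`, and so on) are assumptions about the webhook payload, not the confirmed Marker.io schema; pin a real webhook execution in n8n and adjust the paths to match.

```javascript
// n8n Code node sketch - builds the ServiceNow fields from the webhook payload.
// All field paths below are assumptions; verify them against pinned data.
const p = $input.first().json.body?.data ?? $input.first().json;

const description = [
  `🐛 ${p.title} (${p.id})`,
  `📊 Priority: ${p.priority} | Type: ${p.issueType}`,
  `📝 ${p.description ?? ''}`,
  `🖥️ ${p.browser?.name ?? ''} ${p.browser?.version ?? ''} on ${p.os?.name ?? ''}`,
  `🌐 ${p.pageUrl ?? ''}`,
  `🔗 Public: ${p.publicUrl ?? ''} | Private: ${p.privateUrl ?? ''}`,
].join('\n');

return [{ json: { short_description: p.title, description } }];
```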
by Ibrahim Emre POLAT
**Website & API Health Monitoring System with HTTP Status Validation**

**How it works**
- Performs HTTP health checks on websites and APIs with automatic health status validation
- Checks HTTP status codes and analyzes JSON responses for common health indicators
- Returns detailed status information including response times and health status
- Implements conditional logic to handle different response scenarios (see the sketch below)
- Perfect for monitoring dashboards, alerts, and automated health checks

**Set up steps**
1. Deploy the workflow and activate it
2. Get the webhook URL from the trigger node
3. Configure your monitoring system to call the webhook endpoint
4. Send POST requests with target URLs for health monitoring
5. Receive JSON responses with health status, HTTP codes, and timestamps

**Usage**
- Send POST requests to the webhook URL with a target URL parameter
- Optionally configure timeout and status expectations in the request body
- Receive JSON responses with health status, HTTP codes, and timestamps
- Use with external monitoring tools like Nagios, Zabbix, or custom dashboards
- Set up scheduled monitoring calls for continuous health validation

Example request: send a POST with `{"url": "https://your-site.com", "timeoutMs": 5000}`
Success response: `{"ok": true, "statusCode": 200, "healthStatus": "ok"}`
Failure response: `{"ok": false, "error": "Health check failed", "statusCode": 503}`

**Benefits**
- **Proactive monitoring** to identify issues before they impact users
- **Detailed diagnostics** with comprehensive health data for troubleshooting
- **Integration ready** - works with existing monitoring and alerting systems
- **Customizable** timeout settings, expected status codes, and health indicators
- **Scalable** solution to monitor multiple services with a single workflow endpoint

**Use Cases**
- **E-commerce platforms**: Monitor payment APIs, inventory systems, user authentication
- **Microservices**: Health validation for distributed service architectures
- **API gateways**: Endpoint monitoring with response time validation
- **Database connections**: Track connectivity and performance metrics
- **Third-party integrations**: Monitor external API dependencies and SLA compliance

**Target Audience**
- DevOps Engineers implementing production monitoring
- System Administrators managing server uptime
- Site Reliability Engineers building monitoring systems
- Development Teams tracking API health in staging/production
- IT Support Teams for proactive issue detection
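A minimal sketch of the validation logic described above, written as a plain function you could drop into the workflow's Code node. The JSON health-indicator fields (`status`, `healthy`) are common conventions, not guaranteed for every service, so extend the checks to match the endpoints you monitor.

```javascript
// Sketch of the conditional health logic: check the HTTP status first, then
// look for common JSON health indicators in the response body.
function evaluateHealth(statusCode, body) {
  const statusOk = statusCode >= 200 && statusCode < 300;
  let healthStatus = statusOk ? 'ok' : 'down';
  try {
    const json = typeof body === 'string' ? JSON.parse(body) : body;
    // Common conventions: {"status": "ok"} or {"healthy": true}
    if (json.status === 'ok' || json.healthy === true) healthStatus = 'ok';
    else if (json.status) healthStatus = String(json.status);
  } catch {
    // Non-JSON body: fall back to the status code alone.
  }
  return {
    ok: statusOk && healthStatus === 'ok',
    statusCode,
    healthStatus,
    checkedAt: new Date().toISOString(),
  };
}

// Example: evaluateHealth(200, '{"status":"ok"}')
// -> { ok: true, statusCode: 200, healthStatus: 'ok', checkedAt: '...' }
```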