by Anton Vanhoucke
This workflow converts Notion pages to markdown and then converts that markdown back into Notion blocks. It triples the content of the last-updated page it finds. This is useless by itself, but you can copy-paste from this workflow to create your own.

**Prerequisites**
- A Notion account with some pages or databases

**Setup instructions**
- Create a Notion credential and share some pages as described here: https://docs.n8n.io/integrations/builtin/credentials/notion/

**How it works**
- The HTTP Request node fetches the Notion child blocks of a page, because the default n8n Notion node only returns plain text and drops links.
- The first Code node converts the blocks to markdown.
- The second Code node converts the markdown back into Notion blocks.
- The last HTTP node appends everything to the original Notion page, essentially duplicating it to demo the script.

I hope we eventually get official n8n nodes that extract markdown from Notion, or write to Notion using markdown. There is a community node that also does this, but this template is easier: you can simply copy-paste the nodes from this workflow.
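For orientation, here is a minimal sketch of the kind of block-to-markdown conversion the first Code node performs. It assumes the HTTP Request node returns Notion API block objects (the `type`, `rich_text`, `plain_text`, `href`, and `annotations` fields follow the public Notion API schema); the template's actual code handles more block types.

```javascript
// Minimal sketch: convert a few common Notion block types to markdown.
function richTextToMd(richText = []) {
  return richText.map(rt => {
    let text = rt.plain_text;
    if (rt.href) text = `[${text}](${rt.href})`;       // preserve links
    if (rt.annotations?.bold) text = `**${text}**`;
    if (rt.annotations?.italic) text = `*${text}*`;
    return text;
  }).join('');
}

const lines = [];
for (const item of $input.all()) {
  const block = item.json;
  const rt = block[block.type]?.rich_text ?? [];
  switch (block.type) {
    case 'heading_1': lines.push(`# ${richTextToMd(rt)}`); break;
    case 'heading_2': lines.push(`## ${richTextToMd(rt)}`); break;
    case 'bulleted_list_item': lines.push(`- ${richTextToMd(rt)}`); break;
    case 'paragraph': lines.push(richTextToMd(rt)); break;
    default: break; // other block types are out of scope for this sketch
  }
}
return [{ json: { markdown: lines.join('\n\n') } }];
```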
by Ranjan Dailata
**Who is this for?**
The Capture Website Screenshots with Bright Data Web Unlocker and Save to Disk workflow is built for automation professionals and developers who need reliable, high-quality screenshots from any website, even those protected by anti-bot technologies. It is ideal for:
- **Compliance Teams** - Capturing visual records of web content for legal or audit purposes.
- **Product Managers** - Tracking visual changes across competitor landing pages.
- **Digital Marketers** - Archiving campaign pages and offer variations.
- **Developers and QA Teams** - Validating UI deployments or rendering issues.
- **Growth Hackers and Scrapers** - Bypassing bot protection to capture visual snapshots of restricted content.

**What problem is this workflow solving?**
Websites today are heavily protected with anti-bot tools such as Cloudflare, bot-detection scripts, and geo-restrictions. These protections often break traditional screenshot tools or block headless browsers outright. This workflow solves the following problems:
- Bypasses anti-bot defenses using Bright Data Web Unlocker.
- Automatically captures screenshots without manual browser steps.
- Stores images locally for easy access or reporting.
- Operates headlessly and at scale, perfect for automations or scheduled jobs.

**What this workflow does**
1. Sets the target URL, file name, and Bright Data zone name in the Set URL, Filename and Bright Data Zone node.
2. Sends an HTTP POST request to the Bright Data Web Unlocker API to capture a screenshot.
3. Saves the screenshot image (.png) to a specified disk location using the Write a file to disk node.

**Pre-conditions**
You need a Bright Data account and the setup described in the "Setup" section below.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth credential under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Ensure the URL, file name, and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
5. Set the desired local path in the Write a file to disk node to save the screenshot.

**How to customize this workflow to your needs**
- **Change the target URL:** Modify the value in the Set URL, Filename and Bright Data Zone node to capture different websites.
- **Set dynamic filenames:** Use n8n expressions to generate filenames based on date/time or URL.
- **Specify custom save paths:** Adjust the path in the Write a file to disk node to store screenshots in your preferred directory.
- **Enhance with notifications:** Add nodes to send alerts or log activity after each screenshot is taken.
- **Integrate with external systems:** Send screenshots to cloud storage (e.g., AWS S3, Google Drive) or feed them into monitoring/reporting tools.
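As a reference for the HTTP POST step, here is a hedged sketch of the screenshot request expressed in JavaScript. The endpoint and body fields (`zone`, `url`, `format`, `data_format`) are my reading of Bright Data's Web Unlocker request API, so verify them against your Bright Data dashboard before relying on them.

```javascript
// Sketch of the HTTP Request node's call to Bright Data Web Unlocker.
// Treat the field names as assumptions to check against Bright Data's docs.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <YOUR_WEB_UNLOCKER_TOKEN>', // the Header Auth credential value
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone', // zone name from the Set node
    url: 'https://example.com',     // target URL from the Set node
    format: 'raw',
    data_format: 'screenshot',      // ask for a PNG instead of page HTML
  }),
});
const png = Buffer.from(await res.arrayBuffer()); // binary passed to Write a file to disk
```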
by Agent Studio
Automatically store Retell transcripts in Google Sheets/Airtable/Notion from a webhook

**Overview**
This workflow stores the results of a Retell voice call (transcript, analysis, etc.) once the call has ended and been analyzed. It listens for call_analyzed webhook events from Retell and stores the data in Airtable, Google Sheets, and Notion (choose based on your stack). Useful for anyone building Retell agents who wants to keep a detailed history of analyzed calls in structured tools.

**Who is it for**
Builders of Retell voice agents who want to store call history and essential analytics data.

**Prerequisites**
- A Retell AI account
- A Retell agent
- A phone number associated with your Retell agent
- One of the following:
  - An Airtable base and table (example: "Transcripts")
  - A Google Sheet with a "Transcripts" tab
  - A Notion database with columns that match the transcript fields
- Templates: Airtable, Google Sheets, Notion

**How it works**
1. Receives a webhook POST request from Retell when a call has been analyzed.
2. Filters out any event that is not call_analyzed (Retell sends webhooks for call_started, call_ended, and call_analyzed).
3. Extracts useful fields such as: call ID, start/end time, duration, total cost; transcript, summary, sentiment.
4. Stores this data in your preferred tool: Airtable, Google Sheets, or Notion.

**How to use it**
1. Copy the webhook URL (e.g., https://your-instance.app.n8n.cloud/webhook/poc-retell-analysis) and paste it into your Retell agent under "Webhook settings" → "Agent Level Webhook URL".
2. Make sure your Airtable, Google Sheets, or Notion databases are correctly configured to receive the fields.
3. After each call, once Retell finishes the analysis, the workflow automatically logs the results.

**Extension**
If you use any "Post-Call Analysis" fields, add matching columns to your Airtable, Google Sheets, or Notion database, then fetch the data from the call.call_analysis.custom_analysis_data object.

**Additional Notes**
- Phone numbers are extracted depending on the call direction (from_number or to_number).
- Cost is converted from cents to dollars before saving.
- Dates are converted from timestamps to local ISO strings.
- You can remove any of the outputs (Airtable, Google Sheets, Notion) if you only use one.

👉 Reach out to us if you're interested in analysing your Retell Agent conversations.
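For reference, a sketch of the extraction step described above. The payload shape used here (call.start_timestamp, call.call_cost, call.call_analysis) is an assumption based on Retell's call_analyzed event; map the fields against an actual webhook delivery before use.

```javascript
// Sketch: extract and normalize fields from a call_analyzed event.
// Field names are illustrative; confirm them against a real Retell payload.
const { event, call } = $json.body ?? $json;
if (event !== 'call_analyzed') return []; // filter step: ignore other events

return [{
  json: {
    callId: call.call_id,
    phone: call.direction === 'inbound' ? call.from_number : call.to_number,
    startedAt: new Date(call.start_timestamp).toISOString(),   // timestamp -> ISO
    durationSec: Math.round((call.end_timestamp - call.start_timestamp) / 1000),
    costUsd: (call.call_cost?.combined_cost ?? 0) / 100,       // cents -> dollars
    transcript: call.transcript,
    summary: call.call_analysis?.call_summary,
    sentiment: call.call_analysis?.user_sentiment,
  },
}];
```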
by Airtop
Automating LinkedIn Profile Discovery with Verification

**Use Case**
Accurately identifying and verifying a person's LinkedIn profile is essential for prospecting, recruiting, and contact enrichment. This automation ensures high accuracy by combining search logic with optional profile validation.

**What This Automation Does**
This automation locates and verifies a LinkedIn profile using the following inputs:
- **Person_info**: Any identifying information about the person (e.g., name, company, email).
- **Airtop_profile**: Your Airtop Profile authenticated on LinkedIn, used for verifying the profile.

**How It Works**
1. Extracts a likely LinkedIn URL by performing a Google search using the provided person info.
2. Validates the result (if an Airtop Profile is provided): visits the LinkedIn profile and verifies the match by checking its content (e.g., experience, role) against the person info.
3. Returns a verified LinkedIn profile URL, or "NA" if no valid profile is found.

**Setup Requirements**
- Airtop API key
- Optional but recommended: an Airtop Profile authenticated on LinkedIn

**Next Steps**
- **Combine with Email Lookup**: Use email-to-profile tools upstream to gather inputs.
- **CRM Integration**: Automatically append LinkedIn profiles to contact records.
- **Automate Outreach**: Use the verified URLs for personalized LinkedIn engagement workflows.

Read more about how to find and verify LinkedIn profiles.
by Keith Rumjahn
**Who's this for?**
- Website owners who want to analyze their Matomo analytics data to increase the number of frequent visitors
- Anyone who needs an SEO report on the common trends among their most frequent visitors
- Anyone who wants to grow their site based on data-driven suggestions

Matomo is an analytics tool that gives you details on each individual visitor, which makes it much more powerful than Google Analytics for this kind of analysis. Watch the YouTube tutorial here. Get my SEO A.I. agent system here. Read more: How to create an A.I. agent to analyze Matomo analytics using n8n for free.

Here's an example of the A.I. output:
- Keywords showing the most improvement: Openrouter N8N
- Keywords needing attention: Ai Generated Reference Letter, Obsidian Second Brain
- Suggested actions for improvement: Optimize for "best Docker Synology"; despite its stable ranking, an improvement into the top 10 is an achievable goal. Since "2nd brain app for developer" is of interest to developers, consider writing a blog post on how the app addresses their specific pain points.

**Use case**
Instead of hiring an SEO expert, I run this report weekly. It looks at the data for the past week, finds visitors with more than 3 visits, and recommends ideas to convert more visitors into frequent visitors.

**How it works**
1. The workflow gathers Matomo analytics for the past 7 days.
2. We then parse the data.
3. The data is sent to OpenRouter and analyzed by a FREE LLM.
4. The results are stored in Baserow.

**How to use this**
1. Input your Matomo analytics credentials.
2. Input your Matomo site ID.
3. Input your Openrouter.ai credentials.
4. Input your Baserow credentials.
5. Create a Baserow database with the columns: Dates, Notes, Blog.

Created by Rumjahn
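To illustrate the first step, here is a sketch of pulling the week's visit details from Matomo's Reporting API and keeping frequent visitors. Live.getLastVisitsDetails is a real Matomo API method, but the exact parameter values and the visitCount filter below are illustrative assumptions.

```javascript
// Sketch: fetch the last 7 days of visit details from Matomo and keep
// visitors with more than 3 visits. Parameter values are illustrative.
const params = new URLSearchParams({
  module: 'API',
  method: 'Live.getLastVisitsDetails',
  idSite: '1',                         // your Matomo site ID
  period: 'day',
  date: 'last7',
  format: 'JSON',
  filter_limit: '500',
  token_auth: '<YOUR_MATOMO_TOKEN>',
});
const visits = await (
  await fetch(`https://your-matomo.example.com/index.php?${params}`)
).json();

// Keep only frequent visitors for the LLM to analyze.
const frequent = visits.filter(v => Number(v.visitCount) > 3);
return frequent.map(v => ({ json: v }));
```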
by Juan Carlos Cavero Gracia
Image Carousel Publisher for Instagram and TikTok

**Description**
This automation template is designed for content creators, digital marketers, and social media managers looking to streamline their image carousel posting workflow. It automates the process of uploading multiple images as carousels to Instagram and slideshows to TikTok, making your visual content management more efficient across platforms.

**Who Is This For?**
- **Content Creators & Influencers:** Simplify posting image collections and focus more on creating visual content.
- **Digital Marketers:** Ensure consistent carousel posts across multiple platforms with minimal manual effort.
- **Social Media Managers:** Automate repetitive image uploading tasks and maintain visual engagement.

**What Problem Does This Workflow Solve?**
Manually uploading image carousels to different platforms can be time-consuming and inconsistent. This workflow addresses these challenges by:
- **Automating Multi-Image Uploads:** Processes multiple images and prepares them for platform-specific formats.
- **Supporting Cross-Platform Publishing:** Simultaneously posts your image carousels to Instagram and TikTok slideshows.
- **Maintaining Visual Consistency:** Ensures your visual stories remain consistent across platforms.
- **Streamlining Batch Processing:** Handles the technical complexity of multi-image uploads with a single workflow trigger.

**How It Works**
1. Image Selection: Trigger the workflow with your selected images.
2. Image Processing: The workflow automatically processes and prepares your images for both platforms.
3. Content Distribution: Uploads the images as a carousel to Instagram and as a slideshow to TikTok.
4. Platform Optimization: Formats the uploads according to each platform's requirements.

**Setup**
1. API Token Generation:
   - Visit upload-post.com and create an account.
   - Navigate to the API settings section.
   - Generate a new API token and copy it for use in the next steps.
2. Platform Configuration:
   - In the "Upload to Instagram" node: paste your API token in the designated field, configure your Instagram account settings, and set your preferred posting parameters.
   - In the "Upload to TikTok" node: add the same API token, set up your TikTok account credentials, and adjust posting preferences.
3. Content Parameters Setup:
   - Rename the "HTTP Request" node to "Social Media Upload Request".
   - Configure your account information: username, account ID, content title format, and posting schedule (if applicable).
4. Image Source Configuration:
   - Set up your image source directory.
   - Configure image format requirements.
   - Test with sample images before going live.

**About upload-post.com**
Upload-post.com is a third-party service that acts as a bridge between your workflow and social media platforms. It provides:
- Secure API endpoints for multi-platform posting
- Image format validation and optimization
- Queue management for scheduled posts
- Analytics and posting status tracking
- Cross-platform compatibility handling

**Requirements**
- **Accounts:** An upload-post.com account with access to Instagram and TikTok publishing.
- **API Keys:** An upload-post.com API token.
- **Images:** Properly formatted images that meet Instagram and TikTok specifications:
  - Instagram: up to 10 images per carousel, 1:1 to 4:5 aspect ratio
  - TikTok: compatible with slideshow format, 9:16 aspect ratio recommended

Use this template to enhance your visual storytelling, maintain consistency across social platforms, and engage your audience with compelling image carousels and slideshows.
by Lucas Correia
**What Does This Flow Do?**
This workflow demonstrates how to dynamically generate a line chart using the QuickChart node based on data provided in a JSON object, and then upload the resulting chart image to Google Drive.

**Use Cases**
- Using charts in presentations, or requesting chart generation from other software via HTTP requests
- Automated report generation (e.g., daily sales charts)
- Visualizing data fetched from APIs or databases
- Simple monitoring dashboards
- Adding charts to internal tools or notifications

**How it Works**
1. Trigger: The workflow starts manually when you click 'Test workflow'.
2. Set Sample Data: A Set node (Edit Fields: Set JSON data to test) defines a sample JSON object named jsonData. This object contains:
   - reportTitle: A title (not used in the chart generation in this example, but useful for context).
   - labels: An array of strings for the chart's X-axis labels (e.g., ["Q1", "Q2", "Q3", "Q4"]).
   - salesData: An array of numbers for the chart's Y-axis data points (e.g., [1250, 1800, 1550, 2100]).
3. Generate Chart: The QuickChart node is configured to:
   - Create a line chart.
   - Dynamically read labels from the jsonData.labels array (Labels Mode: From Array).
   - Use the jsonData.salesData array as the input data. (Note: this configuration places data in the top-level 'Data' field. For more complex charts with multiple datasets or specific dataset options, configure datasets under 'Dataset Options' instead.)
   - Output the generated chart image as binary data in a field named data.
4. Upload to Google Drive: The Google Drive node (Google Drive: Upload File) takes the binary data (data) from the QuickChart node, uploads the image to your specified Google Drive folder, and names the file dynamically based on its extension (e.g., chart.png).

**Setup Steps**
1. Import: Import this template into your n8n instance.
2. Configure Google Drive Credentials: Select the Google Drive: Upload File node. You MUST configure your own Google Drive credentials. Click the 'Credentials' dropdown and either select existing credentials or create new ones by following the authentication prompts.
3. (Optional) Customize Google Drive Folder: In the Google Drive: Upload File node, change the Drive ID and Folder ID to specify exactly where the chart should be uploaded.
4. Activate: Activate the workflow if you want it to run automatically on a different trigger.

**How to Use & Customize**
- **Change Input Data:** Modify the labels and salesData arrays within the Edit Fields: Set JSON data to test node to use your own data. Ensure the number of labels matches the number of data points.
- **Use Real Data Sources:** Replace the Edit Fields: Set JSON data to test node with nodes that fetch data from real sources such as HTTP Request (APIs), Postgres/MongoDB nodes (databases), or the Google Sheets node. Ensure the output data from your source node is formatted similarly (providing labels and salesData arrays). You might need another Set node to structure the data correctly before the QuickChart node.
- **Change Chart Type:** In the QuickChart node, modify the Chart Type parameter (e.g., change from line to bar, pie, doughnut, etc.).
- **Customize Chart Appearance:** Explore the Chart Options parameter within the QuickChart node to add titles, change colors, modify axes, etc., using QuickChart's standard JSON configuration options.
- **Use Datasets (Recommended for Complex Charts):** For multiple lines/bars or more control, configure datasets explicitly in the QuickChart node: remove the expression from the top-level Data field, go to Dataset Options -> Add option -> Add dataset, and set the Data field within the dataset using an expression like {{ $json.jsonData.salesData }}. You can add multiple datasets this way. (See the configuration sketch below.)
- **Change Output Destination:** Replace the Google Drive: Upload File node with other nodes to handle the chart image differently:
  - Write Binary File: Save the chart to the local filesystem where n8n is running.
  - Slack / Discord / Telegram: Send the chart to messaging platforms.
  - Move Binary Data: Convert the image to Base64 to embed in HTML or return via a webhook response.

**Nodes Used**
Manual Trigger, Set, QuickChart, Google Drive

Tags: QuickChart, Chart, Visualization, Line Chart, Google Drive, Reporting, Automation
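As referenced above, here is a sketch of the Chart.js-style configuration QuickChart ultimately renders from the sample data. QuickChart accepts standard Chart.js config, so the options follow that schema; the 'Sales' dataset label and the title text are illustrative additions.

```javascript
// The configuration the QuickChart node effectively builds from the
// sample jsonData. Options follow the standard Chart.js schema.
const jsonData = {
  labels: ['Q1', 'Q2', 'Q3', 'Q4'],
  salesData: [1250, 1800, 1550, 2100],
};

const chartConfig = {
  type: 'line', // swap for 'bar', 'pie', 'doughnut', ...
  data: {
    labels: jsonData.labels,
    datasets: [
      {
        label: 'Sales',           // illustrative dataset label
        data: jsonData.salesData,
        fill: false,
      },
    ],
  },
  options: {
    plugins: { title: { display: true, text: 'Quarterly Sales' } }, // illustrative
  },
};
// QuickChart renders this via https://quickchart.io/chart?c=<URL-encoded config>
```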
by Hueston
**Who is this for?**
- Sales professionals looking to build lead lists from target company domains
- Business development teams conducting outreach campaigns
- Marketers building contact databases for account-based marketing
- Recruiters searching for potential candidates at specific companies
- Anyone needing to transform a list of company domains into actionable contact information

**What problem is this workflow solving?**
Finding business email addresses for outreach is a time-consuming process. The Apollo API doesn't provide a direct way to extract email contacts from domains in a single call. This workflow bridges that gap by:
- Automating the two-step process required by Apollo's API
- Processing multiple domains in batches without manual intervention
- Extracting, enriching, and storing contact information in a structured format
- Eliminating hours of manual data entry and API interaction

**What this workflow does**
This workflow creates an automated pipeline between Google Sheets and Apollo's API to:
1. Pull a list of target domains from a Google Sheet.
2. Submit each domain to Apollo's search API to find associated people.
3. Loop through each person found and enrich their profile data.
4. Extract key information: name, title, email address, and LinkedIn URL.
5. Write the enriched contact information back to a results sheet.
6. Process the next domain automatically until all are complete.

**Setup**
- Prerequisites:
  - An n8n instance (cloud or self-hosted)
  - An Apollo.io account with API access
  - A Google account with access to Google Sheets
- Google Sheets Setup: Create a new Google Sheet with two tabs:
  - Tab 1: "Target Domains" with a column named "Domain To Enrich"
  - Tab 2: "Results" with columns: Company, First Name, Last Name, Title, Email, LinkedIn
- n8n Setup:
  1. Import the workflow JSON into your n8n instance.
  2. Set up Google Sheets credentials in n8n.
  3. Update the Google Sheets document ID in both Google Sheets nodes.
  4. Add your Apollo API key to both HTTP Request nodes.
  5. Review and adjust API rate limits if needed.
- Testing:
  1. Add a few test domains to your "Target Domains" sheet.
  2. Run the workflow manually to verify it's working correctly.
  3. Check the "Results" sheet to confirm data is being properly populated.

**How to customize this workflow to your needs**
- Adding More Contact Fields:
  - Modify the "Clean Up" node to extract additional fields from the Apollo API response.
  - Add corresponding columns to your "Results" sheet.
  - Update the "Results To Results Sheet" node mapping to include the new fields.
- Filtering Results:
  - Add a Filter node after "Clean Up" to include only contacts with specific roles.
  - Create conditions based on title, seniority, or other fields returned by Apollo.
- Automating Workflow Execution:
  - Replace the manual trigger with a Schedule Trigger to run daily/weekly.
  - Add a Filter node to process only domains with "Not Processed" status.
  - Update the status field in Google Sheets after processing.

**Additional Notes**
- This workflow respects Apollo's API rate limits by processing one contact at a time.
- The Apollo API may not return contact information for all domains or all employees.
- Consider legal and privacy implications when collecting and storing contact information.

Made with ❤️ by Hueston
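For orientation, here is a sketch of the two Apollo calls the HTTP Request nodes make. The endpoints (mixed_people/search and people/match) exist in Apollo's public API, but the parameter and response field names below are assumptions on my part; check them against Apollo's current documentation.

```javascript
// Sketch of Apollo's two-step flow: search people by domain, then enrich
// each match to reveal the email. Field names are assumptions to verify.
const headers = { 'Content-Type': 'application/json', 'X-Api-Key': '<APOLLO_API_KEY>' };

// Step 1: find people associated with a target domain.
const search = await (await fetch('https://api.apollo.io/v1/mixed_people/search', {
  method: 'POST',
  headers,
  body: JSON.stringify({ q_organization_domains: 'example.com', page: 1 }),
})).json();

// Step 2: enrich one person at a time (stays within rate limits).
const rows = [];
for (const person of search.people ?? []) {
  const match = await (await fetch('https://api.apollo.io/v1/people/match', {
    method: 'POST',
    headers,
    body: JSON.stringify({ id: person.id }),
  })).json();
  const p = match.person ?? {};
  rows.push({
    company: p.organization?.name,
    firstName: p.first_name,
    lastName: p.last_name,
    title: p.title,
    email: p.email,
    linkedin: p.linkedin_url,
  });
}
return rows.map(r => ({ json: r }));
```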
by Aitor | 1Node
Turn Gumroad buyers into loyal email subscribers and keep your CRM up to date. When someone makes a purchase in your Gumroad store, this n8n workflow instantly adds that customer to the right MailerLite group (so your nurture sequence starts on time) and writes the sale details into your Google Sheets CRM. You'll never copy-paste orders again, and every buyer begins receiving your follow-up emails the moment they purchase.

**Requirements**
- A Gumroad account with a product listed
- A MailerLite account, with a subscriber group already created
- Enabled APIs and credentials for Google Sheets, MailerLite, and Gumroad

**How it works**
1. **Listen for a new sale on Gumroad.** The Gumroad trigger watches your account 24/7 and fires as soon as a sale is completed.
2. **Create (or update) the subscriber in MailerLite.** The buyer's name and email are added to MailerLite. If they already exist, the workflow simply updates their profile.
3. **Assign the subscriber to your Gumroad group.** Grouping lets your MailerLite automation send the right onboarding or upsell sequence without manual tagging.
4. **Log the purchase in Google Sheets.** The buyer's contact details, product, price, and date are appended as a new row in your CRM sheet.

**Set-up steps**
1. Create an application in Gumroad and copy the access token; you'll paste it into the Gumroad trigger node.
2. Grab your MailerLite API key (MailerLite dashboard → Integrations → API) and paste it into the two MailerLite nodes.
3. Prepare a Google Sheets spreadsheet with column headers like Name, Email, Product, Price, Date.
4. Open the template in n8n Cloud or Desktop:
   - In the Gumroad node, paste your token.
   - In the MailerLite nodes, paste your API key and replace the group ID.
   - In the Google Sheets node, replace the credentials and pick your spreadsheet and worksheet.

**Get in touch with us**
Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
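For reference, a sketch of the subscriber upsert the MailerLite nodes perform behind the scenes, written against MailerLite's connect.mailerlite.com API. The body fields shown (fields.name, groups) are my reading of that API, so confirm them in MailerLite's documentation.

```javascript
// Sketch: create-or-update a subscriber and assign the Gumroad group.
// MailerLite upserts by email, so existing subscribers are updated,
// not duplicated.
const res = await fetch('https://connect.mailerlite.com/api/subscribers', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <MAILERLITE_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    email: 'buyer@example.com',    // from the Gumroad sale payload
    fields: { name: 'Jane Doe' },  // buyer's name
    groups: ['<YOUR_GROUP_ID>'],   // the group your nurture sequence watches
  }),
});
const subscriber = await res.json();
```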
by Greg Evseev
This n8n workflow template allows you to upload a photo to a SharePoint folder using the Microsoft Graph API. The workflow includes steps for authentication, retrieving a photo for testing purposes, setting the destination folder and file name, and uploading the photo.

**Who is this for?**
This workflow is ideal for users who need to automate the process of uploading images to SharePoint. It is particularly useful for developers, IT administrators, and anyone managing digital assets within a SharePoint environment.

**What problem is this workflow solving? / Use Case**
This workflow addresses the need to automate the uploading of photos to a specific SharePoint folder. By using the Microsoft Graph API, it ensures secure and efficient file management, reducing manual effort and potential errors.

**What this workflow does**
1. Trigger the Workflow: The workflow starts when the user clicks the 'Test workflow' button.
2. Set Configuration: Sensitive data such as TENANT_ID, CLIENT_ID, and CLIENT_SECRET are set.
3. Authentication: Obtains an access token from the Microsoft Graph API using the provided credentials.
4. Get Photo: Retrieves a sample photo from a URL for testing purposes.
5. Set Destination: Sets the target folder and file name for the photo upload.
6. Upload Photo: Uploads the photo to the specified SharePoint folder using the Microsoft Graph API.

**Setup**
Prerequisites:
- Create an Application User: Follow this guide to create an application user.
- Set Permissions. Ensure the following permissions are set:
  - Sites.ReadWrite.All: for SharePoint site access.
  - Files.ReadWrite.All: for file upload operations.

Authentication: For successful authentication, provide the following:
- TENANT_ID
- CLIENT_ID
- CLIENT_SECRET

Note: For demonstration purposes, these values are stored in a 'Set' node. In a production environment, ensure the safety of such data using credentials, secure vaults, or other safe methods.

Set Destination: The destination is defined by two parameters:
- TARGET_FOLDER: The folder path in SharePoint where the photo will be uploaded.
- FILE_NAME: The name of the file to be uploaded.

Example: For the desired file location https://contoso.sharepoint.com/uploads/pictures from n8n/example.jpg, set:
- TARGET_FOLDER = /uploads/pictures from n8n
- FILE_NAME = example.jpg

**How to Customize This Workflow to Your Needs**
- Update Sensitive Data: Replace the placeholder values for TENANT_ID, CLIENT_ID, and CLIENT_SECRET with your actual credentials.
- Change Destination: Modify the TARGET_FOLDER and FILE_NAME parameters to match your desired upload location and file name.
- Test with Different Photos: Update the URL in the 'Get Photo' node to test with different images.

**Sticky Notes**
- Workflow Overview: explains the overall purpose and dependencies of the workflow.
- Authentication Details: provides details on the authentication process and the importance of securing sensitive data.
- Set Destination Details: explains how to set the destination folder and file name for the photo upload.

By following these guidelines, you can easily customize and use this workflow to automate photo uploads to SharePoint using the Microsoft Graph API.
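To make the two Graph API calls concrete, here is a sketch under stated assumptions: the token request and the simple upload use documented Microsoft Graph endpoints, while SITE_ID, the placeholder credentials, and the sample image URL are illustrative (resolve your site ID via GET /v1.0/sites/{hostname}:/{site-path}).

```javascript
// Sketch: client-credentials token, then a simple file upload to SharePoint.
const TENANT_ID = '<TENANT_ID>';
const CLIENT_ID = '<CLIENT_ID>';
const CLIENT_SECRET = '<CLIENT_SECRET>';
const SITE_ID = '<SITE_ID>'; // illustrative; look up via GET /v1.0/sites/...
const TARGET_FOLDER = '/uploads/pictures from n8n';
const FILE_NAME = 'example.jpg';

// 1) Obtain an access token (the Authentication step).
const tokenRes = await fetch(
  `https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
      scope: 'https://graph.microsoft.com/.default',
    }),
  }
);
const { access_token } = await tokenRes.json();

// Binary photo to upload (the Get Photo step); any image URL works for testing.
const photoBuffer = Buffer.from(
  await (await fetch('https://picsum.photos/200')).arrayBuffer()
);

// 2) Upload the photo. Simple upload suits files under ~4 MB;
//    larger files need an upload session.
const path = encodeURI(`${TARGET_FOLDER}/${FILE_NAME}`);
await fetch(
  `https://graph.microsoft.com/v1.0/sites/${SITE_ID}/drive/root:${path}:/content`,
  {
    method: 'PUT',
    headers: { Authorization: `Bearer ${access_token}`, 'Content-Type': 'image/jpeg' },
    body: photoBuffer,
  }
);
```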
by Paulo Ramirez
Receive real-time call-event data from telli

**Purpose and Problem Solved**
This template automates the process of receiving and acting upon real-time call-event data from telli, an AI-powered voice agent platform. It solves the challenge of manually updating CRM records and initiating follow-up actions based on call outcomes. By leveraging webhooks and n8n's powerful workflow capabilities, this template enables businesses to instantly update their Airtable CRM and trigger appropriate follow-up actions, enhancing efficiency and responsiveness in customer interactions.

**Prerequisites**
- An active telli account with API access and webhook capabilities
- An Airtable base set up as your CRM
- An n8n instance (cloud or self-hosted)

**Airtable Specifications**
Create an Airtable base with the following table and fields:
- Table: Contacts
- Fields:
  - Name (Single line text)
  - Phone (Phone number)
  - Email (Email)
  - Appointment_Booked (Checkbox)
  - Interest (Single select: High, Medium, Low)
  - Last_Call_Date (Date)
  - Notes (Long text)

**Step-by-Step Setup Instructions**
1. Webhook configuration in telli:
   - Log into your telli dashboard.
   - Navigate to the webhook settings.
   - Set the endpoint URL to your n8n Webhook node URL.
   - Select the "call_ended" event to trigger the webhook.
2. n8n workflow setup:
   - Create a new workflow in n8n.
   - Add a Webhook node as the trigger and configure it to receive POST requests.
3. Parse webhook data:
   - Add a Set node to extract relevant information from the webhook payload.
   - Map fields such as call_outcome, appointment_booked, and interest.
4. Decision logic:
   - Add a Switch node to create different paths based on the call outcome.
   - Create branches for scenarios like "Appointment Booked", "Interested", and "Not Interested".
5. Airtable integration:
   - Add Airtable nodes for each outcome to update the Contacts table.
   - Configure the nodes to update fields like Appointment_Booked, Interest, and Last_Call_Date.
6. Follow-up actions:
   - For "Interested" but not booked outcomes, add an Email node to trigger a follow-up email campaign.
   - For "Appointment Booked", add a node to create a calendar event or task.
7. Testing and activation:
   - Use the n8n testing feature to simulate webhook calls and verify each path.
   - Once satisfied, activate the workflow.

**Example Workflow**
1. The webhook receives a "call_ended" event from telli.
2. The Set node extracts the call outcome: appointment_booked = true, interest = true.
3. The Switch node directs execution to the "Appointment Booked" path.
4. The Airtable node updates the contact record: sets Appointment_Booked to true, sets Interest to "High", and updates Last_Call_Date.
5. A calendar node creates an appointment for the booked slot.

**Example Payload**
Below is an example of the payload you might receive from telli when a call ends:

```json
{
  "event": "call_ended",
  "call": {
    "call_id": "b4a05730-2abc-4eb0-8066-2e4d23b53ba9",
    "attempt": 1,
    "from_number": "+17755719467",
    "to_number": "+16506794960",
    "external_contact_id": "external-123",
    "contact_id": "6bd1e7e0-6d00-4c0b-ad5b-daa72457a27d",
    "agent_id": "d8931604-92ad-45cf-9071-d9cd2afbad0c",
    "triggered_at": 1731956924302,
    "started_at": 1731956932264,
    "booked_slot_for": "2025-02-24T15:30:00Z",
    "ended_at": 1731957002078,
    "call_length_min": 2,
    "call_status": "COMPLETED",
    "transcript": "Agent: Hello...",
    "transcriptObject": [
      { "role": "agent", "content": "Hello..." }
    ],
    "call_analysis": {
      "summary": {
        "value": true,
        "details": "A call between an agent and a customer talking about buying an ice cream machine"
      },
      "appointment": {
        "value": true,
        "details": "2025-02-18T15:30:00Z"
      },
      "interest": {
        "value": true,
        "details": "The customer is interested in buying an ice cream machine"
      }
    }
  }
}
```

In this example, the call resulted in a booked appointment and showed customer interest. Your n8n workflow would process this data, updating the Airtable CRM and triggering any necessary follow-up actions.

By implementing this template, businesses can automate their post-call processes, ensuring timely follow-ups and accurate CRM updates. This real-time integration between telli's AI voice agents and your Airtable CRM streamlines operations, improves customer engagement, and increases the efficiency of your sales and support teams.
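To make the parse step concrete, here is a minimal sketch of extracting the routing fields from the example payload above; it is a Code-node equivalent of the Set node, with field paths taken directly from the sample payload.

```javascript
// Sketch: pull the fields the Switch node routes on, straight from
// the example payload shown above.
const { event, call } = $json.body ?? $json;
if (event !== 'call_ended') return []; // only act on call_ended events

return [{
  json: {
    contactId: call.external_contact_id,
    appointmentBooked: call.call_analysis?.appointment?.value === true,
    interested: call.call_analysis?.interest?.value === true,
    bookedSlot: call.call_analysis?.appointment?.details ?? null, // ISO slot time
    lastCallDate: new Date(call.ended_at).toISOString(),
    notes: call.call_analysis?.summary?.details,
  },
}];
```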
by Pedro Santos
🤖 AI Agent Web Search using SearchApi & LLM

**Who is this for?**
This workflow is ideal for anyone conducting online research, including students, researchers, content creators, and professionals looking for accurate, up-to-date, and verifiable information. It also serves as an excellent foundation for building more sophisticated AI-driven applications.

**What problem does this workflow solve? / Use case**
This workflow automates web searches by enabling an AI agent to efficiently retrieve and summarize external, verifiable information, ensuring accuracy through source citations.

**What this workflow does**
- Connects an AI Agent node to SearchApi.io as an integrated search tool.
- Empowers the AI agent to perform real-time web searches using various SearchApi engines (e.g., Google, Bing).
- Allows the AI agent to dynamically determine search parameters based on user interaction, delivering contextually relevant results.
- Ensures responses include clearly cited sources for validation and further exploration.

**Setup**
1. Install the SearchApi community node:
   - Open Settings → Community Nodes inside your self-hosted n8n instance.
   - Fill npm Package Name with @searchapi/n8n-nodes-searchapi.
   - Accept the risk prompt and hit Install. It should now appear as a node when you search for it.
2. API configuration:
   - Set up your SearchApi.io credentials in n8n.
   - Add your preferred LLM provider credentials (e.g., OpenRouter API).
3. Input requirements: Provide the YouTube video ID (e.g., wBuULAoJxok).
4. Connect the LLM integration: Configure the summarization chain with your chosen model and parameters for text splitting.

**How to customize this workflow to your needs**
- Integrate additional nodes to structure or store search results (e.g., saving to databases, Notion, Google Sheets).
- Extend chatbot capabilities to integrate with messaging platforms (Slack, Discord) or email notifications.
- Adjust search parameters and filters within the AI Agent node to tailor information retrieval.

**Example Usage**
- **Input**: User asks, "What are the latest developments in AI regulation?"
- **Output**: The AI retrieves, summarizes, and cites recent, authoritative articles and news sources from the web.
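For orientation, here is a sketch of the underlying SearchApi.io request the community node wraps. The endpoint and the engine/q/api_key parameters match SearchApi's documented basics, while the organic_results response shape is my assumption for the Google engine; verify it against a live response.

```javascript
// Sketch: the raw search call behind the agent's SearchApi tool.
const params = new URLSearchParams({
  engine: 'google',                          // the agent can pick other engines
  q: 'latest developments in AI regulation', // query the agent decides on
  api_key: '<SEARCHAPI_KEY>',
});
const results = await (
  await fetch(`https://www.searchapi.io/api/v1/search?${params}`)
).json();

// Collect title/link pairs so the agent's answer can cite its sources.
const sources = (results.organic_results ?? []).map(r => ({
  title: r.title,
  link: r.link,
  snippet: r.snippet,
}));
return [{ json: { sources } }];
```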