by vinci-king-01
## How it works

This workflow automatically scrapes real estate listings from Zillow and sends them to a Telegram channel.

### Key Steps

- **Scheduled Trigger** - Runs the workflow at specified intervals to find new listings.
- **AI-Powered Scraping** - Uses ScrapeGraphAI to extract property information from Zillow.
- **Data Formatting** - Processes and structures the scraped data for Telegram messages (see the sketch at the end of this description).
- **Telegram Integration** - Sends formatted listing details to your specified Telegram channel.

## Set up steps

Setup time: 5-10 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key.
2. **Set up Telegram connection** - Connect your Telegram account and specify the target channel.
3. **Customize the Zillow URL** - Update the URL to target specific locations or search criteria.
4. **Adjust schedule** - Modify the trigger timing based on how frequently you want to check for new listings.
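To make the Data Formatting step concrete, here is a minimal sketch of an n8n Code node that turns scraped listings into Telegram-ready text. The field names (`address`, `price`, `beds`, `baths`, `url`) are assumptions about the ScrapeGraphAI output and may differ from your extraction schema.

```javascript
// Minimal sketch of the Data Formatting step as an n8n Code node.
// Field names (address, price, beds, baths, url) are assumptions
// about the ScrapeGraphAI output schema.
return $input.all().map(item => {
  const listing = item.json;
  const text = [
    `🏠 ${listing.address ?? 'New listing'}`,
    `💰 ${listing.price ?? 'Price not listed'}`,
    `🛏 ${listing.beds ?? '?'} bd · 🛁 ${listing.baths ?? '?'} ba`,
    listing.url ?? '',
  ].join('\n');
  // The Telegram node can then send {{ $json.text }} as the message.
  return { json: { text } };
});
```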
by explorium
# Google Sheets Company Enrichment with Explorium MCP

## Template

Download the following JSON file and import it to a new n8n workflow: google_sheets_enrichment.json

## Overview

This n8n workflow template enables automatic enrichment of company information in your Google Sheets. When you add a new company or update existing company details (name or website), the workflow automatically fetches additional business intelligence data using Explorium MCP and updates your sheet with:

- Business ID
- NAICS industry code
- Number of employees (range)
- Annual revenue (range)

## Key Features

- **Automatic Triggering**: Monitors your Google Sheet for new rows or updates to company name/website fields
- **Smart Processing**: Only processes new or modified rows, not the entire sheet
- **Data Validation**: Ensures both company name and website are present before processing (a validation sketch appears at the end of this description)
- **Error Handling**: Processes each row individually to prevent one failure from affecting others
- **Powered by AI**: Uses Claude Sonnet 4 with Explorium MCP for intelligent data enrichment

## Prerequisites

Before setting up this workflow, ensure you have:

- n8n instance (self-hosted or cloud)
- Google account with access to Google Sheets
- Anthropic API key for Claude
- Explorium MCP API key

## Installation & Setup

### Step 1: Import the Workflow

1. Create a new workflow.
2. Download the workflow JSON from above.
3. In your n8n instance, go to Workflows → Add Workflow → Import from File.
4. Select the JSON file and click Import.

### Step 2: Create Google Sheet

Create a new Google Sheet (or make a copy of this template). Your Google Sheet must have the following columns (exact names):

- name - Company name
- website - Company website URL
- business_id - Will be populated by the workflow
- naics - Will be populated by the workflow
- number_of_employees_range - Will be populated by the workflow
- yearly_revenue_range - Will be populated by the workflow

### Step 3: Configure Google Sheets Credentials

You'll need to set up two Google credentials:

**Google Sheets Trigger Credentials:**

1. Click on the Google Sheets Trigger node.
2. Under Credentials, click Create New.
3. On n8n Cloud, click the "Sign in with Google" button and grant permissions to read and monitor your Google Sheets.
4. On a self-hosted n8n instance, follow the OAuth2 authentication process here and fill in the Client ID and Client Secret fields.

**Google Sheets Update Credentials:**

1. Click on the Update Company Row node.
2. Under Credentials, select the credentials you created above, or create new ones.
3. Ensure permissions include write access to your sheets.

### Step 4: Configure Anthropic Credentials

1. Click on the Anthropic Chat Model node.
2. Under Credentials, click Create New.
3. Enter your Anthropic API key.
4. Save the credentials.

### Step 5: Configure Explorium MCP Credentials

1. Click on the MCP Client node.
2. Under Credentials, click Create New (Header Auth).
3. Fill the Name field with api_key.
4. Fill the Value field with your Explorium API key.
5. Save the credentials.

### Step 6: Link Your Google Sheet

In the Google Sheets Trigger node:

- Select your Google Sheet from the dropdown
- Select the worksheet (usually "Sheet1")

In the Update Company Row node:

- Select the same Google Sheet and worksheet
- Ensure the matching column is set to row_number

### Step 7: Activate the Workflow

Click the Active toggle in the top right to activate the workflow. The workflow will now monitor your sheet every minute for changes.

## How It Works

### Workflow Process Flow

1. **Google Sheets Trigger**: Polls your sheet every minute for new rows or changes to name/website fields
2. **Filter Valid Rows**: Validates that both company name and website are present
3. **Loop Over Items**: Processes each company individually
4. **AI Agent**: Uses Explorium MCP to find the company's business ID and retrieve firmographic data (revenue, employees, NAICS code)
5. **Format Output**: Structures the data for Google Sheets
6. **Update Company Row**: Writes the enriched data back to the original row

### Trigger Behavior

- **First Activation**: May process all existing rows to establish a baseline
- **Ongoing Operation**: Only processes new rows or rows where name/website fields change
- **Polling Frequency**: Checks for changes every minute

## Usage

### Adding New Companies

1. Add a new row to your Google Sheet.
2. Fill in the name and website columns.
3. Within 1 minute, the workflow will automatically detect the new row, enrich the company data, and update the remaining columns.

### Updating Existing Companies

1. Modify the name or website field of an existing row.
2. The workflow will re-process that row with the updated information.
3. All enrichment data will be refreshed.

### Monitoring Executions

In n8n, go to Executions to see workflow runs. Each execution shows:

- Which rows were processed
- Success/failure status
- Detailed logs for troubleshooting

## Troubleshooting

### Common Issues

**All rows are processed instead of just new/updated ones**

- Ensure the workflow is activated, not just run manually
- Manual test runs will process all rows
- First activation may process all rows once

**No data is returned for a company**

- Verify the company name and website are correct
- Check if the company exists in Explorium's database
- Some smaller or newer companies may not have data available

**Workflow isn't triggering**

- Confirm the workflow is activated (Active toggle is ON)
- Check that changes are made to the name or website columns
- Verify Google Sheets credentials have proper permissions

**Authentication errors**

- Re-authenticate Google Sheets credentials
- Verify the Anthropic API key is valid and has credits
- Check that the Explorium API key is correct and active

### Error Handling

The workflow processes each row individually, so if one company fails to enrich:

- Other rows will still be processed
- The failed row will retain its original data
- Check the execution logs for specific error details

## Best Practices

- **Data Quality**: Ensure company names and websites are accurate for best results
- **Website Format**: Include full URLs (https://example.com) rather than just domain names
- **Batch Processing**: The workflow handles multiple updates efficiently, so you can add several companies at once
- **Regular Monitoring**: Periodically check execution logs to ensure smooth operation

## API Limits & Considerations

- **Google Sheets API**: Subject to Google's API quotas
- **Anthropic API**: Each enrichment uses Claude Sonnet 4 tokens
- **Explorium MCP**: Rate limits may apply based on your subscription

## Support

For issues specific to:

- **n8n platform**: Consult n8n documentation or community
- **Google Sheets integration**: Check n8n's Google Sheets node documentation
- **Explorium MCP**: Contact Explorium support for API-related issues
- **Anthropic/Claude**: Refer to Anthropic's documentation for API issues

## Example Use Cases

- **Sales Prospecting**: Automatically enrich lead lists with company size and revenue data
- **Market Research**: Build comprehensive databases of companies in specific industries
- **Competitive Analysis**: Track and monitor competitor information
- **Investment Research**: Gather firmographic data for potential investment targets
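To make the Filter Valid Rows step concrete, here is a minimal sketch of the validation as an n8n Code node. The column names match the sheet layout above; the exact node configuration in the template may differ.

```javascript
// Minimal sketch of the "Filter Valid Rows" check as an n8n Code node.
// Keeps only rows where both name and website are present and non-empty.
return $input.all().filter(item => {
  const name = String(item.json.name ?? '').trim();
  const website = String(item.json.website ?? '').trim();
  return name !== '' && website !== '';
});
```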
by Kunsh
A streamlined AI-powered tool that extracts actionable technical insights from HackerOne security reports for advanced bug bounty hunters.

## How It Works

Send any HackerOne report URL (e.g., https://hackerone.com/reports/123456) to the chat interface. The AI agent will:

1. Fetch the report JSON automatically (see the sketch below)
2. Analyze for unique techniques, payloads, and root causes
3. Extract reusable insights in a structured format
4. Summarize with practical pentesting value

## Setup Requirements

- Google Gemini API credentials configured
- Chat interface deployed and accessible
- HackerOne report URLs

## Output Format

- **Summary**: One-liner impact statement
- **Techniques**: Payloads, code snippets, exploitation steps
- **Pro Tips**: Reusable insights for future hunts

Perfect for rapid triage and building your personal exploit knowledge base.
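As a rough illustration of the fetch step, publicly disclosed HackerOne reports expose a JSON view by appending `.json` to the report URL. The sketch below shows this in standalone Node.js (18+ for global `fetch`); in the template the agent's HTTP tool handles it, and the field names read at the end are assumptions about the response shape.

```javascript
// Sketch of fetching a disclosed HackerOne report's JSON, assuming the
// public ".json" endpoint. The report ID is an example placeholder.
const reportUrl = 'https://hackerone.com/reports/123456';
const res = await fetch(`${reportUrl}.json`, {
  headers: { Accept: 'application/json' },
});
if (!res.ok) throw new Error(`HackerOne returned ${res.status}`);
const report = await res.json();
// Fields of interest for analysis (names are assumptions):
console.log(report.title, report.vulnerability_information);
```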
by Jimleuk
This n8n template offers a simple yet capable chatbot assistant who can answer course enquiries over SMS.

Given the right access to data, AI Agents are capable of planning and performing relatively complex research tasks to get their answers. In this example, the agent must first understand the database schema and retrieve lists of values before generating its own query to search over the database.

Check out the example database here: https://airtable.com/appO5xvP1aUBYKyJ7/shr8jSFDaghubDOrw

## How it works

1. A Twilio trigger gives us the ability to receive SMS input into our workflow via webhook.
2. The message is then directed to our AI agent, who is instructed to assist the user and use the course database as reference.
3. The database is an Airtable base. The agent autonomously figures out which tool it needs to use and generates its own "filter_by_formula" query to search over the available courses (see the example query at the end of this description).
4. On successful search results, the agent can then use this information to answer the user's query.
5. The agent's output is logged in a second sheet of the Airtable base. We can use this later for analysis and lead gen.
6. Finally, the response is sent back to the user through SMS using Twilio.

## How to use

- Ensure your Twilio number is set to forward messages to this workflow's webhook URL.
- Configure and update the course database as required. If you're not interested in courses, you can swap this out for inventory, deliveries or any other data relevant to your business.
- Ask questions like:
  - "Can you help me find suitable courses to fill my Wednesday mornings?"
  - "Which courses are being instructed by Professor Lee?"
  - "I'm interested in creative arts. What courses are available which could be relevant to me?"

## Requirements

- Twilio for SMS receiving and sending
- OpenAI for LLM and Agent
- Airtable for Course Database

## Customising this workflow

- Add additional tools and expand the range of queries the agent is able to answer or assist with.
- Not using Airtable? This technique also works with SQL databases like PostgreSQL.
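To illustrate the kind of query the agent generates, here is a hypothetical Airtable search using `filterByFormula`. The table and field names (`Courses`, `Instructor`, `Day`) are assumptions about the example base's schema, and the token is a placeholder.

```javascript
// Hypothetical example of an agent-generated Airtable search.
// Field and table names are assumptions about the base schema.
const BASE_ID = 'appO5xvP1aUBYKyJ7'; // example base from the link above
const TABLE = 'Courses'; // assumed table name
const AIRTABLE_TOKEN = process.env.AIRTABLE_TOKEN;

const formula = `AND({Instructor} = 'Professor Lee', {Day} = 'Wednesday')`;
const url =
  `https://api.airtable.com/v0/${BASE_ID}/${TABLE}` +
  `?filterByFormula=${encodeURIComponent(formula)}`;
const res = await fetch(url, {
  headers: { Authorization: `Bearer ${AIRTABLE_TOKEN}` },
});
const { records } = await res.json();
console.log(`Found ${records.length} matching courses`);
```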
by Mike Russell
Boost engagement on your Discord server by automatically sharing new YouTube videos along with AI-generated summaries of their content. This workflow is ideal for content creators and community managers looking to provide value and spark interest through summarized content, making it easier for community members to decide if a video is of interest to them.

Watch this video tutorial to learn more about the template.

## How it works

1. **RSS Feed Trigger**: Monitors your YouTube channel for new uploads using the RSS feed.
2. **Video Captions Retrieval**: Fetches video captions using the YouTube API to get detailed content data.
3. **AI Summary Generation**: Uses an AI model to generate concise summaries from the video captions, highlighting key points.
4. **Discord Notification**: Posts video announcements along with their AI-generated summaries to a specified Discord channel using a webhook (see the payload sketch below).

## Set up steps

1. **Configure YouTube RSS Feed**: Set up the RSS feed node to detect new video uploads. Add your YouTube channel ID to the URL in the first node: https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID.
2. **Connect OpenAI Account**: To enable AI summary generation, connect your OpenAI account in n8n.
3. **Set Up Discord Webhook**: Create a webhook in your Discord server and configure it in the Discord node.
4. **Design the Message**: Format the Discord message as you like to include the video title, link, and the AI-generated summary.

This template empowers you to maintain a highly engaging Discord community, ensuring members receive not only regular updates but also valuable insights into each video's content without needing to watch immediately.
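As a rough sketch of the Discord notification, here is the webhook call with an embed carrying the summary. The `videoTitle`, `videoUrl`, and `summary` values are placeholders for data produced by earlier nodes.

```javascript
// Minimal sketch of the Discord webhook call (Node.js 18+).
// Placeholder values stand in for outputs of the RSS and AI nodes.
const webhookUrl = process.env.DISCORD_WEBHOOK_URL;
const videoTitle = 'My Latest Video'; // from the RSS trigger
const videoUrl = 'https://youtu.be/VIDEO_ID'; // from the RSS trigger
const summary = 'Key points of the video...'; // from the AI summary node

await fetch(webhookUrl, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    content: `📺 New video: ${videoTitle}\n${videoUrl}`,
    embeds: [{ title: 'AI Summary', description: summary }],
  }),
});
```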
by Aditya Gaur
## Who is this template for?

This template is designed for developers, DevOps engineers, and automation enthusiasts who want to streamline their GitLab merge request process using n8n, a low-code workflow automation tool. It eliminates manual intervention by automating the merging of GitLab branches through API calls.

## How it works?

1. **Trigger the workflow**: The workflow can be triggered by a webhook, a scheduled event, or a GitLab event (e.g., a new merge request is created or approved).
2. **Fetch Merge Request Details**: n8n makes an API call to GitLab to retrieve merge request details.
3. **Check Merge Conditions**: The workflow validates whether the merge request meets predefined conditions (e.g., approvals met, CI/CD pipelines passed).
4. **Perform the Merge**: If all conditions are met, n8n sends a request to the GitLab API to merge the branch automatically (see the sketch below).

## Setup Steps

### 1. Prerequisites

- An n8n instance (self-hosted or cloud)
- A GitLab personal access token with API access
- A GitLab repository with merge requests enabled

### 2. Create the n8n Workflow

- **Set up a trigger**: Choose a trigger node (Webhook, Cron, or GitLab Trigger).
- **Fetch merge request details**: Add an HTTP Request node to call GET /merge_requests/:id from the GitLab API.
- **Validate conditions**: Check if the merge request has the necessary approvals and ensure CI/CD pipelines have passed.
- **Merge the request**: Use an HTTP Request node to call the PUT /merge_requests/:id/merge API.

### 3. Test the Workflow

- Create a test merge request.
- Check if the workflow triggers and merges automatically.
- Debug using n8n logs if needed.

### 4. Deploy and Monitor

- Deploy the workflow in production.
- Use n8n's monitoring features to track execution.

This template enables seamless GitLab merge automation, improving efficiency and reducing manual work!

Note: Never hard-code API tokens or secrets in your HTTP requests.
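For reference, a minimal sketch of the merge call follows. Note that GitLab's REST API scopes merge requests under a project, i.e. `PUT /projects/:id/merge_requests/:merge_request_iid/merge`; the project ID and MR IID below are placeholders, and the token should live in an n8n credential rather than in code.

```javascript
// Minimal sketch of the merge call against GitLab's REST API (Node.js 18+).
// projectId and mrIid are placeholders; never hard-code the token.
const projectId = 12345; // your project ID
const mrIid = 7; // the merge request IID
const res = await fetch(
  `https://gitlab.com/api/v4/projects/${projectId}/merge_requests/${mrIid}/merge`,
  {
    method: 'PUT',
    headers: { 'PRIVATE-TOKEN': process.env.GITLAB_TOKEN },
  },
);
if (!res.ok) throw new Error(`Merge failed with status ${res.status}`);
```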
by The Higher Pitch
This workflow automates the process of publishing PR News articles to the WordPress website.

🔧 How it works:

- Uses an RSS Feed Trigger to monitor new PR News articles.
- Extracts the article content and parses the featured image URL.
- Uploads the image to WordPress as a media item.
- Creates a new draft post on the WordPress site using the article's content and sets the uploaded image as the featured image (see the sketch below).

✅ Features:

- Polls RSS feed every minute.
- Automatically extracts and sets featured images.
- Posts are created as drafts for editorial review.

📝 Requirements:

- WordPress REST API access with media upload permission.
- Active WordPress credentials in n8n.

Perfect for teams who want to streamline PR content publishing without manual effort.
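As a rough sketch of the two WordPress REST calls involved, assuming application-password auth: upload the image to `/wp-json/wp/v2/media`, then create the draft post with `featured_media` set to the returned media ID. `WP_URL`, the credentials, and the article variables are placeholders.

```javascript
// Sketch of the media upload and draft-post creation (Node.js 18+).
// All names below are placeholders for values from earlier nodes.
const WP_URL = 'https://example.com';
const WP_AUTH = 'Basic ' + Buffer.from('user:app-password').toString('base64');
const imageBuffer = Buffer.from([]); // binary image data from the RSS item
const articleTitle = 'Example PR headline';
const articleHtml = '<p>Article body...</p>';

// 1. Upload the featured image as a media item.
const media = await fetch(`${WP_URL}/wp-json/wp/v2/media`, {
  method: 'POST',
  headers: {
    Authorization: WP_AUTH,
    'Content-Type': 'image/jpeg',
    'Content-Disposition': 'attachment; filename="featured.jpg"',
  },
  body: imageBuffer,
}).then(r => r.json());

// 2. Create the draft post and set the uploaded image as featured.
await fetch(`${WP_URL}/wp-json/wp/v2/posts`, {
  method: 'POST',
  headers: { Authorization: WP_AUTH, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    title: articleTitle,
    content: articleHtml,
    status: 'draft',
    featured_media: media.id,
  }),
});
```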
by Yang
## What this workflow does

This workflow extracts product details—like name, price, discount, and rating—from website screenshots using Dumpling AI. It starts when a new product page URL is added to a Google Sheet, captures a screenshot of that page, extracts visible product info from the image, and writes the results back into the sheet.

## What problem is this workflow solving?

Many product pages block traditional scraping tools or use unstructured layouts. This workflow bypasses HTML limitations by using visual AI extraction, making it reliable even when content is embedded in images or hard to parse with code.

## Who is this for?

This is ideal for eCommerce researchers, pricing analysts, marketers, or anyone building a product database from websites without needing to code or maintain complex scrapers.

## Setup

1. Create a Google Sheet with a column named "Site" (or update the trigger).
2. Add your product page URLs in this column—one per row.
3. Connect your Google Sheets and Dumpling AI credentials in n8n.
4. Ensure your Dumpling AI account has API access for screenshots and extraction.

## How to customize the workflow

- **Prompt adjustment**: In the "Extract Text from Screenshot" node, you can modify the prompt to extract other information like brand name, delivery time, or availability.
- **Add more fields**: After the extraction, edit the "Format Extracted Data" node to map additional fields from the response to your Google Sheet columns (a mapping sketch follows below).
- **Change output destination**: You can easily replace the Google Sheets module with Airtable, Notion, or another app if preferred.

> ⚠️ This works best when the product data is clearly visible in the screenshot.
> It won't extract info that's hidden behind popups or loaded via user interaction.
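Here is a hypothetical sketch of the "Format Extracted Data" Code node. The property names on the extraction response (`product_name`, `price`, `discount`, `rating`) are assumptions and should be matched to Dumpling AI's actual output for your prompt.

```javascript
// Hypothetical sketch of the "Format Extracted Data" n8n Code node.
// Response field names are assumptions; adjust to the real output.
return $input.all().map(item => {
  const data = item.json;
  return {
    json: {
      Site: data.url ?? '',
      Name: data.product_name ?? '',
      Price: data.price ?? '',
      Discount: data.discount ?? '',
      Rating: data.rating ?? '',
    },
  };
});
```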
by Shahrear
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Transform your expense tracking with automated AI receipt processing that extracts data and organizes it instantly.

## What this workflow does

- Monitors Google Drive for new receipt uploads (images/PDFs)
- Downloads and processes files automatically
- Extracts key data using the VLM Run community node (merchant, amount, currency, date)
- Saves structured data to Google Sheets for easy tracking (see the row-mapping sketch below)

## Setup

**Prerequisites**: Google Drive/Sheets accounts, VLM Run API credentials, n8n instance.

You need to install the VLM Run community node. To install community nodes, go to Settings → Community Nodes → Install, then search for @vlm-run/n8n-nodes-vlmrun.

**Quick Setup:**

1. Configure Google Drive OAuth2 and create a receipt upload folder
2. Add VLM Run API credentials
3. Create a Google Sheet with columns: Customer, Merchant, Amount, Currency, Date
4. Update folder/sheet IDs in the workflow nodes
5. Test and activate

## How to customize this workflow to your needs

Extend functionality by:

- Adding expense categories and approval workflows
- Connecting to accounting software (QuickBooks, Xero)
- Including Slack notifications for processed receipts
- Adding data validation and duplicate detection

This workflow transforms manual receipt processing into an automated system that saves hours while improving accuracy.
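As a rough sketch, mapping the extracted receipt fields onto the sheet columns listed above could look like the Code node below. The shape of the VLM Run output (`item.json.merchant`, etc.) is an assumption; adjust the property names to the node's actual response.

```javascript
// Sketch of mapping extracted receipt data to the sheet columns.
// Property names on item.json are assumptions about VLM Run's output.
return $input.all().map(item => ({
  json: {
    Customer: item.json.customer ?? '',
    Merchant: item.json.merchant ?? '',
    Amount: item.json.amount ?? '',
    Currency: item.json.currency ?? '',
    Date: item.json.date ?? '',
  },
}));
```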
by n8n Team
This workflow creates a Jira issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in Jira.

## Prerequisites

- Zendesk account and Zendesk credentials.
- Jira account and Jira credentials.
- Jira project to create issues in.

## How it works

1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new issue in Jira. The Jira issue key is then saved in one of the ticket's fields (in setup we call this "Jira Issue Key").
3. The next time a comment is added to the ticket, the workflow retrieves the Jira issue key from the ticket's field and adds the comment to the issue in Jira (a sketch of this call appears at the end of this description).

## Setup

This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:

1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Jira issue key. To do so, follow the steps below:

1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as "Jira Issue Key".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
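For reference, here is a minimal sketch of the comment step, assuming Jira Cloud's REST API v2 (which accepts plain-text comment bodies). In the template the Jira node handles this; the base URL, credentials, and issue key below are placeholders.

```javascript
// Sketch of adding the latest Zendesk comment to the linked Jira issue
// via Jira Cloud REST API v2 (Node.js 18+). All names are placeholders.
const JIRA_BASE = 'https://your-domain.atlassian.net';
const JIRA_AUTH =
  'Basic ' + Buffer.from('email@example.com:api-token').toString('base64');
const issueKey = 'PROJ-123'; // read back from the "Jira Issue Key" field
const latestCommentHtml = '<p>Latest Zendesk comment</p>'; // from the webhook

await fetch(`${JIRA_BASE}/rest/api/2/issue/${issueKey}/comment`, {
  method: 'POST',
  headers: { Authorization: JIRA_AUTH, 'Content-Type': 'application/json' },
  body: JSON.stringify({ body: latestCommentHtml }),
});
```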
by bangank36
This workflow restores all n8n instance credentials from GitHub backups using the n8n API node. It complements the Backup Your Credentials to GitHub template by allowing users to seamlessly restore previously saved credentials.

## How It Works

The workflow fetches credentials stored in a GitHub repository and imports them into your n8n instance.

## Setup Instructions

To configure the workflow, update the Globals node with the following values:

- **repo.owner** – Your GitHub username
- **repo.name** – The name of your GitHub repository storing the credentials
- **repo.path** – The folder path within the repository where credentials are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and credentials are stored in a credentials/ folder, you would set:

- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → credentials/

## Required Credentials

- **GitHub API** – Access to your repository
- **n8n API** – To import credentials into your n8n instance (see the sketch at the end of this description)

## Who Is This For?

This template is ideal for users who want to restore their credentials from GitHub backups, ensuring easy migration and recovery in case of data loss.

Check out my other templates: 👉 My n8n Templates
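As a rough sketch of the import step the n8n API node performs, n8n's public REST API exposes `POST /api/v1/credentials`. The base URL, API key, and the `backup` object (one decoded JSON file fetched from the GitHub repo) are placeholders.

```javascript
// Sketch of restoring one credential via n8n's public REST API
// (Node.js 18+). All values below are placeholders.
const N8N_BASE = 'http://localhost:5678';
const N8N_API_KEY = process.env.N8N_API_KEY;

const backup = {
  name: 'My GitHub account', // from the backed-up JSON file
  type: 'githubApi',
  data: { accessToken: '...' }, // decrypted credential payload
};

await fetch(`${N8N_BASE}/api/v1/credentials`, {
  method: 'POST',
  headers: {
    'X-N8N-API-KEY': N8N_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(backup),
});
```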
by Yaron Been
This cutting-edge n8n automation is a powerful market research tool designed to continuously monitor and capture User-Generated Content (UGC) opportunities on Fiverr. By intelligently scraping, parsing, and logging gig data, this workflow provides:

- **Automated Market Scanning**:
  - Daily scrapes of Fiverr UGC gigs
  - Real-time market intelligence
  - Consistent, hands-off data collection
- **Intelligent Data Extraction**:
  - Parses complex HTML structures
  - Captures key gig details
  - Transforms unstructured web data into actionable insights
- **Seamless Data Logging**:
  - Automatic Google Sheets integration
  - Comprehensive gig marketplace tracking
  - Historical data preservation

## Key Benefits

- 🤖 **Full Automation**: Continuous market research
- 💡 **Smart Filtering**: Detailed UGC gig insights
- 📊 **Instant Reporting**: Real-time market trends
- ⏱️ **Time-Saving**: Eliminate manual research

## Workflow Architecture

### 🔍 Stage 1: Automated Triggering

- **Scheduled Scraping**: Daily gig discovery
- **Precise Timing**: Configurable run intervals
- **Consistent Monitoring**: Always-on market intelligence

### 🌐 Stage 2: Web Scraping

- **HTTP Request**: Fetch Fiverr search results
- **Dynamic Headers**: Bypass potential scraping restrictions
- **Targeted Search**: UGC-specific gig discovery

### 🧩 Stage 3: Data Extraction

- **HTML Parsing**: Extract critical gig information
- **Structured Data Collection**:
  - Gig prices
  - Seller names
  - Gig titles
  - Direct gig URLs

### 📋 Stage 4: Data Logging

- **Google Sheets Integration**: Automatic data storage
- **Historical Tracking**: Build comprehensive gig databases
- **Easy Analysis**: Spreadsheet-ready format

## Potential Use Cases

- **Content Creators**: Market rate research
- **Freelance Platforms**: Competitive intelligence
- **Marketing Agencies**: UGC trend analysis
- **Recruitment Specialists**: Talent pool mapping
- **Business Strategists**: Market opportunity identification

## Setup Requirements

1. **Fiverr Search Configuration**
   - Targeted search keywords
   - Specific UGC categories
2. **Web Scraping Preparation**
   - User-agent rotation strategy
   - Potential proxy configuration
   - Robust error handling
3. **Google Sheets Setup**
   - Connected Google account
   - Prepared spreadsheet
   - Appropriate sharing permissions
4. **n8n Installation**
   - Cloud or self-hosted instance
   - Import workflow configuration
   - Configure API credentials

## Future Enhancement Suggestions

- 🤖 AI-powered gig trend analysis
- 📊 Advanced data visualization
- 🔔 Real-time price change alerts
- 🧠 Machine learning market predictions
- 🌐 Multi-platform gig tracking

## Ethical Considerations

- Respect Fiverr's Terms of Service
- Implement responsible scraping practices
- Avoid overwhelming target websites
- Use data for legitimate research purposes

## Technical Recommendations

- Implement exponential backoff for requests (see the sketch after this list)
- Use randomized delays between scrapes
- Maintain flexible CSS selector strategies
- Consider rate limiting and IP rotation

## Connect With Me

Ready to unlock market insights?

- 📧 Email: Yaron@nofluff.online
- 🎥 YouTube: @YaronBeen
- 💼 LinkedIn: Yaron Been

Transform your market research with intelligent, automated workflows!
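Here is a minimal sketch of exponential backoff with randomized (jittered) delays, combining two of the recommendations above. The URL, retry limit, and static User-Agent are illustrative placeholders.

```javascript
// Sketch of a fetch with exponential backoff and jitter (Node.js 18+).
// Retry limit and headers are placeholders; rotate user agents in practice.
async function fetchWithBackoff(url, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, {
      headers: { 'User-Agent': 'Mozilla/5.0' },
    });
    if (res.ok) return res.text();
    // Exponential backoff with jitter: 1s, 2s, 4s, ... plus 0-1s random.
    const delayMs = 2 ** attempt * 1000 + Math.random() * 1000;
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error(`Failed after ${maxRetries} attempts: ${url}`);
}
```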