by Olivier
This template syncs prospects from ProspectPro into HubSpot. It checks whether a company already exists in HubSpot (by ProspectPro ID or domain), then updates the record or creates a new one. Sync results are logged back to ProspectPro with tags to prevent duplicates and mark errors, ensuring reliable and repeatable integrations.

## Features
- Automatically sync ProspectPro prospects to HubSpot companies
- Smart search logic: match by ProspectPro ID first, then by domain
- Creates new HubSpot companies when no match is found
- Updates existing HubSpot companies with the latest ProspectPro data
- Logs sync results back into ProspectPro with tags (HubspotSynced, HubspotSyncFailed)
- Extendable and modular: use as a trigger workflow or callable sub-flow

## Requirements
- n8n instance or cloud workspace
- The ProspectPro verified community node installed
- ProspectPro account & API credentials (14-day free trial)
- HubSpot account with OAuth2 app and API credentials

## Setup Instructions
1. Import the template and set your credentials (ProspectPro, HubSpot).
2. Connect to a trigger (e.g., ProspectPro "New website visitor") or call it as a sub-workflow.
3. Add a property to HubSpot for the ProspectPro ID if you don't already have one.
4. Adjust the sync logic in the "Continue?" node and the HubSpot fields to match your setup.
5. Optional: extend error handling, add Slack/CRM notifications, or sync HubSpot data back into ProspectPro.

## Security Notes
- Prevents re-processing of failed syncs using the HubspotSyncFailed tag
- Error branches included for failed updates/creates
- Manual resolution required if sync errors persist

## Testing
1. Run with the ProspectPro ID of a company with a known domain.
2. Check HubSpot for creation or update of the company record.
3. Verify the updated tags (HubspotSynced / HubspotSyncFailed) in ProspectPro.

## About ProspectPro
ProspectPro is a B2B prospecting platform for Dutch SMEs. It helps sales teams identify prospects, track website visitors, and streamline sales without a full CRM.
- Website: https://www.prospectpro.nl
- Platform: https://mijn.prospectpro.nl
- API docs: https://www.docs.bedrijfsdata.nl
- Support: https://www.prospectpro.nl/klantenservice
- Support hours: Monday–Friday, 09:00–17:00 CET

## About HubSpot
HubSpot is a leading CRM platform offering marketing, sales, and customer service tools. It helps companies manage contacts, automate workflows, and grow their customer base.

- Website: https://www.hubspot.com
- Developer docs: https://developers.hubspot.com
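The "match by ProspectPro ID first, then by domain" search order can be sketched as follows. `searchById` and `searchByDomain` are hypothetical stand-ins for the two HubSpot company-search calls the workflow makes, returning `null` when nothing matches:

```javascript
// Sketch of the template's two-stage company lookup (not its exact code).
async function findHubspotCompany(prospect, searchById, searchByDomain) {
  // Prefer the ProspectPro ID property: it is unique, while a domain
  // may be shared by sister companies.
  const byId = await searchById(prospect.prospectproId);
  if (byId) return { company: byId, matchedBy: 'prospectproId' };

  const byDomain = await searchByDomain(prospect.domain);
  if (byDomain) return { company: byDomain, matchedBy: 'domain' };

  return null; // no match: the workflow creates a new company instead
}
```

Falling back from ID to domain is what makes the sync repeatable: the first run matches by domain and writes the ID property, and every later run matches by ID.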
by Oneclick AI Squad
This n8n workflow helps users easily discover nearby residential construction projects by automatically scraping and analyzing property listings from 99acres and other real estate platforms. Users send an email with their location preferences and receive a curated list of available properties with detailed information, including pricing, area, possession dates, and construction status.

## Good to know
- The workflow focuses specifically on residential construction projects and active developments
- Property data is scraped in real time to ensure the most current information
- Results are automatically formatted and structured for easy reading
- The system handles multiple property formats and data variations from different sources
- Fallback mechanisms ensure reliable data extraction even when website structures change

## How it works
1. **Trigger: New Email** - Detects incoming emails with property search requests and extracts location preferences from the email content
2. **Extract Area & City** - Parses the email body to identify the target area (e.g., Gota, Ahmedabad) and falls back to a city-level search if no specific area is mentioned
3. **Scrape Construction Projects** - Scrapes 99acres and other property websites based on the extracted area and city
4. **Parse Project Listings** - Cleans and formats the scraped HTML into structured project entries with standardized fields
5. **Format Project Details** - Transforms all parsed projects into a consistent, email-ready list with bullet points and organized information
6. **Send Results to User** - Delivers a professionally formatted email with the complete list of matching construction projects to the original requester

## Email Format Examples

### Input Email Format

```
To: properties@yourcompany.com
Subject: Property Search Request

Hi, I am interested in buying a flat. Can you please send me the list of available properties in Gota, Ahmedabad?
```
### Output Email Example

```
Subject: Property Search Results: 4 Projects Found in Gota, Ahmedabad

Available Construction Projects in Gota, Ahmedabad
Search Area: Gota, Ahmedabad
Total Projects: 4
Search Date: August 4, 2025

PROJECT LISTINGS:

Project 1
Name: Vivaan Oliver offers
BHK: 3 BHK
Price: N/A
Area: 851.0 Sq.Ft
Possession: August 2025
Status: under construction
Location: Thaltej, Ahmedabad West
Scraped Date: 2025-08-04

Project 2
Name: Vivaan Oliver offers
BHK: 3 BHK
Price: Price on Request
Area: 891 Sq Ft
Possession: N/A
Status: Under Construction
Location: Thaltej, Ahmedabad West
Scraped Date: 2025-08-04

Project 3
Name: It offers an exclusive range of
BHK: 3 BHK
Price: N/A
Area: 250 Sq.Ft
Possession: 0 2250
Status: Under Construction
Location: Thaltej, Ahmedabad West
Scraped Date: 2025-08-04

Project 4
Name: N/A
BHK: 2 BHK
Price: N/A
Area: N/A
Possession: N/A
Status: N/A
Location: Thaltej, Ahmedabad West

Next Steps:
- Contact builders directly for detailed pricing and floor plans
- Schedule site visits to shortlisted properties
- Verify possession timelines and construction progress
- Compare amenities and location advantages

For more information or specific requirements, reply to this email.
```
## How to use

### Setup Instructions
1. Import the workflow into your n8n instance.
2. Configure email credentials: set up the email trigger for incoming property requests and SMTP credentials for sending property listings.
3. Configure web scraping: ensure proper headers and user agents for 99acres access, and set up fallback mechanisms for different property websites.
4. Test the workflow with sample property search emails.

### Sending Property Search Requests
1. Send an email to your configured property search address.
2. Include location details in natural language (e.g., "Gota, Ahmedabad").
3. Optionally specify preferences like BHK, budget, or amenities.
4. Receive detailed property listings within minutes.

## Requirements
- **n8n instance** (cloud or self-hosted) with web scraping capabilities
- **Email account** with IMAP/SMTP access for automated communication
- **Reliable internet connection** for real-time property data scraping
- **Access to target websites** (99acres, MagicBricks, etc.)

## Troubleshooting
- **No properties found**: Verify the area spelling and check whether the location has active listings
- **Scraping errors**: Update user agents and headers if websites block requests
- **Duplicate results**: Implement better deduplication logic based on property names and locations
- **Email parsing issues**: Test with various email formats and improve the regex patterns
- **Website structure changes**: Implement fallback parsers and regularly monitor scraping success rates
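The "Extract Area & City" step can be sketched as a small parser that pulls an "area, city" pair out of free-form email text and falls back to city-only when no area is given. The regex and field names here are illustrative, not the template's exact code:

```javascript
// Parse "in <Area>, <City>" phrasing out of an email body, e.g.
// "properties in Gota, Ahmedabad?". Falls back to "in <City>" alone.
function extractLocation(emailBody) {
  const withArea = emailBody.match(/\bin\s+([A-Za-z ]+?),\s*([A-Za-z ]+?)[?.!\n]/);
  if (withArea) {
    return { area: withArea[1].trim(), city: withArea[2].trim() };
  }
  // City-level fallback: triggers a broader search downstream.
  const cityOnly = emailBody.match(/\bin\s+([A-Za-z ]+?)[?.!\n]/);
  if (cityOnly) {
    return { area: null, city: cityOnly[1].trim() };
  }
  return null;
}
```

Testing with varied email phrasings, as the troubleshooting section suggests, is the main way to harden a pattern like this.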
by Rakin Jakaria
## Who this is for
This workflow is for freelancers, job seekers, or service providers who want to automatically apply to businesses by scraping their website information, extracting contact details, and sending personalized job application emails with AI-powered content, all from one form submission.

## What this workflow does
This workflow starts every time someone submits the Job Applier Form. It then:

1. **Scrapes the target business website** to gather company information and contact details.
2. **Converts HTML content** to readable Markdown format for better AI processing.
3. **Extracts email addresses** and creates a company summary using **GPT-5**.
4. **Validates email addresses** to ensure they contain proper formatting (@ symbol check).
5. **Accesses your experience data** from a connected **Google Sheet** with your skills and portfolio.
6. **Generates personalized application emails** (subject + body) using **GPT-5**, based on the job position and company info.
7. **Sends the application email** automatically via **Gmail** with your name as the sender.
8. **Provides confirmation** through a completion form showing the AI's response.

## Setup
To set this workflow up:

1. **Form Trigger** - Customize the job application form fields (Target Business Website, and an "Applying As" dropdown with positions like Video Editor, SEO Expert, etc.).
2. **OpenAI GPT-5** - Add your OpenAI API credentials for both AI models used in the workflow.
3. **Google Sheets** - Connect your sheet containing your work experience, skills, and portfolio information.
4. **Gmail Account** - Link your Gmail account for sending application emails automatically.
5. **Experience Data** - Update the Google Sheet with your relevant skills, experience, and achievements for each job type.
6. **Sender Name** - Modify the sender name in the Gmail settings (currently set to "Jamal Mia").

## How to customize this workflow to your needs
- Add more job positions to the dropdown menu (currently includes Video Editor, SEO Expert, Full-Stack Developer, Social Media Manager).
- Modify the AI prompt to reflect your unique value proposition and application style.
- Enhance email validation with additional checks like domain verification or email format patterns.
- Add follow-up scheduling to automatically send reminder emails after a certain period.
- Include attachment functionality to automatically attach your resume or portfolio to applications.
- Switch to different email providers or add multiple sender accounts for variety.
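The workflow's built-in validation is only an "@" symbol check; the "email format patterns" enhancement suggested above could look like this sketch (the function name and pattern are illustrative):

```javascript
// Stricter check than the template's "@" test: require a local part,
// exactly one "@", and a dotted domain with a 2+ character TLD.
function isPlausibleEmail(address) {
  if (typeof address !== 'string') return false;
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]{2,}$/;
  return pattern.test(address.trim());
}
```

This rejects obvious scraping artifacts like "info@" fragments or filenames such as "logo@2x.png" only when they lack a dotted domain; for real deliverability guarantees you would still need DNS or SMTP-level verification.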
by Wolf Bishop
A reliable, no-frills web scraper that extracts content directly from websites using their sitemaps. Perfect for content audits, migrations, and research when you need straightforward HTML extraction without external dependencies.

## How It Works
This streamlined workflow takes a practical approach to web scraping by leveraging XML sitemaps and direct HTTP requests. Here's how it delivers consistent results:

1. **Direct Sitemap Processing**: The workflow starts by fetching your target website's XML sitemap and parsing it to extract all available page URLs. This eliminates guesswork and ensures comprehensive coverage of the site's content structure.
2. **Robust HTTP Scraping**: Each page is scraped using direct HTTP requests with realistic browser headers that mimic legitimate web traffic. The scraper includes comprehensive error handling and timeout protection to handle various website configurations gracefully.
3. **Intelligent Content Extraction**: The workflow uses JavaScript parsing to extract meaningful content from raw HTML. It identifies page titles through multiple methods (title tags, Open Graph metadata, H1 headers) and converts the HTML structure into readable text.
4. **Framework Detection**: Built-in detection identifies whether sites use WordPress, Divi themes, or heavy JavaScript frameworks. This helps explain content extraction quality and provides insights into the site's technical architecture.
5. **Rich Metadata Collection**: Each scraped page includes metadata like word count, HTML size, response codes, and technical indicators. This data is formatted into Markdown files with YAML frontmatter for easy analysis and organization.
6. **Respectful Rate Limiting**: The workflow includes a 3-second delay between page requests to respect server resources and avoid overwhelming target websites. Processing is sequential and controlled to maintain ethical scraping practices.
7. **Detailed Success Reporting**: Every scraped page generates a report showing extraction success, potential issues (like JavaScript dependencies), and technical details about the site's structure and framework.

## Setup Steps

1. **Configure Google Drive Integration**
   - Connect your Google Drive account in the "Save to Google Drive" node.
   - Replace YOUR_GOOGLE_DRIVE_CREDENTIAL_ID with your actual Google Drive credential ID.
   - Create a dedicated folder for your scraped content in Google Drive.
   - Copy the folder ID from the Google Drive URL (the long string after /folders/).
   - Replace YOUR_GOOGLE_DRIVE_FOLDER_ID_HERE with your actual folder ID in both the folderId field and cachedResultUrl.
   - Update YOUR_FOLDER_NAME_HERE with your folder's actual name.
2. **Set Your Target Website**
   - In the "Set Sitemap URL" node, replace https://yourwebsitehere.com/page-sitemap.xml with your target website's sitemap URL.
   - Common sitemap locations include /sitemap.xml, /page-sitemap.xml, or /sitemap_index.xml.
   - Tip: Not sure where your sitemap is? Use a free online tool like https://seomator.com/sitemap-finder
   - Verify the sitemap URL loads correctly in your browser before running the workflow.
3. **Update Workflow IDs (Automatic)**
   - When you import this workflow, n8n will automatically generate new IDs for YOUR_WORKFLOW_ID_HERE, YOUR_VERSION_ID_HERE, YOUR_INSTANCE_ID_HERE, and YOUR_WEBHOOK_ID_HERE.
   - No manual changes are needed for these placeholders.
4. **Adjust Processing Limits (Optional)**
   - The "Limit URLs (Optional)" node is disabled by default for full-site scraping.
   - Enable this node and set a smaller number (like 5-10) for initial testing.
   - For large websites, consider running in batches to manage processing time and storage.
5. **Customize Rate Limiting (Optional)**
   - The "Wait Between Pages" node is set to 3 seconds by default.
   - Increase the delay for more respectful scraping of busy sites.
   - Decrease it only if you have permission and the target site can handle faster requests.
6. **Test Your Configuration**
   - Enable the "Limit URLs (Optional)" node and set it to 3-5 pages for testing.
   - Click "Test workflow" to verify the setup works correctly.
   - Check your Google Drive folder to confirm files are being created with proper content.
   - Review the generated Markdown files to assess content extraction quality.
7. **Run Full Extraction**
   - Disable the "Limit URLs (Optional)" node for complete site scraping.
   - Execute the workflow and monitor the execution log for errors.
   - Large websites may take considerable time to process completely (plan for several hours for sites with hundreds of pages).
8. **Review Results**
   - Each generated file includes technical metadata to help you assess extraction quality.
   - Look for indicators like "Limited Content" warnings on JavaScript-heavy pages.
   - Files include word counts and framework detection to help you understand the site's structure.

**Framework Compatibility**: This scraper is designed to work well with WordPress sites, Divi themes, and many JavaScript-heavy frameworks. The content extraction handles dynamic content effectively and provides detailed feedback about framework detection. While some single-page applications (SPAs) that render entirely through JavaScript may yield limited content, most modern websites, including those built with popular CMS platforms, will work well with this scraper.

**Important Notes**: Always ensure you have permission to scrape your target website and respect its robots.txt guidelines. The workflow includes respectful delays and error handling, but monitor your usage to maintain ethical scraping practices.
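The multi-method title extraction described in "Intelligent Content Extraction" can be sketched as a fallback chain: the `<title>` tag first, then Open Graph metadata, then the first `<h1>`. This is a simplified regex version of that idea, not the workflow's exact parsing code:

```javascript
// Try three sources for a page title, in order of reliability.
function extractTitle(html) {
  const title = html.match(/<title[^>]*>([^<]*)<\/title>/i);
  if (title && title[1].trim()) return title[1].trim();

  const og = html.match(/<meta[^>]+property=["']og:title["'][^>]+content=["']([^"']+)["']/i);
  if (og) return og[1].trim();

  const h1 = html.match(/<h1[^>]*>([^<]*)<\/h1>/i);
  if (h1 && h1[1].trim()) return h1[1].trim();

  return 'Untitled';
}
```

Pages that fall all the way through to "Untitled" are good candidates for the "Limited Content" warnings mentioned in the results review step.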
by Marth
## How It Works (Workflow Stages)

This system turns manual, repetitive posting into a smooth, automated content distribution pipeline:

1. **Content Submission & Trigger**: You add a new row to your designated Google Sheet with all the content details (Title, URL, Short_Description, Image_URL, Hashtags, and boolean flags for which platforms to post to). The Google Sheets Trigger node detects the new entry and starts the workflow.
2. **Content Preparation**: The Set node takes the raw data from your Google Sheet and formats it into a cohesive text string (social_media_text_core) suitable for posting across different social media platforms.
3. **Conditional Social Media Posting**: A series of If nodes (Check Facebook Post, Check Twitter Post, Check LinkedIn Post) sequentially check your preferences (the Post_to_Facebook, Post_to_Twitter, and Post_to_LinkedIn columns in your sheet). If a platform is marked TRUE, the corresponding social media node (Facebook, Twitter, LinkedIn) publishes your content; if FALSE, that platform is skipped and the workflow moves on to the next check.
4. **Status Update & Notification**: After attempting to post to all selected platforms, the Google Sheets (Update) node sets the Publication_Status column of the original row to "Published". This prevents re-posting and provides a clear record. Finally, the Slack (Notification) node sends an alert to your chosen Slack channel confirming that the content has been distributed.

## Setup Steps (Build It Yourself!)

Follow these steps to build and implement this workflow in your n8n instance:

1. **Prepare Your Google Sheet**
   - Create a new Google Sheet (e.g., named "Social Media Posts").
   - Set up the following exact column headers in the first row: Title, URL, Short_Description, Image_URL, Hashtags, Post_to_Facebook, Post_to_Twitter, Post_to_LinkedIn, Publication_Status.
   - Fill in a test row with sample data, using TRUE/FALSE values for the posting flags.
2. **Gather Your API Keys & Credentials**
   - Google Sheets: an OAuth2 credential in n8n with read/write access to your sheet.
   - Facebook: an OAuth2 credential with permission to post to your selected Page.
   - Twitter: a Twitter API credential (API Key, API Secret, Access Token, Access Token Secret) from your Twitter Developer App.
   - LinkedIn: an OAuth2 credential with permission to share updates to your profile or organization page.
   - Slack: a Slack API token (Bot User OAuth Token) for sending messages to your channel.
3. **Build the n8n Workflow Manually (10 Nodes)**
   Start a new workflow in n8n, then add and connect the following nodes:
   - **Google Sheets Trigger** ("Google Sheets Trigger"): Authentication: your Google Sheets credential. Spreadsheet ID: copy the ID from your Google Sheet's URL. Sheet Name: your sheet name (e.g., "Sheet1" or "Content"). Watch For: Rows. Events: Added. Connect its output to Set Content Parameters.
   - **Set** ("Set Content Parameters"): Add a value of type String named social_media_text_core with the value ={{ $json.Title }} - {{ $json.Short_Description }}\nRead more: {{ $json.URL }}\n{{ $json.Hashtags }}. Connect its output to Check Facebook Post.
   - **If** ("Check Facebook Post"): Value 1: ={{ $json.Post_to_Facebook }}. Operation: is true. True output to Post Facebook Message; false output to Check Twitter Post.
   - **Facebook** ("Post Facebook Message"): Authentication: your Facebook credential.
     Page ID: [YOUR_FACEBOOK_PAGE_ID]. Message: ={{ $json.social_media_text_core }}. Link: ={{ $json.URL }}. Picture: ={{ $json.Image_URL }}. Options: Published (checked). Connect its output to Check Twitter Post.
   - **If** ("Check Twitter Post"): Value 1: ={{ $json.Post_to_Twitter }}. Operation: is true. True output to Create Tweet; false output to Check LinkedIn Post.
   - **Twitter** ("Create Tweet"): Authentication: your Twitter credential. Tweet: ={{ $json.social_media_text_core }}. Image URL: ={{ $json.Image_URL }}. Connect its output to Check LinkedIn Post.
   - **If** ("Check LinkedIn Post"): Value 1: ={{ $json.Post_to_LinkedIn }}. Operation: is true. True output to Share LinkedIn Update; false output to Update Publication Status.
   - **LinkedIn** ("Share LinkedIn Update"): Authentication: your LinkedIn credential. Resource: Share Update. Type: Organization or Personal (choose as appropriate). Organization ID: [YOUR_LINKEDIN_ORG_ID] (if Organization is selected). Content: ={{ $json.social_media_text_core }}. Content URL: ={{ $json.URL }}. Image URL: ={{ $json.Image_URL }}. Connect its output to Update Publication Status.
   - **Google Sheets** ("Update Publication Status"): Authentication: your Google Sheets credential. Spreadsheet ID: [YOUR_GOOGLE_SHEET_CONTENT_ID]. Sheet Name: your sheet name (e.g., "Sheet1" or "Content"). Operation: Update Row. Key Column: URL. Key Value: ={{ $json.URL }}. Values: set the Publication_Status column to Published. This node receives connections from both Share LinkedIn Update and the false branch of Check LinkedIn Post; connect its output to Send Slack Notification.
   - **Slack** ("Send Slack Notification"): Authentication: your Slack credential. Chat ID: [YOUR_SLACK_CHANNEL_ID]. Text: New content "{{ $json.Title }}" successfully published to social media! Check: {{ $json.URL }}
4. **Final Steps & Activation**
   - Test the workflow: before activating, manually add a new row to your Google Sheet or use n8n's "Execute Workflow" button (if available for triggers). Observe the flow through each node to confirm it behaves as expected and posts to your social media accounts.
   - Activate the workflow: once you are confident it works correctly, switch the workflow to "Active" in the top right corner of the n8n canvas.
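For reference, this is what the Set node's social_media_text_core expression produces for a single sheet row, written out as plain JavaScript (the function itself is illustrative; the string format mirrors the expression in the setup steps):

```javascript
// Compose the cross-platform post text from one Google Sheet row,
// matching: "{Title} - {Short_Description}\nRead more: {URL}\n{Hashtags}".
function buildPostText(row) {
  return `${row.Title} - ${row.Short_Description}\nRead more: ${row.URL}\n${row.Hashtags}`;
}
```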
by WeblineIndia
# Generate Weekly Energy Consumption Reports with API, Email and Google Drive

This workflow automates retrieving energy consumption data, formatting it into a CSV report, and distributing it every week via email and Google Drive.

## Quick Implementation Steps
1. Import the workflow into your n8n instance.
2. Configure your API, email details, and Google Drive folder.
3. (Optional) Adjust the CRON schedule if you need a different time or frequency.
4. Activate the workflow; automated weekly reports begin immediately.

## Who It's For
Energy providers, sustainability departments, facility managers, and renewable energy operators.

## Requirements
- n8n instance
- Energy Consumption API access
- Google Drive account
- Email SMTP access

## How It Works
The workflow triggers every Monday at 8 AM, fetches consumption data, emails a CSV report, and saves a copy to Google Drive.

## Workflow Steps

### 1. Schedule Weekly (Mon 8:00 AM)
Type: Cron node. Runs every Monday at 8:00 AM and triggers the workflow automatically.

### 2. Fetch Energy Data
Type: HTTP Request node. Makes a GET request to https://api.energidataservice.dk/dataset/ConsumptionDE35Hour (sample API), which returns JSON data with hourly electricity consumption in Denmark.

Sample response structure:

```
{
  "records": [
    {
      "HourDK": "2025-08-25T01:00:00",
      "MunicipalityNo": _,
      "MunicipalityName": "Copenhagen",
      "ConsumptionkWh": 12345.67
    }
  ]
}
```

### 3. Normalize Records
Type: Code node. Extracts the records array from the API response and maps each entry into a separate JSON item for easier handling downstream.

Code used:

```javascript
// Split the API response's records array into individual n8n items
const itemlist = $input.first().json.records;
return itemlist.map(r => ({ json: r }));
```

### 4. Convert to File
Type: Convert to File node. Converts the array of JSON records into a CSV file, stored in a binary field called data.

### 5. Send Email Weekly Report
Type: Email Send node. Sends the generated CSV file as an attachment. Parameters:
- fromEmail: sender email address (configure in the node)
- toEmail: recipient email address
- subject: "Weekly Energy Consumption Report"
- attachments: =data (binary data from the previous node)

### 6. Report File Upload to Google Drive
Type: Google Drive node. Uploads the CSV file to your Google Drive root folder. Filename pattern: energy_report_{{ $now.format('yyyy_MM_dd_HH_ii_ss') }}. Requires valid Google Drive OAuth2 credentials.

## How To Customize
Change the report frequency, email template, or data format (CSV/Excel), or add extras.

## Add-ons
- Integration with analytics tools (Power BI, Tableau)
- Additional reporting formats (Excel, PDF)
- Slack notifications

## Use Case Examples
- Automated weekly/monthly reporting for compliance
- Historical consumption tracking
- Operational analytics and forecasting

## Troubleshooting Guide

| Issue | Cause | Solution |
|-------|-------|----------|
| Data not fetched | API endpoint incorrect | Verify the URL |
| Email delivery issues | SMTP configuration incorrect | Verify SMTP settings |
| Drive save fails | Permissions/Drive ID incorrect | Check Drive permissions |

## Need Assistance?
Contact WeblineIndia for additional customization and support; we're happy to help.
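A slightly hardened variant of the Normalize Records code is worth considering: it guards against a missing or empty records array so the workflow fails loudly instead of emailing an empty CSV. This is an optional sketch, not the template's code:

```javascript
// Defensive version of the Code node: validate before mapping.
function normalizeRecords(response) {
  const records = response && Array.isArray(response.records) ? response.records : [];
  if (records.length === 0) {
    throw new Error('Energy API returned no records; skipping report generation');
  }
  return records.map(r => ({ json: r }));
}
```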
by Natnail Getachew
## How it works
This workflow automates the curation and publishing of AI news to a Telegram channel. It follows a four-stage process:

1. **Aggregation**: Monitors RSS feeds from VentureBeat and The AI Blog on a daily schedule.
2. **Filtering**: Merges the feeds and uses a code snippet to select only the single most recent, high-quality article.
3. **Summarization**: Sends the article data to Google Gemini, which generates a professional, emoji-rich summary formatted for social media.
4. **Human-in-the-loop Approval**: Instead of auto-posting, it sends the draft to a private Telegram chat. The post is only published to the public channel once a human clicks "Approve."

## Set up steps
- Time estimate: 10-15 minutes.
- Credentials: Connect your Google Gemini (PaLM) API key and your Telegram Bot API token.
- Telegram IDs: Make sure you have the Chat ID for both your private approval chat and your public destination channel.
- Schedule: The trigger is currently set to 3:56 PM daily; adjust this in the "Schedule Trigger" node to fit your preferred posting time.
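The filtering stage's "single most recent article" selection can be sketched as a reduce over the merged feed items. The pubDate and title field names follow typical RSS-node output and are assumptions, not the workflow's exact code:

```javascript
// From merged RSS items, keep only the newest article by publication date.
function pickMostRecent(articles) {
  return articles.reduce((latest, article) =>
    new Date(article.pubDate) > new Date(latest.pubDate) ? article : latest
  );
}
```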
by DataForSEO
Once a week, this workflow automatically scans Google for newly ranked keywords for your domains using the DataForSEO API. It pulls the latest data for every target you track, stores a fresh snapshot in Google Sheets, and compares it to the previous run. Any newly ranked keywords are automatically added to a dedicated Google Sheet, creating an easy-to-review log. Lastly, the workflow sends a short summary to Slack, so your team can quickly see what's changed without manual checks.

## Who's it for
SEO specialists and marketers who want to automatically track newly ranked keywords for their target domains and get quick weekly updates without doing manual Google checks.

## What it does
This workflow automatically fetches new keywords your domains started ranking for on Google using the DataForSEO Labs API, saves them into Google Sheets, and sends you a Slack summary so you can quickly see what's changed.

## How it works
1. Triggers on your chosen schedule (default: once a week).
2. Reads your keywords and target domains from Google Sheets.
3. Extracts fresh ranking data from Google via the DataForSEO API.
4. Compares the results with the previous run.
5. Adds newly ranked keywords to a dedicated Google Sheet.
6. Sends a weekly summary message to Slack.

## Requirements
- DataForSEO account
- A Google Sheets spreadsheet with your keywords that matches the required column structure (as in the example)
- A Google Sheets spreadsheet with your target domains that matches the required column structure (as in the example)
- Slack account

## Customization
You can easily customize the workflow by changing the schedule, exporting results to dashboards and other tools (such as Looker Studio or BigQuery) instead of Google Sheets, or modifying the Slack message text.
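The "compares the results with the previous run" step is essentially a set difference between two keyword snapshots. A minimal sketch, assuming plain keyword strings as the comparison key (the sheet's actual column structure may add more fields):

```javascript
// Return keywords present in the fresh run but absent from the previous one.
function newlyRanked(previousKeywords, currentKeywords) {
  const seen = new Set(previousKeywords);
  return currentKeywords.filter(kw => !seen.has(kw));
}
```

Using a Set makes the lookup O(1) per keyword, which matters once you track thousands of rankings per domain.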
by Tony Adijah
## Who is this for
This workflow is perfect for job seekers, recruiters, freelancers, and anyone actively monitoring job boards who wants to automate their search: getting instant Telegram alerts for new matching jobs without manually refreshing pages or missing time-sensitive postings.

## What this workflow does
It automatically scrapes job posts from RSS feeds or job board APIs on a schedule, filters them by your target roles, locations, and keywords, deduplicates results against previously seen jobs stored in Google Sheets, and sends formatted Telegram alerts with job details and direct apply links for every new matching position.

## How it works
1. **Schedule Trigger** runs every 6 hours (configurable) to check for new job posts.
2. **HTTP Request** fetches job data from an RSS feed or job board API (default: RemoteOK).
3. **Parse & Extract** normalizes raw data into clean job objects with title, company, location, URL, salary, tags, and posting date.
4. **Keyword Filter** matches jobs against your configurable target roles, locations, and exclude terms; only relevant jobs pass through.
5. **Deduplication** checks each job against a Google Sheet of previously seen jobs. Only truly new jobs continue.
6. **Log to Sheet** saves every new job to Google Sheets for tracking and history.
7. **Telegram Alert** sends a formatted message with job details, tags, salary, and a direct apply link.

## Setup steps
1. **Schedule**: Adjust the interval in the Schedule Trigger node (default: every 6 hours). Set it to every 1 hour for active searches.
2. **Job Source URL**: Replace the URL in the HTTP Request node with your target job board RSS feed or API endpoint. Examples: RemoteOK (https://remoteok.com/api), Arbeitnow (https://www.arbeitnow.com/api/job-board-api), or any RSS feed from LinkedIn, Indeed, etc.
3. **Keywords**: Edit the arrays in the Keyword Filter node: targetRoles, targetLocations, and excludeTerms.
4. **Google Sheets**: Connect your Google Sheets OAuth credential.
   Create a spreadsheet with columns: Title, Company name, Location, Url, Description, Posted date, Salary, Matched Role, Scraped date. Set the spreadsheet ID in both Sheet nodes.
5. **Telegram**: Create a bot via @BotFather, get your Chat ID, connect the Telegram credential, and set your Chat ID in the alert node.
6. **Test**: Run the workflow manually once to verify that jobs flow through correctly.

## Requirements
- Google Sheets account with OAuth credentials
- Telegram bot (created via @BotFather) with bot token and chat ID
- n8n instance (cloud or self-hosted)
- A job board with an RSS feed or public API

## How to customize
- Add multiple HTTP Request nodes for different job boards and merge the results with a Merge node.
- Change the cron to every 1 hour for high-priority searches.
- Add salary range filtering in the keyword filter code.
- Replace Telegram with Slack, Discord, WhatsApp, or email alerts.
- Add an AI node (Ollama, OpenAI) to score and rank jobs by relevance before alerting.
- Adjust the Parse & Extract node field mappings when switching to a different job board API.
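The Keyword Filter step can be sketched as follows: a job passes when its title matches a target role, its location matches a target location (or no locations are configured), and no exclude term appears anywhere. The array names mirror those mentioned in the setup steps; the matching logic itself is illustrative:

```javascript
// Decide whether one parsed job object should pass the keyword filter.
function matchesFilters(job, targetRoles, targetLocations, excludeTerms) {
  const haystack = `${job.title} ${job.location} ${job.description}`.toLowerCase();
  const roleHit = targetRoles.some(r => job.title.toLowerCase().includes(r.toLowerCase()));
  const locationHit = targetLocations.length === 0 ||
    targetLocations.some(l => job.location.toLowerCase().includes(l.toLowerCase()));
  const excluded = excludeTerms.some(t => haystack.includes(t.toLowerCase()));
  return roleHit && locationHit && !excluded;
}
```

Substring matching is deliberately loose ("engineer" matches "Senior Engineer"); tighten it with word-boundary regexes if you see false positives.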
by JESUS PACAHUALA ARROYO
This workflow automates the process of identifying local businesses with a weak digital presence so you can offer them specialized marketing services. By combining real-time data from Google Maps with the analytical power of Gemini AI, it transforms raw search results into a structured sales pipeline.

## How it works
1. **Data Extraction**: The process starts with a form where you enter search keywords (e.g., "restaurants in Lima"). The workflow then queries SerpApi to fetch the top local results from Google Maps.
2. **Filtering & Prioritization**: It filters results by region and sorts them by rating, specifically targeting the five businesses with the lowest ratings or missing information, as these represent the highest conversion opportunities.
3. **AI Analysis**: The Gemini AI agent acts as a senior consultant. It analyzes each lead's weaknesses, assigns a priority score, and generates a personalized sales pitch and email copy.
4. **Record Keeping**: Finally, all enriched data, including the AI-generated strategy, is formatted and saved to a Google Sheet for immediate sales action.

## Setup steps
- **SerpApi**: Register at serpapi.com to get your API key and add it to the HTTP Request node credentials.
- **Google Gemini**: Set up your Google AI Studio credentials for the AI Agent node.
- **Google Sheets**: Create a spreadsheet with columns for Company Name, Rating, Address, AI Score, and Sales Strategy, and link it in the final node.
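The prioritization step ("top 5 businesses with the lowest ratings or missing information") can be sketched as an ascending sort where a missing rating counts as the strongest weak-presence signal. Field names here are assumptions about the SerpApi result shape, not the workflow's exact code:

```javascript
// Sort leads ascending by rating (missing rating sorts first) and keep 5.
function pickWeakestLeads(results, limit = 5) {
  return [...results]
    .sort((a, b) => (a.rating ?? 0) - (b.rating ?? 0))
    .slice(0, limit);
}
```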
by Naveen Choudhary
This workflow prevents your AI support bot from responding to every single message by intelligently aggregating rapid-fire messages from users before generating a comprehensive response.

## Who's it for
Support teams and bot developers who want to provide better AI responses by letting users fully explain their issue before the bot responds, reducing unnecessary back-and-forth and improving response quality.

## How it works
When a user sends a message to your Telegram bot, the workflow:

1. Checks whether the user has an active session within the last 60 seconds
2. If no session exists, creates a new one and starts a 60-second timer
3. If a session is active, appends the new message to the existing conversation
4. 60 seconds after the first message, fetches all aggregated messages
5. Sends the complete conversation to OpenAI for a comprehensive response
6. Delivers the AI-generated answer back to the user via Telegram
7. Clears the session for the next interaction

## Requirements
- Telegram bot token (set up via BotFather)
- OpenAI API key
- PostgreSQL database with a user_sessions table containing the columns user_id, session_id, messages (jsonb array), first_message_at, wait_expires_at, status, and resume_url
- n8n instance (self-hosted or cloud)

## How to set up
1. Configure your Telegram credentials in the Telegram Trigger node
2. Add your OpenAI API key to the OpenAI Chat Model node
3. Set up PostgreSQL credentials and create the required database table
4. Adjust the wait time (default 60 seconds) in the Wait node if needed
5. Activate the workflow and test with your Telegram bot

## How to customize
- Modify the aggregation window by changing the wait duration
- Customize the AI prompt in the AI Agent node for different response styles
- Add additional logic to handle specific keywords or commands
- Integrate other AI models or add context from external sources
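The session-window behavior described above can be sketched in plain JavaScript: the first message in a window opens a session, later messages within the window are appended, and an expired window starts fresh. This is an in-memory stand-in for the PostgreSQL user_sessions table, for illustration only:

```javascript
// Minimal message aggregator: buffer per-user messages for windowMs.
function makeAggregator(windowMs = 60_000) {
  const sessions = new Map(); // userId -> { messages, expiresAt }

  return function handleMessage(userId, text, now = Date.now()) {
    const session = sessions.get(userId);
    if (session && now < session.expiresAt) {
      session.messages.push(text); // active session: just buffer
      return { action: 'appended', buffered: session.messages.length };
    }
    // No active session: open one; the real workflow starts the Wait
    // node timer here and flushes everything when it fires.
    sessions.set(userId, { messages: [text], expiresAt: now + windowMs });
    return { action: 'started', buffered: 1 };
  };
}
```

Keying the window on the *first* message (rather than resetting it on every message) bounds the user's wait at one window length, which matches the workflow's "after 60 seconds of the first message" behavior.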
by Vadim Mubi
Meet your new automated sales manager. This workflow acts as a relentless but helpful AI sales coach that lives inside your n8n instance. It wakes up every morning, scans your Notion CRM for deals that have been neglected, and uses OpenAI to send personalized, context-aware "nudges" directly to the sales agent responsible via Telegram.

## Why use this?
- **Stop generic spam**: Unlike dumb bots that blast the whole channel, this workflow maps Notion users to Telegram handles to tag the exact person responsible (e.g., "@alex, wake up!").
- **Context-aware AI**: The AI reads the deal details. It gets "greedy" for high-value deals and "sarcastic" for long silences.
- **Zero management**: It runs on autopilot, ensuring no lead slips through the cracks.

## How it works
1. **Data Sync**: Fetches your "Agents" database (Notion user to Telegram ID) and "Active Deals".
2. **Logic Core**: Calculates exactly how many days a lead has been untouched.
3. **Smart Routing**: Matches the assigned manager in Notion to their Telegram ID.
4. **AI Generation**: Generates a punchy message based on the deal's value and stage.
5. **Delivery**: Sends the notification to Telegram with a direct link to the Notion deal.

## Setup Steps
1. **Get the Notion system**: This workflow requires a specific database structure to map agents and deals. Click here to duplicate the Notion template.
2. **Connect databases**: In the "Get Agents" node, select your duplicated Agents Configuration database. In the "Get Active Leads" node, select your duplicated Active Deals database.
3. **Configure settings**: Open the CONFIGURATION node to set:
   - DAYS_INACTIVE_LIMIT: How many days of silence before alerting? (default: 7)
   - COACH_PERSONA: Customize the AI's personality (e.g., "Tough Boss" or "Friendly Helper")
   - TELEGRAM_CHAT_ID: A fallback chat for unassigned leads
4. **Connect accounts**: Add your credentials for Notion, OpenAI, and Telegram.
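The "Logic Core" step boils down to a date difference checked against DAYS_INACTIVE_LIMIT. A minimal sketch, with lastTouched as an illustrative property name rather than the Notion template's exact field:

```javascript
// How many full days since the deal was last touched, and whether that
// crosses the configured inactivity limit.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function daysInactive(lastTouchedIso, now = new Date()) {
  return Math.floor((now - new Date(lastTouchedIso)) / MS_PER_DAY);
}

function needsNudge(deal, limit = 7, now = new Date()) {
  return daysInactive(deal.lastTouched, now) >= limit;
}
```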