by vinci-king-01
Social Media Sentiment Analysis Dashboard with AI and Real-time Monitoring

**🎯 Target Audience**
- Social media managers and community managers
- Marketing teams monitoring brand reputation
- PR professionals tracking public sentiment
- Customer service teams identifying trending issues
- Business analysts measuring social media ROI
- Brand managers protecting brand reputation
- Product managers gathering user feedback

**🚀 Problem Statement**
Manual social media monitoring is overwhelming and often misses critical sentiment shifts or trending topics. This template solves the challenge of automatically collecting, analyzing, and visualizing social media sentiment data across multiple platforms to provide actionable insights for brand management and customer engagement.

**🔧 How it Works**
This workflow automatically monitors social media platforms using AI-powered sentiment analysis, processes mentions and conversations, and provides real-time insights through a comprehensive dashboard.

Key components:
- **Scheduled Trigger** – Runs the workflow at specified intervals to maintain real-time monitoring
- **AI-Powered Sentiment Analysis** – Uses advanced NLP to analyze sentiment, emotions, and topics
- **Multi-Platform Integration** – Monitors Twitter, Reddit, and other social platforms
- **Real-time Alerting** – Sends notifications for critical sentiment changes or viral content
- **Dashboard Integration** – Stores all data in Google Sheets for comprehensive analysis and reporting

**📊 Google Sheets Column Specifications**
The template creates the following columns in your Google Sheet:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| timestamp | DateTime | When the mention was recorded | "2024-01-15T10:30:00Z" |
| platform | String | Social media platform | "Twitter" |
| username | String | User who posted the content | "@john_doe" |
| content | String | Full text of the post/comment | "Love the new product features!" |
| sentiment_score | Number | Sentiment score (-1 to 1) | 0.85 |
| sentiment_label | String | Sentiment classification | "Positive" |
| emotion | String | Primary emotion detected | "Joy" |
| topics | Array | Key topics identified | ["product", "features"] |
| engagement | Number | Likes, shares, comments | 1250 |
| reach_estimate | Number | Estimated reach | 50000 |
| influence_score | Number | User influence metric | 0.75 |
| alert_priority | String | Alert priority level | "High" |

**🛠️ Setup Instructions**
Estimated setup time: 20-25 minutes

Prerequisites:
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access
- Slack workspace for notifications (optional)
- Social media API access (Twitter, Reddit, etc.)

Step-by-step configuration:

1. **Install Community Nodes** – install the required community nodes:
   ```bash
   npm install n8n-nodes-scrapegraphai
   npm install n8n-nodes-slack
   ```
2. **Configure ScrapeGraphAI Credentials**
   - Navigate to Credentials in your n8n instance
   - Add new ScrapeGraphAI API credentials
   - Enter your API key from the ScrapeGraphAI dashboard
   - Test the connection to ensure it's working
3. **Set up Google Sheets Connection**
   - Add Google Sheets OAuth2 credentials
   - Grant the necessary permissions for spreadsheet access
   - Create a new spreadsheet for sentiment analysis data
   - Configure the sheet name (default: "Sentiment Analysis")
4. **Configure Social Media Monitoring**
   - Update the websiteUrl parameters in the ScrapeGraphAI nodes
   - Add URLs for the social media platforms you want to monitor
   - Customize the user prompt to extract specific sentiment data
   - Set up keywords, hashtags, and brand mentions to track
5. **Set up Notification Channels**
   - Configure Slack webhook or API credentials
   - Set up email service credentials for alerts
   - Define sentiment thresholds for the different alert levels
   - Test notification delivery
6. **Configure Schedule Trigger**
   - Set the monitoring frequency (every 15 minutes, hourly, etc.)
   - Choose appropriate time zones for your business hours
   - Consider social media platform rate limits
7. **Test and Validate**
   - Run the workflow manually to verify all connections
   - Check Google Sheets for proper data formatting
   - Test sentiment analysis with sample content

**🔄 Workflow Customization Options**
- Modify monitoring targets: add or remove social media platforms; change keywords, hashtags, or brand mentions; adjust monitoring frequency based on platform activity.
- Extend sentiment analysis: add more sophisticated emotion detection; implement topic clustering and trend analysis; include influencer identification and scoring.
- Customize the alert system: set different thresholds for different sentiment levels; create tiered alert systems (info, warning, critical); add sentiment trend analysis and predictions.
- Output customization: add data visualization and reporting features; implement sentiment trend charts and graphs; create executive dashboards with key metrics; add competitor sentiment comparison.

**📈 Use Cases**
- **Brand Reputation Management**: Monitor and respond to brand mentions
- **Crisis Management**: Detect and respond to negative sentiment quickly
- **Customer Feedback Analysis**: Understand customer satisfaction and pain points
- **Product Launch Monitoring**: Track sentiment around new product releases
- **Competitor Analysis**: Monitor competitor sentiment and engagement
- **Influencer Identification**: Find and engage with influential users

**🚨 Important Notes**
- Respect social media platforms' terms of service and rate limits
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update your monitoring keywords and parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider privacy implications and data protection regulations

**🔧 Troubleshooting**
Common issues:
- ScrapeGraphAI connection errors: verify API key and account status
- Google Sheets permission errors: check OAuth2 scope and permissions
- Sentiment analysis errors: review the Code node's JavaScript logic (a sketch follows at the end of this section)
- Rate limiting: adjust monitoring frequency and implement delays
- Alert delivery failures: check notification service credentials

Support resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
- Social media platform API documentation
- Sentiment analysis best practices and guidelines
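For the troubleshooting note above about the Code node's JavaScript logic, here is a minimal sketch of what that classification step could look like, assuming the sentiment_score and engagement fields from the column table; the thresholds and exact node wiring are illustrative assumptions, not the shipped template code.

```javascript
// Hypothetical Code node: derive sentiment_label and alert_priority from the AI score.
// Thresholds are illustrative assumptions; tune them to your own data.
return items.map(item => {
  const score = item.json.sentiment_score ?? 0;

  // Map the -1..1 score onto the sentiment_label column
  let label = 'Neutral';
  if (score >= 0.25) label = 'Positive';
  else if (score <= -0.25) label = 'Negative';

  // Escalate strongly negative, high-engagement mentions
  const engagement = item.json.engagement ?? 0;
  let priority = 'Low';
  if (score <= -0.5 && engagement > 1000) priority = 'High';
  else if (score <= -0.25) priority = 'Medium';

  item.json.sentiment_label = label;
  item.json.alert_priority = priority;
  return item;
});
```

Adjust the cutoffs to match how your AI model actually distributes scores before relying on the alert tiers.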
by ConnectSafely
**Who's it for**
This workflow is built for sales professionals, recruiters, founders, and growth marketers who need to build targeted prospect lists from LinkedIn without risking their accounts. Perfect for anyone who wants to find decision-makers, build lead lists, or research target audiences at scale. If you're running outbound campaigns, building ABM lists, sourcing candidates, or doing competitive research, this automation handles LinkedIn searches and exports results directly to your Google Sheet—no browser cookies, no session hijacking, no ban risk.

**How it works**
The workflow automates LinkedIn people searches by leveraging ConnectSafely.ai's compliant API, then exports structured results to Google Sheets or JSON files.

The process flow:
1. Define your search parameters (keywords, location, job title, result limit)
2. Execute the search via the ConnectSafely.ai API
3. Process and normalize the response data
4. Export to Google Sheets for CRM import or further automation
5. Optionally save as a JSON file for data backup or processing

No LinkedIn cookies required. No browser automation. Platform-compliant searches that won't get your account restricted.

Watch the complete step-by-step implementation guide: LinkedIn Search Export Automation Tutorial

**Setup steps**

**Step 1: Get Your ConnectSafely.ai API Credentials**
Obtain an API key:
1. Log into the ConnectSafely.ai Dashboard
2. Navigate to Settings → API Keys
3. Generate a new API key
4. Copy your API key (you'll need it in the next step)

Add a Bearer Auth credential in n8n:
1. Go to Credentials in n8n
2. Click Add Credential → HTTP Bearer Auth
3. Paste your ConnectSafely.ai API key
4. Save the credential

**Step 2: Configure Search Parameters**
Open the Set Search Parameters node and customize your search:

| Parameter | Description | Example |
|-----------|-------------|---------|
| keywords | Search terms for profiles | CEO SaaS, Marketing Director |
| location | Geographic filter | United States, San Francisco Bay Area |
| title | Job title filter | Head of Growth, VP Sales |
| limit | Maximum results to return | 100 (max varies by plan) |

Pro tips:
- Use specific keywords for better targeting
- Combine title + keywords for precision (e.g., keywords: "B2B" + title: "VP Sales")
- Start with smaller limits (25-50) for testing

**Step 3: Configure Google Sheets Integration**

3.1 Connect your Google Sheets account:
1. Go to Credentials → Add Credential → Google Sheets OAuth2
2. Follow the OAuth flow to connect your Google account
3. Grant access to Google Sheets

3.2 Prepare your Google Sheet. Create a new Google Sheet with the following columns (the workflow will auto-populate these):

| Column Name | Description |
|-------------|-------------|
| profileUrl | LinkedIn profile URL |
| fullName | Contact's full name |
| firstName | First name |
| lastName | Last name |
| headline | LinkedIn headline/tagline |
| currentPosition | Current job title |
| company | Company name (extracted from headline) |
| location | Geographic location |
| connectionDegree | 1st, 2nd, or 3rd degree connection |
| isPremium | LinkedIn Premium member (true/false) |
| isOpenToWork | Open to work badge (true/false) |
| profilePicture | Profile image URL |
| extractedAt | Timestamp of extraction |

3.3 Configure the export node:
1. Open the Export to Google Sheets node
2. Select your Google Sheets credential
3. Enter your Document ID (from the sheet URL)
4. Select the Sheet Name
5. The column mapping is pre-configured for auto-mapping

**Step 4: Test the Workflow**
1. Click the Manual Trigger node
2. Click Test Workflow
3. Verify that the search executes successfully, results appear in the Format Results output, data exports to your Google Sheet, and the JSON file is generated (optional)

**Customization**

Search parameter combinations:

Sales prospecting: keywords: "B2B SaaS", location: "United States", title: "VP of Sales", limit: 100

Recruiting: keywords: "Python Machine Learning", location: "San Francisco Bay Area", title: "Senior Engineer", limit: 50

Founder networking: keywords: "Seed Series A", location: "New York City", title: "Founder CEO", limit: 100

Extending the workflow:
- **Add to CRM**: Connect the Format Results output to HubSpot, Salesforce, or Pipedrive nodes
- **Enrich data**: Add a loop to fetch full profile details for each result using the /linkedin/profile endpoint
- **Chain with outreach**: Connect to the LinkedIn Connection Request Workflow to automatically send personalized invites to your search results
- **Schedule searches**: Replace the Manual Trigger with a Schedule Trigger to run daily/weekly searches

**Output Data Format**
Each result includes:

```json
{
  "profileUrl": "https://www.linkedin.com/in/johndoe",
  "profileId": "johndoe",
  "profileUrn": "urn:li:member:123456789",
  "fullName": "John Doe",
  "firstName": "John",
  "lastName": "Doe",
  "headline": "VP of Sales at TechCorp | B2B SaaS",
  "currentPosition": "VP of Sales",
  "company": "TechCorp",
  "location": "San Francisco, California",
  "connectionDegree": "2nd",
  "isPremium": true,
  "isOpenToWork": false,
  "profilePicture": "https://media.licdn.com/...",
  "extractedAt": "2024-01-15T10:30:00.000Z"
}
```

**Use Cases**
- **Sales Prospecting**: Build targeted lead lists of decision-makers at companies matching your ICP
- **Recruiting & Talent Sourcing**: Find passive candidates with specific skills and experience levels
- **Market Research**: Analyze competitor employee profiles and organizational structures
- **Event Planning**: Build invite lists for webinars, conferences, or virtual events
- **Partnership Development**: Identify potential partners and integration opportunities
- **Investor Research**: Find founders and executives at companies in specific stages/verticals

**Troubleshooting**
Common issues and solutions:
- Issue: "No results found" error. **Solution:** Broaden your search parameters; try removing one filter at a time.
- Issue: Empty company field in results. **Solution:** Company is extracted from the headline; some profiles may not include a company in their headline format.
- Issue: API authentication errors. **Solution:** Verify your ConnectSafely.ai API key is valid and has proper permissions; check the Bearer Auth credential format.
- Issue: Google Sheets not updating. **Solution:** Confirm OAuth credentials are valid; check that the sheet has write permissions.
- Issue: Fewer results than expected. **Solution:** LinkedIn limits search results; try more specific parameters or upgrade your ConnectSafely.ai plan.
- Issue: Rate limit errors. **Solution:** Add a delay between multiple searches; check your API plan limits.

**Documentation & Resources**
Official documentation:
- **ConnectSafely.ai Docs:** https://connectsafely.ai/docs
- **API Reference:** Available in the ConnectSafely.ai dashboard
- **n8n HTTP Request Node:** https://docs.n8n.io/nodes/n8n-nodes-base.httpRequest

Support channels:
- **Email Support:** support@connectsafely.ai
- **Documentation:** https://connectsafely.ai/docs
- **Custom Workflows:** Contact us for custom automation

**Connect With Us**
Stay updated with the latest automation tips, LinkedIn strategies, and platform updates:
- **LinkedIn:** linkedin.com/company/connectsafelyai
- **YouTube:** youtube.com/@ConnectSafelyAI-v2x
- **Instagram:** instagram.com/connectsafely.ai
- **Facebook:** facebook.com/connectsafelyai
- **X (Twitter):** x.com/AiConnectsafely
- **Bluesky:** connectsafelyai.bsky.social
- **Mastodon:** mastodon.social/@connectsafely

**Need Custom Workflows?**
Looking to build sophisticated LinkedIn automation workflows tailored to your business needs? Contact our team for custom automation development, strategy consulting, and enterprise solutions. We specialize in:
- Multi-channel prospecting workflows
- AI-powered lead scoring and qualification
- CRM integration and data synchronization
- Custom search and enrichment pipelines
- Bulk outreach automation with personalization
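As a reference for the Format Results step, here is a hedged sketch of a Code node that maps raw API items onto the sheet columns and output schema shown above. The raw field names and the headline-based company extraction are assumptions; adapt them to the actual ConnectSafely.ai response you receive.

```javascript
// Hypothetical "Format Results" Code node: normalize raw API items into the
// sheet columns listed above. Field names on the raw item are assumptions.
return items.map(item => {
  const raw = item.json;
  const headline = raw.headline || '';

  return {
    json: {
      profileUrl: raw.profileUrl,
      fullName: raw.fullName,
      firstName: raw.firstName,
      lastName: raw.lastName,
      headline,
      currentPosition: raw.currentPosition,
      // Company is derived from the headline, e.g. "VP of Sales at TechCorp | B2B SaaS"
      company: (headline.match(/ at ([^|]+)/) || [])[1]?.trim() || '',
      location: raw.location,
      connectionDegree: raw.connectionDegree,
      isPremium: Boolean(raw.isPremium),
      isOpenToWork: Boolean(raw.isOpenToWork),
      profilePicture: raw.profilePicture,
      extractedAt: new Date().toISOString(),
    },
  };
});
```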
by Daniel
Transform any website into a structured knowledge repository with this intelligent crawler that extracts hyperlinks from the homepage, intelligently filters images and content pages, and aggregates full Markdown-formatted content—perfect for fueling AI agents or building comprehensive company dossiers without manual effort.

**📋 What This Template Does**
This advanced workflow acts as a lightweight web crawler: it scrapes the homepage to discover all internal links (mimicking a sitemap extraction), deduplicates and validates them, separates image assets from textual pages, then fetches and converts non-image page content to clean Markdown. Results are seamlessly appended to Google Sheets for easy analysis, export, or integration into vector databases.

- Automatically discovers and processes subpage links from the homepage
- Filters out duplicates and non-HTTP links for efficient crawling
- Converts scraped content to Markdown for AI-ready formatting
- Categorizes and stores images, links, and full content in a single sheet row per site

**🔧 Prerequisites**
- Google account with Sheets access for data storage
- n8n instance (cloud or self-hosted)
- Basic understanding of URLs and web links

**🔑 Required Credentials**
Google Sheets OAuth2 API setup:
1. Go to console.cloud.google.com → APIs & Services → Credentials
2. Click "Create Credentials" → select "OAuth client ID" → choose "Web application"
3. Add authorized redirect URIs: https://your-n8n-instance.com/rest/oauth2-credential/callback (replace with your n8n URL)
4. Download the client ID and secret, then add them to n8n as the "Google Sheets OAuth2 API" credential type
5. During setup, grant access to Google Sheets scopes (e.g., spreadsheets) and test the connection by listing a sheet

**⚙️ Configuration Steps**
1. Import the workflow JSON into your n8n instance
2. In the "Set Website" node, update the website_url value to your target site (e.g., https://example.com)
3. Assign your Google Sheets credential to the three "Add ... to Sheet" nodes
4. Update the documentId and sheetName in those nodes to your target spreadsheet ID and sheet name/ID
5. Ensure your sheet has the columns: "Website", "Links", "Scraped Content", "Images"
6. Activate the workflow and trigger it manually to test scraping

**🎯 Use Cases**
- Knowledge base creation: Crawl a company's site to aggregate all content into Sheets, then export to Notion or a vector DB for internal wikis
- AI agent training: Extract structured Markdown from industry sites to fine-tune LLMs on domain-specific data like legal docs or tech blogs
- Competitor intelligence: Build dossiers by crawling rival websites, separating assets and text for SEO audits or market analysis
- Content archiving: Preserve dynamic sites (e.g., news portals) as static knowledge dumps for compliance or historical research

**⚠️ Troubleshooting**
- No links extracted: Verify the homepage contains anchor (`<a>`) tags; test with a simple site like example.com and check the HTTP response in executions
- Sheet update fails: Confirm column names match exactly (case-sensitive) and the credential has edit permissions; try a new blank sheet
- Content truncated: Google Sheets limits cells to ~50k chars—adjust the `.slice(0, 50000)` in "Add Scraped Content to Sheet" or split into multiple rows
- Rate limiting errors: Add a "Wait" node after "Scrape Links" with a 1-2s delay if the site blocks rapid requests
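To illustrate the link-discovery step described above (extract homepage links, deduplicate, and separate image assets from content pages), here is a minimal Code node sketch. It assumes the homepage HTML arrives in a `data` field; the regex approach and field names are illustrative assumptions, not the template's exact implementation.

```javascript
// Sketch only: extract, deduplicate, and categorize links from the homepage HTML.
const html = $input.first().json.data || '';

// Collect href/src targets with a simple regex (a proper HTML parser is more robust)
const matches = [...html.matchAll(/(?:href|src)=["']([^"']+)["']/g)].map(m => m[1]);

// Keep only absolute http(s) URLs and remove duplicates
const unique = [...new Set(matches.filter(u => /^https?:\/\//i.test(u)))];

// Separate image assets from pages that should be scraped as Markdown
const isImage = u => /\.(png|jpe?g|gif|webp|svg)(\?.*)?$/i.test(u);
const images = unique.filter(isImage);
const pages = unique.filter(u => !isImage(u));

return [{ json: { images, pages, linkCount: unique.length } }];
```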
by vinci-king-01
Breaking News Aggregator with SendGrid and PostgreSQL

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple government and regulatory websites, extracts the latest policy or compliance-related news, stores the data in PostgreSQL, and instantly emails daily summaries to your team through SendGrid. It is ideal for compliance professionals and industry analysts who need near real-time awareness of regulatory changes impacting their sector.

**Pre-conditions/Requirements**

Prerequisites:
- n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed
- Operational SendGrid account
- PostgreSQL database accessible from n8n
- Basic knowledge of SQL for table creation

Required credentials:
- **ScrapeGraphAI API Key** – Enables web scraping and parsing
- **SendGrid API Key** – Sends email notifications
- **PostgreSQL Credentials** – Host, port, database, user, and password

Specific setup requirements:

| Resource | Requirement | Example Value |
|----------|-------------|---------------|
| PostgreSQL | Table with columns: id, title, url, source, published_at | news_updates |
| Allowed Hosts | Outbound HTTPS access from n8n to target sites & the SendGrid endpoint | https://*.gov, https://api.sendgrid.com |
| Keywords List | Comma-separated compliance terms to filter results | GDPR, AML, cybersecurity |

**How it works**

Key steps:
1. **Schedule Trigger** – Runs once daily (or at any chosen interval).
2. **ScrapeGraphAI** – Crawls predefined regulatory URLs and returns structured article data.
3. **Code (JS)** – Filters results by keywords and formats them.
4. **SplitInBatches** – Processes articles in manageable chunks to avoid timeouts.
5. **If Node** – Checks whether each article already exists in the database.
6. **PostgreSQL** – Inserts only new articles into the news_updates table.
7. **Set Node** – Generates an email-friendly HTML summary.
8. **SendGrid** – Dispatches the compiled summary to compliance stakeholders.

**Set up steps**

Setup time: 15-20 minutes

1. Install the ScrapeGraphAI node: from n8n, go to "Settings → Community Nodes → Install", search "ScrapeGraphAI", and install.
2. Create the PostgreSQL table:
   ```sql
   CREATE TABLE news_updates (
     id SERIAL PRIMARY KEY,
     title TEXT,
     url TEXT UNIQUE,
     source TEXT,
     published_at TIMESTAMP
   );
   ```
3. Add credentials: navigate to "Credentials", then add ScrapeGraphAI, SendGrid, and PostgreSQL credentials.
4. Import the workflow: copy the JSON workflow and paste it into "Import from Clipboard".
5. Configure environment variables (optional): REG_NEWS_KEYWORDS, SEND_TO_EMAILS, DB_TABLE_NAME.
6. Set the schedule: open the Schedule Trigger node and define your preferred cron expression.
7. Activate the workflow: toggle "Active", then click "Execute Workflow" once to validate all connections.

**Node Descriptions**

Core workflow nodes:
- **Schedule Trigger** – Initiates the workflow at the defined interval.
- **ScrapeGraphAI** – Scrapes and parses news listings into JSON.
- **Code** – Filters articles by keywords and normalizes timestamps.
- **SplitInBatches** – Prevents database overload by batching inserts.
- **If** – Determines whether an article is already stored.
- **PostgreSQL** – Executes parameterized INSERT statements.
- **Set** – Builds the HTML email body.
- **SendGrid** – Sends the daily digest email.

Data flow: Schedule Trigger → ScrapeGraphAI → Code → SplitInBatches → If → PostgreSQL → Set → SendGrid

**Customization Examples**

Change keyword filtering:
```javascript
// Code node snippet
const keywords = ['GDPR', 'AML', 'SOX']; // Add or remove terms
item.filtered = keywords.some(k => item.title.includes(k));
return item;
```

Switch to a weekly digest (runs every Monday at 09:00):
```json
{
  "trigger": {
    "cronExpression": "0 9 * * 1"
  }
}
```

**Data Output Format**

The workflow outputs structured JSON data:
```json
{
  "title": "Data Privacy Act Amendment Passed",
  "url": "https://regulator.gov/news/1234",
  "source": "regulator.gov",
  "published_at": "2024-06-12T14:30:00Z"
}
```

**Troubleshooting**

Common issues:
- ScrapeGraphAI node not found – Install the community node and restart n8n.
- Duplicate key error in PostgreSQL – Ensure the url column is marked UNIQUE to prevent duplicates.
- Emails not sending – Verify the SendGrid API key and check the account's daily limit.

Performance tips:
- Limit initial scrape URLs to fewer than 20 to reduce run time.
- Increase the SplitInBatches size only if your database can handle larger inserts.

Pro tips:
- Use environment variables to manage sensitive credentials securely.
- Add an Error Trigger node to catch and log failures for auditing purposes.
- Combine with Slack or Microsoft Teams nodes to push instant alerts alongside email digests.
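As an illustration of the digest-building step (the template uses a Set node for this), the following Code node sketch shows one way to assemble the HTML email body from the filtered articles; the field names follow the output format above, and the layout is an assumption.

```javascript
// Illustrative sketch: build the HTML digest from filtered articles.
// A Code node is used here as an equivalent to the template's Set node.
const articles = $input.all().map(i => i.json);

const rows = articles
  .map(a => `<li><a href="${a.url}">${a.title}</a> | ${a.source} (${a.published_at})</li>`)
  .join('\n');

const html = `
  <h2>Daily Regulatory News Digest</h2>
  <p>${articles.length} new item(s) matched your keywords.</p>
  <ul>${rows}</ul>`;

return [{ json: { subject: `Regulatory digest: ${new Date().toISOString().slice(0, 10)}`, html } }];
```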
by Avkash Kakdiya
**How it works**
This workflow captures webinar feedback through a webhook and normalizes the submitted data for processing. It stores raw feedback in Google Sheets, uses an AI model to understand sentiment and intent, and generates a personalized response. A professional HTML thank-you email is sent automatically to each attendee. All replies and delivery details are logged back into the spreadsheet for tracking.

**Step-by-step**

**Receive webinar feedback**
- Feedback Webhook – Accepts feedback submissions from a webinar form in real time.
- ID Generation – Creates a human-readable, unique feedback ID for tracking.
- Normalize Feedback – Cleans and standardizes incoming fields like name, email, rating, and comments.

**Store and enrich feedback**
- Store Partial – Saves the raw feedback data into Google Sheets.
- Common Resources – Attaches shared webinar resources such as recordings and slides.

**Analyze feedback with AI**
- Message a model – Evaluates sentiment, engagement level, and intent using an AI model.
- Parse AI Response – Extracts structured insights like segment, reply text, and next steps.

**Generate and send follow-up**
- Merge – Combines feedback data, AI response, and resources.
- Build Email HTML – Creates a clean, professional HTML email tailored to each attendee.
- Send AI Thank You Email – Sends the personalized follow-up via Gmail.

**Log final outcome**
- Store Feedback – Updates Google Sheets with the sent email content, timestamp, and status.

**Why use this?**
- Save time by automating webinar feedback follow-ups end to end.
- Ensure every attendee receives a thoughtful, personalized response.
- Maintain a complete feedback and communication log in one place.
- Improve engagement without sounding promotional or generic.
- Scale post-webinar communication without manual effort.
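A minimal sketch of what the ID Generation and Normalize Feedback steps could look like in a Code node, assuming the webhook delivers name, email, rating, and comments fields; the ID scheme and field names are illustrative assumptions rather than the template's exact logic.

```javascript
// Hypothetical sketch: generate a readable feedback ID and normalize webhook fields.
const body = $input.first().json.body || $input.first().json;

const feedbackId = `FB-${new Date().toISOString().slice(0, 10)}-${Math.random()
  .toString(36)
  .slice(2, 7)
  .toUpperCase()}`;

return [{
  json: {
    feedbackId,
    name: (body.name || '').trim(),
    email: (body.email || '').trim().toLowerCase(),
    rating: Number(body.rating) || null,
    comments: (body.comments || '').trim(),
    receivedAt: new Date().toISOString(),
  },
}];
```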
by Joe Marotta
**What This Flow Does**
An automated stock portfolio analysis system that performs comprehensive fundamental and technical analysis of your portfolio holdings on a scheduled basis, with intelligent follow-up capabilities.

**How It Works**
Two-phase analysis system:

Monday Analysis (main weekly analysis):
1. Reads your stock holdings from Google Sheets
2. Performs deep fundamental analysis using Claude AI with web search
3. Conducts technical analysis with current market data
4. Combines both analyses into final buy/sell/hold recommendations
5. Emails you a comprehensive analysis report

Wednesday Follow-up (interactive refinement):
1. Sends a midweek check-in email asking for additional input
2. If you reply with documents, questions, or market observations, runs a supplemental analysis incorporating your feedback
3. Updates recommendations based on new information and market changes
4. Delivers the refined analysis via email

**Key Features**
- Fractional share support – handles both whole and fractional stock positions
- Web-enabled AI analysis – Claude AI searches current market data, news, and earnings
- Dual-analyst approach – separate fundamental and technical analyses for comprehensive coverage
- Interactive feedback loop – Wednesday follow-ups allow you to guide the analysis
- Professional email reports – formatted HTML emails with actionable recommendations

**Setup Steps**
1. Google Sheets: Duplicate the given template and fill it with your investment information
2. Gmail OAuth: Connect your Gmail account for sending reports
3. Anthropic API: Add Claude AI credentials for analysis
4. Replace placeholders: Update YOUR_EMAIL@gmail.com, YOUR_GOOGLE_SHEETS_ID, and the webhook IDs
5. Schedule configuration: Currently set for Monday 12pm EST analysis and Wednesday 12pm EST follow-up

**Use Case**
Perfect for part-time investors who want systematic, AI-powered analysis of their portfolio with the flexibility to provide additional context and refinements throughout the week.
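As a rough illustration of the fractional-share handling mentioned above, a Code node could summarize holdings along these lines before they are passed to the analysis prompt. The sheet column names (ticker, shares, costBasis) are assumptions; use whatever the provided template sheet actually defines.

```javascript
// Illustrative sketch only: summarize holdings read from the Google Sheet.
// Column names are assumptions, not the template's guaranteed schema.
const holdings = $input.all().map(i => i.json);

const positions = holdings.map(h => {
  const shares = parseFloat(h.shares);        // fractional values like 2.35 are fine
  const costBasis = parseFloat(h.costBasis);  // average cost per share
  return {
    ticker: h.ticker,
    shares,
    costValue: +(shares * costBasis).toFixed(2),
  };
});

const totalCost = +positions.reduce((sum, p) => sum + p.costValue, 0).toFixed(2);

return [{ json: { positions, totalCost } }];
```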
by Dr. Firas
💥 Automate Scraping Google Maps Business Leads (Email, Phone, Website) using Apify
🧠 AI-Powered Business Prospecting Workflow (Google Maps + Email Enrichment)

**Who is this for?**
This workflow is designed for entrepreneurs, sales teams, marketers, and agencies who want to automate lead discovery and build qualified business contact lists — without manual searching or copying data. It's perfect for anyone seeking an AI-driven prospecting assistant that saves time, centralizes business data, and stays fully compliant with GDPR.

**What problem is this workflow solving?**
Manually searching for potential clients, copying their details, and qualifying them takes hours — and often leads to messy spreadsheets. This workflow automates the process by:
- Gathering publicly available business information from Google Maps
- Enriching that data with AI-powered summaries and contact insights
- Compiling it into a clean, ready-to-use Google Sheet database

This means you can focus on closing deals, not collecting data.

**What this workflow does**
This automation identifies, analyzes, and organizes business opportunities in just a few steps:
1. Telegram Trigger → Send a message specifying your business type, number of leads, and Google Maps URL.
2. Apify Integration → Fetches business information from Google Maps (public data).
3. Duplicate Removal → Ensures clean, non-redundant results.
4. AI Summarization (GPT-4) → Generates concise business summaries for better understanding.
5. Email Extraction (GPT-4) → Finds and extracts professional contact emails from company websites.
6. Google Sheets Integration → Automatically stores results (name, category, location, phone, email, etc.) in a structured sheet.
7. Telegram Notification → Confirms when all businesses are processed.

All data is handled ethically and transparently — only from public sources and without any unsolicited contact.

**Setup**

Telegram setup:
1. Create a Telegram bot via BotFather.
2. Copy the API token and paste it into the Telegram Trigger node credentials.

Apify setup:
1. Create an account on Apify.
2. Get your API token and connect it to the "Run Google Maps Scraper" node.

Google Sheets setup:
1. Connect your Google account under the "Google Maps Database" node.
2. Specify the target spreadsheet and worksheet name.

OpenAI setup:
1. Add your OpenAI API key to the AI nodes ("Company Summary Info" and "Extract Business Email").

Test:
Send a Telegram message like: restaurants, 5, https://www.google.com/maps/search/restaurants+in+Paris (a parsing sketch follows at the end of this section).

**How to customize this workflow to your needs**
- **Change the search region or business type** by modifying the Telegram input message format.
- **Adjust the number of leads** via the maxCrawledPlacesPerSearch parameter in Apify.
- **Add filters or enrichments** (e.g., websites with social links, review counts, or opening hours).
- **Customize AI summaries** by tweaking the prompt inside the "Company Summary Info" node.
- **Integrate CRM tools** like HubSpot or Pipedrive by adding a connector after the Google Sheets node.

**⚙️ Expected Outcome**
✅ A clean, enriched, and ready-to-use Google Sheet of businesses with:
- Name, category, address, and city
- Phone number and website
- AI-generated business summary
- Extracted professional email (if available)

✅ Telegram confirmation once all businesses are processed
✅ A fully automated, scalable, and GDPR-compliant prospecting workflow

💡 This workflow provides a transparent, ethical way to streamline your B2B lead research while staying compliant with privacy and anti-spam regulations.

🎥 Watch This Tutorial

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n
📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube / 🚀 Mes Ateliers n8n
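For reference, a minimal sketch of how the Telegram test message shown in the Setup section (business type, number of leads, Google Maps URL) could be parsed into parameters for the Apify scraper node. The output field names are assumptions, apart from maxCrawledPlacesPerSearch, which the customization note above mentions.

```javascript
// Hypothetical sketch: split "restaurants, 5, https://www.google.com/maps/search/..."
// into the parameters handed to the Apify "Run Google Maps Scraper" node.
const text = $input.first().json.message?.text || '';
const [businessType, leadCount, mapsUrl] = text.split(',').map(part => part.trim());

return [{
  json: {
    businessType,
    maxCrawledPlacesPerSearch: parseInt(leadCount, 10) || 5,
    startUrl: mapsUrl,
  },
}];
```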
by Cheng Siong Chin
**How It Works**
This workflow automates monthly revenue reconciliation across Stripe, PayPal, and bank statements by standardizing data formats, detecting discrepancies, and producing audit-ready reports. It concurrently retrieves revenue data from multiple sources, normalizes the datasets into consistent structures, consolidates records, and reconciles transactions against bank statements with intelligent mismatch detection. The system aggregates monthly totals, generates detailed audit reports with clearly flagged discrepancies, archives finalized outputs to Google Drive, and notifies tax agents. Designed for accounting firms, finance teams, and businesses, it enables automated revenue verification, multi-channel reconciliation, discrepancy identification, and compliance audit documentation without manual record matching or error-prone spreadsheet workflows.

**Setup Steps**
1. Configure Stripe and PayPal credentials.
2. Set up normalization rules for date, currency, and transaction ID mappings.
3. Connect Google Drive for report archiving and Gmail for agent notifications.
4. Define mismatch thresholds and reconciliation tolerance parameters.

**Prerequisites**
Stripe, PayPal, and bank statement accounts

**Use Cases**
Accounting firms automating client revenue verification; multi-channel e-commerce businesses

**Customization**
Add additional payment sources (Square, Shopify); adjust normalization rules for regional formats

**Benefits**
Eliminates manual reconciliation and detects discrepancies automatically
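To make the normalization and tolerance-based matching idea concrete, here is a rough Code node sketch. The node reference ("Bank Statement"), field names, and tolerance value are assumptions for illustration, not the template's actual implementation.

```javascript
// Illustrative sketch: normalize payment records and match them against bank rows
// within a configurable tolerance. All field and node names are assumptions.
const payments = $input.all().map(i => i.json);                  // merged Stripe/PayPal records
const bankRows = $('Bank Statement').all().map(i => i.json);     // hypothetical node reference

const TOLERANCE = 0.01; // reconciliation tolerance in account currency

const normalize = t => ({
  id: t.id || t.transaction_id,
  date: String(t.date).slice(0, 10),                 // assumes an ISO-8601 date field
  amount: Math.round(parseFloat(t.amount) * 100) / 100,
});

const bank = bankRows.map(normalize);

const results = payments.map(normalize).map(p => {
  const match = bank.find(b => b.date === p.date && Math.abs(b.amount - p.amount) <= TOLERANCE);
  return {
    ...p,
    matched: Boolean(match),
    discrepancy: match ? +(p.amount - match.amount).toFixed(2) : null,
  };
});

return results.map(r => ({ json: r }));
```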
by vinci-king-01
Medical Research Tracker with Email and Pipedrive

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scans authoritative healthcare policy websites for new research, bills, or regulatory changes, stores relevant findings in Pipedrive, and immediately notifies key stakeholders via email. It is ideal for healthcare administrators and policy analysts who need to stay ahead of emerging legislation or guidance that could impact clinical operations, compliance, and strategy.

**Pre-conditions/Requirements**

Prerequisites:
- n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Pipedrive account and API token
- SMTP credentials (or native n8n Email credentials) for sending alerts
- List of target URLs or RSS feeds from government or healthcare policy organizations
- Basic familiarity with n8n credential setup

Required credentials:

| Service | Credential Name | Purpose |
|---------|-----------------|---------|
| ScrapeGraphAI | API Key | Perform web scraping |
| Pipedrive | API Token | Create / update deals & notes |
| Email (SMTP/Nodemailer) | SMTP creds | Send alert emails |

Environment variables (optional):

| Variable | Example Value | Description |
|----------|---------------|-------------|
| N8N_DEFAULT_EMAIL_FROM | policy-bot@yourorg.com | Default sender for the Email Send node |
| POLICY_KEYWORDS | telehealth, Medicare, HIPAA | Comma-separated keywords for filtering |

**How it works**

Key steps:
1. **Manual Trigger** – Kick-starts the workflow (or schedule it via cron).
2. **Set → URL List** – Defines the list of healthcare policy pages or RSS feeds to scrape.
3. **Split In Batches** – Iterates through each URL so scraping happens sequentially.
4. **ScrapeGraphAI** – Extracts headlines, publication dates, and links.
5. **Code (Filter & Normalize)** – Removes duplicates, standardizes the JSON structure, and applies keyword filters.
6. **HTTP Request** – Optionally enriches data with summary content using external APIs (e.g., OpenAI, SummarizeBot).
7. **If Node** – Checks if the policy item is new (not already logged in Pipedrive).
8. **Pipedrive** – Creates a new deal or note for tracking and collaboration.
9. **Email Send** – Sends an alert to compliance or leadership teams with the policy summary.
10. **Sticky Note** – Provides inline documentation inside the workflow.

**Set up steps**

Setup time: 15-20 minutes

1. Install ScrapeGraphAI: in n8n, go to "Settings → Community Nodes" and install n8n-nodes-scrapegraphai.
2. Create credentials:
   a. Pipedrive → "API Token" from your Pipedrive settings → add in n8n.
   b. ScrapeGraphAI → obtain an API key → add as a credential.
   c. Email SMTP → configure sender details in n8n.
3. Import the workflow: copy the JSON template into n8n ("Import from clipboard").
4. Update the URL list: open the initial Set node and replace the placeholder URLs with the sites you monitor (e.g., cms.gov, nih.gov, who.int, state health departments).
5. Define keywords (optional):
   a. Open the Code node "Filter & Normalize".
   b. Adjust the const keywords = [...] array to match the topics you care about.
6. Test run: trigger manually and verify that scraped items appear in the execution logs, new deals/notes show up in Pipedrive, and the alert email lands in your inbox.
7. Schedule: add a Cron node (e.g., every 6 hours) in place of the Manual Trigger for automated execution.

**Node Descriptions**

Core workflow nodes:
- **Manual Trigger** – Launches the workflow on demand.
- **Set – URL List** – Holds an array of target policy URLs/RSS feeds.
- **Split In Batches** – Processes each URL one at a time to avoid rate limiting.
- **ScrapeGraphAI** – Scrapes page content and parses structured data.
- **Code – Filter & Normalize** – Cleans results, removes duplicates, applies the keyword filter.
- **HTTP Request – Summarize** – Calls a summarization API (optional).
- **If – Duplicate Check** – Queries Pipedrive to see if the policy item already exists.
- **Pipedrive (Deal/Note)** – Logs a new deal or adds a note with policy details.
- **Email Send – Alert** – Notifies subscribed stakeholders.
- **Sticky Note** – Embedded instructions inside the canvas.

Data flow: Manual Trigger → Set (URLs) → Split In Batches → ScrapeGraphAI → Code (Filter) → If (Duplicate?) → Pipedrive → Email Send

**Customization Examples**

Add Slack notifications:
```javascript
// Insert after Email Send
{
  "node": "Slack",
  "parameters": {
    "channel": "#policy-alerts",
    "text": `New policy update: ${$json["title"]} - ${$json["url"]}`
  }
}
```

Use a different CRM (HubSpot):
```javascript
// Replace the Pipedrive node config
{
  "resource": "deal",
  "operation": "create",
  "title": $json["title"],
  "properties": {
    "dealstage": "appointmentscheduled",
    "description": $json["summary"]
  }
}
```

**Data Output Format**

The workflow outputs structured JSON data:
```json
{
  "title": "Telehealth Expansion Act of 2024",
  "date": "2024-05-30",
  "url": "https://www.congress.gov/bill/118th-congress-house-bill/1234",
  "summary": "This bill proposes expanding Medicare reimbursement for telehealth services...",
  "source": "congress.gov",
  "status": "new"
}
```

**Troubleshooting**

Common issues:
- Empty scrape results – Check whether the target site uses JavaScript rendering; ScrapeGraphAI may need a headless browser option enabled.
- Duplicate deals in Pipedrive – Ensure the "If Duplicate?" node compares a unique field (e.g., URL or title) before creating a new deal.

Performance tips:
- Limit batch size to avoid API rate limits.
- Cache or store the last scraped timestamp to skip unchanged pages.

Pro tips:
- Combine this workflow with an n8n Cron or Webhook trigger for fully automated monitoring.
- Use environment variables for keywords and email recipients to avoid editing nodes each time.
- Leverage Pipedrive's automations to notify additional teams (e.g., legal) when high-priority items are logged.
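A hedged sketch of the "Filter & Normalize" Code node referenced in the setup steps, built around the const keywords = [...] array mentioned above. The shape of the scraped items is an assumption, so adjust the field names to the actual ScrapeGraphAI output.

```javascript
// Sketch of the "Filter & Normalize" step: deduplicate by URL, keep keyword matches,
// and emit the output format shown below. Item field names are assumptions.
const keywords = ['telehealth', 'Medicare', 'HIPAA']; // edit to match POLICY_KEYWORDS

const seen = new Set();
const results = [];

for (const item of $input.all()) {
  const { title = '', url = '', date, summary = '' } = item.json;

  // Drop items without a URL and skip duplicates
  if (!url || seen.has(url)) continue;
  seen.add(url);

  // Keep only items mentioning at least one keyword
  const text = `${title} ${summary}`.toLowerCase();
  if (!keywords.some(k => text.includes(k.toLowerCase()))) continue;

  results.push({
    json: { title, url, date, summary, source: new URL(url).hostname, status: 'new' },
  });
}

return results;
```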
by Cheng Siong Chin
**How It Works**
This workflow automates end-to-end research analysis by coordinating multiple AI models (NVIDIA NIM Llama, OpenAI GPT-4, and Anthropic Claude) to analyze uploaded documents, extract insights, and generate polished reports delivered via email. Built for researchers, academics, and business analysts, it enables fast, accurate synthesis of information from multiple sources. The workflow eliminates the manual burden of document review, cross-referencing, and report compilation by running parallel AI analyses, aggregating and validating model outputs, and producing structured, publication-ready documents in minutes instead of hours. Data flows from Google Sheets (user input) through document extraction, parallel AI processing, response aggregation, quality validation, structured storage in Google Sheets, automated report formatting, and final delivery via Gmail with attachments.

**Setup Steps**
Configure API credentials:
1. Add your OpenAI API key with GPT-4 access enabled.
2. Connect Anthropic Claude API credentials.
3. Set up the Google Sheets integration with read/write permissions.
4. Configure Gmail credentials with OAuth2 authentication for automated email.
5. Customize email templates and report formatting preferences.

**Prerequisites**
NVIDIA NIM API access, OpenAI API key (GPT-4 enabled), Anthropic Claude API key

**Use Cases**
Academic literature reviews, competitive intelligence reports

**Customization**
Adjust AI model parameters (temperature, tokens) to the analysis depth you need

**Benefits**
Reduces research analysis time by 80% and eliminates single-source bias through multi-model consensus
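As an illustration of the response-aggregation and validation stage, the sketch below collects the parallel model outputs and flags disagreement for review. The node names and the simple consensus heuristic are assumptions, not the template's exact logic.

```javascript
// Illustrative sketch: aggregate parallel model outputs and flag items needing review.
// The referenced node names are assumptions; rename them to match your workflow.
const sources = ['NVIDIA NIM Analysis', 'GPT-4 Analysis', 'Claude Analysis'];

const answers = sources
  .map(name => ({ model: name, text: $(name).first()?.json?.text ?? '' }))
  .filter(a => a.text.trim().length > 0);

// Rough consensus check: every model responded, and no response is wildly longer than the others
const lengths = answers.map(a => a.text.length);
const needsReview =
  answers.length < sources.length || Math.max(...lengths) > 5 * Math.min(...lengths);

return [{
  json: {
    modelCount: answers.length,
    needsReview,
    combined: answers.map(a => `## ${a.model}\n\n${a.text}`).join('\n\n'),
  },
}];
```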
by Avkash Kakdiya
**How it works**
This workflow automates end-to-end contract analysis when a new file is uploaded to Google Drive. It downloads the contract, extracts its content, and uses AI to analyze legal terms, obligations, and risks. Based on the assessed risk level, it notifies stakeholders and logs structured results into Google Sheets for audit and compliance.

**Step-by-step**

**Step 1: Contract ingestion and AI analysis**
- Google Drive Trigger – Monitors a specific folder for newly uploaded contract files.
- Download file – Downloads the uploaded contract from Google Drive.
- Extract Text From Downloaded File – Extracts readable text or prepares raw content for complex files.
- AI Contract Analysis – Analyzes legal, commercial, and financial clauses using AI.
- Format AI Output – Parses and structures the AI response into clean, usable fields.

**Step 2: Risk alerts and audit logging**
- Alert Teams Automatically – Evaluates the risk level and checks for significant risks.
- Send a message (Risk Alert) – Sends a detailed alert email for medium-risk contracts.
- Send a message (Info Only) – Sends an informational email when no action is required.
- Get The Data To Save In Google Sheet (Alert Path) – Prepares alert-related contract data.
- Get The Data To Save In Google Sheet (Info Path) – Prepares non-alert contract data.
- Append row in sheet – Stores contract details, risks, and timestamps in Google Sheets.

**Why use this?**
- Eliminates manual contract screening and repetitive reviews.
- Detects explicit and inferred risks consistently using AI.
- Automatically alerts teams only when attention is required.
- Creates a centralized audit log for compliance and reporting.
- Scales contract analysis without increasing legal workload.
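A minimal sketch of the Format AI Output step feeding the Alert Teams Automatically check, assuming the model returns JSON with a riskLevel field; the field names and the alert rule are illustrative assumptions.

```javascript
// Hypothetical sketch: parse the AI reply and derive the flag used for risk routing.
const raw = $input.first().json.text || '{}';

let analysis;
try {
  analysis = JSON.parse(raw);
} catch (e) {
  // Fall back to a safe default if the model did not return valid JSON
  analysis = { riskLevel: 'unknown', risks: [], summary: raw };
}

const riskLevel = (analysis.riskLevel || 'unknown').toLowerCase();

return [{
  json: {
    riskLevel,
    requiresAlert: ['medium', 'high'].includes(riskLevel),
    risks: analysis.risks || [],
    summary: analysis.summary || '',
    analyzedAt: new Date().toISOString(),
  },
}];
```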
by Oneclick AI Squad
This automated n8n workflow processes student applications on a scheduled basis, validates data, updates databases, and sends welcome communications to students and guardians.

**Main Components**
- **Trigger at Every Day 7 am** – Scheduled trigger that runs the workflow daily
- **Read Student Data** – Reads pending applications from Excel/database
- **Validate Application Data** – Checks data completeness and format
- **Process Application Data** – Processes validated applications
- **Update Student Database** – Updates records in the student database
- **Prepare Welcome Email** – Creates personalized welcome messages
- **Send Email** – Sends welcome emails to students/guardians
- **Success Response** – Confirms successful processing
- **Error Response** – Handles any processing errors

**Essential Prerequisites**
- Excel file with student applications (student_applications.xlsx)
- Database access for student records
- SMTP server credentials for sending emails
- File storage access for reading application data

**Required Excel File Structure (student_applications.xlsx)**
```
Application ID | First Name | Last Name | Email | Phone | Program Interest | Grade Level | School | Guardian Name | Guardian Phone | Application Date | Status | Notes
```

**Expected Input Data Format**
```json
{
  "firstName": "John",
  "lastName": "Doe",
  "email": "john.doe@example.com",
  "phone": "+1234567890",
  "program": "Computer Science",
  "gradeLevel": "10th Grade",
  "school": "City High School",
  "guardianName": "Jane Doe",
  "guardianPhone": "+1234567891"
}
```

**Key Features**
- ⏰ **Scheduled Processing:** Runs daily at 7 AM automatically
- 📊 **Data Validation:** Ensures application completeness
- 💾 **Database Updates:** Maintains student records
- 📧 **Auto Emails:** Sends welcome messages
- ❌ **Error Handling:** Manages processing failures

**Quick Setup**
1. Import the workflow JSON into n8n
2. Configure the schedule trigger (default: 7 AM daily)
3. Set the Excel file path in the "Read Student Data" node
4. Configure the database connection in the "Update Student Database" node
5. Add SMTP settings in the "Send Email" node
6. Test with sample data
7. Activate the workflow

**Parameters to Configure**
- excel_file_path: Path to the student applications file
- database_connection: Student database credentials
- smtp_host: Email server address
- smtp_user: Email username
- smtp_password: Email password
- admin_email: Administrator notification email
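To illustrate the Validate Application Data step, a Code node along these lines could check completeness and email format for the fields shown in the expected input; the exact validation rules are assumptions.

```javascript
// Sketch of the validation step: check required fields and basic email format.
const required = ['firstName', 'lastName', 'email', 'phone', 'program', 'gradeLevel', 'guardianName'];

return $input.all().map(item => {
  const app = item.json;
  const missing = required.filter(f => !app[f] || String(app[f]).trim() === '');
  const emailOk = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(app.email || '');

  item.json.isValid = missing.length === 0 && emailOk;
  item.json.validationErrors = [
    ...missing.map(f => `Missing field: ${f}`),
    ...(emailOk ? [] : ['Invalid email format']),
  ];
  return item;
});
```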