by Niels Berkhout
How it works

This workflow sends a message to a specific LinkedIn profile. The LinkedIn profile data and message content are sourced from a Google Sheet. The SourceGeek node sends the message from your LinkedIn profile to another LinkedIn user. Once the message is successfully sent, the original row in the Google Sheet is updated with a timestamp.

How to use

- Prepare a Google Sheet containing a LinkedIn profile URL and a message
- Select the rows you want the workflow to process
- The workflow loops through the selected rows and uses the SourceGeek node to send a message to each LinkedIn profile
- After a message is sent, the corresponding row is updated to prevent it from being used again

Requirements

- A Google Sheet with LinkedIn profile URLs and message content
- A verified SourceGeek node
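The select-and-skip logic above can be sketched as an n8n Code node. The column names (`linkedin_url`, `message`, `sent_at`) are hypothetical placeholders; substitute whatever your sheet actually uses:

```javascript
// Keep only rows that have a profile URL and a message but no sent timestamp.
// Column names (linkedin_url, message, sent_at) are illustrative assumptions.
function selectPendingRows(rows) {
  return rows.filter((r) => r.linkedin_url && r.message && !r.sent_at);
}

// After a send succeeds, stamp the row so it is skipped on the next run.
function markSent(row, now = new Date()) {
  return { ...row, sent_at: now.toISOString() };
}
```

Writing the timestamp back is what makes the workflow safe to re-run: already-sent rows fall out of the filter automatically.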
by Om Gate
This n8n template demonstrates how to monitor YouTube channels and create AI-generated summaries in Notion. It helps you build a searchable video knowledge base without watching every new upload manually.

Who's it for

- Research teams tracking industry content
- PR professionals monitoring brand mentions, product reviews, and industry news for daily press briefings
- Content teams building internal learning libraries

Good to know

- The workflow uses RSS, so it triggers when new feed items appear.
- Long videos may take more time to process and summarize.
- Make sure your VideoDB account has enough balance before running this workflow. Track usage rates at console.videodb.io/dashboard/usage.

How it works

1. An RSS trigger watches a YouTube channel feed.
2. New video links are sent to VideoDB for upload and transcription.
3. VideoDB summarizes the transcript into key points.
4. n8n creates a Notion database entry with title, link, and summary.
5. Your Notion workspace becomes a continuously updated content archive.

How to use

1. Add credentials for VideoDB and Notion.
2. Set your target YouTube RSS URL in the trigger node.
3. Configure the Notion database ID and field mapping.
4. Test with a sample feed item, then activate the workflow.

Requirements

- VideoDB API key (Get one here)
- Notion workspace with API access
- YouTube channel RSS feed URL (you can use tools such as https://tubepilot.ai/tools/youtube-rss-feed-generator/ to get the RSS feed for any YouTube channel)
- n8n instance (cloud or self-hosted)

Customising this workflow

- Track multiple channels by adding more RSS triggers.
- Change the AI prompt for shorter or more detailed summaries.
- Add topic tags or sentiment fields in Notion.
- Send Slack updates when a new summary is created.

Disclaimer: This workflow uses VideoDB's Verified Community Node and will only work on self-hosted n8n instances.
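You do not strictly need a third-party generator for the feed URL: YouTube exposes a channel's RSS/Atom feed at a fixed URL pattern, so you can build it directly from the channel ID:

```javascript
// YouTube publishes a per-channel feed at a standard, stable URL pattern.
function youtubeRssUrl(channelId) {
  return `https://www.youtube.com/feeds/videos.xml?channel_id=${encodeURIComponent(channelId)}`;
}
```

Paste the resulting URL into the RSS trigger node; the channel ID is the `UC…` string from the channel page URL.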
by vinci-king-01
AI Conference Intelligence & Networking Optimizer with ScrapeGraphAI

> ⚠️ IMPORTANT: This template requires a self-hosted n8n instance with ScrapeGraphAI integration. It cannot be used with n8n Cloud due to its web scraping capabilities.

How it works

This workflow automatically discovers industry conferences and provides AI-powered networking intelligence to maximize your event ROI.

Key Steps

1. Scheduled Discovery - Runs weekly to find new industry conferences from Eventbrite and other sources.
2. AI-Powered Scraping - Uses ScrapeGraphAI to extract comprehensive conference information including speakers, agenda, and networking opportunities.
3. Speaker Intelligence - Analyzes speakers to identify high-priority networking targets based on their role, company, and expertise.
4. Agenda Analysis - Extracts and maps the complete conference schedule to optimize your time and networking strategy.
5. Networking Strategy - Generates AI-powered recommendations for maximizing networking ROI with prioritized contact lists and approach strategies.

Set up steps

Setup time: 10-15 minutes

1. Configure ScrapeGraphAI credentials - Add your ScrapeGraphAI API key for web scraping capabilities.
2. Customize conference sources - Update the Eventbrite URL to target specific industries or locations.
3. Adjust monitoring frequency - Modify the weekly trigger to match your conference discovery needs.
4. Review networking priorities - The system automatically prioritizes speakers, but you can customize the criteria.
Technical Configuration

Prerequisites:
- Self-hosted n8n instance (version 1.0+)
- ScrapeGraphAI API credentials
- Eventbrite API access (optional, for enhanced data)

API Configuration

ScrapeGraphAI Setup:
1. Sign up at https://scrapegraph.ai
2. Generate an API key from the dashboard
3. Add credentials in n8n: Settings > Credentials > Add Credential > ScrapeGraphAI

Customization Examples

Modify Conference Sources:

```javascript
// In Eventbrite Scraper node, update the URL:
const targetUrl = "https://www.eventbrite.com/d/united-states/technology/";
const industryFilter = "?q=artificial+intelligence";
```

Adjust Networking Priorities:

```javascript
// In Speaker Intelligence node, modify scoring weights:
const priorityWeights = {
  executive_level: 0.4,
  company_size: 0.3,
  industry_relevance: 0.2,
  speaking_topic: 0.1
};
```

Customize Output Format:

```javascript
// In Networking Strategy node, modify output structure:
const outputFormat = {
  high_priority: speakers.filter(s => s.score > 8),
  medium_priority: speakers.filter(s => s.score > 6 && s.score <= 8),
  networking_plan: generateApproachStrategy(speakers)
};
```

Data Storage & Output Formats

Storage Options:
- **Local JSON files** - Default storage for conference data
- **Google Drive** - For sharing reports with your team
- **Database** - PostgreSQL/MySQL for enterprise deployments
- **Cloud Storage** - AWS S3, Google Cloud Storage

Output Formats:
- **JSON** - Raw data for API integration
- **CSV** - For spreadsheet analysis
- **PDF Reports** - Executive summaries
- **Markdown** - Documentation and sharing

Sample Output Structure:

```json
{
  "conference_data": {
    "event_name": "AI Summit 2024",
    "date": "2024-06-15",
    "location": "San Francisco, CA",
    "speakers": [
      {
        "name": "Dr. Sarah Chen",
        "title": "CTO, TechCorp",
        "company": "TechCorp Inc",
        "networking_score": 9.2,
        "priority": "high",
        "approach_strategy": "Connect via LinkedIn, mention shared AI interests"
      }
    ],
    "networking_plan": {
      "high_priority_targets": 5,
      "recommended_approach": "Focus on AI ethics panel speakers",
      "schedule_optimization": "Attend morning keynotes, network during breaks"
    }
  }
}
```

Key Features

- **Automated Conference Discovery** - Finds relevant industry events from multiple sources
- **Speaker Intelligence Analysis** - Identifies high-value networking targets with contact priority scoring
- **Strategic Agenda Mapping** - Optimizes your conference schedule for maximum networking impact
- **AI-Powered Recommendations** - Provides personalized networking strategies and approach methods
- **Priority Contact Lists** - Ranks speakers by business value and networking potential

Troubleshooting

Common Issues:
- ScrapeGraphAI Rate Limits - Implement delays between requests
- Website Structure Changes - Update scraping prompts in ScrapeGraphAI nodes
- API Authentication - Verify credentials and permissions

Performance Optimization:
- Adjust trigger frequency based on conference season
- Implement caching for repeated data
- Use batch processing for large conference lists

Support & Customization

For advanced customization or enterprise deployments, consider:
- Custom speaker scoring algorithms
- Integration with CRM systems (Salesforce, HubSpot)
- Advanced analytics and reporting dashboards
- Multi-language support for international conferences
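The rate-limit fix listed under Troubleshooting (delays between requests) can be sketched as a simple throttle in a Code node. The 1-second default interval is an assumption; tune it against your ScrapeGraphAI plan's actual limits:

```javascript
// Resolve after the given number of milliseconds.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run async tasks one at a time with a fixed pause between them,
// so scraping requests stay under the provider's rate limit.
async function throttle(tasks, delayMs = 1000) {
  const results = [];
  for (const task of tasks) {
    results.push(await task());
    await sleep(delayMs);
  }
  return results;
}
```

Sequential execution with a pause is the simplest approach; for stricter limits you could extend this with exponential backoff on HTTP 429 responses.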
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description: Unified Academic Search Across Major Research Databases

This powerful workflow enables researchers to search multiple academic databases simultaneously, automatically deduplicate results, and export formatted bibliographies. By leveraging PDF Vector's multi-database search capabilities, researchers can save hours of manual searching and ensure comprehensive literature coverage across PubMed, ArXiv, Google Scholar, Semantic Scholar, and ERIC databases.

Target Audience & Problem Solved

This template is designed for:
- **Graduate students** conducting systematic literature reviews
- **Researchers** ensuring comprehensive coverage of their field
- **Librarians** helping patrons with complex searches
- **Academic teams** building shared bibliographies

It solves the critical problem of fragmented academic search by providing a single interface to query all major databases, eliminating duplicate results, and standardizing output formats.
Prerequisites

- n8n instance with PDF Vector node installed
- PDF Vector API credentials with search permissions
- Basic understanding of academic search syntax
- Optional: PostgreSQL for search history logging
- Minimum 50 API credits for comprehensive searches

Step-by-Step Setup Instructions

1. Configure PDF Vector Credentials
- Go to the n8n Credentials section
- Create new PDF Vector credentials
- Enter your API key from pdfvector.io
- Test the connection to verify setup

2. Import the Workflow Template
- Copy the template JSON code
- In n8n, click "Import Workflow"
- Paste the JSON and save
- Review all nodes for any configuration needs

3. Customize Search Parameters
- Open the "Set Search Parameters" node
- Modify the default search query for your field
- Adjust the year range (default: 2020-present)
- Set the results-per-source limit (default: 25)

4. Configure Export Options
- Choose your preferred export formats (BibTeX, CSV, JSON)
- Set the output directory for files
- Configure file naming conventions
- Enable/disable specific export types

5. Test Your Configuration
- Run the workflow with a sample query
- Check that all databases return results
- Verify deduplication is working correctly
- Confirm export files are created properly

Implementation Details

The workflow implements a sophisticated search pipeline:
1. Parallel Database Queries: searches all configured databases simultaneously for efficiency
2. Smart Deduplication: uses DOI matching and fuzzy title comparison to remove duplicates
3. Relevance Scoring: combines citation count, title relevance, and recency for ranking
4. Format Generation: creates properly formatted citations in multiple styles
5. Batch Processing: handles large result sets without memory issues

Customization Guide

Adding Custom Databases:

```javascript
// In the PDF Vector search node, add to the providers array:
"providers": ["pubmed", "semantic_scholar", "arxiv", "google_scholar", "eric", "your_custom_db"]
```

Modifying the Relevance Algorithm: edit the "Rank by Relevance" node to adjust scoring weights:

```javascript
// Adjust these weights for your needs:
const titleWeight = 10;    // Title match importance
const citationWeight = 5;  // Citation count importance
const recencyWeight = 10;  // Recent publication bonus
const fulltextWeight = 15; // Full-text availability bonus
```

Custom Export Formats: add new format generators in the workflow:

```javascript
// Example: Add APA format export
const apaFormat = papers.map(p => {
  const authors = p.authors.slice(0, 3).join(', ');
  return `${authors} (${p.year}). ${p.title}. ${p.journal || 'Preprint'}.`;
});
```

Advanced Filtering: implement additional filters such as:
- Journal impact factor thresholds
- Open-access-only options
- Language restrictions
- Methodology filters for systematic reviews

Search Features:
- Query multiple databases in parallel
- Advanced filtering and deduplication
- Citation format export (BibTeX, RIS, etc.)
- Relevance ranking across sources
- Full-text availability checking

Workflow Process:
1. Input: search query and parameters
2. Parallel Search: query all databases
3. Merge & Deduplicate: combine results
4. Rank: sort by relevance/citations
5. Enrich: add full-text links
6. Export: multiple format options
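The DOI-matching and title-comparison deduplication described under Implementation Details might look roughly like this in a Code node. The result field names (`doi`, `title`) are assumptions about the merged search output:

```javascript
// Normalize titles so near-identical strings compare equal
// (case, punctuation, and whitespace differences are ignored).
function normalizeTitle(title) {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
}

// Prefer the DOI as the identity key; fall back to the normalized title.
function deduplicate(papers) {
  const seen = new Set();
  return papers.filter((p) => {
    const key = p.doi
      ? `doi:${p.doi.toLowerCase()}`
      : `title:${normalizeTitle(p.title)}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```

A true fuzzy comparison would add an edit-distance threshold on the normalized titles; the normalization step above already catches the common case of punctuation and casing differences between databases.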
by Samuel Heredia
This n8n workflow securely processes contact form submissions by validating user input, formatting the data, and storing it in a MongoDB database. The flow ensures data consistency, prevents unsafe entries, and provides a confirmation response back to the user.

Workflow

1. Form Submission Node
Purpose: serves as the workflow's entry point.
Functionality: captures user input from the contact form, which typically includes:
- name
- last name
- email
- phone number

2. Code Node (Validation Layer)
Purpose: ensures that collected data is valid and secure.
Validations performed:
- Removes suspicious characters to mitigate risks like SQL injection or script injection.
- Validates the phone_number field format (numeric, correct length, etc.).
- If any field fails validation, the entry is marked as "is_not_valid" to block it from database insertion.

3. Edit Fields Node (Data Formatting)
Purpose: normalizes data before database insertion.
Transformations applied:
- Converts field names to snake_case (first_name, last_name, phone_number).
- Standardizes the field naming convention for consistency in MongoDB storage.

4. MongoDB Node (Insert Documents)
Purpose: persists validated data in MongoDB Atlas.
Process: inserts documents into the target collection with the cleaned and formatted fields. The connection is established securely using a MongoDB Atlas connection string (URI).

🔧 How to Set Up the MongoDB Atlas Connection URL

a. Create a cluster: log in to MongoDB Atlas and create a new cluster.
b. Configure database access: add a database user with a secure username and password, and assign appropriate roles (e.g., Atlas Admin for full access, or Read/Write for limited access).
c. Obtain the connection string (URI): from Atlas, go to Clusters → Connect → Drivers and copy the provided connection string, which looks like:
mongodb+srv://<username>:<password>@cluster0.abcd123.mongodb.net/myDatabase?retryWrites=true&w=majority
d. Configure in n8n: in the MongoDB node, paste the URI. Replace <username>, <password>, and myDatabase with your actual credentials and database name. Test the connection to ensure it is successful.

5. Form Ending Node
Purpose: provides closure to the workflow.
Functionality: sends a confirmation response back to the user, indicating that their contact details were successfully submitted and stored.

✅ Result: with this workflow, all contact form submissions are safely validated, normalized, and stored in MongoDB Atlas, ensuring both data integrity and basic security.
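The validation layer in step 2 can be sketched like this. The exact character blocklist and phone format are assumptions for illustration; adapt both to your threat model and locale:

```javascript
// Strip characters commonly used in injection payloads.
// The blocklist here is an illustrative assumption, not a complete defense.
function sanitize(value) {
  return String(value).replace(/[<>$;{}]/g, "").trim();
}

// Accept 7-15 digits, optionally prefixed with "+". Adjust per locale.
function isValidPhone(phone) {
  return /^\+?\d{7,15}$/.test(phone);
}

// Clean every field, rename to snake_case, and flag invalid entries
// so the downstream MongoDB insert can skip them.
function validateSubmission(input) {
  const cleaned = {
    first_name: sanitize(input.name),
    last_name: sanitize(input.lastName),
    email: sanitize(input.email),
    phone_number: sanitize(input.phoneNumber),
  };
  cleaned.is_not_valid = !isValidPhone(cleaned.phone_number);
  return cleaned;
}
```

Note that since the MongoDB node inserts documents (rather than building query strings), parameterization already prevents classic SQL-style injection; the sanitization mainly guards against stored script content.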
by System Admin
Script to delete traces of Selenium in the browser.
by Jannik Lehmann
GitLab Wrapped Generator ✨

Automatically generate your personalized GitLab Wrapped, a stunning year-in-review of your contributions, activity, and stats. Powered by gitlab-wrapped by @michaelangelorivera.

🚀 How it works

1. Forks the gitlab-wrapped project (or finds your existing fork)
2. Configures CI/CD environment variables
3. Triggers the GitLab pipeline
4. Monitors until completion (polls every 2 minutes)

🎉 Your wrapped will be available at: https://YOUR-USERNAME.gitlab.io/gitlab-wrapped

---

⚙️ Setup

1. Create a GitLab PAT with these scopes: api, read_repository, write_repository
2. Fill out the form:
   - Your GitLab username
   - Your PAT token
   - GitLab instance URL (defaults to gitlab.com)
   - Year (defaults to 2025)
3. Submit & relax! ☕ The workflow handles everything automatically.

---

💡 Works with GitLab.com and self-hosted instances
📅 Generate wrapped reports for any past year
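The "monitors until completion" step polls GitLab's pipelines REST endpoint every two minutes. A sketch of the two pure pieces of that loop, assuming GitLab's documented v4 API path and pipeline status values:

```javascript
// GitLab pipeline statuses that mean the run is still in progress.
const PENDING_STATUSES = new Set([
  "created", "waiting_for_resource", "preparing", "pending", "running",
]);

// True once the pipeline has reached a terminal state (success, failed, etc.).
function isFinished(status) {
  return !PENDING_STATUSES.has(status);
}

// Build the REST endpoint used to check a pipeline's status.
// Project paths must be URL-encoded (e.g. "user/repo" -> "user%2Frepo").
function pipelineStatusUrl(baseUrl, projectPath, pipelineId) {
  return `${baseUrl}/api/v4/projects/${encodeURIComponent(projectPath)}/pipelines/${pipelineId}`;
}
```

The workflow's Wait node supplies the 2-minute interval; each cycle fetches this URL with the PAT in the `PRIVATE-TOKEN` header and loops until `isFinished` is true.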
by Kirill Khatkevich
Meta Ads Detailed Targeting Extractor (Universal, Switch by Endpoint)

This workflow is a universal automation for all four Meta Detailed Targeting API endpoints: Search, Suggestions, Browse, and Validation. You use a single Google Sheets tab with an endpoint column; a Switch node routes each row to the correct branch; results are written to four separate sheets in the same spreadsheet. It is designed for media buyers, performance marketers, and analysts who manage targeting research, audience suggestions, browse trees, and validation in bulk and want one workflow instead of four.

Use Case

Working with Meta's Detailed Targeting API usually means separate flows for search, suggestions, browse, and validation. This workflow is ideal if you want to:

- **Centralize targeting operations** in one place: one input sheet, one workflow, four result sheets.
- **Drive everything from Google Sheets**: add rows with endpoint (search | suggestions | browse | validation), ad_account_id, and endpoint-specific parameters; run manually or on new rows.
- **Keep results organized** by endpoint: search_results, suggestions_results, browse_results, validation_results in the same document.
- **Run on demand or on row add**: use Manual Trigger for full-sheet runs or Google Sheets Trigger to process only new rows.

How it Works

The workflow is organized into clear blocks:

1. Trigger & input
- **Manual Trigger → Read Input (Google Sheets)** — reads the entire targeting_requests sheet for ad-hoc or test runs.
- **Google Sheets Trigger** — runs when a new row is added to targeting_requests; only new rows are processed (no re-processing of existing data).
- **Read Input (Google Sheets)** always reads from the same sheet: targeting_requests.

2. Validation & routing
- **Valid rows (ad_account_id + endpoint)** — Filter node keeps only rows where both ad_account_id and endpoint are non-empty.
- **Switch by endpoint** — routes each row to one of four branches based on endpoint: search, suggestions, browse, or validation (values must match exactly, including case).

3. Each branch (Search, Suggestions, Browse, Validation)
- **API** (Facebook Graph API) — calls the corresponding edge: targetingsearch, targetingsuggestions, targetingbrowse, or targetingvalidation with parameters from the row (act_{ad_account_id}/...).
- **Merge** (combine by position) — merges the API response with the original request row so each result keeps context (e.g. ad_account_id, q, targeting_list).
- **Split** (field: data) — expands the API data array into one item per targeting result.
- **Format** — maps fields to flat columns for the sheet: endpoint, ad_account_id, query, limit_type, targeting_id, targeting_name, audience_size_lower_bound, audience_size_upper_bound, path, description, type; for the Validation branch, valid is also included.
- **Save to Google Sheets** — appends to the branch-specific sheet: search_results, suggestions_results, browse_results, or validation_results.

4. Output
All four Save nodes write to the same spreadsheet (same Document ID), each to its own sheet. The valid column is populated only in validation_results; other sheets leave it empty.

Input sheet: targeting_requests

Required columns for every row:

| Column | Description |
|---------------|-------------|
| endpoint | One of: search, suggestions, browse, validation (lowercase). |
| ad_account_id | Meta ad account ID (without the act_ prefix). |

Endpoint-specific columns:

| Endpoint | Required | Optional |
|-------------|----------|----------|
| search | q — search query | limit (default 25), limit_type, locale |
| suggestions | targeting_list — JSON array, e.g. [{"type":"interests","id":"6003263791114"}] | limit (up to 45), limit_type, locale |
| browse | — | limit_type, locale |
| validation | One of: targeting_list, id_list, or name_list (string/JSON per Meta API docs) | locale |

If search has no q, or suggestions / validation lack the required targeting input, the API call will fail.

Output sheets (same document)

Use the same Document ID in Read Input, Google Sheets Trigger, and all four Save nodes. Create (or let n8n create) these sheet names:

| Sheet | Branch | Notes |
|---------------------|-------------|-------|
| search_results | Search | endpoint, ad_account_id, query, limit_type, targeting_id, targeting_name, audience_size_*, path, description, type |
| suggestions_results | Suggestions | Same columns; query holds the targeting_list from the request |
| browse_results | Browse | Same columns; query empty |
| validation_results | Validation | Same columns + valid (true/false from API) |

Setup Instructions

1. Credentials
- Connect Google Sheets OAuth2 in: Read Input (Google Sheets), Google Sheets Trigger (if used), and all four Save nodes.
- Connect Facebook Graph API credentials in each of the four API nodes (e.g. the same "Facebook Graph" credential set).

2. Spreadsheet & sheets
Set the Document ID in:
- Read Input (Google Sheets) — Document = your spreadsheet, Sheet = targeting_requests.
- Google Sheets Trigger — same Document ID and sheet targeting_requests (if you use the trigger).
- All four Save …_results nodes — same Document ID; each node uses its own Sheet name: search_results, suggestions_results, browse_results, validation_results.
Create the input sheet targeting_requests with the columns described above and the four result sheets (or allow n8n to create them on first append).

3. Switch by endpoint
Ensure the endpoint column in targeting_requests contains exactly: search, suggestions, browse, or validation (lowercase, as in the Switch conditions).

4. Triggers
Keep Manual Trigger for full-sheet runs; use Google Sheets Trigger for row-added automation. When using the trigger, run the workflow only when new rows are added so existing rows are not processed again.

5. Activate
Save and activate the workflow. Test with a few rows per endpoint before processing large sheets.

The workflow reuses the same patterns as other Meta Detailed Targeting templates: read from Sheets, call Facebook Graph API, Merge by position, Split Out on data, then append to Sheets. The difference is the single input sheet with an endpoint column, Switch-based routing, and four dedicated branches writing to four sheets in one spreadsheet.
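The four branches differ only in which Graph API edge they call; the routing and path construction can be sketched like this (the edge names come from the description above; verify parameter details against Meta's Marketing API docs):

```javascript
// Map the sheet's endpoint column to the Graph API edge name.
const EDGES = {
  search: "targetingsearch",
  suggestions: "targetingsuggestions",
  browse: "targetingbrowse",
  validation: "targetingvalidation",
};

// Build the act_{ad_account_id}/{edge} path used by each API node.
// Throws on unknown endpoint values, mirroring the strict Switch routing.
function buildEndpoint(row) {
  const edge = EDGES[row.endpoint];
  if (!edge) throw new Error(`Unknown endpoint: ${row.endpoint}`);
  return `act_${row.ad_account_id}/${edge}`;
}
```

Because the lookup is an exact key match, it fails for `Search` or `SEARCH` just as the Switch node does, which is why the input sheet must use lowercase endpoint values.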
by Paul Kobelke
Remove Duplicates & Update Google Sheets

How it Works

This workflow helps you keep your Google Sheets clean and up-to-date by automatically removing duplicate entries and re-uploading the cleaned data back to your sheet. It's especially useful for large lead lists, email databases, or any dataset where duplicate rows can cause confusion and inefficiency.

The flow:
1. Trigger the workflow manually.
2. Fetch all rows from a specific Google Sheet.
3. Identify and remove duplicate rows based on the profileUrl field.
4. Convert the cleaned dataset into a file.
5. Update the original Google Sheet with the new, deduplicated data.

Setup Steps
1. Connect your Google Sheets and Google Drive credentials in n8n.
2. Update the workflow with your desired spreadsheet and sheet ID.
3. Run the workflow by clicking "Execute Workflow" whenever you want to clean up your data.

The process only takes a few seconds and ensures your sheet stays organized without any manual effort.

Use Cases
- CRM lead management (avoiding duplicate prospects).
- Contact lists with scraped or imported data.
- Marketing databases with overlapping submissions.
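Step 3 of the flow, deduplicating on profileUrl, can be sketched in a Code node (the case-insensitive comparison is an assumption; drop the `toLowerCase()` if your URLs are case-significant):

```javascript
// Keep the first occurrence of each profileUrl; drop later duplicates.
function removeDuplicates(rows) {
  const seen = new Set();
  return rows.filter((row) => {
    const key = (row.profileUrl || "").toLowerCase();
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```

Keeping the first occurrence preserves the original row order, so the re-uploaded sheet looks like the old one minus the repeats.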
by Agus Narestha
✅ What This Workflow Does

This workflow automates the process of creating Google Calendar events from a Google Sheet. It ensures each row in the sheet is evaluated for its current status and:
- Creates new events in Google Calendar for rows marked as pending or failed.
- Updates the Google Sheet with the result: Created, Failed, or Duplicate.
- Handles errors gracefully and prevents duplicate event creation.

🛠️ How It Works
1. Manual Trigger: start the workflow.
2. Read Sheet: fetch all rows from the configured Google Sheet.
3. Check Status: if the status is pending or failed, continue to event creation. If already created, update the sheet as Duplicate.
4. Create Event: attempt to create a Google Calendar event using the row's data.
5. Check for Errors: if the creation succeeds, update the sheet as Created. If it fails, update the sheet as Failed.
6. Update Sheet: reflect the result (Created, Failed, Duplicate) for each row.

This ensures a reliable workflow where the Google Sheet and Google Calendar remain synchronized without manual intervention.

🧰 Setup Requirements

To run this workflow, you need:
- An n8n account with workflow access.
- Google Sheets OAuth2 credentials connected to your Google Sheet.
- Google Calendar OAuth2 credentials connected to the target calendar.
- A properly formatted Google Sheet with columns for event details (see below).
- The workflow nodes must be authorized to read and write to both Google Sheets and Google Calendar.

🧩 Key Features
- Manual Trigger: start the workflow anytime.
- Google Sheets Read/Write: reads event data and updates status after processing.
- Google Calendar Integration: automatically creates events.
- Error Handling: detects errors during event creation and logs them in the sheet.
- Duplicate Prevention: rows already processed are marked as duplicates.
- Dynamic Data Mapping: pulls event details directly from the sheet to Google Calendar.
📂 Input Spreadsheet Format

The workflow expects a sheet with the following columns:

| summary | start | end | description | location | attendees | status |
|---------|-------|-----|-------------|----------|-----------|--------|
| Meeting A | 2026-03-20T10:00:00+08:00 | 2026-03-20T11:00:00+08:00 | Client discussion | Google Meet | test@gmail.com | pending |
| Meeting B | 2026-03-21T10:00:00+08:00 | 2026-03-21T11:00:00+08:00 | Client discussion | Zoom | test@gmail.com,test2@gmail.com | pending |
| Meeting C | 2026-03-22T10:00:00+08:00 | 2026-03-22T11:00:00+08:00 | Client discussion | Google Meet | test3@gmail.com | created |
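The status check and row-to-event mapping can be sketched as follows. The payload shape follows the Google Calendar API events resource; treat it as a sketch to verify against the node's field mapping:

```javascript
// Decide what to do with a row based on its status column (see table above).
function routeRow(row) {
  const status = (row.status || "").toLowerCase();
  if (status === "pending" || status === "failed") return "create";
  return "duplicate";
}

// Map a sheet row to a Google Calendar event payload.
// Attendees is a comma-separated email list in the sheet.
function toEventPayload(row) {
  return {
    summary: row.summary,
    description: row.description,
    location: row.location,
    start: { dateTime: row.start },
    end: { dateTime: row.end },
    attendees: (row.attendees || "")
      .split(",")
      .filter(Boolean)
      .map((email) => ({ email: email.trim() })),
  };
}
```

Routing already-created rows straight to "duplicate" is what prevents the same meeting from being inserted twice on a re-run.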
by Anurag
Description

This workflow automates the download of new or updated files from a Google Drive folder, processing only files changed since the last run using a timestamp control file.

How It Works
1. Triggered on a schedule.
2. Checks for an n8n_last_run.txt file in your Google Drive to read when the workflow last ran. If the file is missing, defaults to processing changes from the last 24 hours.
3. Searches for new or modified files in your specified folder.
4. Downloads new/changed files.
5. Replaces the timestamp file with the current time for future runs.

Setup Steps
1. Set up your Google Drive credentials in n8n.
2. Find the Folder ID of the Google Drive folder you wish to monitor.
3. Edit all Google Drive nodes: select your credentials and paste the Folder ID.
4. Adjust the schedule trigger if needed.
5. Activate the workflow.

Features
- No duplicate file processing (idempotent)
- Handles missing timestamp files
- Clear logical sticky notes in the editor
- Modular, extendable design

Prerequisites
- Google Drive API credentials connected to n8n
- Target Google Drive folder accessible by the credentials
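The cutoff logic in step 2, use the control file's timestamp if it exists and parses, otherwise fall back to 24 hours ago, can be sketched in a Code node:

```javascript
// Compute the "modified after" cutoff for the Drive search:
// the last run time from n8n_last_run.txt if available, else 24 hours ago.
function changesSince(lastRunContents, now = new Date()) {
  if (lastRunContents) {
    const parsed = new Date(lastRunContents.trim());
    if (!isNaN(parsed)) return parsed;
  }
  return new Date(now.getTime() - 24 * 60 * 60 * 1000);
}
```

The returned date feeds the Drive search filter, and writing the current time back to the control file at the end of the run is what keeps the workflow idempotent.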
by Robert Breen
This workflow checks a Google Sheet for new tasks (marked Added = No) and automatically creates them in a Monday.com board. Once added, the workflow updates the sheet to mark the task as Added = Yes.

⚙️ Setup Instructions

1️⃣ Prepare Your Google Sheet
- Copy this template to your own Google Drive: Google Sheet Template
- The first row should contain column names.
- Add your data in rows 2–100. Make sure each new task row starts with Added = No.

Connect Google Sheets in n8n:
- Go to n8n → Credentials → New → Google Sheets (OAuth2)
- Log in with your Google account and grant access.
- In the workflow, select your Spreadsheet ID and Worksheet Name.
- Optional: you can connect Airtable, Notion, or your database instead of Google Sheets.

2️⃣ Connect the Monday.com Node
- In Monday.com, go to Admin → API and copy your Personal API Token (docs: Generate Monday API Token).
- In n8n → Credentials → New → Monday.com API, paste your token and save.
- Open the Create Monday Task node → choose your credential → select your Board ID and Group ID.

📬 Contact

Need help customizing this (e.g., mapping more fields, syncing statuses, or updating timelines)?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
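Under the hood, the Monday.com node issues a GraphQL mutation. A hedged sketch of building the equivalent request yourself, useful if you swap the node for an HTTP Request node; `create_item` is Monday's documented mutation, while the board/group/task values are placeholders:

```javascript
// Build a GraphQL create_item mutation for the Monday.com API.
// boardId and groupId come from the node settings; taskName from the sheet row.
function createItemMutation(boardId, groupId, taskName) {
  const escaped = taskName.replace(/"/g, '\\"'); // keep the query string valid
  return `mutation { create_item (board_id: ${boardId}, group_id: "${groupId}", item_name: "${escaped}") { id } }`;
}
```

POST the string as `{ "query": ... }` to Monday's API endpoint with your Personal API Token in the Authorization header; the response's `id` can then be written back to the sheet alongside Added = Yes.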