by Alex Berman
## Who is this for

This workflow is built for real estate investors, private investigators, recruiters, and sales teams who need to skip trace individuals -- finding contact details, addresses, and phone numbers from a name, email, or phone number -- and store the enriched records automatically in Notion.

## How it works

1. A user fills out an n8n form with one or more search inputs (name, email, or phone number).
2. The workflow submits that data to the ScraperCity People Finder API, which begins an async enrichment job.
3. The workflow then polls the job status every 60 seconds until it completes.
4. Once the scrape succeeds, the results are downloaded, parsed, deduplicated, and each enriched person record is written as a new page in a Notion database.

## How to set up

1. Create a ScraperCity account at scrapercity.com and copy your API key.
2. In n8n, create an HTTP Header Auth credential named "ScraperCity API Key" with the name `Authorization` and value `Bearer YOUR_KEY`.
3. Create a Notion integration and share your target database with it, then create a Notion credential in n8n.
4. Open the Configure Search Defaults node and set your preferred `max_results` value.
5. Open the Save Person Record to Notion node and set your Notion Database ID.

## Requirements

- ScraperCity account (scrapercity.com) with People Finder access
- n8n instance (cloud or self-hosted)
- Notion workspace with a database for storing people records

## How to customize the workflow

- Change the form fields to accept bulk CSV input instead of a single name.
- Add a Filter node after parsing to only save records that include a valid phone number.
- Route results to Google Sheets or HubSpot instead of Notion by swapping the final node.
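The deduplication step described above could be sketched in an n8n Code node like this. The field names (`name`, `phone`, `email`) are assumptions about the parsed records, not the template's guaranteed schema:

```javascript
// Minimal sketch of the dedupe step, assuming parsed person records
// expose `name`, `phone`, and `email` fields (field names are
// assumptions, not ScraperCity's actual output schema).
function dedupeRecords(records) {
  const seen = new Set();
  return records.filter((record) => {
    // Build a key from whichever identifiers are present.
    const key = [record.email, record.phone, record.name]
      .filter(Boolean)
      .join('|')
      .toLowerCase();
    if (!key || seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```

Keying on whichever identifiers exist (rather than only the name) avoids dropping two different people who happen to share a name but have distinct emails or phones.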
by Alex Berman
## Who is this for

This workflow is built for real estate investors, wholesalers, and skip tracers who need to find contact details -- phone numbers, emails, and addresses -- for property owners at scale. It automates the entire lookup process using the ScraperCity People Finder API and stores clean results in Airtable for follow-up.

## How it works

1. A manual trigger starts the workflow.
2. A configuration node lets you define the list of property owner names (or phones/emails) to look up.
3. The workflow submits a skip trace job to the ScraperCity People Finder API, which returns a runId for async tracking.
4. An async polling loop checks the job status every 60 seconds until the result is marked SUCCEEDED.
5. Once complete, the workflow downloads the results CSV and parses each contact record using a Code node.
6. Duplicate records are removed, and each unique contact is synced into an Airtable base as a new row with name, phone, email, and address fields.

## How to set up

1. Create a ScraperCity API credential in n8n (HTTP Header Auth, header name `Authorization`, value `Bearer YOUR_KEY`).
2. Update the Configure Search Inputs node with your target names, phones, or emails.
3. Connect your Airtable credential and set your Base ID and Table name in the Sync Contacts to Airtable node.

## Requirements

- ScraperCity account with People Finder access (scrapercity.com)
- Airtable account with a base set up to receive contact data

## How to customize the workflow

- Change `max_results` in Configure Search Inputs to return more contacts per person.
- Swap the Airtable node for a Google Sheets node if preferred.
- Add a Filter node after parsing to keep only records that have a verified phone number.
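The "parse each contact record" Code node step could look roughly like this. It assumes a simple comma-separated export with a header row and no quoted commas; the column names are illustrative, not guaranteed by ScraperCity:

```javascript
// Minimal sketch of parsing the downloaded results CSV into contact
// objects. Assumes a header row and no quoted/embedded commas --
// a real Code node would want a proper CSV parser for edge cases.
function parseContactsCsv(csv) {
  const [headerLine, ...rows] = csv.trim().split('\n');
  const headers = headerLine.split(',').map((h) => h.trim());
  return rows
    .filter((row) => row.trim() !== '')
    .map((row) => {
      const values = row.split(',');
      // Pair each header with the value in the same column position.
      return Object.fromEntries(
        headers.map((h, i) => [h, (values[i] || '').trim()])
      );
    });
}
```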
by Fahmi Fahreza
# Automated Multi-Bank Balance Sync to BigQuery

This workflow automatically fetches balances from multiple financial institutions (RBC, Amex, Wise, PayPal) using Plaid, maps them to QuickBooks account names, and loads structured records into Google BigQuery for analytics.

## Who's it for?

Finance teams, accountants, and data engineers managing consolidated bank reporting in Google BigQuery.

## How it works

1. The Schedule Trigger runs weekly.
2. Four Plaid API calls fetch balances from RBC, Amex, Wise, and PayPal.
3. Each response splits out individual accounts and maps them to QuickBooks names.
4. All accounts are merged into one dataset.
5. The workflow structures the account data, generates UUIDs, and formats SQL inserts.
6. The BigQuery node uploads the finalized records.

## How to set up

Add Plaid and Google BigQuery credentials, replace client IDs and secrets with variables, test each connection, and schedule the trigger for your reporting cadence.
by Yusei Miyakoshi
## Who's it for

This template is for teams that want to stay updated on industry trends, tech news, or competitor mentions without manually browsing news sites. It's ideal for marketing, development, and research teams who use Slack as their central hub for automated, timely information.

## What it does / How it works

This workflow runs on a daily schedule (default 9 AM), fetches the top articles from Hacker News for a specific keyword you define (e.g., 'AI'), and uses an AI agent with OpenRouter to generate a concise, 3-bullet-point summary in Japanese for each article. The final formatted summary, including the article title, is then posted to a designated Slack channel. The entire process is guided by descriptive sticky notes on the canvas, explaining each configuration step.

## How to set up

1. In the Configure Your Settings node, change the default keyword AI to your topic of interest and update the slack_channel to your target channel name.
2. Click the OpenRouter Chat Model node and select your OpenRouter API key from the Credentials dropdown. If you haven't connected it yet, you will need to create a new credential.
3. Click the Send Summary to Slack node and connect your Slack account using OAuth2 credentials.
4. (Optional) Adjust the schedule in the Trigger Daily at 9 AM node to change how often the workflow runs.
5. Activate the workflow.

## Requirements

- An n8n instance (Cloud or self-hosted).
- A Slack account and workspace.
- An OpenRouter API key stored in your n8n credentials.
- If self-hosting, ensure the LangChain nodes are enabled.

## How to customize the workflow

- **Change the News Source:** Replace the **Hacker News** node with an **RSS Feed Read** node or another news integration to pull articles from different sources.
- **Modify the AI Prompt:** In the **Summarize Article with AI** node, you can edit the system message to change the summary language, length, or tone.
- **Use a Different AI Model:** Swap the **OpenRouter** node for an **OpenAI**, **Anthropic**, or any other supported chat model.
- **Track Multiple Keywords:** Modify the workflow to loop through a list of keywords in the **Configure Your Settings** node to monitor several topics at once.
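The final formatting step could be sketched as a small Code node helper. The input shape (`title`, `url`, `bullets`) is an assumption about the AI agent's structured output, not the template's exact schema:

```javascript
// Sketch of formatting one summarized article for Slack, assuming
// the AI step yields { title, url, bullets } (an assumption about
// the structured output, not the template's schema).
function formatSlackSummary(article) {
  const bullets = article.bullets.map((b) => `• ${b}`).join('\n');
  // Slack mrkdwn: *bold* title, then bullets, then a raw link.
  return `*${article.title}*\n${bullets}\n<${article.url}>`;
}
```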
by Port IO
Complete incident workflow from detection through resolution to post-mortem, with full organizational context from Port's catalog. This template handles both incident triggered and resolved events from PagerDuty, automatically creating Jira tickets with context, notifying teams via Slack, calculating MTTR, and using Port AI Agents to schedule post-mortem meetings and create documentation.

## How it works

The n8n workflow orchestrates the following steps:

**On Incident Triggered:**

1. **PagerDuty webhook** — Receives incident events from PagerDuty via POST request.
2. **Event routing** — Routes to the triggered or resolved flow based on event type.
3. **Port context enrichment** — Uses Port's n8n node to query your software catalog for service context, on-call engineers, recent deployments, runbooks, and past incidents.
4. **AI severity assessment** — OpenAI assesses severity based on Port context and recommends investigation actions.
5. **Escalation routing** — Critical incidents automatically escalate to a leadership Slack channel.
6. **Jira ticket creation** — Creates an incident ticket with full context, an investigation checklist, and recommended actions.
7. **Team notification** — Notifies the team's Slack channel with incident details and resources.

**On Incident Resolved:**

1. **Port context extraction** — Gets post-incident context from Port, including stakeholders and documentation spaces.
2. **MTTR calculation** — Calculates mean time to resolution from incident timestamps.
3. **Post-mortem generation** — AI generates a structured post-mortem template with a timeline.
4. **Port AI Agent scheduling** — Triggers a Port AI Agent to schedule the post-mortem meeting, invite stakeholders, and create documentation.
5. **Resolution notification** — Notifies the team with MTTR, the post-mortem document link, and meeting details.
6. **Metrics logging** — Logs MTTR metrics back to Port for service reliability tracking.
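The MTTR calculation step could be sketched like this. It assumes the incident payload carries ISO-8601 `created_at` and `resolved_at` timestamps; the field names are assumptions about the PagerDuty event shape:

```javascript
// Sketch of the MTTR step: minutes between incident creation and
// resolution. Timestamp field names are assumptions about the
// PagerDuty payload, not a verified contract.
function mttrMinutes(incident) {
  const start = Date.parse(incident.created_at);
  const end = Date.parse(incident.resolved_at);
  if (Number.isNaN(start) || Number.isNaN(end)) {
    throw new Error('incident is missing a timestamp');
  }
  return Math.round((end - start) / 60000);
}
```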
## Setup

- [ ] Register for free on Port.io
- [ ] Configure Port with services, on-call schedules, and deployment history
- [ ] Set up Port AI agents for post-mortem scheduling
- [ ] Connect the PagerDuty webhook for incident events
- [ ] Configure a Jira project for incident tickets (use project key 'INC' or customize)
- [ ] Set up Slack channels for alerts (#incidents and #leadership-alerts)
- [ ] Add OpenAI credentials for severity assessment
- [ ] Test with a sample incident event
- [ ] You should be good to go!

## Prerequisites

- You have a Port account and have completed the onboarding process.
- Port's integrations are configured (GitHub, Jira, PagerDuty if available).
- You have a working n8n instance (Cloud or self-hosted) with Port's n8n custom node installed.
- PagerDuty account with webhook capabilities.
- Jira Cloud account with appropriate project permissions.
- Slack workspace with bot permissions to post messages.
- OpenAI API key for severity assessment and post-mortem generation.

⚠️ This template is intended for self-hosted instances only.
by Intuz
This n8n template from Intuz provides a complete solution to automate the syncing of new subscribers from Google Sheets to MailerLite. It intelligently identifies and adds only new contacts, preventing duplicates and ensuring your email lists are clean and accurate.

## Who's this workflow for?

- Marketing Teams
- Email Marketers
- Small Business Owners
- Community Managers

## How it works

1. **Read from Google Sheets:** The workflow begins by reading all contact rows from your designated Google Sheet.
2. **Check for Existing Subscribers:** For each contact, it performs a search in MailerLite to check if a subscriber with that email address already exists.
3. **Handle Duplicates:** If the subscriber is found in MailerLite, the workflow stops processing that specific contact, preventing any duplicates from being created.
4. **Create New Subscribers:** If the contact is not found, the workflow creates a new subscriber in MailerLite, using all the details from the Google Sheet (like name, company, and country), and assigns them to the specified group.

## Setup Instructions

1. **Google Sheets Setup:** Connect your Google Sheets account to n8n. Create a sheet with the required columns: Email, first_name, last_name, Company, Country, and group_id. In the Get row(s) in sheet node, select your credentials and specify the Document ID and Sheet Name.
2. **MailerLite Setup:** Connect your MailerLite account to n8n using your API key. In both the Get a subscriber and Create subscriber nodes, select your MailerLite credentials. Make sure the group_id values in your Google Sheet correspond to valid Group IDs in your MailerLite account.
3. **Activate Workflow:** Save the workflow and click "Execute workflow" to run the sync whenever you need to update your subscriber list.

## Connect with us

- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz

For custom workflow automation, click here: Get Started
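The mapping from one sheet row to the create-subscriber call could be sketched like this. The column names follow the setup section (Email, first_name, last_name, Company, Country, group_id); the payload shape loosely mirrors MailerLite's API and should be treated as an assumption, not a verified contract:

```javascript
// Sketch of mapping a Google Sheet row to a MailerLite
// create-subscriber payload. The payload shape is an assumption
// based on MailerLite's API, hedged as illustrative only.
function toSubscriberPayload(row) {
  return {
    email: row.Email,
    fields: {
      name: row.first_name,
      last_name: row.last_name,
      company: row.Company,
      country: row.Country,
    },
    // MailerLite group IDs are sent as strings here.
    groups: [String(row.group_id)],
  };
}
```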
by Michael Taleb
## Workflow Summary

This automation keeps your Supabase vector database synchronized with documents stored in Google Drive, while also making the data contextual and vector-based for better retrieval. When a file is added or modified, the workflow extracts its text, splits it into smaller chunks, and enriches each chunk with contextual metadata (such as summaries and document details). It then generates embeddings using OpenAI and stores both the vector data and metadata in Supabase. If a file changes, the old records are replaced with updated, contextualized content. The result is a continuously updated and context-aware vector database, enabling highly accurate hybrid search and retrieval.

## To set up

1. **Connect Google Drive**
   - Create a Google Drive folder to watch.
   - Connect your Google Drive account in n8n and authorize access.
   - Point the Google Drive Trigger node to this folder (new/modified files trigger the flow).
2. **Configure Supabase**
   - Refer to the Setting Up Supabase sticky note.
3. **Connect OpenAI (or your embedding model)**
   - Add your OpenAI API key in n8n credentials.
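The "split into smaller chunks" step could be sketched as an overlapping character splitter. The size and overlap values here are illustrative defaults, not the template's actual settings:

```javascript
// Minimal sketch of chunking extracted text into overlapping
// character windows before embedding. Overlap keeps context that
// straddles a chunk boundary retrievable from both neighbors.
function chunkText(text, size = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}
```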
by Tushar Mishra
## 1. Data Ingestion Workflow (Left Panel – Pink Section)

This part collects data from the ServiceNow Knowledge Article table, processes it into embeddings, and stores it in Qdrant.

Steps:

1. **Trigger: When clicking 'Execute workflow'** – The workflow starts manually when you click Execute workflow in n8n.
2. **Get Many Table Records** – Fetches multiple records from the ServiceNow Knowledge Article table. Each record typically contains knowledge article content that needs to be indexed.
3. **Default Data Loader** – Takes the fetched data and structures it into a format suitable for text splitting and embedding generation.
4. **Recursive Character Text Splitter** – Splits large text (e.g., long knowledge articles) into smaller, manageable chunks for embeddings. This step ensures that each text chunk can be properly processed by the embedding model.
5. **Embeddings OpenAI** – Uses OpenAI's Embeddings API to convert each text chunk into a high-dimensional vector representation. These embeddings are essential for semantic search in the vector database.
6. **Qdrant Vector Store** – Stores the generated embeddings along with metadata (e.g., article ID, title) in the Qdrant vector database. This database will later be used for similarity searches during chatbot interactions.

## 2. RAG Chatbot Workflow (Right Panel – Green Section)

This section powers the Retrieval-Augmented Generation (RAG) chatbot that retrieves relevant information from Qdrant and responds intelligently.

Steps:

1. **Trigger: When chat message received** – Starts when a user sends a chat message to the system.
2. **AI Agent** – Acts as the orchestrator, combining memory, tools, and LLM reasoning. Connects to the OpenAI Chat Model and Qdrant Vector Store.
3. **OpenAI Chat Model** – Processes user messages and generates responses, enriched with context retrieved from Qdrant.
4. **Simple Memory** – Stores conversational history or context to ensure continuity in multi-turn conversations.
5. **Qdrant Vector Store1** – Performs a similarity search on stored embeddings using the user's query and retrieves the most relevant knowledge article chunks for the chatbot.
6. **Embeddings OpenAI** – Converts the user query into embeddings for vector search in Qdrant.
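The record each chunk becomes before it is stored in Qdrant could be sketched like this. The payload fields mirror the metadata the ingestion steps mention (article ID, title); the exact field names are assumptions:

```javascript
// Sketch of one Qdrant point built from a knowledge-article chunk.
// `sys_id` and `title` stand in for the ServiceNow article metadata
// the section mentions; field names are illustrative assumptions.
function toQdrantPoint(article, chunkIndex, chunkText, embedding) {
  return {
    id: `${article.sys_id}-${chunkIndex}`,
    vector: embedding,
    payload: {
      sys_id: article.sys_id,
      title: article.title,
      text: chunkText,
    },
  };
}
```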
by Ryo Sayama
Automate your Bitcoin content pipeline by turning the latest CoinDesk headlines into structured Japanese summaries posted to Discord every six hours, completely hands-free.

## Who is this for

Crypto traders, Discord community managers, and content creators who want to keep their audience updated on Bitcoin news without writing posts manually. No coding required.

## What this workflow does

1. On a six-hour schedule, the workflow fetches the three newest articles from the CoinDesk RSS feed.
2. Each article is sent to Google Gemini 2.5 Flash via a Basic LLM Chain node.
3. A Structured Output Parser then extracts the AI response into four clean fields: a one-line Japanese summary, a brief AI commentary, the article URL, and hashtags.
4. The structured post is delivered to a Discord channel via webhook.
5. Each post is also saved to Google Sheets for auditing, and a Slack notification confirms every successful run.

## How to set up

1. Open the Set config values node and fill in your Google Sheet ID, Discord Webhook URL, and hashtags.
2. Add your Google Gemini API credential to the Gemini 2.5 Flash node (free tier works).
3. In Discord, go to your channel settings → Integrations → Webhooks, create a new webhook, and paste the URL into Set config values.
4. Connect your Google Sheets OAuth2 credential and set the target sheet name.
5. Add your Slack credential and update the channel ID in the Notify Slack node.
6. Activate the workflow; it will run automatically every six hours.

## Requirements

- Google Gemini API key (free tier works)
- Discord server with webhook permission
- Google Sheets OAuth2 credential
- Slack API credential

## How to customize

- Swap the RSS URL in Set config values to follow Ethereum, Solana, or any crypto feed.
- Edit the prompt inside Basic LLM Chain to change the language or post style.
- Update the hashtags in Set config values.
- Adjust the posting interval in the Schedule Trigger node (default: every 6 hours).
- Add a Filter node between RSS and LLM to post only articles matching specific keywords.
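Assembling the Discord webhook body from the four parsed fields could be sketched like this. The field names (`summary`, `commentary`, `url`, `hashtags`) are assumptions about the Structured Output Parser schema; `content` is the standard text field a Discord webhook accepts:

```javascript
// Sketch of building the Discord webhook JSON body from the four
// structured fields the parser extracts. Field names are assumptions
// about this template's output schema.
function toDiscordPayload(post) {
  return {
    content: [post.summary, post.commentary, post.url, post.hashtags].join('\n'),
  };
}
```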
by Harshil Agrawal
This workflow allows you to create, update, and get a post using the Discourse node.

- **Discourse node:** Creates a new post under a category. Based on your use case, you can select a different category.
- **Discourse1 node:** Updates the content of the post.
- **Discourse2 node:** Fetches the post that we created using the Discourse node.

Based on your use case, you can add or remove nodes to connect Discourse to different services.
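Under the hood, these nodes wrap Discourse's REST API. A rough sketch of the three operations as plain request descriptors; the paths follow Discourse's public API documentation but should be treated as illustrative, not verified against your Discourse version:

```javascript
// Sketch of the three Discourse operations as request descriptors.
// Paths are based on Discourse's public API docs (an assumption
// here); real calls also need Api-Key / Api-Username headers.
function discourseRequest(op, { baseUrl, postId, raw, title, category }) {
  switch (op) {
    case 'create':
      return { method: 'POST', url: `${baseUrl}/posts.json`, body: { title, raw, category } };
    case 'update':
      return { method: 'PUT', url: `${baseUrl}/posts/${postId}.json`, body: { post: { raw } } };
    case 'get':
      return { method: 'GET', url: `${baseUrl}/posts/${postId}.json` };
    default:
      throw new Error(`unknown operation: ${op}`);
  }
}
```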
by Satva Solutions
# Automated Stripe Payment to QuickBooks Sales Receipt

This n8n workflow seamlessly connects Stripe and QuickBooks Online to keep your accounting in perfect sync. Whenever a payment in Stripe succeeds, the workflow automatically checks if the corresponding customer exists in QuickBooks. If found, it instantly creates a Sales Receipt under that customer. If not, it creates the customer first, then logs the sale.

## Key Features

- ⚡ **Real-Time Sync:** Automatically triggers when a Stripe payment intent succeeds.
- 👤 **Smart Customer Matching:** Searches for existing customers in QuickBooks to prevent duplicates.
- 🧾 **Automated Sales Receipts:** Creates accurate sales records for every successful Stripe payment.
- 🔄 **End-to-End Automation:** Handles customer creation, receipt generation, and data consistency without manual entry.

## Requirements

A running n8n instance, plus active Stripe and QuickBooks Online accounts with API access.
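The "smart customer matching" branch could be sketched like this: inspect the QuickBooks customer query result and decide whether to reuse the existing customer or create one first. The response shape (`QueryResponse.Customer`) follows QuickBooks Online's query API, hedged here as an assumption:

```javascript
// Sketch of the customer-matching decision. The nesting
// (QueryResponse.Customer) mirrors QuickBooks Online's query
// responses; treat it as an illustrative assumption.
function resolveCustomer(queryResult) {
  const customers =
    (queryResult.QueryResponse && queryResult.QueryResponse.Customer) || [];
  return customers.length > 0
    ? { action: 'use-existing', customerId: customers[0].Id }
    : { action: 'create-first' };
}
```

Branching on this result is what lets the workflow guarantee a Sales Receipt is always attached to exactly one QuickBooks customer, never a duplicate.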
by Open Paws
## Who is this for

This workflow is designed for sales professionals, recruiters, and researchers who need to:

- Build comprehensive profiles of individuals from public sources
- Understand communication and personality styles before outreach
- Find verified contact information
- Research legal and public record history for individuals

It's ideal for animal advocacy campaigns targeting corporate decision-makers, researchers profiling legislators on animal welfare issues, and activists preparing for meetings with executives at companies being asked to adopt animal welfare policies.

## What it does

This multi-source OSINT agent creates comprehensive individual profiles:

- **Personality analysis:** Uses Humantic AI to analyze LinkedIn profiles for communication preferences, personality traits, and engagement recommendations
- **Contact discovery:** Uses Hunter.io to find and verify professional email addresses
- **Legal research:** Searches CourtListener for any court cases involving the individual
- **Legislation involvement:** Checks LegiScan for any legislative activity or testimony
- **Document search:** Searches DocumentCloud for government documents mentioning the person
- **Web research:** Uses Serper to find news articles, publications, and public appearances
- **Synthesis:** Combines all findings into an actionable intelligence report

The workflow waits for Humantic AI profile generation (45 seconds) before retrieving the complete personality analysis.
## How to set up

1. Import the workflow into your n8n instance.
2. Configure the required API credentials:
   - Humantic AI API for personality analysis
   - Hunter API for email finding
   - CourtListener API for court case searches
   - LegiScan API for legislation searches
   - Serper API for web searches
   - Jina AI API for content extraction
   - OpenRouter API for AI synthesis
3. Test with a public figure to verify all integrations.
4. Activate the workflow.

## Example usage

```json
{
  "firstName": "John",
  "lastName": "Davis",
  "companyName": "Smithfield Foods",
  "companyDomain": "smithfieldfoods.com",
  "linkedinURL": "https://linkedin.com/in/johndavis",
  "reportGoal": "Prepare for corporate campaign meeting - understand decision-making authority, communication style, and any public statements on animal welfare"
}
```

## Requirements

- Humantic AI API key
- Hunter API key
- CourtListener API key
- LegiScan API key
- Serper API key
- Jina AI API key
- OpenRouter API key

## How to customize

- **Skip personality analysis:** Remove the Humantic AI nodes if you only need factual research
- **Add social media:** Integrate Twitter/X or other social platform analysis to track public statements on animal issues
- **Extend contact finding:** Add additional email verification or phone number lookup services
- **Customize report format:** Adjust the final synthesis prompt for campaign briefings, legislator profiles, or corporate target research
- **Add campaign database integration:** Connect output directly to your advocacy CRM or campaign tracking system
- **Batch processing:** Wrap the workflow to process multiple decision-makers from a target company list