by Airtop
Scoring LinkedIn Profiles Against Your ICP

**Use Case**

This automation scores individual LinkedIn profiles against your Ideal Customer Profile (ICP) based on interest in AI, technical depth, and seniority level. It's ideal for prioritizing leads and understanding how well a person fits your ICP criteria.

**What This Automation Does**

Given a LinkedIn profile and an Airtop profile, it:
- Extracts relevant data from the person's profile
- Determines levels of AI interest, seniority, and technical depth
- Calculates an ICP score based on weighted criteria
- Returns the full enriched profile with the score

Input parameters:
- **LinkedIn Profile URL** (e.g., https://linkedin.com/in/janedoe)
- **Airtop Profile** connected to LinkedIn
- **ICP scoring method** in the Airtop node prompt

Output fields in JSON format:
- Full name, job title, employer, company LinkedIn URL, location, number of connections and followers, about section content, and more
- Calculated ICP Score (out of 95)

**How It Works**

1. Form Trigger or Workflow Trigger: Accepts input from either a form or another workflow.
2. Parameter Assignment: Ensures proper variable names for downstream nodes.
3. Airtop Enrichment Tool: Extracts and scores the person based on a detailed prompt.
4. Scoring: Uses this point system (a sketch of the calculation follows this description):
   - AI Interest: beginner (5), intermediate (10), advanced (25), expert (35)
   - Technical Depth: basic (5), intermediate (15), advanced (25), expert (35)
   - Seniority Level: junior (5), mid-level (15), senior (25), executive (30)
5. Output Formatting: Cleans and returns the result as JSON.

**Setup Requirements**

- IMPORTANT: Enter your ICP scoring method in the prompt field of the Airtop node.
- Airtop Profile connected to LinkedIn.
- Airtop API credentials configured in n8n.
- Optional: a front-end form to collect profile URLs and trigger the automation.

**Next Steps**

- **Embed in CRM**: Trigger this automation on new leads to auto-score them.
- **Batch Process Leads**: Run it over a list of profile URLs for segmentation.
- **Customize Scoring**: Adjust point weights based on your sales priorities.

Read more about ICP Scoring with Airtop and n8n.
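As an illustration of the point system above, here is a minimal sketch of how the score could be recomputed in an n8n Code node. The field names (`aiInterest`, `technicalDepth`, `seniorityLevel`) are assumptions made for the example, not the exact keys the Airtop node returns; in the template itself the scoring is performed by the prompt in the Airtop node.

```javascript
// Code node sketch: recompute the ICP score from the levels extracted upstream.
// Field names below are illustrative -- align them with your Airtop node's output.
const weights = {
  aiInterest:     { beginner: 5, intermediate: 10, advanced: 25, expert: 35 },
  technicalDepth: { basic: 5, intermediate: 15, advanced: 25, expert: 35 },
  seniorityLevel: { junior: 5, 'mid-level': 15, senior: 25, executive: 30 },
};

const profile = $json;
const icpScore =
  (weights.aiInterest[profile.aiInterest] ?? 0) +
  (weights.technicalDepth[profile.technicalDepth] ?? 0) +
  (weights.seniorityLevel[profile.seniorityLevel] ?? 0);

// Return the enriched profile with the calculated score attached
return [{ json: { ...profile, icpScore } }];
```

Adjusting the point weights to your sales priorities (the last item under Next Steps) then becomes a one-line change per level.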
by Shahrear
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Automatically transform audio files into professional transcription reports with AI-powered speech recognition, timestamp generation, and formatted Google Docs output.

**What this workflow does**
- Monitors Gmail for incoming audio attachments
- Downloads and processes audio files using VLM Run AI transcription
- Generates accurate transcriptions with precise timestamps and segmentation
- Creates professional reports in Google Docs with formatted output (a formatting sketch appears at the end of this description)
- Handles asynchronous processing for long audio files without timeouts

**Setup**

Prerequisites: Gmail account, VLM Run API credentials, Google Docs access, self-hosted n8n. You need to install the VLM Run community node.

Quick setup:
1. Configure Gmail OAuth2 for email monitoring
2. Add VLM Run API credentials for audio transcription
3. Set up Google Docs OAuth2 for report generation
4. Create a target Google Doc for transcription reports
5. Update the document URL in the workflow nodes
6. Test with a sample audio file and activate

**Perfect for**
- Meeting recordings and conference calls
- Voice memos and dictation workflows
- Interview transcriptions and journalism
- Podcast episode documentation
- Accessibility compliance and documentation
- Legal proceedings and court recordings
- Educational content and lecture notes
- Customer service call analysis

**Key Benefits**
- **Human-level accuracy** - advanced AI speech recognition with automatic punctuation
- **Timestamp precision** - segmented transcriptions with exact time markers
- **Multi-format support** - handles MP3, WAV, M4A, AAC, OGG, and FLAC files
- **Asynchronous processing** - no timeouts for long audio files
- **Professional formatting** - beautifully structured Google Docs reports
- **Automatic workflow** - zero manual intervention required
- **Saves hours per recording** - transforms manual transcription into instant results
- **Searchable documentation** - Google Docs integration enables easy content discovery

**How to customize**

Extend by adding:
- Speaker identification and diarization
- Integration with project management tools (Notion, Asana, Trello)
- Automatic summary generation from transcripts
- Translation to multiple languages
- Slack notifications for completed transcriptions
- Integration with CRM systems for call logging
- Audio quality enhancement preprocessing
- Custom formatting templates for different use cases
- Automatic keyword extraction and tagging
- Integration with calendar systems for meeting context

This workflow revolutionizes audio documentation by combining cutting-edge AI transcription with professional report generation, making spoken content instantly accessible, searchable, and shareable across your organization.
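For the report-formatting step, a small Code node placed before the Google Docs node could flatten segment-level output into timestamped lines. The segment shape (`start`, `end`, `text`, with times in seconds) is an assumption for this sketch; check the actual VLM Run node output and adjust the keys.

```javascript
// Code node sketch: turn transcription segments into timestamped report text.
// The segment structure is assumed -- verify it against the VLM Run node's output.
const segments = $json.segments || [];

const toTimestamp = (seconds) => {
  const m = String(Math.floor(seconds / 60)).padStart(2, '0');
  const s = String(Math.floor(seconds % 60)).padStart(2, '0');
  return `${m}:${s}`;
};

const lines = segments.map(
  (seg) => `[${toTimestamp(seg.start)} - ${toTimestamp(seg.end)}] ${seg.text}`
);

// Single text blob that the Google Docs node can insert into the report document
return [{ json: { reportText: lines.join('\n') } }];
```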
by ist00dent
This n8n template enables you to instantly generate high-quality screenshots of any specified public URL by simply sending a webhook request. It's an indispensable tool for developers, content creators, marketers, or anyone needing on-demand visual captures of web pages without manual intervention, all while including crucial security measures.

**How it works**
1. Receive URL Webhook: This node acts as the entry point for the workflow. It listens for incoming POST requests and expects a JSON body containing a url property with the website you want to screenshot. You can trigger it from any application or service capable of sending an HTTP POST request.
2. Validate URL for SSRF: This is a crucial security step. This Function node validates the incoming url to prevent Server-Side Request Forgery (SSRF) vulnerabilities. It checks for valid http:// or https:// protocols and, more importantly, ensures the URL does not attempt to access internal/private IP addresses or localhost. If the URL is deemed unsafe or invalid, it flags it for an error response. (A minimal sketch of this validation appears at the end of this description.)
3. IF URL Valid: This IF node checks the isValidUrl flag set by the previous validation step. If the URL is valid (true), the workflow proceeds to take the screenshot. If the URL is invalid or flagged for security (false), the workflow branches to Respond with Validation Error.
4. Take Screenshot: This node sends an HTTP GET request to the ScreenshotMachine API to capture an image of the validated URL. Remember to replace YOUR_API_KEY in the URL field of this node with your actual API key from ScreenshotMachine.
5. Respond with Screenshot Data: This node sends the data received directly from the Take Screenshot node back to the original caller of the webhook. This response typically includes information about the generated screenshot, such as the URL to the image file, success status, and other metadata from the ScreenshotMachine API.
6. Respond with Validation Error: If the IF URL Valid node determines the URL is unsafe or invalid, this node sends a descriptive error message back to the webhook caller, explaining why the request was denied due to security concerns or an invalid format.

**Security Considerations**

This template includes a dedicated Validate URL for SSRF node to mitigate Server-Side Request Forgery (SSRF) vulnerabilities. SSRF attacks occur when an attacker can trick a server-side application into making requests to an unintended location. Without validation, an attacker could potentially use your n8n workflow to scan internal networks, access sensitive internal resources, or attack other services from your n8n server. The validation checks for:
- Only http:// or https:// protocols.
- Prevention of localhost or common private IP ranges (e.g., 10.x.x.x, 172.16.x.x - 172.31.x.x, 192.168.x.x).

While this validation adds a significant layer of security, always ensure your n8n instance is properly secured and updated.

**Who is it for?**

This workflow is ideal for:
- Developers: Automate screenshot generation for testing, monitoring, or integrating visual content into applications.
- Content Creators: Quickly grab visuals for articles, presentations, or social media posts.
- Marketing Teams: Create dynamic visual assets for campaigns, ads, or competitive analysis.
- Automation Enthusiasts: Integrate powerful screenshot capabilities into existing automated workflows.
- Website Owners: Monitor how your website appears across different tools or over time.

**Prerequisites**

To use this template, you will need:
- An n8n instance (cloud or self-hosted).
- An API key from ScreenshotMachine. You can obtain one by signing up on their website: https://www.screenshotmachine.com/

**Data Structure**

When you trigger the webhook, send a POST request with a JSON body structured as follows:

{ "url": "https://www.example.com" }

If the URL is valid, the workflow will return the JSON response directly from the ScreenshotMachine API. This response typically includes information about the generated screenshot, such as the URL to the image file, success status, and other metadata:

{ "status": "success", "hash": "...", "url": "https://www.screenshotmachine.com/...", "size": 12345, "mimetype": "image/jpeg" }

If the URL is invalid or blocked by the security validation, the workflow will return an error response similar to this:

{ "status": "error", "message": "Access to private IP addresses is not allowed for security reasons." }

**Setup Instructions**
1. Import Workflow: In your n8n editor, click "File" > "Import from JSON" and paste the provided workflow JSON.
2. Configure Webhook Path: Double-click the Receive URL Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /website-screenshot).
3. Add ScreenshotMachine API Key: Double-click the Take Screenshot node. In the 'URL' parameter, locate YOUR_API_KEY and replace it with your actual API key obtained from ScreenshotMachine. Example URL structure: http://api.screenshotmachine.com/?key=YOUR_API_KEY&url={{ $json.validatedUrl }}
4. Activate Workflow: Save and activate the workflow.

**Tips**

Processing Screenshots: You're not limited to just responding with the screenshot data. You can insert additional nodes after the Take Screenshot node (and before the Respond with Screenshot Data node) to further process or utilize the generated image. Common extensions include:
- Saving to Cloud Storage: Use nodes for Amazon S3, Google Drive, or Dropbox to store the screenshots automatically, creating an archive.
- Sending via Email: Attach the screenshot to an email notification using an Email or Gmail node for automated alerts or reports.
- Posting to Chat Platforms: Share the screenshot directly in a Slack, Discord, or Microsoft Teams channel for team collaboration or visual notifications.
- Image Optimization: Use an image processing node (if available via an API or a custom function) to resize, crop, or compress the screenshot before saving or sending.

Custom Screenshot Parameters: The ScreenshotMachine API supports various optional parameters (e.g., width, height, quality, delay, fullpage).
- Upgrade: Extend the Receive URL Webhook to accept these parameters in the incoming JSON body (e.g., {"url": "...", "width": 1024, "fullpage": true}).
- Leverage: Dynamically pass these parameters to the Take Screenshot HTTP Request node's URL to customize your screenshots for different use cases.

Scheduled Monitoring:
- Upgrade: Combine this workflow with a Cron or Schedule node. Set it to run periodically (e.g., daily, hourly).
- Leverage: Automatically monitor your website or competitors' sites for visual changes. You could then save screenshots to cloud storage and even trigger a comparison tool if a change is detected.

Automated Visual Regression Testing:
- Upgrade: After taking a screenshot, store it with a unique identifier. In subsequent runs, take a new screenshot, then use an external image comparison API or a custom function to compare the new screenshot with a baseline.
- Leverage: Get automated alerts if visual elements on your website change unexpectedly, which is critical for quality assurance.

Dynamic Image Generation for Social Media/Marketing:
- Upgrade: Feed URLs (e.g., for new blog posts, product pages) into this workflow. After generating the screenshot, use it to create dynamic social media images or marketing assets.
- Leverage: Streamline the creation of engaging visual content, saving design time.
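The sketch below illustrates the Validate URL for SSRF step described above as an n8n Function/Code node. The field names (`url`, `isValidUrl`, `validatedUrl`) follow this description, but the code is an illustration under those assumptions, not the template's exact node contents.

```javascript
// Code node sketch: basic SSRF guard for the incoming webhook URL.
const raw = $json.body?.url || $json.url || '';

try {
  const parsed = new URL(raw);

  // 1. Allow only http/https protocols
  if (!['http:', 'https:'].includes(parsed.protocol)) {
    return [{ json: { isValidUrl: false, message: 'Only http and https URLs are allowed.' } }];
  }

  // 2. Block localhost and common private IPv4 ranges
  const privateHost = [
    /^localhost$/i,
    /^127\./,
    /^10\./,
    /^192\.168\./,
    /^172\.(1[6-9]|2\d|3[01])\./,
  ].some((re) => re.test(parsed.hostname));

  if (privateHost) {
    return [{ json: { isValidUrl: false, message: 'Access to private IP addresses is not allowed for security reasons.' } }];
  }

  // Valid: pass the normalized URL downstream for the screenshot request
  return [{ json: { isValidUrl: true, validatedUrl: parsed.toString() } }];
} catch (err) {
  return [{ json: { isValidUrl: false, message: 'Invalid URL format.' } }];
}
```

Note that a hostname check like this does not resolve DNS, so a domain that points at an internal IP would still pass; resolving the host and checking the resulting address is a stricter variant if you need it.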
by Anthony
**What this workflow does**

LinkedIn tracks which Chrome extensions are installed in your browser. This workflow takes a large raw JSON of Chrome extension IDs, extracted from LinkedIn pages, and builds a clean Google Sheet listing these extensions. For each extension ID, it scrapes Google search results and extracts the first result (see the sketch below).

**Setup**
1. Clone this Google Sheet template: https://docs.google.com/spreadsheets/d/1nVtoqx-wxRl6ckP9rBHSL3xiCURZ8pbyywvEor0VwOY/edit?gid=0#gid=0
2. Get an API key for Google SERP API access here: https://rapidapi.com/restyler/api/serp-api1
3. Create an n8n header auth credential for the Google SERP API.

**Some context and discussion**

https://www.linkedin.com/feed/update/urn:li:activity:7245006911807393792/

Follow the author and get the final Google Sheet with 1,300+ Chrome extensions: https://www.linkedin.com/in/anthony-sidashin/
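A rough sketch of the "extract the first search result" step, written as a Code node placed after the SERP API request. The response shape (an `organic` array with `title`/`link` fields) is an assumption; inspect one real response from the RapidAPI endpoint and adjust the keys.

```javascript
// Code node sketch: keep only the first organic result for each extension ID.
// Response field names are assumed -- adapt them to the actual SERP API payload.
const results = $json.organic || $json.results || [];
const first = results[0] || {};

return [{
  json: {
    extensionId: $json.extensionId || '', // carried over from the previous node (illustrative)
    title: first.title || '',
    link: first.link || first.url || '',
  },
}];
```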
by Yang
**Who is this for?**

This workflow is for social media agencies, influencer marketers, and brand managers who need to automatically qualify TikTok creators based on their follower metrics. It's especially useful for teams managing influencer outreach campaigns or building talent databases.

**What problem is this workflow solving?**

Manually tracking TikTok user stats is time-consuming and inconsistent. This automation instantly pulls TikTok profile data and only saves creators who meet a defined follower threshold. It removes manual vetting, reduces spreadsheet work, and makes influencer qualification scalable.

**What this workflow does**

This workflow uses Airtable as the trigger, Dumpling AI to scrape TikTok profile information, and a logic condition to check if the profile has more than 100k followers (a sketch of the check follows this description). Qualified profiles are updated with full metrics and stored back in Airtable.

**Setup**

Airtable setup:
- Create a table with a field named Tik tok username
- Connect your Airtable account to n8n using a Personal Access Token
- Set up a trigger to run when a new TikTok username is added

Dumpling AI:
- Sign up at Dumpling AI
- Create a Dumpling AI credential in n8n using your API key
- The HTTP node sends the TikTok handle to Dumpling's /get-tiktok-profile endpoint

Configure filter:
- The IF node checks if followerCount is greater than or equal to 100000

Airtable update:
- If qualified, the record is updated with: ID (TikTok ID), followerCount, followingCount, heartCount, videoCount

**How to customize this workflow to your needs**
- Change the follower count threshold to fit your campaign (e.g., 10K, 500K, 1M)
- Add fields like engagement rate, niche tags, or scraped bio
- Chain additional steps like sending approved creators to your CRM or triggering outreach messages
- Add another filter to exclude private or inactive accounts
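The qualification check itself is simple; the sketch below shows it as a Code node equivalent of the IF node. Field names such as `followerCount` are assumptions about the Dumpling AI `/get-tiktok-profile` response, so verify them against a real payload.

```javascript
// Code node sketch: qualify a creator at >= 100k followers and shape the fields
// that the Airtable update expects. Field names are assumptions -- verify them.
const stats = $json.stats || $json; // some profile APIs nest counts under "stats"
const followerCount = Number(stats.followerCount || 0);

return [{
  json: {
    qualified: followerCount >= 100000,
    id: $json.id || '',
    followerCount,
    followingCount: Number(stats.followingCount || 0),
    heartCount: Number(stats.heartCount || 0),
    videoCount: Number(stats.videoCount || 0),
  },
}];
```

Raising or lowering the threshold for a different campaign is then a single constant change.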
by Keith Rumjahn
**Who is this template for?**
- Anyone who is drowning in emails
- Busy parents who have a lot of school emails
- Busy executives with too many emails

**Case Study**

I get too many emails from my kid's school about soccer practice, lunch orders, and parent events. I use this workflow to read all the emails and tell me what is important and what requires action. Read more -> How I used A.I. to read all my emails

**What this workflow does**

It uses IMAP to read the emails from your email account (e.g., Gmail). It then passes each email to Openrouter.ai and uses a free A.I. model to read and summarize it. Finally, it sends the summary as a message to your messenger (e.g., Line). A sketch of the two HTTP requests involved follows this description.

**Setup**
1. Find your email server's IMAP credentials.
2. Input your openrouter.ai API credentials, or replace the HTTP Request node with an A.I. node such as OpenAI.
3. Input your messenger credentials. I use Line, but you can swap the node for another messenger such as Telegram.
4. Change the message ID to your own ID inside the HTTP request. You can find your user ID in the https://developers.line.biz/console/. Change the "to": {insert your user ID}.

**How to adjust it to your needs**
- Change the A.I. prompt to fit your needs, for example by telling it to mark emails from a certain address as important.
- Change the A.I. model from the current meta-llama/llama-3.1-70b-instruct:free to a paid model or another free model.
- Change the messenger node to Telegram or any other messenger app you like.
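For reference, here is a hedged sketch of the two HTTP request bodies this kind of setup needs: the OpenRouter chat completion and the Line push message. The exact expressions (for example, which field holds the email text or the generated summary) depend on your nodes, so treat the values as placeholders rather than the template's exact configuration.

```javascript
// Sketch of the request bodies used by the two HTTP Request nodes (placeholders marked).

// 1. OpenRouter summarization -- POST https://openrouter.ai/api/v1/chat/completions
//    Header: Authorization: Bearer <OPENROUTER_API_KEY>
const openRouterBody = {
  model: 'meta-llama/llama-3.1-70b-instruct:free',
  messages: [
    { role: 'system', content: 'Summarize this email and flag anything that needs action.' },
    { role: 'user', content: $json.textPlain }, // email body field name depends on your IMAP node
  ],
};

// 2. Line push message -- POST https://api.line.me/v2/bot/message/push
//    Header: Authorization: Bearer <LINE_CHANNEL_ACCESS_TOKEN>
const lineBody = {
  to: 'YOUR_LINE_USER_ID', // find it in https://developers.line.biz/console/
  messages: [{ type: 'text', text: $json.summary /* output of the OpenRouter call */ }],
};

return [{ json: { openRouterBody, lineBody } }];
```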
by ist00dent
This n8n template allows you to instantly fetch a random dog image from the Dog CEO API by simply sending a webhook request. It's a fun and simple way to integrate random dog photos into your projects, whether for websites, applications, or playful automations.

**How it works**
1. Trigger Webhook: This node acts as the entry point for the workflow. It listens for any incoming POST request. No specific data is required in the webhook body, as the workflow fetches a random image.
2. Fetch Random Dog Image: This node makes an HTTP GET request to https://dog.ceo/api/breeds/image/random. The API responds with a JSON object containing the URL of a random dog image. (A standalone sketch of this call appears at the end of this description.)
3. Respond with Image URL: This node sends the URL of the random dog image back to the service that initiated the webhook.

**Who is it for?**

This workflow is ideal for:
- Developers: Quickly integrate random dog images into web applications, bots, or prototypes.
- Content Creators: Get fresh, random dog photos for social media, blogs, or presentations.
- Learning n8n: A straightforward example of using a webhook to trigger an API call and return data.
- Anyone who loves dogs!

**Data Structure**

When you trigger the webhook, you can send an empty POST request body. The workflow will return a JSON response similar to this (the message URL will vary):

{ "message": "https://images.dog.ceo/breeds/hound-walker/n02089867_2626.jpg", "status": "success" }

**Setup Instructions**
1. Import Workflow: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. Configure Webhook Path: Double-click the Trigger Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /get-dog-image).
3. Activate Workflow: Save and activate the workflow.

**Tips**
- Download the Image: Instead of just returning the URL, you can download the image and then process it. Insert another HTTP Request node after Fetch Random Dog Image to download the image binary. Set the HTTP Request node's 'Response Format' to 'Binary' and use the expression ={{ $json.message }} for the URL.
- Save to Cloud Storage: After downloading the image (as described above), you can save it to various cloud storage services:
  - Google Drive: Add a Google Drive node, connect it to the output of the image download node, and configure it to upload the binary data to a specific folder.
  - Amazon S3: Add an AWS S3 node and configure it to upload the binary data, specifying your bucket and desired filename.
  - Dropbox: Use the Dropbox node to upload the image file.
- Send as a Message: Share the dog image directly in a chat or email:
  - Slack/Discord/Telegram: Use the respective integration node to send the image URL or the downloaded image as an attachment.
  - Email: Attach the downloaded image to an email using an Email or Gmail node.
- Display on a Web Page: If you're embedding this into a web application, you can simply use the returned URL in an <img> tag to display the image.
- Error Handling: You can add an Error Trigger node to catch any issues during the image fetching process (e.g., if the Dog CEO API is down) and send notifications.
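If you want to see the Dog CEO response shape before wiring anything up, a standalone Node 18+ script is enough (save it as an `.mjs` file so top-level await works); inside the workflow the same GET is performed by the Fetch Random Dog Image node.

```javascript
// fetch-dog.mjs -- quick check of the Dog CEO API response outside n8n.
const res = await fetch('https://dog.ceo/api/breeds/image/random');
const data = await res.json();

console.log(data.status);  // "success"
console.log(data.message); // URL of a random dog image, e.g. https://images.dog.ceo/breeds/...
```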
by Thomas Janssen
Build an MCP Server that has access to a semantic database to perform Retrieval-Augmented Generation (RAG).

**Tutorial**

Click here to watch the full tutorial on YouTube.

**How it works**

This MCP Server has access to a local semantic database (Qdrant) and answers questions asked by the MCP Client.

**AI Agent Template**

Click here to navigate to the AI Agent n8n workflow which uses this MCP server.

**Warning**

This flow only runs locally and cannot be executed on the n8n cloud platform because of the MCP Client community node.

**Installation**
1. Install n8n + Ollama + Qdrant using the Self-hosted AI Starter Kit.
2. Make sure to install Llama 3.2 and mxbai-embed-large as the embeddings model.
3. Activate the n8n flow.
4. Run the "RAG Ingestion Pipeline" and upload some PDF documents.

**How to use it**

Run the MCP Client workflow and ask a question. It will be answered either by the semantic database or by the search engine API.

**More detailed instructions**

Missed a step? Find more detailed instructions here: https://brightdata.com/blog/ai/news-feed-n8n-openai-bright-data
by Eduardo Hales
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**How it works**

This workflow is a simple AI Agent that connects to Langfuse to send tracing data, helping you monitor LLM interactions. The main idea is to create a custom LLM model that allows the configuration of callbacks, which LangChain uses to connect to applications such as Langfuse. This is achieved with the "LangChain Code" node, which:
- Connects an LLM model sub-node to obtain the model variables (model name, temperature, and provider)
- Creates a generic LangChain initChatModel with those model parameters
- Returns the LLM to be used by the AI Agent node

A minimal sketch of such a LangChain Code node follows this description.

**Prerequisites**
- Langfuse instance (cloud or self-hosted) with API credentials
- LLM API key (Gemini, OpenAI, Anthropic, etc.)
- n8n >= 1.98.0 (required for LangChain Code node support in the AI Agent)

**Setup**

Add these environment variables to your n8n instance:

Langfuse configuration:
- LANGFUSE_SECRET_KEY=your_secret_key
- LANGFUSE_PUBLIC_KEY=your_public_key
- LANGFUSE_BASEURL=https://cloud.langfuse.com # or your self-hosted URL

LLM API key (example for Gemini):
- GOOGLE_API_KEY=your_api_key

Alternative: configure these directly in the LangChain Code node if you prefer not to use environment variables.

Then:
1. Import the workflow JSON
2. Connect your preferred LLM model node
3. Send a test message to verify that tracing appears in Langfuse
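Below is a minimal sketch of what such a LangChain Code node could contain, assuming the `langchain` and `langfuse-langchain` packages are available to the node (e.g. via `NODE_FUNCTION_ALLOW_EXTERNAL`). The model name, provider, and temperature are hardcoded here for clarity; in the actual workflow they come from the connected LLM sub-node, and the node's exact input/output wiring is abbreviated.

```javascript
// LangChain Code node sketch: build a chat model with a Langfuse callback attached.
const { initChatModel } = require('langchain/chat_models/universal');
const { CallbackHandler } = require('langfuse-langchain');

// Langfuse tracing callback, configured from the environment variables listed above
const langfuseHandler = new CallbackHandler({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  baseUrl: process.env.LANGFUSE_BASEURL,
});

// Generic model construction -- swap provider/model to match your LLM sub-node
const model = await initChatModel('gemini-1.5-flash', {
  modelProvider: 'google-genai',
  temperature: 0,
  callbacks: [langfuseHandler],
});

// The AI Agent node then uses this model, and every call is traced in Langfuse
return model;
```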
by Airtop
Automating Company Data Enrichment and ICP Calculation

**Use Case**

This automation identifies a company's LinkedIn profile, extracts key business data, and calculates an ICP (Ideal Customer Profile) score to qualify and enrich company records. It is perfect for sales teams, data enrichment pipelines, and CRM integrations.

**What This Automation Does**

Input parameters:
- **Company domain**: The company's website domain (e.g., example.com).
- **Airtop Profile (connected to LinkedIn)**: Your Airtop Profile authenticated for LinkedIn.
- **Company LinkedIn** (optional): If already known, skips the search.

Output includes:
- Verified LinkedIn company URL (if not provided)
- Company profile (name, tagline, website, location, about)
- Scale metrics (employee count and bracket)
- Classification (automation agency status, AI focus, technical level)
- ICP score with justifications
- Structured JSON object with all values merged

**How It Works**
1. LinkedIn Detection: If not provided, attempts to locate the LinkedIn URL using website scraping or search.
2. Data Extraction: Uses Airtop to gather structured data from the company's LinkedIn profile.
3. ICP Scoring: Applies a scoring rubric based on AI/tech orientation, scale, agency status, and geography.
4. Merge Results: All data components are merged into a unified output.

**Setup Requirements**
- Airtop API Key
- Airtop Profile with LinkedIn authentication

**Next Steps**
- **Combine with Person Enrichment**: Pair with workflows that enrich individuals tied to the company.
- **Sync to CRM**: Connect the output to your CRM for record enrichment or scoring fields.
- **Adjust ICP Scoring Logic**: Modify the rubric for your organization's ICP model.

Read more about company data enrichment and ICP scoring.
by Davi Saranszky Mesquita
**Use case**

Workshop: We use this workflow in our workshops to teach how to use tools (a.k.a. functions) with artificial intelligence. In this specific case, we use a generic "AI Agent" node to illustrate that it could use other models from different data providers.

Enhanced weather forecasting: In this small example, it's easy to demonstrate how to obtain weather forecast results from the Open-Meteo site and accurately display the upcoming days. This can be used to plan travel decisions, for example.

**What this workflow does**

We make an HTTP request to find out the geographic coordinates of a city. Then, we make further HTTP requests to discover the weather for the upcoming days. In this workshop, we demonstrate that the AI is able to determine which tool to call first: it calls the geolocation tool first and then the weather forecast tool, all within a single client conversation call. A sketch of the two underlying HTTP calls follows this description.

**Setup**

Insert an OpenAI key and activate the workflow.

by Davi Saranszky Mesquita
https://www.linkedin.com/in/mesquitadavi/
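The two tools wrap plain Open-Meteo endpoints, which you can try outside n8n with a short Node 18+ script (run as an ES module). The exact query parameters used by the template's HTTP Request tools may differ; the endpoints below are the public geocoding and forecast APIs.

```javascript
// weather.mjs -- geocode a city, then fetch its daily forecast from Open-Meteo.
const city = 'Berlin';

// Tool 1: geographic coordinates of the city
const geo = await (await fetch(
  `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
)).json();
const { latitude, longitude } = geo.results[0];

// Tool 2: weather for the upcoming days at those coordinates
const forecast = await (await fetch(
  `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}` +
  '&daily=temperature_2m_max,temperature_2m_min&timezone=auto'
)).json();

console.log(forecast.daily); // per-day max/min temperatures
```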
by Lucas Perret
This workflow enriches new accounts in Pipedrive using the Datagma API by adding data about the ICP (ideal customer profile). Instead of Pipedrive, you can use any other CRM. In this example, ideal buyers are heads of sales/business development.

**Prerequisites**
- Pipedrive account and Pipedrive credentials

**How it works**
1. The Pipedrive Trigger node starts the workflow when a new company is created.
2. An HTTP Request node queries data from Datagma.
3. A Pipedrive node updates the Pipedrive contact with new data from Datagma.
4. The Item Lists node simplifies the data returned from Datagma that contains lists (arrays), enabling you to easily modify the structure for further processing without needing to use Function nodes and write custom JavaScript.
5. An IF node identifies whether the lead corresponds to the ICP (a sketch of this check follows this description).
6. An HTTP Request node searches for emails in Datagma.
7. A Set node prepares data for further merging.
8. A Merge node combines data from multiple streams.
9. A Pipedrive node adds a new person in Pipedrive.
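A sketch of the ICP check performed by the IF node, shown as an equivalent Code node. The job-title field name is an assumption about the Datagma response; map it to the actual field before using.

```javascript
// Code node sketch: does this contact look like a head of sales / business development?
// The jobTitle field name is assumed -- adjust it to the Datagma payload.
const title = String($json.jobTitle || '').toLowerCase();

const icpKeywords = ['head of sales', 'business development'];
const isIcp = icpKeywords.some((kw) => title.includes(kw));

return [{ json: { ...$json, isIcp } }];
```

Extending the keyword list is the quickest way to adapt the same check to a different ideal-buyer persona.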