by Extruct AI
**Who's it for:** Sales teams, marketers, and analysts who need to quickly access all the social media and public profile links for any company.

**How it works / What it does:** When you enter a company into the form, this workflow automatically searches for and collects all available links to the company's social media accounts, review sites, and public profiles from sources like Crunchbase and ZoomInfo. All discovered URLs are added directly to your Google Sheet.

**How to set up:**
1. Create an Extruct account at www.extruct.ai/.
2. Open the Extruct table template, find the table ID in your browser's address bar, and copy it.
3. Make a copy of the provided Google Sheets template to your own Google Drive.
4. In n8n, paste the table ID into the variables node of your flow.
5. Set up Bearer authentication in every HTTP Request node using your Extruct API token (found on the API page in Extruct). A hedged example of the call these nodes make is sketched below.
6. In the Google Sheets node, paste the link to your copied template and connect your Google account.
7. Run the flow once to load the fields, then map the output fields to the correct columns in your sheet.
8. Activate the flow and start adding companies via the form.

**Requirements:**
- Extruct account and API token
- Extruct table template
- Google account with Google Sheets

**How to customize the workflow:** You can add your own columns to the Extruct table and your Google Sheet. Just add the new column in both places and map it in the Google Sheets node in n8n.
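For reference, the HTTP Request nodes in this flow boil down to an authenticated REST call. Below is a minimal JavaScript sketch of that pattern; the endpoint path and request body are hypothetical placeholders (check Extruct's API docs for the real ones), and only the Bearer header scheme comes from the setup steps above.

```javascript
// Minimal sketch of the Bearer-authenticated call the HTTP Request nodes make.
// NOTE: the endpoint path and body shape are hypothetical placeholders;
// only the Authorization header pattern comes from the setup instructions.
const EXTRUCT_API_TOKEN = process.env.EXTRUCT_API_TOKEN; // from the API page in Extruct
const TABLE_ID = "your-table-id"; // copied from the browser's address bar

async function addCompany(companyName) {
  const res = await fetch(`https://api.extruct.ai/v1/tables/${TABLE_ID}/rows`, { // hypothetical URL
    method: "POST",
    headers: {
      Authorization: `Bearer ${EXTRUCT_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ company: companyName }), // hypothetical body shape
  });
  if (!res.ok) throw new Error(`Extruct request failed: ${res.status}`);
  return res.json();
}
```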
by Extruct AI
**Who's it for:** Sales and business development professionals who want to monitor company news, hiring trends, and business signals for their leads.

**How it works / What it does:** Add a company to the form, and the workflow will automatically search for the latest news, recent hires, company stage, and LinkedIn activity. The results are sent straight to your Google Sheet, helping you stay up to date with your leads and prospects.

**How to set up:**
1. Register for Extruct at www.extruct.ai/.
2. Open the Extruct table template and copy the table ID from the browser's address bar.
3. Make a copy of the Google Sheets template to your Drive.
4. Enter the table ID into the variables node in your n8n flow.
5. Set up Bearer authentication in all HTTP Request nodes using your Extruct API token.
6. In the Google Sheets node, paste your template link and connect your Google account.
7. Run the flow once to load the mapping fields, then match each output to the correct column.
8. Activate the flow and start adding companies through the form.

**Requirements:**
- Extruct account and API token
- Extruct table template
- Google account with Google Sheets

**How to customize the workflow:** To track more business development signals, add new columns in both the Extruct table and your Google Sheet, then map them in the Google Sheets node.
by Arthur Braghetto
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Clean Web Content Extraction with Anti-Bot Fallback

Extract clean and structured text from any webpage with optional fallback to an anti-bot scraping service. Ideal for AI tools and content workflows.

🧠 How it Works

This sub-workflow enables reliable and clean scraping of any public webpage by simply passing a url parameter. It is designed to be embedded into other workflows or used as a tool for AI agents. It supports two output modes:

- `fulltext: true` — returns `{ title, text }` with full page content
- `fulltext: false` — returns `{ title, url, content }` with a short excerpt

💡 If the site is protected by anti-bot systems (like Cloudflare), it will automatically fall back to Scrape.do, a scraping API with a generous free plan.

🧩 This template requires the n8n-nodes-webpage-content-extractor community node, so it only works in self-hosted n8n environments.

🚀 Use Cases

- As a reusable sub-workflow, via the Execute Sub-workflow node.
- As a tool for an AI Agent, compatible with the Call n8n Workflow Tool.
- Perfect for chatbots, summarization workflows, or RSS/feed enrichment.
- Empowers your AI Agent with the ability to browse and extract readable content from websites automatically.

🔖 Parameters

- url (string): the webpage URL to scrape
- fulltext (boolean): set true for full page content, false for summarized output

⚙️ Setup

1. Install the community node n8n-nodes-webpage-content-extractor in your self-hosted n8n instance.
2. Create a free account at Scrape.do and obtain your API Token.
3. In the workflow, locate the Scrape.do HTTP Request node and configure the credentials using your API Token.

Detailed step-by-step instructions are available in the workflow notes. The Scrape.do API is only used as a fallback when conventional scraping fails, helping you preserve your API credits.
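The fallback logic amounts to: try a direct fetch first, and only route the URL through Scrape.do when the direct attempt fails or comes back blocked. A minimal sketch of that pattern, assuming Scrape.do's token-and-url query parameters (verify against their docs):

```javascript
// Sketch of the anti-bot fallback pattern: direct fetch first, Scrape.do second.
// The Scrape.do query parameters (token, url) are an assumption; check their docs.
const SCRAPEDO_TOKEN = process.env.SCRAPEDO_TOKEN;

async function fetchHtml(url) {
  try {
    const direct = await fetch(url, { headers: { "User-Agent": "Mozilla/5.0" } });
    // Anti-bot pages usually come back as 403/503, i.e. not ok.
    if (direct.ok) return await direct.text();
  } catch (e) {
    // Network-level failure: fall through to the scraping API.
  }
  // Fallback: only spend Scrape.do credits when conventional scraping fails.
  const proxied = await fetch(
    `https://api.scrape.do/?token=${SCRAPEDO_TOKEN}&url=${encodeURIComponent(url)}`
  );
  if (!proxied.ok) throw new Error(`Scrape.do failed: ${proxied.status}`);
  return await proxied.text();
}
```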
by Federico De Ponte
🔁 Loop & Optimize Meta Tags with Google Gemini

This workflow automates the shortening of meta titles and descriptions for SEO—directly from your Google Sheet, row by row, using Google Gemini.

✅ What it does

- Reads rows from a Google Sheet (meta_title, meta_description, row_index)
- Loops through each row and checks if content exists
- Sends the data to Google Gemini for length-optimized output
- Cleans and parses the response
- Updates the original sheet with the shortened results

🛠️ Setup Requirements

- Google Sheets (OAuth2 credentials connected in n8n)
- Google Gemini API key (configured in n8n credentials)
- Sheet must contain: row_index, meta_title, meta_description
- Output will be written into: meta_titleFixed, meta_descriptionFixed
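The "cleans and parses" step typically has to strip markdown code fences from the model's reply before JSON-parsing it. A minimal sketch of such a Code-node body, assuming Gemini is prompted to answer with a JSON object holding meta_titleFixed and meta_descriptionFixed (field names mirror the output columns above; where the raw text lands on the item is an assumption):

```javascript
// n8n Code node sketch: clean Gemini's raw text reply and parse it as JSON.
// Assumes the prompt asked for {"meta_titleFixed": "...", "meta_descriptionFixed": "..."}.
return items.map((item) => {
  const raw = item.json.text ?? ""; // adjust to wherever the model output lands
  // Strip the ```json ... ``` fences the model often wraps around its answer.
  const cleaned = raw.replace(/```(?:json)?/g, "").trim();
  let parsed;
  try {
    parsed = JSON.parse(cleaned);
  } catch (e) {
    parsed = { meta_titleFixed: "", meta_descriptionFixed: "" }; // fail soft per row
  }
  return { json: { ...item.json, ...parsed } };
});
```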
by Jean-Marie Rizkallah
🧩 Jamf Patch Summary to Slack

Stay on top of software patch compliance by automatically posting Jamf patch summaries to Slack. This helps IT and security teams quickly identify outdated installs and take action—without logging into Jamf.

✅ Prerequisites
• A Jamf Pro API key with permissions to read software titles and patch summary
• A Slack app or incoming webhook URL with permission to post messages to your desired channel

🔍 How it works
• Manually trigger the flow or add a webhook
• Fetch a list of software titles from Jamf Pro
• Filter to select the software you're tracking (e.g. Chrome, Edge)
• Retrieve the patch summary for that software (latest version, up-to-date, out-of-date counts)
• Format the summary into Slack Block Kit
• Post the formatted summary into a Slack channel

⚙️ Set up steps
• Takes ~5–10 minutes to configure
• Set your server BaseURL variable in the Set node
• Add your Jamf Pro API credentials in the HTTP Request nodes (Get & Retrieve)
• Set the target software ID in the Filter node
• Add your Slack webhook URL or token in the final HTTP node
• Optional: Adjust Slack formatting inside the Function node
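For the formatting step, the Function node essentially turns the patch-summary counts into a Slack Block Kit payload. A hedged sketch, assuming the incoming item exposes fields like title, latestVersion, upToDate, and outOfDate (the real field names depend on your Jamf Pro version and should be mapped accordingly):

```javascript
// Function node sketch: build a Slack Block Kit message from a Jamf patch summary.
// Field names on item.json are assumptions; map them to your Jamf Pro response.
const s = items[0].json;
const payload = {
  blocks: [
    { type: "header", text: { type: "plain_text", text: `Patch Summary: ${s.title}` } },
    {
      type: "section",
      fields: [
        { type: "mrkdwn", text: `*Latest version:*\n${s.latestVersion}` },
        { type: "mrkdwn", text: `*Up to date:*\n${s.upToDate}` },
        { type: "mrkdwn", text: `*Out of date:*\n${s.outOfDate}` },
      ],
    },
  ],
};
return [{ json: payload }];
```

The resulting object can be posted as-is to a Slack incoming webhook URL from the final HTTP node.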
by Ranjan Dailata
Who this is for

This workflow enables automated, scalable collection of high-quality, AI-ready data from websites using Bright Data's Web Unlocker, with a focus on preparing that data for LLM training. Leveraging LLM chains and AI agents, the system formats and extracts key information, then stores the structured embeddings in a Pinecone vector database.

This workflow is tailored for:
- ML Engineers & Researchers building or fine-tuning domain-specific LLMs.
- AI Startups needing clean, structured content for product training.
- Data Teams preparing knowledge bases for enterprise-grade AI apps.
- LLM-as-a-Service Providers sourcing dynamic web content across niches.

What problem is this workflow solving?

Training a large language model (LLM) requires vast amounts of clean, relevant, and structured data. Manual collection is slow, error-prone, and lacks scalability. This workflow:
- Automatically extracts web data from specified URLs.
- Bypasses anti-bot measures using Bright Data's Web Unlocker.
- Formats, cleans, and transforms raw content using LLM agents.
- Stores semantically searchable vectors in Pinecone.
- Makes datasets AI-ready for fine-tuning, RAG, or domain-specific training.

What this workflow does

This workflow automates the process of collecting, cleaning, and vectorizing web content to create structured, high-quality datasets that are ready to be used for LLM (Large Language Model) training or retrieval-augmented generation (RAG):
1. Web crawling with the Bright Data Web Unlocker.
2. AI information extraction and data formatting.
3. AI data formatting to produce JSON-structured data.
4. Persistence in the Pinecone vector DB.
5. Webhook notification of the structured data.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL by navigating to the Set LinkedIn URL node.
6. Update the Set Fields - URL and Webhook URL node with the URL for web data extraction and the webhook notification URL.

How to customize this workflow to your needs
- Set your target URLs. Target sites that are high-quality, domain-specific, and relevant to your LLM's purpose.
- Adjust Bright Data Web Unlocker settings: geo-location, headers/User-Agent strings, retry rules, and proxies.
- Modify the information extraction logic: change prompts to extract specific attributes, and use structured templates or few-shot examples in prompts.
- Swap the embedding model: use OpenAI, Hugging Face, or your own hosted embedding model API.
- Customize Pinecone metadata fields: store extra fields in Pinecone for better filtering and semantic querying.
- Add data validation or deduplication: skip duplicates or low-quality content.
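For orientation, the Web Unlocker call behind the HTTP Request node is a POST carrying the zone name and target URL, authenticated with the Bearer token configured above. A minimal sketch, assuming Bright Data's request endpoint and body shape (the zone name is a placeholder; confirm both against Bright Data's docs):

```javascript
// Sketch of a Bright Data Web Unlocker request, mirroring the Header Auth setup above.
// The zone name is a placeholder; confirm the endpoint and body shape in Bright Data's docs.
const BRIGHTDATA_TOKEN = process.env.BRIGHTDATA_TOKEN; // the Web Unlocker token

async function unlock(url) {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${BRIGHTDATA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "web_unlocker1", // placeholder: your Web Unlocker zone name
      url,
      format: "raw", // raw HTML back, ready for the extraction prompts
    }),
  });
  if (!res.ok) throw new Error(`Web Unlocker failed: ${res.status}`);
  return await res.text();
}
```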
by Jimleuk
This n8n workflow demonstrates how to automate the indexing of images to build an object-based image search. By utilising a Detr-Resnet-50 object classification model, we can identify objects within an image and store these associations in Elasticsearch along with a reference to the image.

How it works
- An image is imported into the workflow via the HTTP Request node.
- The image is then sent to Cloudflare's Workers AI API, where the service runs the image through the Detr-Resnet-50 object classification model.
- The API returns the object associations with their positions in the image, labels, and confidence scores of the classification. Confidence scores of less than 0.9 are discarded for brevity.
- The image's URL and its associations are then indexed in an Elasticsearch server, ready for searching.

Requirements
- A Cloudflare account with Workers AI enabled to access the object classification model.
- An Elasticsearch instance to store the image URL and related associations.

Extending this workflow
- Further enrich your indexed data with additional attributes or metrics relevant to your users.
- Use a vector store to provide similarity search over the images.
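A hedged sketch of the classification call and the 0.9 confidence cut-off, assuming Cloudflare's account-scoped REST endpoint for the @cf/facebook/detr-resnet-50 model (verify the path and response shape against the Workers AI docs):

```javascript
// Sketch: classify an image with Workers AI, then drop low-confidence detections.
// The endpoint path and response shape are assumptions based on the Workers AI REST API.
const CF_ACCOUNT_ID = process.env.CF_ACCOUNT_ID;
const CF_API_TOKEN = process.env.CF_API_TOKEN;

async function classify(imageBytes) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/ai/run/@cf/facebook/detr-resnet-50`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${CF_API_TOKEN}` },
      body: imageBytes, // raw image bytes
    }
  );
  const { result } = await res.json(); // [{ score, label, box }, ...]
  // Keep only confident detections, as the workflow does.
  return result.filter((d) => d.score >= 0.9).map((d) => d.label);
}
```

The surviving labels, together with the image URL, are what get written to the Elasticsearch index.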
by Jonathan
You can still use an app in a workflow even if n8n doesn't have a dedicated node or operation for it. With the HTTP Request node, it is possible to call any API endpoint and use the incoming data in your workflow.

Main use cases:
- Connect with apps and services that n8n doesn't have an integration with
- Web scraping

How it works

This workflow can be divided into three branches, each serving a distinct purpose:

1. Splitting into Items (HTTP Request - Get Mock Albums): The workflow initiates with a manual trigger (On clicking 'execute'). It performs an HTTP request to retrieve mock albums data from "https://jsonplaceholder.typicode.com/albums". The obtained data is split into items using the Item Lists node, facilitating easier management.

2. Data Scraping (HTTP Request - Get Wikipedia Page and HTML Extract): Another branch of the workflow involves fetching a random Wikipedia page using an HTTP request to "https://en.wikipedia.org/wiki/Special:Random". The HTML Extract node extracts the article title from the fetched Wikipedia page.

3. Handling Pagination: The final branch deals with pagination for a GitHub API request. It sends an HTTP request to "https://api.github.com/users/that-one-tom/starred", with parameters like the page number and items per page dynamically set by the Set node. The workflow uses conditions (If - Are we finished?) to check if there are more pages to retrieve and increments the page number accordingly (Set - Increment Page). This process repeats until all pages are fetched, allowing for comprehensive data retrieval; the loop logic is sketched below.
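Outside of n8n, the pagination logic from branch 3 looks like this in plain JavaScript; the stop condition (a page shorter than per_page) mirrors the "If - Are we finished?" check:

```javascript
// Sketch of the pagination loop from branch 3, written as plain JavaScript.
// GitHub's starred endpoint pages with ?page and ?per_page; a short page means done.
async function getAllStarred(user) {
  const all = [];
  let page = 1;
  const perPage = 30;
  while (true) {
    const res = await fetch(
      `https://api.github.com/users/${user}/starred?page=${page}&per_page=${perPage}`
    );
    const batch = await res.json();
    all.push(...batch);
    // "Are we finished?": a page shorter than perPage means nothing is left to fetch.
    if (batch.length < perPage) break;
    page += 1; // "Set - Increment Page"
  }
  return all;
}
```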
by n8n Team
This workflow syncs your GitHub issues to your Notion database. Whenever a new issue is opened in your GitHub repository, it will be shown in your Notion database, syncing the status property (opened/edited/closed/deleted). In case there's no Notion database existing yet, a new one will be created automatically.

Prerequisites
- Notion account and Notion credentials
- GitHub account and GitHub credentials

How it works
- The GitHub Trigger node starts the workflow when a new issue is created in a GitHub repository.
- The If node splits the workflow conditionally, showing whether the issue is new or an update of an existing issue.
- If the data is new, the Notion node will create a new database page in Notion.
- If the data is not new, the Function node will create a Notion filter that will find its specific database page by issue ID (see the sketch below).
- The Switch node will then conditionally route the data into the appropriate Notion page, based on the update made upon it.
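A hedged sketch of what that Function node produces, assuming the database has a number property named "Issue ID" (the property name and type are assumptions to match against your own schema; the filter shape follows Notion's database query API):

```javascript
// Function node sketch: build a Notion database query filter from the GitHub payload.
// "Issue ID" as a number property is an assumption; match it to your database schema.
const issueId = items[0].json.issue.number; // GitHub webhook payload field
return [
  {
    json: {
      filter: {
        property: "Issue ID",
        number: { equals: issueId },
      },
    },
  },
];
```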
by Ibrahim
Overview

This n8n workflow is designed to extract specific interests from messages in a Telegram chat and retrieve related information using the Facebook Graph API. It aims to provide a streamlined solution for parsing and analyzing user-provided interests within the Telegram platform.

Features
- **Interest Extraction:** Automatically identifies and extracts interests from messages that start with the hashtag "#interest".
- **Data Retrieval:** Utilizes the Facebook Graph API to retrieve information related to the extracted interests.
- **Structured Outputs:** Presents the retrieved data in an organized format for further analysis and review.

Requirements
- Operational instance of n8n (self-hosted or cloud version).
- Basic understanding of n8n workflows and nodes.

Setup and Configuration
1. Import Workflow: Load the provided JSON workflow into your n8n instance.
2. Configure Telegram Trigger Node: Ensure the Telegram trigger node is set up with the appropriate credentials and webhook ID.
3. Configure and Test Nodes: Adjust node parameters as necessary and test the workflow to ensure proper functionality.

How it Works
1. Telegram Trigger: Listens for incoming messages in a specified Telegram chat.
2. Check Message Contents: Verifies if the message begins with the specified hashtag and is from the designated chat ID.
3. Extract Message: Extracts the content of the message for further processing.
4. Split Message: Splits the extracted message to identify the interest and remaining content (sketched below).
5. Connect to Graph API: Utilizes the Facebook Graph API to search for information related to the extracted interest.
6. Split Interests into a Table: Organizes the retrieved data into a structured table format.
7. Get Variables: Maps the retrieved data to create a new JSON object containing specific fields related to the interest.
8. Create a Spreadsheet: Generates a spreadsheet file in CSV format based on the retrieved and formatted data.
9. Send the Spreadsheet File: Sends the generated spreadsheet file back to the original Telegram chat.

Customization
- Modify the filtering conditions and fields to suit specific requirements.
- Adjust the frequency of the trigger node based on preference.

Best Practices
- Regularly test the workflow to ensure consistent performance.
- Stay informed about any changes to external APIs that might affect the workflow's functionality.

Contributing

Your feedback and contributions are highly valued. Feel free to adapt, modify, and share enhancements with the n8n community.
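The extract and split steps reduce to simple string handling. A minimal sketch of that parsing, assuming a message layout of "#interest <topic> <rest of message>" (the exact layout is an assumption based on the hashtag convention above):

```javascript
// Sketch of the extract/split steps: pull the interest out of a "#interest ..." message.
// The "#interest <topic> <rest>" layout is an assumption from the hashtag convention.
function parseInterestMessage(text) {
  if (!text.startsWith("#interest")) return null; // Check Message Contents
  const [, interest, ...rest] = text.split(/\s+/); // Split Message
  return { interest, remainder: rest.join(" ") };
}

// Example: parseInterestMessage("#interest hiking weekend trails")
// -> { interest: "hiking", remainder: "weekend trails" }
```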
by Joey D’Anna
This template is a set of building blocks to access Monday.com in ways not supported by the official Monday node.

Prerequisites
- Monday account and Monday credentials.

Included are setups to:
- Find a column value by the column's name (instead of a numerical index, which can change when the board structure is changed)
- Find a column value by the column's ID (again, instead of using a numerical index)
- Pull a board relation column and get all the related pulses
- Pull an item's subitems and split them out
- Upload a file to an item's files field

Setup
1. Create a Monday.com credential.
2. Update the nodes in the template to use your credential.
3. Copy/paste the nodes you need from this template into any other workflow.

To retrieve a column by name:
- Route a Monday.com node that gets an item to the COLUMN BY NAME node.
- Edit the COLUMN BY NAME node, and enter the name in the first line of code (see the sketch below).

To retrieve a column by its ID:
- Follow Monday.com's instructions to locate the column's ID.
- Route a Monday.com node that gets an item to the COLUMN BY ID node.
- Edit the COLUMN BY ID node, and enter the ID in the first line of code.

To retrieve all linked pulses from a Board Relation column:
- Route a Monday.com node that gets an item to the GET BOARD RELATION node.
- Edit the GET BOARD RELATION node to specify the column name.
- All linked pulses will be retrieved by the subsequent PULL LINKEDPULSE node.

To pull all subitems from an item:
- Route a Monday.com node that gets an item to the PULL SUBITEMS node.
- All subitems will be retrieved by the subsequent GET EACH SUBITEM node.

To upload a file:
- Replace the Convert to File node with whatever node you are using to output your binary file data.
- Enable the MONDAY UPLOAD node.
- If the destination column is named anything other than the default of "file", edit the MONDAY UPLOAD node and change column_id:"file" in the first Value field to match the name of your file column.
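For a sense of what a COLUMN BY NAME-style node does internally, here is a hedged sketch: a lookup by column title rather than positional index. It assumes the Monday node returns column_values entries carrying title and text fields (older Monday API shape; newer API versions nest the title under column.title, so adjust accordingly):

```javascript
// Sketch of a COLUMN BY NAME-style Code node: look a column up by its title
// instead of a positional index, so board reordering can't break the lookup.
// Assumes column_values entries carry `title` and `text` (older Monday API shape).
const COLUMN_NAME = "Status"; // enter your column's name here

return items.map((item) => {
  const col = (item.json.column_values || []).find((c) => c.title === COLUMN_NAME);
  return { json: { ...item.json, columnValue: col ? col.text : null } };
});
```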
by Mutasem
Use Case

This workflow aims to enrich new contacts in Intercom. The more relevant the Intercom profile, the more useful it is. Once active, this n8n workflow will update contact data (phone, email) as well as location data from ExactBuyer.

Setup
1. Add a webhook URL in Intercom to call this workflow.
2. Add your ExactBuyer API key.
3. Add your Intercom API key.
4. Activate the workflow.

How to adjust this template

There's plenty of interesting info that ExactBuyer returns that could be helpful. Take a look and update this workflow to add what you need.
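For context, the final enrichment step amounts to an Intercom contact update. A hedged sketch, assuming Intercom's REST contacts endpoint and that the enriched values have already been pulled from the ExactBuyer response (the field mapping is an assumption; extend it with whatever extra ExactBuyer fields you decide to keep):

```javascript
// Sketch: push enriched fields back to Intercom's contacts endpoint.
// The field mapping from ExactBuyer's response is an assumption; adjust as needed.
const INTERCOM_TOKEN = process.env.INTERCOM_TOKEN;

async function updateContact(contactId, enriched) {
  const res = await fetch(`https://api.intercom.io/contacts/${contactId}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${INTERCOM_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      phone: enriched.phone,
      email: enriched.email,
      custom_attributes: { city: enriched.city, country: enriched.country },
    }),
  });
  if (!res.ok) throw new Error(`Intercom update failed: ${res.status}`);
  return res.json();
}
```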