by Harshil Agrawal
This workflow allows you to store the output of a phantom in Airtable. It uses the LinkedIn Profile Scraper phantom; configure and launch this phantom from your Phantombuster account before executing the workflow. The workflow uses the following nodes:

Phantombuster node: Gets the output of the LinkedIn Profile Scraper phantom that ran earlier. You can select a different phantom from the Agent dropdown list, but make sure to configure the rest of the workflow accordingly.

Set node: Sets the data for the workflow. The data set in this node is used by the following nodes in the workflow. Based on your use case, you can modify this node.

Airtable node: Appends the data to a table in Airtable. Based on your use case, you can replace this node with any other node. Instead of storing the data in Airtable, you could store it in a database or a Google Sheet, or send it as an email using the Send Email, Gmail, or Microsoft Outlook node.
by Marcel Claus-Ahrens
Instructions

This automation overlays a background image with another image, making it easy to add watermarks or logos. You can use it to watermark your images by overlaying them with a transparent version of your logo. If you'd like to place your logo in a specific corner, feel free to adjust the position of the overlay image in the Code node (see the sketch after this description).

How it Works

Both images are downloaded, so we can process binary files (you can modify the source, though).
We extract metadata, focusing on the dimensions of each image.
The position of the overlay image is calculated (default: dead center of the background image).
The two images are composited together.

Limitations and Optimisation Opportunities

The overlay image must be the same size or smaller than the background image for proper alignment.
The overlay image does not automatically scale to match the proportions of the background image.

Enjoy the workflow! ❤️

let the work flow — Workflow Automation & Development
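For reference, here is a minimal sketch of the centering logic as it might look inside the Code node. The field names are assumptions, not the template's exact ones; adapt them to whatever your metadata-extraction nodes output.

```javascript
// Minimal sketch of the overlay-position calculation (n8n Code node,
// "Run Once for All Items" mode). Field names are illustrative.
const bg = $input.first().json;     // e.g. { width: 1920, height: 1080 }
const overlay = $input.last().json; // e.g. { width: 400, height: 200 }

// Default: dead center of the background image.
let left = Math.round((bg.width - overlay.width) / 2);
let top = Math.round((bg.height - overlay.height) / 2);

// To pin the logo to the bottom-right corner instead, with a 20px margin:
// left = bg.width - overlay.width - 20;
// top  = bg.height - overlay.height - 20;

return [{ json: { left, top } }];
```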
by n8n Team
Who this template is for

This template is for developers or teams who need to convert CSV data into JSON format through an API endpoint, with support for both file uploads and raw CSV text input.

Use case

Converting CSV files or raw CSV text data into JSON format via a webhook endpoint, with error handling and notifications. This is particularly useful when you need to transform CSV data into JSON as part of a larger automation or integration process.

How this workflow works

Receives POST requests through a webhook endpoint at /tool/csv-to-json.
Uses a Switch node to handle different input types: file uploads (binary data), plain text CSV data, or JSON format data.
Processes the CSV data: for files, it uses the Extract From File node; for raw text, it converts the text using a custom Code node that handles both comma and semicolon delimiters (a sketch of that logic follows below).
Aggregates the processed data and returns either a success response (200) with the converted JSON data, or an error response (500) with an error message and details.
In case of errors, sends notifications to a Slack error channel with execution details and a link to debug.

Set up steps

Configure the webhook endpoint by deploying the workflow.
Set up Slack integration for error notifications: update the Slack channel ID (currently set to "C0832GBAEN4") and configure OAuth2 authentication for Slack.
Test the endpoint using CURL for file uploads:

```bash
curl -X POST "https://yoururl.com/webhook-test/tool/csv-to-json" \
  -H "Content-Type: text/csv" \
  --data-binary @path/to/your/file.csv
```

Or send raw CSV data as a text/plain content type.
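The template ships its own Code node for the raw-text path. As a rough illustration of the delimiter handling (not the template's exact code, and assuming simple rows without quoted fields), a minimal sketch might look like this:

```javascript
// Illustrative sketch of delimiter-tolerant CSV parsing in an n8n Code node.
// Assumes the raw CSV text arrives in the webhook body; adjust the accessor
// to match your webhook configuration.
const text = $input.first().json.body;

// Pick the delimiter: semicolon if it outnumbers commas in the header line.
const header = text.split('\n')[0];
const delimiter = header.split(';').length > header.split(',').length ? ';' : ',';

const [headerLine, ...rows] = text.trim().split('\n');
const keys = headerLine.split(delimiter).map((k) => k.trim());

// Turn each row into a JSON object keyed by the header columns.
return rows.map((row) => {
  const values = row.split(delimiter);
  const json = Object.fromEntries(keys.map((k, i) => [k, (values[i] ?? '').trim()]));
  return { json };
});
```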
by Joseph LePage
💡🌐 Essential Multipage Website Scraper with Jina.ai

Use responsibly and follow local rules and regulations.

This n8n workflow enables automated multi-page website scraping using Jina.ai's powerful web scraping capabilities, with seamless integration to Google Drive for content storage. Here's how it works:

Main Features

The workflow automatically scrapes multiple pages from a website's sitemap and saves each page's content as a separate Google Drive document.

Key Components

Input Configuration
Starts with a sitemap URL (default: https://ai.pydantic.dev/sitemap.xml).
Processes the sitemap to extract individual page URLs.
Includes filtering options to target specific topics or pages.

Scraping Process
Uses Jina.ai's web scraper to extract content from each URL (see the sketch below).
Converts webpage content into clean markdown format.
Extracts page titles automatically for document naming.

Storage Integration
Creates individual Google Drive documents for each scraped page.
Names documents using the format "URL - Page Title".
Saves content in markdown format for better readability.

Usage Instructions

Set your target website's sitemap URL in the "Set Website URL" node.
Configure the "Filter By Topics or Pages" node to select specific content.
Adjust the "Limit" node (default: 20 pages) to control batch size.
Connect your Google Drive account.
Run the workflow to begin automated scraping.

Additional Features

Built-in rate limiting through the Wait node to prevent overloading servers.
Batch processing capability for handling large sitemaps.

The workflow requires no API key for Jina.ai, making it accessible for immediate use while maintaining responsible scraping practices.
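The workflow performs the scraping call from an HTTP Request node; for illustration only, here is the same idea as a plain Node.js snippet (Node 18+, run as an ES module). Prefixing a page URL with Jina.ai's reader endpoint returns the page content as markdown:

```javascript
// Illustration of the Jina.ai reader call the workflow makes per URL.
// Run with Node 18+ as an ES module (e.g. node sketch.mjs).
const pageUrl = 'https://ai.pydantic.dev/'; // any page URL taken from the sitemap

const response = await fetch(`https://r.jina.ai/${pageUrl}`);
const markdown = await response.text(); // page content converted to markdown

console.log(markdown.slice(0, 200)); // preview the first lines of the scrape
```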
by Angel Menendez
CallForge - AI Sales Call Processing & Insights Extraction

Automate sales call analysis with AI-powered insights for sales, marketing, and product teams.

Who is This For?

This workflow is designed for:
✅ Sales teams looking to extract structured insights from Gong call transcripts.
✅ Marketing professionals seeking AI-driven customer pain points & content strategy.
✅ Product teams needing feedback from sales calls to prioritize feature development.

🔍 What Problem Does This Workflow Solve?

Manually analyzing Gong.io sales call transcripts is slow, inconsistent, and lacks structured insights. With CallForge, you can:
✔ Extract AI-powered insights about use cases, objections, competitors, and next steps.
✔ Provide structured marketing & product intelligence to enhance strategy.
✔ Automatically store call insights in Notion and Salesforce for easy access.
✔ Ensure resilience with automated reruns on failed workflows (handling Notion API limits).
✔ Improve decision-making with AI-powered competitor and sentiment analysis.

📌 Key Workflow Features

🎤 AI-Powered Transcript Analysis
Uses AI to identify use cases, objections, competitors, and customer pain points.
Categorizes insights for sales, marketing, and product teams.

📌 AI Agent Breakdown
🔹 Sales AI Agent – Extracts customer objections, pain points, competitors, and next steps.
🔹 Marketing AI Agent – Identifies recurring topics, keyword trends, and content opportunities.
🔹 Product AI Agent – Captures feature requests and AI/ML-related references.

📊 Structured Output Processing
Sales Data Processor → Stores insights in Notion & Salesforce for sales tracking.
Marketing Data Processor → Extracts SEO & content strategy insights for marketing teams.
Product AI Data Processor → Logs customer feedback to prioritize feature development.

💡 Competitor & Integration Analysis
Tracks competing products mentioned in calls.
Identifies integration needs, flagging workarounds used by prospects.

📢 Real-Time Slack Notifications
Alerts teams on workflow progress and completed call analyses.

🔄 Failure Resilience & Automated Re-Runs
If a Notion API limit is reached, the process resumes automatically.

🚀 How This Works

🛠 1. Trigger & Call Data Processing
The workflow retrieves Gong call transcripts and metadata.
Normalizes data, correcting common mispronunciations like "n8n."

🤖 2. AI Agents Analyze the Call
Sales Agent – Extracts actionable insights for sales follow-ups.
Marketing Agent – Identifies recurring themes and keyword trends.
Product Agent – Captures feature requests and AI/ML usage mentions.

📡 3. Data is Stored in Notion & Salesforce
Logs AI-extracted insights in Notion for structured tracking.
Pushes sales-related data to Salesforce for team accessibility.

🔔 4. Slack Alerts for Teams
Notifies sales, marketing, and product teams about extracted insights.
CallForge - 01 - Filter Gong Calls Synced to Salesforce by Opportunity Stage
CallForge - 02 - Prep Gong Calls with Sheets & Notion for AI Summarization
CallForge - 03 - Gong Transcript Processor and Salesforce Enricher
CallForge - 04 - AI Workflow for Gong.io Sales Calls
CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync
CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI
CallForge - 07 - AI Marketing Data Processing with Gong & Notion
CallForge - 08 - AI Product Insights from Sales Calls with Notion

📊 Sample Output Data

1️⃣ Sales Insights

```json
{
  "UseCases": [
    {
      "Summary": "A manufacturing company wants to automate inventory tracking and reduce manual entry delays.",
      "DepartmentTags": ["Operations"],
      "IndustryTags": ["Manufacturing"],
      "ImplementationStatus": "Evaluating"
    }
  ],
  "Objection": {
    "ObjectionTags": ["Feature Limitation"],
    "Nature": "The prospect wanted a deeper integration with their ERP system, which n8n currently lacks."
  },
  "CallSummary": "The call focused on automation for supply chain processes. The prospect expressed interest but wanted confirmation on ERP integration capabilities.",
  "NextSteps": ["Schedule a follow-up demo for ERP integration."]
}
```

2️⃣ Marketing Insights

```json
{
  "MarketingInsights": [
    {
      "Tag": "Workflow Template Request",
      "Summary": "The prospect requested a template for automating CRM lead tracking."
    }
  ],
  "RecurringTopics": [
    {
      "Topic": "CRM Integration",
      "Mentions": 3,
      "Context": "Discussed how n8n could sync CRM data automatically."
    }
  ],
  "ActionableInsights": [
    {
      "RecommendationType": "Tutorial",
      "Title": "How to Automate CRM Lead Tracking with n8n",
      "Topic": "CRM Integration",
      "Rationale": "The prospect expressed a need for CRM automation templates."
    }
  ]
}
```

3️⃣ Product Feedback

```json
{
  "ProductFeedback": [
    {
      "Sentiment": "Positive",
      "Feedback": "The external speaker praised the simplicity of n8n's UI, making it easier for non-developers to automate tasks."
    },
    {
      "Sentiment": "Negative",
      "Feedback": "The external speaker mentioned frustration over the lack of a dedicated ERP integration node."
    }
  ],
  "AI_ML_References": {
    "Exist": true,
    "Context": "The external speaker mentioned using AI for automating customer ticket categorization.",
    "Details": {
      "DevelopmentStatus": "Building",
      "Department": "Support",
      "RequiresAgents": true,
      "RequiresRAG": false,
      "RequiresChat": "Yes: External App (e.g., Slack)"
    }
  }
}
```

🔧 How to Customize This Workflow

💡 🔗 Change Data Storage – Swap Notion for Airtable, HubSpot, or another CRM.
💡 📩 Customize Slack Notifications – Send alerts via email, webhook, or another channel.
💡 🛠 Modify AI Processing – Adjust AI models or processing prompts.
💡 📊 Add More Integrations – Sync insights with Pipedrive, HubSpot, or another CRM.

🚀 Why Use This Workflow?

✔ Automates Gong call transcript analysis, eliminating manual work.
✔ Improves collaboration by structuring insights for sales, marketing, and product teams.
✔ Boosts sales conversions by identifying objections and next steps.
✔ Enhances marketing and SEO strategy with AI-driven insights.
✔ Optimizes product roadmap decisions based on customer feedback.

This workflow scales AI-powered sales intelligence for better decision-making, content strategy, and sales enablement. 🚀
by The { AI } rtist
This workflow is for working with text processing in n8n and getting started with how it works. Step-by-step how-to: https://comunidad-n8n.com/tratamiento-de-textos/ Telegram community: https://t.me/comunidadn8n
by Harshil Agrawal
This workflow allows you to insert and retrieve data from a table in Stackby.

Set node: Sets the values for the name and id fields for a new record. You might want to add data from an external source, for example an API or a CRM. Based on your use case, add the respective node before the Set node and configure your Set node accordingly.

Stackby node: Appends data from the previous node to a table in Stackby. Based on the values you want to add to your table, enter the column names in the Column field.

Stackby1 node: Fetches all the data that is stored in the table in Stackby.
by Askan
What problem does this solve?

It fetches LinkedIn profiles for a multitude of purposes based on a keyword and location via Google search, and stores them in an Excel file for download and in a NocoDB database. It tries to avoid costly services and should be n8n beginner friendly. It uses serpapi.com to avoid being blocked by Google Search and to process the data in an easier way.

What does it do?

Based on criteria input, it searches LinkedIn profiles.
It discards unnecessary data and turns the follower count into a real number.
The output is provided as an Excel table for download and in a NocoDB database.

How does it do it?

Based on criteria input, it uses serpapi.com to conduct a Google search of the respective LinkedIn profiles.
With openai.com, the name of the respective company is added.
With openai.com, the follower number (e.g., "300+") is turned into a real number: 300 (see the sketch after these instructions).
All unnecessary metadata is discarded.
As an output, an Excel file is created.
The output is also stored in a nocodb.com table.

Step-by-step instructions

Import the workflow: copy the workflow JSON from the "Template Code" section below and import it into n8n via "Import from File" or "Import from URL".
Set up a free account at serpapi.com and get API credentials to enable good Google search results.
Set up an API account at openai.com and get an API key.
Set up a nocodb.com account (or self-host) and get the API credentials.
Create the credentials for serpapi.com, openai.com and nocodb.com in n8n.
Set up a table in NocoDB with the fields indicated in the note above the NocoDB node.
Follow the instructions as detailed in the notes above individual nodes.
When the workflow is finished, open the Excel node and click download if you need the Excel file.
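The template delegates the follower-count normalization to OpenAI. As an illustration only, the simple "300+" / "1.2K" cases could also be handled directly in a Code node; the field name followers below is an assumption:

```javascript
// Illustrative sketch of follower-count normalization in an n8n Code node
// ("Run Once for All Items" mode); not the template's OpenAI-based approach.
function parseFollowers(raw) {
  // e.g. "300+" -> 300, "1.2K" -> 1200; commas treated as thousands separators
  const match = String(raw).replace(/,/g, '').match(/([\d.]+)\s*([KkMm]?)/);
  if (!match) return null;
  const factor = { k: 1e3, m: 1e6, '': 1 }[match[2].toLowerCase()];
  return Math.round(parseFloat(match[1]) * factor);
}

return $input.all().map((item) => ({
  json: { ...item.json, followers: parseFollowers(item.json.followers) },
}));
```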
by Jay Hartley
Disclaimer

This template only works on n8n local instances!

How it Works

This workflow allows you to receive webhooks from the public web and have your local workflow catch them, without any remote proxy. It is very useful for running quick tests without exposing your dev server. All you have to do is activate the workflow and use the public address as defined below.

Set up steps

If you use the default key-value storage, there are only three steps:

Install the @horka.tv/n8n-nodes-storage-kv community node.
Put your n8n workflow address in Local Webhook Address.
Activate the workflow and, from Executions, note down your public webhook token from the inputs to Get Latest Requests.

You can now use https://webhook.site/[YOUR TOKEN] as a webhook destination, to receive webhook requests from the public web.
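As a quick smoke test, you could send a request to the public address from any machine and watch it arrive in your local workflow. A minimal Node.js sketch (Node 18+, run as an ES module; replace the token placeholder with your own):

```javascript
// Send a test JSON payload to the public webhook address.
// Replace YOUR-TOKEN with the token noted from the Executions view.
const response = await fetch('https://webhook.site/YOUR-TOKEN', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ hello: 'from the public web' }),
});

console.log(response.status); // the request should then show up in your local workflow
```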
by Marcel Claus-Ahrens
This automation syncs your invoice PDFs from Stripe to an (AWS) S3 bucket each month, in a folder of your choice, with the following subPath: yourFolder/invoiceYear/invoiceMonth/fileName (see the sketch below).

Fill in your credentials and settings in the nodes marked with "*". You can adjust this workflow to your needs. You can also override the year and month in the ENV* node for manual syncs. It will sync every invoice PDF whose created date is greater than the provided year and month; it automatically sets the day to the first day of the desired month.

Enjoy the Workflow! ❤️

https://let-the-work-flow.com Workflow Automation & Development
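For orientation, a minimal sketch of how the subPath could be assembled in a Code node ("Run Once for Each Item" mode). The fileName field is an assumption; created is Stripe's standard epoch-seconds timestamp:

```javascript
// Illustrative sketch: build yourFolder/invoiceYear/invoiceMonth/fileName.
const folder = 'yourFolder';
const created = new Date($json.created * 1000); // Stripe timestamps are in seconds

const year = created.getFullYear();
const month = String(created.getMonth() + 1).padStart(2, '0'); // e.g. "03"

return { json: { ...$json, subPath: `${folder}/${year}/${month}/${$json.fileName}` } };
```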
by Zacharia Kimotho
Remember when you were doing some large research and wanted to quickly bookmark a page and save it, only to find premium options? Worry not; n8n has you covered. You can now create a simple bookmarking app straight in your browser using simple scripts called bookmarklets. A bookmarklet is a bookmark stored in a web browser that contains JavaScript commands that add new features to the browser.

To create one, we need to add a short script to the bookmark tab of our browser like below. A simple hack is to open a new tab and click on the star that appears on the right side.

Now that we have our bookmark, it's time for the fun part. Right-click on the bookmark we just created and select the edit option. This will allow you to set the name you want for your bookmark and the destination URL. The URL used here will be the script that shall "capture" the page we want to bookmark. The code below has been used and tested to work for this example:

```javascript
javascript:(() => {
  var currentUrl = window.location.href;
  var webhookUrl = 'https://$yourN8nInstanceUrl/webhook/1rxsxc04b027-39d2-491a-a9c6-194289fe400c';
  var xhr = new XMLHttpRequest();
  xhr.open('POST', webhookUrl, true);
  xhr.setRequestHeader('Content-Type', 'application/json');
  var data = JSON.stringify({ url: currentUrl });
  xhr.send(data);
})();
```

Your bookmark should look something like this.

Now that we have this set up, we move to n8n to receive the data sent by this script. Create a new Webhook node that receives the POST request as in the workflow, and replace $yourN8nInstanceUrl with your actual n8n instance URL. This workflow can then be configured to send the data to a Notion database. Make sure the Notion database has all the required permissions before executing the workflow; otherwise the URLs will not be saved.
by David Roberts
This workflow shows how you can get your OpenAI assistant to call an n8n workflow as a tool. Since you can put almost any functionality in an n8n workflow, this means you can give your assistant access to almost any data source. Note that to use this template, you need to be on n8n version 1.19.4 or later.