by Yang
Who is this for?

This workflow is perfect for lead generation experts, digital marketers, SEO professionals, and virtual assistants who need to quickly collect local business information based on specific search terms without manually navigating Google Places.

What problem is this workflow solving?

Manually searching Google Places for business leads is time-consuming and inconsistent. This workflow automates the entire process using Dumpling AI's Google Places search endpoint, helping users collect accurate, structured business data and log it into a Google Sheet automatically.

What this workflow does

This workflow runs daily at 1 PM. It starts by reading a list of business-related search terms from a Google Sheet (for example, "dentists in Dallas"). Each term is sent to Dumpling AI's search-places endpoint, which returns local business listings from Google Places. The data is split, structured, and logged row by row in a connected Google Sheet.

Nodes Overview

- Run Every Day at 1 PM: a scheduled trigger that executes the workflow daily.
- Google Sheets (Input) – Fetch Search Terms from Sheet: pulls a list of search terms from a Google Sheet. Each term should describe a business category and location (e.g., "coffee shops in Atlanta").
- HTTP Request – Scrape Google Places via Dumpling AI: sends each search term to Dumpling AI's /search-places endpoint, returning data like business names, phone numbers, websites, ratings, and categories. A request sketch follows below.
- Split In Batches – Split Places Result: breaks the list of businesses returned for each search term into individual items for processing.
- Google Sheets (Output) – Save Each Business to Sheet: saves the scraped data into a second Google Sheet. Each row contains: title, address, rating, category, phoneNumber, website.

📝 Notes

- You must set up Dumpling AI and generate your API key from: Dumpling AI
- You can change the run schedule in the schedule node to fit your needs (e.g., weekly or hourly).
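For orientation, here is a minimal Node.js sketch of what the HTTP Request node sends. The base URL, header, and body field names are assumptions for illustration only — check Dumpling AI's API documentation for the exact schema.

```javascript
// Hypothetical sketch of the Dumpling AI call (Node 18+, global fetch).
// The endpoint URL and field names are assumptions — verify in the docs.
const response = await fetch('https://app.dumplingai.com/api/v1/search-places', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.DUMPLING_API_KEY}`, // your API key
  },
  body: JSON.stringify({ query: 'dentists in Dallas' }), // one search term
});

const data = await response.json();
// Assumed response shape: a list of places carrying title, address, rating,
// category, phoneNumber, and website — one Google Sheet row per business.
console.log(data);
```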
by Bela
Sync your Google Sheets data with your Postgres database table, requiring minimal adjustments. Follow these steps:

- Retrieve Data: pull data from Google Sheets and PostgreSQL.
- Compare Datasets: identify differences, focusing on new or updated entries (a sketch of this step follows below).
- Update PostgreSQL: apply changes to ensure both platforms mirror each other.

Automate this process to regularly synchronize data. Before starting, grant necessary access to both Google Sheets and PostgreSQL, and specify the data details for synchronization. This streamlined workflow enhances data consistency across platforms.

This example is a one-way synchronization from Google Sheets into your Postgres. With small adjustments, you can make it the other way around, or two-way.
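The comparison step boils down to a keyed diff. Here is a minimal plain-JavaScript sketch, assuming each row carries a unique id column (an assumption — substitute whatever key your table actually uses):

```javascript
// Sketch: given rows from the sheet and rows from Postgres, keep only the
// sheet rows that are new or differ from the database copy.
function diffRows(sheetRows, dbRows) {
  const dbById = new Map(dbRows.map((row) => [row.id, row]));
  return sheetRows.filter((row) => {
    const existing = dbById.get(row.id);
    // Note: JSON comparison is order-sensitive; fine for a sketch.
    return !existing || JSON.stringify(existing) !== JSON.stringify(row);
  });
}

// Usage: row 2 changed and row 3 is new, so both are returned for upserting.
const sheet = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Bob' }, { id: 3, name: 'Cy' }];
const db = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Rob' }];
console.log(diffRows(sheet, db));
```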
by Zacharia Kimotho
Create new ClickUp Tasks from Slack commands

This workflow makes it easy to create new tasks in ClickUp from ordinary Slack messages using a simple slash command. For example, the command

/newTask Set task to update new contacts on CRM and assign them to the sales team

creates a new task in ClickUp with the same title and description. For most teams, getting tasks from Slack into ClickUp involves manually re-entering them in ClickUp. What if we could do this with a simple slash command?

Step 1

Create an endpoint URL for your Slack command by creating an Events API app at https://api.slack.com/apps/

Step 2

Define the endpoint for your URL. Create a new webhook endpoint in n8n using the POST method and paste the webhook URL into your Slack app's event configuration. This will send all slash commands associated with the slash to the desired endpoint (a sketch of the payload handling follows below).

Step 3

Log in to the Slack API site (https://api.slack.com/) and create an application. This is the app that runs all automation and commands from Slack. Once your app is ready, navigate to Slash Commands and create a new command, specifying the command itself, the webhook URL, and a description of what the slash command does. Once this is saved, you can test it by sending a demo task to your endpoint.

Once you have verified that the slash command reaches the webhook, create ClickUp API credentials that can be used to create new tasks in ClickUp. The workflow creates a new task with a start date in ClickUp, which can be assigned to the respective team members.

More details about the document setup can be found in the document below. Happy productivity!
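Slack delivers slash commands as a form-encoded POST whose documented fields include command, text, and user_name. A minimal n8n Code-node sketch mapping that payload to task fields — it reuses the message text as both title and description, matching the behavior described above; the output field names are assumptions for the downstream ClickUp node:

```javascript
// Sketch: turn the Slack slash-command payload into ClickUp task fields.
// Slack's documented payload fields include "command", "text", "user_name".
const payload = $json.body; // webhook body as parsed by the n8n Webhook node

const taskText = (payload.text || '').trim();

return [
  {
    json: {
      name: taskText,                 // task title, straight from the message
      description: taskText,          // reuse the message as the description
      requestedBy: payload.user_name, // who triggered the command
    },
  },
];
```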
by CustomJS
This n8n template demonstrates how to convert HTML into a PDF, compress the generated PDF, and return it as a binary response using the PDF Toolkit from www.customjs.space.

Notice

Community nodes can only be installed on self-hosted instances of n8n. @custom-js/n8n-nodes-pdf-toolkit

What this workflow does

- **Convert** the requested HTML to PDF.
- **Compress** the PDF file.
- **Use** a Code node to handle URLs pointing to PDF files if they exceed 6MB (see the sketch below).
- **Compress** the PDF pages.

Requirements

- **Self-hosted** n8n instance
- **CustomJS API key** for compressing PDF files
- **HTML** data to convert into PDF files
- **Code node** for handling URLs that point to PDF files

Workflow Steps

1. Manual Trigger: runs on user interaction.
2. HTML to PDF: request the HTML data, convert the HTML to PDF, or request the PDF from a URL.
3. Compress Pages from PDF: compress the PDF as a binary file.

Usage

Get an API key from CustomJS:
- Sign up to the CustomJS platform.
- Navigate to your profile page.
- Press the "Show" button to get your API key.

Set credentials for the CustomJS API in n8n:
- Copy and paste the API key generated from CustomJS.

Design the workflow:
- A Manual Trigger for starting the workflow.
- HTTP Request nodes for downloading PDF files.
- A Code node for handling URLs that point to PDF files.
- Compress the PDF files.

You can replace the trigger and result-handling logic. For example, you can trigger this workflow by calling a webhook and return the result as the webhook's response — simply replace the Manual Trigger and Write to Disk nodes.
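A minimal sketch of that Code node's size check, assuming the URL arrives in a url field and Node 18+ global fetch (inside an n8n Code node you may need the node's HTTP helper instead). It uses a HEAD request so the file body isn't downloaded just to inspect it:

```javascript
// Sketch: check whether a URL points to a PDF and whether it exceeds 6 MB.
const url = $json.url; // assumed input field carrying the file URL

const head = await fetch(url, { method: 'HEAD' });
const contentType = head.headers.get('content-type') || '';
const size = Number(head.headers.get('content-length') || 0);

return [
  {
    json: {
      url,
      isPdf: contentType.includes('application/pdf'),
      exceedsLimit: size > 6 * 1024 * 1024, // route oversized files differently
    },
  },
];
```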
by Tom
This workflow shows a low-code approach to parsing an XML file and storing its contents in a Google Sheets spreadsheet.

To run the workflow:

- Make sure you are running n8n 0.197 or newer
- Have n8n authenticated with Google Sheets

How it's done:

This workflow first downloads an example file using the HTTP Request node and reads this file using the XML node. It then runs the Item Lists node to split out the individual food items from the example file. It then splits up the workflow into a separate branch creating a new spreadsheet file using the Google Sheets node. To read the column names we're using the Object.keys() method inside a Set node (sketched below). Once the spreadsheet is created (the workflow waits for this using the Merge node), the data is appended to the newly created sheet (again using the Google Sheets node).
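For reference, here is what the Set node's Object.keys() expression computes — the sample item below is an assumption standing in for one food item from the example file:

```javascript
// What an expression like {{ Object.keys($json).join(',') }} evaluates to
// for a single item: the item's JSON keys become the spreadsheet's headers.
const item = { name: 'Belgian Waffles', price: '$5.95', calories: '650' };
const headerRow = Object.keys(item).join(','); // "name,price,calories"
console.log(headerRow);
```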
by Tom
This workflow identifies new rows in Google Sheets using a separate column that keeps track of already processed rows. For this approach to work, the sheet needs to meet two requirements:

- A unique identifier for each row is required
- A column used to differentiate new/processed rows is present

In our example sheet, the row identifier is named ID and the new/processed column is called Processed. Update the workflow accordingly if your columns have different names.

When the workflow first runs, it discovers all three example rows as new. After processing them, it adds a timestamp to the Processed column. The next time the workflow is executed, it skips the existing rows and only processes newly added data. The filtering step could look like the sketch below.
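A minimal n8n Code-node sketch of that filtering logic, assuming rows arrive as items with ID and Processed fields as in the example sheet:

```javascript
// Sketch: keep only rows that have not been processed yet, then stamp them
// so the next run skips them.
const rows = $input.all().map((item) => item.json);

const newRows = rows.filter((row) => !row.Processed);

// Downstream, a Google Sheets update step would write this timestamp back
// into the Processed column for each handled row.
return newRows.map((row) => ({
  json: { ...row, Processed: new Date().toISOString() },
}));
```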
by Yaron Been
Adamantiamable Lumi AI Generator

Overview

This n8n workflow integrates with the Replicate API to use the adamantiamable/lumi model. This powerful AI model can generate high-quality images based on your inputs.

Features

- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters

Required Parameters

- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

Optional Parameters

- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results fall between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

How to Use

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate images
4. Access the generated output from the final node

API Reference

- Model: adamantiamable/lumi
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
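A minimal sketch of the underlying API call, using the parameters listed above. The model version ID is a placeholder you would look up on Replicate, and the input values are illustrative:

```javascript
// Sketch: create a prediction via Replicate's HTTP API (Node 18+, global fetch).
// REPLICATE_API_TOKEN and all input values are assumptions for illustration.
const response = await fetch('https://api.replicate.com/v1/predictions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    version: '<model-version-id>', // the adamantiamable/lumi version to run
    input: {
      prompt: 'a portrait in the lumi style', // include trigger_word if known
      model: 'dev',
      lora_scale: 1,
      megapixels: '1',
    },
  }),
});

const prediction = await response.json();
// Poll the prediction until status is "succeeded", then read its output —
// this is the status checking / result retrieval the workflow automates.
console.log(prediction.id, prediction.status);
```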
by AiAgent
Disclaimer

This workflow contains a community node.

What It Does

Leverage the power of GPT-4o to seamlessly summarize a scientific research PDF of your choosing. By simply downloading a PDF of a scientific research article into a folder on your computer, this powerful workflow will automatically read the article and produce a detailed summarization of it. The workflow will then save this summarization onto your computer for future convenience.

Who Is This For?

The workflow is the perfect tool for all types of self-learners attempting to improve their knowledge base as efficiently as possible, using peer-reviewed scientific articles. It provides a more detailed summary of a scientific research article than a typical abstract, while taking a fraction of the time it would take to read an entire paper — enough information to get a firm grasp of the article and to decide whether you would like to dive deeper into it. This workflow is perfect for professionals who need to stay current with the most recent literature in their field, as well as self-learners who enjoy diving deep into a specific topic. It can aid anyone who is performing academic research, conducting a literature review, or expanding their knowledge of a field using peer-reviewed sources.

How It Works

The moment you save a PDF of a scientific research article to a predesignated folder, the workflow begins to read the article and produces a summary, which it saves to another designated folder on your computer, via the following steps:

1. Search the internet and your favorite journal databases for a scientific article that interests you.
2. With the n8n workflow activated, download a PDF of the scientific article and save it to the designated folder. Saving the article to this folder triggers the workflow.
3. The workflow extracts the contents of the PDF and passes the data to an AI agent powered by GPT-4o.
4. The AI agent produces a detailed summary of the scientific article with the following sections:
   - Introduction: the importance of the article and the specific aims of the study.
   - Methods: how the study was conducted, which variables were evaluated, the inclusion and exclusion criteria, and the measurement standards.
   - Results: the specific data reported for all variables tested, as well as the statistical significance of each result.
   - Summary: the importance of the results, how they compare to other scientific articles in the same field, and the authors' recommendations on how to interpret the data.
   - Conclusion: the strengths and weaknesses of the article, plus gaps in knowledge on the subject that would make good topics for future studies.
5. After the AI agent has completed its summary, the workflow converts the summary to text and saves it to a designated folder on your computer for future viewing.

Set Up Steps

1. Create a folder on your computer where you would like to save your scientific article PDFs, then copy the path to this folder into the Local File Trigger node.
2. Obtain an OpenAI API key from platform.openai.com/api-keys.
3. Connect this API key to the OpenAI Chat Model attached to the Summarizer Tools Agent.
4. Fund your OpenAI account; GPT-4o costs roughly $0.01 per run of the workflow.
5. Finally, create a folder on your computer for the saved summarizations and copy its path into the Save to Folder node.

Customization

This workflow is easy to customize for a specific area of research to produce the best possible summarization. If you have expertise in a particular field of study, you can tailor the output to a higher level of understanding for that field. For example, if you are a marine biologist, change the text prompt in the summarizer tool from "You are a research expert who is providing data to another researcher." to "You are a marine biologist expert who is providing data to another marine biologist."

Disclaimer

If the PDF is too large, OpenAI will not be able to summarize it and will return an error saying you have reached your request limit.
by Polina Medvedieva
Who is this template for

This template is for marketers, SEO specialists, or content managers who need to analyze keywords and identify which ones contain references to a specific area or topic — in this case, IT software, services, tools, or apps.

Use case

Automating the process of scanning a large list of keywords to determine if they reference known IT products or services (like ServiceNow, Salesforce, etc.), and updating a Google Sheet with this classification. This helps in categorizing keywords for targeted SEO campaigns, content creation, or market analysis.

How this workflow works

1. Fetches keyword data from a Google Sheet
2. Processes keywords in batches to prevent rate limiting
3. Uses an AI agent (OpenAI) to analyze each keyword and determine if it contains a reference to an IT service/software
4. Updates the original Google Sheet with the results in a "Service?" column
5. Continues processing until all keywords are analyzed

Set up steps

1. Connect your Google Sheets account credentials
2. Set the Google Sheet document ID (currently using "Copy of Sheet1 1")
3. Configure the OpenAI API credentials for the AI agent
4. Adjust the batch size (currently 6) if needed, based on your API rate limits
5. Ensure the Google Sheet has the required columns: "Number", "Keyword", and "Service?"

The AI agent's prompt is highly customizable to match different identification needs. For example, instead of looking for IT software/services, you could modify the prompt to identify:

- Industry-specific terms (healthcare, finance, education)
- Geographic references (cities, countries, regions)
- Product categories (electronics, clothing, food)
- Competitor brand mentions

Here's how you could modify the prompt for different use cases:

// For identifying educational content keywords
"Check the keyword I provided and define if this keyword relates to educational content, courses, or learning materials and return yes or no."

// For identifying local service keywords
"Check the keyword I provided and determine if it contains location-specific terms (city names, neighborhoods, regions) that suggest local service intent and return yes or no."

// For identifying competitor mentions
"Check the keyword I provided and determine if it mentions any of our competitors (CompetitorA, CompetitorB, CompetitorC) and return yes or no."
by Nskha
An innovative n8n workflow that monitors cryptocurrency prices on Binance, identifies significant market movements, and sends customized alerts through Telegram. Ideal for traders and enthusiasts seeking real-time market insights.

How It Works

1. Trigger Options: choose between a manual trigger or a scheduled trigger to start the workflow.
2. Fetch Market Data: the 'Binance 24h Price Change' node retrieves the latest 24-hour price changes for cryptocurrencies from Binance.
3. Identify Significant Changes: the 'Filter by 10% Change rate' node filters out cryptocurrencies with price changes of 10% or more.
4. Aggregate Data: the 'Aggregate' node combines all significant changes into a single dataset.
5. Format Data for Telegram: the 'Split By 1K chars' node formats this data into chunks suitable for Telegram's message size limit (a sketch follows below).
6. Send Telegram Message: the 'Send Telegram Message' node broadcasts the formatted message to a specified Telegram chat.

Set Up Steps

- **Estimated Time**: about 1-5 minutes for setup.
- **Initial Configuration**: set up a Binance API connection (optional) and your Telegram bot credentials.
- **Customization**: adjust the trigger to your preference (manual or scheduled) and update the Telegram chat ID.

Create Telegram bot steps

Setting up a Telegram bot and obtaining its token involves several steps. Here's a detailed guide:

1. Start a chat with BotFather: open Telegram and search for "BotFather" — the official bot that allows you to create new bots. Start a chat with BotFather by clicking the "Start" button at the bottom of the screen.
2. Create a new bot: in the chat with BotFather, send /newbot. BotFather will ask you to choose a name for your bot; this is a display name and can be anything you like. Next, choose a username for your bot. This must be unique and end in bot, for example my_crypto_alert_bot.
3. Receive your token: after you've set the name and username, BotFather will provide you with a token. This token is like a password for your bot, so keep it secure. The message will look something like this: "Done! Congratulations on your new bot. You will find it at t.me/my_crypto_alert_bot. You can now add a description, about section and profile picture for your bot, see /help for a list of commands. Use this token to access the HTTP API: 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11". The token in this case is 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11.
4. Test your bot: find your bot by searching for its username in Telegram. Start a chat with your bot and try sending it a message. Although it won't respond yet, this step is essential to ensure it's set up correctly.
5. Use the token in n8n: in your n8n workflow, when setting up the Telegram node, you'll be prompted to enter credentials. Choose to add new credentials and paste the token you received from BotFather.
6. Get your chat ID: to send messages to a specific chat, you need to know the chat ID. The easiest way to find it is to message your bot first, then use a bot like @userinfobot to get your chat ID. Once you have the chat ID, configure it in the Telegram node in your n8n workflow.
7. Finalize your workflow: with the bot token and chat ID set up in n8n, your Telegram notifications should work as intended.

Remember, keep your bot token secure and never share it publicly. If your token is compromised, you can always generate a new one by chatting with BotFather and selecting /token.
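A minimal sketch of the 'Split By 1K chars' step as an n8n Code node. The 1,000-character chunk size comfortably fits Telegram's 4,096-character message limit; the input field name report is an assumption:

```javascript
// Sketch: split one long alert text into 1,000-character chunks, emitting
// one n8n item per chunk so each becomes its own Telegram message.
const text = $json.report; // assumed field holding the aggregated alert text
const CHUNK_SIZE = 1000;   // well under Telegram's 4,096-character limit

const chunks = [];
for (let i = 0; i < text.length; i += CHUNK_SIZE) {
  chunks.push(text.slice(i, i + CHUNK_SIZE));
}

return chunks.map((chunk) => ({ json: { message: chunk } }));
```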
Example result

Keywords: n8n workflow, cryptocurrency market, Binance API, Telegram bot, price alert system, automated trading signals, market analysis
by Harshil Agrawal
This example workflow demonstrates how to handle pagination. It assumes that the API you are making requests to is paginated and returns a cursor (something that points to the next page). The workflow makes requests to the HubSpot API to fetch contacts; you will have to modify the parameters based on your API.

- Config URL node: sets the URL that the HTTP Request node calls.
- HTTP Request node: makes the API call and returns the data from the API. Based on your API, you will have to modify the parameters of the node.
- NoOp node and Wait node: these nodes help avoid hitting rate limits. If your API has rate limits, make sure you configure an appropriate time in the Wait node.
- Check if pagination: this IF node checks whether the API returned a cursor. If it didn't, there is no more data to fetch and the node returns false. If the API did return a cursor, there is still data to fetch and the node returns true (see the sketch below).
- Set next URL: this Set node sets the URL that the HTTP Request node calls in the next cycle.
- Combine all data: combines all the data returned by the HTTP Request node's API calls.
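The node loop implements the standard cursor pattern. For reference, here is a minimal plain-JavaScript sketch against HubSpot's contacts endpoint, whose responses carry the cursor in paging.next.after; replace the endpoint, auth, and cursor path for your API (HUBSPOT_TOKEN and the 500 ms pause are assumptions):

```javascript
// Sketch: cursor-based pagination in one loop (Node 18+, global fetch).
async function fetchAllContacts() {
  const results = [];
  let after; // the cursor; undefined on the first request

  do {
    const url = new URL('https://api.hubapi.com/crm/v3/objects/contacts');
    if (after) url.searchParams.set('after', after);

    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${process.env.HUBSPOT_TOKEN}` },
    });
    const page = await res.json();

    results.push(...page.results);
    after = page.paging?.next?.after; // undefined when no more pages exist

    await new Promise((r) => setTimeout(r, 500)); // crude rate-limit pause
  } while (after);

  return results;
}
```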
by Trung Tran
CV Extractor: Google Drive to Sheet + Slack Update for Recruiters

Watch the demo video below:

> This workflow automatically processes resumes (PDFs) uploaded or updated in a Google Drive folder. It extracts and structures the candidate's information using AI, then updates or inserts the data into a Google Sheet, acting as a central talent database. Finally, it notifies the hiring team via Slack with a summary.

Perfect for HR and TA teams, this automation eliminates the repetitive task of manually copying candidate details from CVs into spreadsheets, saving hours of admin work every week and keeping your hiring pipeline clean, fast, and up to date.

👤 Who's it for

This workflow is designed for:

- **Recruiters** and **HR coordinators** who manage candidate profiles via Google Drive.
- **Talent Acquisition teams** who want to automate CV parsing, enrichment, and database updating.
- Companies or hiring agencies using spreadsheets for candidate tracking and CRM-like HR ops.

⚙️ How it works / What it does

This smart and fully automated workflow:

1. Monitors a Google Drive folder for any uploaded or updated resumes (PDFs).
2. Downloads and extracts resume content using PDF parsing.
3. Sends the raw text to GPT-4, which returns a structured profile (name, title, experience, skills, etc.).
4. Verifies the profile and transforms it into a clean, row-based format.
5. Upserts the candidate profile into a Google Sheet (insert or update by email).
6. Notifies the hiring team in Slack or email that a profile was added or updated.

This is a no-touch pipeline to keep your candidate data clean, current, and centralized.

🛠️ How to set up

Step 1: Prepare your Google Drive folder
- Create a folder like /SmartHR/cv/
- Upload sample resumes in .pdf format

Step 2: Create your Google Sheet
- Columns to include: Email, FullName, JobTitle, Phone, Location, Experience, Education, Skills, etc.
- Optional: add conditional formatting to highlight updates

Step 3: Connect the n8n workflow
- Use the Google Drive Trigger: fileCreated → new profile uploaded; fileUpdated → existing profile modified
- Use Google Drive (Download file) to fetch the resume
- Use Extract From PDF to get the raw content

Step 4: Configure the GPT-4 node
- Use the structured system prompt to extract profile information
- Use a JSON parser node to ensure safe formatting for the next steps

Step 5: Transform & save
- Use a Function node to map fields to the Google Sheet columns (sketched below)
- Use Append or update row (based on email as the unique key)
- Optionally send a Slack or email message to notify the hiring team

✅ Requirements

- 🔑 OpenAI GPT-4 API key
- 🟩 n8n Cloud or self-hosted with: Google Drive integration, Google Sheets integration, email/Slack credentials (optional)
- 📄 Resume files in readable PDF format
- 📊 Google Sheet prepared with relevant headers

✏️ How to customize the workflow

| Part | Customization Options |
|----------------------------|----------------------------------------------------------------------------------------|
| GPT Prompt | Tune for different job levels or fields (e.g., engineers vs marketers) |
| Field Mapping | Update the transform node to include other profile fields (LinkedIn, portfolio, etc.) |
| Notification | Switch to Microsoft Teams, Telegram, or email alerts instead of Slack |
| Data Store | Replace the Google Sheet with Airtable, Notion, or a database system |
| Trigger Source | Trigger from email attachments or a webhook instead of Google Drive if needed |
| Output Format | Generate PDF profile cards or summary documents using an HTML → PDF node |
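A minimal sketch of Step 5's Function node, assuming the GPT extraction step outputs a JSON profile with the lower-camel-case fields shown below (an assumption — adjust the mapping to match your prompt's output and your sheet's headers):

```javascript
// Sketch: map the GPT-extracted profile JSON onto the Google Sheet columns.
// The output keys mirror the sheet headers listed in Step 2.
const profile = $json; // structured output from the GPT-4 extraction step

return [
  {
    json: {
      Email: profile.email,          // unique key used by "Append or update row"
      FullName: profile.fullName,
      JobTitle: profile.jobTitle,
      Phone: profile.phone,
      Location: profile.location,
      Experience: profile.experience,
      Education: profile.education,
      Skills: Array.isArray(profile.skills)
        ? profile.skills.join(', ')  // flatten a skills list into one cell
        : profile.skills,
    },
  },
];
```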