by Niklas Hatje
**Use Case**

In most companies, employees have a lot of great ideas. That was the same for us at n8n. We wanted to make it as easy as possible for everyone to add their ideas to a formatted database: it should live somewhere everyone already is and where a new idea can be added without much extra effort. Since we're using Slack, this seemed to be the perfect place to easily capture ideas and collect them in Notion.

**What this workflow does**

This workflow waits for a webhook call from Slack that is fired when users use the /idea command on a bot you will create as part of this template. It then checks the command, adds the idea to Notion, and notifies the user about the newly added idea.

**Creating your Slack bot**

- Visit https://api.slack.com/apps, click on New App, and choose a name and workspace.
- Click on OAuth & Permissions and scroll down to Scopes -> Bot Token Scopes.
- Add the chat:write scope.
- Head over to Slash Commands and click on Create New Command.
- Use /idea as the command.
- Copy the test URL from the Webhook node into Request URL.
- Add whatever feels best to the description and usage hint.
- Go to Install App and click Install.

**Setup**

- Add a database in Notion with the columns Name and Creator.
- Add your Notion credentials and add the integration to your Notion page.
- Fill the setup node below.
- Create your Slack app (see the other sticky).
- Click Test workflow and use the /idea command in Slack.
- Activate the workflow and replace the Request URL with the production URL from the webhook.

**How to adjust it to your needs**

- You can adjust the table in Notion, for example by adding different types of ideas or the areas they impact.
- You might want to add templates in Notion to make it easier for users to fill their ideas with details.
- Rename the Slack command to whatever works best for you.

**How to enhance this workflow**

At n8n we use this workflow in combination with some others. For example, we have the following on top:

- A /bug Slack command that adds a new bug to Linear. Here we're using AI to classify the bugs and move them to the right team (see this template and this template).
- Other command types, like /pain, to be less solution-driven.
- A Votes column that allows everyone to vote on ideas/pain points in the list, making it easier for everyone to give input.
- A workflow that runs once a week and highlights the most popular new ideas and the most active voters (see here).
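If you want to test the webhook before installing the Slack app, you can simulate the slash-command call yourself. Slack sends slash commands as a form-encoded POST; the field names below follow Slack's documented payload, and the URL is a placeholder for the test URL from your own Webhook node. A minimal sketch:

```bash
# Simulate Slack's /idea slash command (form-encoded, as Slack sends it).
# Replace the URL with the test URL from your Webhook node.
curl -X POST https://your-n8n-instance/webhook-test/slack-idea \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'command=/idea' \
  --data-urlencode 'text=Add dark mode to the dashboard' \
  --data-urlencode 'user_name=jane.doe' \
  --data-urlencode 'response_url=https://hooks.slack.com/commands/T000/B000/XXXX'
```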
by Rui Borges
This workflow automates time tracking using location-based triggers.

**How it works**

- Trigger: It starts when you enter or exit a specified location, triggering a shortcut on your iPhone.
- Webhook: The shortcut sends a request to a webhook in n8n.
- Check-In/Check-Out: The webhook receives the request and records the time and whether it was a "Check-In" or "Check-Out" event.
- Google Sheets: This data is then logged into a Google Sheet, creating a record of your work hours.

**Set up steps**

- Google Drive: Connect your Google Drive account.
- Google Sheets: Connect your Google Sheets account.
- Webhook: Set up a webhook node in n8n.
- iPhone Shortcuts: Create two shortcuts on your iPhone, one for "Check-In" and one for "Check-Out."
- Configure Shortcuts: Configure each shortcut to send a request to the webhook with the appropriate "Direction" header (see the example request below).

It's easy to set up, around 5 minutes.
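A minimal sketch of the request each shortcut sends, assuming the webhook reads the "Direction" header as described above; the URL and path are placeholders for your own n8n webhook:

```bash
# "Check-In" shortcut; the second shortcut sends the same request
# with the header value "Check-Out". Replace the URL with your webhook URL.
curl -X POST https://your-n8n-instance/webhook/time-tracking \
  -H 'Direction: Check-In'
```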
by Agent Studio
Overview This workflow provides Retell agent builders with a simple way to populate dynamic variables using n8n. The workflow fetches user information from a Google Sheet based on the phone number and sends it back to Retell. It is based on Retell's Inbound Webhook Call. Retell is a service that lets you create Voice Agents that handle voice calls simply, based on a prompt or using a conversational flow builder. Who is it for For builders of Retell's Voice Agents who want to make their agents more personalized. Prerequisites Have a Retell AI Account Create a Retell agent Purchase a phone number and associate it with your agent Create a Google Sheets - for example, make a copy of this one. Your Google Sheet must have at least one column with the phone number. The remaining columns will be used to populate your Retell agent’s dynamic variables. All fields are returned as strings to Retell (variables are replaced as text) How it works The webhook call is received from Retell. We filter the call using their whitelisted IP address. It extracts data from the webhook call and uses it to retrieve the user from Google Sheets. It formats the data in the response to match Retell's expected format. Retell uses this data to replace dynamic variables in the prompts. How to use it See the description for screenshots! Set the webhook name (keep it as POST). Copy the Webhook URL (e.g., https://your-instance.app.n8n.cloud/webhook/retell-dynamic-variables) and paste it into Retell's interface. Navigate to "Phone Numbers", click on the phone number, and enable "Add an inbound webhook". In your prompt (e.g., "welcome message"), use the variable with this syntax: {{variable_name}} (see Retell's documentation). These variables will be dynamically replaced by the data in your Google Sheet. Notes In Google Sheets, the phone number must start with '+. Phone numbers must be formatted like the example: with the +, extension, and no spaces. You can use any database—just replace Google Sheets with your own, making sure to keep the phone number formatting consistent. 👉 Reach out to us if you're interested in analysing your Retell Agent conversations.
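For reference, the response the workflow sends back is shaped for Retell's inbound call webhook. A minimal sketch of that shape, assuming Retell's documented call_inbound / dynamic_variables format; the variable names (name, last_order) are placeholders for whatever columns your sheet contains, and all values are strings as noted above:

```json
{
  "call_inbound": {
    "dynamic_variables": {
      "name": "Jane Doe",
      "last_order": "2024-11-02"
    }
  }
}
```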
by Ludovic Bablon
**Who is this template for?**

This workflow template is built for SEO specialists and digital marketers looking to uncover keyword opportunities effortlessly. It uses Google's autocomplete magic to help you spot what's trending.

**How it works**

Just give it a keyword. The workflow then queries Google and collects all autocomplete suggestions by appending every letter from A to Z to your keyword (see the sketch below for the kind of request this produces).

Output example with the keyword "n8n":

You can sort these keywords and give them to an LLM to produce entity-enriched text.

**Setup instructions**

It works right out of the box. 🛠️ However, you may want to tweak the output format to better fit your use case.

**Exporting the keywords**

You can easily add a node to export the keywords in various ways:

- via a webhook
- by email
- as a file (e.g., saved to Google Drive)
- directly to a website

**Adapting the language**

Autocomplete results depend on the selected language. You can change the &hl=en parameter in the Google Autocomplete node. Replace the "en" part with the language code of your choice. Examples:

- &hl=fr → French
- &hl=es → Spanish
- &hl=de → German
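As a rough illustration of what the A-to-Z expansion does, here is a small sketch that queries Google's public suggest endpoint once per letter. The endpoint and the client=firefox parameter are assumptions (the workflow's Google Autocomplete node may use a different URL), and "n8n" is just an example keyword:

```bash
# Collect autocomplete suggestions for "n8n a" ... "n8n z".
# Endpoint and client=firefox are assumptions; adjust &hl= for your language.
for letter in {a..z}; do
  curl -s "https://suggestqueries.google.com/complete/search?client=firefox&hl=en&q=n8n+${letter}"
  echo
done
```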
by Vitali
**Template Description**

This n8n workflow template allows you to create a masked email address using the Fastmail API, triggered by a webhook. This is especially useful for generating disposable email addresses for privacy-conscious users or for testing purposes.

**Workflow Details**

- Webhook Trigger: The workflow is initiated by sending a POST request to a specific webhook. You can include state and description in your request body to customize the masked email's state and description.
- Session Retrieval: The workflow makes an HTTP request to the Fastmail API to retrieve session information. It uses this data to authenticate further requests.
- Create Masked Email: Using the retrieved session data, the workflow sends a POST request to Fastmail's JMAP API to create a masked email. It uses the provided state and description from the webhook payload.
- Prepare Output: Once the masked email is successfully created, the workflow extracts the email address and attaches the description for further processing.
- Respond to Webhook: Finally, the workflow responds to the original POST request with the newly created masked email and its description.

**Requirements**

- **Fastmail API Access**: You will need valid API credentials for Fastmail configured with HTTP Header Authentication.
- **Authorization Setup**: Optionally set up authorization if your webhook is exposed to the internet to prevent misuse.
- **Custom Webhook Request**: Use a tool like curl or create a shortcut on macOS/iOS to send the POST request to the webhook with the necessary JSON payload, like so:

    curl -X POST -H 'Content-Type: application/json' https://your-n8n-instance/webhook/87f9abd1-2c9b-4d1f-8c7f-2261f4698c3c -d '{"state": "pending", "description": "my mega fancy masked email"}'

This template simplifies the process of integrating masked email functionality into your projects or workflows and can be extended for various use cases.

Feel free to use the companion shortcut I've also created. Please update the authorization header in the shortcut if needed. https://www.icloud.com/shortcuts/ac249b50eab34c04acd9fb522f9f7068
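For context on what the "Create Masked Email" step sends, here is a sketch of a MaskedEmail/set call against Fastmail's JMAP API, based on Fastmail's published maskedemail capability. The accountId comes from the earlier session request, and the token and endpoint are placeholders; treat this as an illustration rather than the exact request the node builds:

```bash
# Sketch of the JMAP call behind the "Create Masked Email" step.
# accountId comes from the session response; token and URL are placeholders.
curl -X POST https://api.fastmail.com/jmap/api/ \
  -H 'Authorization: Bearer YOUR_FASTMAIL_API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
    "using": ["urn:ietf:params:jmap:core", "https://www.fastmail.com/dev/maskedemail"],
    "methodCalls": [[
      "MaskedEmail/set",
      { "accountId": "YOUR_ACCOUNT_ID",
        "create": { "new": { "state": "pending", "description": "my mega fancy masked email" } } },
      "0"
    ]]
  }'
```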
by Ranjan Dailata
**Who this is for?**

Extract Amazon Best Seller Electronic Info is an automated workflow that extracts best seller data from Amazon's Electronics section using Bright Data Web Unlocker, transforms it into structured JSON using Google Gemini's LLM, and forwards the fully structured JSON response to a specified webhook for downstream use.

This workflow is tailored for:

- **eCommerce Analysts** who need to monitor Amazon best-seller trends in the Electronics category and track changes in real time or on a schedule.
- **Product Intelligence Teams** who want structured insights on competitor offerings, including rankings, prices, ratings, and promotions.
- **AI-powered Chatbot Developers** who are building assistants capable of answering product-related queries with fresh, structured data from Amazon.
- **Growth Hackers & Marketers** looking to automate competitive research and surface trending product data to inform pricing strategies.
- **Data Aggregators and Price Trackers** who need reliable and smart scraping of Amazon data enriched with AI-driven parsing.

**What problem is this workflow solving?**

Keeping up with Amazon's best sellers in Electronics is a time-consuming, error-prone task when done manually. This workflow automates the process by:

- Automating data extraction from Amazon Best Sellers using Bright Data, ensuring reliable access to real-time, structured data.
- Enhancing raw data with Google Gemini, turning product lists into structured JSON using the Google Gemini LLM.
- Sending results to a webhook, enabling seamless integration into dashboards, databases, or chatbots.

**What this workflow does**

The workflow performs the following steps:

- Extracts the Amazon Best Seller Electronics page info using Bright Data's Web Unlocker API (see the sketch below).
- Processes the unstructured content using Google Gemini's Flash Exp model to extract structured product data.
- Sends the structured information to a webhook endpoint.

**Setup**

- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
- In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
- Update the Amazon URL and the Bright Data zone in the "Amazon URL with the Bright Data Zone" node.
- Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**

This workflow is built to be flexible, whether you're a market researcher, e-commerce entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Change the Amazon category**: Update the Amazon URL with the topic of your interest, such as Computers & Accessories, Home Audio, etc.
- **Customize the Gemini prompt**: Update the Gemini prompt to get different styles of output, such as comparison tables, summaries, or feature highlights.
- **Send output to other destinations**: Replace the webhook URL to forward output to Google Sheets, Airtable, Slack or Discord, or custom API endpoints.
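As a sketch of the kind of request the Bright Data step issues, here is a call using Bright Data's direct API access for Web Unlocker. The zone name and token are placeholders, and the workflow's node may pass these settings differently, so treat this as an illustration only:

```bash
# Sketch of a Web Unlocker request via Bright Data's direct API access.
# Zone name and token are placeholders; "format": "raw" returns the page HTML.
curl -X POST https://api.brightdata.com/request \
  -H 'Authorization: Bearer XXXXXXXXXXXXXX' \
  -H 'Content-Type: application/json' \
  -d '{
    "zone": "web_unlocker1",
    "url": "https://www.amazon.com/gp/bestsellers/electronics",
    "format": "raw"
  }'
```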
by Rodrigue Gbadou
**What this workflow does**

This n8n workflow collects client feedback through a form (Tally, Typeform, or Google Forms) and uses AI to analyze it. It automatically generates a summary of the positive points, highlights areas for improvement, and drafts a short social media post based on the feedback.

Ideal for:

- Freelancers
- Customer support teams
- Online service providers
- Coaches and educators

**Setup steps**

- Connect your form tool to the Webhook node (POST method) and make sure it sends a feedback field (see the example request below).
- Add your DeepSeek (or other GPT-compatible) API key to the AI request node.
- Configure the email node with your SMTP credentials and desired recipient address.
- Replace the Telegram node with Slack, Buffer, or another integration if you prefer.
- (Optional) Customize the prompt in the function node for a different tone or language.

🕐 Estimated setup time: ~15 minutes
💬 Sticky notes are included and clearly positioned to guide you.

**Technologies used**

- n8n Webhook node
- n8n Function node
- DeepSeek Chat or compatible AI API
- Email node (SMTP)
- Telegram node (or other integration)
- Sticky notes for setup guidance

**Use cases**

- Analyze feedback from onboarding or satisfaction surveys
- Create ready-to-publish social media content from real customer praise
- Help support or marketing teams act on feedback immediately
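A minimal test call for the webhook, assuming a JSON body carrying the feedback field the workflow expects; the URL and path are placeholders for your own webhook:

```bash
# Simulate a form submission carrying the "feedback" field the workflow expects.
curl -X POST https://your-n8n-instance/webhook/client-feedback \
  -H 'Content-Type: application/json' \
  -d '{"feedback": "The onboarding was smooth, but I wish the reports loaded faster."}'
```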
by ist00dent
This n8n template allows you to instantly fetch a random dog image from the Dog CEO API by simply sending a webhook request. It's a fun and simple way to integrate random dog photos into your projects, whether for websites, applications, or playful automations.

**🔧 How it works**

- Trigger Webhook: This node acts as the entry point for the workflow. It listens for any incoming POST request. No specific data is required in the webhook body, as the workflow fetches a random image.
- Fetch Random Dog Image: This node makes an HTTP GET request to https://dog.ceo/api/breeds/image/random. The API responds with a JSON object containing the URL of a random dog image.
- Respond with Image URL: This node sends the URL of the random dog image back to the service that initiated the webhook.

**👤 Who is it for?**

This workflow is ideal for:

- Developers: Quickly integrate random dog images into web applications, bots, or prototypes.
- Content Creators: Get fresh, random dog photos for social media, blogs, or presentations.
- Learning n8n: A straightforward example of using a webhook to trigger an API call and return data.
- Anyone who loves dogs!

**📑 Data Structure**

When you trigger the webhook, you can send an empty POST request body. The workflow will return a JSON response similar to this (the message URL will vary):

    {
      "message": "https://images.dog.ceo/breeds/hound-walker/n02089867_2626.jpg",
      "status": "success"
    }

**⚙️ Setup Instructions**

- Import Workflow: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
- Configure Webhook Path: Double-click the Trigger Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /get-dog-image).
- Activate Workflow: Save and activate the workflow.

**📝 Tips**

- Download the image: Instead of just returning the URL, you can download the image and then process it. Insert another HTTP Request node after Fetch Random Dog Image to download the image binary. Set the HTTP Request node's 'Response Format' to 'Binary' and use the expression ={{ $json.message }} for the URL.
- Save to cloud storage: After downloading the image (as described above), you can save it to various cloud storage services:
  - Google Drive: Add a Google Drive node, connect it to the output of the image download node, and configure it to upload the binary data to a specific folder.
  - Amazon S3: Add an AWS S3 node and configure it to upload the binary data, specifying your bucket and desired filename.
  - Dropbox: Use the Dropbox node to upload the image file.
- Send as a message: Share the dog image directly in a chat or email:
  - Slack/Discord/Telegram: Use the respective integration node to send the image URL or the downloaded image as an attachment.
  - Email: Attach the downloaded image to an email using an Email or Gmail node.
- Display on a web page: If you're embedding this into a web application, you can simply use the returned URL in an <img> tag to display the image.
- Error handling: You can add an Error Trigger node to catch any issues during the image fetching process (e.g., if the Dog CEO API is down) and send notifications.
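Once the workflow is active, a POST with an empty body is enough to trigger it. The path below assumes the example /get-dog-image path from the setup instructions; replace the host with your own instance:

```bash
# Trigger the workflow; the response body contains the random image URL.
curl -X POST https://your-n8n-instance/webhook/get-dog-image
```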
by RedOne
This workflow is designed for e-commerce store owners, operations managers, and developers who use Shopify as their e-commerce platform and want an automated way to track and analyze their order data. It is particularly useful for businesses that:

- Need a centralized view of all Shopify orders
- Want to analyze order trends without logging into Shopify
- Need to share order data with team members who don't have Shopify access
- Want to build custom reports based on order information

**What Problem Is This Workflow Solving?**

While Shopify provides excellent order management within its platform, many businesses need their order data available in other systems for various purposes:

- **Data accessibility**: Not everyone in your organization may have access to Shopify's admin interface.
- **Custom reporting**: Google Sheets allows for flexible analysis and report creation.
- **Data integration**: Having orders in Google Sheets makes it easier to combine with other business data.
- **Backup**: Creates an additional backup of your critical order information.

**What This Workflow Does**

This n8n workflow creates an automated bridge between your Shopify store and Google Sheets:

- Listens for new order notifications from your Shopify store via webhooks
- Processes the incoming order data and transforms it into a structured format (see the sketch below)
- Stores each new order in a dedicated Google Sheets spreadsheet
- Sends real-time notifications to Telegram when new orders are received or errors occur

**Setup**

Create a Google Sheet

- Create a new Google Sheet to store your orders.
- Add a sheet named "orders" with the following columns: orderId, orderNumber, created_at, processed, processed_at, json, customer, shippingAddress, lineItems, totalPrice, currency.

Set Up Telegram Bot

- Create a Telegram bot using BotFather (send /newbot to @BotFather).
- Save your bot token for use in n8n credentials.
- Start a chat with your bot and get your chat ID (you can use @userinfobot).

Configure the Workflow

- Set your Google Sheet ID in the "Edit Variables" node.
- Enter your Telegram chat ID in the "Edit Variables" node.
- Set up your Telegram API credentials in n8n.

Configure Shopify Webhook

- In your Shopify admin, go to: Settings > Notifications > Webhooks.
- Create a new webhook for "Order creation".
- Set the URL to your n8n webhook URL (from the "Receive New Shopify Order" node).
- Set the format to JSON.

**How to Customize This Workflow to Your Needs**

- **Additional data**: Modify the "Transform Order Data to Standard Format" function to extract more Shopify data.
- **Multiple sheets**: Duplicate the Google Sheets node to store different aspects of orders in separate sheets.
- **Telegram messages**: Customize the text in the Telegram nodes to include more details or rich formatting.
- **Data processing**: Add nodes to perform calculations or transformations on order data.
- **Additional notifications**: Add more channels like Slack, Discord, or SMS.
- **Integrations**: Extend the workflow to send order data to other systems like CRMs, ERPs, or accounting software.

**Final Notes**

This workflow serves as a foundation that you can build upon to create a comprehensive order management system tailored to your specific business needs.
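To make the sheet layout concrete, here is a hedged sketch of what one transformed row could look like before it is appended to the "orders" sheet. The exact fields the Transform function extracts may differ in your copy of the workflow, and all values below are illustrative:

```json
{
  "orderId": "5678901234567",
  "orderNumber": "#1001",
  "created_at": "2024-11-02T10:15:00-05:00",
  "processed": "no",
  "processed_at": "",
  "json": "{ ...full raw Shopify order payload... }",
  "customer": "Jane Doe (jane@example.com)",
  "shippingAddress": "123 Main St, Springfield, US",
  "lineItems": "2x Coffee Mug; 1x Tote Bag",
  "totalPrice": "34.50",
  "currency": "USD"
}
```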
by Zacharia Kimotho
This workflow is designed to generate prompts for AI agents and store them in Airtable. It starts by receiving a chat message, processes it to create a structured prompt, categorizes the prompt, and finally stores it in Airtable.

**Setup Instructions**

Prerequisites

- An AI model, e.g. Gemini, OpenAI, etc.
- An Airtable base and table (or another storage tool)

Step-by-Step Guide

- Clone the workflow: copy the provided workflow JSON and import it into your n8n instance.
- Configure credentials: set up the Google Gemini (PaLM) API account credentials and the Airtable Personal Access Token account credentials.
- Map the Airtable base and table: create a copy of the Prompt Library in Airtable and map the Airtable base and table in the Airtable node.
- Customize the prompt template: edit the 'Create prompt' node to customize the prompt template as needed.

Configuration Options

- **Prompt template**: Customize the prompt template in the 'Create prompt' node to fit your specific use case.
- **Airtable mapping**: Ensure the Airtable base and table are correctly mapped in the Airtable node.

**Use Case Examples**

This workflow is particularly useful in scenarios where you want to automate the generation and management of AI agent prompts. Here are some examples:

- **Rapid prototyping of AI agents**: Quickly generate and test different prompts for AI agents in various applications.
- **Content creation**: Generate prompts for AI models that create blog posts, articles, or social media content.
- **Customer service automation**: Develop prompts for AI-powered chatbots to handle customer inquiries and support requests.
- **Educational tools**: Create prompts for AI tutors or learning assistants.

Industries/professionals:

- **Software development**: Developers building AI-powered applications.
- **Marketing**: Marketers automating content creation and social media management.
- **Customer service**: Customer service managers implementing AI-driven chatbots.
- **Education**: Educators creating AI-based learning tools.

Practical value:

- **Time savings**: Automates the prompt generation process, saving significant time and effort.
- **Improved prompt quality**: Leverages Google Gemini and structured prompt engineering principles to generate more effective prompts.
- **Centralized prompt management**: Stores prompts in Airtable for easy access, organization, and reuse.

**Running and Troubleshooting**

Running the Workflow

- Activate the workflow in n8n.
- Send a chat message to the webhook URL configured in the "When chat message received" node to trigger the workflow.
- Monitor the workflow execution in the n8n editor.

Monitoring Execution

- Check the execution log in n8n to see the data flowing through each node and identify any errors.

Checking for Successful Completion

- Verify that a new record is created in your Airtable base with the generated prompt, name, and category.
- Confirm that the "Return results" node sends back confirmation of the prompt in the chat interface.

Troubleshooting Tips

- **API issues**: Ensure that the API and Airtable credentials are correctly configured.
- **Data mapping**: Verify that the Airtable base and table are correctly mapped.
- **Prompt template**: Check the prompt template for any errors or inconsistencies.
- **Error: 400: Bad Request in the Google Gemini nodes**
  - Cause: Invalid API key or insufficient permissions.
  - Solution: Double-check your Google Gemini API key and ensure that the API is enabled for your project.
- **Error: the Airtable node fails to create a record**
  - Cause: Invalid Airtable credentials, incorrect Base ID or Table ID, or mismatched column names.
  - Solution: Verify your Airtable API key, Base ID, Table ID, and column names. Ensure that the data types in n8n match the data types in your Airtable columns.

Follow me on LinkedIn for more.
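For reference, a record created by the "Checking for Successful Completion" step above might look roughly like this in Airtable. The field names (Name, Prompt, Category) are assumptions based on the description, so match them to the columns in your own copy of the Prompt Library:

```json
{
  "Name": "Customer support triage agent",
  "Prompt": "You are a support triage assistant. Classify each incoming ticket by urgency and product area, then draft a first reply...",
  "Category": "Customer Service"
}
```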
by Anurag
**Description**

This workflow automates the extraction of structured data from invoices or similar documents using Docsumo's API. Users can upload a PDF via an n8n form trigger, which is then sent to Docsumo for processing and structured parsing. The workflow fetches key document metadata and all line items, reconstructs each invoice row with combined header and item details, and finally exports all results as an Excel file. Ideal for automating invoice data entry, reporting, or integrating with accounting systems.

**How It Works**

- A user uploads a PDF document using the integrated n8n form trigger.
- The workflow securely sends the document to Docsumo via REST API.
- After uploading, it checks and retrieves the parsed document results.
- Header information and table line items are extracted and mapped into structured records.
- The complete result is exported as an Excel (.xls) file.

**Setup Steps**

- Docsumo account: register and obtain your API key from Docsumo.
- n8n credentials manager: add your Docsumo API key as an HTTP header credential (never hardcode the key in the workflow).
- Workflow configuration: in the HTTP Request nodes, set the authentication to your saved Docsumo credentials, and update the file type or document type in the request (e.g., "type": "invoice") as needed for your use case.
- Testing: enable the workflow and use the built-in form to upload a sample invoice for extraction.

**Features**

- Supports PDF uploads via n8n's built-in form or via API/webhook extension.
- Sends files directly to Docsumo for document data extraction using secure credentials.
- Extracts invoice-level metadata (number, date, vendor, totals) and full line item tables.
- Consolidates all data into an easy-to-use Excel format for download or integration.
- Modular node structure, easily extensible for further automation.

**Prerequisites**

- Docsumo account with API access enabled.
- n8n instance with Form, HTTP Request, Code, and Excel/Convert to File nodes.
- Working Docsumo API key stored securely in n8n's credential manager.

**Example Use Cases**

| Scenario | Benefit |
|---------------------|-----------------------------------------|
| Invoice Automation | Extract line items and metadata rapidly |
| Receipts Processing | Parse and digitize business receipts |
| Bulk Bill Imports | Batch process bills for analytics |

**Notes**

- **Credentials security**: Do not store your API key directly in HTTP Request nodes; always use the n8n credentials manager.
- **Sticky notes**: The workflow includes sticky notes for setup, input, API call, extraction, and output steps to assist template users.
- **Custom columns**: You can customize header or line item extraction by editing the Code node as needed.
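As a rough sketch of the upload call the HTTP Request node makes: the endpoint path and form field names below are assumptions based on Docsumo's public API documentation, so verify them against your account's docs, and keep in mind that in the workflow itself the key lives in an n8n header credential rather than inline:

```bash
# Hedged sketch of a Docsumo upload; verify endpoint and field names
# against Docsumo's docs. Never hardcode the API key in the workflow itself.
curl -X POST https://app.docsumo.com/api/v1/eevee/apikey/upload/ \
  -H 'apikey: YOUR_DOCSUMO_API_KEY' \
  -F 'type=invoice' \
  -F 'files=@sample-invoice.pdf'
```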
by InfraNodus
Using the knowledge graphs instead of RAG vector stores This workflow creates an AI chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided using the InfraNodus GraphRAG using the knowledge graphs and providing high-quality responses without the need to set up complex RAG vector store workflows. The advantages of using GraphRAG instead of the standard vector stores for knowledge are: Easy and quick to set up (no complex data import workflows needed) A knowledge graph has a holistic view of your knowledge base Better retrieval of relations between the document chunks = higher quality responses How it works This template uses the n8n AI agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. Here's a description step by step: The user submits a question using the AI chatbot (n8n interface, in this case, which can be accessed via a URL or embedded to any website) The AI agent node checks a list of tools it has access to. Each tool has a description of the knowledge it has auto-generated by InfraNodus. The AI agent decides which tool should be used to generate a response. It may reformulate user's query to be more suitable for the expert. The query is then sent to the InfraNodus HTTP node endpoint, which will query the graph that corresponds to that expert. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account and provides a response from each expert (graph) along with a list of relevant statements retrieved using a combination or RAG and GraphRAG. The n8n AI Agent node integrates the responses received from the experts to produce the final answer. The final answer is sent back to the user's chat (or a webhook endpoint) How to use You need an InfraNodus GraphRAG API account and key to use this workflow. Create an InfraNodus account Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes. Create a separate knowledge graph for each expert (using PDF / content import options) in InfraNodus For each graph, go to the workflow, paste the name of the graph into the body name field. Keep other settings intact or learn more about them at the InfraNodus access points page. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow Requirements An InfraNodus account and API key An OpenAI (or any other LLM) API key Customizing this workflow You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n Also check out the video tutorial with a demo:
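A heavily hedged sketch of one per-expert HTTP call: the Bearer key and the body's name field (the graph name) are the parts described above, while the endpoint path and the field carrying the user's question ("prompt" here) are assumptions; copy the exact values from the template's InfraNodus HTTP node and the InfraNodus access points page:

```bash
# Hedged sketch of one expert call. The Bearer key and "name" (graph name)
# are described in the setup; the endpoint path and "prompt" field are
# assumptions -- take the real ones from the template's InfraNodus HTTP node.
curl -X POST 'https://infranodus.com/api/v1/graphAndAdvice?doNotSave=true' \
  -H 'Authorization: Bearer YOUR_INFRANODUS_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "your_expert_graph",
    "prompt": "What does this knowledge base say about onboarding?"
  }'
```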