by n8n Team
This workflow creates a Slack thread when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as replies to the thread in Slack.

Prerequisites

- Zendesk account and Zendesk credentials.
- Slack account and Slack credentials.
- Slack channel to create threads in.

How it works

1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new thread/message in Slack. The Slack thread ID is then saved in one of the ticket's fields called "Slack thread ID".
3. The next time a comment is added to the ticket, the workflow retrieves the Slack thread ID from the ticket's field and adds the comment to the thread/message in Slack as a reply.

Setup

This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:

1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the “Endpoint URL” field, and the “Request method” should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as “New tickets”.
7. Under “Conditions” in “Meet ALL of the following conditions”, add “Status is New”.
8. Under “Actions”, select “Notify active webhook” and select the webhook you created previously.
9. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Slack thread ID. To do so, follow the steps below:

1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as “Slack thread ID”.
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
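For context, the threaded reply boils down to a single Slack API call once the thread ID is known. Here is a minimal sketch of that call, assuming a placeholder bot token and channel ID; the Slack node in the workflow handles all of this for you:

```javascript
// Minimal sketch: reply into an existing Slack thread via chat.postMessage.
// SLACK_TOKEN and the channel ID are placeholders; threadTs is the value the
// workflow saved into the ticket's "Slack thread ID" field.
const threadTs = "1712345678.901234"; // example saved thread ID

const res = await fetch("https://slack.com/api/chat.postMessage", {
  method: "POST",
  headers: {
    "Content-Type": "application/json; charset=utf-8",
    Authorization: `Bearer ${process.env.SLACK_TOKEN}`,
  },
  body: JSON.stringify({
    channel: "C0123456789",
    thread_ts: threadTs, // this is what makes the message a threaded reply
    text: "New comment from Zendesk: ...",
  }),
});
const data = await res.json();
if (!data.ok) throw new Error(`Slack error: ${data.error}`);
```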
by vinci-king-01
Amazon Keyboard Product Scraper with AI and Google Sheets Integration

🎯 Target Audience

- E-commerce analysts and researchers
- Product managers tracking competitor keyboards
- Data analysts monitoring Amazon keyboard market trends
- Business owners conducting market research
- Developers building product comparison tools

🚀 Problem Statement

Manual monitoring of Amazon keyboard products is time-consuming and error-prone. This template solves the challenge of automatically collecting, structuring, and storing keyboard product data for analysis, enabling data-driven decision making in the competitive keyboard market.

🔧 How it Works

This workflow automatically scrapes Amazon keyboard products using AI-powered web scraping and stores them in Google Sheets for comprehensive analysis and tracking.

Key Components

- **Scheduled Trigger** - Runs the workflow at specified intervals to keep data fresh and up-to-date
- **AI-Powered Scraping** - Uses ScrapeGraphAI to intelligently extract product information from Amazon search results with natural language processing
- **Data Processing** - Transforms and structures the scraped data for optimal spreadsheet compatibility (see the sketch at the end of this template)
- **Google Sheets Integration** - Automatically saves product data to your spreadsheet with proper column mapping

📊 Google Sheets Column Specifications

The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| title | String | Product name and model | "Logitech MX Keys Advanced Wireless Illuminated Keyboard" |
| url | URL | Direct link to Amazon product page | "https://www.amazon.com/dp/B07S92QBCX" |
| category | String | Product category classification | "Electronics" |

🛠️ Setup Instructions

Estimated setup time: 10-15 minutes

Prerequisites

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access

Step-by-Step Configuration

1. Install Community Nodes

Install the ScrapeGraphAI community node:

```
npm install n8n-nodes-scrapegraphai
```

2. Configure ScrapeGraphAI Credentials

- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

3. Set up Google Sheets Connection

- Add Google Sheets OAuth2 credentials
- Grant necessary permissions for spreadsheet access
- Select or create a target spreadsheet for data storage
- Configure the sheet name (default: "Sheet1")

4. Customize Amazon Search Parameters

- Update the websiteUrl parameter in the ScrapeGraphAI node
- Modify search terms, filters, or categories as needed
- Adjust the user prompt to extract additional fields if required

5. Configure Schedule Trigger

- Set your preferred execution frequency (daily, weekly, etc.)
- Choose appropriate time zones for your business hours
- Consider Amazon's rate limits when setting frequency

6. Test and Validate
- Run the workflow manually to verify all connections
- Check Google Sheets for proper data formatting
- Validate that all required fields are being captured

🔄 Workflow Customization Options

Modify Search Criteria

- Change the Amazon URL to target specific keyboard categories
- Add price filters, brand filters, or rating requirements
- Update search terms for different product types

Extend Data Collection

- Modify the user prompt to extract additional fields (price, rating, reviews)
- Add data processing nodes for advanced analytics
- Integrate with other data sources for comprehensive market analysis

Output Customization

- Change the Google Sheets operation from "append" to "upsert" for deduplication
- Add data validation and cleaning steps
- Implement error handling and retry logic

📈 Use Cases

- **Competitive Analysis**: Track competitor keyboard pricing and features
- **Market Research**: Monitor trending keyboard products and categories
- **Inventory Management**: Keep track of available keyboard options
- **Price Monitoring**: Track price changes over time
- **Product Development**: Research market gaps and opportunities

🚨 Important Notes

- Respect Amazon's terms of service and rate limits
- Consider implementing delays between requests for large datasets
- Regularly review and update your scraping parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly

🔧 Troubleshooting

Common Issues:

- ScrapeGraphAI connection errors: Verify API key and account status
- Google Sheets permission errors: Check OAuth2 scope and permissions
- Data formatting issues: Review the Code node's JavaScript logic
- Rate limiting: Adjust schedule frequency and implement delays

Support Resources:

- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
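If data formatting issues come up, the Code node's transform is the first place to look. Below is a minimal sketch of what such a transform might contain; the `result.products` path and field names are assumptions about the ScrapeGraphAI output, so match them to what your prompt actually returns.

```javascript
// Hypothetical n8n Code node: flatten ScrapeGraphAI output into one item per product.
// The "result.products" path is an assumption - inspect your node's actual output first.
const products = $json.result?.products ?? [];

return products.map((p) => ({
  json: {
    title: p.title ?? "",
    url: p.url ?? "",
    category: p.category ?? "Electronics",
  },
}));
```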
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

News Article Scraping and Analysis with AI and Google Sheets Integration

🎯 Target Audience

- News aggregators and content curators
- Media monitoring professionals
- Market researchers tracking industry news
- PR professionals monitoring brand mentions
- Journalists and content creators
- Business analysts tracking competitor news
- Academic researchers collecting news data

🚀 Problem Statement

Manual news monitoring is time-consuming and often misses important articles. This template solves the challenge of automatically collecting, structuring, and storing news articles from any website for comprehensive analysis and tracking.

🔧 How it Works

This workflow automatically scrapes news articles from websites using AI-powered extraction and stores them in Google Sheets for analysis and tracking.

Key Components

- **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh content
- **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract article titles, URLs, and categories from any news website
- **Data Processing**: Formats extracted data for optimal spreadsheet compatibility
- **Automated Storage**: Saves all articles to Google Sheets with metadata for easy filtering and analysis

📊 Google Sheets Column Specifications

The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| title | String | Article headline and title | "'My friend died right in front of me' - Student describes moment air force jet crashed into school" |
| url | URL | Direct link to the article | "https://www.bbc.com/news/articles/cglzw8y5wy5o" |
| category | String | Article category or section | "Asia" |

🛠️ Setup Instructions

Estimated setup time: 10-15 minutes

Prerequisites

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access

Step-by-Step Configuration

1. Install Community Nodes

Install the ScrapeGraphAI community node:

```
npm install n8n-nodes-scrapegraphai
```

2. Configure ScrapeGraphAI Credentials

- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

3. Set up Google Sheets Connection

- Add Google Sheets OAuth2 credentials
- Grant necessary permissions for spreadsheet access
- Select or create a target spreadsheet for data storage
- Configure the sheet name (default: "Sheet1")

4. Customize News Source Parameters

- Update the websiteUrl parameter in the ScrapeGraphAI node
- Modify the target news website URL as needed
- Adjust the user prompt to extract additional fields if required
- Test with a small website first before scaling to larger news sites

5. Configure Schedule Trigger

- Set your preferred execution frequency (daily, hourly, etc.)
- Choose appropriate time zones for your business hours
- Consider the news website's update frequency when setting intervals

6. Test and Validate
- Run the workflow manually to verify all connections
- Check Google Sheets for proper data formatting
- Validate that all required fields are being captured

🔄 Workflow Customization Options

Modify News Sources

- Change the website URL to target different news sources
- Add multiple news websites for comprehensive coverage
- Implement filters for specific topics or categories

Extend Data Collection

- Modify the user prompt to extract additional fields (author, date, summary)
- Add sentiment analysis for article content
- Integrate with other data sources for comprehensive analysis

Output Customization

- Change the Google Sheets operation from "append" to "upsert" for deduplication
- Add data validation and cleaning steps
- Implement error handling and retry logic

📈 Use Cases

- **Media Monitoring**: Track mentions of your brand, competitors, or industry keywords
- **Content Curation**: Automatically collect articles for newsletters or content aggregation
- **Market Research**: Monitor industry trends and competitor activities
- **News Aggregation**: Build custom news feeds for specific topics or sources
- **Academic Research**: Collect news data for research projects and analysis
- **Crisis Management**: Monitor breaking news and emerging stories

🚨 Important Notes

- Respect the target website's terms of service and robots.txt
- Consider implementing delays between requests for large datasets
- Regularly review and update your scraping parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly

🔧 Troubleshooting

Common Issues:

- ScrapeGraphAI connection errors: Verify API key and account status
- Google Sheets permission errors: Check OAuth2 scope and permissions
- Data formatting issues: Review the Code node's JavaScript logic
- Rate limiting: Adjust schedule frequency and implement delays

Pro Tips:

- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a small website first before scaling to larger news sites
- Consider adding filters in the Code node to exclude certain article types or categories (see the sketch below)
- Monitor execution logs for any issues and adjust parameters accordingly

Support Resources:

- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
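As mentioned in the Pro Tips, category filtering fits naturally in the Code node. A minimal sketch, assuming each item already carries a `category` field; the exclusion list is purely illustrative:

```javascript
// Hypothetical n8n Code node: drop articles in unwanted categories.
// EXCLUDED is an illustrative list - tune it to the sections you want to skip.
const EXCLUDED = new Set(["Sport", "Entertainment"]);

return $input.all().filter((item) => !EXCLUDED.has(item.json.category));
```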
by Miquel Colomer
This n8n workflow template automates the process of collecting and delivering the "Top Deals of the Day" from MediaMarkt, tailored to user preferences. By combining user-submitted forms, Bright Data web scraping, GPT-4o-mini deal generation, and email delivery, this workflow sends personalized product recommendations straight to a user’s inbox.

> ⚠️ Note: This workflow uses community nodes (Bright Data and Document Generator) which only work on *self-hosted n8n instances*.

🚀 What It Does

- Collects user preferences via a form (categories + email)
- Scrapes MediaMarkt’s deals page using Bright Data
- Uses GPT-4o-mini (OpenAI) to recommend top deals
- Generates a structured HTML email using a template
- Sends the personalized deals directly via email

🧩 Community Node Integration

We created and used the following community nodes:

- **Bright Data** – To scrape MediaMarkt deals using proxy-based scraping
- **Document Generator** – To generate a templated HTML document from deal data

These nodes are not available in n8n Cloud and require self-hosted n8n.

🛠️ Step-by-Step Setup

1. Install Community Nodes

Make sure you're on a self-hosted n8n instance, then install:

- n8n-nodes-brightdata
- n8n-nodes-document-generator

2. Configure Credentials

- Bright Data API Key (Proxy + Scraping setup)
- OpenAI API Key (GPT-4o-mini access)
- SMTP credentials for sending emails

3. Customize the Form

Adapt the form node to collect desired categories and email addresses. Typical categories include appliances, phones, laptops, etc.

4. Design Your HTML Template

In the Document Generator node, you can tweak the HTML/CSS to change how deals appear in the final email.

5. Test the Workflow

Submit the form with test data and check that the entire flow—from scraping to email—executes as expected.

🧠 How It Works: Workflow Overview

1. User Interaction via Form – Users select product categories and enter their email. This triggers the workflow.
2. Data Extraction via Bright Data – Bright Data scrapes the MediaMarkt offers page and returns HTML content.
3. HTML Parsing – Key elements like product names, prices, and links are extracted for processing.
4. GPT-4o-mini Recommendation Generation – The extracted data is sent to OpenAI (GPT-4o-mini), which filters, ranks, and enhances deals based on the user’s preferences.
5. Data Structuring & Split – The result is split into individual deal items to be formatted (see the sketch below).
6. HTML Document Creation – Document Generator populates a clean HTML template with the top recommended deals.
7. Email Delivery – The final document is emailed via SMTP to the user with a friendly message.

📨 Final Output

Users receive a custom HTML email featuring a curated list of top MediaMarkt deals based on their selected categories.

🔐 Credentials Used

- **Bright Data API** – Web scraping with proxy support
- **OpenAI API** – Generating personalized recommendations
- **SMTP** – Sending personalized deal emails

✨ Customization Tips

- **Change the Data Source**: You can adapt this to scrape other e-commerce sites.
- **Update the Email Template**: Make it match your branding or include images.
- **Extend the Form**: Add preferences like price range or specific brands.
- **Add Scheduling**: Use Cron to run the workflow daily or weekly.

❓ Questions?

Template and node created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
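To make the "Data Structuring & Split" step concrete, here is a hedged sketch of a Code node that turns the model's answer into one n8n item per deal. The parsed shape is an assumption about your GPT-4o-mini output, so adjust it to your actual prompt format.

```javascript
// Hypothetical Code node: one n8n item per recommended deal.
// The parsed shape ({ deals: [{ name, price, url }] }) is an assumption -
// match it to whatever structure your GPT-4o-mini prompt returns.
const { deals } = JSON.parse($json.message.content);

return deals.map((deal) => ({
  json: { name: deal.name, price: deal.price, url: deal.url },
}));
```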
by Dave Bernier
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

This template aims to ease the process of deploying workflows from GitHub. It has a companion repository that developers might find useful. See below for more details.

How it works

Automatically import and deploy n8n workflows from your GitHub repository to your production n8n instance using a secured webhook-based approach. This template enables teams to maintain version control of their workflows while ensuring seamless deployment through a CI/CD pipeline.

1. Receives webhook notifications from GitHub when changes are pushed to your repository
2. Lists all files in the repository and filters for .json workflow files
3. Downloads each workflow file and saves it locally
4. Imports all workflows into n8n using the CLI import command
5. Cleans up temporary files after successful import

To trigger the deployment, send a POST request to your webhook, authenticated with the credentials you set up (basic auth), with the following body:

```json
{
  "owner": "GITHUB_REPO_OWNER_NAME",
  "repository": "GITHUB_REPOSITORY_NAME"
}
```

Set up steps

Once you have imported this template in n8n:

1. Set up the webhook basic auth credentials
2. Set up the GitHub credentials
3. Activate the workflow!

Companion repository

There is a companion repository located at https://github.com/dynamicNerdsSolutions/n8n-git-flow-template that has a GitHub Action already set up to work with this workflow. It provides a complete development environment with:

- Local n8n instance via Docker
- Automated workflow export and commit scripts
- Version control integration
- CI/CD pipeline setup

This setup allows teams to maintain a clean separation between development and production environments while ensuring reliable workflow deployment.
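As an illustration, a CI job could fire the deployment with a few lines of JavaScript. The URL and basic-auth pair below are placeholders for your own webhook path and credentials; the companion repository's GitHub Action does the equivalent for you.

```javascript
// Illustrative trigger call (placeholder URL and credentials).
const auth = Buffer.from("WEBHOOK_USER:WEBHOOK_PASS").toString("base64");

const res = await fetch("https://your-n8n-instance.com/webhook/deploy", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Basic ${auth}`,
  },
  body: JSON.stringify({
    owner: "GITHUB_REPO_OWNER_NAME",
    repository: "GITHUB_REPOSITORY_NAME",
  }),
});
if (!res.ok) throw new Error(`Deploy webhook failed: ${res.status}`);
```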
by n8n Team
This workflow creates a Jira issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in Jira.

Prerequisites

- Zendesk account and Zendesk credentials.
- Jira account and Jira credentials.
- Jira project to create issues in.

How it works

1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new issue in Jira. The Jira issue key is then saved in one of the ticket's fields (in setup we call this "Jira Issue Key").
3. The next time a comment is added to the ticket, the workflow retrieves the Jira issue key from the ticket's field and adds the comment to the issue in Jira.

Setup

This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:

1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the “Endpoint URL” field, and the “Request method” should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as “New tickets”.
7. Under “Conditions” in “Meet ALL of the following conditions”, add “Status is New”.
8. Under “Actions”, select “Notify active webhook” and select the webhook you created previously.
9. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Jira issue key. To do so, follow the steps below:

1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as “Jira Issue Key”.
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
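For context, adding the comment is a single Jira Cloud REST call once the issue key is known. A minimal sketch of what the Jira node does under the hood, with placeholder domain, credentials, and issue key:

```javascript
// Minimal sketch: add a Zendesk comment to an existing Jira issue.
// The domain, EMAIL and API_TOKEN are placeholders; issueKey comes from the
// "Jira Issue Key" field saved on the Zendesk ticket.
const issueKey = "SUP-123";
const auth = Buffer.from(`${process.env.EMAIL}:${process.env.API_TOKEN}`).toString("base64");

const res = await fetch(
  `https://your-domain.atlassian.net/rest/api/3/issue/${issueKey}/comment`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Basic ${auth}`,
    },
    body: JSON.stringify({
      // Jira Cloud v3 expects the comment body in Atlassian Document Format
      body: {
        type: "doc",
        version: 1,
        content: [
          { type: "paragraph", content: [{ type: "text", text: "New Zendesk comment..." }] },
        ],
      },
    }),
  }
);
if (!res.ok) throw new Error(`Jira API error: ${res.status}`);
```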
by Jimleuk
This n8n template offers a simple yet capable chatbot assistant who can answer course enquiries over SMS. Given the right access to data, AI Agents are capable of planning and performing relatively complex research tasks to get their answers. In this example, the agent must first understand the database schema and retrieve lists of values before generating its own query to search over the database.

Check out the example database here: https://airtable.com/appO5xvP1aUBYKyJ7/shr8jSFDaghubDOrw

How it works

1. A Twilio trigger gives us the ability to receive SMS input into our workflow via webhook.
2. The message is then directed to our AI agent, who is instructed to assist the user and use the course database as reference.
3. The database is an Airtable base. The agent autonomously figures out which tool it needs to use and generates its own "filter_by_formula" query to search over the available courses (see the sketch below for what such a query resolves to).
4. On successful search results, the agent can then use this information to answer the user's query.
5. The agent's output is logged in a second sheet of the Airtable base. We can use this later for analysis and lead gen.
6. Finally, the response is sent back to the user through SMS using Twilio.

How to use

- Ensure your Twilio number is set to forward messages to this workflow's webhook URL.
- Configure and update the course database as required. If you're not interested in courses, you can swap this out for inventory, deliveries or any other data relevant to your business.
- Ask questions like:
  - "Can you help me find suitable courses to fill my Wednesday mornings?"
  - "Which courses are being instructed by Professor Lee?"
  - "I'm interested in creative arts. What courses are available which could be relevant to me?"

Requirements

- Twilio for SMS receiving and sending
- OpenAI for LLM and Agent
- Airtable for the course database

Customising this workflow

- Add additional tools and expand the range of queries the agent is able to answer or assist with.
- Not using Airtable? This technique also works with SQL databases like PostgreSQL.
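To picture what the agent generates, here is a minimal sketch of the Airtable REST query its formula ultimately resolves to. The base ID, table name and field names are placeholders for your own base; the Airtable node builds this request for you.

```javascript
// Minimal sketch: the kind of Airtable query the agent's tool resolves to.
// BASE_ID, the table name and the field names are placeholders.
const formula = 'AND({Day} = "Wednesday", {Time} = "Morning")'; // example agent-generated formula
const url =
  "https://api.airtable.com/v0/BASE_ID/Courses?filterByFormula=" +
  encodeURIComponent(formula);

const res = await fetch(url, {
  headers: { Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}` },
});
const { records } = await res.json();
// Each record's fields can now be summarised by the agent in its SMS reply.
```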
by hani safaei
This template helps anyone track how often their website appears in Google’s AI Overview, a growing part of search results that can’t currently be tracked using traditional SEO tools.

With this workflow, users can:

- Input a list of keywords (from Google Search Console or manual research).
- Use SerpApi to pull Google search results.
- Extract AI Overview content and its list of sources.
- Map that information into a structured Google Sheet, including whether your site is listed in those sources.

Setup is straightforward and fully automated, but you'll need:

- A SerpApi key
- A connected Google Sheets account

Who is this for?

This workflow is designed for SEO professionals, digital marketers, and site owners who want to track their website’s visibility in Google AI Overviews.

What problem does it solve?

AI Overviews are rapidly becoming more common in Google search results. However, there's no tool (yet) that tells you if your website is appearing in those answers. This is a blind spot for SEO. This workflow helps you check your site’s presence in AI Overviews manually, at scale.

What does the workflow do?

The workflow:

- Takes a list of target keywords (exported from GSC or elsewhere)
- Uses SerpApi to get search results from Google
- Extracts the AI Overview block and its sources
- Checks if your domain is among them (see the sketch below)
- Saves all results into a Google Sheet

The final Google Sheet will contain:

| Keyword | AI Overview Exists | List of Sources | Is my domain listed |

Setup

You’ll need:

- A SerpApi API key
- A Google Sheet with your list of keywords
- A connected Google Sheets account in n8n

How to customize this workflow

- Change the list of keywords (pull from GSC or edit the sheet manually)
- Replace the placeholder domain with your own
- Adjust the Google Sheet column mapping as needed
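Conceptually, the domain check is only a few lines once SerpApi returns its JSON. Here is a hedged sketch of that check as an n8n Code node; the `ai_overview`/`references` field names are assumptions, so verify them against SerpApi's current response format before relying on this.

```javascript
// Hypothetical check: does the AI Overview cite my domain?
// The ai_overview/references shape is an assumption - verify against SerpApi docs.
const MY_DOMAIN = "example.com"; // replace with your site

const overview = $json.ai_overview;
const sources = (overview?.references ?? []).map((r) => r.link ?? "");

return [{
  json: {
    keyword: $json.search_parameters?.q,
    aiOverviewExists: Boolean(overview),
    sources,
    isMyDomainListed: sources.some((link) => link.includes(MY_DOMAIN)),
  },
}];
```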
by Dariusz Koryto
FTP to Google Drive Transfer Template

What This Template Does

This workflow automatically transfers files from an FTP server to Google Drive. It's perfect for:

- Backing up files from remote servers
- Migrating data from FTP to cloud storage
- Automating file synchronization tasks
- Creating scheduled backups of server content

How It Works

The workflow follows these steps:

1. Manual Trigger - You start the process by clicking "Execute"
2. Lists FTP Directory - Scans the specified FTP folder for all items
3. Filters Files Only - Separates actual files from directories (folders)
4. Downloads Files - Retrieves each file as binary data from the FTP server
5. Uploads to Google Drive - Stores all downloaded files in your specified Google Drive folder

Requirements

Before using this template, you'll need:

- **FTP Server Access**: Server address, username, and password
- **Google Drive Account**: With OAuth2 authentication set up in n8n
- **n8n Instance**: Self-hosted or cloud version

Setup Instructions

Step 1: Configure FTP Credentials

1. In n8n, go to Settings → Credentials
2. Create a new FTP credential
3. Enter your FTP server details:
   - Host: Your FTP server address
   - Port: Usually 21 for FTP
   - Username: Your FTP username
   - Password: Your FTP password
4. Test the connection and save

Step 2: Set Up Google Drive Authentication

1. Create a new Google Drive OAuth2 credential
2. Follow n8n's Google Drive setup guide:
   - Create a Google Cloud project
   - Enable the Google Drive API
   - Create OAuth2 credentials
   - Add your n8n callback URL
3. Authorize the connection in n8n

Step 3: Configure the Workflow

1. Update FTP Path:
   - Open the "List FTP Directory" node
   - Change the path parameter from /_instalki to your desired FTP folder
2. Set Google Drive Folder:
   - Open the "Upload to Google Drive" node
   - Replace the folderId with your target Google Drive folder ID
   - To find the folder ID: open the folder in Google Drive and copy the ID from the URL
3. Assign Credentials:
   - Ensure both FTP nodes use your FTP credential
   - Assign your Google Drive credential to the upload node

How to Use

1. Test First: Run the workflow manually with a few test files
2. Monitor Execution: Check the execution log for any errors
3. Verify Upload: Confirm files appear in your Google Drive folder
4. Schedule (Optional): Add a schedule trigger if you want automatic runs

Customization Options

Filter Specific File Types

Add a condition after "Filter Files Only" to process only certain file extensions:

```
{{ $json.name.endsWith('.pdf') || $json.name.endsWith('.jpg') }}
```

Add Error Handling

Insert error-handling nodes to manage failed downloads or uploads gracefully.

Organize by Date

Modify the Google Drive upload to create date-based folders automatically.

File Size Limits

Add checks for file size before attempting upload (Google Drive has limits); see the sketch below.
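For the file-size check, here is a hedged sketch of a Code node filter placed before the upload. The 200 MB cutoff is arbitrary, and `$json.size` assumes the FTP listing exposes a size field, so verify against your node's actual output.

```javascript
// Hypothetical n8n Code node: skip files above a size threshold before upload.
// MAX_BYTES is an arbitrary example; $json.size assumes the FTP node exposes it.
const MAX_BYTES = 200 * 1024 * 1024; // 200 MB

return $input.all().filter((item) => (item.json.size ?? 0) <= MAX_BYTES);
```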
Troubleshooting

Common Issues:

- **FTP Connection Failed**: Check server address, port, and credentials
- **Google Drive Upload Error**: Verify OAuth2 setup and folder permissions
- **Files Not Found**: Ensure the FTP path exists and contains files
- **Large Files**: Consider Google Drive's storage limits (15 GB for free accounts)

Tips:

- Test with small files first
- Check n8n execution logs for detailed error messages
- Ensure your Google Drive has sufficient storage space
- Verify the FTP server allows multiple concurrent connections

Security Notes

- Never hardcode credentials in the workflow
- Use n8n's credential system for all authentication
- Consider using SFTP instead of FTP for better security
- Regularly rotate your FTP passwords
- Review Google Drive sharing permissions

Next Steps

Once you have this basic transfer working, you might want to:

- Add email notifications for successful/failed transfers
- Implement file deduplication checks
- Create logs of transferred files
- Set up automatic cleanup of old files
- Add file compression before upload
by Jimleuk
This n8n template demonstrates how to build a simple PostgreSQL MCP server to manage a PostgreSQL database for domains such as HR, payroll, sales, inventory and more!

This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/postgres

How it works

- An MCP server trigger is used and connected to 5 tools: 2 PostgreSQL and 3 custom workflow tools.
- The 2 PostgreSQL tools are simple read-only queries, and as such, the PostgreSQL tool can be used directly.
- The 3 custom workflow tools are used for select, insert and update queries, as these are operations which require a bit more discretion. Whilst it may be easier to let the agent write raw SQL queries, it is a little safer to allow only the parameters instead. The custom workflow tool lets us define this restricted schema for tool input, which we use to construct the SQL statement ourselves (see the sketch below).
- All 3 custom workflow tools trigger the same "Execute workflow" trigger in this very template, which has a switch to route the operation to the correct handler.
- Finally, we use our standard PostgreSQL node to handle select, insert and update operations. The responses are then sent back to the MCP client.

How to use

This PostgreSQL MCP server allows any compatible MCP client to manage a PostgreSQL database by supporting select, create and update operations. You will need to have a database available before you can use this server.

Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop

Try the following queries in your MCP client:

- "Please help me check if Alex has an entry in the users table. If not, please help me create a record for her."
- "What was the top selling product in the last week?"
- "How many high priority support tickets are still open this morning?"

Requirements

- PostgreSQL for the database. This can be an external database such as Supabase or one you host internally.
- MCP Client or Agent for usage, such as Claude Desktop - https://claude.ai/download

Customising this workflow

- If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific purpose for business operations, e.g. confine the querying and editing to HR-only tables before providing access to people in that department.
- Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
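To illustrate the restricted-schema idea, here is a hedged sketch of how the insert handler might assemble a safe statement from tool parameters. The table and column whitelist is hypothetical; the point is that identifiers are validated and values are bound as parameters, so the agent never supplies raw SQL.

```javascript
// Hypothetical handler: build a parameterized INSERT from restricted tool input.
// Identifiers are validated against a whitelist; values go in as bind parameters.
const ALLOWED_TABLES = { users: ["name", "email", "role"] }; // illustrative whitelist

const { table, values } = $json; // e.g. { table: "users", values: { name: "Alex" } }
const allowedColumns = ALLOWED_TABLES[table];
if (!allowedColumns) throw new Error(`Table not allowed: ${table}`);

const cols = Object.keys(values).filter((c) => allowedColumns.includes(c));
const placeholders = cols.map((_, i) => `$${i + 1}`).join(", ");
const sql = `INSERT INTO ${table} (${cols.join(", ")}) VALUES (${placeholders})`;
const params = cols.map((c) => values[c]);

// Pass `sql` and `params` on to the PostgreSQL node's query and parameters fields.
return [{ json: { sql, params } }];
```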
by Ranjan Dailata
Who this is for

Extract Amazon Best Seller Electronic Info is an automated workflow that extracts best seller data from Amazon's Electronics section using Bright Data Web Unlocker, transforms it into structured JSON using Google Gemini's LLM, and forwards a fully structured JSON response to a specified webhook for downstream use.

This workflow is tailored for:

- **eCommerce Analysts** – who need to monitor Amazon best-seller trends in the Electronics category and track changes in real-time or on a schedule.
- **Product Intelligence Teams** – who want structured insights on competitor offerings, including rankings, prices, ratings, and promotions.
- **AI-powered Chatbot Developers** – who are building assistants capable of answering product-related queries with fresh, structured data from Amazon.
- **Growth Hackers & Marketers** – looking to automate competitive research and surface trending product data to inform pricing strategies.
- **Data Aggregators and Price Trackers** – who need reliable and smart scraping of Amazon data enriched with AI-driven parsing.

What problem is this workflow solving?

Keeping up with Amazon's best sellers in Electronics is a time-consuming, error-prone task when done manually. This workflow automates the process by:

- Automating data extraction from Amazon Best Sellers using Bright Data, ensuring reliable access to real-time, structured data.
- Enhancing raw data with Google Gemini, turning product lists into structured JSON using the Google Gemini LLM.
- Sending results to a webhook, enabling seamless integration into dashboards, databases, or chatbots.

What this workflow does

The workflow performs the following steps:

1. Extracts Amazon Best Seller Electronics page info using Bright Data's Web Unlocker API (a sketch of the underlying call follows below).
2. Processes the unstructured content using Google Gemini's Flash Exp model to extract structured product data.
3. Sends the structured information to a webhook endpoint.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Amazon URL and the Bright Data zone in the Amazon URL with the Bright Data Zone node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a market researcher, e-commerce entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Change the Amazon Category**: Update the Amazon URL to the topic of your interest, such as Computers & Accessories, Home Audio, etc.
- **Customize the Gemini Prompt**: Update the Gemini prompt to get different styles of output — comparison tables, summaries, feature highlights, etc.
- **Send Output to Other Destinations**: Replace the Webhook URL to forward output to:
  - Google Sheets
  - Airtable
  - Slack or Discord
  - Custom API endpoints
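For orientation, the Web Unlocker step reduces to one authenticated HTTP request. A minimal sketch, assuming a zone named `web_unlocker1` and a Bearer token (both placeholders from your Bright Data setup; confirm the request format against Bright Data's docs):

```javascript
// Minimal sketch: fetch a page through Bright Data's Web Unlocker API.
// BRIGHT_DATA_TOKEN and the zone name are placeholders from your Bright Data setup.
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`,
  },
  body: JSON.stringify({
    zone: "web_unlocker1",
    url: "https://www.amazon.com/gp/bestsellers/electronics",
    format: "raw", // return the page body as-is for the Gemini step
  }),
});
const html = await res.text();
```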
by Harsh Maniya
✅💬 Build Your Own WhatsApp Fact-Checking Bot with AI

Tired of misinformation spreading on WhatsApp? 🤨 This workflow transforms your n8n instance into a powerful, automated fact-checking bot! Send any news, claim, or question to a designated WhatsApp number, and this bot will use AI to research it, provide a verdict, and send back a summary with direct source links.

Fight fake news with the power of automation and AI! 🚀

How it works ⚙️

This workflow uses a simple but powerful three-step process:

1. 📬 WhatsApp Gateway (Webhook node): This is the front door. The workflow starts when the Webhook node receives an incoming message from a user via a Twilio WhatsApp number.
2. 🕵️ The Digital Detective (Perplexity node): The user's message is sent to the Perplexity node. Here, a powerful AI model, instructed by a custom system prompt, analyzes the claim, scours the web for reliable information, and generates a verdict (e.g., ✅ Likely True, ❌ Likely False).
3. 📲 WhatsApp Reply (Twilio node): The final, formatted response, complete with the verdict, a simple summary, and source citations, is sent back to the original user via the Twilio node.

Setup Guide 🛠️

Follow these steps carefully to get your fact-checking bot up and running.

Prerequisites

- A Twilio account with an active phone number or access to the WhatsApp Sandbox.
- A Perplexity AI account to get an API key.

1. Configure Credentials

You'll need to add API keys for both Perplexity and Twilio to your n8n instance.

Perplexity AI:
- Go to your Perplexity AI API Settings.
- Generate and copy your API key.
- In n8n, go to Credentials > New, search for "Perplexity," and add your key.

Twilio:
- Go to your Twilio Console Dashboard.
- Find and copy your Account SID and Auth Token.
- In n8n, go to Credentials > New, search for "Twilio," and add your credentials.

2. Set Up the Webhook and Tunnel

To allow Twilio's cloud service to communicate with your n8n instance, you need a public URL. The n8n tunnel is perfect for this.

- Start the n8n tunnel: If you are running n8n locally, you'll need to expose it to the web. Open your terminal and run:

```
n8n start --tunnel
```

- Copy your webhook URL: Once the tunnel is active, open your n8n workflow. In the Receive Whatsapp Messages (Webhook) node, you will see two URLs: Test and Production. Copy the Production URL (or the Test URL while testing). This is the public URL that Twilio will use.

3. Configure Your Twilio WhatsApp Sandbox

- Go to the Twilio Console and navigate to Messaging > Try it out > Send a WhatsApp message.
- Select the Sandbox Settings tab.
- In the section "WHEN A MESSAGE COMES IN," paste your n8n Production webhook URL.
- Make sure the method is set to HTTP POST.
- Click Save.

How to Use Your Bot 🚀

1. Activate the Sandbox: To start, you (and any other users) must send a WhatsApp message with the join code (e.g., join given-word) to your Twilio Sandbox number. Twilio provides this phrase on the same Sandbox page.
2. Fact-Check Away! Once joined, simply send any claim or question to the Twilio number. For example: Did Elon Musk discover a new planet? Within moments, the workflow will trigger, and you'll receive a formatted reply with the verdict and sources right in your chat!

Further Reading & Resources 🔗

- n8n Tunnel Documentation
- Twilio for WhatsApp Quickstart
- Perplexity AI API Documentation
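For reference, Twilio posts the inbound WhatsApp message as form-encoded fields, with the claim text in `Body` and the sender in `From`. A minimal sketch of pulling these out in a Code node after the webhook; the exact `$json` path depends on how your Webhook node is configured, so inspect its output first:

```javascript
// Minimal sketch: extract the user's claim and sender from the Twilio webhook payload.
// Twilio posts form-encoded data; in n8n it typically lands under $json.body.
const claim = $json.body?.Body ?? "";  // the text the user sent
const sender = $json.body?.From ?? ""; // e.g. "whatsapp:+14155551234"

return [{ json: { claim, sender } }];
```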