by Yaron Been
## Ndreca Hunyuan3d 2 Test AI Generator

### Overview
This n8n workflow integrates with the Replicate API to use the ndreca/hunyuan3d-2-test model. This powerful AI model generates high-quality 3D content based on your input image.

### Features
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

### Parameters

Required:
- **image** (string): Input image for generating the 3D shape

Optional:
- **seed** (integer, default: 1234): Random seed for generation
- **steps** (integer, default: 50): Number of inference steps
- **num_chunks** (integer, default: 200000): Number of chunks for mesh generation
- **max_facenum** (integer, default: 40000): Maximum number of faces for mesh generation
- **guidance_scale** (number, default: 5.5): Guidance scale for generation
- **octree_resolution** (string, default: 512): Octree resolution for mesh generation
- **remove_background** (boolean, default: true): Whether to remove the background from the input image

### How to Use
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate 3D content.
4. Access the generated output from the final node.

### API Reference
- Model: ndreca/hunyuan3d-2-test
- API Endpoint: https://api.replicate.com/v1/predictions

### Requirements
- Replicate API key
- n8n instance
- Basic understanding of 3D generation parameters
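For reference, here is a minimal JavaScript sketch of the create-and-poll pattern this workflow automates against the Replicate API. The 2-second poll interval and the placeholder input URL are assumptions, not values from the template, and in an n8n Code node you may need `this.helpers.httpRequest` instead of `fetch`:

```javascript
// Minimal sketch: create a Replicate prediction for ndreca/hunyuan3d-2-test
// and poll until it reaches a terminal status. REPLICATE_API_KEY is assumed
// to come from your credential setup.
const headers = {
  Authorization: `Bearer ${REPLICATE_API_KEY}`,
  'Content-Type': 'application/json',
};

const create = await fetch(
  'https://api.replicate.com/v1/models/ndreca/hunyuan3d-2-test/predictions',
  {
    method: 'POST',
    headers,
    body: JSON.stringify({
      input: {
        image: 'https://example.com/input.png', // required input image
        steps: 50,
        guidance_scale: 5.5,
        remove_background: true,
      },
    }),
  }
);
let prediction = await create.json();

// Poll the prediction URL Replicate returns until the job finishes.
while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
  await new Promise((r) => setTimeout(r, 2000)); // arbitrary 2s interval
  const res = await fetch(prediction.urls.get, { headers });
  prediction = await res.json();
}

return prediction.output; // URL(s) of the generated 3D asset
```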
by Yaron Been
### Description
This workflow automatically monitors and tracks trending topics across multiple platforms and websites. It helps content creators and marketers stay ahead of the curve by identifying emerging trends before they go mainstream.

### Overview
It uses Bright Data to scrape trend data from social media, news sites, and other sources, then compiles the information into a structured format.

### Tools Used
- **n8n:** The automation platform that orchestrates the workflow.
- **Bright Data:** For scraping trend data from various websites without getting blocked.
- **Spreadsheets/Databases:** For storing and analyzing trend information.

### How to Install
1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data credentials to the Bright Data node.
3. **Set Up Data Storage:** Configure where you want to store the trend data.
4. **Customize:** Specify which platforms to monitor and what topics to focus on.

### Use Cases
- **Content Creators:** Stay on top of trending topics for content ideas.
- **Marketers:** Identify emerging trends for timely campaigns.
- **Researchers:** Track the evolution of topics and conversations over time.

### Connect with Me
- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #trends #trendtracking #brightdata #contentmarketing #trendanalysis #trendalerts #markettrends #trendmonitoring #n8nworkflow #workflow #nocode #trendresearch #emergingtrends #socialmediatrends #trendscraping #trenddata #contentideas #digitalmarketing #marketresearch #trendforecasting #trendspotting #dataanalysis #marketintelligence #trendautomation
by Davide
The Sound Effects Generator is an automated workflow that creates realistic sound effects with AI and saves them directly to Google Drive. It generates high-quality sound effects (up to 30 seconds long) from user prompts.

### How It Works
1. **Form Submission:** The workflow starts with a web form asking the user for a prompt describing the sound (e.g. "waves crashing", "laser blast") and a duration in seconds (up to 30).
2. **Audio Creation:** The submitted data is sent to the CassetteAI Sound Effects Generator API (hosted on fal.ai) via an authenticated POST request, which initiates the sound effect generation.
3. **Status Polling:** The workflow waits 10 seconds, then checks the status of the request. If the status is "COMPLETED", it proceeds to fetch the audio file URL; otherwise it waits and retries. (A sketch of these calls follows this description.)
4. **Download & Save:** The generated audio file is downloaded from the returned URL and automatically uploaded to a specific folder in the user's Google Drive, with a timestamped filename for organization.

### Key Advantages
- **Fast & Efficient:** Generates up to 30 seconds of audio in just 1 second of processing time.
- **No Coding Required:** The entire flow can be triggered via a simple form interface.
- **Automated Storage:** Files are automatically saved to a preconfigured Google Drive folder.
- **Scalable:** Can be reused for multiple projects by simply changing the input prompts.
- **Secure:** Uses secure API key-based authentication for interaction with fal.ai and Google Drive.
- **Customizable:** Easy to adapt or extend—for example, sending download links via email or Telegram.

### Set Up Steps
1. **API Key Configuration:** Create an account on fal.ai and obtain an API key. In the "Create audio" node, set the "Header Auth" with Name: `Authorization` and Value: `Key YOURAPIKEY` (replace YOURAPIKEY with your actual API key).
2. **Google Drive Integration:** Ensure the Google Drive node is configured with the correct OAuth2 credentials and folder ID. Adjust the folder ID in the "Upload Audio" node if a different destination is preferred.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
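To make the "Create audio" and status-check steps concrete, here is a minimal JavaScript sketch against fal.ai's queue API. The model slug `cassetteai/sound-effects-generator` and the result field names are assumptions; verify them on the model's fal.ai page before relying on them:

```javascript
// Minimal sketch of fal.ai queue submission + polling, assuming the model
// slug "cassetteai/sound-effects-generator" (verify on fal.ai).
const FAL_KEY = 'YOURAPIKEY'; // placeholder; use your real key via credentials
const headers = { Authorization: `Key ${FAL_KEY}`, 'Content-Type': 'application/json' };

// 1. Submit the generation request to the queue.
const submit = await fetch('https://queue.fal.run/cassetteai/sound-effects-generator', {
  method: 'POST',
  headers,
  body: JSON.stringify({ prompt: 'waves crashing', duration: 10 }),
});
const { status_url, response_url } = await submit.json();

// 2. Poll the status URL every 10 seconds until the job completes,
//    mirroring the Wait + If loop in the workflow.
let status = 'IN_QUEUE';
while (status !== 'COMPLETED') {
  await new Promise((r) => setTimeout(r, 10000));
  status = (await (await fetch(status_url, { headers })).json()).status;
}

// 3. Fetch the result, which contains the audio file URL to download.
const result = await (await fetch(response_url, { headers })).json();
return result.audio_file?.url ?? result; // field name is an assumption
```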
by Davide
This workflow builds a conversational AI chatbot agent using the Claude 3.7 Sonnet model with Anthropic's new Web Search and Think tools. It enhances standard LLM capabilities with:
- **Real-time web search**, to answer up-to-date factual queries.
- A **"Think" function**, to support internal reasoning and memory-like behavior.
- A **memory buffer**, allowing the agent to maintain conversation history.
- A **system prompt** defining clear ethical, functional, and formatting rules for interaction.

When a user sends a message (trigger), the chatbot evaluates the query, optionally performs a web search if needed, processes the result using Claude, and responds accordingly.

### ✅ Advantages
- 🧠 **Enhanced Reasoning Abilities:** The Think tool allows the agent to simulate deep thought processes or contextual memory storage, improving conversational intelligence.
- 🌐 **Real-Time Knowledge via Web Search:** The integrated web_search tool enables the agent to fetch the latest information from the internet, making it ideal for dynamic or news-driven use cases.
- 🧾 **Contextual Responses with Memory Buffer:** The memory buffer allows the agent to maintain state across messages, improving dialogue flow and continuity.
- 🛡️ **Built-in Ethical Guidelines:** The system prompt enforces privacy, factual integrity, neutrality, and ethical response generation, making the agent safe for public or enterprise use.

### How It Works
1. **Chat Trigger:** The workflow begins when a chat message is received via a webhook, which triggers the AI Agent to process the user's query.
2. **AI Agent Processing:** The agent analyzes the query to determine whether it requires website context or external sources. For website-related queries it uses the provided context; for external information it employs the web_search tool to fetch up-to-date data from the internet; the Think tool is used for internal reasoning or caching thoughts without altering data.
3. **Language Model:** The Anthropic Chat Model (Claude 3.7 Sonnet) generates responses based on the analyzed query, incorporating website context or web search results.
4. **Memory:** A simple memory buffer retains context from previous interactions to maintain continuity in conversations.
5. **Output:** The final response is delivered to the user, excluding internal processes like web searches or reasoning steps.

### Set Up Steps
1. **Configure nodes:**
   - Chat Trigger: set up the webhook to receive user messages.
   - AI Agent: define the system message and rules for handling queries.
   - Anthropic Chat Model: select the Claude 3.7 Sonnet model and configure parameters like maxTokensToSample.
   - Memory: initialize the memory buffer to store conversation context.
   - Tools: configure the web_search HTTP request to the Anthropic API (headers and authentication, sketched below) and set up the Think tool for internal reasoning.
2. **Connect nodes:** Link the Chat Trigger to the AI Agent, then connect the Anthropic Chat Model, Memory, and Tools (web_search and Think) to the AI Agent.
3. **Credentials:** Ensure the Anthropic API credentials are correctly configured for both the chat model and the web_search tool.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
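As a reference for the web_search tool node, here is a minimal JavaScript sketch of the HTTP request it sends to Anthropic's Messages API. The tool type string `web_search_20250305` and the model snapshot name follow Anthropic's documented server-side web search tool, but verify both against the current docs; they change over time:

```javascript
// Minimal sketch of an Anthropic Messages API call with the server-side
// web search tool enabled. ANTHROPIC_API_KEY is assumed to come from
// your configured credentials.
const response = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'x-api-key': ANTHROPIC_API_KEY,
    'anthropic-version': '2023-06-01',
    'content-type': 'application/json',
  },
  body: JSON.stringify({
    model: 'claude-3-7-sonnet-20250219',
    max_tokens: 1024,
    tools: [
      { type: 'web_search_20250305', name: 'web_search', max_uses: 3 },
    ],
    messages: [{ role: 'user', content: 'What happened in AI news today?' }],
  }),
});
return await response.json(); // Claude's reply, with search results folded in
```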
by Yaron Been
### Description
This workflow automatically searches multiple flight booking websites to find the cheapest flights for your desired routes. It leverages web scraping to compare prices across platforms, helping you save money on air travel.

### Overview
It uses Bright Data to scrape flight prices and can notify you when prices drop below your target threshold.

### Tools Used
- **n8n:** The automation platform that orchestrates the workflow.
- **Bright Data:** For scraping flight prices from booking websites.
- **Notification Services:** Email, SMS, or other messaging platforms.

### How to Install
1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data credentials to the Bright Data node.
3. **Set Up Notifications:** Configure your preferred notification method.
4. **Customize:** Set your routes, date ranges, and price thresholds.

### Use Cases
- **Frequent Travelers:** Find the best deals for your regular routes.
- **Travel Agencies:** Monitor flight prices for client bookings.
- **Budget Travelers:** Get notified when flights to your dream destination become affordable.

### Connect with Me
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #travel #flights #brightdata #dealalerts #webscraping #flightdeals #cheapflights #travelhacks #budgettravel #travelplanning #airfare #flightprices #travelautomation #n8nworkflow #workflow #nocode #traveltech #flightbooking #savemoney #traveltools #flightcomparison #bestflightdeals #travelsmarter #automatedtravel
by Fernanda Silva
### Workflow Description
This workflow is an intelligent chatbot, using an OpenAI assistant, integrated with a backend that supports WhatsApp Business. It is designed to handle various use cases such as sales and customer support. Below is a breakdown of its functionality and key components.

### Workflow Structure and Functionality
1. **Chat Input (Chat Trigger):** The flow starts by receiving messages from customers via WhatsApp Business and collects basic information, such as session_id, to organize interactions.
2. **Condition Check (If Node):** Checks whether additional customer data (e.g., name, age, dependents) is sent along with the message. If additional data is present, a customized prompt is generated that includes this information; the prompt specifies that this data is for the assistant's awareness and doesn't require a response.
3. **Data Preparation (Edit Fields Nodes):** Formats customer data and interaction details to be processed by the AI assistant, compiling the customer data and their query into a single text block.
4. **AI Responses (OpenAI Nodes):** The assistant's prompt is carefully designed to guide the AI in providing accurate and relevant responses based on the customer's query and data. Prompts describe the available functionalities, including which APIs to call and their specific purposes, helping to prevent "hallucinated" or irrelevant responses.
5. **Memory and Context (Postgres Chat Memory):** Stores context and messages in continuous sessions using a database, ensuring the chatbot maintains conversation history.
6. **API Calls:** The workflow allows the use of APIs with any endpoints you choose, depending on your specific use case. The OpenAI assistant understands JSON structures, and you can define in the prompt how responses should be formatted, ensuring clarity and professionalism for the client. Make sure to describe the purpose of each endpoint in the assistant's prompt to help guide the AI and prevent misinterpretation (an example follows this description).
7. **Customer Response Delivery:** After processing and querying APIs, the generated response is sent to the backend and ultimately delivered to the customer through WhatsApp Business.

### Best Practices Implemented
- **Preventing Hallucinations:** Every API has a clear description in its prompt, ensuring the AI understands its intended use case.
- **Versatile Functionality:** The chatbot is modular and flexible, capable of handling both sales and general customer inquiries.
- **Context Persistence:** By utilizing persistent memory, the flow maintains continuous interaction context, which is crucial for longer conversations or follow-up queries.

### Additional Recommendations
- Include practical examples in the assistant's prompt, such as frequently asked questions or decision-making flows based on API calls.
- Ensure all responses align with the customer's objectives (e.g., making a purchase or resolving technical queries).
- Log interactions in detail for future analysis and workflow optimization.

This workflow provides a solid foundation for a robust and multifunctional virtual assistant 🚀
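To illustrate the endpoint-description advice above, here is a hedged example of how one API might be described inside the assistant's prompt. The tool name, endpoint, and fields are entirely hypothetical, not taken from this template:

```text
Tool: check_order_status
Purpose: Look up the current status of a customer's order. Use ONLY when the
customer explicitly asks about an existing order.
Endpoint: GET /orders/{order_id}
Returns JSON: { "order_id": "...", "status": "processing|shipped|delivered",
                "eta": "YYYY-MM-DD" }
Response format for the customer: one short sentence stating the status and,
if available, the delivery estimate.
```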
by ConvertAPI
### Who is this for?
For developers and organizations that need to convert DOCX files to PDF.

### What problem is this workflow solving?
The file format conversion problem.

### What this workflow does
1. Downloads the DOCX file from the web.
2. Converts the DOCX file to PDF.
3. Stores the PDF file in the local file system.

### How to customize this workflow to your needs
1. Open the HTTP Request node.
2. Adjust the URL parameter (all endpoints can be found here).
3. Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
4. Adjust url_to_file in the Config node to a URL pointing to your file.
5. Optionally, additional Body Parameters can be added for the converter.
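For orientation, here is a minimal JavaScript sketch of the ConvertAPI call the HTTP Request node wraps. The endpoint path and body shape follow ConvertAPI's docx-to-pdf converter, but verify the parameter names against their documentation before use:

```javascript
// Minimal sketch of a ConvertAPI docx-to-pdf conversion. The secret and
// source URL are placeholders for your own values.
const CONVERTAPI_SECRET = 'your-secret';           // from your ConvertAPI account
const url_to_file = 'https://example.com/sample.docx'; // your source file

const res = await fetch(
  `https://v2.convertapi.com/convert/docx/to/pdf?Secret=${CONVERTAPI_SECRET}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      Parameters: [{ Name: 'File', FileValue: { Url: url_to_file } }],
    }),
  }
);
const result = await res.json();

// ConvertAPI returns converted files as base64; decode before writing to disk.
return Buffer.from(result.Files[0].FileData, 'base64');
```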
by phil
This workflow automates the process of summarizing or transcribing a WordPress article, converting the text into speech using the Eleven Labs API, and uploading the resulting MP3 file back to WordPress.

### How It Works
1. **Trigger:** The workflow starts manually when the user clicks "Test Workflow".
2. **Retrieve Article:** It fetches a WordPress article based on a given post ID.
3. **Summarize or Transcribe:** An LLM (GPT-4o-mini) generates either a summary of the article or a full transcription, depending on the chosen prompt.
4. **Generate Speech:** The processed text (summary or transcription) is converted into an MP3 audio file using the Eleven Labs API (a sketch of this call follows this description).
5. **Upload MP3 to WordPress:** The generated MP3 file is uploaded to WordPress.
6. **Update WordPress Post:** The article is updated with an embedded audio player, allowing users to listen to the summary or transcription.

### Set Up Steps
1. **WordPress API Credentials:** Configure your WordPress API credentials in n8n.
2. **Eleven Labs API Key:** Obtain an API key from Eleven Labs and configure it in n8n.
3. **Choose Between Summary or Transcription:** Modify the AI prompt to either generate a summary or keep the full transcription.
4. **Test the Workflow:** Run the workflow and ensure the MP3 file is correctly generated and uploaded.

### 💡 Customization Options
- Modify the AI prompt to switch between a summary and a transcription.
- Change the voice model in Eleven Labs for different speech styles.
- Adjust the output format for higher or lower quality MP3.

🚀 This automation improves content accessibility and engagement by allowing users to listen to a summarized or full version of the article.

Phil | Inforeole
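Here is a minimal JavaScript sketch of the Eleven Labs text-to-speech request behind the "Generate Speech" step. The voice ID is a hypothetical placeholder, and the model ID and output format are assumptions; pick real values from your Eleven Labs dashboard and their current API docs:

```javascript
// Minimal sketch of an Eleven Labs TTS call. ELEVENLABS_API_KEY is assumed
// to come from your n8n credentials; YOUR_VOICE_ID is a placeholder.
const ELEVENLABS_API_KEY = 'your-api-key';
const voiceId = 'YOUR_VOICE_ID';

const res = await fetch(
  `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}?output_format=mp3_44100_128`,
  {
    method: 'POST',
    headers: {
      'xi-api-key': ELEVENLABS_API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      text: 'Your article summary goes here.',
      model_id: 'eleven_multilingual_v2', // assumed model; swap as needed
    }),
  }
);

// The response body is the MP3 audio itself (binary), ready to upload.
return Buffer.from(await res.arrayBuffer());
```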
by Afnan
This n8n workflow automates the process of finding, summarizing, and posting breaking news headlines on X (formerly Twitter). It combines Google Custom Search for finding the latest news articles with Groq's LLaMA 3 model to generate short, engaging headlines, complete with hashtags, and posts them to your X account.

### 🔧 Features
- Custom topic support (e.g., "AI", "health", "technology")
- Automated scheduling every few hours
- Google Custom Search to find the most recent news articles
- Groq LLaMA 3-based headline generation with hashtags
- Auto-post to X (Twitter)
- Built-in credential separation for API keys and access tokens

### 📦 Included Nodes
- Schedule Trigger
- Set (Set Topic, Google API Key, Custom Search CX, etc.)
- HTTP Request (Google Search API)
- Code Node (Format prompt and extract article data)
- HTTP Request (Groq API for headline generation)
- Twitter Node (Post to X)

### ⚙️ How It Works (Step-by-Step)
1. **Trigger:** The workflow starts on a scheduled interval (default: every 5 hours, at a random minute within the hour).
2. **Set Topic:** Define your own topic keyword (e.g., AI, mental health, climate change) by editing the Set Topic node.
3. **Build Search Query:** Constructs a Google search query like: latest {topic} news.
4. **Google API Config:** Injects your own Google API Key and Custom Search CX (replace the placeholders in the Google Config node).
5. **Search for News:** Performs a real-time search using the Google Custom Search API and fetches the latest article result.
6. **Generate Prompt for AI:** A JavaScript Function node extracts the top article's title and link and formats them into a clean prompt, including instructions to append hashtags (a sketch of steps 5–6 follows this description).
7. **Groq AI Request:** Sends the prompt to Groq's LLaMA 3 model to generate a concise, tweet-length headline with 1–2 relevant hashtags.
8. **Post to Twitter (X):** The generated headline is posted to your connected X account via the Twitter OAuth2 API.

### ✅ Requirements
- Google API Key
- Google Custom Search Engine (CX)
- Groq API Key
- Twitter Developer App with OAuth2 credentials

### 💡 Customization Tips
- Change the topic in the Set Topic node to anything you like.
- Adjust the posting frequency in the Schedule Trigger node.
- Modify prompt behavior in the Function node to fit a specific tone or brand voice.
- Add logging, filtering, or multiple post variations as needed.
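Here is a minimal JavaScript sketch of the search-and-prompt steps described above. GOOGLE_API_KEY, SEARCH_CX, and TOPIC stand in for the values configured in the workflow's Set nodes, and the prompt wording is illustrative, not the template's exact text:

```javascript
// Minimal sketch: query Google Custom Search for the latest article on a
// topic, then build the headline-generation prompt for Groq's LLaMA 3.
const query = encodeURIComponent(`latest ${TOPIC} news`);
const searchUrl =
  `https://www.googleapis.com/customsearch/v1` +
  `?key=${GOOGLE_API_KEY}&cx=${SEARCH_CX}&q=${query}&num=1`;

const search = await (await fetch(searchUrl)).json();
const top = search.items?.[0];
if (!top) throw new Error('No search results for topic');

// Format a prompt mirroring the Function node's described behavior.
const prompt =
  `Write a concise, tweet-length breaking-news headline (under 280 chars) ` +
  `with 1-2 relevant hashtags for this article:\n` +
  `Title: ${top.title}\nLink: ${top.link}`;

return { prompt, articleTitle: top.title, articleLink: top.link };
```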
by Varritech
## Workflow: Publish to Contentful with Rich Text Formatting

### ⚡ About the Creators
This workflow was created by Varritech Technologies, an innovative agency that leverages AI to engineer, design, and deliver software development projects 500% faster than traditional agencies. Based in New York City, we specialize in custom software development, web applications, and digital transformation solutions. If you need assistance implementing this workflow or have questions about content management solutions, please reach out to our team.

### 🏗️ Architecture Overview
This workflow takes a JSON article payload, splits its markdown content into logical chunks, converts each chunk into Contentful Rich Text JSON via an AI agent, merges the resulting rich text nodes back into a single document, formats the entire entry according to Contentful's field schema, and finally publishes it to Contentful.

1. Trigger → Executes when called by another workflow
2. Split by Headings → Breaks markdown into ##-delimited chunks
3. Markdown → Rich Text → AI agent converts each chunk to Contentful Rich Text JSON
4. Combine Rich Text Objects → Aggregates all chunk outputs into one document
5. Format Entry → Wraps metadata and rich-text content into Contentful schema
6. Publish Entry → HTTP POST to Contentful API

### 📦 Node-by-Node Breakdown

Node graph: When Executed by Another Workflow → Split by Headings → Markdown to Contentful format → Combine Rich Text Objects → Merge1 → Format1 → Create newly formatted Contentful Entry

#### 1. When Executed by Another Workflow
- Type: Execute Workflow Trigger
- Input example fields: title, slug, category.id, description, keywords, content, metaTitle, metaDescription, readingTime, difficulty
- Purpose: Receives the JSON payload from the upstream workflow.

#### 2. Split by Headings
- Type: Code
- Logic: Splits input.content into an array of markdown chunks at each second-level heading (##). Emits one item per chunk with index, slug, title, and contentChunk. (A sketch of this node's logic follows this breakdown.)

#### 3. Markdown to Contentful format
- Type: LangChain Agent (+ OpenAI Chat model)
- System Prompt: Defines rules for generating valid Contentful Rich Text JSON (must include nodeType, data:{}, content:[], etc.) and provides examples for paragraphs, headings, lists, links, and images.
- User Prompt: "Here is the markdown content to convert:"
- Purpose: Converts each markdown chunk into an array of rich-text nodes.

#### 4. Combine Rich Text Objects
- Type: Code
- Logic: Parses and merges all content arrays returned by the AI agent into one combined content array under a document root.

#### 5. Merge1
- Type: Merge
- Purpose: Joins the original item (with metadata) and the combined rich-text document into a single data stream.

#### 6. Format1
- Type: Code
- Logic: Maps workflow data into the Contentful entry schema by setting each field (title, slug, category link, description, keywords, rich-text content, metaTitle, metaDescription, readingTime, difficulty) under the appropriate locale and structure required by Contentful.

#### 7. Create newly formatted Contentful Entry
- Type: HTTP Request
- Method: POST
- URL: https://api.contentful.com/spaces (the full path also includes your space ID, environment, and the entries endpoint)
- Headers:
  - Authorization: Bearer token for the Contentful Management API
  - Content-Type: application/vnd.contentful.management.v1+json
  - X-Contentful-Version: entry version number
  - X-Contentful-Content-Type: content type ID
- Body: The formatted fields object produced by the previous node
- Purpose: Publishes the new entry with rich-text content to Contentful.
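Here is a minimal sketch of the "Split by Headings" Code node's logic, written in n8n Code-node JavaScript. The field names (slug, contentChunk) mirror the description above but may differ from the actual template:

```javascript
// Minimal sketch: split incoming markdown at each second-level heading,
// emitting one n8n item per chunk.
const { content, slug } = $input.first().json;

// Split on lines that start a "## " heading, keeping each heading with
// its following body text.
const chunks = content
  .split(/\n(?=## )/)
  .map((c) => c.trim())
  .filter(Boolean);

return chunks.map((chunk, index) => ({
  json: {
    index,
    slug,
    title: (chunk.match(/^## (.+)/) || [])[1] ?? '',
    contentChunk: chunk,
  },
}));
```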
### 🔍 Design Rationale & Best Practices
- **Chunked Conversion:** Splitting by headings prevents AI context limits and keeps conversions modular.
- **Strict Rich Text Schema:** Enforcing nodeType, data, and content structure avoids validation errors on Contentful.
- **Two-Phase Merge:** Separating "combine AI outputs" and "format entry" keeps transformations clear and testable.
- **Idempotent Publish:** Uses explicit versioning and content type headers to ensure correct entry creation.
by Joseph LePage
The 🌐🤖 AI Agent Chatbot with Jina.ai Webpage Scraper workflow is a powerful automation designed to integrate real-time web scraping capabilities into an AI-driven chatbot. Here's how it works and why it's important.

### How It Works
1. 💬 **Chat Trigger:** The workflow begins when a user sends a chat message, triggering the "When chat message received" node.
2. 🧠 **AI Agent Processing:** The input is passed to the "Jina.ai Web Scraping Agent," which uses advanced AI logic to interpret the user's query and determine the information needed.
3. 🌐 **Web Scraping:** The agent utilizes the "HTTP Request" node to scrape real-time data from a user-provided URL, enabling the chatbot to fetch and analyze live website content.
4. 🗂️ **Memory Management:** The "Window Buffer Memory" node ensures context retention by storing and managing conversational history, allowing for seamless interactions.
5. 🤖 **Language Model Integration:** The scraped data is processed using the "gpt-4o-mini" language model, which generates clear, accurate, and contextually relevant responses for the user.

### Why It's Cool
- ⏱️ **Real-Time Information Retrieval:** This workflow empowers users to access up-to-date web content directly through a chatbot, eliminating manual web searches.
- ✨ **Enhanced User Experience:** By combining web scraping with conversational AI, it delivers precise answers tailored to user queries in real time.
- 🔄 **Versatility:** It can be applied across various domains, such as customer support, research, or data analysis, making it a valuable tool for businesses and individuals alike.
- ⚙️ **Automation Efficiency:** Automating web scraping and response generation saves time and effort while ensuring accuracy.
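A minimal JavaScript sketch of the scraping request behind the HTTP Request node, assuming it uses Jina.ai's Reader endpoint (prefixing a page URL with https://r.jina.ai/ returns LLM-friendly markdown). Whether this template sends an API key is an assumption; the Reader also works unauthenticated at lower rate limits:

```javascript
// Minimal sketch: fetch a page as markdown via the Jina.ai Reader.
const targetUrl = 'https://example.com/article'; // user-provided URL

const res = await fetch(`https://r.jina.ai/${targetUrl}`, {
  headers: {
    // Authorization: `Bearer ${JINA_API_KEY}`, // optional, for higher limits
    Accept: 'text/plain',
  },
});

const markdown = await res.text(); // page content as markdown for the agent
return markdown;
```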
by LukaszB
## n8n Workflow Backup to Google Drive – Automated Export of All Your Workflows

This workflow is designed to automatically create backups of all your workflows in n8n and store them as individual .json files in Google Drive. It's a fully automated system that helps developers, agencies, or automation teams ensure their automation logic is always safe, versioned, and ready to restore or share.

### What is this for?
If you're building and managing multiple automations inside n8n, losing a workflow due to accidental deletion or misconfiguration can cost you hours of work. This template solves that by exporting all your workflows into separate files and storing them in a dated Google Drive folder. It helps with disaster recovery, version tracking, and team collaboration — without any manual exporting.

### How this works
Once triggered (manually or via a schedule), the workflow performs the following steps:
1. Creates a new folder in your Google Drive, named with today's date (e.g. "Workflow Backups Monday 16-05-2025").
2. Connects to your n8n instance using the internal API and retrieves a list of all existing workflows (sketched below).
3. Iterates over each workflow and converts it into a .json file using the built-in file conversion node.
4. Uploads each individual .json file to the newly created folder in Google Drive.
5. Optionally, finds and deletes old backup folders to keep your Google Drive clean and avoid clutter.

You get a clean, timestamped folder with all your flows — ready to restore, send, or store securely. You can trigger it manually or schedule it (e.g., to run weekly on Monday mornings).

### How to set it up
1. Import the provided workflow JSON into your n8n instance.
2. Set up your credentials:
   - Replace the placeholder "Google demo" with your actual Google Drive OAuth2 credentials in all Google Drive nodes.
   - Replace the placeholder "n8n demo" with your n8n API credentials so the workflow can fetch your flows.
3. Go to the node "Create new folder" and replace the folder ID with your own destination folder in Google Drive where backups should be stored.
4. (Optional) Enable the "Schedule Trigger" to run the backup automatically once a week or on your preferred interval.

You're ready to go — test it with the Manual Trigger first and check your Google Drive for results.
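For reference, here is a minimal JavaScript sketch of the workflow-listing step that the "n8n demo" API credential powers. N8N_BASE_URL and N8N_API_KEY are placeholders for your instance URL and API key:

```javascript
// Minimal sketch: fetch all workflows via the n8n public API, then map
// each one to a filename/content pair ready for the file conversion node.
const res = await fetch(`${N8N_BASE_URL}/api/v1/workflows`, {
  headers: { 'X-N8N-API-KEY': N8N_API_KEY, Accept: 'application/json' },
});
const { data: workflows } = await res.json();

// One .json backup file per workflow.
return workflows.map((wf) => ({
  json: {
    fileName: `${wf.name}.json`,
    content: JSON.stringify(wf, null, 2),
  },
}));
```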