by WeblineIndia
This workflow, developed by our AI developers at WeblineIndia, automates capturing form submissions and storing them in Airtable. By eliminating manual data entry, it provides a smooth, efficient way to handle form data, helping businesses save time, reduce errors, and maintain an organized, structured database for easy access and future use.

**Steps:**

1. **Trigger on Form Submission (Form Node)**
   - What It Does: Activates the workflow whenever a form is submitted.
   - How to Set It Up: Use the Form Submission Trigger node to detect new submissions, so the workflow starts automatically when a user fills out the form.

2. **Store Data in Airtable (Airtable Node)**
   - What It Does: Transfers the form data into an Airtable base.
   - How to Set It Up: Use the Airtable node to map form fields to the corresponding columns in your Airtable table so the data is stored accurately.

3. **Finalize and Activate**
   - What It Does: Completes the setup so data is stored automatically on each submission.
   - How to Set It Up: Save and activate the workflow. Once active, it will record all new form submissions in Airtable.
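Outside n8n, the field mapping in step 2 amounts to building the record payload Airtable's REST API expects. A minimal Python sketch, assuming hypothetical form field and column names:

```python
# Hypothetical mapping from form field names to Airtable column names.
FIELD_MAP = {"name": "Name", "email": "Email", "message": "Message"}

def build_airtable_record(form_submission: dict) -> dict:
    """Build the JSON body Airtable's create-record API expects."""
    fields = {
        column: form_submission[field]
        for field, column in FIELD_MAP.items()
        if field in form_submission
    }
    return {"fields": fields}

record = build_airtable_record({"name": "Ada", "email": "ada@example.com"})
```

The Airtable node does this mapping for you in the UI; the sketch just shows what the resulting payload looks like.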
by Lucas Perret
Who is this for?
This workflow is for sales reps and lead generation managers who need to prepare their prospecting activities and find relevant information to personalize their outreach.

Use Case
This workflow allows you to do AI-powered account research on the web. It can replace the manual work sales reps do when preparing their prospecting activities by searching for complex information available online.

What this workflow does
The advanced AI module has 2 capabilities:
- Research Google using SerpAPI
- Visit and get website content using a sub-workflow

From an unstructured input like a domain or a company name, it returns the following properties:
- domain
- company LinkedIn URL
- cheapest plan
- has free trial
- has enterprise plan
- has API
- market (B2B or B2C)

The strength of n8n here is that you can adapt this workflow to research whatever information you need. You just have to specify it in the prompt and define the output format in the "Structured Output Parser" module. Detailed instructions + video guide can be found by following this link.
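The parser's output can be sanity-checked before use. A small validator sketch; the snake_case key names are my guess at how the Structured Output Parser fields might be keyed, not the template's actual schema:

```python
# Assumed property keys for the structured output (hypothetical names).
EXPECTED_KEYS = {
    "domain", "company_linkedin_url", "cheapest_plan",
    "has_free_trial", "has_enterprise_plan", "has_api", "market",
}

def validate_research_output(payload: dict) -> list:
    """Return a list of problems; an empty list means the output is complete."""
    problems = sorted(EXPECTED_KEYS - payload.keys())
    if payload.get("market") not in (None, "B2B", "B2C"):
        problems.append("market (must be B2B or B2C)")
    return problems
```

Adding a check like this after the parser makes it easy to retry the AI step when a property is missing.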
by Felipe Braga
An intelligent chatbot that assists employees by answering common HR or IT questions, supporting both text and audio messages. This unique feature ensures employees can conveniently ask questions via voice messages, which are transcribed and processed just like text queries.

How It Works
1. Message Capture: When an employee sends a message to the chatbot in WhatsApp or Telegram (text or audio), the chatbot captures the input.
2. Audio Transcription: For audio messages, the chatbot transcribes the content into text using an AI-powered transcription service (e.g., Whisper, Google Cloud Speech-to-Text).
3. Query Processing: The transcribed text (or directly entered text) is sent to an AI service (e.g., OpenAI) to generate embeddings. These embeddings are used to search a vector database (e.g., Supabase or Qdrant) containing the company's internal HR and IT documentation. The most relevant data is retrieved and sent back to the AI service to compose a concise and helpful response.
4. Response Delivery: The chatbot sends the final response back to the employee, whether the input was text or audio.

Set Up Steps
Estimated Time: 20–25 minutes
Prerequisites:
- Create an account with an AI provider (e.g., OpenAI).
- Connect WhatsApp or Telegram credentials in n8n.
- Set up a transcription service (e.g., Whisper or Google Cloud Speech-to-Text).
- Configure a vector database (e.g., Supabase or Qdrant) and add your internal HR and IT documentation.
- Import the workflow template into n8n and update environment variables for your credentials.
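The retrieval step in Query Processing boils down to ranking stored document embeddings by similarity to the query embedding. A toy Python sketch with 2-dimensional vectors (real deployments delegate this to Qdrant or Supabase, and embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_embedding, documents, k=2):
    """Return the k document texts most similar to the query embedding."""
    ranked = sorted(
        documents,
        key=lambda d: cosine_similarity(query_embedding, d["embedding"]),
        reverse=True,
    )
    return [d["text"] for d in ranked[:k]]
```

The retrieved texts are then passed back to the LLM as context for the final answer.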
by Mark Shcherbakov
Video Guide
I prepared a detailed guide showcasing the process of building an AI agent that interacts with a Snowflake database using n8n. This setup enables conversational querying, secure execution of SQL queries, and dynamic report generation with rich visualization capabilities. Youtube Link

Who is this for?
This workflow is designed for developers, data analysts, and business professionals who want to interact with their Snowflake data conversationally. It suits users looking to automate SQL query generation with AI, manage large datasets efficiently, and produce interactive reports without deep technical knowledge.

What problem does this workflow solve?
Querying Snowflake databases typically requires SQL proficiency and can lead to heavy token usage if large datasets are sent to AI models directly. This workflow addresses these challenges by:
- Guiding AI to generate accurate SQL queries based on user chat input while referencing the live database schema to avoid errors.
- Executing queries safely on Snowflake with proper credential management.
- Aggregating large result sets to reduce token consumption.
- Offering a user-friendly report link with pagination, filtering, charts, and CSV export instead of returning overwhelming raw data.
- Providing an error-resilient environment that prompts regeneration for SQL errors or connectivity issues.

What this workflow does
The scenario consists of multiple focused n8n workflows orchestrated for smooth, secure, and scalable interactions:

Agent Workflow
- Starts with a chat node and sets the system role as "Snowflake SQL assistant."
- AI generates SQL after verifying database schema and table definitions to avoid hallucinations.
- Reinforcement rules ensure schema validation before query creation.

Data Retrieval Workflow
- Receives SQL queries from the agent workflow.
- Executes them against the Snowflake database using user-provided credentials (hostname, account, warehouse, database, schema, username, password).
- Optionally applies safety checks on SQL to prevent injection attacks.

Aggregation and Reporting Decision
- Aggregates returned data into arrays for efficient processing.
- Applies a threshold (default 100 records) to decide whether to return raw data or generate a dynamic report link.
- Prepares report links embedding URL-encoded SQL queries to securely invoke a separate report workflow.

Report Viewing Workflow
- Triggered via webhook from the report link.
- Re-executes SQL queries to fetch fresh data.
- Displays data with pagination, column filtering, and selectable chart visualizations.
- Supports CSV export and custom HTML layouts for a tailored user experience.
- Provides proper error pages in case of SQL or data issues.

Schema and Table Definition Retrieval Tools
- Two helper workflows that fetch the list of tables and column metadata from Snowflake.
- Require the user to replace placeholders with actual database and data source names.
- Crucial for the AI to maintain an accurate understanding of the database structure.

N8N Workflow Preparation
- Create your Snowflake credentials in n8n with the required host and account details, warehouse (e.g., "computer_warehouse"), database, schema, username, and password.
- Replace placeholder variables in the schema retrieval workflows with your actual database and data source names.
- Verify the credentials by testing the connection; reset passwords if needed.

Workflow Logic
- The Agent Workflow listens to user chats, employs the system role "Snowflake SQL assistant," and ensures schema validation before generating SQL queries.
- Generated SQL queries pass to the Data Retrieval Workflow, which executes them against Snowflake securely.
- Retrieved data is aggregated and evaluated against a configurable threshold to decide between returning raw data or creating a report link.
- When a report link is generated, the Report Viewing Workflow renders a dynamic, interactive HTML-based report webpage, including pagination, filters, charts, and CSV export options.
- Helper workflows periodically fetch or update the current database schema and table definitions to maintain AI accuracy and prevent hallucinations in SQL generation.
- Error handling mechanisms provide user-friendly messages both in the agent chat and on report pages when issues arise with SQL or connectivity.

This modular, secure, and extensible setup empowers you to build intelligent AI-driven data interactions with Snowflake through n8n automations and custom reporting.
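The aggregation and reporting decision can be sketched as a routing function. The report base URL below is a placeholder for your own webhook endpoint; the 100-record threshold matches the workflow's stated default:

```python
from urllib.parse import quote

REPORT_BASE_URL = "https://example.com/webhook/report"  # hypothetical report webhook
RECORD_THRESHOLD = 100  # workflow default

def route_query_result(rows, sql):
    """Return raw rows for small results; otherwise build a link to the
    report workflow, with the SQL URL-encoded so the report page can
    re-execute it on load."""
    if len(rows) <= RECORD_THRESHOLD:
        return {"mode": "raw", "data": rows}
    return {"mode": "report", "url": f"{REPORT_BASE_URL}?q={quote(sql, safe='')}"}
```

Keeping the raw-data branch small is what limits token usage when the result is handed back to the AI model.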
by Jaruphat J.
Who’s it for?
This workflow is built for:
- AI storytellers, content creators, YouTubers, and short-form video marketers
- Anyone looking to transform text prompts into cinematic AI-generated videos fully automatically
- Educators, trainers, or agencies creating story-based visual content at scale

What It Does
This n8n workflow allows you to automatically turn a simple text prompt into a multi-scene cinematic video, using the powerful Fal.AI Seedance V1.0 model (developed by ByteDance, the creators of TikTok). It combines the creativity of GPT-4o, the motion synthesis of Seedance, and the automation power of n8n to generate AI videos with ambient sound in a publish-ready format.

How It Works
1. Accepts a prompt from Google Sheets (configurable fields like duration, aspect ratio, resolution, scene count)
2. Uses OpenAI GPT-4o to write a vivid cinematic narrative
3. Splits the story into n separate scenes
4. For each scene:
   - GPT generates a structured cinematic description (characters, camera, movement, sound)
   - The Seedance V1.0 model (via the Fal.AI API) renders a 5s animated video
   - Optional: Adds ambient audio via Fal's MM-Audio model
5. Finally: Merges all scene videos using Fal's FFmpeg API
6. Optionally uploads to YouTube automatically

Why This Is Special
- Fal.AI Seedance V1.0 is a highly advanced motion video model developed by ByteDance, capable of generating expressive, stylized 5–6 second cinematic clips from text.
- This workflow supports full looping, scene count validation, and wait-polling for long render jobs.
- The entire story, breakdown, and scene design are AI-generated; no manual effort needed.
- Output is export-ready: MP4 with sound, ideal for YouTube Shorts, Reels, or TikTok.
Requirements
- n8n (self-hosted recommended)
- API keys: Fal.AI (https://fal.ai), OpenAI (GPT-4o or 3.5), Google Sheets
- Example Google Sheet

How to Set It Up
1. Clone the template into your n8n instance
2. Configure credentials: Fal.AI Header Token, OpenAI API Key, Google Sheets OAuth2, (Optional) YouTube API OAuth
3. Prepare a Google Sheet with these columns: story (short prompt), number_of_scene, duration (per clip), aspect_ratio, resolution, model
4. Run manually or trigger on Sheet update

How to Customize
- Modify the storytelling tone in the GPT prompts (e.g., switch to fantasy, horror, sci-fi)
- Change Seedance model params like style or seed
- Add subtitles or branding overlays to the final video
- Integrate LINE, Notion, or Telegram for auto-sharing

Example Output
Prompt: "A rabbit flies to the moon on a dragonfly and eats watermelon together"
→ Result: 3 scenes, each 5s, cinematic camera pans, soft ambient audio, auto-uploaded to YouTube
Result
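The wait-polling for long render jobs can be sketched as a loop around a status check. The status function is injected here so the sketch stays self-contained; in the workflow it would be an HTTP Request node hitting Fal.AI's queue status endpoint, and the `status`/`video_url` field names are assumptions:

```python
import time

def wait_for_render(get_status, poll_seconds=5, max_attempts=60):
    """Poll an injected status function until the render job finishes."""
    for _ in range(max_attempts):
        job = get_status()
        if job.get("status") == "COMPLETED":
            return job["video_url"]  # hypothetical field name
        if job.get("status") == "FAILED":
            raise RuntimeError("render job failed")
        time.sleep(poll_seconds)
    raise TimeoutError("render job did not finish in time")
```

In n8n the same effect is achieved with a Wait node looping back to the status request until the job completes.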
by Eugene Green
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. This template helps you discover trending Instagram Reels from competitors or any public profiles you choose.

📺 How It Works
The workflow automatically monitors selected Instagram accounts using Apify, pulls recent Reels, and analyzes their performance. It calculates engagement levels, detects early "hot" content, translates text if needed, and organizes all results into a structured Notion database, ready for review and production. I've recorded a video walkthrough to show you how the system works in detail.
👉 https://www.youtube.com/watch?v=rdfRNHpHX8o

📄 Download Notion Database Structure
You can download the Notion table structure (with all required columns and formats) here:
👉 https://drive.google.com/file/d/1FVaS_-ztp6PDAJbETUb1dkg8IqE4qHqp/view?usp=sharing

👤 Who's it for
This workflow is for marketers, content creators, social media managers, and automation enthusiasts who want to stay ahead of Instagram Reels trends. Whether you're building a content pipeline or studying competitors, this tool saves hours of manual tracking.

🔧 How to set up
1. Create the required databases in Notion (structure file provided).
2. Import the .json workflow into your n8n instance.
3. Set up credentials for Notion, Apify, and the Gemini API.
4. Link those credentials in the workflow.
5. Adjust the Variables node with your test account list and settings.
6. Run a test with 3–5 profiles and validate the output.
7. Once working, update Variables with your full config.

📋 Requirements
- An n8n instance (self-hosted or cloud)
- A Notion account (separate workspace recommended)
- Apify account (usage-based pricing for Instagram scraping)
- Gemini API key for AI processing (usage-based pricing)

🧩 How to customize the workflow
The system is fully modular. You can:
- Define your own rules for detecting content categories and video types in the Set Prompt node
- Change video filters (e.g., what counts as "hot" or "early hot")
- Modify Notion fields or adapt to your own database structure
- Add more parsing logic to Variables
- Switch the translation language
- Integrate with your content production flow

Each part of the workflow is clearly labeled and editable; feel free to adapt it to your goals.
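The "hot"/"early hot" filters are configurable, so as an illustration only, here is what such a filter might look like. The thresholds and field names are entirely hypothetical; tune them in the Variables node to your niche:

```python
def engagement_rate(reel):
    """Likes plus comments relative to views; field names are assumed."""
    views = reel.get("views", 0)
    if views == 0:
        return 0.0
    return (reel.get("likes", 0) + reel.get("comments", 0)) / views

def is_early_hot(reel, hours_since_post, rate_threshold=0.08, window_hours=48):
    """Flag a reel that is performing above threshold while still fresh."""
    return hours_since_post <= window_hours and engagement_rate(reel) >= rate_threshold
```

A reel that clears the bar early is usually the best candidate to adapt for your own content pipeline.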
by Shahrukh
Create AI-Driven Website Audits & Personalized Outreach with Lighthouse and GPT-4

Who is this for?
This workflow is perfect for marketing agencies, SEO consultants, and growth specialists who need to scale personalized outreach without spending hours on manual research.

What problem does it solve?
Traditional cold outreach feels generic and gets ignored. This template automates website audits and personalized email creation, making your outreach look deeply researched and relevant, at scale.

What this workflow does
- Pulls business details from a Google Sheet (which you can fill via tools like Google Maps scrapers)
- Finds company emails using an AI-powered scraper
- Captures a screenshot of the business homepage
- Runs a Lighthouse audit (Performance, SEO, Accessibility, Best Practices)
- Performs UI analysis to spot design gaps using GPT-4
- Generates a personalized outreach email that references real site data, tone, and scores

Result: You end up with dozens of qualified leads, each with a full audit report and a ready-to-send outreach email.

Requirements
- n8n account (self-hosted or cloud)
- Google Sheets credentials (use n8n's built-in credential manager)
- OpenAI API key (stored securely in n8n credentials)
- Lighthouse node installed

How to Set Up
1. Connect Google Sheets → use it as your lead source
2. Add your OpenAI and Google credentials via the n8n credential manager
3. Replace placeholder variables in the "Set" nodes for your campaign
4. Enable the Lighthouse node for audits
5. Run the workflow manually or schedule it

How to Customize
- Change the email prompt in the OpenAI node to match your tone
- Modify the Google Sheet structure for different niches
- Add extra steps (e.g., push to a CRM or an email sender like Instantly)

Feel free to drop me an email if you need help building a custom automation for your business: shahrukh@marketingbyprof.com
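Lighthouse reports each category score as a 0–1 value. A sketch of turning those scores into an email-ready summary and picking the weakest area to lead the pitch with; the category keys follow Lighthouse's report format, while the summary formatting is my own assumption:

```python
def summarize_audit(scores):
    """Convert Lighthouse category scores (0-1) into percentage lines,
    flagging the weakest category as the outreach hook."""
    labels = {
        "performance": "Performance",
        "seo": "SEO",
        "accessibility": "Accessibility",
        "best-practices": "Best Practices",
    }
    weakest = min(scores, key=scores.get)
    lines = [f"{labels.get(k, k)}: {round(v * 100)}/100" for k, v in scores.items()]
    return {"weakest": labels.get(weakest, weakest), "summary": "\n".join(lines)}
```

Feeding the weakest category into the GPT-4 prompt is one simple way to make each email reference a concrete, verifiable gap.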
by Davide
This WooCommerce-integrated chatbot is designed to transform post-sales customer support by combining automation and artificial intelligence to deliver fast, secure, and personalized assistance. The chatbot retrieves real-time order information, including shipping details and tracking numbers, after verifying the customer's identity through a strict email-based authentication system. Beyond order management, the chatbot answers frequently asked questions about return policies, delivery times, and terms of service using RAG. If a request is too complex, the system seamlessly escalates it to a human operator via Telegram, guaranteeing no customer query goes unresolved.

Key Features of the Chatbot
- Order Tracking: Retrieves real-time tracking information for WooCommerce orders, including carrier URLs and pickup dates.
- Order Details Retrieval: Provides customers with past/current order information after strict email verification.
- Policy & FAQ Assistance: Answers questions about shipping, returns, and store policies using a vectorized knowledge base (ToS tool).
- Identity Verification: Ensures privacy by requiring exact email-order matches before sharing sensitive data.
- Human Escalation: Transfers complex issues to human agents via Telegram when the AI cannot resolve them.
- Context-Aware Conversations: Maintains dialogue context using memory buffers for seamless interactions.

Who Benefits from This Chatbot?
- E-commerce Stores: WooCommerce businesses needing 24/7 automated post-sales support.
- Customer Support Teams: Reduces ticket volume by handling repetitive queries (tracking, policies).
- SMBs: Small-to-medium businesses lacking resources for full-time support staff.
- Customers: Shoppers who want instant, self-service access to order status and FAQs.

How It Works
1. Customer Interaction: The workflow starts when a customer sends a chat message, triggering the AI agent.
2. Identity Verification: The agent requests the order number and associated email, strictly verifying the match before proceeding.
3. Order & Tracking Retrieval: Using WooCommerce API tools (get_order, get_tracking), it fetches order details and tracking information.
4. Policy & Support: The ToS tool answers shipping and policy questions, while human_assistance escalates unresolved issues to a human agent via Telegram.
5. Memory & Context: A buffer memory retains conversation context for coherent interactions.

Set Up Steps
1. Configure Qdrant Vector Store: Replace QDRANTURL and COLLECTION in the nodes to set up document storage.
2. Add Telegram Chat ID: Insert your Telegram CHAT_ID in the human_assistance node for escalations.
3. Integrate WooCommerce Tracking Plugin: Install the YITH WooCommerce Order Tracking plugin and update the HTTP request URL in the tracking node.
4. Test & Activate: Verify the workflow by testing order queries and ensuring proper email verification.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
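The strict email-order match can be sketched as below. The `billing.email` path mirrors WooCommerce's REST order schema; the case/whitespace normalization is my assumption about what "exact match" should tolerate:

```python
def verify_customer(order, claimed_order_id, claimed_email):
    """Return True only when the order exists and the billing email
    matches (case-insensitive, whitespace-trimmed)."""
    if order is None or order.get("id") != claimed_order_id:
        return False
    email_on_file = order.get("billing", {}).get("email", "")
    return email_on_file.strip().lower() == claimed_email.strip().lower()
```

Any mismatch should make the agent refuse to reveal order details rather than guess.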
by Nisa
Description
This workflow automatically classifies user queries and retrieves the most relevant information based on the query type. 🌟 It uses adaptive strategies (Factual, Analytical, Opinion, and Contextual) to deliver more precise and meaningful responses by leveraging n8n's flexibility. Integrated with the Qdrant vector store and Google Gemini, it processes each query faster and more effectively. 🚀

How It Works
1. Query Reception: A user query is triggered (e.g., through a chatbot interface). 💬
2. Classification: The query is classified into one of four categories:
   - Factual: Queries seeking verifiable information.
   - Analytical: Queries that require in-depth analysis or explanation.
   - Opinion: Queries looking for different perspectives or subjective viewpoints.
   - Contextual: Queries specific to the user or certain contextual conditions.
3. Adaptive Strategy Application: Based on the classification, the query is restructured using the relevant strategy for better results.
4. Response Generation: The most relevant documents and context are used to generate a tailored response. 🎯

Set Up Steps
Estimated Time: ⏳ 10–15 minutes
Prerequisites: You need an n8n account and a Qdrant vector store connection.
Steps:
1. Import the n8n workflow: Load the workflow into your n8n instance.
2. Connect Google Gemini and Qdrant: Link these tools for query processing and data retrieval.
3. Connect the Trigger Interface: Integrate with a chatbot or API to trigger the workflow.
4. Customize: Adjust settings based on the query types you want to handle and the output format. 🔧

For more detailed instructions, please check the sticky notes inside the workflow. 📌
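In the workflow itself, the classification is performed by Google Gemini. Purely to illustrate the four-way routing, here is a naive keyword-based stand-in; the keyword lists are arbitrary and no substitute for the LLM classifier:

```python
def classify_query(query):
    """Naive keyword routing into the four adaptive-RAG categories."""
    words = set(query.lower().replace("?", "").split())
    if words & {"why", "how", "explain", "compare"}:
        return "Analytical"
    if words & {"think", "opinion", "best", "should", "prefer"}:
        return "Opinion"
    if words & {"my", "our", "me", "i"}:
        return "Contextual"
    return "Factual"
```

Whatever classifier you use, the category then selects which query-rewriting strategy runs before retrieval.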
by Ole Andre Torjussen
This n8n workflow sets up a smart home assistant using OpenAI and Homey integration. It uses LangChain agent tools to allow natural language queries (in Norwegian) to trigger workflows for controlling lights, curtains, temperature, TVs, and other devices across different rooms (e.g., living room, bedroom, cinema). The system uses tool-based workflows connected to specific smart home actions and responds in Norwegian. It’s designed to be modular and easily extended with new devices or capabilities.
by Luís Philipe Trindade
What's up guys, I'm Luís 🙋🏻♂️ Let me make one thing clear up front: this isn't just another WhatsApp summary workflow. It's a fully structured automation built for people who actually need to stay informed without wasting time, with total control over what gets summarized.

What this workflow does:
- Receives messages via webhook from the Evolution API
- Checks if the message is from a group or an individual
- Routes messages by type: text or audio (with automatic transcription using OpenAI)
- Stores everything in a Google Sheet organized by group, sender, timestamp, and message sent
- Creates a Control Panel with a checkbox for each group, so you decide which groups should receive summaries (this is the main differentiator of this workflow)
- Collects all messages from yesterday, groups them by chat, and sends them to GPT to generate a summary
- Sends a clean, formatted summary to WhatsApp every morning (fully automated)

🧩 How the flow is structured
This workflow is strategically divided into two independent parts to ensure clarity, organization, and easy scalability:

Part 1 – Message Capture and Storage
Triggered via webhook, this part:
- Receives messages from the Evolution API
- Checks if the message is from a group
- Distinguishes between text and audio (with automatic transcription)
- Stores the message in Google Sheets
- Checks if the group exists in the control tab
- If it doesn't, it creates a new row with a checkbox so you can enable/disable summaries for that group

Part 2 – Summary Generation and Delivery
- Scheduled to run daily at 08:00 AM (or choose your preferred trigger time)
- Pulls all messages from the previous day
- Groups them by chat and checks if that group is enabled for summaries
- Sends the messages to OpenAI to generate a digest
- Delivers the summary directly into the WhatsApp group using the Evolution API

This structure makes the flow easier to manage, customize, and scale; plug in other tools without breaking the logic.
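Part 2's collection step can be sketched as below. The column names (`timestamp`, `group`, `message`, `summaries_enabled`) mirror the sheet layout described above but are assumptions about the actual template:

```python
from collections import defaultdict
from datetime import date, datetime, timedelta

def messages_for_daily_digest(rows, today):
    """Group yesterday's messages by chat, keeping only groups whose
    control-panel checkbox enables summaries."""
    yesterday = today - timedelta(days=1)
    grouped = defaultdict(list)
    for row in rows:
        sent_at = datetime.fromisoformat(row["timestamp"])
        if sent_at.date() == yesterday and row.get("summaries_enabled"):
            grouped[row["group"]].append(row["message"])
    return dict(grouped)
```

Each resulting group's message list is then handed to OpenAI as the material for that group's digest.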
Tools used:
✅ Evolution API (unofficial WhatsApp connection API)
✅ Google Sheets (template provided)
✅ OpenAI (for transcription and summarization)

How to set it up:
1. Set up the webhook on Evolution and connect it to n8n
2. Use the included Google Sheets template. Click here to make your copy 👉🏻 [[Template] Log - Group Summary](https://docs.google.com/spreadsheets/d/1ymkWd0thcFRTtWdNrenUg1k8lAmn19ebznSHtvKHaoE/edit?usp=sharing)
3. Connect your Google Sheets credentials
4. Add your OpenAI API key
5. (Optional) Customize the prompt and choose your preferred trigger time

Why this workflow stands out:
📊 Real control panel: enable or disable summaries per group with a single click
🔍 Fully traceable and modular logic with clear branching and error handling
⚙️ Built for scale. Ideal for teams, communities, or educational groups
📬 Automatically delivers structured daily insights straight to your WhatsApp groups
✅ Works on both n8n Cloud and Self-hosted
🔐 100% secure. No hacks. No shortcuts.

Want to adapt this flow for your business, team, or community?
📩 Custom requests: WhatsApp me at +5534992569346

Português <> PT-BR
Hey guys, I'm Luís 🙋🏻♂️ Let me make one thing clear right away: this isn't just another WhatsApp summary flow. It's a complete automation, structured from start to finish, made for people who really need to stay informed without wasting time, with total control over what does or doesn't go into the summary.
What this flow does:
- Receives messages via webhook from the Evolution API
- Checks whether the message is from a group or an individual contact
- Separates messages by type: text or audio (with automatic transcription via OpenAI)
- Stores everything in Google Sheets, organized by group, author, time, and content
- Creates a Control Panel with a checkbox for each group; you decide which groups do or don't receive the summary (this is the flow's big differentiator)
- Collects all messages from the previous day, groups them by chat, and sends them to the AI to generate the summary
- Sends the formatted summary straight to the WhatsApp group every morning (100% automated)

🧩 How the flow is structured
This flow was strategically divided into two independent parts, ensuring clarity, organization, and scalability:

Part 1 – Message Capture and Storage
Triggered by webhook:
- Receives messages from the Evolution API
- Checks whether the message comes from a group
- Separates text and audio (with automatic transcription)
- Stores the message in Google Sheets
- Checks whether the group already exists in the control tab
- If it doesn't, creates a new row with a checkbox to enable or disable summaries for that group

Part 2 – Summary Generation and Delivery
- Scheduled to run every day at 08:00 (or whatever time you prefer)
- Collects all messages from the previous day
- Groups them by chat and checks whether the group is enabled in the control panel
- Sends the messages to OpenAI to generate the summary
- Delivers the summary directly to the group via the Evolution API

This structure makes the flow much easier to maintain, adapt, and scale; you can integrate new tools without breaking anything.

Tools used:
✅ Evolution API (WhatsApp connection, unofficial API)
✅ Google Sheets (template included)
✅ OpenAI (for transcription and summary generation)

How to set it up:
Configure the webhook on Evolution and connect it to n8n. Use the template spreadsheet that comes with this flow.
Make your copy by clicking here 👉🏻 [[Template] Log - Group Summary](https://docs.google.com/spreadsheets/d/1ymkWd0thcFRTtWdNrenUg1k8lAmn19ebznSHtvKHaoE/edit?usp=sharing)
- Connect your Google Sheets credentials
- Add your OpenAI key
- (Optional) Customize the AI prompt and set the best execution time

Why this flow stands out:
📊 Real control panel: enable or disable summaries per group with one click
🔍 Traceable, modular logic with clear branching and exception handling
⚙️ Ready to scale. Ideal for teams, communities, or educational groups
📬 Automatic delivery of daily summaries straight to WhatsApp groups
✅ Compatible with n8n Cloud and Self-hosted
🔐 100% secure. No hacks. No shortcuts.

Want to adapt this flow for your business, team, or community?
📩 Custom requests: message me on WhatsApp at +5534992569346
by Joseph LePage
Transform simple queries into comprehensive, well-structured content with this n8n workflow that leverages Perplexity AI for research and GPT-4 for content transformation. Create professional blog posts and HTML content automatically while maintaining accuracy and depth.

Intelligent Research & Analysis
🚀 Automated Research Pipeline
- Harnesses Perplexity AI's advanced research capabilities
- Processes complex topics into structured insights
- Delivers comprehensive analysis in minutes instead of hours

🧠 Smart Content Organization
- Automatically structures content with clear hierarchies
- Identifies and highlights key concepts
- Maintains technical accuracy while improving readability
- Creates an SEO-friendly content structure

Content Transformation Features
📝 Dynamic Content Generation
- Converts research into professional blog articles
- Generates clean, responsive HTML output
- Implements proper semantic structure
- Includes metadata and categorization

🎨 Professional Formatting
- Responsive Tailwind CSS styling
- Clean, modern HTML structure
- Proper heading hierarchy
- Mobile-friendly layouts
- Blockquote highlighting for key insights

Perfect For
📚 Content Researchers: Save hours of manual research by automating the information gathering and structuring process.
✍️ Content Writers: Focus on creativity while the workflow handles research and technical formatting.
🌐 Web Publishers: Generate publication-ready HTML content with modern styling and proper structure.

Technical Implementation
⚡ Workflow Components
- Webhook endpoint for query submission
- Perplexity AI integration for research
- GPT-4 powered content structuring
- HTML transformation engine
- Telegram notification system (optional)

Transform your content creation process with an intelligent system that handles research, writing, and formatting while you focus on strategy and creativity.
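The HTML transformation step can be pictured as wrapping the structured text in semantic markup with Tailwind utility classes. The exact classes and layout below are illustrative, not the workflow's actual template:

```python
import html

def to_html_article(title, paragraphs):
    """Wrap a title and paragraphs in a semantic, Tailwind-styled article."""
    body = "\n".join(
        f'  <p class="mb-4 text-gray-700">{html.escape(p)}</p>' for p in paragraphs
    )
    return (
        '<article class="prose max-w-2xl mx-auto">\n'
        f'  <h1 class="text-3xl font-bold mb-6">{html.escape(title)}</h1>\n'
        f"{body}\n"
        "</article>"
    )
```

Escaping the text keeps AI-generated content from breaking the markup; the GPT-4 step in the workflow produces the full page around a structure like this.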