by Recrutei Automações
Overview: Automated Vacancy Launch & AI Marketing This workflow streamlines the entire job opening process by connecting your ATS to your operational and marketing tools. It not only manages deadlines but also automates the promotion of the vacancy. Key Features: Schedule: Creates SLA and Expiration events in Google Calendar based on ATS dates. Track: Creates a central task in ClickUp to manage the selection process. Content Generation: Uses GPT-4o to analyze the job description and write a compelling marketing post. Publish: Automatically posts the job to LinkedIn and logs the action back in the ClickUp task. Setup Instructions Webhook: Configure your Recrutei ATS (or similar) to trigger this workflow. Google Calendar: Select the calendar for deadline tracking. ClickUp: Map the Team, Space, and List where vacancy tasks should be created. OpenAI: Ensure you have a valid API Key. LinkedIn: Connect your profile or company page.
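The date mapping behind the Schedule step can be sketched as below. This is a minimal illustration, assuming the ATS webhook delivers ISO date strings in fields named `sla_date` and `expiration_date` — those names are hypothetical, so map them to your actual Recrutei payload.

```javascript
// Sketch of turning ATS deadline dates into Google Calendar event payloads.
// Field names (sla_date, expiration_date) are illustrative, not the real ATS schema.
function buildCalendarEvents(vacancy) {
  const toAllDayEvent = (label, isoDate) => ({
    summary: `${label}: ${vacancy.title}`,
    start: { date: isoDate.slice(0, 10) }, // Google Calendar all-day format (YYYY-MM-DD)
    end: { date: isoDate.slice(0, 10) },
  });
  return [
    toAllDayEvent("SLA", vacancy.sla_date),
    toAllDayEvent("Expiration", vacancy.expiration_date),
  ];
}

// Example with a sample webhook payload:
const events = buildCalendarEvents({
  title: "Senior Backend Engineer",
  sla_date: "2025-07-01T00:00:00Z",
  expiration_date: "2025-07-31T00:00:00Z",
});
```

In the workflow this logic lives in the expressions of the two Google Calendar nodes rather than in a single function.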
by Nitesh
🧠 How It Works This intelligent workflow turns ancient stories and legendary characters into modern-style vlog ideas — then automatically builds cinematic prompts ready to generate short videos using Veo3. Think: “What if biblical figures had GoPros?” — funny, emotional, and visually stunning AI-made videos. 🔄 Workflow Steps ✨ 1. Concept Generator (AI Node 1) The first AI agent creates a video concept inspired by a biblical or mythological theme. It structures output as JSON with: 🎬 caption – Short, emotional or humorous line with emojis & hashtags 💭 concept – A short summary of the story or moment captured on camera 🌄 setting – Visual and mood details (lighting, style, colors) 📋 status – Stage label like “draft” or “to produce” Example Output: { "caption": "POV: Moses trying to record a vlog mid–Red Sea split 🌊📹 #faithvibes #holyshorts", "concept": "Moses looks straight into the camera, trying to act calm while walls of water rise dramatically beside him.", "setting": "Vast sea corridor glowing in blue light, reflections dancing on wet sand, robes fluttering in the wind.", "status": "to produce" } 🎬 2. Cinematic Prompt Builder (AI Node 2) This agent converts the concept and setting into a Veo3-ready cinematic prompt that guides realistic video generation. Each output includes: Scene layout & description 🌅 Character framing & expression 🎭 Camera movement (pan, orbit, dolly-in, etc.) 🎥 Lighting style & atmosphere 💡 Textural realism (dust, wind, shadows, fabrics) Example Output: > A robed man stands between two towering walls of water, facing the camera as waves shimmer in the light. The handheld camera slowly pushes forward, capturing ripples and wind-blown fabric. His tone is confident yet tense. The atmosphere feels surreal — reflections glisten and mist drifts through golden rays. ☁️ 3. Send to Veo3 API The cinematic description is sent directly to the Veo3 video generation API to create the visual clip. 
POST Request https://queue.fal.run/fal-ai/veo3 Header → Authorization: Key YOUR_API_KEY Body → { "prompt": "{{ $json.output }}" } The API responds with a request_id to track progress. ⏳ 4. Track Video Progress Monitor generation status and retrieve your final clip details. GET Request https://queue.fal.run/fal-ai/veo3/requests/{{ $json.request_id }} Header → Authorization: Key YOUR_API_KEY When complete, the Veo3 model delivers your AI-generated short film. ⚙️ Setup Guide 1. Connect APIs • Create a Veo3 (fal.run) account • Copy your API key → Add it under Header Auth: Authorization: Key YOUR_API_KEY 2. Customize Prompts • Change the core question in Node 1 to explore other themes — e.g., “Greek myths,” “ancient warriors,” or “historic leaders.” • Refine the camera and lighting tone in Node 2 for different cinematic vibes (gritty, vintage, surreal). 3. Run & Validate • Trigger manually to test flow • Check JSON output → must include caption, concept, setting, status • Ensure Veo3 receives your cinematic prompt correctly 4. Automate & Expand • Add a Scheduler to generate new ideas daily or weekly • Send results to Google Sheets, Notion, or Discord for creative collaboration 🚀 Ideal For • 🎬 Creators & Filmmakers → Quickly generate cinematic ideas & AI-shot scripts • 🙏 Faith-Based Artists → Reimagine ancient lessons with modern storytelling • 💡 Creative Studios → Automate short video ideation for campaigns • 🧠 Educators & Animators → Visualize history or mythology through creative AI prompts
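The submit-and-poll pattern from steps 3–4 can be sketched as request builders; pass the resulting objects to `fetch()` or to n8n's HTTP Request node. `YOUR_API_KEY` stays a placeholder for your fal.run key.

```javascript
// Sketch of the two Veo3 calls described above: submit a prompt, then poll by request_id.
const VEO3_BASE = "https://queue.fal.run/fal-ai/veo3";

function buildSubmitRequest(prompt, apiKey) {
  return {
    url: VEO3_BASE,
    method: "POST",
    headers: { Authorization: `Key ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  };
}

function buildStatusRequest(requestId, apiKey) {
  return {
    url: `${VEO3_BASE}/requests/${requestId}`,
    method: "GET",
    headers: { Authorization: `Key ${apiKey}` },
  };
}
```

In n8n you would wire the `request_id` from the POST response into the GET node with an expression like `{{ $json.request_id }}`, exactly as shown in step 4.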
by Avkash Kakdiya
How it works This workflow automatically generates and publishes marketing blog posts to WordPress using AI. It begins by checking your PostgreSQL database for unprocessed records, then uses OpenAI to create SEO-friendly, structured blog content. The content is formatted for WordPress, including categories, tags, and meta descriptions, before being published. After publishing, the workflow updates the original database record to track processing status and WordPress post details. Step-by-step **Trigger workflow** Schedule Trigger – Runs the workflow at defined intervals. **Fetch unprocessed record** PostgreSQL Trigger – Retrieves the latest unprocessed record from the database. Check Record Exists – Confirms the record is valid and ready for processing. **Generate AI blog content** OpenAI Chat Model – Processes the record to generate blog content based on the title. Blog Post Agent – Structures AI output into JSON with title, content, excerpt, and meta description. **Format and safeguard content** Code Node – Prepares structured data for WordPress, applying default categories and tags and handling errors. **Publish content and update database** WordPress Publisher – Publishes content to WordPress with proper categories, tags, and meta. Update Database – Marks the record as processed and stores the WordPress post ID, URL, and processing timestamp. Why use this? Automates end-to-end blog content generation and publishing. Ensures SEO-friendly and marketing-optimized posts. Maintains database integrity by tracking published content. Reduces manual effort and accelerates content workflow. Integrates PostgreSQL, OpenAI, and WordPress seamlessly for scalable marketing automation.
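The "Format and safeguard content" Code Node can be sketched as below. This is a minimal version, assuming the Blog Post Agent returns `{ title, content, excerpt, categories, tags }` — the field names and the fallback category are illustrative, not the workflow's exact schema.

```javascript
// Sketch of the Code Node step: normalize AI output into a WordPress-ready
// payload, with safe fallbacks when a field is missing or malformed.
function formatForWordPress(post) {
  return {
    title: (post.title || "Untitled draft").trim(),
    content: post.content || "",
    excerpt: post.excerpt || "",
    status: "publish",
    categories: Array.isArray(post.categories) && post.categories.length
      ? post.categories
      : ["Marketing"], // illustrative fallback category
    tags: Array.isArray(post.tags) ? post.tags : [],
  };
}
```

The defensive checks are the "safeguard" part: they keep a partially formed AI response from producing a broken WordPress post.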
by Davide
This workflow integrates a Retrieval-Augmented Generation (RAG) system with a post-sales AI agent for WooCommerce. It combines vector-based search (Qdrant + OpenAI embeddings) with LLMs (Google Gemini and GPT-4o-mini) to provide accurate and contextual responses. Both systems are connected to VAPI webhooks, making the workflow usable in a voice AI assistant via Twilio phone numbers. The workflow receives JSON payloads from VAPI via webhooks, processes the request through the appropriate chain (Agent or RAG), and sends a structured response back to VAPI to be read out to the user. Advantages ✅ Unified AI Support System: Combines knowledge retrieval (RAG) with transactional support (WooCommerce). ✅ Data Privacy & Security: Enforces strict email/order verification before sharing information. ✅ Multi-Model Power: Leverages both Google Gemini and OpenAI GPT-4o-mini for optimal responses. ✅ Scalable Knowledge Base: Qdrant vector database ensures fast and accurate context retrieval. ✅ Customer Satisfaction: Provides real-time answers about orders, tracking, and store policies. ✅ Flexible Integration: Easily connects with VAPI for voice assistants and phone-based customer support. ✅ Reusable Components: The RAG part can be extended for FAQs, while the post-sales agent can scale with more WooCommerce tools. How it Works It has two main components: RAG System (Knowledge Retrieval & Q&A) Uses OpenAI embeddings to store documents in Qdrant. Retrieves relevant context with a Vector Store Retriever. Sends the information to a Question & Answer Chain powered by Google Gemini. Returns precise, context-based answers to user queries via webhook. Post-Sales Customer Support Agent Acts as a WooCommerce virtual assistant to: Retrieve customer orders (get_order, get_orders). Get user profiles (get_user). Provide shipment tracking (get_tracking) using YITH WooCommerce Order Tracking plugin. 
Enforces strict verification rules: the customer's email must match the order before details are disclosed. Communicates professionally, providing clear and secure customer support. Integrates with GPT-4o-mini for natural conversation flow. Set Up Steps To implement this workflow, follow these three main steps: 1. Infrastructure & Credentials Setup in n8n: Ensure all required nodes have their credentials configured: OpenAI API Key: For the GPT-4o-mini and Embeddings OpenAI nodes. Google Gemini API Key: For the Google Gemini Chat Model node. Qdrant Connection Details: For the Qdrant Vector Store1 node (points to a Hetzner server). WooCommerce API Keys: For the get_order, get_orders, and get_user nodes (for magnanigioielli.com). WordPress HTTP Auth Credentials: For the Get tracking node in the sub-workflow. **Pre-populate the Vector Database:** The RAG system requires a pre-filled Qdrant collection with your store's knowledge base (e.g., policy documents, product info). The "Sticky Note2" provides a link to a guide on building this RAG system. 2. Workflow Activation in n8n: Save this JSON workflow in your n8n instance. **Activate the workflow.** This is crucial, as n8n only listens for webhook triggers when the workflow is active. Note the unique public webhook URLs generated for the Webhook (post-sales agent) and rag (RAG system) nodes. You will need these URLs for the next step. 3. VAPI Configuration: **Create Two API Tools in VAPI:** Tool 1 (Post-Sales): Create an "API Request" tool. Connect it to the n8n Webhook URL. Configure the request body to send parameters email and n_order based on the conversation with the user. Tool 2 (RAG): Create another "API Request" tool. Connect it to the n8n rag webhook URL. Configure the request body to send a search parameter containing the user's query. **Build the Assistant:** Create a new assistant in VAPI. Write a system prompt that instructs the AI on when to use each of the two tools you created. In the "Tools" tab, add both tools. 
**Go Live:** Add a phone number (e.g., from Twilio) to your VAPI assistant and set it to "Inbound" to receive customer calls. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
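The email/order verification rule enforced by the agent can be sketched as a single check. This is an illustrative version, assuming the order object follows the WooCommerce REST API shape (`billing.email`); the actual check lives in the agent's instructions and tool wiring.

```javascript
// Sketch of the verification rule: only disclose order details when the
// caller-supplied email matches the billing email on the retrieved order.
function canDiscloseOrder(order, callerEmail) {
  const onFile = ((order.billing && order.billing.email) || "").toLowerCase().trim();
  return onFile !== "" && onFile === callerEmail.toLowerCase().trim();
}
```

Normalizing case and whitespace matters for voice input, where VAPI's transcription of a spelled-out email can differ cosmetically from the value stored in WooCommerce.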
by Masaki Go
Automatically extract structured information from emails using AI-powered document analysis. This workflow processes emails from specified domains, classifies them by type, and extracts structured data from various attachment formats. Who is this for Operations teams, coordinators, and business professionals who receive proposals or reports from multiple sources via email and need to consolidate the information into a structured database. What this workflow does Monitors Gmail every 30 minutes for emails from specified domains Classifies emails into three categories based on customizable keywords Processes attachments intelligently based on file type and email classification Extracts structured data: dates, times, names, amounts, and other fields Saves to Google Sheets with full metadata and classification Labels processed emails in Gmail for tracking Setup requirements Gmail OAuth2 credentials OpenAI API key (GPT-4 Vision) Google Sheets OAuth2 credentials AWS S3 bucket for temporary image storage ConvertAPI account for PPTX/PDF conversion How to customize Edit the domain list and classification keywords in the code nodes to adapt for your specific use case.
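The keyword-based classification step can be sketched as below. The category names and keyword lists are placeholders — the whole point of the code node is that you edit them for your own use case.

```javascript
// Sketch of the email classification code node: first category whose
// keyword list matches the subject or body wins; everything else is "other".
const RULES = {
  proposal: ["proposal", "quote", "estimate"], // illustrative keywords
  report: ["report", "summary", "weekly"],
  other: [],
};

function classifyEmail(subject, body) {
  const text = `${subject} ${body}`.toLowerCase();
  for (const [category, keywords] of Object.entries(RULES)) {
    if (keywords.some((k) => text.includes(k))) return category;
  }
  return "other";
}
```

Because rules are checked in order, put the most specific category first so a message mentioning both "proposal" and "report" lands where you want it.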
by Davide
🤖📈 This workflow is my personal solution for the Agentic Arena Community Contest, where the goal is to build a Retrieval-Augmented Generation (RAG) AI agent capable of answering questions based on a provided PDF knowledge base. Key Advantages ✅ End-to-End RAG Implementation Fully automates the ingestion, processing, and retrieval of knowledge from PDFs into a vector database. ✅ Accuracy through Multi-Layered Retrieval Combines embeddings, Qdrant search, and Cohere reranking to ensure the agent retrieves the most relevant policy information. ✅ Robust Evaluation System Includes an automated correctness evaluation pipeline powered by GPT-4.1 as a judge, ensuring transparent scoring and continuous improvement. ✅ Citation-Driven Compliance The AI agent is instructed to provide citations for every answer, making it suitable for high-stakes use cases like policy compliance. ✅ Scalability and Modularity Can easily integrate with different data sources (Google Drive, APIs, other storage systems) and be extended to new use cases. ✅ Seamless Collaboration with Google Sheets Both the evaluation set and the results are integrated with Google Sheets, enabling easy monitoring, iteration, and reporting. ✅ Cloud and Self-Hosted Flexibility Works with self-hosted Qdrant on Hetzner, Mistral Cloud for OCR, and OpenAI/Cohere APIs, combining local control with powerful cloud AI services. How it Works Knowledge Base Ingestion (The "Setup" Execution): When started manually, the workflow first clears an existing Qdrant vector database collection. It then searches a specified Google Drive folder for PDF files. For each PDF found, it performs the following steps: Uploads the file to the Mistral AI API. Processes the PDF using Mistral's OCR service to extract text and convert it into a structured markdown format. Splits the text into manageable chunks. Generates embeddings for each text chunk using OpenAI's model. 
Stores the embeddings in the Qdrant vector store, creating a searchable knowledge base. Agent Evaluation (The "Testing" Execution): The workflow is triggered by an evaluation Google Sheet containing questions and correct answers. For each question, the core AI Agent is activated. This agent: Uses the RAG tool to search the pre-populated Qdrant vector store for relevant information from the PDFs. Employs a Cohere reranker to refine the search results for the highest quality context. Leverages a GPT-4.1 model to generate an answer based strictly on the retrieved context. The agent's answer is then passed to an "LLM as a Judge" (another GPT-4.1 instance), which compares it to the ground truth answer from the evaluation sheet. The judge provides a detailed score (1-5) based on factual correctness and citation accuracy. Finally, both the agent's answer and the correctness score are saved back to a Google Sheet for review. Set up Steps To implement this solution, you need to configure the following components and credentials: Configure Core AI Services: OpenAI API Credentials: Required for the main AI agent, the judge LLM, and generating embeddings. Mistral AI API Credentials: Necessary for the OCR service that processes PDF files. Cohere API Credentials: Used for the reranker node that improves retrieval quality. Google Service Accounts: Set up OAuth for Google Sheets (to read questions and save results) and Google Drive (to access the PDF source files). Set up the Vector Database (Qdrant): This workflow uses a self-hosted Qdrant instance. You must deploy and configure your own Qdrant server. Update the Qdrant Vector Store and RAG nodes with the correct API endpoint URL and credentials for your Qdrant instance. Ensure the collection name (agentic-arena) is created or matches your setup. Connect Data Sources: PDF Source: In the "Search PDFs" node, update the folderId parameter to point to your own Google Drive folder containing the contest PDFs. 
Evaluation Sheet: In the "Eval Set" node, update the documentId to point to your own copy of the evaluation Google Sheet containing the test questions and answers. Results Sheet: In the "Save Eval" node, update the documentId to point to the Google Sheet where you want to save the evaluation results. Need help customizing? Contact me for consulting and support or add me on Linkedin.
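The "splits the text into manageable chunks" step can be sketched as a character-based splitter with overlap. The sizes here are illustrative only — the workflow's actual text-splitter node has its own settings.

```javascript
// Sketch of overlapping chunking before embedding: each chunk shares
// `overlap` characters with its predecessor so context isn't cut mid-thought.
function splitIntoChunks(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Overlap is what lets the Qdrant retrieval step surface a passage even when the relevant sentence straddles a chunk boundary; the Cohere reranker then picks the best of the overlapping hits.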
by Shun Nakayama
🚀 Turn your random ideas into concrete automation specs This workflow acts as your interactive "n8n Consultant." Simply write down a rough automation idea in Google Tasks (e.g., "Send weather updates to Telegram"), and the AI will research, design, and send a detailed n8n implementation plan to your Slack. ✨ Why is this workflow special? Unlike simple notification workflows, this features a Human-in-the-Loop review process. You don't just get a message; you get control. **Regenerate:** Not satisfied with the AI's plan? Click a button in Slack to have the AI rewrite it instantly. **Archive:** Happy with the plan? Click "Approve" to automatically save the detailed specs to **Google Sheets** and mark the task as complete. How it works Fetch: The workflow periodically checks a specific Google Tasks list for new ideas. AI Design: The AI (OpenAI) analyzes your idea and generates a structured plan, including node configuration and potential pitfalls. Human Review: It sends the plan to Slack with interactive "Approve" and "Regenerate" buttons. The workflow waits for your input. If Regenerate: The AI re-analyzes the idea and creates a new variation. If Approve: The workflow proceeds to the next step. Archive: The approved plan (Title, Nodes, Challenges) is saved to a Google Sheet for future development. Close: The original Google Task is updated with a "Processed" flag. How to set up Google Tasks: Create a new list named "n8n Ideas". Google Sheets: Create a new sheet with the following headers in the first row (A to H): Date Added Idea Title Status Recommended Nodes Key Challenges Improvement Ideas Alternatives Source Task ID Credentials: Configure credentials for Google Tasks, Google Sheets, OpenAI, and Slack. Configure Nodes: [Step 1] Fetch New Ideas: Select your Task list. [Step 4] Slack — Review & Approve: Select your target channel. [Action] Archive to Sheets: Select your Spreadsheet and Sheet. [Close] Mark Task Done: Select your Task list again. 
Requirements Google Tasks account Google Sheets account OpenAI API Key Slack account
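The Approve/Regenerate branch after the Slack wait step can be sketched as a simple router. The button values and node names below are illustrative — match them to whatever your Slack block actions actually send.

```javascript
// Sketch of the Human-in-the-Loop routing: the Slack button value decides
// whether the workflow archives the plan or loops back to the AI design step.
function routeReview(action) {
  switch (action) {
    case "approve":
      return { next: "archive_to_sheets", regenerate: false };
    case "regenerate":
      return { next: "ai_design", regenerate: true };
    default:
      return { next: "wait_for_input", regenerate: false }; // ignore unknown actions
  }
}
```

In n8n this is typically a Switch (or IF) node on the webhook payload from Slack rather than hand-written code, but the branching logic is the same.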
by Cheng Siong Chin
AI Multi-Document Analyzer with Smart Recommendations & Reporting How It Works This workflow automates intelligent document analysis by processing multiple uploaded files through parallel AI pipelines to extract insights, generate comparative analysis, and produce actionable recommendations delivered via email. Designed for business analysts, consultants, and researchers, it enables efficient synthesis of insights from diverse document types into strategic, data-driven conclusions. The workflow eliminates the manual effort of reviewing documents, identifying patterns, cross-referencing information, and formulating recommendations by orchestrating structured data extraction, routing content through specialized AI models (OpenAI and Claude), aggregating and validating results, and formatting professional-grade reports. End-to-end processing includes batch document ingestion, structured extraction, parallel AI analysis, comparative evaluation, recommendation generation, report formatting, and tracked delivery via Gmail. Setup Steps Configure NVIDIA NIM API credentials for creative content analysis Add OpenAI API key with GPT-4 access for strategic evaluation Connect Anthropic Claude API for technical assessment capabilities Set up Google Sheets integration with read/write permissions Configure Gmail OAuth2 credentials for automated report delivery Customize analysis prompts and recommendation thresholds Prerequisites NVIDIA NIM API access, OpenAI API key (GPT-4), Anthropic Claude API key Use Cases Multi-vendor proposal evaluation, regulatory compliance document review Customization Adjust AI model parameters per analysis depth, modify recommendation scoring algorithms Benefits Processes multiple documents 90% faster than manual review and reduces single-model bias through multi-model analysis
by Rahul Joshi
Description Automatically score candidate questionnaire responses using Azure OpenAI (GPT-4o-mini), combine them with existing evaluations from Google Sheets, and keep your candidate database up to date—all in near real time. Get consistent, structured scores and key takeaways for faster, fairer decisions. ⚡📊 What This Template Does Monitors new questionnaire submissions in Google Sheets every minute. ⏱️ Evaluates responses with Azure OpenAI and returns structured JSON (score + takeaways). 🤖 Parses model output safely and normalizes fields. 🧩 Retrieves existing candidate data from a central Google Sheet. 📂 Calculates combined final scores and updates/append records by candidate name. ➕ Key Benefits Consistent, objective scoring across all responses. 🎯 Real-time processing from form submission to database update. 🚀 Clear JSON outputs for downstream reporting and analytics. 📈 No-code customization of questions, weights, and fields. 🛠 Scales effortlessly with high submission volumes. 📥 Features Continuous polling of the “BD Questionarie” → “Form Responses 1” sheet. 🔄 AI evaluation with GPT-4o-mini returning score (0–30) and takeaways. 🧠 Resilient JSON parsing (handles code fences and errors). 🧼 Candidate lookup in “Resume store” → “Sheet2” for data fusion. 🔗 Additive scoring model: Final Score = Existing Score + Questionnaire Score. ➕ Append or update records by name while preserving existing data. 📝 Requirements n8n instance (Cloud or self-hosted). 🌐 Google Sheets access: “BD Questionarie” spreadsheet (sheet: “Form Responses 1”) for new responses. “Resume store” spreadsheet (sheet: “Sheet2”) for existing profiles. Credentials configured in n8n (OAuth/Service Account) with read/write where needed. 🔐 Azure OpenAI access with a GPT-4o-mini deployment for evaluation and JSON output. 🤖 Ability to customize evaluation questions and scoring weights within the workflow. ⚙️ Target Audience Teams evaluating candidate questionnaires and consolidating scores. 
👥 Operations teams centralizing hiring data in Google Sheets. 🗂️ Organizations seeking real-time, AI-assisted screening. 🧭 No-code/low-code builders standardizing hiring workflows. 🧱 Step-by-Step Setup Instructions Connect Google Sheets credentials in n8n and grant access to “BD Questionarie” and “Resume store.” 🔑 Add Azure OpenAI credentials in n8n; ensure a GPT-4o-mini deployment is available. 🤝 Import the workflow, assign credentials to each node, and set the sheet IDs/ranges. 📋 Confirm name is the matching key, and adjust evaluation weights or questions as needed. ⚖ Run once to validate parsing and score calculation, then enable polling (every minute). ▶️
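The "resilient JSON parsing" and additive-scoring steps can be sketched as below. This is a minimal version: the fallback object shape and the fence-stripping approach are illustrative, not the workflow's exact code.

```javascript
// Sketch of parsing the model's JSON output safely, then adding the
// questionnaire score to the existing score from the Resume store sheet.
function parseModelOutput(raw) {
  // Models often wrap JSON in Markdown code fences; strip them first.
  const cleaned = raw.replace(/`{3}(?:json)?/g, "").trim();
  try {
    return JSON.parse(cleaned);
  } catch (err) {
    // Fall back to a neutral record instead of crashing the run.
    return { score: 0, takeaways: [], parse_error: String(err) };
  }
}

function combineScores(existingScore, questionnaireScore) {
  // Additive model from the template: Final = Existing + Questionnaire.
  return (Number(existingScore) || 0) + (Number(questionnaireScore) || 0);
}
```

Coercing with `Number(...)` guards against sheet cells that arrive as strings or are empty, which is common when reading Google Sheets rows.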
by n8n Automation Expert | Template Creator | 2+ Years Experience
Overview Transform your receipt management with this comprehensive n8n workflow that automatically processes receipts through Telegram, extracts transaction data using AI, and stores it across multiple platforms for seamless expense tracking. Key Features 📱 **Telegram Bot Integration**: Send receipts via photo or manual text entry 🔍 **OCR Processing**: Automatic text extraction from receipt images using OCR.space API 🤖 **AI Data Extraction**: OpenAI GPT-4 intelligently extracts vendor, amount, date, and category 📊 **Multi-Platform Storage**: Automatically saves to Google Sheets, Notion, and custom APIs 💾 **Receipt Archival**: Stores original receipt images in Google Drive ✅ **Smart Validation**: Validates extracted data and handles errors gracefully 📲 **Real-time Feedback**: Sends confirmation messages with transaction details How It Works Input Methods: Send receipt photos or text messages to your Telegram bot Image Processing: Downloads and processes receipt images using OCR technology AI Analysis: GPT-4 extracts structured transaction data from OCR text Data Validation: Ensures data quality and handles missing information Multi-Storage: Simultaneously saves to Google Sheets, Notion database, and external APIs Confirmation: Sends formatted confirmation with all transaction details Use Cases Personal expense tracking and budgeting Small business receipt management Travel expense documentation Tax preparation and record keeping Automated bookkeeping workflows Required Credentials Telegram Bot API (for bot functionality) OCR.space API (for receipt text extraction) OpenAI API (for AI data processing) Google Sheets OAuth2 (for spreadsheet storage) Google Drive OAuth2 (for image storage) Notion API (for database integration) Setup Notes Replace placeholder values in the workflow: YOUR_GOOGLE_SHEET_ID_HERE in Google Sheets node YOUR_NOTION_DATABASE_ID_HERE in Notion node YOUR_API_KEY_HERE and API endpoint in Website API node This workflow provides a complete solution for automated 
receipt processing, making expense tracking effortless through simple Telegram interactions while maintaining data across multiple platforms for maximum accessibility and backup.
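The "Smart Validation" step can be sketched as below, assuming the AI extraction returns `{ vendor, amount, date, category }` — the field names and rules are illustrative, not the workflow's exact schema.

```javascript
// Sketch of validating extracted receipt data before it is saved:
// require a vendor, a positive numeric amount, and an ISO-style date.
function validateTransaction(tx) {
  const errors = [];
  if (!tx.vendor) errors.push("missing vendor");
  const amount = Number(tx.amount);
  if (!Number.isFinite(amount) || amount <= 0) errors.push("invalid amount");
  if (!/^\d{4}-\d{2}-\d{2}$/.test(tx.date || "")) errors.push("invalid date");
  return { valid: errors.length === 0, errors, amount };
}
```

Collecting all errors (rather than failing on the first) lets the Telegram confirmation message tell the user exactly which fields the OCR/AI pass missed, so they can resend or correct by text.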
by Mikhail
How it works A user query is received via the Chat Trigger node. The Planning Agent decides whether the question requires a general knowledge answer or a research-oriented response. If a research query is detected, the arXiv Search node queries the arXiv API and retrieves recent relevant papers. JSON Parsers process the API response and extract metadata such as titles, abstracts, and links. The arXiv-Grounded Agent summarizes each paper and generates a final answer to the user question based strictly on retrieved content. The final response includes summaries and clickable citations from arXiv.
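The arXiv search and metadata-extraction steps can be sketched as below. The URL builder targets arXiv's public export API; the regex-based entry extractor is a simplified stand-in for the workflow's parser nodes (a real parser should handle the full Atom schema, not just `<id>` and `<title>`).

```javascript
// Sketch of querying arXiv's export API and pulling entries out of the
// Atom XML response.
function buildArxivUrl(query, maxResults = 5) {
  const params = new URLSearchParams({
    search_query: `all:${query}`,
    sortBy: "submittedDate",
    sortOrder: "descending",
    max_results: String(maxResults),
  });
  return `http://export.arxiv.org/api/query?${params}`;
}

function extractEntries(atomXml) {
  const entries = [];
  // arXiv Atom entries list <id> (the abs URL) before <title>.
  const re = /<entry>[\s\S]*?<id>(.*?)<\/id>[\s\S]*?<title>([\s\S]*?)<\/title>[\s\S]*?<\/entry>/g;
  let m;
  while ((m = re.exec(atomXml)) !== null) {
    entries.push({ link: m[1].trim(), title: m[2].trim() });
  }
  return entries;
}
```

The extracted `link` values are what become the clickable citations in the final answer.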
by Victor Manuel Lagunas Franco
I wanted a journal but never had the discipline to write one. Most of my day happens in Discord anyway, so I built this to do it for me. Every night, it reads my Discord channel, asks GPT-4 to write a short reflection, generates an image that captures the vibe of the day, and saves everything to Notion. I wake up with a diary entry I didn't have to write. How it works Runs daily at whatever time you set Grabs messages from a Discord channel (last 100) Filters to today's messages only GPT-4 writes a title, summary, mood, and tags DALL-E generates an image based on the day's themes Uploads image to Cloudinary (Notion needs a public URL) Creates a Notion page with everything formatted nicely Setup Discord Bot credentials (read message history permission) OpenAI API key Free Cloudinary account for image hosting Notion integration connected to your database Notion database properties needed Title (title) Date (date) Summary (text) Mood (select): 😊 Great, 😌 Good, 😐 Neutral, 😔 Low, 🔥 Productive Message Count (number) Takes about 15 minutes to set up. I use Gallery view in Notion with the AI image as cover - looks pretty cool after a few weeks.
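The "today's messages only" filter can be sketched as below, assuming each Discord message carries an ISO 8601 `timestamp` string (as the Discord API returns). Note the comparison is in UTC; shift `now` if your day boundary should follow a local timezone.

```javascript
// Sketch of filtering the last 100 fetched messages down to today's (UTC).
function filterToday(messages, now = new Date()) {
  const today = now.toISOString().slice(0, 10); // "YYYY-MM-DD"
  return messages.filter((m) => (m.timestamp || "").slice(0, 10) === today);
}
```

The length of the filtered array is also what feeds the Message Count number property on the Notion page.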