by Evoort Solutions
## 🔗 Automated Semrush Backlink Checker with n8n and Google Sheets

### 📘 Description
This n8n workflow automates backlink data extraction using the Semrush Backlink Checker API available on RapidAPI. By submitting a website via a simple form, the workflow fetches both backlink overview metrics and detailed backlink entries, saving the results directly into a connected Google Sheet. This is an ideal solution for SEO professionals who want fast, automated insights without logging into multiple tools.

### 🧩 Node-by-Node Explanation
- **On form submission** – Starts the workflow when a user submits a website URL through a web form.
- **HTTP Request** – Sends the URL to the **Semrush Backlink Checker API** using a POST request with headers and form data.
- **Reformat 1** – Extracts high-level backlink overview data such as total backlinks and referring domains.
- **Reformat 2** – Extracts individual backlink records such as source URLs, anchors, and metrics.
- **Backlink overview** – Appends overview metrics to the "backlink overview" tab of a Google Sheet.
- **Backlinks** – Appends detailed backlink data to the main "backlinks" tab of the same Google Sheet.

### ✅ Benefits of This Workflow
- **No-code integration**: Built entirely within n8n, with no scripting required.
- **Time-saving automation**: Eliminates the need to manually log in or export reports from Semrush.
- **Centralized results**: All backlink data is organized in Google Sheets for easy access and sharing.
- **Powered by RapidAPI**: Uses the **Semrush Backlink Checker API** hosted on RapidAPI for fast, reliable access.
- **Easily extendable**: Can be enhanced with notifications, dashboards, or additional data enrichment.

### 🛠️ Use Cases
- 📊 **SEO Audit Automation** – Auto-generate backlink insights for multiple websites via form submissions.
- 🧾 **Client Reporting** – Streamline backlink reporting for SEO agencies or consultants.
- 📥 **Lead Capture Tool** – Offer a free backlink analysis tool on your site to capture leads while showcasing value.
- 🔁 **Scheduled Backlink Monitoring** – Modify the trigger to run on a schedule for recurring reports.
- 📈 **Campaign Tracking** – Monitor backlinks earned during content marketing or digital PR campaigns.

### 🔐 How to Get Your API Key for the Semrush Backlink Checker API
1. Go to 👉 Semrush Backlink Checker API - RapidAPI
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API Key will be visible in the "x-rapidapi-key" header. 🔑
6. Copy and paste this key into the HTTP Request node in your workflow.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n

Save time, stay consistent, and keep a close eye on your backlink profile effortlessly!
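The HTTP Request node described above boils down to one authenticated POST. A minimal sketch of how that request is assembled, assuming an illustrative RapidAPI host string and a `url` form field (check the "Endpoints" tab on RapidAPI for the exact values shown for your subscription):

```javascript
// Builds the request the HTTP Request node sends to the Semrush
// Backlink Checker API on RapidAPI. Host and field names are
// assumptions; verify them against your RapidAPI dashboard.
function buildBacklinkRequest(targetUrl, apiKey) {
  return {
    method: "POST",
    headers: {
      "x-rapidapi-key": apiKey, // the key copied from RapidAPI
      "x-rapidapi-host": "semrush-backlink-checker.p.rapidapi.com", // assumed host
      "content-type": "application/x-www-form-urlencoded",
    },
    // form data, as configured in the node's "Body Parameters"
    body: new URLSearchParams({ url: targetUrl }).toString(),
  };
}

console.log(buildBacklinkRequest("https://example.com", "YOUR_RAPIDAPI_KEY").body);
// → url=https%3A%2F%2Fexample.com
```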
by Yves Junqueira
## Who's it for
Digital marketing agencies and Meta Ads managers who need to generate comprehensive performance reports across multiple client accounts automatically. Perfect for agencies handling 5+ Meta Ads accounts who want to save hours on manual reporting while delivering AI-powered insights to their teams.

## What it does
- Pulls performance data from multiple Meta Ads accounts for a specified time period (last 7, 14, or 30 days)
- Uses Claude AI with Pipeboard's Meta Ads MCP to analyze campaign performance, identify trends, and generate actionable insights
- Generates professional reports with AI-driven recommendations for optimization
- Automatically delivers formatted reports to your Slack channels
- Runs on a schedule (weekly/daily) or can be triggered manually

## How to set up
1. Set up the Claude AI integration (requires an Anthropic API key)
2. Configure Pipeboard's Meta Ads MCP connection
3. Connect Slack to n8n via OAuth2
4. Create a list of client account IDs in the workflow configuration
5. Customize your reporting template and Slack delivery settings

## Requirements
- n8n version 1.109.2 or newer
- Claude AI API access (Anthropic)
- Pipeboard account
- Slack workspace access

## How to customize the workflow
- Adjust the date range and metrics to track
- Modify the AI prompts for different types of insights
- Configure multiple Slack channels for different clients
- Set up custom scheduling intervals
- Add email delivery as an additional output channel
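The multi-account step above is a simple fan-out. A minimal sketch in Code-node style JavaScript, where `fetchInsights` stands in for the Pipeboard/Meta Ads MCP call and the metric names (`spend`, `roas`) are illustrative rather than the exact Meta API schema:

```javascript
// One insights fetch per client account ID, then one Slack summary
// line per account. Account IDs and metrics are hypothetical.
const accountIds = ["act_111", "act_222"];

function fetchInsights(accountId) {
  return { accountId, spend: 1250.5, roas: 3.2 }; // stubbed response
}

function toSlackLine({ accountId, spend, roas }) {
  return `*${accountId}* spend $${spend.toFixed(2)}, ROAS ${roas.toFixed(1)}x`;
}

const report = accountIds.map(fetchInsights).map(toSlackLine).join("\n");
console.log(report);
```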
by AppUnits AI
## Generate Invoices and Send Reminders for Customers with Jotform and Xero
This workflow automates the entire process of receiving a product/service order: checking for or creating a customer in Xero, generating an invoice, and emailing it, all triggered by a form submission (via Jotform), plus sending invoice reminders.

### How It Works
- **Receive Submission** – Triggered when a user submits a form. Collects data like customer details, the selected product/service, etc.
- **Create/Update the Customer** – Creates or updates the customer.
- **Create the Invoice** – Generates a new invoice for the customer using the selected item.
- **Send the Invoice** – Automatically sends the invoice via email to the customer.
- **Store the Invoice in DB** – Stores the needed invoice details in the data table.
- **Send Reminders** – Every day at 8 AM, the automation checks each invoice to decide whether to send a reminder email, skip it and send it later, or delete the invoice from the DB (if it's paid or all reminders have been sent).

### Who Can Benefit from This Workflow?
- **Freelancers**
- **Service Providers**
- **Consultants & Coaches**
- **Small Businesses**
- **E-commerce or Custom Product Sellers**

### Requirements
- Jotform webhook setup (more info here)
- Xero credentials (more info here)
- Make sure that the product/service values in Jotform exactly match your item Code in your Xero account
- Email setup: update the email nodes (Send email, Send reminder email, and Send reminders sent summary)
- Create a data table with the following columns:
  - invoiceId (string)
  - remainingAmount (number)
  - currency (string)
  - remindersSent (number)
  - lastSentAt (date time)
- Update the Add reminders config node to set the data table ID and the reminder intervals in days (default: after 2 days, then after 3 days, and finally after 5 days)
- LLM model credentials
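The daily 8 AM reminder decision described above can be sketched in Code-node style JavaScript, assuming the default intervals of 2, 3, then 5 days configured in the Add reminders config node:

```javascript
// Decide per invoice: "send" a reminder, "skip" for now, or "delete"
// the row from the data table. Uses the same columns the data table
// defines: remainingAmount, remindersSent, lastSentAt.
const intervalsDays = [2, 3, 5]; // default reminder intervals

function reminderAction(invoice, now = new Date()) {
  if (invoice.remainingAmount <= 0) return "delete"; // invoice is paid
  if (invoice.remindersSent >= intervalsDays.length) return "delete"; // all reminders sent
  const dueAfterDays = intervalsDays[invoice.remindersSent];
  const daysSinceLast = (now - new Date(invoice.lastSentAt)) / 86_400_000;
  return daysSinceLast >= dueAfterDays ? "send" : "skip";
}
```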
by AppUnits AI
## Generate Invoices and Send Reminders for Customers with Jotform, QuickBooks and Outlook
This workflow automates the entire process of receiving a product/service order: checking for or creating a customer in QuickBooks Online (QBO), generating an invoice, and emailing it, all triggered by a form submission (via Jotform), plus sending invoice reminders.

### How It Works
- **Receive Submission** – Triggered when a user submits a form. Collects data like customer details, the selected product/service, etc.
- **Check If Customer Exists** – Searches QBO to determine if the customer already exists.
  - **If the customer exists**: updates the customer details (e.g., billing address).
  - **If the customer doesn't exist**: creates a new customer in QBO.
- **Get the Item** – Retrieves the selected product or service from QBO.
- **Create the Invoice** – Generates a new invoice for the customer using the selected item.
- **Send the Invoice** – Automatically sends the invoice via email to the customer.
- **Store the Invoice in DB** – Stores the needed invoice details in the data table.
- **Send Reminders** – Every day at 8 AM, the automation checks each invoice to decide whether to send a reminder email, skip it and send it later, or delete the invoice from the DB (if it's paid or all reminders have been sent).

### Who Can Benefit from This Workflow?
- **Freelancers**
- **Service Providers**
- **Consultants & Coaches**
- **Small Businesses**
- **E-commerce or Custom Product Sellers**

### Requirements
- Jotform webhook setup (more info here)
- QuickBooks Online credentials (more info here)
- Email setup: update the email nodes (Send reminder email and Send reminders sent summary); more info about Outlook setup here
- Create a data table with the following columns:
  - invoiceId (string)
  - remainingAmount (number)
  - currency (string)
  - remindersSent (number)
  - lastSentAt (date time)
- Update the Add reminders config node to set the data table ID and the reminder intervals in days (default: after 2 days, then after 3 days, and finally after 5 days)
- LLM model credentials
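The "check if customer exists" branch above is essentially an upsert decision. A minimal sketch, where the field names `DisplayName` and `Id` follow QBO's Customer object but the form-submission shape is an illustrative assumption:

```javascript
// Match the form submission against existing QBO customers by display
// name, then decide between an update and a create.
function upsertPlan(submission, existingCustomers) {
  const match = existingCustomers.find(
    (c) => c.DisplayName.toLowerCase() === submission.name.toLowerCase()
  );
  return match
    ? { action: "update", customerId: match.Id, billingAddress: submission.address }
    : { action: "create", displayName: submission.name, billingAddress: submission.address };
}

const customers = [{ Id: "58", DisplayName: "Acme Corp" }];
const plan = upsertPlan({ name: "acme corp", address: "1 Main St" }, customers);
```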
by Kamran habib
## AI-Powered Twitter Automation with Content Generation and Engagement 🚀
This n8n template automates Twitter (X) activity, from generating tweet content with AI to engaging with posts and even sending DMs, all powered by Google Gemini or OpenRouter AI. It's designed for creators, marketers, brands, and agencies who want to automate their social media presence with authentic, on-brand AI content and engagement.

### How It Works
The workflow begins with a form trigger, where users input their topic, tone, and action type (Tweet, Engage, or DM). Those inputs are passed into Workflow Configuration, which sets key parameters like max tweet length and model URLs. Depending on your chosen action:
- **Post Tweet**: AI generates a tweet under 280 characters and can attach an image.
- **Engage with Posts**: AI can like, retweet, or reply to niche-relevant content.
- **Send Direct Message**: AI drafts a personalized DM for outreach or networking.

If your workflow includes visuals, the AI Agent - Create Image From Prompt node builds a detailed image prompt (based on your topic and instructions) and sends it to Google Gemini or other image APIs. The HTTP Request - Create Image node generates a custom image via an external model (default: Pollinations.ai). Finally, the tweet text and image data are merged in Merge Tweet Text and Image before being posted directly via the Create Tweet node.

### How To Use
1. Download and import the JSON workflow into your n8n instance.
2. Set up the following credentials:
   - OpenRouter API for text generation.
   - Google Gemini (PaLM) for chat and image prompt creation.
   - Twitter OAuth2 API for posting and engagement actions.
3. Configure your form input fields (Topic, Tone, Action, Instructions).
4. Enable or disable the nodes you want:
   - Create Tweet → to post automatically.
   - Twitter Engagement Tool → for likes/retweets/replies.
   - Twitter DM Tool → for automated DMs.
5. Trigger the Twitter Content Form via n8n's web interface.
6. Enter your content preferences and submit.

The workflow generates your tweet text, optionally creates a matching image, and posts or saves it automatically.

### Requirements
- A Twitter Developer Account (with OAuth2 credentials).
- A Google Gemini or OpenRouter account with text and image model access.
- (Optional) A connection to Pollinations or another AI image generation API.

### How To Customize
- Update the "Fields – Set Values" node to change:
  - Default image size (1080 × 1920 px).
  - Model name (e.g., "flux", "turbo", "kontext").
- Modify Workflow Configuration to tweak AI parameters like:
  - imageGenerationChance (default: 0.3).
  - maxTweetLength (default: 280).
- Replace the Google Gemini Chat Model with any supported model such as OpenAI GPT-4 or Mistral.
- Adjust the AI Agent - Create Image From Prompt system message for your preferred image style or guidelines.
- Toggle which Twitter actions are active (Post, Engage, or DM) to tailor the automation to your goals.
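The two configuration parameters called out above are easy to picture in Code-node style JavaScript. This is a sketch of the idea only (the workflow's actual expressions may differ): a random draw under `imageGenerationChance` enables the image branch, and text beyond `maxTweetLength` is trimmed before posting.

```javascript
const config = { imageGenerationChance: 0.3, maxTweetLength: 280 };

// Roughly 30% of runs generate an image with the default chance.
function shouldGenerateImage(chance = config.imageGenerationChance, rng = Math.random) {
  return rng() < chance;
}

// Trim over-length drafts to fit Twitter's character limit.
function clampTweet(text, maxLen = config.maxTweetLength) {
  return text.length <= maxLen ? text : text.slice(0, maxLen - 1) + "…";
}
```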
by Nishant
## Overview
Confused about which credit card to actually get or swipe? With 100+ cards in the market, hidden caps, and milestone rules, most people end up leaving rewards, perks, and cashback on the table. This workflow uses n8n + GPT + Google Sheets + Telegram to recommend the best credit card for each user's lifestyle in under 3 seconds, while keeping the logic transparent with a ₹-value breakdown.

### What does this workflow do?
This workflow:
1. **Captures User Inputs** – Users answer a 7-question lifestyle quiz via Telegram.
2. **Stores Responses** – Google Sheets logs all answers for resumption & deduplication.
3. **Scores Answers** – n8n Function nodes map single- and multi-select inputs into scores.
4. **Generates Recommendations** – GPT analyses the profile against a 30+ card dataset.
5. **Breaks Down Value** – Outputs a transparent table of rewards, milestones, and lounge value.
6. **Delivers Results** – The top 3 card picks are returned instantly on Telegram.

### Why is this useful?
Most card comparison tools only list features; they don't personalise or calculate actual value. This workflow builds a decision engine:
- 🔍 **Personalised** → matches lifestyle to best-fit cards
- 💸 **Transparent** → shows value in real currency (rewards, milestones, lounges)
- ⏱ **Fast** → answers in under 3 seconds
- 🗂 **Organised** → Google Sheets keeps an audit trail of every user + dedupe

### Tools used
- **n8n (Orchestrator)**: Orchestration + logic branching
- **Telegram**: User-facing quiz bot
- **Google Sheets**: Database of credit cards + logs of user answers
- **OpenAI (GPT)**: Analyses the user profile & generates recommendations

### Who is this for?
- 🧑💻 Fintech product builders → see how AI can power recommendation engines
- 💳 Cardholders → understand which card fits their lifestyle best
- ⚙️ n8n makers → learn how to combine Sheets + GPT + a chat interface into one workflow

### 🌍 How to adapt it for your country/location
This workflow uses a credit card dataset stored in Google Sheets. To make it work for your country:
1. **Build your dataset** → scrape or collect card details from banks, comparison sites, or official portals. Fields to include: fees, reward rate, lounge access, forex markup, reward caps, milestones, eligibility. You can use web crawlers (e.g., Apify, PhantomBuster) to automate data collection.
2. **Update the Google Sheet** → replace the India dataset with your country's cards.
3. **Adjust the scoring logic** → modify the Function nodes if your cards use different reward structures (e.g., cashback %, miles, points value).
4. **Run the workflow** → GPT will analyse against the new dataset and generate recommendations specific to your country.

This makes the workflow flexible for any geography.

### Workflow Highlights
✅ End-to-end credit card recommendation pipeline (quiz → scoring → GPT → result)
✅ Handles single + multi-select inputs fairly with % match scoring
✅ Transparent value breakdown in local currency (rewards, milestones, lounge access)
✅ Google Sheets for persistence, dedupe & audit trail
✅ Delivers the top 3 cards in <3 seconds on Telegram
✅ Fully customisable for any country by swapping the dataset
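The "% match scoring" mentioned in the highlights can be sketched in Function-node style JavaScript. This is an illustrative version of the idea, not the workflow's exact logic, and the tag names are placeholders:

```javascript
// A card's score is the share of its target tags the user actually
// selected, so cards with longer tag lists aren't unfairly favored.
function matchScore(userSelections, cardTags) {
  if (cardTags.length === 0) return 0;
  const hits = cardTags.filter((tag) => userSelections.includes(tag)).length;
  return Math.round((hits / cardTags.length) * 100);
}

// Rank a small hypothetical dataset by score.
const cards = [
  { name: "TravelMax", tags: ["travel", "lounge", "forex"] },
  { name: "CashBack+", tags: ["groceries", "fuel"] },
];
const picks = cards
  .map((c) => ({ ...c, score: matchScore(["travel", "lounge"], c.tags) }))
  .sort((a, b) => b.score - a.score);
```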
by Nishant
## Overview
Tired of cookie-cutter "AI LinkedIn post generators"? This workflow goes beyond text generation: it orchestrates the entire lifecycle of a LinkedIn post. From idea capture to deduplication, from GPT-powered drafting to automatic image generation and link storage, it creates ready-to-publish posts while keeping your content unique and audit-friendly.

### What does this workflow do?
This workflow:
1. **Captures Ideas & Briefs** – Inputs are logged in Google Sheets with audience, goals, and angles.
2. **Deduplicates Smartly** – Avoids repeating hooks or ideas with fuzzy GPT-based dedupe + Google Sheets logs.
3. **Generates Posts** – GPT (OpenAI) drafts sharp, LinkedIn-ready posts based on your brief.
4. **Creates Images** – The post hook + body is sent to an image-generation model (DALL·E / SDXL) → PNG asset.
5. **Stores & Links** – The final text + image are uploaded to Google Drive with shareable links.
6. **Keeps an Audit Trail** – Google Sheets keeps the full history: raw idea, draft, final post, assets, notes.

### Why is this useful?
Most "AI post generators" just spit out text. This workflow builds a real publishing pipeline:
- 🔄 **No duplicates** → keeps posts fresh & original.
- 🖼 **Images included** → auto-generated visuals increase engagement on LinkedIn.
- 📊 **Audit-ready** → every post has a traceable log in Sheets.
- ⚡ **Fast iteration** → from half-baked thought to polished post in minutes.

### Tools used
- **n8n (Orchestrator)**: Automates triggers, merges, retries, and Google connectors.
- **OpenAI (LLM)**: Idea generation, drafting, fuzzy dedupe, and voice conformity.
- **Google Sheets**: Source of truth: stores ideas, dedupe logs, audit trail.
- **Google Drive**: Stores rendered images and shares links for publishing.
- **Image Generation (DALL·E / SDXL)**: Creates header graphics from the hook + body.

### Who is this for?
- 🧑💻 Product managers / founders who want to post consistently but don't have time.
- 🎨 Creators who want to add unique visuals without hiring a designer.
- ⚙️ n8n builders who want to see how AI + automation + storage can be stitched into one pipeline.

### Workflow Highlights
✅ Full content pipeline (ideas → images → final copy).
✅ GPT-based fuzzy dedupe to avoid repetition.
✅ Auto-generated images for higher engagement.
✅ Clean logs in Google Sheets for future reuse & audits.
✅ Ready-to-publish LinkedIn posts in minutes.
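The fuzzy dedupe step above uses GPT, but the same idea can be pre-filtered cheaply with plain string similarity. A sketch of one such local check (not the workflow's actual implementation; the 0.6 threshold is an assumption to tune):

```javascript
// Jaccard overlap on normalized words: flags near-duplicate hooks
// against the Google Sheets log before (or instead of) asking GPT.
function tokens(s) {
  return new Set(
    s.toLowerCase().replace(/[^a-z0-9\s]/g, "").split(/\s+/).filter(Boolean)
  );
}

function jaccard(a, b) {
  const ta = tokens(a), tb = tokens(b);
  const inter = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : inter / union;
}

function isDuplicate(newHook, pastHooks, threshold = 0.6) {
  return pastHooks.some((h) => jaccard(newHook, h) >= threshold);
}
```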
by Cj Elijah Garay
## 📋 WORKFLOW OVERVIEW
**Automate Reactions for Telegram Channel Posts** – an automated Telegram reaction system for specific posts.

### Flow
1. A user sends a message to a receiver bot
2. AI parses the request (emoji type & quantity)
3. A Code node processes and validates the data
4. A loop sends reactions one by one
5. The user receives a confirmation

### Key Features
- Natural language control: send a message to a chat bot to react to a post on a different channel
- Iterates through bot token rotation: if you use 100 bots, you can add 100 reactions per post of your choice
- Rate limit protection
- Error handling with helpful messages

You first need to add the bots you personally own (acquired from BotFather) to the channel you want them to react in, and allow them to manage messages.

### Required Bot Permissions
**The bot must be an administrator.** It needs to be added as an admin to the channel (regular member status won't work for reactions). When adding the bot as admin, enable these specific admin rights:
- ✅ "Post Messages" – this is the key permission needed
- ✅ "Add Subscribers" (optional, but sometimes required depending on channel settings)

### Credentials Needed
- Target Channel ID
- Bot tokens
- Bot Receiver token
- OpenAI API

### Example Usage
"https://t.me/channel/123 needs 10 hearts and 10 fire reactions"

If you need help, contact me at: elijahmamuri@gmail.com
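The token-rotation loop described above is a fan-out over your bot tokens, each calling Telegram's `setMessageReaction` Bot API method. A minimal sketch that only builds the calls (the actual HTTP request is left out, and the token values are placeholders):

```javascript
// One reaction per owned bot, sent to the same channel post.
const botTokens = ["TOKEN_1", "TOKEN_2"]; // one entry per bot from BotFather

function buildReactionCall(token, chatId, messageId, emoji) {
  return {
    url: `https://api.telegram.org/bot${token}/setMessageReaction`,
    payload: {
      chat_id: chatId,
      message_id: messageId,
      reaction: [{ type: "emoji", emoji }], // must be an emoji Telegram allows as a reaction
    },
  };
}

const calls = botTokens.map((t) => buildReactionCall(t, "@mychannel", 123, "❤"));
```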
by Javier Rieiro
## Overview
This workflow automates static security analysis for JavaScript, PHP, and Python codebases. It's designed for bug bounty hunters and security researchers who need fast, structured, AI-assisted vulnerability detection across multiple sources.

## Features
### 🤖 AI-Powered Analysis
Specialized agents for each language:
- AI JavaScript Expert
- AI PHP Expert
- AI Python Expert

Each agent detects only exploitable vulnerabilities (AST + regex heuristics) and returns strict JSON:

```json
{
  "results": [
    {
      "url": "file or URL",
      "code": "lines + snippet",
      "severity": "medium|high|critical",
      "vuln": "vulnerability type"
    }
  ]
}
```

### 🧩 Post-Processing
- Cleans, formats, and validates the JSON results.
- Generates HTML tables with clear styling for quick visualization.

## Output
- ✅ JSON vulnerability reports per file.
- 📊 HTML table summaries grouped by language and severity.

## Usage
1. Import the workflow into n8n.
2. Configure credentials:
   - OpenAI API key
   - GitHub API key
   - Google Drive API key
3. Run via the provided webhook form.
4. Select the analysis mode and input the target.
5. View structured vulnerability reports directly in n8n or Google Drive.

## Notes
- Performs static analysis only (no code execution).
- Detects exploitable findings only; ignores low-impact issues.
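The post-processing validation step can be sketched against the strict JSON contract shown above. This is an illustrative check, not the workflow's exact code:

```javascript
// Keep only result entries that match the agents' JSON schema,
// dropping anything with a missing field or an unknown severity.
const SEVERITIES = new Set(["medium", "high", "critical"]);

function validateResults(payload) {
  if (!payload || !Array.isArray(payload.results)) return [];
  return payload.results.filter(
    (r) =>
      r &&
      typeof r.url === "string" &&
      typeof r.code === "string" &&
      typeof r.vuln === "string" &&
      SEVERITIES.has(r.severity)
  );
}
```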
by Robert Breen
This n8n workflow automates bulk AI image generation using Freepik's Text-to-Image API. It reads prompts from a Google Sheet, generates multiple variations of each image using Freepik's AI, and automatically uploads the results to Google Drive with organized file names. This is perfect for content creators, marketers, or designers who need to generate multiple AI images in bulk and store them systematically.

### Key Features
- Bulk image generation from Google Sheets prompts
- Multiple variations per prompt (configurable duplicates)
- Automatic file naming and organization
- Direct upload to Google Drive
- Batch processing for efficient API usage
- Freepik AI-powered image generation

## Step-by-Step Implementation Guide

### Prerequisites
Before setting up this workflow, you'll need:
- An n8n instance (cloud or self-hosted)
- A Freepik API account with Text-to-Image access
- A Google account with access to Sheets and Drive
- A Google Sheet with your prompts

### Step 1: Set Up Freepik API Credentials
1. Go to the Freepik API Developer Portal
2. Create an account or sign in
3. Navigate to your API dashboard
4. Generate an API key for the Text-to-Image service
5. Copy the API key and save it securely
6. In n8n, go to Credentials → Add Credential → HTTP Header Auth
7. Configure as follows:
   - Name: "Header Auth account"
   - Header Name: x-freepik-api-key
   - Header Value: your Freepik API key

### Step 2: Set Up Google Credentials
**Google Sheets Access:**
1. Go to the Google Cloud Console
2. Create a new project or select an existing one
3. Enable the Google Sheets API
4. Create OAuth2 credentials
5. In n8n, go to Credentials → Add Credential → Google Sheets OAuth2 API
6. Enter your OAuth2 credentials and authorize with the spreadsheets.readonly scope

**Google Drive Access:**
1. In the Google Cloud Console, enable the Google Drive API
2. In n8n, go to Credentials → Add Credential → Google Drive OAuth2 API
3. Enter your OAuth2 credentials and authorize

### Step 3: Create Your Google Sheet
1. Create a new Google Sheet at sheets.google.com
2. Set up your sheet with these columns:
   - Column A: Prompt (your image generation prompts)
   - Column B: Name (identifier for file naming)

Example data:

| Prompt                                    | Name        |
|-------------------------------------------|-------------|
| A serene mountain landscape at sunrise    | mountain-01 |
| Modern office space with natural lighting | office-02   |
| Cozy coffee shop interior                 | cafe-03     |

3. Copy the Sheet ID from the URL (the long string between /d/ and /edit)

### Step 4: Set Up Google Drive Folder
1. Create a folder in Google Drive for your generated images
2. Copy the Folder ID from the URL when viewing the folder
3. Note: the workflow is configured to use a folder called "n8n workflows"

### Step 5: Import and Configure the Workflow
1. Copy the provided workflow JSON
2. In n8n, click Import from File or Import from Clipboard
3. Paste the workflow JSON
4. Configure each node as detailed below

**Node Configuration Details:**

**Start Workflow (Manual Trigger)**
- No configuration needed; used to manually start the workflow

**Get Prompt from Google Sheet (Google Sheets)**
- Document ID: your Google Sheet ID (from Step 3)
- Sheet Name: Sheet1 (or your sheet name)
- Operation: Read
- Credentials: select your "Google Sheets account"

**Double Output (Code Node)**
- Purpose: creates multiple variations of each prompt
- JavaScript code:

```javascript
const original = items[0].json;
return [
  { json: { ...original, run: 1 } },
  { json: { ...original, run: 2 } },
];
```

- Customization: add more runs for additional variations

**Loop (Split in Batches)**
- Processes items in batches to manage API rate limits
- Options: keep default settings
- Reset: false

**Create Image (HTTP Request)**
- Method: POST
- URL: https://api.freepik.com/v1/ai/text-to-image
- Authentication: Generic → HTTP Header Auth
- Credentials: select your "Header Auth account"
- Send Body: true
- Body Parameters:
  - Name: prompt
  - Value: ={{ $json.Prompt }}

**Split Responses (Split Out)**
- Field to Split Out: data
- Purpose: separates multiple images from the API response

**Convert to File (Convert to File)**
- Operation: toBinary
- Source Property: base64
- Purpose: converts base64 image data to file format

**Upload Image to Google Drive (Google Drive)**
- Operation: Upload
- Name: =Image - {{ $('Get Prompt from Google Sheet').item.json.Name }} - {{ $('Double Output').item.json.run }}
- Drive ID: My Drive
- Folder ID: your Google Drive folder ID (from Step 4)
- Credentials: select your "Google Drive account"

### Step 6: Customize for Your Use Case
- Modify the duplicate count: edit the "Double Output" code to create more variations
- Update file naming: change the naming pattern in the Google Drive upload node
- Adjust batch size: modify the Loop node settings for your API limits
- Add image parameters: enhance the HTTP request with additional Freepik parameters (size, style, etc.)

### Step 7: Test the Workflow
1. Ensure your Google Sheet has test data
2. Click Execute Workflow on the manual trigger
3. Monitor the execution flow
4. Check that images are generated and uploaded to Google Drive
5. Verify that file names match your expected pattern

### Step 8: Production Deployment
- Set up error handling for API failures
- Configure appropriate batch sizes based on your Freepik API limits
- Add logging for successful uploads
- Consider webhook triggers for automated execution
- Set up monitoring for failed executions

### Freepik API Parameters
Basic parameters:
- prompt: your text description (required)
- negative_prompt: what to avoid in the image
- guidance_scale: how closely to follow the prompt (1-20)
- num_inference_steps: quality vs. speed trade-off (20-100)
- seed: for reproducible results

Example enhanced body:

```json
{
  "prompt": "{{ $json.Prompt }}",
  "negative_prompt": "blurry, low quality",
  "guidance_scale": 7.5,
  "num_inference_steps": 50,
  "num_images": 1
}
```

### Workflow Flow Summary
1. Start → manual trigger initiates the workflow
2. Read Sheet → gets prompts and names from Google Sheets
3. Duplicate → creates multiple runs for variations
4. Loop → processes items in batches
5. Generate → the Freepik API creates images from prompts
6. Split → separates multiple images from the response
7. Convert → transforms base64 into binary file format
8. Upload → saves images to Google Drive with organized names
9. Complete → returns to the loop for the next batch

### Contact Information
Robert A. Breen, Ynteractive
For support, customization, or questions about this workflow:
- 📧 Email: rbreen@ynteractive.com
- 🌐 Website: https://ynteractive.com/
- 💼 LinkedIn: https://www.linkedin.com/in/robert-breen-29429625/

Need help implementing this workflow or want custom automation solutions? Get in touch for professional n8n consulting and workflow development services.
by Daniel
## Adaptive LLM Router for Optimized AI Chat Responses
Elevate your AI chatbots with intelligent model selection: automatically route simple queries to cost-effective LLMs and complex ones to powerful models, balancing performance and expenses seamlessly.

### What It Does
This workflow listens for chat messages, uses a lightweight Gemini model to classify query complexity, then selects and routes to the optimal LLM (Gemini 2.5 Pro for complex queries, OpenAI GPT-4.1 Nano for simple ones) to generate responses, ensuring efficient resource use.

### Key Features
- **Complexity Classifier** – Quick assessment using Gemini 2.0 Flash
- **Dynamic Model Switching** – Routes to premium or budget models based on needs
- **Chat Trigger** – Webhook-based for real-time conversations
- **Current Date Awareness** – Injects $now into the system prompt
- **Modular Design** – Easy to add more models or adjust rules
- **Cost Optimization** – Reserves heavy models for demanding tasks only

### Perfect For
- **Chatbot Developers**: Build responsive, cost-aware AI assistants
- **Customer Support**: Handle routine vs. technical queries efficiently
- **Educational Tools**: Simple facts vs. in-depth explanations
- **Content Creators**: Quick ideas vs. detailed writing assistance
- **Researchers**: Basic lookups vs. complex analysis
- **Business Apps**: Optimize API costs in production environments

### Technical Highlights
Harnessing n8n's LangChain nodes, this workflow demonstrates:
- Webhook triggers for instant chat handling
- Agent-based classification with strict output rules
- Conditional model selection for AI chains
- Integration of multiple LLM providers (Google Gemini, OpenAI)
- A scalable architecture for expanding model options

Ideal for minimizing AI costs while maximizing response quality. No coding required: import, configure credentials, and deploy!
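The routing rule described above reduces to a small mapping from the classifier's label to a model. A sketch of the idea, where the model names mirror the ones named in this description but the exact API identifiers may differ:

```javascript
// Map the lightweight classifier's output to a model choice,
// failing cheap on anything unexpected.
function pickModel(complexityLabel) {
  switch (complexityLabel) {
    case "complex":
      return "gemini-2.5-pro"; // premium model for demanding queries
    case "simple":
      return "gpt-4.1-nano"; // budget model for routine queries
    default:
      return "gpt-4.1-nano"; // unknown labels route to the cheap model
  }
}
```

Defaulting unknown labels to the budget model keeps a misbehaving classifier from silently running up costs.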
by Milan Vasarhelyi - SmoothWork
## Video Introduction
Want to automate your inbox or need a custom workflow? 📞 Book a Call | 💬 DM me on LinkedIn

### What This Workflow Does
This workflow creates an AI-powered chatbot that can answer natural language questions about your QuickBooks Online data. Using OpenAI's GPT models and the Model Context Protocol (MCP), the agent can retrieve customer information, analyze balances, and provide insights through a conversational interface. Users can simply ask questions like "How many customers do we have?" or "What's our total customer balance?" and get instant answers from live QuickBooks data.

### Key Features
- **Natural language queries**: Ask questions about your QuickBooks data in plain English
- **MCP architecture**: Uses the Model Context Protocol to manage tools efficiently, making it easy to expand with additional QuickBooks operations
- **Public chat interface**: Share the chatbot URL with team members who need QuickBooks insights without direct access
- **Real-time data**: Queries live QuickBooks data for up-to-date answers

### Common Use Cases
- Customer service teams checking account balances without logging into QuickBooks
- Sales teams quickly looking up customer information
- Finance teams getting quick answers about customer data
- Managers monitoring key metrics through conversational queries

### Setup Requirements
1. **QuickBooks Developer Account**: Register at developer.intuit.com and create an app with Accounting scope permissions. You'll receive a Client ID and Client Secret.
2. **Configure OAuth**: In your Intuit Developer dashboard, add the redirect URL provided by n8n when creating QuickBooks credentials. Set the environment to Sandbox for testing, or complete Intuit's app approval process for Production use.
3. **OpenAI API**: Add your OpenAI API credentials to power the chat model. The workflow uses GPT-4.1-mini by default, but you can select other models based on your performance and cost requirements.
4. **Chat Access**: The chat trigger is set to public by default. Configure access settings based on your security requirements before sharing the chat URL.