by Sira Ekabut
This workflow automates AI-based image generation using the Fal.ai Flux API. Define custom prompts and image parameters, then generate, monitor, and save the output directly to Google Drive. Streamline your creative automation with ease and precision.

## Who is this for?
This template is for content creators, developers, automation experts, and creative professionals looking to integrate AI-based image generation into their workflows. It's ideal for generating custom visuals with the Fal.ai Flux API and automating storage in Google Drive.

## What problem is this workflow solving?
Manually generating AI-based images, checking their status, and saving the results is tedious. This workflow automates the entire process: requesting image generation, monitoring its progress, downloading the result, and saving it directly to a Google Drive folder.

## What this workflow does
- **Sets custom image parameters:** Lets you define the prompt, resolution, guidance scale, and steps for AI image generation.
- **Sends a request to Fal.ai:** Initiates the image generation process using the Fal.ai Flux API (see the example request body below).
- **Monitors image status:** Checks for completion and waits if needed.
- **Downloads the generated image:** Fetches the completed image once it is ready.
- **Saves to Google Drive:** Automatically uploads the generated image to a specified Google Drive folder.

## Setup
1. **Prerequisites:**
   - **Fal.ai API key:** Obtain it from the Fal.ai platform and set it as the `Authorization` header in HTTP Header Auth credentials.
   - **Google Drive OAuth credentials:** Connect your Google Drive account in n8n.
2. **Configuration:**
   - Update the "Edit Fields" node with your desired image parameters:
     - **Prompt:** Describe the image (e.g., "Thai young woman net idol 25 yrs old, walking on the street").
     - **Width/Height:** Define the image resolution (default: 1024x768).
     - **Steps:** Number of inference steps (e.g., 30).
     - **Guidance Scale:** Controls how closely the image adheres to the prompt (e.g., 3.5).
   - Set your Google Drive folder ID in the "Google Drive" node to save the image.
3. **Run the workflow:**
   - Trigger the workflow manually to generate the image.
   - The workflow waits, checks the status, and saves the final output seamlessly.

## Customization
- **Modify image parameters:** Adjust the prompt, resolution, steps, and guidance scale in the "Edit Fields" node.
- **Change storage location:** Update the Google Drive node with a different folder ID.
- **Add notifications:** Integrate an email or messaging node to alert you when the image is ready.
- **Additional outputs:** Expand the workflow to send the generated image to Slack, Dropbox, or other platforms.

This workflow streamlines AI-based image generation and storage, offering flexibility and customization for creative automation.
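As referenced in "What this workflow does", here is a minimal sketch of the kind of JSON body the HTTP Request node sends to Fal.ai. The field names match the Flux parameters described above, but the exact endpoint and schema depend on the Flux model variant you use, so verify against Fal.ai's API docs:

```json
{
  "prompt": "Thai young woman net idol 25 yrs old, walking on the street",
  "image_size": { "width": 1024, "height": 768 },
  "num_inference_steps": 30,
  "guidance_scale": 3.5
}
```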
by Ranjan Dailata
## Who is this for?
This workflow enables automated, scalable collection of high-quality, AI-ready data from websites using Bright Data's Web Unlocker, with a focus on preparing that data for LLM training. Leveraging LLM chains and AI agents, the system formats and extracts key information, then stores the structured embeddings in a Pinecone vector database.

This workflow is tailored for:
- **ML engineers & researchers** building or fine-tuning domain-specific LLMs.
- **AI startups** needing clean, structured content for product training.
- **Data teams** preparing knowledge bases for enterprise-grade AI apps.
- **LLM-as-a-Service providers** sourcing dynamic web content across niches.

## What problem is this workflow solving?
Training a large language model (LLM) requires vast amounts of clean, relevant, and structured data. Manual collection is slow, error-prone, and doesn't scale. This workflow:
- Automatically extracts web data from specified URLs.
- Bypasses anti-bot measures using Bright Data's Web Unlocker.
- Formats, cleans, and transforms raw content using LLM agents.
- Stores semantically searchable vectors in Pinecone.
- Makes datasets AI-ready for fine-tuning, RAG, or domain-specific training.

## What this workflow does
This workflow automates the process of collecting, cleaning, and vectorizing web content to create structured, high-quality datasets ready for LLM training or retrieval-augmented generation (RAG):
1. Web crawling with Bright Data Web Unlocker.
2. AI information extraction and data formatting.
3. AI data formatting to produce structured JSON data.
4. Persistence in the Pinecone vector DB (see the sketch below for the record shape).
5. Webhook notification of the structured data.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. Get a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL by navigating to the "Set LinkedIn URL" node.
6. Update the "Set Fields - URL and Webhook URL" node with the URL for web data extraction and the webhook notification URL.

## How to customize this workflow to your needs
- **Set your target URLs.** Target sites that are high-quality, domain-specific, and relevant to your LLM's purpose.
- **Adjust Bright Data Web Unlocker settings:** geo-location, headers/User-Agent strings, retry rules, and proxies.
- **Modify the information extraction logic.** Change prompts to extract specific attributes; use structured templates or few-shot examples in prompts.
- **Swap the embedding model.** Use OpenAI, Hugging Face, or your own hosted embedding model API.
- **Customize Pinecone metadata fields.** Store extra fields in Pinecone for better filtering and semantic querying.
- **Add data validation or deduplication.** Skip duplicates or low-quality content.
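For orientation, here is a minimal sketch of a Pinecone-ready record as it might look after the extraction and embedding steps. The `id` and metadata field names are illustrative assumptions rather than the template's exact output, and the `values` array is truncated here; a real embedding has hundreds or thousands of dimensions:

```json
{
  "id": "page-0001",
  "values": [0.012, -0.034, 0.087],
  "metadata": {
    "source_url": "https://example.com/article",
    "title": "Extracted page title",
    "extracted_at": "2024-01-01T00:00:00Z"
  }
}
```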
by Jimleuk
This n8n workflow demonstrates how to create a simple yet effective customer support channel and pipeline by combining Slack, Linear, and AI tools. Built on n8n's ability to integrate with almost anything, this workflow is intended for small support teams who want to maximise reuse of the tools they already have, with an interface that doesn't require any onboarding.

Read the blog post here: https://blog.n8n.io/automated-customer-support-tickets-with-n8n-slack-linear-and-ai/

## How it works
- The workflow is connected to a Slack channel set up with the customer to capture support issues.
- Only messages tagged with a "✅" reaction are captured by the workflow. Messages are tagged by the support team in the channel (see the filter sketch below).
- Each captured support issue is sent to the AI model to classify, prioritise, and rewrite into a support ticket.
- The generated support ticket is uploaded to Linear for the support team to investigate and track.
- The support team can report back to the user via the channel when the issue is fixed.

## Requirements
- Slack channel to be monitored
- Linear account and project

## Customising this workflow
Don't have Linear? This workflow can work just as well with traditional ticketing systems like Jira.
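A minimal sketch of how the ✅ filter could be expressed in an n8n Code node, assuming Slack's standard message shape where reactions arrive as an array of `{ name, count }` objects (the ✅ emoji is named `white_check_mark` in Slack):

```javascript
// Keep only messages that the support team has tagged with a ✅ reaction.
// Assumes each item is a Slack message object with an optional `reactions` array.
const tagged = $input.all().filter((item) => {
  const reactions = item.json.reactions ?? [];
  return reactions.some((r) => r.name === 'white_check_mark');
});
return tagged;
```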
by Adam Crafts
# 🎥 n8n Workflow: Generate AI Videos with HeyGen

## 🚀 Overview
This automation connects directly to HeyGen's powerful AI video generation platform. It allows you to programmatically create videos with digital avatars and voiceovers, perfect for scaling your content creation for social media, marketing campaigns, or personalized messages without ever opening a video editor.

## 😩 The Problem
Creating video content is incredibly time-consuming and expensive. You have to write scripts, record audio, find actors or create complex animations, piece everything together in an editor, and then wait for it to render. Every minor change or personalization requires repeating the entire frustrating process. This manual work is a major bottleneck, making it nearly impossible to produce large volumes of high-quality video content quickly and affordably.

## ✨ The Solution
This workflow acts as your personal, automated video production assistant! When you provide a script, the automation instantly sends instructions to HeyGen to begin creating your video. It tells the AI which avatar and voice to use and starts the generation process. Then, it cleverly waits and periodically checks the status until your new video is finished and ready. It's a completely hands-off process that transforms simple text into professional AI videos on demand.

## 🔧 What It Does
1. Sends a request to HeyGen's API to generate a video with a custom avatar, a scripted voice-over, and a background color and dimension (see the example payload below).
2. Waits 30 seconds.
3. Checks the video status.
4. Loops until the video is completed, failed, or still processing.

## ⚙️ Simple Setup
This workflow is a pre-built blueprint, designed to be up and running in minutes!
- **Upload:** Simply upload the provided JSON file into your n8n instance.
- **Connect:** Connect your app credentials (e.g., your HeyGen account). The workflow will show you exactly where.
- **Activate:** Turn the workflow on, and it's ready to go! Let your new automated employee get to work.

This free n8n workflow allows you to generate AI videos using HeyGen via their API.

## 🌐 Explore more workflows
- ❤️ Buy more workflows at: adamcrafts
- 🦾 Custom workflows at: adamcrafts@cloudysoftwares.com, adamaicrafts@gmail.com

> Build once, customize endlessly, and scale your video content like never before. 🚀
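As referenced in "What It Does" above, here is a sketch of the kind of payload sent to HeyGen's video-generate endpoint. This reflects the shape of HeyGen's v2 API as commonly documented; the avatar and voice IDs are placeholders, and field names may change, so verify against HeyGen's current API reference:

```json
{
  "video_inputs": [
    {
      "character": { "type": "avatar", "avatar_id": "YOUR_AVATAR_ID" },
      "voice": { "type": "text", "input_text": "Your script goes here.", "voice_id": "YOUR_VOICE_ID" },
      "background": { "type": "color", "value": "#FFFFFF" }
    }
  ],
  "dimension": { "width": 1280, "height": 720 }
}
```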
by ist00dent
This n8n workflow provides a simple yet powerful utility to convert Unix timestamps (seconds since epoch) into the universally recognized ISO 8601 date and time format. This is crucial for harmonizing date data across different systems, databases, and applications.

## 🔧 How it works
1. **Receive Timestamp Webhook:** This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing a single property, `timestamp`, which should be a Unix timestamp in seconds (e.g., `1678886400`).
2. **Convert to ISO 8601:** This node takes the timestamp received from the webhook. Since JavaScript's `Date` object uses milliseconds, it multiplies the Unix timestamp by 1000. It then uses `new Date(...).toISOString()` to convert this into an ISO 8601 formatted string (e.g., `2023-03-15T00:00:00.000Z`) and assigns it to a new property called `convertedTime`.
3. **Respond with Converted Time:** This node sends the `convertedTime` property back as the response to the original webhook caller.

## 👤 Who is it for?
This workflow is extremely useful for:
- **Developers & integrators:** When working with APIs or databases that return dates as Unix timestamps, and you need to display them in a human-readable or standardized format in your applications or dashboards.
- **Data analysts & scientists:** For cleaning and transforming raw timestamp data from logs, event streams, or legacy systems into a consistent format for analysis.
- **System administrators:** For debugging logs, where timestamps are often in Unix format.
- **Anyone managing data imports/exports:** Ensuring date compatibility when moving data between different platforms.
- **Automators:** As a building block in larger workflows where incoming data has Unix timestamps that need to be normalized before further processing (e.g., adding to a spreadsheet, sending in an email, or performing date calculations).

## 📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows:

```json
{ "timestamp": 1678886400 }
```

The workflow will return a JSON response similar to this:

```json
{ "convertedTime": "2023-03-15T00:00:00.000Z" }
```

## ⚙️ Setup Instructions
1. **Import workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. **Configure webhook path:** Double-click the Receive Timestamp Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., `/convert-timestamp` or `/unix-to-iso`).
3. **Activate workflow:** Save and activate the workflow.

## 📝 Tips
This simple conversion workflow can be drastically enhanced and leveraged in many ways:

- **Dynamic output formats:**
  - *Upgrade:* Modify the Convert to ISO 8601 node (or add a Function node after it) to accept an optional `format` parameter in the webhook.
  - *Leverage:* Allow users to request formats like `MM/DD/YYYY HH:mm:ss`, `YYYY-MM-DD`, `DD-MM-YYYY`, or just the time, making the output directly usable in various contexts without further processing.
Example using a Function node:

```javascript
// The webhook puts the POST payload under `body`.
const date = new Date($json.body.timestamp * 1000);
const format = $json.body.format || 'iso'; // Default to ISO

let output;
switch (format.toLowerCase()) {
  case 'iso':
    output = date.toISOString();
    break;
  case 'locale': // e.g., "3/15/2023, 12:00:00 AM" (rendered in UTC)
    output = date.toLocaleString('en-US', { timeZone: 'UTC' });
    break;
  case 'dateonly': // e.g., "2023-03-15"
    output = date.toISOString().split('T')[0];
    break;
  case 'timeonly': // e.g., "00:00:00" (rendered in UTC)
    output = date.toLocaleTimeString('en-US', { timeZone: 'UTC', hour12: false });
    break;
  default:
    output = date.toISOString(); // Fallback
}

return [{ json: { convertedTime: output } }];
```

- **Timezone conversion:**
  - *Upgrade:* Combine this with the Time Zone Converter workflow (or integrate moment-timezone.js in a Code node if you have a self-hosted instance). Accept an optional `targetTimeZone` parameter in the webhook.
  - *Leverage:* Convert the Unix timestamp directly into a human-readable date and time in a specific target timezone, which is incredibly valuable for global scheduling or reporting.
- **Error handling and input validation:**
  - *Upgrade:* Add an IF node after the Receive Timestamp Webhook. Check whether `isNaN($json.body.timestamp)` or `typeof $json.body.timestamp !== 'number'`.
  - *Leverage:* If the input timestamp is invalid, branch to a Respond to Webhook node that returns a clear error message (e.g., "Invalid timestamp provided. Please provide a numeric Unix timestamp in seconds."). This makes your API more robust.
- **Reverse conversion (ISO to Unix):**
  - *Upgrade:* Create a separate workflow, or add another branch to this one, to convert an ISO 8601 string back to a Unix timestamp. This provides a complete conversion utility. Example Set node value: `={{ new Date($json.body.isoString).getTime() / 1000 }}`
- **Integration with data pipelines:**
  - *Upgrade:* Use this workflow as a microservice in larger ETL (Extract, Transform, Load) pipelines.
  - *Leverage:* If you're pulling data from a source that provides Unix timestamps (e.g., a logging system, IoT device, or certain databases), send that data through this workflow to normalize the dates before loading them into your analytics database, CRM, or data warehouse.
- **Automated reporting:**
  - *Upgrade:* If you have a system that generates reports with Unix timestamps, trigger this webhook for each timestamp.
  - *Leverage:* Produce reports with human-readable dates for better readability and decision-making for non-technical stakeholders.

This workflow is a cornerstone for any automation involving diverse date and time data. By implementing the suggested upgrades, you can transform it from a basic converter into a highly flexible and reliable date-time processing hub.
by Jimleuk
This n8n workflow demonstrates how to automate the indexing of images to build an object-based image search. By utilising a DETR-ResNet-50 object classification model, we can identify objects within an image and store these associations in Elasticsearch along with a reference to the image.

## How it works
- An image is imported into the workflow via the HTTP Request node.
- The image is sent to Cloudflare's Workers AI API, where the service runs the image through the DETR-ResNet-50 object classification model.
- The API returns the object associations with their positions in the image, labels, and the confidence score of each classification. Detections with a confidence score of less than 0.9 are discarded for brevity (see the sketch below).
- The image's URL and its associations are then indexed in an Elasticsearch server, ready for searching.

## Requirements
- A Cloudflare account with Workers AI enabled, to access the object classification model.
- An Elasticsearch instance to store the image URL and related associations.

## Extending this workflow
- Further enrich your indexed data with additional attributes or metrics relevant to your users.
- Use a vector store to provide similarity search over the images.
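A minimal sketch of the filtering and document-building step as an n8n Code node, assuming Cloudflare's object-detection output of `{ label, score, box }` entries under a `result` field; the `result` key and the "Fetch Image" node name are assumptions to adjust to your workflow:

```javascript
// Drop low-confidence detections and build the Elasticsearch document.
const response = $input.first().json;
const detections = (response.result ?? []).filter((d) => d.score >= 0.9);

return [{
  json: {
    imageUrl: $('Fetch Image').first().json.url, // hypothetical upstream node
    objects: detections.map((d) => ({ label: d.label, score: d.score })),
  },
}];
```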
by Yaron Been
# 🚀 Automated Job Hunter: Upwork Opportunity Aggregator & AI-Powered Notifier!

## Workflow Overview
This cutting-edge n8n automation is a sophisticated job discovery and notification tool designed to transform freelance job hunting into a seamless, intelligent process. By intelligently connecting Apify, OpenAI, Google Sheets, and Gmail, this workflow:

- **Discovers job opportunities:** Automatically scrapes Upwork job listings, tracks recent freelance opportunities, and eliminates manual job-searching effort.
- **Intelligent data processing:** Filters and extracts key job details, structures job information, and ensures comprehensive opportunity tracking.
- **AI-powered summarization:** Generates concise job summaries, creates human-readable job digests, and provides quick, actionable insights.
- **Seamless notification:** Automatically logs jobs to Google Sheets and sends personalized email digests, enabling rapid opportunity assessment.

## Key Benefits
- 🤖 **Full automation:** Zero-touch job discovery
- 💡 **Smart filtering:** Targeted job opportunities
- 📊 **Comprehensive tracking:** Detailed job market insights
- 🌐 **Multi-platform synchronization:** Seamless data flow

## Workflow Architecture
- 🔹 **Stage 1: Job Discovery.** Scheduled trigger for daily job scanning; Apify integration for Upwork job scraping; intelligent filtering on recent job postings, specific keywords, and relevant opportunities.
- 🔹 **Stage 2: Data Extraction.** Comprehensive job metadata parsing, key information retrieval, and structured data preparation (see the sketch below).
- 🔹 **Stage 3: AI Summarization.** OpenAI GPT processing, professional summary generation, and contextual job insight creation.
- 🔹 **Stage 4: Multi-Platform Distribution.** Google Sheets logging, Gmail integration, and automated job digest delivery.

## Potential Use Cases
- **Freelancers:** Opportunity tracking
- **Job seekers:** Automated job discovery
- **Recruitment agencies:** Market intelligence
- **Skill development professionals:** Trend monitoring
- **Career coaches:** Client opportunity identification

## Setup Requirements
- **Apify:** Upwork scraping actor, API token, configured scraping parameters
- **OpenAI API:** GPT model access, summarization configuration, API key management
- **Google Sheets:** Connected Google account, prepared job tracking spreadsheet, appropriate sharing settings
- **Gmail account:** Connected email, job digest configuration, appropriate sending permissions
- **n8n installation:** Cloud or self-hosted instance, workflow configuration, API credential management

## Future Enhancement Suggestions
- 🤖 Advanced job matching algorithms
- 📊 Multi-platform job aggregation
- 🔔 Customizable alert mechanisms
- 🌐 Expanded job category tracking
- 🧠 Machine learning job recommendations

## Technical Considerations
- Implement robust error handling
- Use secure API authentication
- Maintain flexible data processing
- Ensure compliance with platform guidelines

## Ethical Guidelines
- Respect job poster privacy
- Use data for legitimate job searching
- Maintain transparent information gathering
- Provide proper attribution

## Hashtag Performance Boost 🚀
#FreelanceJobHunting #CareerAutomation #JobDiscovery #AIJobSearch #WorkflowAutomation #FreelanceTech #CareerIntelligence #JobMarketInsights #ProfessionalNetworking #TechJobSearch

## Workflow Visualization
```
[Daily Trigger]
      ⬇️
[Fetch Upwork Jobs]
      ⬇️
[Format Job Fields]
      ⬇️
[Log to Google Sheets]
      ⬇️
[AI Summarization]
      ⬇️
[Send Email Digest]
```

## Connect With Me
Ready to revolutionize your job hunting strategy?
- 📧 Email: Yaron@nofluff.online
- 🎥 YouTube: @YaronBeen
- 💼 LinkedIn: Yaron Been

Transform your job search with intelligent, automated workflows!
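As referenced in Stage 2, here is a minimal sketch of what a "Format Job Fields" Code node could look like. The Apify field names (`title`, `url`, `budget`, `postedDate`) are illustrative assumptions; the actual keys depend on the scraping actor you use:

```javascript
// Map raw Apify results to the flat columns logged to Google Sheets.
// All source field names here are assumptions; match them to your actor's output.
return $input.all().map((item) => ({
  json: {
    title: item.json.title,
    url: item.json.url,
    budget: item.json.budget ?? 'N/A',
    posted: item.json.postedDate,
    fetchedAt: new Date().toISOString(),
  },
}));
```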
by ARRE
## Good to know
This workflow automatically processes product images from Google Drive, generates AI-powered background prompts using your choice of AI model (ChatGPT, Claude, or Groq), creates professional background scenes using Pixelcut.ai, and saves the enhanced images back to your Google Drive. Perfect for e-commerce businesses and product photography workflows.

## Who is this for?
➖ E-commerce store owners who need professional product backgrounds
➖ Product photographers looking to automate background generation
➖ Marketing teams creating consistent product imagery
➖ Small businesses wanting to enhance their product photos without expensive studio setups
➖ Anyone who needs to quickly transform transparent product images into commercial-ready photos

## What problem is this workflow solving?
This workflow solves the challenge of creating professional product photography backgrounds at scale. Instead of manually editing each product image or setting up expensive photo shoots, it automatically generates contextually appropriate backgrounds for your products using AI technology. It eliminates the time-consuming process of background creation while maintaining professional quality and consistency across your product catalog.

## What this workflow does
✅ Automatically fetches product images from your Google Drive folder
✅ Downloads transparent/background-free product images
✅ Uses advanced AI models (ChatGPT, Claude, or Groq) to generate intelligent background prompts based on product analysis
✅ Creates professional backgrounds using the Pixelcut.ai API with AI-generated or custom prompts
✅ Saves enhanced product images back to Google Drive with organized naming (see the naming sketch below)
✅ Processes multiple images in batch automatically

## How it works
1️⃣ A Google Drive node searches for PNG product images in your specified folder
2️⃣ A binary download node retrieves the actual image files for processing
3️⃣ An optional AI agent analyzes the products using your chosen AI model (OpenAI GPT-4, Claude, or Groq) and generates appropriate background prompts
4️⃣ The Pixelcut.ai API processes the images and adds professional backgrounds using AI-generated or manual prompts
5️⃣ Enhanced images are automatically saved back to Google Drive with an "enhanced-" prefix

## How to use
1. Set up Google Drive OAuth2 credentials in n8n
2. Create a Pixelcut.ai account and get your API key
3. Configure your source folder ID in the Google Drive nodes
4. Set up your output folder ID for enhanced images
5. Choose and configure your preferred AI model credentials (OpenAI for ChatGPT, Anthropic for Claude, or Groq)
6. Replace the placeholder API keys with your actual credentials
7. Execute the workflow to process your product images

## Requirements
✅ n8n instance (cloud or self-hosted)
✅ Google Drive account with OAuth2 access
✅ Pixelcut.ai API account and key
✅ Product images in PNG format (transparent backgrounds recommended)
✅ AI API credentials for automatic prompt generation (choose from): OpenAI API (for ChatGPT/GPT-4), Anthropic API (for Claude), or Groq API (for fast inference)
✅ Basic understanding of n8n workflows

## Customizing this workflow
🟢 Modify the image format filter to support JPG, WEBP, or other formats
🟢 Switch between different AI models (ChatGPT, Claude, Groq) for prompt generation
🟢 Customize background prompts for different product categories
🟢 Add a background removal step for products with existing backgrounds
🟢 Switch to different AI background services (Deep-Image.ai, Remove.bg, etc.)
🟢 Configure different AI model parameters for varied prompt creativity
🟢 Add image resizing or quality optimization steps
🟢 Create multiple output folders for different product categories
🟢 Add error handling and retry mechanisms for failed processes
🟢 Implement A/B testing with different AI models for prompt quality comparison
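A minimal sketch of the "enhanced-" naming step described above, written as an n8n Code node. The `name` field is an assumption about what the Google Drive node exposes for each file:

```javascript
// Build the "enhanced-" output filename for the Google Drive upload step.
// Assumes each item carries the original file name under `json.name`.
return $input.all().map((item) => ({
  json: { ...item.json, outputName: `enhanced-${item.json.name}` },
  binary: item.binary, // pass the processed image through unchanged
}));
```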
by Daniel Shashko
This workflow automates the daily monitoring of how an AI model (like ChatGPT) responds to specific queries relevant to your market. It identifies mentions of your brand and predefined competitors, logs detailed interactions in Google Sheets, and delivers a comprehensive email report.

## Main Use Cases
- Monitor how your brand is mentioned by AI in response to relevant user queries.
- Track mentions of key competitors to understand the AI's comparative positioning.
- Gain insights into the AI's current knowledge and portrayal of your brand and market landscape.
- Automate daily intelligence gathering on AI-driven brand perception.

## How it works
The workflow operates as a scheduled process, organized into these stages:

1. **Configuration & scheduling:** Triggers daily (or can be run manually). Key variables are defined within the workflow: your brand name (e.g., "YourBrandName"), a list of queries to ask the AI, and a list of competitor names to track in responses.
2. **AI querying:** For each predefined query, the workflow sends a request to the OpenAI ChatGPT API (via an HTTP Request node).
3. **Response analysis:** Each AI response is processed by a Code node to check whether your brand name is mentioned (case-insensitive), identify which of the listed competitors are mentioned (case-insensitive), and extract the core AI response content, limited to 500 characters for brevity in logs and reports (see the sketch below).
4. **Data logging to Google Sheets:** Detailed results for each query, including timestamp, date, the query itself, query index, your brand name, the AI's response, whether your brand was mentioned, and any errors, are appended to a specified Google Sheet.
5. **Email report generation:** A comprehensive HTML email report is compiled, summarizing: total queries processed, number of times your brand was mentioned, total competitor mentions, and any errors encountered; a summary of competitor mentions, listing each competitor and how many times they were mentioned; and a detailed table listing each query, whether your brand was mentioned, and which competitors (if any) were mentioned in the AI's response.
6. **Automated reporting:** The generated HTML email report is sent to the specified recipients, providing a daily snapshot of AI interactions.

**Summary flow:** Schedule/Workflow Trigger → Initialize Brand, Queries, Competitors (in a Code node) → For each query: Query ChatGPT API → Process AI Response (check for brand & competitor mentions) → Log Results to Google Sheets → Generate Consolidated HTML Email Report → Send Email Notification

**Benefits:**
- Fully automated daily monitoring of AI responses concerning your brand and competitors.
- Objective insights into how AI models represent your brand in user interactions.
- Actionable competitive intelligence by tracking competitor mentions.
- Centralized logging in Google Sheets for historical analysis and trend spotting.
- Easily customizable with your specific brand, queries, competitor list, and reporting recipients.
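A minimal sketch of the response-analysis Code node described in step 3, assuming the standard OpenAI chat completions response shape (`choices[0].message.content`); the brand and competitor names are placeholders:

```javascript
// Check the AI response for brand and competitor mentions (case-insensitive)
// and truncate the content to 500 characters for logging.
const brand = 'YourBrandName'; // placeholder
const competitors = ['CompetitorA', 'CompetitorB']; // placeholders

const content = $input.first().json.choices[0].message.content;
const lower = content.toLowerCase();

return [{
  json: {
    response: content.slice(0, 500),
    brandMentioned: lower.includes(brand.toLowerCase()),
    competitorsMentioned: competitors.filter((c) => lower.includes(c.toLowerCase())),
  },
}];
```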
by Jonathan
You can still use an app in a workflow even if n8n doesn't have a dedicated node or operation for it. With the HTTP Request node, it is possible to call any API endpoint and use the incoming data in your workflow.

## Main use cases
- Connect with apps and services that n8n doesn't have an integration for
- Web scraping

## How it works
This workflow can be divided into three branches, each serving a distinct purpose:

1. **Splitting into items (HTTP Request - Get Mock Albums):** The workflow initiates with a manual trigger (On clicking 'execute'). It performs an HTTP request to retrieve mock albums data from https://jsonplaceholder.typicode.com/albums. The obtained data is split into items using the Item Lists node, facilitating easier management.
2. **Data scraping (HTTP Request - Get Wikipedia Page and HTML Extract):** Another branch fetches a random Wikipedia page using an HTTP request to https://en.wikipedia.org/wiki/Special:Random. The HTML Extract node then extracts the article title from the fetched page.
3. **Handling pagination:** The final branch handles pagination for a GitHub API request. It sends an HTTP request to https://api.github.com/users/that-one-tom/starred, with parameters like the page number and items per page set dynamically by the Set node. The workflow uses a condition (If - Are we finished?) to check whether there are more pages to retrieve and increments the page number accordingly (Set - Increment Page). This repeats until all pages are fetched, allowing for comprehensive data retrieval (see the sketch below).
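A minimal sketch of the "Are we finished?" termination logic from branch 3, expressed as an n8n Code node. GitHub returns at most `per_page` items per page, so a short page signals the last one; the `page` field and node wiring here are illustrative, since the template itself tracks the page number with Set nodes:

```javascript
// Decide whether to fetch another page of GitHub results.
// A response with fewer items than `per_page` means we've reached the end.
const perPage = 30; // must match the per_page query parameter
const received = $input.all().length;
const page = $input.first().json.page ?? 1; // illustrative: page counter from the Set node

return [{ json: { finished: received < perPage, nextPage: page + 1 } }];
```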
by n8n Team
This workflow syncs data between Notion and Asana whenever a new task is created or an update is made in one of the apps.

## Prerequisites
- Asana account and Asana credentials
- Notion account and Notion credentials

## How it works
1. Go to your Asana account.
2. Create a new task in Asana.
3. Notice that a new task is created in your Notion account.
4. Update the task in Asana.
5. Notice that the task is updated in Notion.
by Nasser
## Who is this for?
- Content creators
- YouTube automation teams
- Marketing teams

## How it works
1. Enter the ID of the YouTube channel; the workflow triggers when a new video is posted (see the feed URL example below).
2. Apify scrapes the latest video of the channel.
3. The workflow waits until the dataset is completed in Apify, then fetches it.
4. It checks whether metadata has already been generated and, if not, generates it with an LLM.
5. It formats all the generated data and updates the YouTube video.

📺 YouTube Video Tutorial:

## Setup

**Input (YouTube channel):** Go to the channel's page on YouTube and look at the URL of the page. The channel ID is the value that comes after `channel/` in the URL. Add it after `?channel_id=`. You can also use free tools to retrieve a channel ID.

**Output (YouTube video update):** Connect your YouTube account to your n8n instance via the Google Cloud Console. You can find tutorials by searching "youtube api OAuth" on Google.

**APIs:** For the following third-party integrations, replace `[YOUR_API_TOKEN]` with your API token, or connect your account via Client ID / Secret to your n8n instance:
- Apify: https://docs.apify.com/api/v2/getting-started
- YouTube: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.youtube/?utm_source=n8n_app&utm_medium=node_settings_modal-credential_link&utm_campaign=n8n-nodes-base.youTube#templates-and-examples

👨‍💻 More workflows: https://n8n.io/creators/nasser/
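For reference, the `?channel_id=` instruction above suggests the trigger watches YouTube's public RSS feed for the channel. Assuming that is the case, the resulting feed URL looks like this (the `UC…` ID is a placeholder):

```
https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxxxxxxxxxxxxxxxxxx
```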