by Meak
Auto-Call Leads from Google Sheets with VAPI → Log Results + Book Calendar

This workflow calls new leads from a Google Sheet using VAPI, saves the call results, and (if there’s a booking request) creates a Google Calendar event automatically.

Benefits

- Auto-call each new lead from your call list
- Save full call outcomes back to Google Sheets
- Parse “today/tomorrow + time” into a real datetime (IST)
- Auto-create calendar events for bookings/deliveries
- Batch-friendly to avoid rate limits

How It Works

1. Trigger: New row in Google Sheets (call_list).
2. Prepare: Normalize the phone number (adds +), then process in batches.
3. Call: Send the number to VAPI (/call) with your assistantId + phoneNumberId.
4. Receive: VAPI posts results to your Webhook.
5. Store: Append/update the Google Sheet with name, role, company, phone, email, interest level, objections, next step, notes, etc.
6. Parse Time: Convert today/tomorrow + HH:MM AM/PM to start/end in IST (+1 hour) — see the sketch at the end of this section.
7. Book: Create a Google Calendar event with the parsed times.
8. Respond: Send a response back to VAPI to complete the cycle.

Who Is This For

- Real estate / local service teams running outbound calls
- Agencies doing voice outreach and appointment setting
- Ops teams that want call logs + auto-booking in one place

Setup

- **Google Sheets Trigger:** select your spreadsheet Vapi_real-estate and tab call_list.
- **VAPI Call:** set assistantId, phoneNumberId, and add a Bearer token.
- **Webhook:** copy the n8n webhook URL into VAPI so results post back.
- **Google Calendar:** set the calendar ID (e.g., you@domain.com).
- **Timezone:** the booking parser formats times to **Asia/Kolkata (IST)**.
- **Batching:** adjust the SplitInBatches size to control pace.

ROI & Monetization

- Save 2–4 hours/week on manual dialing + data entry
- Faster follow-ups with instant booking creation
- Package as an “AI Caller + Auto-Booking” service ($1k–$3k/month)

Strategy Insights

In the full walkthrough, I show how to:

- Map VAPI tool-call JSON safely into Sheets fields
- Handle missing/invalid times and default to safe slots
- Add no-answer / retry logic and opt-out handling
- Extend to send Slack/email alerts for hot leads

Check Out My Channel

For more voice automation workflows that turn leads into booked calls, check out my YouTube channel where I share the exact setups I use to win clients and scale to $20k+ monthly revenue.
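Step 6 is the only non-trivial transformation in the flow. Below is a minimal sketch of that parsing logic as it might look in an n8n Code node; the day/time input fields are hypothetical names, so map them to whatever your VAPI webhook actually sends.

```javascript
// Minimal sketch of the "Parse Time" step, assuming the call result carries
// hypothetical fields like { day: "tomorrow", time: "3:30 PM" }.
const { day, time } = $json;

// Parse "HH:MM AM/PM"
const m = time.match(/(\d{1,2}):(\d{2})\s*(AM|PM)/i);
if (!m) throw new Error(`Unrecognized time: ${time}`);
let hours = parseInt(m[1], 10) % 12;
if (m[3].toUpperCase() === 'PM') hours += 12;
const minutes = parseInt(m[2], 10);

// Compute "today" as an IST wall-clock date (UTC+5:30), shift a day if "tomorrow"
const d = new Date(Date.now() + 5.5 * 60 * 60 * 1000);
if (/tomorrow/i.test(day)) d.setUTCDate(d.getUTCDate() + 1);
d.setUTCHours(hours, minutes, 0, 0);

// Emit ISO strings with the IST offset; end = start + 1 hour
const fmt = (date) => date.toISOString().replace('Z', '+05:30');
return [{ json: {
  start: fmt(d),
  end: fmt(new Date(d.getTime() + 60 * 60 * 1000)),
  timeZone: 'Asia/Kolkata',
} }];
```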
by Jay Emp0
🐱 MemeCoin Art Generator - using Gemini Flash NanoBanana & upload to Twitter

Automatically generates memecoin art and posts it to Twitter (X), powered by Google Gemini, NanoBanana image generation, and n8n automation.

🧩 Overview

This workflow creates viral-style memecoin images (like Popcat) and posts them directly to Twitter with a witty, Gen Z-style tweet. It combines text-to-image AI, scheduled triggers, and social publishing in one seamless flow.

Workflow flow:

1. Define your memecoin mascot (name, description, and base image URL).
2. Generate an AI image prompt and a meme tweet.
3. Feed the base mascot image into the Gemini Image Generation API.
4. Render a futuristic memecoin artwork using NanoBanana.
5. Upload the final image and tweet automatically to Twitter.

🧠 Workflow Diagram

⚙️ Key Components

| Node | Function |
|------|----------|
| Schedule Trigger | Runs automatically at chosen intervals to start meme generation. |
| Define Memecoin | Defines mascot name, description, and base image URL. |
| AI Agent | Generates tweet text and a creative image prompt using Google Gemini. |
| Google Gemini Chat Model | Provides trending topic context and meme phrasing. |
| Get Source Image | Fetches the original mascot image (e.g., Popcat). |
| Convert Source Image to Base64 | Prepares the image for AI-based remixing. |
| Generate Image using NanoBanana | Sends the prompt and base image to the Gemini Image API for art generation. |
| Convert Base64 to PNG | Converts the AI output to an image file. |
| Upload to Twitter | Uploads the generated image to Twitter via the media upload API. |
| Create Tweet | Publishes the tweet with the attached image. |

🪄 How It Works

1️⃣ Schedule Trigger - starts the automation (e.g., hourly or daily).

2️⃣ Define Memecoin - stores your mascot metadata:

memecoin_name: popcat
mascot_description: cat with open mouth
mascot_image: https://i.pinimg.com/736x/9d/05/6b/9d056b5b97c0513a4fc9d9cd93304a05.jpg

3️⃣ AI Agent - prompts Gemini to:

- Write a short 100-character tweet in Gen Z slang.
- Create an image generation prompt inspired by current meme trends.

4️⃣ NanoBanana API - applies your base image + AI prompt to create art (see the sketch after this section).

5️⃣ Upload & Tweet - the final image gets uploaded and posted automatically.

🧠 Example Output

Base Source Image · Generated Image (AI remix) · Published Tweet

Example tweet text:

> Popcat's about to go absolutely wild, gonna moon harder than my last test score! 🚀📈 We up! #Popcat #Memecoin

🧩 Setup Tutorial

1️⃣ Prerequisites

| Tool | Purpose |
|------|---------|
| n8n (Cloud or Self-hosted) | Workflow automation platform |
| Google Gemini API Key | For generating tweet and image prompts |
| Twitter (X) API OAuth1 + OAuth2 | For uploading and posting tweets |

2️⃣ Import the Workflow

1. Download memecoin art generator.json.
2. In n8n, click Import Workflow → From File.
3. Set up and connect credentials: Google Gemini API, Twitter OAuth.
4. (Optional) Adjust the Schedule Trigger frequency to your desired posting interval.

3️⃣ Customize Your MemeCoin

In the Define Memecoin node, edit these fields to change your meme theme:

memecoin_name: "doggo"
mascot_description: "shiba inu in astronaut suit"
mascot_image: "https://example.com/shiba.jpg"

That’s it - the next cycle will generate your new meme and post it.

4️⃣ API Notes

- **Gemini Image Generation API Docs:** https://ai.google.dev/gemini-api/docs/image-generation#gemini-image-editing
- **API Key Portal:** https://aistudio.google.com/api-keys
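For readers who want to see what steps 3–4 boil down to, here is a hedged sketch of the base64 remix call in n8n Code-node JavaScript. The gemini-2.5-flash-image model name and the request/response field paths are assumptions based on the Gemini image docs linked above, so verify them before relying on this.

```javascript
// Sketch of the NanoBanana (Gemini image) remix call; verify endpoint and
// field names against the Gemini image-generation docs linked above.
const imageUrl = $json.mascot_image;
const prompt = $json.image_prompt; // produced by the AI Agent step (assumed field)

// 1. Fetch the base mascot image and convert it to base64
const res = await fetch(imageUrl);
const base64 = Buffer.from(await res.arrayBuffer()).toString('base64');

// 2. Send prompt + base image to the Gemini image model
const api = 'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent';
const body = {
  contents: [{
    parts: [
      { text: prompt },
      { inline_data: { mime_type: 'image/jpeg', data: base64 } },
    ],
  }],
};
const out = await (await fetch(`${api}?key=${process.env.GEMINI_API_KEY}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
})).json();

// 3. Pull the generated image (base64) out of the response (shape assumed)
const part = out.candidates[0].content.parts.find((p) => p.inline_data);
return [{ json: { imageBase64: part.inline_data.data } }];
```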
by Ranjan Dailata
This workflow automates company research and intelligence extraction from Glassdoor using the Decodo API for data retrieval and Google Gemini for AI-powered summarization.

Who this is for

This workflow is ideal for:

- Recruiters, analysts, and market researchers looking for structured insights from company profiles.
- HR tech developers and AI research teams needing a reliable way to extract and summarize Glassdoor data automatically.
- Venture analysts or due diligence teams conducting company research combining structured and unstructured content.
- Anyone who wants instant summaries and insights from Glassdoor company pages without manual scraping.

What problem this workflow solves

- **Manual Data Extraction**: Glassdoor company details and reviews are often scattered and inconsistent, requiring time-consuming copy-paste efforts.
- **Unstructured Insights**: Raw reviews contain valuable opinions but are not organized for analytical use.
- **Fragmented Company Data**: Key metrics like ratings, pros/cons, and FAQs are mixed with irrelevant data.
- **Need for AI Summarization**: Business users need a concise, executive-level summary that combines employee sentiment, culture, and overall performance metrics.

This workflow automates data mining, summarization, and structuring, transforming Glassdoor data into ready-to-use JSON and Markdown summaries.

What this workflow does

The workflow automates the end-to-end pipeline for Glassdoor company research:

1. **Trigger**: Start manually by clicking “Execute Workflow.”
2. **Set Input Fields**: Define company_url (e.g., a Glassdoor company profile link) and geo (country).
3. **Extract Raw Data from Glassdoor (Decodo Node)**: Uses the Decodo API to fetch company data, including overview, ratings, reviews, and frequently asked questions.
4. **Generate Structured Data (Google Gemini + Output Parser)**: The Structured Data Extractor node (powered by Gemini AI) processes raw data into well-defined fields: company overview (name, size, website, type), ratings breakdown, review snippets (pros, cons, roles), FAQs, and key takeaways.
5. **Summarize the Insights (Gemini AI Summarizer)**: Produces a detailed summary highlighting company reputation, work culture, employee sentiment trends, strengths and weaknesses, and hiring recommendations.
6. **Merge and Format**: Combines the structured data and summary into a unified object for output.
7. **Export and Save**: Converts the final report into JSON and writes it to disk as C:\{{CompanyName}}.json.
8. **Binary Encoding for File Handling**: Prepares the data in base64 for easy integration with APIs or downloadable reports (see the sketch after this section).

Setup

Prerequisites

- **n8n instance** (cloud or self-hosted)
- **Decodo API credentials** (added as decodoApi)
- **Google Gemini (PaLM) API credentials**
- Access to the Glassdoor company URLs

Make sure to install the Decodo Community Node.

Steps

1. Import this workflow JSON file into your n8n instance.
2. Configure your credentials for the Decodo API and the Google Gemini (PaLM) API.
3. Open the Set the Input Fields node and replace company_url with the Glassdoor URL and geo with the region (e.g., India, US, etc.).
4. Execute the workflow.
5. Check your output folder (C:\) for the exported JSON report.

How to Customize This Workflow

You can easily adapt this template to your needs:

- **Add Sentiment Analysis**: Include another Gemini or OpenAI node to rate sentiment (positive/negative/neutral) per review.
- **Export to Notion or Google Sheets**: Replace the file node with a Notion or Sheets integration for live dashboarding.
- **Multi-Company Batch Mode**: Convert the manual trigger to a spreadsheet or webhook trigger for bulk research automation.
- **Add Visualization Layer**: Connect the output to Looker Studio or Power BI for analytical dashboards.
- **Change Output Format**: Modify the final write node to generate Markdown or PDF summaries using the pypandoc or reportlab module.

Summary

This n8n workflow combines Decodo web scraping with Google Gemini’s reasoning and summarization power to build a fully automated Glassdoor Research Engine. With a single execution, it:

- Extracts structured company details
- Summarizes thousands of employee reviews
- Delivers insights in an easy-to-consume format

Ideal for: recruitment intelligence, market research, employer branding, and competitive HR analysis.
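As a reference for steps 6–8 above, here is a rough sketch of the merge and base64-encoding logic in an n8n Code node. The upstream node names and field paths are hypothetical; the template's actual nodes may differ.

```javascript
// Sketch only: the node names and field paths below are assumptions.
const structured = $('Structured Data Extractor').first().json;
const summary = $('Gemini AI Summarizer').first().json.text;

// Merge structured data and summary into one report object
const report = { ...structured, summary, generatedAt: new Date().toISOString() };

// Build the C:\{{CompanyName}}.json file name from the extracted company name
const company = (structured.company_overview?.name || 'company').replace(/[^\w-]+/g, '_');
const fileName = `${company}.json`;

// Base64-encode the JSON so a binary-aware node can write or upload it
const base64 = Buffer.from(JSON.stringify(report, null, 2)).toString('base64');
return [{
  json: { fileName },
  binary: { data: { data: base64, mimeType: 'application/json', fileName } },
}];
```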
by Aadarsh Jain
Document Analyzer and Q&A Workflow

AI-powered document and web page analysis using n8n and a GPT model. Ask questions about any local file or web URL and get intelligent, formatted answers.

Who's it for

Perfect for researchers, developers, content analysts, students, and anyone who needs quick insights from documents or web pages without uploading files to external services.

What it does

- **Analyzes local files**: PDF, Markdown, Text, JSON, YAML, Word docs
- **Fetches web content**: Documentation sites, blogs, articles
- **Answers questions**: Using a GPT model with structured, well-formatted responses

Input format: path_or_url | your_question

Examples:

/Users/docs/readme.md | What are the installation steps?
https://n8n.io | What is n8n?

Setup

1. Import the workflow into n8n
2. Add your OpenAI API key to credentials
3. Link the credential to the "OpenAI Document Analyzer" node
4. Activate the workflow
5. Start chatting!

Customize

- Change AI model → Edit the "OpenAI Document Analyzer" node (switch to gpt-4o-mini for cost savings)
- Adjust content length → Modify maxLength in the "Process Document Content" node (default: 15000 chars)
- Add file types → Update the supportedTypes array in the "Parse Document & Question" node
- Increase timeout → Change the timeout value in the "Fetch Web Content" node (default: 30s)
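The input convention above implies a small parsing step. Here is a sketch of how the "Parse Document & Question" node could split and validate the message; the chatInput field is n8n's chat-trigger default, and the rest is illustrative rather than the template's exact code.

```javascript
// Sketch of parsing "path_or_url | your_question" from a chat message.
const input = $json.chatInput || '';
const sep = input.indexOf('|');
if (sep === -1) throw new Error('Expected format: path_or_url | your_question');

const source = input.slice(0, sep).trim();
const question = input.slice(sep + 1).trim();
const isUrl = /^https?:\/\//i.test(source);

// Reject local files with unsupported extensions (list is illustrative)
const supportedTypes = ['pdf', 'md', 'txt', 'json', 'yaml', 'yml', 'docx'];
const ext = source.split('.').pop().toLowerCase();
if (!isUrl && !supportedTypes.includes(ext)) {
  throw new Error(`Unsupported file type: ${ext}`);
}

return [{ json: { source, question, isUrl } }];
```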
by Parag Javale
The AI Blog Creator with Gemini, Replicate Image, Supabase Publishing & Slack is a fully automated content generation and publishing workflow designed for modern marketing and SaaS teams. It automatically fetches the latest industry trends, generates SEO-optimized blogs using AI, creates a relevant featured image, publishes the post to your CMS (e.g., Supabase or a custom API), and notifies your team via Slack, all on a daily schedule.

This workflow connects multiple services (NewsAPI, Google Gemini, Replicate, Supabase, and Slack) into one intelligent content pipeline that runs hands-free once set up.

✨ Features

- 📰 **Fetch Trending Topics** — pulls the latest news or updates from your selected industry (via NewsAPI).
- 🤖 **AI Topic Generation** — Gemini suggests trending blog topics relevant to AI, SaaS, and Automation.
- 📝 **AI Blog Authoring** — Gemini then writes a full 1200–1500 word SEO-optimized article in Markdown.
- 🧹 **Smart JSON Cleaner** — A resilient code node parses Gemini’s output and ensures clean, structured data (see the sketch after this section).
- 🖼️ **Auto-Generated Image** — Replicate’s Ideogram model creates a blog cover image based on the content prompt.
- 🌐 **Automatic Publishing** — Posts are automatically published to your Supabase or custom backend.
- 💬 **Slack Notification** — Notifies your team with blog details and the live URL.
- ⏰ **Fully Scheduled** — Runs automatically every day at your preferred time (default 10 AM IST).

⚙️ Workflow Structure

| Step | Node | Purpose |
|------|------|---------|
| 1 | Schedule Trigger | Runs daily at 10 AM |
| 2 | Fetch Industry Trends (NewsAPI) | Retrieves trending articles |
| 3 | Message a model (Gemini) | Generates trending topic ideas |
| 4 | Message a model1 (Gemini) | Writes full SEO blog content |
| 5 | Code in JavaScript | Cleans, validates, and normalizes Gemini output |
| 6 | HTTP Request (Replicate) | Generates an image using Ideogram |
| 7 | HTTP Request1 | Retrieves the generated image URL |
| 8 | Wait + If | Polls until image generation succeeds |
| 9 | Edit Fields | Assembles blog fields into the final JSON |
| 10 | Publish to Supabase | Posts to your CMS |
| 11 | Slack Notification | Sends a message to your Slack channel |

🔧 Setup Instructions

1. Import the workflow in n8n and enable it.
2. Create the following credentials:
   - NewsAPI (Query Auth) — from https://newsapi.org
   - Google Gemini (PaLM API) — use your Gemini API key
   - Replicate (Bearer Auth) — API key from https://replicate.com/account
   - Supabase (Header Auth) — endpoint to your /functions/v1/blog-api (set your key in the header)
   - Slack API — create a Slack App token with chat:write permission
3. Edit the NewsAPI URL query parameter to match your industry (e.g., q=AI automation SaaS).
4. Update the Supabase publish URL to your project endpoint if needed.
5. Adjust the Slack channel name under “Slack Notification”.
6. (Optional) Change the Schedule Trigger time as per your timezone.

💡 Notes & Tips

- The Code in JavaScript node is robust against malformed or extra text in Gemini output — it sanitizes Markdown and reconstructs clean JSON safely.
- You can replace Supabase with any CMS or webhook endpoint by editing the “Publish to Supabase” node.
- The Replicate model used is ideogram-ai/ideogram-v3-turbo — you can swap it with Stable Diffusion or another model for different aesthetics.
- Use the slug field in your blog URLs for SEO-friendly links.
- Test with one manual execution before activating scheduled runs.
- If the Slack notification fails, verify the token scopes and channel permissions.
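To make the "Smart JSON Cleaner" concrete, here is a simplified sketch of the kind of sanitizing the Code in JavaScript node performs; the output field names are assumptions, not the template's exact schema.

```javascript
// Sketch: strip Markdown fences and stray prose around Gemini's output,
// then parse the outermost JSON object.
const raw = $json.output || $json.text || '';

// Remove ```json ... ``` fences if present
let cleaned = raw.replace(/```(?:json)?/gi, '').trim();

// Grab the outermost {...} block in case extra text surrounds it
const start = cleaned.indexOf('{');
const end = cleaned.lastIndexOf('}');
if (start === -1 || end === -1) throw new Error('No JSON object found in model output');
cleaned = cleaned.slice(start, end + 1);

let blog;
try {
  blog = JSON.parse(cleaned);
} catch (e) {
  // Common fixup: remove trailing commas before } or ]
  blog = JSON.parse(cleaned.replace(/,\s*([}\]])/g, '$1'));
}

// Normalize the fields downstream nodes expect (names are assumptions)
return [{ json: {
  title: blog.title || 'Untitled',
  slug: (blog.slug || blog.title || 'post').toLowerCase().replace(/[^a-z0-9]+/g, '-'),
  content: blog.content || '',
  imagePrompt: blog.image_prompt || '',
} }];
```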
🧩 Tags #AI #Automation #ContentMarketing #BlogGenerator #n8n #Supabase #Gemini #Replicate #Slack #WorkflowAutomation
by Jaruphat J.
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package, pdfseparate from poppler-utils, and custom command execution. Make sure to install all required dependencies locally.

Who is this for?

This template is designed for developers, back-office teams, and automation builders (especially in Thailand or Thai-speaking environments) who need to process multi-file, multi-page Thai PDFs and automatically export structured results to Google Sheets. It is ideal for:

- Government and enterprise document processing
- Thai-language invoices, memos, and official letters
- AI-powered automation pipelines that require Thai OCR

What problem does this solve?

Typhoon OCR is one of the most accurate OCR tools for Thai text, but integrating it into an end-to-end workflow usually requires manual scripting and handling multi-page PDFs. This template solves that by:

- Splitting PDFs into individual pages
- Running Typhoon OCR on each page
- Aggregating text back into a single file
- Using AI to extract structured fields
- Automatically saving structured data into Google Sheets

What this workflow does

- **Trigger:** Manual execution or any n8n trigger node
- **Load Files:** Read PDFs from a local doc/multipage folder
- **Split PDF Pages:** Use pdfinfo and pdfseparate to break PDFs into pages (see the sketch after this section)
- **Typhoon OCR:** Run OCR on each page via Execute Command
- **Aggregate:** Combine per-page OCR text
- **LLM Extraction:** Use AI (e.g., GPT-4, OpenRouter) to extract fields into JSON
- **Parse JSON:** Convert structured JSON into a tabular format
- **Google Sheets:** Append one row per file into a Google Sheet
- **Cleanup:** Delete temp split pages and move processed PDFs into a Completed folder

Setup

Install Requirements

- Python 3.10+
- typhoon-ocr: pip install typhoon-ocr
- poppler-utils: provides pdfinfo, pdfseparate
- qpdf: backup page counting

Create folders

- /doc/multipage for incoming files
- /doc/tmp for split pages
- /doc/multipage/Completed for processed files

Google Sheet

Create a Google Sheet with column headers like:

book_id | date | subject | to | attach | detail | signed_by | signed_by2 | contact_phone | contact_email | contact_fax | download_url

API Keys

Export your TYPHOON_OCR_API_KEY and OPENAI_API_KEY (or use credentials in n8n).

How to customize this workflow

- Replace the LLM provider in the “Structure Text to JSON with LLM” node (supports OpenRouter, OpenAI, etc.)
- Adjust the JSON schema and parsing logic to match your documents
- Update the Google Sheets mapping to fit your desired fields
- Add trigger nodes (Dropbox, Google Drive, Webhook) to automate file ingestion

About Typhoon OCR

Typhoon is a multilingual LLM and NLP toolkit optimized for Thai. It includes typhoon-ocr, a Python OCR package designed for Thai-centric documents. It is open source, highly accurate, and works well in automation pipelines. Perfect for government paperwork, PDF reports, and multi-language documents in Southeast Asia.

Deployment Option

You can also deploy this workflow easily using the Docker image provided in my GitHub repository: https://github.com/Jaruphat/n8n-ffmpeg-typhoon-ollama

This Docker setup already includes n8n, ffmpeg, Typhoon OCR, and Ollama combined, so you can run the whole environment without installing each dependency manually.
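For orientation, here is a sketch of the page-splitting step as it could run from an n8n Code node on a self-hosted instance (built-in Node modules must be allowed via NODE_FUNCTION_ALLOW_BUILTIN); the filePath input field is a hypothetical name.

```javascript
// Sketch of splitting a PDF into per-page files with poppler-utils,
// following the folder layout described above.
const { execSync } = require('child_process');
const path = require('path');

const pdf = $json.filePath; // e.g. /doc/multipage/letter.pdf (assumed field)
const tmpDir = '/doc/tmp';

// 1. Count pages with pdfinfo
const info = execSync(`pdfinfo "${pdf}"`).toString();
const pages = parseInt(info.match(/^Pages:\s+(\d+)/m)[1], 10);

// 2. Split into one file per page with pdfseparate (%d = page number)
const stem = path.basename(pdf, '.pdf');
execSync(`pdfseparate "${pdf}" "${tmpDir}/${stem}-%d.pdf"`);

// 3. Emit one n8n item per page for the OCR loop
return Array.from({ length: pages }, (_, i) => ({
  json: { page: i + 1, pagePath: `${tmpDir}/${stem}-${i + 1}.pdf`, source: pdf },
}));
```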
by Nasser
Who’s it for?

- Content Creators
- E-commerce Stores
- Marketing Teams

Description:

Generate unique UGC images for your products. Simply upload a product image into a Google Drive folder, and the workflow will instantly generate 50 unique, high-quality AI UGC images using Nano Banana via Fal.ai. All results are automatically saved back into the same folder, ready to use across social media, e-commerce stores, and marketing campaigns.

How it works?

📺 YouTube Video Tutorial

1. Trigger: Upload a new product image (with a white background) to a folder in your Google Drive
2. Generate 50 different image prompts for your product
3. Loop over each generated prompt
4. Generate UGC content with Fal.ai (Nano Banana), as sketched after this section
5. Upload the UGC content to the initial Google Drive folder

Cost: $0.039 / image

How to set up?

1. Accounts & APIs

In the Edit Field "Setup" node, replace all [YOUR_API_TOKEN] placeholders with your API token:

- Fal.ai (gemini-25-flash-image/edit): https://fal.ai/models/fal-ai/gemini-25-flash-image/edit/api

In Credentials on your n8n dashboard, connect the following accounts using Client ID / Secret:

- Google Drive: https://docs.n8n.io/integrations/builtin/credentials/google/

2. Requirements

- The base image of your product should preferably have a white background
- Your Google Drive folder and every file it contains should be publicly available

3. Customizations

- Change the total amount of UGC generated: In Generate Prompts → Message → "Your task is to generate 50"
- Modify the instructions used to generate the UGC prompts: In Generate Prompts → Message
- Change the number of base images: In Generate Image → Body Parameters → JSON → image_urls
- Change the amount of UGC generated per prompt: In Generate Image → Body Parameters → JSON → num_images
- Modify the folder where generated UGC is stored: In Upload File → Parent Folder
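For orientation, step 4's HTTP call looks roughly like the sketch below. The image_urls and num_images body fields come from the customization notes above; the fal.run host and "Key" authorization scheme are assumptions to check against the Fal.ai API page linked above.

```javascript
// Sketch of the Fal.ai (Nano Banana) image-edit call from an n8n Code node.
const prompt = $json.ugcPrompt; // one of the 50 generated prompts (assumed field)

const res = await fetch('https://fal.run/fal-ai/gemini-25-flash-image/edit', {
  method: 'POST',
  headers: {
    Authorization: `Key ${process.env.FAL_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    prompt,
    image_urls: [$json.productImageUrl], // public Google Drive link to the base image
    num_images: 1,                       // UGC images per prompt
  }),
});

// Response shape assumed: { images: [{ url: "..." }, ...] }
const { images } = await res.json();
return images.map((img) => ({ json: { url: img.url, prompt } }));
```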
by Gegenfeld
AI Background Removal Workflow

This workflow automatically removes backgrounds from images stored in Airtable using the APImage API 🡥, then downloads and saves the processed images to Google Drive. Perfect for batch processing product photos, portraits, or any images that need clean, transparent backgrounds. The source (Airtable) and the storage (Google Drive) can be changed to any service or database you want/use.

🧩 Nodes Overview

1. Remove Background (Manual Trigger)

This manual trigger starts the background removal process when clicked.

Customization Options:
- Replace with a Schedule Trigger for automatic daily/weekly processing
- Replace with a Webhook Trigger to start via API calls
- Replace with a File Trigger to process when new files are added

2. Get a Record (Airtable)

Retrieves media files from your Airtable "Creatives Library" database.
- Connects to the "Media Files" table in your Airtable base
- Fetches records containing image thumbnails for processing
- Returns all matching records with their thumbnail URLs and metadata

Required Airtable Structure:
- Table with an image/attachment field (currently expects a "Thumbnail" field)
- Optional fields: File Name, Media Type, Upload Date, File Size

Customization Options:
- Replace with Google Sheets, Notion, or any database node
- Add filters to process only specific records
- Point to different tables with image URLs

3. Code (JavaScript Processing)

Processes Airtable records and prepares thumbnail data for background removal.
- Extracts thumbnail URLs from each record
- Chooses the best-quality thumbnail (large > full > original)
- Creates clean filenames by removing special characters
- Adds processing metadata and timestamps

Key Features:

```javascript
// Selects best thumbnail quality
if (thumbnail.thumbnails?.large?.url) {
  thumbnailUrl = thumbnail.thumbnails.large.url;
}

// Creates clean filename (regex reconstructed: replaces special characters with "_")
cleanFileName: (record.fields['File Name'] || 'unknown')
  .replace(/[^a-z0-9]/gi, '_')
  .toLowerCase()
```

Easy Customization for Different Databases:
- **Product Database**: Change field mappings to 'Product Name', 'SKU', 'Category'
- **Portfolio Database**: Use 'Project Name', 'Client', 'Tags'
- **Employee Database**: Use 'Full Name', 'Department', 'Position'

4. Split Out

Converts the array of thumbnails into individual items for parallel processing.
- Enables processing multiple images simultaneously
- Each item contains all thumbnail metadata for downstream nodes

5. APImage API (HTTP Request)

Calls the APImage service to remove backgrounds from images.

API Endpoint: POST https://apimage.org/api/ai-remove-background

Request Configuration:
- **Header**: Authorization: Bearer YOUR_API_KEY
- **Body**: image_url: {{ $json.originalThumbnailUrl }}

✅ Setup Required:
- Replace YOUR_API_KEY with your actual API key
- Get your key from the APImage Dashboard 🡥

6. Download (HTTP Request)

Downloads the processed image from APImage's servers using the returned URL.
- Fetches the background-removed image file
- Prepares image data for upload to storage

7. Upload File (Google Drive)

Saves processed images to your Google Drive in a "bg_removal" folder.
Customization Options:
- Replace with Dropbox, OneDrive, AWS S3, or FTP upload
- Create date-based folder structures
- Use dynamic filenames with metadata
- Upload to multiple destinations simultaneously

✨ How To Get Started

1. Set up the APImage API:
   - Double-click the APImage API node
   - Replace YOUR_API_KEY with your actual API key
   - Keep the Bearer prefix
2. Configure Airtable:
   - Ensure your Airtable has a table with image attachments
   - Update field names in the Code node if they differ from the defaults
3. Test the workflow:
   - Click the Remove Background trigger node
   - Verify images are processed and uploaded successfully

🔗 Get your API Key 🡥

🔧 How to Customize

Input Customization (Left Section)

Replace the Airtable integration with any data source containing image URLs:
- **Google Sheets** with product catalogs
- **Notion** databases with image galleries
- **Webhooks** from external systems
- **File system** monitoring for new uploads
- **Database** queries for image records

Output Customization (Right Section)

Modify where processed images are stored:
- **Multiple Storage**: Upload to Google Drive + Dropbox simultaneously
- **Database Updates**: Update original records with processed image URLs
- **Email/Slack**: Send processed images via communication tools
- **Website Integration**: Upload directly to WordPress, Shopify, etc.

Processing Customization

- **Batch Processing**: Limit concurrent API calls
- **Quality Control**: Add image validation before/after processing
- **Format Conversion**: Use a Sharp node for resizing or format changes
- **Metadata Preservation**: Extract and maintain EXIF data

📋 Workflow Connections

Remove Background → Get a Record → Code → Split Out → APImage API → Download → Upload File

🎯 Perfect For

- **E-commerce**: Batch process product photos for clean, professional listings
- **Marketing Teams**: Remove backgrounds from brand assets and imagery
- **Photographers**: Automate background removal for portrait sessions
- **Content Creators**: Prepare images for presentations and social media
- **Design Agencies**: Streamline asset preparation workflows

📚 Resources

- APImage API Documentation 🡥
- Airtable API Reference 🡥
- n8n Documentation 🡥

⚡ Processing Speed: Handles multiple images in parallel for fast batch processing
🔒 Secure: API keys stored safely in n8n credentials
🔄 Reliable: Built-in error handling and retry mechanisms
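A condensed sketch of what nodes 5 and 6 do together (API call, then download), written as n8n Code-node JavaScript. The endpoint, header, and image_url body field are documented above; the response field holding the processed image URL is an assumption, so check APImage's actual response shape.

```javascript
// Sketch: call APImage to remove the background, then download the result.
const res = await fetch('https://apimage.org/api/ai-remove-background', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.APIMAGE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ image_url: $json.originalThumbnailUrl }),
});
const result = await res.json();

// Assumed: the API returns a URL pointing at the processed image
const processedUrl = result.url || result.image_url;

// Download the processed image and hand it on as binary data
const img = await fetch(processedUrl);
const base64 = Buffer.from(await img.arrayBuffer()).toString('base64');
return [{
  json: { processedUrl },
  binary: { data: { data: base64, mimeType: 'image/png', fileName: `${$json.cleanFileName}.png` } },
}];
```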
by n8n Automation Expert | Template Creator | 2+ Years Experience
Description

🎯 Overview

An advanced automated trading bot that implements ICT (Inner Circle Trader) methodology and Smart Money Concepts for cryptocurrency trading. This workflow combines AI-powered market analysis with automated trade execution through the Coinbase Advanced Trading API.

⚡ Key Features

📊 ICT Trading Strategy Implementation

- **Kill Zone Detection**: Automatically identifies optimal trading sessions (Asian, London, New York kill zones)
- **Smart Money Concepts**: Analyzes market structure breaks, liquidity grabs, fair value gaps, and order blocks
- **Session Validation**: Real-time GMT time tracking with session strength calculations
- **Structure Analysis**: Detects BOS (Break of Structure) and CHOCH (Change of Character) patterns

🤖 AI-Powered Analysis

- **GPT-4 Integration**: Advanced market analysis using OpenAI's latest model
- **Confidence Scoring**: AI generates confidence scores (0-100) for each trading signal
- **Risk Assessment**: Automated risk level evaluation (LOW/MEDIUM/HIGH)
- **ICT-Specific Prompts**: Custom prompts designed for Inner Circle Trader methodology

🔄 Automated Trading Flow

1. Signal Reception: Receives trading signals via Telegram webhook
2. Data Extraction: Parses symbol, action, price, and technical indicators
3. Session Validation: Verifies the current kill zone and trading session strength
4. Market Data: Fetches real-time data from the Coinbase Advanced Trading API
5. AI Analysis: Processes signals through GPT-4 with ICT-specific analysis
6. Quality Filter: Multi-condition filtering based on confidence, session, and structure
7. Trade Execution: Automated order placement through the Coinbase API
8. Documentation: Records all trades and rejections in Notion databases

📱 Multi-Platform Integration

- **Telegram Bot**: Receives signals and sends formatted notifications
- **Coinbase Advanced**: Real-time market data and trade execution
- **Notion Database**: Comprehensive trade logging and analysis tracking
- **Webhook Support**: External system integration capabilities

🛠️ Setup Requirements

API Credentials Needed:

- **Coinbase Advanced Trading API** (API Key, Secret, Passphrase)
- **OpenAI API Key** (GPT-4 access)
- **Telegram Bot Token** and Chat ID
- **Notion Integration** (Database IDs for trade records)

Environment Variables:

TELEGRAM_CHAT_ID=your_chat_id
NOTION_TRADING_DB_ID=your_trading_database_id
NOTION_REJECTED_DB_ID=your_rejected_signals_database_id
WEBHOOK_URL=your_external_webhook_url

📈 Trading Logic

Kill Zone Priority System:

- **London & New York Sessions**: HIGH priority (0.9 strength)
- **Asian & London Close**: MEDIUM priority (0.6 strength)
- **Off Hours**: LOW priority (0.1 strength)

Signal Validation Criteria (sketched after this section):

- Signal quality must not be "LOW"
- Confidence score ≥ 60%
- Active kill zone session required
- ICT structure alignment confirmed

🎛️ Workflow Components

1. Extract ICT Signal Data: Parses incoming Telegram messages for trading signals
2. ICT Session Validator: Determines the current kill zone and session strength
3. Get Coinbase Market Data: Fetches real-time cryptocurrency data
4. ICT AI Analysis: GPT-4 powered analysis with ICT methodology
5. Parse ICT AI Analysis: Processes the AI response with fallback mechanisms
6. ICT Quality & Session Filter: Multi-condition signal validation
7. Execute ICT Trade: Automated trade execution via the Coinbase API
8. Create ICT Trading Record: Logs successful trades to Notion
9. Generate ICT Notification: Creates formatted Telegram alerts
10. Log ICT Rejected Signal: Records filtered signals for analysis

🚀 Use Cases

- Automated ICT-based cryptocurrency trading
- Smart Money Concepts implementation
- Kill zone session trading
- AI-enhanced market structure analysis
- Professional trading documentation and tracking

⚠️ Risk Management

- Built-in session validation prevents off-hours trading
- AI confidence scoring filters low-quality signals
- Comprehensive logging for performance analysis
- Automated stop-loss and take-profit calculations

This workflow is perfect for traders familiar with ICT methodology who want to automate their Smart Money Concepts trading strategy with AI-enhanced decision making.
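A simplified sketch of the session validation and quality gate described above. The strength scores mirror the priority system; the exact kill-zone hour windows are illustrative assumptions, not the template's values.

```javascript
// Sketch of the "ICT Session Validator" + quality filter for an n8n Code node.
function validateSession(date = new Date()) {
  const hour = date.getUTCHours(); // GMT, as the template tracks sessions in GMT
  // Illustrative kill-zone windows (GMT); adjust to your preferred ICT times
  if (hour >= 7 && hour < 10)  return { session: 'London',       priority: 'HIGH',   strength: 0.9 };
  if (hour >= 12 && hour < 15) return { session: 'New York',     priority: 'HIGH',   strength: 0.9 };
  if (hour >= 0 && hour < 3)   return { session: 'Asian',        priority: 'MEDIUM', strength: 0.6 };
  if (hour >= 15 && hour < 17) return { session: 'London Close', priority: 'MEDIUM', strength: 0.6 };
  return { session: 'Off Hours', priority: 'LOW', strength: 0.1 };
}

// Gate a signal the way the quality filter does:
// quality not "LOW", confidence >= 60, and an active kill zone
const session = validateSession();
const confidence = $json.confidence ?? 0;
const tradeable = $json.quality !== 'LOW' && confidence >= 60 && session.strength > 0.1;

return [{ json: { ...session, confidence, tradeable } }];
```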
by Yaron Been
Comprehensive SEO Strategy with O3 Director & GPT-4 Specialist Team

1. Trigger

When a chat message is received → the user submits an SEO request (e.g., “Help me rank for project management software”). The message goes straight to the SEO Director Agent.

2. SEO Director Agent (O3)

- Acts as the head of SEO strategy.
- Uses the Think node to plan and decide which specialists to call.
- Delegates tasks to the relevant agents.

3. Specialist Agents (GPT-4.1-mini)

Each agent has its own OpenAI model connection for lightweight, cost-efficient execution. Tasks include:

- Keyword Research Specialist → Keyword discovery, clustering, competitor analysis.
- SEO Content Writer → Generates optimized blog posts, landing pages, etc.
- Technical SEO Specialist → Site audits, schema markup, crawling fixes.
- Link Building Strategist → Backlink strategies, outreach campaign ideas.
- Local SEO Specialist → Local citations, GMB optimization, geo-content.
- Analytics Specialist → Reports, performance insights, ranking metrics.

4. Feedback Loop

- Each agent sends results back to the SEO Director.
- The Director compiles the insights into a comprehensive SEO campaign plan (see the sketch after this section).

✅ Why This Setup Works Well

- **O3 Model for Director** → Handles reasoning-heavy orchestration (strategy, delegation).
- **GPT-4.1-mini for Specialists** → Cheap, fast, task-specific execution.
- **Parallel Execution** → All specialists can run at the same time.
- **Scalable & Modular** → You can add/remove agents depending on campaign needs.
- **Sticky Notes** → Already document the workflow (great for onboarding & sharing).
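The n8n template wires this visually with AI Agent nodes, but the underlying pattern is easy to see in plain code. Below is a hedged sketch of the director/specialist fan-out; the model names come from the description above, while the prompts and call shape are illustrative, not the template's actual configuration.

```javascript
// Illustrative sketch of the director -> specialists -> compile pattern.
const SPECIALISTS = {
  keywords:  'You are a Keyword Research Specialist. Find and cluster keywords.',
  content:   'You are an SEO Content Writer. Draft optimized content.',
  technical: 'You are a Technical SEO Specialist. Audit and fix crawl issues.',
  links:     'You are a Link Building Strategist. Propose outreach campaigns.',
  local:     'You are a Local SEO Specialist. Optimize citations and GMB.',
  analytics: 'You are an Analytics Specialist. Report rankings and performance.',
};

async function askModel(model, system, user) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'system', content: system }, { role: 'user', content: user }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}

async function runCampaign(request) {
  // Director (reasoning model) decides which specialists to involve
  const plan = await askModel(
    'o3',
    `You are the SEO Director. Reply with a JSON array of specialist keys from: ${Object.keys(SPECIALISTS).join(', ')}.`,
    request,
  );
  const chosen = JSON.parse(plan);

  // Specialists (cheaper model) execute in parallel
  const results = await Promise.all(
    chosen.map((k) => askModel('gpt-4.1-mini', SPECIALISTS[k], request)),
  );

  // Director compiles everything into one campaign plan
  return askModel('o3', 'Compile these specialist reports into one SEO campaign plan.', results.join('\n\n'));
}
```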
by Avkash Kakdiya
How it works

This workflow starts whenever a new domain is added to a Google Sheet. It cleans the domain, fetches traffic insights from SimilarWeb, extracts the most relevant metrics, and updates the sheet with enriched data. Optionally, it can also send this information to Airtable for further tracking or analysis.

Step-by-step

1. Trigger on New Domain
   - Workflow starts when a new row is added in the Google Sheet.
   - Captures the raw URL/domain entered by the user.
2. Clean Domain URL (see the sketch after this section)
   - Strips unnecessary parts like http://, https://, www., and trailing slashes.
   - Stores a clean domain format (e.g., example.com) along with the row number.
3. Fetch Website Analysis
   - Uses the SimilarWeb API to pull traffic and engagement insights for the domain.
   - Data includes global rank, country rank, category rank, total visits, bounce rate, and more.
4. Extract Key Metrics
   - Processes raw SimilarWeb data into a simplified structure.
   - Extracted insights include:
     - Ranks: Global, Country, and Category.
     - Traffic Overview: Total Visits, Bounce Rate, Pages per Visit, Avg Visit Duration.
     - Top Traffic Sources: Direct, Search, Social.
     - Top Countries (Top 3): With traffic share percentages.
     - Device Split: Mobile vs Desktop.
5. Update Google Sheet
   - Writes the cleaned and enriched domain data back into the same (or another) Google Sheet.
   - Ensures each row is updated with the new traffic insights.
6. Export to Airtable (Optional)
   - Creates a new record in Airtable with the enriched traffic metrics.
   - Useful if you want to manage or visualize company/domain data outside of Google Sheets.

Why use this?

- Automatically enriches domain lists with live traffic data from SimilarWeb.
- Cleans messy URLs into a standard format.
- Saves hours of manual research on company traffic insights.
- Provides structured, comparable metrics for better decision-making.
- Flexible: update sheets, export to Airtable, or both.
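The cleaning step is simple enough to show in full. Here is a sketch of the "Clean Domain URL" logic; the url and row_number input fields are assumptions based on typical Google Sheets trigger output.

```javascript
// Sketch: strip protocol, www, paths, queries, and trailing slashes
// so raw entries normalize to bare domains.
function cleanDomain(raw) {
  return raw
    .trim()
    .toLowerCase()
    .replace(/^https?:\/\//, '') // drop http:// or https://
    .replace(/^www\./, '')       // drop leading www.
    .split('/')[0]               // drop any path or trailing slash
    .split('?')[0];              // drop query strings
}

// e.g. "https://www.Example.com/pricing/" -> "example.com"
return [{ json: { domain: cleanDomain($json.url), row: $json.row_number } }];
```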
by Paolo Ronco
Amazon Luna Prime Games Catalog Tracker (Auto-Sync to Google Sheets)

Automatically fetch, organize, and maintain an updated catalog of Amazon Luna “Included with Prime” games. This workflow regularly queries Amazon’s official Luna endpoint, extracts complete metadata, and syncs everything into Google Sheets without duplicates.

Ideal for:

- tracking monthly Prime Luna rotations
- keeping a personal archive of games
- monitoring new games appearing on Amazon Games / Prime Gaming, so you can instantly play titles you’re interested in
- building dashboards or gaming databases
- powering notification systems (Discord, Telegram, email, etc.)

Overview

Amazon Luna’s “Included with Prime” lineup changes frequently, with new games added and old ones removed. Instead of checking manually, this n8n template fully automates the process:

- Fetches the latest list from Amazon’s backend
- Extracts detailed metadata from the response
- Syncs the data into Google Sheets
- Avoids duplicates by updating existing rows
- Supports all major Amazon regions

Once configured, it runs automatically, keeping your game catalog correct, clean, and always up to date.

🛠️ How the workflow works

1. Scheduled Trigger
   Starts the workflow on a set schedule (default: every 5 days at 3:00 PM). You can change both frequency and time freely.
2. HTTP Request to Amazon Luna
   Calls Amazon Luna’s regional endpoint and retrieves the full “Included with Prime” catalog.
3. JavaScript Code Node – Data Extraction
   Parses the JSON response and extracts structured fields: Title, Genres, Release Year, ASIN, Image URLs, and additional metadata. The result is a clean, ready-to-use dataset (see the sketch after this section).
4. Google Sheets – Insert or Update Rows
   Each game is written into the selected Google Sheet: existing games get updated, new games are appended. The Title acts as the unique identifier to prevent duplicates.

## ⚙️ Configuration Parameters

| Parameter | Description | Recommended values |
| --- | --- | --- |
| x-amz-locale | Language + region | it_IT 🇮🇹 · en_US 🇺🇸 · de_DE 🇩🇪 · fr_FR 🇫🇷 · es_ES 🇪🇸 · en_GB 🇬🇧 · ja_JP 🇯🇵 · en_CA 🇨🇦 |
| x-amz-marketplace-id | Marketplace backend ID | APJ6JRA9NG5V4 🇮🇹 · ATVPDKIKX0DER 🇺🇸 · A1PA6795UKMFR9 🇩🇪 · A13V1IB3VIYZZH 🇫🇷 · A1RKKUPIHCS9HS 🇪🇸 · A1F83G8C2ARO7P 🇬🇧 · A1VC38T7YXB528 🇯🇵 · A2EUQ1WTGCTBG2 🇨🇦 |
| Accept-Language | Response language | Example: it-IT,it;q=0.9,en;q=0.8 |
| User-Agent | Browser-like request | Default or updated UA |
| Trigger interval | Refresh frequency | Every 5 days at 3:00 PM (modifiable) |
| Google Sheet | Storage output | Select your file + sheet |

You can adapt these headers to fetch data from any supported country.

💡 Tips & Customization

🌍 Regional catalogs
Duplicate the HTTP Request + Code + Sheet block to track multiple countries (US, DE, JP, UK…).

🧹 No duplicates
The workflow updates rows intelligently, ensuring a clean catalog even after many runs.

🗂️ Move data anywhere
Send the output to:

- Airtable
- Databases (MySQL, Postgres, MongoDB…)
- Notion
- CSV
- REST APIs
- BI dashboards

🔔 Add notifications (Discord, Telegram, Email, etc.)
You can pair this template with a notification workflow. When used with Discord, the notification message can include:

- the game title
- description or metadata
- **the game’s image**, automatically downloaded and attached

This makes notifications visually informative and perfect for tracking new Prime titles.

🔒 Important Notes

All retrieved data belongs to Amazon. The workflow is intended for personal, testing, or educational use only. Do not republish or redistribute collected data without permission.
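To give a feel for step 3, here is a hedged sketch of the extraction Code node. The JSON paths into the Luna response are assumptions; inspect the actual payload from your HTTP Request node and adjust them.

```javascript
// Sketch: pull title/genre/year/ASIN/image fields out of the Luna response.
// The response shape below is assumed, not Amazon's documented schema.
const body = $json; // raw response from the Amazon Luna endpoint
const games = (body.widgets?.[0]?.items || []).map((item) => ({
  title: item.title,
  genres: (item.genres || []).join(', '),
  releaseYear: item.releaseYear,
  asin: item.asin,
  imageUrl: item.image?.url,
}));

// One n8n item per game, keyed on Title for the Sheets insert-or-update step
return games.map((g) => ({ json: g }));
```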