by Ranjan Dailata
This workflow automates AI-powered search insights by combining SE Ranking AI Search data with OpenAI summarization. It starts with a manual trigger and fetches time-series AI visibility data via the SE Ranking API. The response is summarized with OpenAI to produce both detailed and concise insights. The workflow enriches the original metrics with these AI-generated summaries and exports the final structured JSON to disk, ready for reporting, analytics, or further automation.

## Who this is for

This workflow is designed for:

- **SEO professionals & growth marketers** tracking AI search visibility
- **Content strategists** analyzing how brands appear in AI-powered search results
- **Data & automation engineers** building SEO intelligence pipelines
- **Agencies** producing automated search performance reports for clients

## What problem is this workflow solving?

SE Ranking's AI Search API provides rich but highly technical time-series data. While powerful, this data:

- Is difficult to interpret quickly
- Requires manual analysis to extract insights
- Is not presentation-ready for reports or stakeholders

This workflow solves that by automatically transforming raw AI search metrics into clear, structured summaries, saving time and reducing analysis friction.

## What this workflow does

At a high level, the workflow:

1. Accepts input parameters such as target domain, AI engine, and region
2. Fetches AI search visibility time-series data from SE Ranking
3. Uses OpenAI GPT-4.1-mini to generate:
   - A comprehensive summary
   - A concise abstract summary
4. Enriches the original dataset with AI-generated insights
5. Exports the final structured JSON to disk for:
   - Reporting
   - Dashboards
   - Further automation or analytics

## Setup

### Prerequisites

- **n8n (self-hosted or cloud)**
- **SE Ranking API access**
- **OpenAI API key**

### Setup steps

1. If you are new to SE Ranking, sign up at seranking.com
2. Import the workflow JSON into n8n
3. Configure credentials:
   - SE Ranking, using HTTP Header Authentication. The header value should contain `Token`, followed by a space, followed by your SE Ranking API key.
   - OpenAI, for GPT-4.1-mini
4. Open **Set the Input Fields** and update:
   - `target_site` (e.g., your domain)
   - `engine` (e.g., ai-overview)
   - `source` (e.g., us, uk, in)
5. Verify the file path in **Write File to Disk**
6. Click **Execute Workflow**

## How to customize this workflow to your needs

You can easily extend or tailor this workflow:

- **Change analysis scope**: Update the domain, region, or AI engine
- **Modify AI outputs**: Adjust prompts or the output schema for insights like trends, risks, or recommendations
- **Replace storage**: Send output to:
  - Google Sheets
  - Databases
  - S3 / cloud storage
  - Webhooks or BI tools
- **Automate monitoring**: Add a Cron trigger to run daily, weekly, or monthly

## Summary

This workflow turns raw SE Ranking AI Search data into clear, executive-ready insights using OpenAI GPT-4.1-mini. By combining automated data collection with AI summarization, it enables faster decision-making, better reporting, and scalable SEO intelligence without manual analysis.
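As a rough illustration of the header authentication described above, here is a sketch of the request an HTTP Request node would send. The endpoint path and query parameter names are assumptions for illustration only; check the SE Ranking API documentation for the real ones. Only the `Token <key>` header format comes from the setup instructions.

```javascript
// Hypothetical request builder for the SE Ranking call.
// The URL path and query-string names are placeholders, NOT the real API.
const API_KEY = process.env.SERANKING_API_KEY || "demo-key";

function buildRequest(targetSite, engine, source) {
  return {
    url: "https://api.seranking.com/v1/ai-search/visibility", // assumed path
    method: "GET",
    headers: {
      // Header auth: the word "Token", a space, then the API key
      Authorization: `Token ${API_KEY}`,
    },
    qs: { target_site: targetSite, engine, source },
  };
}

const req = buildRequest("example.com", "ai-overview", "us");
console.log(req.headers.Authorization);
```

In the n8n node you would configure the same header under HTTP Header Authentication rather than building it in code.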
by Onur
## Yelp Business Scraper by URL via Scrape.do API with Google Sheets Storage

### Overview

This n8n workflow automates the process of scraping comprehensive business information from Yelp using individual business URLs. It integrates with Scrape.do for professional web scraping with anti-bot bypass capabilities and Google Sheets for centralized data storage, providing detailed business intelligence for market research, competitor analysis, and lead generation.

### Workflow Components

#### 1. 📥 Form Trigger

| Property | Value |
|----------|-------|
| Type | Form Trigger |
| Purpose | Initiates the workflow with a user-submitted Yelp business URL |
| Input Fields | Yelp Business URL |
| Function | Captures the target business URL to start the scraping process |

#### 2. 🔍 Create Scrape.do Job

| Property | Value |
|----------|-------|
| Type | HTTP Request (POST) |
| Purpose | Creates an async scraping job via the Scrape.do API |
| Endpoint | https://q.scrape.do/api/v1/jobs |
| Authentication | X-Token header |

**Request parameters:**

- **Targets**: Array containing the Yelp business URL
- **Super**: true (uses residential/mobile proxies for a better success rate)
- **GeoCode**: us (targets US-based content)
- **Device**: desktop
- **Render**: JavaScript rendering enabled with the networkidle2 wait condition

**Function:** Initiates comprehensive business data extraction from Yelp with headless browser rendering to handle dynamic content.

#### 3. 🔧 Parse Yelp HTML

| Property | Value |
|----------|-------|
| Type | Code Node (JavaScript) |
| Purpose | Extracts structured business data from raw HTML |
| Mode | Run once for each item |

**Function:** Parses the scraped HTML content using regex patterns and JSON-LD extraction to retrieve:

- Business name
- Overall rating
- Review count
- Phone number
- Full address
- Price range
- Categories
- Website URL
- Business hours
- Image URLs

#### 4. 📊 Store to Google Sheet

| Property | Value |
|----------|-------|
| Type | Google Sheets Node |
| Purpose | Stores scraped business data for analysis and storage |
| Operation | Append rows |
| Target | "Yelp Scraper Data - Scrape.do" sheet |

**Data mapping:**

- Business Name, Overall Rating, Reviews Count
- Business URL, Phone, Address
- Price Range, Categories, Website
- Hours, Images/Videos URLs, Scraped Timestamp

### Workflow Flow

```
Form Input → Create Scrape.do Job → Parse Yelp HTML → Store to Google Sheet
     │                │                    │                  │
     ▼                ▼                    ▼                  ▼
User submits    API creates job     JavaScript code     Data appended
Yelp URL        with JS rendering   extracts fields     to spreadsheet
```

### Configuration Requirements

#### API Keys & Credentials

| Credential | Purpose |
|------------|---------|
| Scrape.do API Token | Required for Yelp business scraping with anti-bot bypass |
| Google Sheets OAuth2 | For data storage and export access |
| n8n Form Webhook | For user input collection |

#### Setup Parameters

| Parameter | Description |
|-----------|-------------|
| YOUR_SCRAPEDO_TOKEN | Your Scrape.do API token (appears in 3 places) |
| YOUR_GOOGLE_SHEET_ID | Target spreadsheet identifier |
| YOUR_GOOGLE_SHEETS_CREDENTIAL_ID | OAuth2 authentication reference |

### Key Features

#### 🛡️ Anti-Bot Bypass Technology

- **Residential proxy rotation**: 110M+ proxies across 150 countries
- **WAF bypass**: Handles Cloudflare, Akamai, DataDome, and PerimeterX
- **Dynamic TLS fingerprinting**: Authentic browser signatures
- **CAPTCHA handling**: Automatic bypass for uninterrupted scraping

#### 🌐 JavaScript Rendering

- Full headless browser support for dynamic Yelp content
- networkidle2 wait condition ensures complete page load
- Custom wait times for complex page elements
- Real device fingerprints for detection avoidance

#### 📊 Comprehensive Data Extraction

| Field | Description | Example |
|-------|-------------|---------|
| name | Business name | "Joe's Pizza Restaurant" |
| overall_rating | Average customer rating | "4.5" |
| reviews_count | Total number of reviews | "247" |
| url | Original Yelp business URL | "https://www.yelp.com/biz/..." |
| phone | Business phone number | "(555) 123-4567" |
| address | Full street address | "123 Main St, New York, NY 10001" |
| price_range | Price indicator | "$$" |
| categories | Business categories | "Pizza, Italian, Delivery" |
| website | Business website URL | "https://joespizza.com" |
| hours | Operating hours | "Mon-Fri 11:00-22:00" |
| images_videos_urls | Media content links | "https://s3-media1.fl.yelpcdn.com/..." |
| scraped_at | Extraction timestamp | "2025-01-15T10:30:00Z" |

#### 🗂️ Centralized Data Storage

- Automatic Google Sheets export
- Organized business data format with 12 data fields
- Historical scraping records with timestamps
- Easy sharing and collaboration

### Use Cases

#### 📈 Market Research

- Competitor business analysis
- Local market intelligence gathering
- Industry benchmark establishment
- Service offering comparison

#### 🎯 Lead Generation

- Business contact information extraction
- Potential client identification
- Market opportunity assessment
- Sales prospect development

#### 📊 Business Intelligence

- Customer sentiment analysis through ratings
- Competitor performance monitoring
- Market positioning research
- Brand reputation tracking

#### 📍 Location Analysis

- Geographic business distribution
- Local competition assessment
- Market saturation evaluation
- Expansion opportunity identification

### Technical Notes

| Specification | Value |
|---------------|-------|
| Processing Time | 15-45 seconds per business URL |
| Data Accuracy | 95%+ for publicly available business information |
| Success Rate | 99.98% (Scrape.do guarantee) |
| Proxy Pool | 110M+ residential, mobile, and datacenter IPs |
| JS Rendering | Full headless browser with networkidle2 wait |
| Data Format | JSON with structured field mapping |
| Storage Format | Structured Google Sheets with 12 predefined columns |

### Setup Instructions

#### Step 1: Import Workflow

1. Copy the JSON workflow configuration
2. Import into n8n: Workflows → Import from JSON
3. Paste the configuration and save

#### Step 2: Configure Scrape.do

1. Get your API token:
   - Sign up at Scrape.do
   - Navigate to Dashboard → API Token
   - Copy your token
2. Update workflow references (3 places):
   - 🔍 Create Scrape.do Job node → Headers → X-Token
   - 📡 Check Job Status node → Headers → X-Token
   - 📥 Fetch Task Results node → Headers → X-Token
3. Replace YOUR_SCRAPEDO_TOKEN with your actual API token.

#### Step 3: Configure Google Sheets

1. Create the target spreadsheet:
   - Create a new Google Sheet named "Yelp Business Data" or similar
   - Add a header row with columns: name | overall_rating | reviews_count | url | phone | address | price_range | categories | website | hours | images_videos_urls | scraped_at
   - Copy the Sheet ID from the URL (the long string between /d/ and /edit)
2. Set up OAuth2 credentials:
   - In n8n: Credentials → Add Credential → Google Sheets OAuth2
   - Complete the Google authentication process
   - Grant access to Google Sheets
3. Update workflow references:
   - Replace YOUR_GOOGLE_SHEET_ID with your actual Sheet ID
   - Update YOUR_GOOGLE_SHEETS_CREDENTIAL_ID with the credential reference

#### Step 4: Test and Activate

1. Test with a sample URL:
   - Use a known Yelp business URL (e.g., https://www.yelp.com/biz/example-business-city)
   - Submit it through the form trigger
   - Monitor execution progress in n8n
   - Verify data appears in the Google Sheet
2. Activate the workflow:
   - Toggle the workflow to "Active"
   - Share the form URL with users

### Sample Business Data

The workflow captures comprehensive business information including:

| Category | Data Points |
|----------|-------------|
| Basic Information | Name, category, location |
| Performance Metrics | Ratings, review counts, popularity |
| Contact Details | Phone, website, address |
| Visual Content | Photos, videos, gallery URLs |
| Operational Data | Hours, services, price range |

### Advanced Configuration

#### Batch Processing

Modify the input to accept multiple URLs by updating the job creation body:

```json
{
  "Targets": [
    "https://www.yelp.com/biz/business-1",
    "https://www.yelp.com/biz/business-2",
    "https://www.yelp.com/biz/business-3"
  ],
  "Super": true,
  "GeoCode": "us",
  "Render": {
    "WaitUntil": "networkidle2",
    "CustomWait": 3000
  }
}
```

#### Enhanced Rendering Options

For complex Yelp pages, add browser interactions:

```json
{
  "Render": {
    "BlockResources": false,
    "WaitUntil": "networkidle2",
    "CustomWait": 5000,
    "WaitSelector": ".biz-page-header",
    "PlayWithBrowser": [
      { "Action": "Scroll", "Direction": "down" },
      { "Action": "Wait", "Timeout": 2000 }
    ]
  }
}
```

#### Notification Integration

Add alert mechanisms:

- Email notifications for completed scrapes
- Slack messages for team updates
- Webhook triggers for external systems

### Error Handling

#### Common Issues

| Issue | Cause | Solution |
|-------|-------|----------|
| Invalid URL | URL is not a valid Yelp business page | Ensure URL format: https://www.yelp.com/biz/... |
| 401 Unauthorized | Invalid or missing API token | Verify the X-Token header value |
| Job Timeout | Page too complex or slow | Increase the CustomWait value |
| Empty Data | HTML parsing failed | Check the page structure, update regex patterns |
| Rate Limiting | Too many concurrent requests | Reduce request frequency or upgrade plan |

#### Troubleshooting Steps

1. **Verify URLs**: Ensure Yelp business URLs are correctly formatted
2. **Check credentials**: Validate the Scrape.do token and Google OAuth
3. **Monitor logs**: Review n8n execution logs for detailed errors
4. **Test connectivity**: Verify network access to all external services
5. **Check job status**: Use the Scrape.do dashboard to monitor job progress

### Performance Specifications

| Metric | Value |
|--------|-------|
| Processing Time | 15-45 seconds per business URL |
| Data Accuracy | 95%+ for publicly available information |
| Success Rate | 99.98% (with Scrape.do anti-bot bypass) |
| Concurrent Processing | Depends on Scrape.do plan limits |
| Storage Capacity | Unlimited (Google Sheets based) |
| Proxy Pool | 110M+ IPs across 150 countries |

### Scrape.do API Reference

#### Async API Endpoints

| Endpoint | Method | Purpose |
|----------|--------|---------|
| /api/v1/jobs | POST | Create new scraping job |
| /api/v1/jobs/{jobID} | GET | Check job status |
| /api/v1/jobs/{jobID}/{taskID} | GET | Retrieve task results |
| /api/v1/me | GET | Get account information |

#### Job Status Values

| Status | Description |
|--------|-------------|
| queuing | Job is being prepared |
| queued | Job is in queue waiting to be processed |
| pending | Job is currently being processed |
| rotating | Job is retrying with different proxies |
| success | Job completed successfully |
| error | Job failed |
| canceled | Job was canceled by the user |

For complete API documentation, visit: Scrape.do Documentation

### Support & Resources

- **Scrape.do Documentation**: https://scrape.do/documentation/
- **Scrape.do Dashboard**: https://dashboard.scrape.do/
- **n8n Documentation**: https://docs.n8n.io/
- **Google Sheets API**: https://developers.google.com/sheets/api

This workflow is powered by Scrape.do - Reliable, Scalable, Unstoppable Web Scraping
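To make the "Parse Yelp HTML" Code node concrete, here is a minimal sketch of JSON-LD extraction from a scraped page. The schema.org property names (`aggregateRating`, `telephone`, `priceRange`) are an assumption based on typical LocalBusiness markup; Yelp's actual markup may differ, which is why the real node also falls back to regex patterns.

```javascript
// Pull the first JSON-LD block out of the page HTML and map a few of
// the business fields listed above. Illustrative, not Yelp-exact.
function parseBusiness(html) {
  const match = html.match(
    /<script type="application\/ld\+json">([\s\S]*?)<\/script>/
  );
  if (!match) return null; // real node would fall back to regex patterns

  const ld = JSON.parse(match[1]);
  return {
    name: ld.name || "",
    overall_rating: ld.aggregateRating ? String(ld.aggregateRating.ratingValue) : "",
    reviews_count: ld.aggregateRating ? String(ld.aggregateRating.reviewCount) : "",
    phone: ld.telephone || "",
    price_range: ld.priceRange || "",
    scraped_at: new Date().toISOString(),
  };
}

// Demo with a stub page fragment
const sample =
  '<script type="application/ld+json">' +
  JSON.stringify({
    name: "Joe's Pizza Restaurant",
    telephone: "(555) 123-4567",
    priceRange: "$$",
    aggregateRating: { ratingValue: 4.5, reviewCount: 247 },
  }) +
  "</script>";
console.log(parseBusiness(sample));
```

In the n8n Code node ("Run once for each item" mode), the returned object would become the item's `json` payload for the Google Sheets node.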
by dirogar
Telegram Tasker Bot is an n8n workflow that receives voice messages in Telegram, automatically transcribes them to text, extracts the key task fields, and creates a card on the appropriate Trello board. The user simply dictates a task, and the bot formats it and replies with a link to the finished card.

To use it you will need a Telegram bot, which you can create via the BotFather bot. You will also need access to the ChatGPT API; it is used only for transcribing audio to text, and you can substitute any other transcription service of your choice. Finally, you need a Trello account with API access.

**Note:** the Trello board ID can be taken from the board URL, and the ID of a list (column) on the board can be obtained through the browser developer tools (at least, that is how I retrieved these values).
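For the final step, the bot creates a card through Trello's REST API (`POST /1/cards`). A rough sketch of how the request parameters are assembled is below; the key, token, and list ID values are placeholders, and the exact field mapping depends on what the workflow extracts from the transcript.

```javascript
// Build the query parameters for Trello's POST /1/cards endpoint.
// key/token/idList are placeholders you would fill from your account.
function buildCardParams({ name, desc, due }) {
  const params = new URLSearchParams({
    key: "TRELLO_KEY",     // placeholder API key
    token: "TRELLO_TOKEN", // placeholder token
    idList: "LIST_ID",     // the column ID obtained via developer tools
    name,
    desc: desc || "",
  });
  if (due) params.set("due", due);
  return params;
}

// The workflow would then POST to:
//   https://api.trello.com/1/cards?<params>
// and reply to the user with the created card's shortUrl.
const p = buildCardParams({ name: "Buy milk", due: "2025-01-20" });
console.log(p.get("name"));
```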
by Anir Agram
## 🛡️📥 Telegram Invoice Agent → 🔎 OCR → 🤖 AI Parsing → 📄 Google Sheets + 🗂️ Drive

### What this workflow does

- 🤖 Captures invoices from Telegram and auto-downloads PDFs/images.
- 🔎 Runs OCR, then uses AI to structure clean invoice fields.
- 📄 Appends parsed data to a Google Sheets "Invoice Database."
- 🗂️ Uploads the original file to Google Drive with a neat name.
- 💬 Sends a friendly Telegram summary with totals, due date, notes, and link.

### Why it's useful

- ⚡ Faster bookkeeping with zero manual copy-paste.
- 🧱 Consistent schema for reliable reporting and pivots.
- 👥 Team-friendly drop-and-log via Telegram.
- 🧩 Easy to extend with approvals, ERP/CRM sync, or vendor routing.

### How it works

1. 📲 Telegram Trigger → file received.
2. 🌐 HTTP OCR (OCR.space) → text extracted.
3. 🤖 AI Agent → maps to a strict JSON schema.
4. 📄 Google Sheets → appends a structured row.
5. 🗂️ Google Drive → saves the original invoice.
6. 💬 Telegram → concise confirmation and links.

### What you'll need

- 🤖 Telegram Bot token.
- 🔑 OCR API key (OCR.space: free tier; upgrade for volume/accuracy).
- 🔐 Google OAuth for Sheets + Drive.
- 🧠 LLM account (e.g., Gemini/OpenAI-compatible).

### Setup steps

1. 🔗 Connect credentials: Telegram, Google, OCR, AI.
2. 📄 Prepare Sheet columns: Invoice Number, Date, Total Amount ($), Billing Address, Due Date, Notes.
3. 🧭 Update the sheet ID and Drive folder ID.
4. 🧪 Test: send a sample invoice and validate the OCR, AI output, row append, and Drive link.

### Customization ideas

- 🎯 Higher-accuracy OCR: swap to Google Vision.
- 📊 Line items: extract into a second tab for analytics.
- ✅ Approvals: add Telegram keyboard confirmation before writing.
- 🧯 Robustness: IF/Retry on empty OCR; prompt the user to retake the photo.

### Who it's for

- 🧑‍💻 Freelancers/agencies needing fast invoice intake via Telegram.
- 🧾 Small finance teams wanting a searchable ledger with links to originals.
- 🏗️ Builders extending to ERPs/CRMs and custom accounting flows.

### Want help customizing?

- 📧 anirpoke@gmail.com
- 🔗 LinkedIn
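To illustrate the "AI Agent → strict JSON schema → Sheets row" step, here is a hypothetical sketch. The field names in `toSheetRow` match the Sheet columns listed in the setup steps; the snake_case keys the AI is asked to return are an assumption, since the workflow's actual prompt may name them differently.

```javascript
// Map a parsed invoice (the AI's structured output) onto the row that
// gets appended to the "Invoice Database" sheet. Keys on the input side
// are illustrative; column names match the documented Sheet layout.
function toSheetRow(inv) {
  return {
    "Invoice Number": inv.invoice_number,
    "Date": inv.date,
    "Total Amount ($)": inv.total_amount,
    "Billing Address": inv.billing_address,
    "Due Date": inv.due_date,
    "Notes": inv.notes || "",
  };
}

const row = toSheetRow({
  invoice_number: "INV-001",
  date: "2025-01-10",
  total_amount: 1250.5,
  billing_address: "123 Main St",
  due_date: "2025-02-10",
});
console.log(row["Invoice Number"]);
```

Keeping the mapping in one place like this is what makes the schema "strict": every invoice lands in the same columns regardless of how messy the OCR text was.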
by Rajeet Nair
Automatically converts CSV/XLSX files into a fully validated database schema using AI, generating SQL scripts, ERD diagrams, a data dictionary, and load plans to accelerate database design and data onboarding.

## Explanation

This workflow automates the end-to-end process of transforming raw CSV or Excel data into a production-ready relational database schema. It begins by accepting file uploads through a webhook, detecting the file type, and extracting structured data. The workflow performs data cleaning and deep profiling to analyze column types, uniqueness, null values, and patterns. A column analysis engine identifies candidate primary keys and potential relationships. An AI agent then generates a normalized schema by organizing data into tables, assigning appropriate SQL data types, and defining primary and foreign keys. The schema is validated using rule-based checks to ensure data integrity, correct relationships, and proper normalization. If validation fails, the workflow automatically refines the schema through a revision loop. Once validated, it generates SQL DDL scripts, ERD diagrams, a data dictionary, and a load plan that determines the correct order for inserting data. Finally, all outputs are combined and returned via webhook as a structured response, making the workflow ideal for rapid database creation, data migration, and AI-assisted data modeling.

## Overview

This workflow automatically converts CSV or Excel files into a production-ready relational database schema using AI and rule-based validation. It analyzes uploaded data to detect column types, relationships, and data quality, then generates a normalized schema with proper keys and constraints. The output includes SQL DDL scripts, ERD diagrams, a data dictionary, and a load plan. This eliminates manual schema design and accelerates database setup from raw data.

## How It Works

1. **File Upload (Webhook)**: Accepts CSV or XLSX files and initializes workflow configuration such as thresholds and retry limits.
2. **File Extraction**: Detects the file format and extracts rows into structured JSON.
3. **Data Cleaning & Profiling**: Cleans data, removes duplicates, normalizes values, and computes column statistics such as null percentage and uniqueness.
4. **Column Analysis Engine**: Identifies candidate primary keys, analyzes cardinality, and suggests potential foreign key relationships.
5. **AI Schema Generation**: Uses an AI agent to design normalized tables, assign SQL data types, and define primary keys, foreign keys, and constraints.
6. **Validation Layer**: Validates schema integrity by checking data types, primary key uniqueness, foreign key overlap, and constraint consistency.
7. **Revision Loop**: If validation fails, the workflow sends feedback to the AI agent and regenerates the schema until it meets requirements.
8. **Schema Output Generation**: Generates SQL DDL scripts, ERD diagrams, a data dictionary, and a load plan.
9. **Load Plan Engine**: Determines the correct order for inserting data and detects circular dependencies.
10. **Combine & Explain**: Merges all outputs and optionally provides AI-generated explanations of schema decisions.
11. **Response Output**: Returns all generated artifacts as a structured JSON response via webhook.

## Setup Instructions

1. Activate the workflow and copy the webhook URL
2. Send a POST request with a CSV or XLSX file
3. Configure OpenAI credentials for the AI agent
4. Adjust thresholds if needed (FK overlap, retries, confidence)
5. Execute the workflow and review the outputs

## Use Cases

- Automatically generate database schemas from CSV/Excel files
- Accelerate data migration and onboarding pipelines
- Rapidly prototype relational database designs
- Reverse engineer structured schemas from raw datasets
- AI-assisted data modeling and normalization

## Requirements

- n8n (latest version recommended)
- OpenAI API credentials
- LangChain nodes enabled
- CSV or XLSX input file
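The profiling and primary-key detection steps above can be sketched in a few lines. This is an illustrative reconstruction, not the workflow's actual code; in particular, the rule "fully unique and never null implies PK candidate" is a simplifying assumption standing in for the workflow's configurable thresholds.

```javascript
// Per-column profiling: null percentage, uniqueness ratio, and a naive
// primary-key-candidate flag, as the "Data Cleaning & Profiling" and
// "Column Analysis Engine" steps describe.
function profileColumns(rows) {
  const cols = Object.keys(rows[0] || {});
  return cols.map((col) => {
    const values = rows.map((r) => r[col]);
    const nonNull = values.filter((v) => v !== null && v !== undefined && v !== "");
    const unique = new Set(nonNull).size;
    const nullPct = values.length ? 1 - nonNull.length / values.length : 0;
    return {
      column: col,
      nullPct,
      uniqueness: nonNull.length ? unique / nonNull.length : 0,
      // simplification: a never-null, fully unique column is a PK candidate
      pkCandidate: nullPct === 0 && unique === values.length,
    };
  });
}

const profile = profileColumns([
  { id: 1, city: "Paris" },
  { id: 2, city: "Paris" },
  { id: 3, city: "Lyon" },
]);
console.log(profile);
```

Foreign-key suggestion works similarly: compare value sets between a candidate key column and columns in other tables, and flag pairs whose overlap exceeds the configured FK-overlap threshold.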
by Vuong Nguyen
## How it works

This workflow generates an 8-second product advertising video from a single input image. It downloads the image from Google Drive, converts it to base64 for the API request, analyzes it with Gemini (Creative Visualiser), then turns the description into a short video script/prompt. The prompt + image are sent to Veo to start a long-running video generation job. The workflow polls until a video URI is available, downloads the MP4, and uploads it back to Google Drive.

## Setup

1. Connect the credentials used in this workflow: Google Drive + Google Gemini, and an API key for the Veo HTTP requests.
2. Set the input image file in **Download ad image**.
3. Set the output folder in **Upload to Drive**.
4. (Optional) Adjust `aspectRatio`, `resolution`, and `durationSeconds` in **Generate Video**, then execute the workflow.
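The polling step can be sketched generically. This is a hand-rolled illustration, not Veo's actual API: the status-response shape (`done`, `videoUri`) is an assumption, and `checkStatus` stands in for the workflow's HTTP Request node that queries the long-running operation.

```javascript
// Extract the result URI from a (hypothetical) operation-status response.
function extractVideoUri(status) {
  return status && status.done && status.videoUri ? status.videoUri : null;
}

// Poll until the job reports a URI or we give up.
async function pollForVideo(checkStatus, { intervalMs = 10000, maxTries = 30 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const uri = extractVideoUri(await checkStatus());
    if (uri) return uri;
    await new Promise((r) => setTimeout(r, intervalMs)); // n8n: a Wait node
  }
  throw new Error("Video generation timed out");
}

// Demo with a fake job that finishes on the third poll
let calls = 0;
const fakeStatus = async () =>
  ++calls < 3 ? { done: false } : { done: true, videoUri: "gs://out/ad.mp4" };
pollForVideo(fakeStatus, { intervalMs: 1 }).then((uri) => console.log(uri));
```

In n8n this loop is usually built with a Wait node feeding back into the status-check HTTP node until an IF node sees the URI.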
by Robert Breen
## Create multi-sheet Excel workbooks in n8n to automate reporting using Google Drive + Google Sheets

Build an automated Excel file with multiple tabs directly in n8n. Two Code nodes generate datasets, each is converted into its own Excel worksheet, then combined into a single .xlsx and (optionally) appended to a Google Sheet for sharing, eliminating manual copy-paste and speeding up reporting.

### Who's it for

- Teams that publish recurring reports as Excel with multiple tabs
- Ops/Marketing/Data folks who want a no-code/low-code way to package JSON into Excel
- n8n beginners learning the Code → Convert to File → Merge pattern

### How it works

1. **Manual Trigger** starts the run.
2. **Code** nodes emit JSON rows for each table (e.g., People, Locations).
3. **Convert to File** nodes turn each JSON list into an Excel binary, assigning Sheet1/Sheet2 (or your names).
4. **Merge** combines both binaries into a single Excel workbook with multiple tabs.
5. **Google Sheets** (optional) appends the JSON rows to a live spreadsheet for collaboration.

### Setup (only 2 connections)

#### 1️⃣ Connect Google Sheets (OAuth2)

- In n8n → Credentials → New → Google Sheets (OAuth2)
- Sign in with your Google account and grant access
- Copy the example sheet referenced in the Google Sheets node (open the node and duplicate the linked sheet), or select your own
- In the workflow's Google Sheets node, select your Spreadsheet and Worksheet
- https://docs.google.com/spreadsheets/d/1G6FSm3VdMZt6VubM6g8j0mFw59iEw9npJE0upxj3Y6k/edit?gid=1978181834#gid=1978181834

#### 2️⃣ Connect Google Drive (OAuth2)

- In n8n → Credentials → New → Google Drive (OAuth2)
- Sign in with the Google account that will store your Excel outputs and allow access
- In your Drive-related nodes (if used), point to the folder where you want the .xlsx saved or retrieved

### Customize the workflow

- Replace the sample arrays in the Code nodes with your data (APIs, DBs, CSVs, etc.)
- Rename sheetName in each Convert to File node to match your desired tab names
- Keep the Merge node in Combine All mode to produce a single workbook
- In Google Sheets, switch to Manual mapping for strict column order (optional)

### Best practices (per template guidelines)

- **Rename nodes** to clear, action-oriented names (e.g., "Build People Sheet", "Build Locations Sheet")
- Add a yellow Sticky Note at the top with this description so users see setup in-workflow
- **Do not hardcode credentials** inside HTTP nodes; always use n8n Credentials
- Remove personal IDs/links before publishing

### Sticky Note (copy-paste)

> **Multi-Tab Excel Builder (Google Drive + Google Sheets)**
> This workflow generates two datasets (Code → JSON), converts each to an Excel sheet, merges them into a single workbook with multiple tabs, and optionally appends rows to Google Sheets.
>
> **Setup (2 connections):**
> 1) Google Sheets (OAuth2): Create credentials → duplicate/select your target spreadsheet → set Spreadsheet + Worksheet in the node.
> 2) Google Drive (OAuth2): Create credentials → choose the folder for storing/retrieving the .xlsx.
>
> **Customize:** Edit the Code nodes' arrays, rename tab names in Convert to File, and adjust the Sheets node mapping as needed.

### Troubleshooting

- **Missing columns / wrong order**: Use **Manual mapping** in the Google Sheets node
- **Binary not found**: Ensure each **Convert to File** node's binaryPropertyName matches what **Merge** expects
- **Permissions errors**: Re-authorize Google credentials; confirm you have edit access to the target Sheet/Drive folder

### 📬 Contact

Need help customizing this (e.g., filtering by campaign, sending reports by email, or formatting your PDF)?

- 📧 rbreen@ynteractive.com
- 🔗 https://www.linkedin.com/in/robert-breen-29429625/
- 🌐 https://ynteractive.com
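A sketch of what one of the Code nodes emits. The column names are sample data to replace with your own; note that an n8n Code node returns an array of items, each wrapping its row in a `json` property (in the node itself you would end with `return items;`).

```javascript
// Sample dataset for one worksheet (replace with your own rows).
const people = [
  { Name: "Ada", Role: "Engineer", Location: "London" },
  { Name: "Grace", Role: "Scientist", Location: "Arlington" },
];

// n8n item shape: one object per row, data under `json`.
// The downstream Convert to File node turns these into an Excel sheet.
const items = people.map((row) => ({ json: row }));
// In the n8n Code node, finish with:  return items;
console.log(items.length);
```

The second Code node follows the same pattern with its own rows (e.g., Locations), and each feeds its own Convert to File node before the Merge.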
by Yaron Been
This workflow automatically monitors marketing job boards to identify growing companies and potential business opportunities. It saves you time by eliminating the need to manually check job listings and provides insight into which companies are actively hiring and expanding their marketing teams.

## Overview

This workflow automatically scrapes marketing job listings from Indeed and other job boards to extract company information, job details, and growth indicators. It uses Bright Data to access job sites without being blocked and AI to intelligently parse job postings into structured data, then sends formatted email alerts to your marketing team.

## Tools Used

- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping job boards without being blocked
- **OpenAI**: AI agent for intelligent job data extraction and parsing
- **Gmail**: For sending automated job alert emails to your team

## How to Install

1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Bright Data**: Add your Bright Data credentials to the MCP Client node
3. **Set Up OpenAI**: Configure your OpenAI API credentials
4. **Configure Gmail**: Connect your Gmail account for sending notifications
5. **Customize**: Set your target job search parameters and email recipients

## Use Cases

- **Business Development**: Identify rapidly growing companies for potential partnerships
- **Sales Teams**: Target companies actively hiring for sales outreach opportunities
- **Market Research**: Track hiring trends and identify emerging market players
- **Recruitment**: Monitor competitor hiring patterns and market opportunities

## Connect with Me

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #jobboards #marketingjobs #brightdata #webscraping #businessdevelopment #leadgeneration #companyresearch #jobmonitoring #n8nworkflow #workflow #nocode #jobautomation #marketresearch #growingcompanies #hiringtrends #salesleads #prospecting #jobscraping #indeed #recruitmentintel #businessintelligence #marketanalysis #companytracking #automatedalerts #emailnotifications #jobdata #hiringinsights #marketopportunities
by Mutasem
## Use case

This workflow snoozes Todoist tasks by moving them into a Snoozed Todoist project, and unsnoozes them 3 days before their due date. It helps keep your Inbox limited to the tasks you need to worry about soon.

## How to set up

1. Add your Todoist credentials.
2. Create a Todoist project called Snoozed.
3. Set the project IDs in the relevant nodes.
4. Add due dates to your tasks in Inbox and watch them disappear into Snoozed. Set a task's date to tomorrow and watch it return to Inbox.

## How to adjust this template

Adjust the timeline: maybe 3 days is too close for you. Works mostly for me :)
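The unsnooze rule comes down to a simple date comparison, sketched here as a standalone function (the real workflow does the equivalent with Todoist node filters; the function name and window parameter are illustrative):

```javascript
// A task should leave the Snoozed project once its due date falls
// within the configured window (3 days by default).
function shouldUnsnooze(dueDate, now = new Date(), windowDays = 3) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysUntilDue = (new Date(dueDate) - now) / msPerDay;
  return daysUntilDue <= windowDays;
}

const today = new Date("2025-01-10T00:00:00Z");
console.log(shouldUnsnooze("2025-01-12T00:00:00Z", today)); // due in 2 days
```

Changing `windowDays` is the one-line version of "adjust the timeline" below: a bigger window returns tasks to the Inbox earlier.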
by Deborah
Want to learn the basics of n8n? Our comprehensive quickstart tutorial is here to guide you through n8n's fundamentals, step by step. Designed with beginners in mind, it provides a hands-on approach to learning n8n's basic functionality.
by Oneclick AI Squad
This workflow automatically monitors blog posts or product pages for new or updated content, analyzes Google AI Overviews to identify optimization gaps, auto-generates optimized titles, summaries, and schema markup, and applies these improvements to enhance visibility in AI search results.

## Who's it for

- SEO managers optimizing for AI-driven search results
- Content teams publishing 5+ articles per week
- E-commerce teams managing product page visibility
- Digital marketers tracking AI Overview presence

## How it works / What it does

1. Detects new or updated blog posts / product pages via webhook or schedule
2. Fetches raw content and existing metadata
3. Queries the Google Search API to analyze AI Overview coverage gaps
4. AI generates an optimized title, meta summary, and JSON-LD schema markup
5. Pushes improvements back to the CMS or target endpoint
6. Logs all changes and scores to a Google Sheet tracker

## How to set up

1. Import this workflow
2. Set up credentials (Google Search API, OpenAI/Anthropic, CMS/HTTP, Google Sheets)
3. Update your site URL and content preferences
4. Activate the workflow

## Requirements

- Webhook endpoint or CMS trigger
- Google Custom Search API key
- OpenAI / Anthropic / Grok API
- CMS REST API or target HTTP endpoint
- Google Sheets for tracking

## How to customize the workflow

- Change the AI tone and schema type in the AI node
- Modify the Python keyword/gap detection logic
- Update the Google Sheet columns and Sheet ID
- Adjust the polling interval or wait times
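As a sketch of the JSON-LD the AI step emits for a blog post, here is a minimal builder. The property names follow schema.org's Article type; this is illustrative, since the actual schema type is whatever you configure in the AI node (Product, FAQPage, etc.).

```javascript
// Build a schema.org Article JSON-LD object from the AI-generated
// title and summary plus page metadata.
function buildArticleSchema({ title, summary, url, datePublished }) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: title,
    description: summary,
    url,
    datePublished,
  };
}

const schema = buildArticleSchema({
  title: "Optimizing for AI Overviews",
  summary: "How to structure content for AI search results.",
  url: "https://example.com/ai-overviews",
  datePublished: "2025-01-15",
});
// The CMS push step embeds this on the page as:
//   <script type="application/ld+json">…</script>
console.log(JSON.stringify(schema));
```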
by Miquel Colomer
Do you want to avoid bounces in your email marketing campaigns? This workflow verifies email addresses using the uProc.io email verifier.

You need to add your credentials (your email and API key, found in the Integration section of your uProc account) to n8n. The "Create Email Item" node can be replaced by any other supported service that provides an email value, such as Mailchimp, Calendly, MySQL, or Typeform.

The "uProc" node returns a status for each checked email (deliverable, undeliverable, spamtrap, softbounce, ...). The "If" node checks whether the "deliverable" status is present. If it is not, you can mark the email as invalid to discard bounces. If the status is "deliverable", you can use the email in your email marketing campaigns.

If you need detailed indicators for an email, you can use the tool "Communication" > "Check Email Exists (Extended)" to get advanced information.
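The branching logic of the "If" node can be sketched as follows. The response shape (`{ status: "deliverable" | ... }`) is an assumption for illustration; the real uProc node's output field names may differ.

```javascript
// Mirror the If-node decision: only "deliverable" emails proceed to
// the campaign branch; everything else is discarded as a bounce risk.
function classifyEmail(result) {
  return result.status === "deliverable"
    ? { usable: true, reason: "safe for campaigns" }
    : { usable: false, reason: `discarded (${result.status})` };
}

console.log(classifyEmail({ status: "deliverable" }));
console.log(classifyEmail({ status: "softbounce" }));
```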