by Evoort Solutions
# Automated SEO Competitor Analysis with Google Sheets Logging

**Description:** This n8n workflow automates SEO competitor analysis by integrating the Competitor Analysis API. It captures website domains via a simple user form, sends them to the API to retrieve competitor data, and logs the results directly into Google Sheets. The workflow handles empty or missing data gracefully and uses a Wait node to respect API rate limits, making competitor tracking seamless and reliable.

## Node-by-Node Explanation

- **On Form Submission** – Triggers the workflow when a user submits a website domain.
- **Global Storage** – Stores the submitted website domain for use in subsequent nodes.
- **Competitor Analysis Request** – Sends a POST request to the Competitor Analysis API to fetch organic competitors, pages, and keyword data.
- **If (Condition Check)** – Verifies that the API response contains valid, non-empty data.
- **Google Sheets (Insert Success Data)** – Appends the fetched competitor data to a specified Google Sheet.
- **Wait Node** – Adds a 5-second delay to avoid exceeding API rate limits.
- **Google Sheets (Insert "Not Found" Record)** – Logs a "No data found." entry into Google Sheets if the API response lacks relevant data.

## Use Cases & Benefits

- **SEO Professionals:** Automate competitor insights collection for efficient SEO research.
- **Marketing Teams:** Maintain consistent, automated logs of competitor data across multiple websites.
- **Agencies:** Manage organic search competitor tracking for many clients effortlessly.
- **Reliability:** Conditional checks and Wait nodes ensure smooth API interaction and data integrity.
- **Scalable & User-Friendly:** Simple form input and Google Sheets integration enable easy adoption and scalability.

## 🔐 How to Get Your API Key for the Competitor Keyword Analysis API

1. Go to 👉 Competitor Keyword Analysis API.
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API key will be visible in the "x-rapidapi-key" header.

🔑 Copy and paste this key into the HTTP Request node in your workflow.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
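If you want to test the API outside n8n first, here is a minimal TypeScript sketch (Node 18+) of the Competitor Analysis Request node. Only the x-rapidapi-key header convention comes from the template; the endpoint URL, host, and body/response field names are assumptions, so copy the real values from the API's Endpoints tab on RapidAPI.

```ts
// Sketch of the HTTP Request node's call, for reference outside n8n.
// The endpoint path, host, and field names are assumptions; check the
// API's "Endpoints" tab on RapidAPI for the exact values.
const response = await fetch(
  "https://competitor-analysis-api.p.rapidapi.com/competitors", // hypothetical path
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-rapidapi-host": "competitor-analysis-api.p.rapidapi.com", // hypothetical host
      "x-rapidapi-key": process.env.RAPIDAPI_KEY ?? "",
    },
    body: JSON.stringify({ domain: "example.com" }), // field name assumed
  }
);
const data = await response.json();

// Mirror the workflow's If node: only log rows when the response is non-empty.
if (Array.isArray(data?.competitors) && data.competitors.length > 0) {
  console.log(data.competitors);
} else {
  console.log("No data found.");
}
```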
by Barbora Svobodova
# Sora 2 Video Generation: Prompt-to-Video Automation with OpenAI API

## Who's it for

This template is ideal for content creators, marketers, developers, or anyone needing automated AI video creation from text prompts. It is perfect for bulk generation, marketing assets, or rapid prototyping using OpenAI's Sora 2 API.

Example use cases:

- E-commerce sellers creating product showcase videos for multiple items without hiring videographers or renting studios
- Social media managers generating daily content like travel vlogs, lifestyle videos, or brand stories from simple text descriptions
- Marketing teams producing promotional videos for campaigns, events, or product launches in minutes instead of days

## How it works / What it does

1. Submit a text prompt using a form or input node.
2. The workflow sends your prompt to the Sora 2 API endpoint to start video generation.
3. It polls the API to check whether the video is still processing or completed.
4. When ready, it retrieves the finished video's download link and automatically saves the file.

All actions (prompt submission, status checks, and video retrieval) run without manual oversight.

## How to set up

1. Use your existing OpenAI API key or create a new one at https://platform.openai.com/api-keys
2. Replace Your_API_Key with your OpenAI API key in the following nodes: Sora 2 Video, Get Video, Download Video.
3. Adjust the intervals in the Wait for Video node if needed; video generation typically takes several minutes.
4. Enter your video prompt into the Text Prompt trigger form to start the workflow.

## Requirements

- OpenAI account & OpenAI API key
- n8n instance (cloud or self-hosted)
- A form, webhook, or manual trigger for prompt submission

## How to customize the workflow

- Connect the prompt input to external forms, bots, or databases.
- Add post-processing steps like uploading videos to cloud storage or social platforms.
- Adjust polling intervals for efficient status checking.

## Limitations and Usage Tips

- **Prompt clarity:** For the best results, keep prompts clear, concise, and well-structured. Avoid ambiguity and overly complex language to improve AI interpretation.
- **Processing duration:** Video creation may take several minutes depending on prompt complexity and system load; design workflows to tolerate this delay.
- **Polling interval configuration:** Tune polling intervals to balance responsiveness against API rate limits, optimizing both performance and resource usage.
- **API dependency:** This workflow relies on the availability and quota limits of OpenAI's Sora 2 API. Monitor your API usage to avoid interruptions and service constraints.
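Outside n8n, the three API nodes map onto a create, poll, download sequence. Here is a minimal TypeScript sketch (Node 18+); the /v1/videos paths and response fields reflect OpenAI's video API at the time of writing, but treat them as assumptions and verify against the current OpenAI documentation.

```ts
// Sketch of the create -> poll -> download sequence the workflow automates.
// Endpoint paths and response fields are assumptions; verify in OpenAI's docs.
const apiKey = process.env.OPENAI_API_KEY ?? "";
const headers = { Authorization: `Bearer ${apiKey}` };

// 1. Start generation (the "Sora 2 Video" node).
const createRes = await fetch("https://api.openai.com/v1/videos", {
  method: "POST",
  headers: { ...headers, "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "sora-2",
    prompt: "A drone shot over a coastline at sunrise",
  }),
});
const { id } = await createRes.json();

// 2. Poll until done (the "Wait for Video" + "Get Video" loop).
let status = "queued";
while (status !== "completed" && status !== "failed") {
  await new Promise((r) => setTimeout(r, 30_000)); // 30s between checks
  const pollRes = await fetch(`https://api.openai.com/v1/videos/${id}`, { headers });
  status = (await pollRes.json()).status;
}

// 3. Download the finished video (the "Download Video" node).
if (status === "completed") {
  const videoRes = await fetch(`https://api.openai.com/v1/videos/${id}/content`, { headers });
  const buffer = Buffer.from(await videoRes.arrayBuffer());
  await import("node:fs/promises").then((fs) => fs.writeFile("video.mp4", buffer));
}
```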
by Evoort Solutions
# 🔍 SERP Keyword Ranking Checker with RapidAPI & Google Sheets

Automate keyword SERP ranking lookups and log the data into Google Sheets using this no-code n8n workflow. Perfect for SEO professionals, digital marketers, or anyone tracking keyword visibility across regions.

## 🧰 Tools Used

- **SERP Keyword Ranking Checker API** – Fetch real-time keyword SERP data
- **Google Sheets** – Log keyword data for tracking, analysis, or client reporting

## 📌 Workflow Overview

1. User submits a keyword and country code via an n8n form.
2. The workflow sends a request to the SERP Keyword Ranking Checker API.
3. The API response is checked:
   - If valid data is found, it is logged to Google Sheets.
   - If no results are found, a fallback message is saved instead.
4. Optional delays space out operations.

## ⚙️ Node-by-Node Breakdown

### 1. 🟢 Form Trigger: On form submission

Accepts user input for:
- keyword (e.g. "labubu")
- country (e.g. "us")

### 2. 📦 Set Node: Global Storage

Stores form input into variables (keyword, country) for use in API requests and logging.

### 3. 🌐 HTTP Request: SERP Keyword Ranking Checker

Sends a POST request to the SERP Keyword Ranking Checker API with:
- keyword
- country

Includes headers:
- x-rapidapi-host: serp-keyword-ranking-checker.p.rapidapi.com
- x-rapidapi-key: YOUR_RAPIDAPI_KEY

### 4. ⚖️ If Node: Condition Checking

Checks whether the serpResults array returned by the API is non-empty.
- ✅ True branch: proceeds if valid SERP data is available.
- ❌ False branch: proceeds if no SERP data is returned (e.g., an empty result).

### 5. ⏳ Wait Node: 5-Second Delay

Adds a 5-second delay before the Google Sheets insertion. This controls execution pace and smooths over API rate limits and spreadsheet latency. It is used on both the True and False branches for consistency.

### 6. 📊 Google Sheets (Success Path)

Appends a new row to the selected Google Sheet with:
- Keyword – the submitted keyword
- Country – the selected country code
- Json data – the full serpResults JSON array returned by the API

💡 Ideal for tracking keyword rankings over time or populating live dashboards.

### 7. 📊 Google Sheets (Failure Path)

Appends a fallback row to the Google Sheet when no SERP results are found:
- Keyword – the submitted keyword
- Country – the selected country code
- Json data – "No result found. Please try another keyword..."

🔍 Helps maintain a log of unsuccessful queries for debugging or auditing.

## 💡 Use Cases

- **SEO Audits** – Automate keyword performance snapshots across different markets to identify opportunities and gaps.
- **Competitor Analysis** – Track keyword rankings of rival brands in real time to stay ahead of the competition.
- **Client Reporting** – Feed live SERP data into dashboards or reports for transparent, real-time agency deliverables.
- **Content Strategy** – Discover which keywords surface top-ranking pages to guide content creation and optimization.

## 🔑 How to Obtain Your API Key

1. **Sign up or log in** – Visit RapidAPI and create a free account using your email or social login.
2. **Go to the API page** – Navigate to the SERP Keyword Ranking Checker API.
3. **Subscribe to the API** – Click Subscribe to Test, then choose a pricing plan that fits your needs (Free, Basic, Pro).
4. **Get your API key** – After subscribing, go to the Security tab on the API page to find your X-RapidAPI-Key.
5. **Use your API key** – Add it to your HTTP request headers: X-RapidAPI-Key: YOUR_API_KEY

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
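For readers who want to exercise the API call from step 3 outside n8n, here is a minimal TypeScript sketch (Node 18+). The x-rapidapi-host value and the serpResults field come from the template; the endpoint path and body field names are assumptions, so check the API's RapidAPI page for the exact values.

```ts
// Sketch of the SERP Keyword Ranking Checker request.
// The "/search" path and body fields are assumed; verify on RapidAPI.
const res = await fetch("https://serp-keyword-ranking-checker.p.rapidapi.com/search", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-rapidapi-host": "serp-keyword-ranking-checker.p.rapidapi.com",
    "x-rapidapi-key": process.env.RAPIDAPI_KEY ?? "",
  },
  body: JSON.stringify({ keyword: "labubu", country: "us" }),
});
const { serpResults } = await res.json();

// Mirror the If node: log data on the success path, a fallback otherwise.
if (Array.isArray(serpResults) && serpResults.length > 0) {
  console.log(JSON.stringify(serpResults));
} else {
  console.log("No result found. Please try another keyword...");
}
```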
by Wevanta Infotech
# LinkedIn Auto-Post Agent for n8n

🚀 Automate your LinkedIn presence with AI-powered content generation

This n8n workflow automatically generates and publishes engaging LinkedIn posts using OpenAI's GPT models. Perfect for professionals and businesses who want to maintain an active LinkedIn presence without manual effort.

## ✨ Features

- 🤖 **AI-Powered Content**: Generate professional LinkedIn posts using OpenAI GPT-3.5-turbo or GPT-4
- ⏰ **Automated Scheduling**: Post content automatically on weekdays at 9 AM (customizable)
- 🎯 **Manual Trigger**: Generate and post content on demand
- 🔒 **Secure**: All credentials stored securely in n8n's encrypted credential system
- 📊 **Error Handling**: Built-in retry logic and error notifications
- 🎨 **Customizable**: Easily modify prompts, scheduling, and content parameters

## 🏗️ Architecture

This workflow uses a streamlined 3-node architecture:

Schedule/Manual Trigger → OpenAI Content Generation → LinkedIn Post

### Node Details

- **Schedule Trigger**: Automatically triggers the workflow (default: weekdays at 9 AM)
- **Manual Trigger**: Allows on-demand content generation
- **OpenAI Content Generation**: Creates LinkedIn-optimized content using AI
- **LinkedIn Post**: Publishes the generated content to LinkedIn

## 📋 Prerequisites

- n8n instance (self-hosted or cloud)
- OpenAI API account and API key
- LinkedIn account with API access
- Basic familiarity with n8n workflows

## 🚀 Quick Start

### 1. Import the Workflow

1. Download the linkedin-auto-post-agent.json file
2. In your n8n instance, go to Workflows → Import from File
3. Select the downloaded JSON file
4. Click Import

### 2. Set Up Credentials

**OpenAI API Credentials**

1. Go to Credentials in your n8n instance
2. Click Create New Credential
3. Select OpenAI
4. Enter your OpenAI API key
5. Name it "OpenAI API" and save

**LinkedIn OAuth2 Credentials**

1. Create a LinkedIn App at the LinkedIn Developer Portal
2. Configure OAuth 2.0 settings:
   - Redirect URL: https://your-n8n-instance.com/rest/oauth2-credential/callback
   - Scopes: r_liteprofile, w_member_social
3. In n8n, create new LinkedIn OAuth2 credentials
4. Enter your LinkedIn App's Client ID and Client Secret
5. Complete the OAuth authorization flow

### 3. Configure the Workflow

1. Open the imported workflow
2. Click on the OpenAI Content Generation node and select your OpenAI credentials
3. Customize the content prompt if desired
4. Click on the LinkedIn Post node and select your LinkedIn OAuth2 credentials
5. Save the workflow

### 4. Test the Workflow

1. Click the Manual Trigger node
2. Click Execute Node to test content generation
3. Verify the generated content in the LinkedIn node output
4. Check your LinkedIn profile to confirm the post was published

### 5. Activate Automated Posting

Click the Active toggle in the top-right corner. The workflow will now run automatically based on the schedule.

## ⚙️ Configuration Options

### Scheduling

The default schedule posts content on weekdays at 9 AM. To modify, click the Schedule Trigger node and edit the cron expression:

- `0 9 * * 1-5`: Weekdays at 9 AM (default)
- `0 12 * * *`: Daily at noon
- `0 9 * * 1,3,5`: Monday, Wednesday, Friday at 9 AM

### Content Customization

Modify the OpenAI prompt to change content style:

1. Click the OpenAI Content Generation node
2. Edit the System Message to adjust tone and style
3. Modify the User Message to change topic focus

### Example Prompts

**Professional Development Focus:**
Create a LinkedIn post about professional growth, skill development, or career advancement. Keep it under 280 characters and include 2-3 relevant hashtags.

**Industry Insights:**
Generate a LinkedIn post sharing an industry insight or trend in technology. Make it thought-provoking and include relevant hashtags.

**Motivational Content:**
Write an inspiring LinkedIn post about overcoming challenges or achieving goals. Keep it positive and engaging with appropriate hashtags.

### Model Selection

Choose between OpenAI models based on your needs:

- **gpt-3.5-turbo**: Cost-effective, good quality
- **gpt-4**: Higher quality, more expensive
- **gpt-4-turbo**: Latest model with improved performance

## 🔧 Advanced Configuration

### Error Handling

The workflow includes built-in error handling:

- **Retry logic**: 3 attempts with 1-second delays
- **Continue on fail**: The workflow continues even if individual nodes fail
- **Error notifications**: Optional email/Slack notifications on failures

### Content Review Workflow (Optional)

To add manual content review before posting:

1. Add a Wait node between the OpenAI and LinkedIn nodes
2. Configure a webhook trigger for approval
3. Add conditional logic based on approval status

### Rate Limiting

To respect API limits:

- OpenAI: 3 requests per minute (default)
- LinkedIn: 100 posts per day per user
- Adjust scheduling frequency accordingly

## 📊 Monitoring and Analytics

### Execution History

1. Go to Executions in your n8n instance
2. Filter by workflow name to see all runs
3. Click on individual executions to see detailed logs

### Key Metrics to Monitor

- **Success rate**: Percentage of successful executions
- **Content quality**: Review generated posts periodically
- **API usage**: Monitor OpenAI token consumption
- **LinkedIn engagement**: Track post performance on LinkedIn

## 🛠️ Troubleshooting

### Common Issues

**OpenAI node fails**
- Verify the API key is correct and has sufficient credits
- Check whether you've exceeded rate limits
- Ensure the model name is spelled correctly

**LinkedIn node fails**
- Verify OAuth2 credentials are properly configured
- Check whether the LinkedIn app has the required permissions
- Ensure the content doesn't violate LinkedIn's posting policies

**Workflow doesn't trigger**
- Confirm the workflow is marked as "Active"
- Verify the cron expression syntax
- Check n8n's timezone settings

### Debug Mode

1. Enable Save Manual Executions in the workflow settings
2. Run the workflow manually to see detailed execution data
3. Check each node's input/output data

## 🔒 Security Best Practices

- Store all API keys in n8n's encrypted credential system
- Regularly rotate API keys (monthly recommended)
- Use environment variables for sensitive configuration
- Enable execution logging for audit trails
- Monitor for unusual API usage patterns

## 📈 Optimization Tips

### Content Quality

- Review and refine prompts based on output quality
- A/B test different prompt variations
- Monitor LinkedIn engagement metrics
- Adjust posting frequency based on audience response

### Cost Optimization

- Use gpt-3.5-turbo for cost-effective content generation
- Set appropriate token limits (200 tokens recommended)
- Monitor OpenAI usage in your dashboard

### Performance

- Keep workflows simple with minimal nodes
- Use appropriate retry settings
- Monitor execution times and optimize if needed

## 🤝 Contributing

We welcome contributions to improve this workflow:

1. Fork the repository
2. Create a feature branch
3. Make your improvements
4. Submit a pull request

## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.

## 🆘 Support

If you encounter issues or have questions:

1. Check the troubleshooting section above
2. Review n8n's official documentation
3. Join the n8n community forum
4. Create an issue in this repository

## 🔗 Useful Links

- n8n Documentation
- OpenAI API Documentation
- LinkedIn API Documentation
- n8n Community Forum

Happy Automating! 🚀

This workflow helps you maintain a consistent LinkedIn presence while focusing on what matters most: your business and professional growth.
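The OpenAI Content Generation node boils down to a single Chat Completions call. Here is a minimal TypeScript sketch (Node 18+) using the first example prompt and the 200-token limit suggested under Cost Optimization; the system/user wording is illustrative rather than the template's exact configuration, and the LinkedIn publishing step is handled by n8n's LinkedIn node, so it is omitted here.

```ts
// Minimal sketch of the content-generation step.
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo", // swap for gpt-4 / gpt-4-turbo as discussed above
    max_tokens: 200, // the token limit recommended under Cost Optimization
    messages: [
      {
        role: "system",
        content: "You write concise, professional LinkedIn posts.", // illustrative
      },
      {
        role: "user",
        content:
          "Create a LinkedIn post about professional growth, skill development, " +
          "or career advancement. Keep it under 280 characters and include 2-3 " +
          "relevant hashtags.",
      },
    ],
  }),
});
const post: string = (await res.json()).choices[0].message.content;
console.log(post); // this text is what the LinkedIn Post node would publish
```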
by Guillaume Duvernay
This template processes a CSV of questions and returns an enriched CSV with RAG-based answers produced by your Lookio assistant. Upload a CSV that contains a column named Query, and the workflow will loop through every row, call the Lookio API, and append a Response column containing the assistant's answer. It's ideal for batch tasks like drafting RFP responses, pre-filling support replies, generating knowledge-checked summaries, or validating large lists of product/customer questions against your internal documentation.

## Who is this for?

- **Knowledge managers & technical writers:** Produce draft answers to large question sets using your company docs.
- **Sales & proposal teams:** Auto-generate RFP answer drafts informed by internal docs.
- **Support & operations teams:** Bulk-enrich FAQs or support ticket templates with authoritative responses.
- **Automation builders:** Integrate Lookio-powered retrieval into bulk data pipelines.

## What it does / What problem does this solve?

- **Automates bulk queries:** Eliminates the manual process of running many individual lookups.
- **Ensures answers are grounded:** Responses come from your uploaded documents via Lookio, reducing hallucinations.
- **Produces ready-to-use output:** Delivers an enriched CSV with a new Response column for downstream use.
- **Simple UX:** Users only need to upload a CSV with a Query column and download the resulting file.

## How it works

1. **Form submission:** The user uploads a CSV via the Form Trigger.
2. **Extract & validate:** Extract all rows reads the CSV and Aggregate rows checks for a Query column.
3. **Per-row loop:** Split Out and Loop Over Queries iterate over the rows; Isolate the Query column normalizes the data.
4. **Call Lookio:** Lookio API call posts each query to your assistant and returns the answer.
5. **Build output:** Prepare output appends Response values and Generate enriched CSV creates the downloadable file delivered by Form ending and file download.

## Why use Lookio for high-quality RAG?

While building a native RAG pipeline in n8n offers granular control, achieving consistently high-quality and reliable results requires significant effort in data processing, chunking strategy, and retrieval logic optimization. Lookio addresses these challenges by providing a managed RAG service accessible via a simple API. It handles the entire backend pipeline, from processing various document formats to employing advanced retrieval techniques, letting you integrate a production-ready knowledge source into your workflows. This approach lets you focus on building your automation in n8n rather than managing the complexities of a RAG infrastructure.

## How to set up

1. **Create a Lookio assistant:** Sign up at https://www.lookio.app/, upload documents, and create an assistant.
2. **Get credentials:** Copy your Lookio API Key and Assistant ID.
3. **Configure the workflow nodes:** In the Lookio API call HTTP Request node, replace the api_key header value with your Lookio API Key and update assistant_id with your Assistant ID (replacing placeholders like <your-lookio-api-key> and <your-assistant-id>). Ensure the Form Trigger is enabled and accepts a .csv file.
4. **CSV format:** Ensure the input CSV has a column named Query (case-sensitive as configured).
5. **Activate the workflow:** Run a test upload and download the enriched CSV.

## Requirements

- An n8n instance with the ability to host Forms and run workflows
- A Lookio account (API Key) and an Assistant ID

## How to take it further

- **Add rate limiting / retries:** Insert error handling and delay nodes to respect API limits for large batches.
- **Improve the speed:** You can drastically reduce processing time by parallelizing the queries instead of running them one after another in the loop; for example, use HTTP Request nodes that trigger a sub-workflow.
- **Store results:** Add an Airtable or Google Sheets node to archive questions and responses for audit and reuse.
- **Post-process answers:** Add an LLM node to summarize or standardize responses, or to add confidence flags.
- **Trigger variations:** Replace the Form Trigger with a Google Drive or Airtable trigger to process CSVs automatically from a folder or table.
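As a reference for the Lookio API call node, here is a hypothetical TypeScript sketch (Node 18+) of the per-row request. The endpoint URL and response field are placeholders; only the api_key header and the assistant_id/query parameters come from the setup instructions above, so copy the real values from the template's HTTP Request node or Lookio's documentation.

```ts
// Hypothetical sketch of the "Lookio API call" node for one CSV row.
// The endpoint URL and response field name are assumptions.
async function askLookio(query: string): Promise<string> {
  const res = await fetch("https://api.lookio.app/v1/query", { // hypothetical URL
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      api_key: process.env.LOOKIO_API_KEY ?? "", // header name from the template
    },
    body: JSON.stringify({
      assistant_id: process.env.LOOKIO_ASSISTANT_ID, // your Assistant ID
      query, // the row's Query column
    }),
  });
  const data = await res.json();
  return data.answer ?? ""; // response field name assumed
}

// Example: enrich one parsed CSV row with a Response column.
const row: Record<string, string> = { Query: "What is our refund policy?" };
row.Response = await askLookio(row.Query);
console.log(row);
```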
by franck fambou
# Extract and Convert PDF Documents to Markdown with LlamaIndex Cloud API

## Overview

This workflow automatically converts PDF documents to Markdown format using the LlamaIndex Cloud API. LlamaIndex is a data framework that specializes in connecting large language models with external data sources, offering advanced document processing with high accuracy and intelligent content extraction.

## How It Works

Automatic processing pipeline:

1. **Form Submission Trigger:** The workflow initiates when a user submits a document through a web form.
2. **Document Upload:** PDF files are automatically uploaded to LlamaIndex Cloud for processing.
3. **Smart Status Monitoring:** The system continuously checks processing status and adapts the workflow based on the results.
4. **Conditional Content Extraction:** Upon successful processing, the extracted Markdown content is retrieved for further use.

## Setup Instructions

Estimated setup time: 5-10 minutes

### Prerequisites

- LlamaIndex Cloud account and API credentials
- Access to an n8n instance (cloud or self-hosted)

### Configuration Steps

1. **Configure the Form Trigger**
   - Set up the webhook form trigger with file upload capability.
   - Add required fields to capture document metadata and processing preferences.
2. **Set up the LlamaIndex API connection**
   - Obtain your API key from the LlamaIndex Cloud dashboard.
   - Configure the HTTP Request node with your credentials and endpoint URL.
   - Set the proper authentication headers and request parameters.
3. **Configure status verification**
   - Define polling intervals for status checks (recommended: 10-30 seconds).
   - Set maximum retry attempts to avoid infinite loops.
   - Configure success/failure criteria based on API response codes.
4. **Set up the content extractor**
   - Configure output format preferences (Markdown styling, headers, etc.).
   - Set up error handling for failed extractions.
   - Define content storage or forwarding destinations.

## Use Cases

- **Document Digitization:** Convert legacy PDF documents to editable Markdown format.
- **Content Management:** Prepare documents for CMS integration or static site generators.
- **Knowledge Base Creation:** Transform PDF manuals and guides into searchable Markdown content.
- **Academic Research:** Convert research papers and publications for analysis and citation.
- **Technical Documentation:** Process PDF specifications and manuals for developer documentation.

## Key Features

- Fully automated PDF-to-Markdown conversion
- Intelligent content structure preservation
- Error handling and retry mechanisms
- Status monitoring with real-time feedback
- Scalable processing for batch operations

## Requirements

- LlamaIndex Cloud API key
- n8n instance (v0.200.0 or higher recommended)
- Internet connectivity for API access

## Support

For issues related to the LlamaIndex API, consult the official documentation. For n8n-specific questions, refer to the n8n community forum.
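For orientation, here is a TypeScript sketch (Node 18+) of the upload, poll, and extract sequence the workflow automates. The /api/parsing/... paths follow LlamaIndex Cloud's parsing API as commonly documented, but treat them and the response fields as assumptions to verify against your dashboard's docs.

```ts
// Sketch of the upload -> poll -> extract sequence the workflow automates.
// Endpoint paths and response fields are assumptions; confirm in the
// LlamaIndex Cloud documentation.
import { readFile, writeFile } from "node:fs/promises";

const base = "https://api.cloud.llamaindex.ai/api/parsing";
const headers = { Authorization: `Bearer ${process.env.LLAMA_CLOUD_API_KEY}` };

// 1. Upload the PDF (multipart form, as in the HTTP Request node).
const form = new FormData();
form.append("file", new Blob([await readFile("document.pdf")]), "document.pdf");
const { id } = await (
  await fetch(`${base}/upload`, { method: "POST", headers, body: form })
).json();

// 2. Poll the job status every 15 seconds (within the 10-30s recommendation).
let status = "PENDING";
while (status !== "SUCCESS" && status !== "ERROR") {
  await new Promise((r) => setTimeout(r, 15_000));
  status = (await (await fetch(`${base}/job/${id}`, { headers })).json()).status;
}

// 3. On success, fetch the Markdown result (the content extractor step).
if (status === "SUCCESS") {
  const result = await (
    await fetch(`${base}/job/${id}/result/markdown`, { headers })
  ).json();
  await writeFile("document.md", result.markdown);
}
```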
by Sk developer
# 🎵 Spotify to MP3 → Upload to Google Drive

Automate the process of converting Spotify track URLs into MP3 files, uploading them to Google Drive, and instantly generating shareable links, all triggered by a simple form.

## ✅ What This Workflow Does

1. Accepts a Spotify URL from a form.
2. Sends the URL to the Spotify Downloader MP3 API on RapidAPI.
3. Waits briefly for conversion.
4. Downloads the resulting MP3 file.
5. Uploads it to Google Drive.
6. Sets public sharing permissions for easy access.

## 🧩 Workflow Structure

| Step | Node Name | Description |
|------|-----------|-------------|
| 1 | On form submission | Collects the Spotify track URL via an n8n Form Trigger node. |
| 2 | Spotify Rapid API | Calls the Spotify Downloader MP3 API to generate the MP3 download link. |
| 3 | Wait | Ensures the download link is processed before proceeding. |
| 4 | Downloader | Downloads the MP3 using the generated link. |
| 5 | Upload MP3 to Google Drive | Uploads the file using Google Drive credentials. |
| 6 | Update Permission | Makes the uploaded file publicly accessible via a shareable link. |

## 🔧 Requirements

- n8n instance (self-hosted or cloud)
- RapidAPI account & subscription to the Spotify Downloader MP3 API
- Google Cloud service account with Drive API access
- Active Google Drive (root or a specified folder)

## 🚀 How to Use

1. Set up Google API credentials in n8n.
2. Subscribe to the Spotify Downloader MP3 API on RapidAPI.
3. Insert your RapidAPI key into the HTTP Request node.
4. Deploy the workflow and open the webhook form URL.
5. Submit a Spotify URL; the MP3 gets downloaded, uploaded, and shared.

## 🎯 Use Cases

- 🎧 Music collectors automating downloads
- 🧑‍🏫 Teachers creating music-based lessons
- 🎙 Podcasters pulling music samples
- 📥 Anyone who needs quick Spotify → MP3 conversion

## 🛠 Tech Stack

- **n8n**: Visual workflow automation
- **RapidAPI**: Spotify Downloader MP3 API
- **Google Drive**: File storage and sharing
- **Form Trigger**: Input collection interface
- **HTTP Request node**: Handles API communication

## 🔐 Notes on Security

- Do not expose your x-rapidapi-key publicly.
- Use environment variables or n8n credentials for secure storage.
- Adjust sharing permissions (reader, writer, or restricted) to your needs.

## 🔗 API Reference

🎵 Spotify Downloader MP3 API – skdeveloper

## 📦 Tags

spotify mp3 google-drive automation rapidapi n8n music
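For reference, here is a hypothetical TypeScript sketch (Node 18+) of steps 2-4 (convert, wait, download). The RapidAPI host is implied by the API name, but the path, query parameter, and response field are assumptions; copy the exact values from the API's Endpoints tab on RapidAPI.

```ts
// Hypothetical sketch of the "Spotify Rapid API" -> Wait -> Downloader steps.
// Path, parameter, and response field names are assumptions.
const trackUrl = "https://open.spotify.com/track/YOUR_TRACK_ID";
const res = await fetch(
  `https://spotify-downloader-mp3.p.rapidapi.com/download?url=${encodeURIComponent(trackUrl)}`,
  {
    headers: {
      "x-rapidapi-host": "spotify-downloader-mp3.p.rapidapi.com", // assumed host
      "x-rapidapi-key": process.env.RAPIDAPI_KEY ?? "",
    },
  }
);
const { downloadUrl } = await res.json(); // response field name assumed

// The Wait node gives the conversion time to finish before downloading.
await new Promise((r) => setTimeout(r, 5_000));
const mp3 = Buffer.from(await (await fetch(downloadUrl)).arrayBuffer());
await import("node:fs/promises").then((fs) => fs.writeFile("track.mp3", mp3));
```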
by WeblineIndia
This workflow sends a "Join in 10" Slack ping to each interviewer shortly before their interview starts. It checks the Interviews Google Calendar every minute, finds interviews starting in the next LEAD_MINUTES (default 10), and sends a Slack DM with the candidate name, role, local start time, meeting link, and any CV: / Notes: links present in the event description. If the Slack user can't be found by email, it posts to a fallback channel (default #recruiting-alerts) with an @-style email mention. A Data Store prevents duplicate pings for the same event + attendee.

## Who's It For

- Interviewers who prefer a timely Slack nudge instead of calendar alerts.
- Recruiting coordinators who want consistent reminders without manual follow-ups.
- Teams that include links directly in the calendar event description.

## How It Works

1. **Cron (every minute)** polls near-term events so pings arrive about 10 minutes before start.
2. **Google Calendar (Interviews)** fetches upcoming events.
3. **Prepare pings** filters interviews starting in ≤ LEAD_MINUTES, creates one item per internal attendee (company domain), and extracts meeting/CV/Notes links.
4. **Data Store** checks a ledger to avoid re-notifying the same event + attendee.
5. **Slack** looks up the user by email and sends a DM with Block Kit buttons; otherwise it posts to the fallback channel.
6. **Data Store** records that the ping was sent.

Attendees marked declined are skipped; accepted, tentative, and needsAction are included.

## How To Set Up

1. Ensure interviews are on the Interviews Google Calendar and that interviewers are added as attendees.
2. In each event's description, optionally add lines like CV: https://... and Notes: https://....
3. Import the workflow and add credentials:
   - Google Calendar (OAuth)
   - Slack OAuth2 with users:read.email, chat:write, and im:write
4. Open Set: Config and confirm:
   - CALENDAR_NAME = Interviews
   - COMPANY_DOMAIN = weblineindia.com
   - TIMEZONE = Asia/Kolkata
   - LEAD_MINUTES = 10
   - FALLBACK_CHANNEL = #recruiting-alerts
5. Activate the workflow. It will begin checking every minute.

## Requirements

- Google Workspace calendar access for Interviews.
- Slack workspace + an app with the scopes users:read.email, chat:write, im:write.
- n8n (cloud or self-hosted) with access to both services.

## How to Customize the Workflow

- **Lead time:** Change LEAD_MINUTES in Set: Config (e.g., 5, 15).
- **Audience:** Modify attendee filters to include/exclude tentative or needsAction.
- **Message format:** Tweak the Block Kit text/buttons (e.g., hide the CV/Notes buttons).
- **Fallback policy:** Switch the fallback from a channel post to "skip and log" if needed.
- **Time windows:** Add logic to silence pings at night or outside business hours.
- **Calendar name:** Update CALENDAR_NAME in Set: Config if you use a different calendar.

## Add-Ons to Level Up the Workflow

- **Conflict detector:** Warn if an interviewer is double-booked in the next hour.
- **Escalation:** If no DM can be sent (no Slack user), also notify a coordinator channel.
- **Logging:** Append each ping to Google Sheets/Airtable for audit.
- **Weekday rules:** Auto-mute on specific days or holidays via a calendar/lookup table.
- **Follow-up:** Send a post-interview Slack message with the feedback form link.

## Common Troubleshooting

- **No pings:** Ensure events actually start within the next LEAD_MINUTES and that attendees include internal emails (@weblineindia.com).
- **Wrong recipients:** Verify that interviewer emails on the event match Slack emails; otherwise the message goes to the fallback channel.
- **Duplicate pings:** Confirm the Data Store is configured and the workflow isn't duplicated.
- **Missing meeting link:** Add a proper meeting URL to the event description, or rely on the Google Meet/Zoom links detected in the event.
- **Time mismatch:** Make sure TIMEZONE is Asia/Kolkata (or your local TZ) and the calendar times are correct.

## Need Help?

If you'd like a hand adjusting filters, message formatting, or permissions, feel free to reach out; we'll be happy to help you get this running smoothly.
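To make the Prepare pings step concrete, here is a small TypeScript sketch of the link extraction and dedup-key logic it describes. The function names, interfaces, and field names are illustrative; the actual node's code may differ.

```ts
// Sketch of the "Prepare pings" extraction logic: pull CV:/Notes: links from
// the event description and build the dedup key used by the Data Store.
interface Attendee { email: string; responseStatus: string }
interface CalendarEvent { id: string; description?: string; attendees?: Attendee[] }

function extractLink(description: string, label: string): string | undefined {
  // Matches lines like "CV: https://example.com/cv.pdf"
  const match = description.match(new RegExp(`^${label}:\\s*(https?://\\S+)`, "mi"));
  return match?.[1];
}

function dedupKey(event: CalendarEvent, attendee: Attendee): string {
  // One ledger entry per event + attendee prevents duplicate pings.
  return `${event.id}:${attendee.email.toLowerCase()}`;
}

// Example usage with a hypothetical event:
const event: CalendarEvent = {
  id: "evt_123",
  description: "CV: https://example.com/cv.pdf\nNotes: https://example.com/notes",
  attendees: [{ email: "dev@weblineindia.com", responseStatus: "accepted" }],
};
// Keep internal attendees who haven't declined (accepted/tentative/needsAction).
const internal = (event.attendees ?? []).filter(
  (a) => a.email.endsWith("@weblineindia.com") && a.responseStatus !== "declined"
);
console.log(
  extractLink(event.description ?? "", "CV"),
  internal.map((a) => dedupKey(event, a))
);
```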
by Yasser Sami
# Olostep Amazon Products Scraper

This n8n template automates Amazon product scraping using the Olostep API. Simply enter a search query, and the workflow scrapes multiple Amazon search pages to extract product titles and URLs. Results are cleaned, normalized, and saved into a Google Sheet or Data Table.

## Who's it for

- E-commerce analysts researching competitors and pricing
- Product sourcing teams
- Dropshippers and Amazon sellers
- Automation builders who want quick product lists without manual scraping
- Growth hackers collecting product data at scale

## How it works / What it does

1. **Form Trigger** – The user enters a search query (e.g., "wireless bluetooth headphones"), which is used to build the Amazon search URL.
2. **Pagination Setup** – A list of page numbers (1-10) is generated automatically; each number loads the corresponding Amazon search results page.
3. **Scrape Amazon with Olostep** – For each page, Olostep scrapes the Amazon search results. Olostep's LLM extraction returns:
   - title – product title
   - url – product link
4. **Parse & Split Results** – The JSON output is decoded and turned into individual product items.
5. **URL Normalization** – If a product URL is relative, it is automatically converted into a full Amazon URL.
6. **Conditional Check (IF node)** – Ensures only valid product URLs are stored, avoiding Amazon navigation links and invalid items.
7. **Insert into Sheet / Data Table** – Each valid product is saved with its title and url.
8. **Automatic Looping & Rate Management** – A wait step ensures API rate limits are respected while scraping multiple pages.

This workflow gives you a complete, reliable Amazon scraper with no browser automation and no manual copy/paste; everything runs through the Olostep API and n8n.

## How to set up

1. Import this template into your n8n account.
2. Add your Olostep API key.
3. Connect your Google Sheets or Data Table.
4. Deploy the form and start scraping with any Amazon search phrase.

## Requirements

- Olostep API key
- Google Sheets or Data Table
- n8n cloud or self-hosted instance

## How to customize the workflow

- Add more product fields (price, rating, number of reviews, seller name, etc.).
- Extend the pagination range (1-20 or more pages).
- Add filtering logic (e.g., ignore sponsored results).
- Send scraped results to Notion, Airtable, or a CRM.
- Trigger via a Telegram bot instead of a form.

👉 This workflow is perfect for e-commerce research, competitive analysis, or building Amazon product datasets with minimal effort.
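To show roughly what each per-page scrape looks like, here is a TypeScript sketch (Node 18+) of one Olostep call plus the normalization and IF-node filtering described above. The endpoint and body fields reflect Olostep's public API but should be treated as assumptions, and the response field holding the extracted products is a guess; check the template's HTTP Request node for the exact shape.

```ts
// Sketch of one "scrape page N" call the workflow issues via Olostep.
// Endpoint, body fields, and response shape are assumptions.
const page = 1;
const query = encodeURIComponent("wireless bluetooth headphones");
const res = await fetch("https://api.olostep.com/v1/scrapes", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OLOSTEP_API_KEY}`,
  },
  body: JSON.stringify({
    url_to_scrape: `https://www.amazon.com/s?k=${query}&page=${page}`,
    formats: ["markdown"], // the template instead uses LLM extraction for {title, url}
  }),
});
const result = await res.json();

// Mirror the normalization + IF check: make relative URLs absolute and
// keep only items that look like real product links.
type Product = { title: string; url: string };
const products: Product[] = (result.products ?? []) // field name assumed
  .map((p: Product) => ({
    ...p,
    url: p.url.startsWith("http") ? p.url : `https://www.amazon.com${p.url}`,
  }))
  .filter((p: Product) => p.url.includes("/dp/"));
console.log(products);
```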
by Edoardo Guzzi
# Auto-update n8n instance with Coolify

## Who's it for

This workflow is designed for self-hosted n8n administrators who want to keep their instance automatically updated to the latest stable release. It removes the need for manual version checks and ensures deployments are always up to date.

## What it does

The workflow checks your current n8n version against the latest GitHub release. If a mismatch is detected, it triggers a Coolify deployment to update your instance. If both versions match, the workflow ends safely without action.

## How it works

1. **Trigger:** Start manually or on a schedule.
2. **HTTP Request (n8n settings):** Fetches your current version (versionCli).
3. **HTTP Request (GitHub):** Fetches the latest n8n release (name).
4. **Merge (SQL):** Keeps only the two fields needed.
5. **Set (Normalize):** Converts the values into comparable variables.
6. **IF Check:** Compares the current vs. latest version. If different, deploy the update; if the same, stop with no operation.
7. **HTTP Request (Coolify):** Triggers a forced redeploy via the API.

## How to set up

1. Replace https://yourn8ndomain/rest/settings with your own n8n domain.
2. Replace the Coolify API URL with your Coolify domain + app UUID.
3. Add an HTTP Bearer credential containing your Coolify API token.
4. Adjust the schedule interval (e.g., every 6 hours).

## Requirements

- Self-hosted n8n instance with the /rest/settings endpoint accessible.
- Coolify (or a similar service) managing your n8n deployment.
- Valid API token configured as a Bearer credential in n8n.

## How to customize

- Change the schedule frequency depending on how often you want checks.
- Modify the IF condition if you want stricter or looser version matching (e.g., ignore patch versions).
- Replace the Coolify API call with another service (like Docker, Portainer, or Kubernetes) if you use a different deployment method.
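The whole check-and-deploy logic also fits in a few lines outside n8n. This TypeScript sketch (Node 18+) mirrors the workflow's nodes; the /rest/settings response shape, the GitHub release name format, and the Coolify /api/v1/deploy query parameters are assumptions to verify against your n8n and Coolify versions.

```ts
// Sketch of the version check and forced redeploy the workflow performs.
// Field paths and the Coolify deploy parameters are assumptions.
const current = (
  await (await fetch("https://yourn8ndomain/rest/settings")).json()
).data.versionCli as string; // field path may differ per n8n version

const latest = (
  (await (
    await fetch("https://api.github.com/repos/n8n-io/n8n/releases/latest")
  ).json()).name as string
).replace(/^n8n@/, ""); // release names are often "n8n@x.y.z"; format assumed

if (current !== latest) {
  // Forced redeploy via Coolify's REST API (endpoint/params assumed).
  await fetch(
    "https://your-coolify-domain/api/v1/deploy?uuid=YOUR_APP_UUID&force=true",
    { headers: { Authorization: `Bearer ${process.env.COOLIFY_TOKEN}` } }
  );
  console.log(`Updating ${current} -> ${latest}`);
} else {
  console.log(`Already on the latest version (${current})`);
}
```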