by Yasser Sami
Zillow Property Scraper Using Olostep API

This n8n template automates Zillow property data collection by scraping Zillow search results with the Olostep API. It extracts each property's price, listing link, and location, removes duplicates, and stores everything in a Google Sheet or Data Table.

Who's it for
- Real estate analysts and investors researching property markets.
- Lead generators needing structured Zillow data.
- Freelancers and automation builders collecting housing listings.
- Anyone needing fast, clean Zillow data without manual scraping.

How it works / What it does
1. Form Trigger – the user enters a Zillow search URL, which becomes the base Zillow search URL.
2. Pagination Logic – a list of page numbers (1–7) is generated; each number is used to load the next Zillow search page.
3. Scrape Zillow Pages with Olostep – for each page, the Olostep API scrapes the Zillow results. Olostep's LLM extraction schema extracts: price, url (link to the Zillow listing), and location.
4. Parse & Split Results – the returned JSON is cleaned and converted into individual listing items.
5. Remove Duplicates – ensures each Zillow listing appears only once.
6. Insert into Google Sheet / Data Table – final cleaned listings are inserted row by row, ready for filtering, exporting, or further analysis.

This workflow gives you a fast, scalable property scraper using Zillow + Olostep — no browser automation, no manual copy/paste.

How to set up
1. Import the template into n8n.
2. Add your Olostep API key.
3. Connect your Google Sheet or n8n Data Table.
4. Deploy the form and start scraping by entering your Zillow search URL.

Requirements
- Olostep API key
- Google Sheets account or Data Table
- n8n cloud or self-hosted instance

How to customize the workflow
- Add more pages to the pagination array (e.g., 1–20).
- Expand the LLM extraction schema to include: number of bedrooms, number of bathrooms, square footage, property type.
- Trigger via Telegram or API instead of a form.
- Send results to Airtable or Notion instead of Google Sheets.
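The pagination logic can be sketched in plain JS (in n8n this would live in a Code node). The `p` query parameter is an assumption for illustration, not Zillow's documented pagination scheme:

```javascript
// Build one search URL per page from the base URL submitted in the form.
// buildPageUrls is an illustrative helper name; in the workflow each URL
// becomes one item passed to the Olostep scraping node.
function buildPageUrls(baseUrl, maxPage = 7) {
  const sep = baseUrl.includes('?') ? '&' : '?'; // append correctly either way
  const urls = [];
  for (let page = 1; page <= maxPage; page++) {
    urls.push(`${baseUrl}${sep}p=${page}`);
  }
  return urls;
}
```

Raising `maxPage` to 20 is the "add more pages" customization mentioned above.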
👉 This template gives you an automated Zillow scraper powered by AI extraction — perfect for real estate lead gen or market research.
by Adam ABDELMOUMNI
✨What it does

Useful for scraping and API-only data collection. This workflow lets you bypass the IP-address rate limits of some publicly available APIs by using a different "user-agent"/"IP address" pair for each call to the targeted API. You can therefore place this workflow before any HTTP Request node.

1) Generates over 9,000 different user-agents that can be used randomly (or not) in each of your API calls. Examples of user-agents it can provide for Chrome, Firefox, and Edge:
- "Mozilla/5.0 (Windows NT 11.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.6998.166 Safari/537.36"
- "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:146.0) Gecko/20100101 Firefox/146.0"
- "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML like Gecko) Chrome/46.0.2486.0 Safari/537.36 Edge/13.10586"

It simply sends an HTTP GET request to https://www.useragentstring.com/pages/Browserlist/ and collects all user-agent values available for over 100 web browsers (Chrome, Edge, Safari, Firefox, etc.).

2) Generates a dynamic IP address to be used as a proxy for each API call. It uses Decodo's "residential" "rotating session" type: a different residential IP address is used for each API call. After running the workflow you'll find the IP address/user-agent used for each call in the node IP address and user-agent used.

🛠️How to set it up?

If you're using a Decodo proxy server (or any other proxy service provider), use the Residential proxy with "session type" set to "rotating" if you want one IP address per API call.

1) The proxy connection details. Credentials usually work like this with any proxy service provider: http://username:password@gate.decodo.com:PORT. You just need to configure these credentials in the node SET your proxy connection details here:
- proxy_username
- proxy_password
- proxy_port

2) Number of user-agents needed. Configure the number of different user-agents you want to use in the node Take X random user-agents.

3) Call your targeted API. Configure your HTTP Request node for the API you want to call with these additional details in the node Targeted API:
- add a header named user-agent with value {{ $json["clean_user-agent"] }}
- add the "proxy" option with value http://{{ $json.proxy_username }}:{{ $json.proxy_password }}@gate.decodo.com:{{ $json.proxy_port }}

⚠️ Please note that some APIs may reject connections from a proxy server.
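The two configuration steps above can be sketched in plain JS (in n8n, the first would be the Take X random user-agents Code node and the second the proxy expression; helper names are illustrative):

```javascript
// Pick `count` random user-agents from the scraped list using a
// Fisher–Yates shuffle of a copy, so the source list is untouched.
function pickRandomUserAgents(allAgents, count) {
  const copy = [...allAgents];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, count);
}

// Assemble the proxy URL from the credentials set in the SET node,
// matching the http://username:password@gate.decodo.com:PORT pattern above.
function buildProxyUrl(username, password, port) {
  return `http://${username}:${password}@gate.decodo.com:${port}`;
}
```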
by Felix
What This Workflow Does

Upload an invoice (PDF, PNG, or JPEG) via a hosted web form. The file is sent to easybits Extractor, which extracts key invoice data (supplier, amount, date, etc.). Based on the amount, an approval tier is assigned. The invoice details are then posted to Slack with interactive Approve / Reject / Flag buttons.

How It Works
1. Form Upload – a user uploads an invoice through the n8n web form.
2. Extraction via easybits – the data URI is POSTed to the easybits Extractor, which returns structured invoice data.
3. Field Mapping – extracted fields are mapped and an approval tier is calculated based on the amount.
4. Slack Notification – a message is posted to Slack with invoice details and interactive buttons.

Approval Tiers
- 🟢 Standard: < €1,000
- 🟡 Medium: €1,000 – €5,000
- 🔴 High: > €5,000

Setup Guide

1. Create Your easybits Extractor Pipeline
- Go to extractor.easybits.tech and create a new pipeline.
- Add the following fields to the mapping:
  - vendor_name – the supplier/company name on the invoice
  - invoice_number – the invoice reference number
  - invoice_date – the date on the invoice
  - total_amount – the total amount due (number only)
  - customer_name – the recipient/customer name
- Copy your Pipeline ID and API Key.

2. Connect the Nodes in n8n
- Add the easybits Extractor node from the n8n community nodes.
- Enter your Pipeline ID and API Key as credentials.
- Create a Slack API credential using your Slack Bot Token and assign it to the Slack node.
- Update the Slack channel ID in the Send to Slack for Approval node to your target channel.

3. Set Up the Slack App
- Go to api.slack.com/apps and create a new app.
- Add Bot Token Scopes: chat:write, chat:write.public.
- Install the app to your workspace.
- Copy the Bot User OAuth Token (starts with xoxb-).
- Enable Interactivity and set the Request URL to your approval handler webhook.

4. Activate & Test
- Click Active in the top-right corner of n8n.
- Open the form URL and upload a test invoice.
- Check Slack – you should see the approval message with buttons.
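The tier calculation from the Field Mapping step can be sketched in plain JS (in n8n this would be a small Code node or Set-node expression; the function name is illustrative):

```javascript
// Map the extracted total_amount (in euros) to the approval tier
// described above: Standard below €1,000, Medium up to €5,000, High above.
function approvalTier(totalAmount) {
  if (totalAmount < 1000) return 'Standard'; // 🟢 under €1,000
  if (totalAmount <= 5000) return 'Medium';  // 🟡 €1,000 – €5,000
  return 'High';                             // 🔴 over €5,000
}
```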
by Cheng Siong Chin
Compare AI Models with Nvidia API: Qwen, DeepSeek, Seed-OSS & Nemotron

Overview
Queries four AI models simultaneously via Nvidia's API in 2–3 seconds—4x faster than sequential processing. Perfect for ensemble intelligence, model comparison, or redundancy.

How It Works
1. Webhook Trigger receives queries.
2. AI Router distributes them to four parallel branches: Qwen2, SyncGenInstruct, DeepSeek-v3.1, and Nvidia Nemotron.
3. Merge Node aggregates responses (continues with partial results on timeout).
4. Format Response structures the output.
5. Webhook Response returns JSON with all model outputs.

Prerequisites
- Nvidia API key from build.nvidia.com (free tier available)
- n8n v1.0.0+ with HTTP access
- Model access enabled in the Nvidia dashboard

Setup
1. Import the workflow JSON.
2. Configure the HTTP nodes: Authentication → Header Auth → Authorization: Bearer YOUR_API_KEY.
3. Activate the workflow and test.

Customization
Adjust temperature/max_tokens in the HTTP nodes, add or remove models by duplicating nodes, change the primary response selection in the Format node, or add Redis caching for frequent queries.

Use Cases
Multi-model chatbots, A/B testing, code review, research assistance, and production systems with AI fallback.
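The Merge/Format behavior — aggregate whatever arrived and tolerate timed-out branches — can be sketched in plain JS (names and response shape are illustrative, not the workflow's exact node output):

```javascript
// Combine per-model results into one response object. A null result means
// that branch timed out; the output flags partial results instead of failing.
function formatResponses(branchResults) {
  const models = Object.keys(branchResults);
  const answered = models.filter((m) => branchResults[m] != null);
  return {
    responses: Object.fromEntries(answered.map((m) => [m, branchResults[m]])),
    answered: answered.length,
    total: models.length,
    partial: answered.length < models.length, // true if any branch timed out
  };
}
```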
by Tran Trung Nghia
Veo 3 Video Generator via VietVid.com API (n8n)

Overview
This workflow leverages the VietVid.com Veo3 model to generate AI videos from simple text descriptions or optional images. Users interact via a form interface, inputting a prompt (e.g., a scene description) and choosing an aspect ratio and model; the system then automatically submits the request to the VietVid API, monitors the generation status in real time, and retrieves the final video output. It's ideal for content creators, marketers, and developers exploring text-to-video AI creation, supporting intelligent video generation with minimal setup.

Prerequisites
- A VietVid.com account and API key: register at VietVid.com to obtain your free or paid API key.
- An active n8n instance (cloud or self-hosted) with HTTP Request, Wait, and form submission capabilities.
- Basic knowledge of AI prompts for video generation to achieve optimal results.

Setup Instructions

1. Obtain API Key
Register at VietVid.com and generate your API key. Store it securely—do not share it publicly.

2. Configure the Form
In the Form Trigger node, ensure the following fields are available:
- text_prompt — video description (e.g., "A serene mountain landscape at sunset with birds flying")
- ImageURL [optional] — optional image input for image-to-video
- api_Token — your VietVid API key
- aspect_Ratio [16:9, 9:16] — dropdown to select the ratio
- model — choose between veo3 and veo3_fast

3. Test the Workflow
- Click Execute Workflow in n8n.
- Access the generated form URL.
- Submit your prompt, API key, and options.
- The workflow will poll the VietVid API every 10 seconds until the video is ready.

4. Handle Outputs
The final Set node formats and displays the video links:
- 720p_link — always available when ready.
- 1080p_link — available only for the 16:9 aspect ratio.

Customization Tips
- Enhance prompts: add details like style (realistic, cinematic, animated), duration, actions, and camera/lighting for better results.
- Stability: fix the seed value (e.g., 50000) for more consistent characters.
- Webhook Response: add a Webhook Response node to return a clean JSON payload for frontend integrations.
- Adjust polling delay: modify the Wait node (8–15s) if needed to balance speed vs. API calls.
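The poll-until-ready loop from step 3 can be sketched in plain JS (in n8n this is a Wait node plus an If node looping back; the `status` field and 'completed' value are assumptions based on the description, not the documented VietVid response schema):

```javascript
// Repeatedly call a status-check function until generation completes, with a
// configurable interval (10s in the workflow) and an attempt cap as a safety net.
async function pollUntilReady(checkStatus, intervalMs = 10000, maxAttempts = 60) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await checkStatus(); // e.g. an HTTP GET to the status endpoint
    if (result.status === 'completed') return result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Video generation timed out');
}
```

Lowering `intervalMs` trades API-call volume for responsiveness, which is the "Adjust polling delay" tip above.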
by Viktor Klepikovskyi
Index your site using IndexNow and XML sitemap

Stop waiting for search engines to discover your content updates. This workflow automates the process of notifying search engines (Bing, Yandex, etc.) about new or updated pages using the IndexNow protocol. By parsing your XML sitemap and filtering for pages modified within a specific timeframe (e.g., the last week), it ensures your crawl budget is used efficiently and your latest content appears in search results almost instantly.

How to configure:
1. Host your key: generate an IndexNow key and upload the .txt file to your website's root directory.
2. Set up the Configuration node:
   - Enter your sitemap_url (e.g., https://yoursite.com/sitemap.xml).
   - Enter your indexnow_key.
   - Set the modified_after variable to define your lookback window (e.g., -7d or a specific ISO date).
3. Schedule: adjust the Schedule Trigger to your preferred frequency (default is daily).
4. Activate: once configured, turn the workflow on to start proactive indexing.

For a deep dive into how this workflow works and why it's a game-changer for your SEO strategy, check out our full guide on n8nplaybook.com. Read the full Playbook Article here
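The sitemap-filtering step can be sketched in plain JS (in n8n this would be a Code node after the sitemap is parsed; the entry shape and helper name are illustrative):

```javascript
// Keep only the sitemap URLs whose <lastmod> date falls inside the
// lookback window, so only recently changed pages are submitted to IndexNow.
function filterRecentUrls(entries, modifiedAfter) {
  // entries: [{ loc: 'https://...', lastmod: '2024-05-01' }, ...]
  const cutoff = new Date(modifiedAfter).getTime();
  return entries
    .filter((e) => e.lastmod && new Date(e.lastmod).getTime() >= cutoff)
    .map((e) => e.loc);
}
```

Entries without a `lastmod` are skipped here; you could instead choose to always submit them, depending on how your sitemap is generated.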
by Marco Cassar
Who it's for
- Anyone who wants a simple, secure way to call a Google Cloud Run endpoint from n8n—without exposing it publicly.
- People who want a cheap/free-tier way to run custom API logic without hosting n8n or spinning up servers. Example: you've got scraping code that needs specific system/Python libs—build it into a Dockerfile on Cloud Run, then call it as a secure endpoint from n8n.

How it works
This is a conjunctive workflow: the main workflow calls Service Auth (sub-workflow) to get a Google ID token, merges that auth with your context, then calls your Cloud Run URL with Authorization: Bearer <id_token>. Works great for single calls or looping over items.

How to set up
General instructions below—see my detailed guide for more info: Build a Secure Google Cloud Run API, Then Call It from n8n (Free Tier)

Setup:
1. Create a Cloud Run service and enable Require authentication (Cloud IAM).
2. Create a Google Service Account and grant Cloud Run Invoker on that service.
3. In n8n, import the workflows and update the Vars node (service_url, optional service_path).
4. Create a JWT (PEM) credential from your service account key, then run.

Make sure to read the sticky notes in the workflows—they contain helpful pointers and optional configurations.

Requirements
- Cloud Run service URL (auth required)
- Google Service Account with Cloud Run Invoker
- Private key JSON fields downloaded from the Service Account (needed to generate JWT credentials)

How to customize
Change the HTTP method/path/body in Cloud Run Request, or drop the Service Auth (sub-workflow) into other workflows to reuse the same auth pattern.

More details
Full write-up (minimal + modular flows), screenshots, and more: Build a Secure Google Cloud Run API, Then Call It from n8n (Free Tier) — by Marco Cassar
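How the Cloud Run Request is assembled can be sketched in plain JS (in n8n this is the HTTP Request node fed by the Vars node and Service Auth; `buildCloudRunRequest` is an illustrative helper, and the URL/token values are placeholders):

```javascript
// Combine service_url, service_path, and the ID token from Service Auth
// into the request the HTTP Request node would send.
function buildCloudRunRequest(serviceUrl, servicePath, idToken, body) {
  return {
    url: new URL(servicePath || '/', serviceUrl).toString(),
    method: 'POST',
    headers: {
      // Cloud Run's IAM check expects a Google ID token, not an access token.
      Authorization: `Bearer ${idToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  };
}
```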
by Tran Trung Nghia
Cheap Nano Banana API - AI Image Generator with BananaAPI.com

Overview
This workflow integrates BananaAPI.com with the Nano Banana image engine to generate or edit AI images from text prompts and optional reference images. Users simply fill out a form with their prompt and preferences; the workflow submits the request to BananaAPI, polls the status until it is complete, and then returns the final image link.

Why use it?
- Super affordable: only $0.025 per image
- Pay-as-you-go pricing — no monthly subscription
- Credits never expire — use anytime, no pressure
Perfect for creators, marketers, and developers looking for a cost-effective AI image generator inside n8n.

Prerequisites
- A BananaAPI.com account + API key (Bearer token). Sign up at BananaAPI.com.
- An n8n instance (Cloud or self-hosted).
- Basic knowledge of crafting AI prompts for better-quality results.

⚠️ Important: Never expose your API key in public workflows. Use n8n Credentials for production setups.

Setup Instructions

1. Obtain API Key
Create an account at BananaAPI.com, generate your API key, and keep it safe.

2. Configure the Form
The Form Trigger collects the following fields:
- api_token (required) — Banana API key
- prompt (required) — image description (e.g., "a neon cyberpunk cat, detailed, 4k")
- Output Format [optional] — choose PNG or JPEG
- Image Size [optional] — 16:9, 9:16, 1:1, 3:4, 4:3
- image_url_1 ... image_url_5 [optional] — reference images for editing/transform

3. Workflow Execution
- The user fills out the form and submits it.
- The workflow sends a POST request to https://bananaapi.com/api/n8n/generate/.
- BananaAPI forwards the job to Nano Banana.
- The workflow waits 5s, then polls the status via image-status/{taskId}.
- If status != completed, it loops until ready.
- Once completed, the workflow returns the final image URL.

4. Outputs
The workflow returns:
- image_url — the generated image link
- task_id — task reference ID
- status — job status (completed/pending)

💡 Tip: Add a Webhook Response node to return clean JSON for frontend apps.

Customization Tips
- Enhance prompts with details like style (photorealistic, cartoon, cyberpunk), lighting, or action for better results.
- Use image_url_1 with a strong prompt to create image-editing flows.
- Adjust the wait time (5s → 8–10s) to optimize polling frequency.
- Add validation to ensure required fields are always filled in.

API Reference
- POST https://bananaapi.com/api/n8n/generate/
- GET https://bananaapi.com/api/n8n/image-status/{taskId}
- Docs: BananaAPI Docs
✅ Always include Authorization: Bearer <token> in the headers.

Pricing Advantages
- $0.025 per image — cheaper than most alternatives
- Pay-as-you-go — no monthly subscription required
- Credits never expire — full flexibility to use anytime
This makes BananaAPI + Nano Banana one of the most budget-friendly AI image solutions for automation workflows.

Troubleshooting
- 401/403 Unauthorized → check the Authorization header (Bearer token).
- Invalid JSON → ensure the POST body is valid JSON (double quotes, no trailing commas).
- No imageUrl returned → task still pending; wait longer or verify the taskId.
- Slow performance → increase the wait interval (8–10s).

Security Best Practices
- Do not hardcode API tokens in public workflows.
- Use n8n Credentials for storing tokens securely.
- Hide sensitive fields in forms or use Webhooks for controlled access.
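Assembling the generate-request body from the form fields can be sketched in plain JS (in n8n this would be a Code node before the HTTP Request node; the JSON key names are assumptions for illustration — check the BananaAPI docs for the exact schema):

```javascript
// Build the POST body for /api/n8n/generate/ from the submitted form,
// including only the optional fields the user actually filled in.
function buildGenerateBody(form) {
  const body = { prompt: form.prompt };
  if (form.outputFormat) body.output_format = form.outputFormat; // PNG or JPEG
  if (form.imageSize) body.image_size = form.imageSize;          // e.g. "16:9"
  // Collect up to five optional reference images, skipping empty fields.
  const refs = [1, 2, 3, 4, 5]
    .map((i) => form[`image_url_${i}`])
    .filter(Boolean);
  if (refs.length > 0) body.image_urls = refs;
  return body;
}
```

The same idea covers the "add validation" tip: reject the submission before the POST if `prompt` or `api_token` is missing.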
by Tobias Mende
This n8n template automates daily backups of workflows and credentials to S3-compatible storage with automatic retention management. It is designed for self-hosted n8n instances requiring disaster-recovery protection. The workflow has three tasks: it backs up all workflows via the n8n API, exports and stores credentials securely, and automatically deletes outdated backups based on configurable retention policies. Perfect for administrators needing automated backup solutions with storage cost management.

Target Audience
- n8n Administrators: managing production n8n instances requiring reliable backup solutions
- DevOps Teams: implementing disaster-recovery strategies for automation infrastructure
- IT Managers: ensuring business continuity and compliance for critical automation workflows
- System Administrators: maintaining secure, automated backup processes for workflow management platforms

How it works
The workflow operates through three synchronized branches that execute automatically on a daily schedule:
- Workflow Backup Process: the schedule trigger initiates daily backups, retrieving all workflows via the n8n API and storing them as timestamped JSON files in your S3 bucket.
- Retention Management: simultaneously, the system lists existing backup files, extracts dates from filenames, applies retention policies to identify outdated backups, and automatically deletes files beyond the configured retention period.
- Credential Backup: in parallel, the workflow exports all n8n credentials to a temporary file, uploads the encrypted credential data to S3 storage, and securely removes temporary files from the local system.

Prerequisites
Before implementing this template, ensure you have the following requirements in place:
- Self-Hosted n8n Instance: this template requires a self-hosted n8n installation with file-system access for credential export. Cloud-hosted n8n instances cannot export credentials due to security restrictions.
- S3-Compatible Storage: set up an S3 bucket (AWS S3, MinIO, DigitalOcean Spaces, or any S3-compatible service) with read/write permissions configured for your backup storage needs.
- Access Credentials: obtain S3 access credentials (Access Key ID and Secret Access Key) with appropriate bucket permissions for file operations (create, delete, list).
- System Permissions: ensure your n8n instance has command-line access and file-system permissions for credential export and temporary-file management.

Setup Instructions

Step 1: S3 Bucket Configuration
Create and configure your S3-compatible storage bucket:
1. Create a new S3 bucket in your preferred region.
2. Configure bucket policies for read/write access.
3. Generate access credentials (Access Key ID and Secret Access Key).
4. Note the bucket name, region, and endpoint URL for configuration.

Step 2: Import and Configure Template
1. Import this workflow template into your n8n instance.
2. Navigate to the Config node (Manual Trigger) to customize settings.
3. Configure the following parameters: bucket name and region, retention period (default: 7 days), backup file naming conventions, and folder structure preferences.

Step 3: Set Up S3 Credentials
Configure S3 credentials in all storage-related nodes.

Step 4: Set Up n8n Credentials
Create an API key for n8n via the settings of your n8n instance, and set this API key in the n8n node configuration. These credentials are necessary to retrieve all workflows.

Step 5: Configure Backup Schedule
Customize the Daily Backup schedule trigger:
- Daily at 2:00 AM: 0 2 * * *
- Daily at midnight: 0 0 * * *
- Twice daily: 0 0,12 * * *
- Custom schedule: modify the cron expression as needed

Step 6: Test and Validate
1. Execute the Config node manually to verify settings.
2. Run the complete workflow to test all three backup branches.
3. Verify files appear in your S3 bucket with correct naming.
4. Confirm retention policies work by checking cleanup operations.
5. Test credential backup and temporary-file removal.

Retention Settings
The retention management system automatically maintains your backup storage:
- Configurable Retention Period: set how many days of backups to retain (default: 31 days). The system automatically calculates cutoff dates and removes older files.
- Date-Based Cleanup: the Extract Date node processes backup filenames to determine file age, while the Keep Outdated Backups filter identifies files beyond the retention period.
- Automatic Deletion: outdated files are automatically removed from S3 storage, preventing unlimited storage growth and managing costs effectively.
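The Extract Date and Keep Outdated Backups steps can be sketched in plain JS (in n8n these are separate nodes; the filename pattern `...-YYYY-MM-DD...` is an assumption, since the naming convention is configurable in the Config node):

```javascript
// Given the filenames listed from S3, return those older than the retention
// window so the delete node can remove them. Files without a recognizable
// date are left alone rather than deleted.
function findOutdatedBackups(filenames, retentionDays, now = new Date()) {
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return filenames.filter((name) => {
    const match = name.match(/(\d{4}-\d{2}-\d{2})/); // extract the date portion
    if (!match) return false;
    return new Date(match[1]).getTime() < cutoff;
  });
}
```

Skipping undated files is a deliberately conservative choice for a backup cleaner: deleting on a failed parse risks destroying backups you meant to keep.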