by AFK Crypto
# Try It Out! 🚀 Reddit Crypto Intelligence & Market Spike Detector

## 🧠 Workflow Description
Reddit Crypto Intelligence & Market Spike Detector is an automated market sentiment and price-monitoring workflow that connects social chatter with real-time crypto price analytics. It continuously scans new posts from r/CryptoCurrency, extracts recently mentioned coins, checks live price movements via CoinGecko, and alerts you on Discord when a significant spike or drop occurs. This automation helps traders, analysts, and communities spot early market trends before they go mainstream, using only free APIs and open data.

## ⚙️ How It Works
1. **Monitor Reddit Activity**
   - Automatically fetches the latest posts from r/CryptoCurrency using Reddit's free RSS feed.
   - Captures trending titles, post timestamps, and mentions of coins or tokens (e.g., $BTC, $ETH, $SOL, $PEPE).
2. **Extract Coin Mentions**
   - A Code node parses the feed with the regex `\$[A-Za-z0-9]{2,10}` to identify any symbols or tickers discussed (see the sketch after this section).
   - Removes duplicates and normalizes all results for accurate data mapping.
3. **Fetch Market Data**
   - Each detected coin symbol is matched against CoinGecko's public API to fetch live market data, including current price, market rank, and 24-hour price change.
   - No API key required; CoinGecko's public endpoints are free.
4. **Detect Market Movement**
   - A second Code node filters the fetched data for price movements greater than ±5% within the last 24 hours.
   - This isolates meaningful market action from routine fluctuations.
5. **Generate and Send Alerts**
   - When a spike or dip is detected, the workflow composes a rich alert message including:
     - 💎 Coin name and symbol
     - 💰 Current price
     - 📈 24h percentage change
     - 🕒 Timestamp of detection
   - The message is sent automatically to your Discord channel via a preconfigured webhook.

## 💬 Example Output
```
🚨 Crypto Reddit Mention & Price Spike Alert! 🚨

💎 ETHEREUM (ETH)
💰 $3,945.23
📈 Change: +6.12%

💎 SOLANA (SOL)
💰 $145.88
📈 Change: +8.47%

🕒 Checked at: 2025-10-31T15:00:00Z
```
If no coins cross the ±5% threshold: "No price spikes detected in the latest Reddit check." 🔔 #MarketIntel #CryptoSentiment #PriceAlert

## 🪄 Key Features
- 🧠 **Social + Market Intelligence** – Combines Reddit sentiment with live market data to detect potential early signals.
- 🔎 **Automated Coin Detection** – Dynamically identifies newly discussed tokens from live posts.
- 📊 **Smart Spike Filtering** – Highlights only meaningful movements above configurable thresholds.
- 💬 **Discord Alerts** – Delivers clear, structured, timestamped alerts to your community automatically.
- ⚙️ **Fully No-Cost Stack** – Uses the free Reddit and CoinGecko APIs with no authentication required.

## 🧩 Use Cases
- **Crypto Traders:** Detect early hype or momentum shifts driven by social chatter.
- **Analysts:** Automate social sentiment tracking tied directly to live market metrics.
- **Community Managers:** Keep members informed about trending coins automatically.
- **Bots & AI Assistants:** Integrate this logic to enhance automated trading signals or alpha alerts.

## 🧰 Required Setup
- **Discord Webhook URL** – For automatic alert posting.
- **(Optional) CoinGecko API endpoint** – No API key required.
- **n8n Instance** – Self-hosted or Cloud; the free tier is sufficient.
- **Workflow Schedule** – Recommended: hourly (Cron node interval = 1 hour).

AFK Crypto – Website: afkcrypto.com
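As a reference for the two Code nodes described above, here is a minimal JavaScript sketch, assuming the RSS items expose a `title` field and the CoinGecko results carry a `price_change_percentage_24h` number (both field names are illustrative and may need adjusting to your feed and API response):

```javascript
// Code node 1: extract ticker mentions from Reddit RSS items.
const tickers = new Set();

for (const item of $input.all()) {
  const title = item.json.title || '';
  // Match cashtags like $BTC or $PEPE (2–10 alphanumeric characters).
  const matches = title.match(/\$[A-Za-z0-9]{2,10}/g) || [];
  for (const m of matches) {
    tickers.add(m.slice(1).toUpperCase()); // strip "$" and normalize case
  }
}

// One output item per unique ticker, ready for the CoinGecko lookup.
return [...tickers].map((symbol) => ({ json: { symbol } }));
```

The second Code node then keeps only meaningful moves:

```javascript
// Code node 2: keep only coins that moved more than ±5% in 24 hours.
const THRESHOLD = 5; // percent; make this configurable if you like

return $input.all().filter((item) => {
  const change = item.json.price_change_percentage_24h;
  return typeof change === 'number' && Math.abs(change) >= THRESHOLD;
});
```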
by Growth AI
# French Public Procurement Tender Monitoring

## Workflow Overview
This n8n workflow automates the monitoring and filtering of French public procurement tenders (BOAMP – Bulletin Officiel des Annonces des Marchés Publics). It retrieves tenders based on your preferences, filters them by market type, and identifies relevant opportunities using keyword matching.

## Who is this for?
- Companies seeking French public procurement opportunities
- Consultants monitoring specific market sectors
- Organizations tracking government contracts in France

## What it does
The workflow operates in two main phases:

**Phase 1: Automated Tender Collection**
- Retrieves all tenders from the BOAMP API based on your configuration
- Filters by market type (Works, Services, Supplies)
- Stores complete tender data in Google Sheets
- Handles pagination automatically for large datasets

**Phase 2: Intelligent Keyword Filtering**
- Downloads and extracts text from tender PDF documents
- Searches for your specified keywords within tender content
- Saves matching tenders to a separate "Target" sheet for easy review
- Tracks processing status to avoid duplicates

## Requirements
- n8n instance (self-hosted or cloud)
- Google account with Google Sheets access
- Google Sheets API credentials configured in n8n

## Setup Instructions

### Step 1: Duplicate the Configuration Spreadsheet
1. Access the template spreadsheet: Configuration Template
2. Click File → Make a copy
3. Save it to your Google Drive
4. Note the URL of your new spreadsheet

### Step 2: Configure Your Preferences
Open your copied spreadsheet and fill in the Config tab:
- **Market Types** – Check the categories you want to monitor:
  - Travaux (Works/Construction)
  - Services
  - Fournitures (Supplies)
- **Search Period** – Enter the number of days to look back (e.g., "30" for the last 30 days)
- **Keywords** – Enter your search terms as a comma-separated list (e.g., "informatique, cloud, cybersécurité")

### Step 3: Import the Workflow
1. Copy the workflow JSON from this template
2. In n8n, click Workflows → Import from File/URL
3. Paste the JSON and import

### Step 4: Update Google Sheets Connections
Replace the URL in every Google Sheets node with your own spreadsheet URL. Nodes to update:
- Get config (2 instances)
- Get keyword
- Get Offset
- Get All
- Append row in sheet
- Update offset
- Reset Offset
- Ok
- Target offre

For each node:
1. Open the node settings
2. Update the Document ID field with your spreadsheet URL
3. Verify the Sheet Name matches your spreadsheet tabs

### Step 5: Configure Schedule Triggers
The workflow has two schedule triggers:
- **Schedule Trigger1 (Phase 1 – Tender Collection)**
  - Default: `0 8 1 * *` (1st day of the month at 8:00 AM)
  - Adjust based on how frequently you want to collect tenders
- **Schedule Trigger (Phase 2 – Keyword Filtering)**
  - Default: `0 10 1 * *` (1st day of the month at 10:00 AM)
  - Should run after Phase 1 completes

To modify: open the Schedule Trigger node, click Cron Expression, and adjust the timing as needed.

### Step 6: Test the Workflow
1. Manually execute Phase 1 by clicking the Schedule Trigger1 node and selecting Execute Node
2. Verify tenders appear in your "All" sheet
3. Execute Phase 2 by triggering the Schedule Trigger node
4. Check the "Target" sheet for matching tenders

## How the Workflow Works

### Phase 1: Tender Collection Process
1. **Configuration Loading** – Reads your preferences from Google Sheets
2. **Offset Management** – Tracks the pagination position for API calls
3. **API Request** – Fetches up to 100 tenders per batch from BOAMP
4. **Market Type Filtering** – Keeps only the selected market categories
5. **Data Storage** – Formats and saves tenders to the "All" sheet
6. **Pagination Loop** – Continues until all tenders are retrieved
7. **Offset Reset** – Prepares for the next execution

### Phase 2: Keyword Matching Process
1. **Keyword Loading** – Retrieves search terms from the configuration
2. **Tender Retrieval** – Gets unprocessed tenders from the "All" sheet
3. **Sequential Processing** – Loops through each tender individually
4. **PDF Extraction** – Downloads and extracts text from tender documents
5. **Keyword Analysis** – Searches for matches with accent/case normalization (see the sketch below)
6. **Status Update** – Marks the tender as processed
7. **Match Evaluation** – Determines whether keywords were found
8. **Target Storage** – Saves relevant tenders with match details

## Customization Options

### Adjust API Parameters
In the HTTP Request node, you can modify:
- `limit`: number of records per batch (default: 100)
- Additional filters in the `where` parameter

### Modify Keyword Matching Logic
Edit the Get query node to adjust:
- Text normalization (accent removal, case sensitivity)
- Match proximity requirements
- Context length around matches

### Change Data Format
Update the Format Results node to modify:
- Date formatting
- PDF URL generation
- Field mappings

## Spreadsheet Structure
Your Google Sheet should contain these tabs:
- **Config** – Your configuration settings
- **Offset** – Pagination tracking (managed automatically)
- **All** – Complete tender database
- **Target** – Filtered tenders matching your keywords

## Troubleshooting

**No tenders appearing in the "All" sheet:**
- Verify your configured period isn't too restrictive
- Check that at least one market type is selected
- Ensure the API is accessible (test the HTTP Request node)

**PDF extraction errors:**
- Some PDFs may be malformed or protected
- Check the URL generation in the Format Results node
- Verify PDF URLs are accessible in a browser

**Duplicate tenders in the Target sheet:**
- Ensure the "Ok" status is being written correctly
- Check that the Filter node excludes processed tenders
- Verify row_number matching in update operations

**Keywords not matching:**
- Keywords are case-insensitive and accent-insensitive
- Verify your keywords are spelled correctly
- Check that the extracted text contains your terms

## Performance Considerations
- Phase 1 processes 100 tenders per iteration with a 10-second wait between batches
- Phase 2 processes tenders sequentially to avoid overloading PDF extraction
- Large datasets (1000+ tenders) may take significant time to process
- Consider running Phase 1 less frequently if tender volume is manageable

## Data Privacy
- All data is stored in your own Google Sheets
- No external databases or third-party storage
- The BOAMP API is publicly accessible (no authentication required)
- Ensure your Google Sheets permissions are properly configured

## Support and Updates
This workflow retrieves data from the public BOAMP API. If the API structure changes, nodes may require updates. Monitor the workflow execution logs for errors and adjust accordingly.
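For reference, a minimal sketch of the accent- and case-insensitive matching described in Phase 2, written as n8n Code-node JavaScript; the `text` and `keywords` field names are illustrative and should be mapped to your actual item structure:

```javascript
// Accent- and case-insensitive keyword matching for one tender.
const fold = (s) =>
  s.normalize('NFD').replace(/[\u0300-\u036f]/g, '').toLowerCase();

const item = $input.first().json;
const text = fold(item.text || '');        // extracted PDF text
const keywords = (item.keywords || '')     // e.g., "informatique, cloud, cybersécurité"
  .split(',')
  .map((k) => k.trim())
  .filter(Boolean);

// Every configured keyword found in the normalized tender text.
const matches = keywords.filter((k) => text.includes(fold(k)));

return [{ json: { ...item, matched: matches.length > 0, matches } }];
```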
by Atta
This workflow automatically turns any YouTube video into a structured blog post with Gemini AI. By sending a simple POST request with a YouTube URL to a webhook, it downloads the video's audio, transcribes the content, and generates a blog-ready article with a title, description, tags, and category. The final result, along with the full transcript and the original video URL, is delivered to your chosen webhook or CMS.

## How it works
The workflow handles the entire process of transforming YouTube videos into complete blog posts using Gemini AI transcription and structured text generation. Once triggered, it:
1. Downloads the video's audio
2. Transcribes the spoken content into text
3. Generates a blog post in the video's original language
4. Creates:
   - A clear and engaging title
   - A short description
   - A suggested category and tags
   - The full transcript of the video
   - The original YouTube video URL

This makes it easy to repurpose video content into publish-ready articles in minutes. The template is ideal for content creators, marketers, educators, and bloggers who want to quickly turn video content into written posts without manual transcription or editing.

## Setup Instructions
1. Install yt-dlp on the machine or server where n8n runs. This is required to download YouTube audio.
2. Get a Google Gemini API key and configure it in your AI nodes.
3. **Webhook input configuration:**
   - Endpoint: the workflow starts with a Webhook Trigger.
   - Method: POST
   - Example request body:
```json
{
  "videoUrl": "https://www.youtube.com/watch?v=lW5xEm7iSXk"
}
```
4. **Configure the output webhook:** add your target endpoint in the last node, where the blog post JSON is sent. This could be your CMS, a Notion database, or another integration.

## Customization Guidance
- **Writing Style:** Update the AI Agent's prompt to adjust tone (e.g., casual, professional, SEO-optimized).
- **Metadata:** Modify how categories and tags are generated to fit your website's taxonomy.
- **Integration:** Swap the final webhook with WordPress, Ghost, Notion, or Slack to fit your publishing workflow.
- **Transcript Handling:** Save the full transcript separately if you also want searchable video archives.
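To test the trigger end to end, you could post to the webhook from any HTTP client; a minimal Node.js sketch (the `/webhook/youtube-to-blog` path is a placeholder for whatever your Webhook Trigger node exposes):

```javascript
// Send a YouTube URL to the workflow's webhook.
const res = await fetch('https://your-n8n-instance/webhook/youtube-to-blog', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    videoUrl: 'https://www.youtube.com/watch?v=lW5xEm7iSXk',
  }),
});

console.log(res.status); // 200 once the workflow accepts the request
```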
by Belen
This n8n template automatically transcribes GoHighLevel (GHL) call recordings and creates an AI-generated summary that is added as a note directly to the related contact in your GHL CRM. It's designed for real estate investors, agencies, and sales teams that handle a large volume of client calls and want to keep detailed, searchable notes without spending hours on manual transcription.

## Who's it for
- Sales and acquisitions teams that want instant call notes in their CRM
- Real estate wholesalers or agencies using GoHighLevel for deal flow
- Support and QA teams that need summarized transcripts for review
- Any business owner who wants to automatically document client conversations

## How it works
1. A HighLevel automation triggers when a call is marked "Completed" and sends a webhook to n8n.
2. The n8n workflow receives this webhook and waits briefly to ensure the call recording is ready.
3. It retrieves the conversation and message IDs from the webhook payload.
4. The call recording is fetched from GHL's API.
5. An AI transcription node converts the audio to text.
6. A summarization node condenses the transcript into bullet points or a concise paragraph.
7. A Code node formats the AI output into proper JSON for GHL's "Create Note" endpoint (see the sketch below).
8. Finally, an HTTP Request node posts the summary to the contact's record in GHL.

## How to set up
1. Add your GoHighLevel OAuth credential and connect your agency account.
2. Add your AI credential (e.g., OpenAI, Anthropic, or Gemini).
3. Replace the sample webhook URL with your n8n endpoint.
4. Test with a recent call and confirm the summary appears in the contact timeline.

## Requirements
- GoHighLevel account with API and OAuth access
- AI service for transcription and summarization (e.g., OpenAI Whisper + GPT)

## Customizing this workflow
You can tailor this automation to your team or workflow:
- Add sentiment analysis or keyword extraction to the summary.
- Change the AI prompt to focus on "action items," "objections," or "next steps."
- Send summaries to Slack, Notion, or Google Sheets for reporting.
- Trigger follow-up tasks automatically in your CRM based on keywords.

## Good to know
- AI transcription and summarization costs vary by provider; check your LLM's pricing.
- GoHighLevel recordings may take up to a minute to become available after a call ends; adjust the delay accordingly.
- For OAuth setup help, refer to GHL's OAuth documentation.

Happy automating! ⚙️
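For reference, a minimal sketch of the Code node in step 7, shaping the AI summary into a JSON body for the downstream HTTP Request node; the exact field names GHL's Create Note endpoint expects depend on your API version, so treat this shape as illustrative:

```javascript
// Format the AI summary for GHL's "Create Note" request.
const { summary, contactId } = $input.first().json; // from earlier nodes

// Trim and cap the note so it stays readable in the contact timeline.
const noteBody = `📞 Call Summary\n\n${(summary || '').trim()}`.slice(0, 5000);

// The HTTP Request node posts this JSON to the Create Note endpoint,
// interpolating contactId into the URL path.
return [{ json: { contactId, body: noteBody } }];
```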
by WeblineIndia
# Fill iOS localization gaps from .strings → Google Sheets and PR with placeholders (GitHub)

This n8n workflow automatically identifies missing translations in .strings files across iOS localizations (e.g., Base.lproj vs fr.lproj) and generates a report in Google Sheets. Optionally, it creates a GitHub PR that inserts placeholder strings ("TODO_TRANSLATE") so builds don't fail. A DRY_RUN mode is supported.

## Who's it for
- iOS teams who want fast feedback on missing translations.
- Localization managers who want a shared sheet to assign work to translators.

## How it works
1. A GitHub Webhook triggers on push or pull request.
2. The iOS repo is scanned for .strings files under Base.lproj or en.lproj and their target-language counterparts.
3. It compares keys and identifies what's missing (a sketch of this diff follows the use-case examples below).
4. A new or existing Google Sheet tab (e.g., fr) is updated with the missing entries.
5. If enabled, it creates a GitHub PR with placeholder values (e.g., "TODO_TRANSLATE").

## How to set up
1. Import the workflow JSON into your n8n instance.
2. Set the Config node values, for example:
```json
{
  "GITHUB_OWNER": "your-github-user-name",
  "GITHUB_REPO": "your-iOS-repo-name",
  "BASE_BRANCH": "develop",
  "SHEET_ID": "<YOUR_GOOGLE_SHEET_ID>",
  "ENABLE_PR": "true",
  "IOS_SOURCE_GLOB": "/Base.lproj/*.strings,/en.lproj/*.strings",
  "IOS_TARGET_GLOB": "*/.lproj/*.strings",
  "PLACEHOLDER_VALUE": "TODO_TRANSLATE",
  "BRANCH_TEMPLATE": "chore/l10n-gap-{{YYYYMMDD}}"
}
```
3. Create the GitHub webhook:
   - URL: https://your-n8n-instance/webhook/l10n-gap-ios
   - Content-Type: application/json
   - Trigger on: Push, Pull Request
4. Connect credentials:
   - GitHub token with repo scope
   - Google Sheets API
   - (Optional) Slack OAuth + SMTP

## Requirements
| Tool | Needed For | Notes |
| --- | --- | --- |
| GitHub repo | Webhook, API for PRs | repo token or App |
| Google Sheets | Sheet output | Needs a valid SHEET_ID, or create one per run |
| Slack (optional) | Notifications | chat:write scope |
| SMTP (optional) | Email fallback | Standard SMTP creds |

## How to customize
- **Multiple locales:** Add comma-separated values to TARGET_LANGS_CSV (e.g., fr,de,es).
- **Globs:** Adjust IOS_SOURCE_GLOB and IOS_TARGET_GLOB to scan only certain modules or file patterns.
- **Ignore rules:** Add IGNORE_KEY_PREFIXES_CSV to skip certain internal/debug strings.
- **Placeholder value:** Change PLACEHOLDER_VALUE to something more visible, like "@@@".
- **Slack/Email:** Set SLACK_CHANNEL and EMAIL_FALLBACK_TO_CSV appropriately.
- **DRY_RUN:** Set to true to skip GitHub PR creation but still update the sheet.

## Add-ons
- **Android support:** Add a second path for strings.xml (values → values-<lang>); same diff → Sheets → placeholder PR.
- **Multiple languages at once:** Expand TARGET_LANGS_CSV and loop tabs + placeholder commits per locale.
- **.stringsdict handling:** Validate plural/format entries and open a precise PR.
- **Translator DMs:** Provide a LANG → Slack handle/email map to DM translators with their specific file/key counts.
- **GitLab/Bitbucket variants:** Replace the GitHub API calls with GitLab/Bitbucket equivalents to open Merge Requests.

## Use Case Examples
- Before a test build, ensure fr has all keys present; placeholders keep the app compiling.
- A weekly run creates a single sheet for translators and a PR with placeholders, avoiding last-minute breakages.
- A new screen adds 12 strings; the bot flags and pre-fills them across locales.
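For reference, a minimal sketch of the key diff at the heart of this workflow, assuming simple single-line `.strings` entries of the form `"key" = "value";` (multi-line values, comments, and `.stringsdict` plurals would need more parsing):

```javascript
// Parse a .strings file into a key -> value map.
// Handles the common single-line `"key" = "value";` format only.
function parseStrings(content) {
  const entries = {};
  const re = /"((?:[^"\\]|\\.)+)"\s*=\s*"((?:[^"\\]|\\.)*)"\s*;/g;
  let m;
  while ((m = re.exec(content)) !== null) {
    entries[m[1]] = m[2];
  }
  return entries;
}

// Keys present in the base locale but missing in the target locale.
function missingKeys(baseContent, targetContent) {
  const base = parseStrings(baseContent);
  const target = parseStrings(targetContent);
  return Object.keys(base).filter((k) => !(k in target));
}

// Example: one key is missing from the French file.
const base = '"welcome_title" = "Welcome";\n"cta_buy" = "Buy now";';
const fr = '"welcome_title" = "Bienvenue";';
console.log(missingKeys(base, fr)); // ["cta_buy"]
```

Each missing key is then written to the locale's sheet tab and, when ENABLE_PR is true, committed with PLACEHOLDER_VALUE as its value.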
## Common troubleshooting
| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No source files found | Glob doesn't match Base.lproj or en.lproj | Adjust IOS_SOURCE_GLOB |
| Target file missing | fr.lproj doesn't exist yet | It will be created in the placeholder PR |
| Parsing skips entries | Non-standard string format in file | Ensure the proper .strings format `"key" = "value";` |
| Sheet not updating | SHEET_ID missing or insufficient permission | Add a valid ID or allow write access |
| PR not created | ENABLE_PR=false or no missing keys | Enable PR and ensure at least one key is missing |
| Slack/Email not received | Missing credentials or config | Configure Slack/SMTP properly and set recipient fields |

## Need Help?
Want to expand this for Android? Loop through 5+ locales at once? Or replace GitHub with GitLab? Contact our n8n team at WeblineIndia with your repo and locale setup, and we'll help tailor it to your translation workflow!
by Anirudh Aeran
This workflow provides a complete backend solution for building your own WhatsApp marketing dashboard. It enables you to send dynamic, personalized, and rich-media broadcast messages to an entire contact list stored in Google Sheets. The system is built on three core functions: automatically syncing your approved Meta templates, providing an API endpoint for your front end to fetch those templates, and a powerful broadcast engine that merges your contact data with the selected template for mass delivery.

## Who's it for
This template is for marketers, developers, and businesses who want to run sophisticated WhatsApp campaigns without being limited by off-the-shelf tools. It's perfect for anyone who needs to send personalized bulk messages with dynamic content (like unique images or links for each user) and wants to operate from a simple, custom-built web interface.

## How it works
This workflow is composed of three independent, powerful parts:
1. **Automated Template Sync:** A scheduled trigger runs periodically to fetch all of your approved message templates directly from your Meta Business Account. It then clears and updates an n8n Data Table, ensuring your list of available templates stays in sync with Meta.
2. **Front-end API Endpoint:** A dedicated webhook acts as an API for your dashboard. When your front end calls this endpoint, it returns a clean JSON list of all available templates from the n8n Data Table, which you can use to populate a dropdown menu for the user.
3. **Dynamic Broadcast Engine:** The main webhook listens for a request from your front end, which includes the name of the template to send. It then:
   - Looks up the template's structure in the Data Table.
   - Fetches all contacts from your Google Sheet.
   - For each contact, a Code node dynamically constructs a personalized API request (see the sketch after the requirements below). It can merge the contact's name into the body, add a unique user ID to a button's URL, and even pull a specific image URL from your Google Sheet to use as a dynamic header.
   - Sends the fully personalized message to the contact.

## How to set up
1. **Pre-requisite – front end:** This workflow is a backend and is designed to be triggered by a front-end application. You will need a simple UI with a dropdown to select a template and a button to trigger the broadcast.
2. **Meta for Developers:** You need a Meta App with the WhatsApp Business API configured. From your app, you will need your WhatsApp Business Account ID, a Phone Number ID, and a permanent System User Access Token.
3. **n8n Data Table:** Create an n8n Data Table (e.g., named "WhatsApp Templates") with the following columns: template_name, language_code, components_structure, template_id, status, category.
4. **Google Sheet:** Create a Google Sheet to store your contacts. It must have columns like Phone Number, Full Name, and, for dynamic images, Marketing Image URL.
5. **Configure credentials:**
   - Create an HTTP Header Auth credential in n8n for WhatsApp. Use Authorization as the header name and `Bearer YOUR_PERMANENT_TOKEN` as the value.
   - Add your Google Sheets credentials.
6. **Configure nodes:**
   - In both HTTP Request nodes, select your WhatsApp Header Auth credential. Update the URLs with your own Phone Number ID and WABA ID.
   - In the Google Sheets node, select your credential and enter the Sheet ID.
   - In all Data Table nodes, select the Data Table you created.
7. **First run:** Manually execute the "Sync Meta Templates" flow (starting with the Schedule Trigger) once to populate your Data Table with your templates.
8. **Activate:** Activate all parts of the workflow.

## Requirements
- A Meta for Developers account with a configured WhatsApp Business App.
- A permanent System User Access Token for the WhatsApp Business API.
- A Google Sheets account.
- A front-end application/dashboard to trigger the workflow.
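For reference, a minimal sketch of the per-contact payload the broadcast engine's Code node can build, following the WhatsApp Cloud API template-message format; the template name, language code, and sheet column names here are placeholders, and the components array must mirror your approved template's structure:

```javascript
// Build one personalized template message per contact row.
const TEMPLATE_NAME = 'promo_october'; // placeholder; normally taken from the front-end request

return $input.all().map((item) => {
  const c = item.json; // one Google Sheet row
  return {
    json: {
      messaging_product: 'whatsapp',
      to: String(c['Phone Number']).replace(/\D/g, ''), // digits only
      type: 'template',
      template: {
        name: TEMPLATE_NAME,
        language: { code: 'en_US' },
        components: [
          // Dynamic image header pulled from the sheet.
          {
            type: 'header',
            parameters: [
              { type: 'image', image: { link: c['Marketing Image URL'] } },
            ],
          },
          // Merge the contact's name into the body text.
          {
            type: 'body',
            parameters: [{ type: 'text', text: c['Full Name'] }],
          },
          // Append a unique user ID to the first URL button.
          {
            type: 'button',
            sub_type: 'url',
            index: '0',
            parameters: [{ type: 'text', text: String(c['User ID'] || '') }],
          },
        ],
      },
    },
  };
});
```

Each resulting item is then posted by the HTTP Request node to Meta's `/{PHONE_NUMBER_ID}/messages` Graph API endpoint.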
by Piotr Sikora
## Who's it for
This workflow is perfect for content managers, SEO specialists, and website owners who want to easily analyze their WordPress content structure. It automatically fetches posts, categories, and tags from a WordPress site and exports them into a Google Sheet for further review or optimization.

## What it does
This automation connects to the WordPress REST API, collects data about posts, categories, and tags, and maps the category and tag names directly into each post. It then appends all this enriched data to a Google Sheet, providing a quick, clean way to audit your site's content and taxonomy structure.

## How it works
1. **Form trigger:** Start the workflow by submitting a form with your website URL and the number of posts to analyze.
2. **Fetch WordPress data:** The workflow sends three API requests to collect posts, categories, and tags.
3. **Merge data:** It combines all the data into one stream using the Merge node.
4. **Code transformation:** A Code node replaces category and tag IDs with their actual names (see the sketch below).
5. **Google Sheets export:** Posts are appended to a Google Sheet with the columns URL, Title, Categories, and Tags.
6. **Completion form:** Once the list is created, you get a confirmation message and a link to your sheet.

If the WordPress API isn't available, the workflow automatically displays an error message to help you troubleshoot.

## Requirements
- A WordPress site with the REST API enabled (/wp-json/wp/v2/).
- A Google account connected to n8n with access to Google Sheets.
- A Google Sheet containing the columns: URL, Title, Categories, Tags.

## How to set up
1. Import this workflow into n8n.
2. Connect your Google Sheets account under credentials.
3. Make sure your WordPress site's API is publicly accessible.
4. Adjust the post limit (per_page) in the form node if needed.
5. Run the workflow and check your Google Sheet for results.

## How to customize
- Add additional WordPress endpoints (e.g., authors, comments) by duplicating and modifying the HTTP Request nodes.
- Replace Google Sheets with another integration (like Airtable or Notion).
- Extend the Code node to include SEO metadata such as meta descriptions or featured images.
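For reference, a minimal sketch of the Code node's ID-to-name mapping, assuming the merged stream delivers `posts`, `categories`, and `tags` arrays (the field names follow the WordPress REST API; the merge layout is illustrative):

```javascript
// Replace WordPress category/tag IDs with their names.
// Assumes prior nodes supplied posts, categories, and tags, each
// category/tag being an { id, name } object from /wp-json/wp/v2/.
const { posts, categories, tags } = $input.first().json;

const catById = Object.fromEntries(categories.map((c) => [c.id, c.name]));
const tagById = Object.fromEntries(tags.map((t) => [t.id, t.name]));

return posts.map((post) => ({
  json: {
    URL: post.link,
    Title: post.title?.rendered ?? '',
    Categories: (post.categories || []).map((id) => catById[id]).join(', '),
    Tags: (post.tags || []).map((id) => tagById[id]).join(', '),
  },
}));
```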
by WeblineIndia
# Webhook from Payment Provider → Jira Finance Ticket → Slack Invoice Follow-up Automation

This workflow automates failed-subscription-renewal processing by validating webhook data, using AI to analyze urgency and churn risk, creating a Jira Finance task, and notifying the finance team via Slack. If required fields are missing, it sends an error alert for manual review instead.

## ⚡ Quick Implementation Steps (Start Using in 60 Seconds)
1. Import the workflow JSON into n8n.
2. Add Jira & Slack credentials.
3. Configure the webhook URL /payment-failed-renewal in your payment provider.
4. Test with:
```json
{
  "customerId": "C-101",
  "customerEmail": "user@example.com",
  "subscriptionId": "S-500",
  "amount": 39.99
}
```
5. Activate the workflow.

## What It Does
This automation connects your payment system with your financial operations. When a subscription renewal fails, the payment provider sends a webhook. The workflow validates the fields, uses OpenAI to analyze the payment failure reason (determining urgency and churn risk), routes high-value failures to high priority, creates a Jira task with an AI-drafted recovery email, and alerts the finance team on Slack.

If required data is missing, the workflow prevents incomplete Jira tickets by routing the event to an error handler and sending a detailed Slack alert listing all missing fields plus the full payload for manual inspection.

## Who's It For
- Finance & billing departments
- SaaS companies with recurring billing
- Teams using Jira for billing operations
- Slack-based financial support teams
- Companies wanting automated revenue-recovery workflows

## Requirements
- n8n instance
- OpenAI API key (or compatible LLM credential)
- Jira Software account with permissions for the FIN project
- Slack bot token with channel posting rights
- Payment provider that supports POST JSON webhooks
- Webhook configured to: https://YOUR-N8N-URL/webhook/payment-failed-renewal

## How It Works & How To Set Up

### Step-by-Step Flow
1. Webhook receives the payment failure payload.
2. A validation node checks the required fields (see the validation sketch at the end of this template):
   - customerId
   - customerEmail
   - subscriptionId
   - amount
3. **AI Analysis:** OpenAI analyzes the failure reason, sets the urgency, and drafts a recovery email.
4. **Logic:** A Switch node routes high-value failures (>$500) to 'High' priority.
5. A Jira Finance task is created (with the AI draft).
6. A Slack message is sent (with the churn-risk score).

### Setup Steps
**Step 1 — Webhook Setup**
- Method: POST
- Path: payment-failed-renewal

**Step 2 — Jira Setup**
- Select Jira credentials in the Create Jira Finance Ticket node.
- Ensure: Project: FIN, Issue type: Task

**Step 3 — Slack Setup**
- Add Slack credentials to both Slack nodes.
- Select the finance alert channel.

**Step 4 — OpenAI Setup**
- Add OpenAI credentials in the AI Analysis node.

**Step 5 — Test**
```json
{
  "customerId": "CUST-001",
  "customerEmail": "billing@example.com",
  "subscriptionId": "SUB-1001",
  "amount": 19.99
}
```

**Step 6 — Activate**
- Enable the workflow.

## How To Customize Nodes
- **Webhook:** Add Basic Auth, token-based security, or JSON schema validation.
- **Validate Payload:** Enhance with email-format validation, numeric validation for amount, and auto-fallback values.
- **Jira node:** Customize the ticket summary structure, labels (billing-recovery, urgent, etc.), and custom fields, or change the issue type or project.
- **Slack nodes:** Add mentions (@finance-team), threads instead of channel posts, or rich blocks, buttons, and attachments.

## Add-ons (Optional Enhancements)
- Automated email to the customer for payment recovery
- Retry-count-based escalation (e.g., retry ≥ 3 → escalate to manager)
- Log data to Airtable / Google Sheets
- Sync events into a CRM (HubSpot, Salesforce, Zoho)
- Notify Sales for high-value customer failures

## Use Case Examples
- Stripe renewal payment fails → create Jira task → Slack finance alert.
- Chargebee retry attempts exhausted → notify the billing team immediately.
- Declined credit card → Jira ticket with the failure reason.
- Razorpay/PayPal renewal failure → automated follow-up.
- Webhook missing data → Slack error alert ensures nothing is silently ignored.

## Troubleshooting Guide
| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| Webhook not triggering | Wrong URL / method | Use POST + the correct endpoint |
| Jira ticket missing | No permissions or invalid payload | Check Jira permissions + required fields |
| Slack shows undefined values | Missing fields in payload | Confirm the payload structure |
| Error alert triggered incorrectly | Field names mismatch | Match the exact names: customerId, customerEmail, subscriptionId, amount |
| Payment provider not sending events | Firewall/CDN blocking | Whitelist the n8n webhook URL |
| Workflow silent | Not activated | Turn the workflow ON |

## Need Help?
If you want help customizing this workflow or extending it into a complete revenue-recovery automation suite, WeblineIndia can support you with:
- Jira & Slack automation pipelines
- Payment provider webhook integrations
- Finance workflow optimization
- AI-based billing insights
- End-to-end automation solutions

Reach out anytime for expert implementation or enhancements.
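For reference, a minimal sketch of the validation step referenced above, checking the four required fields and producing flags the Switch/IF nodes can branch on (the output shape is illustrative):

```javascript
// Validate the webhook payload before any Jira ticket is created.
const REQUIRED = ['customerId', 'customerEmail', 'subscriptionId', 'amount'];

const payload = $input.first().json;
const missing = REQUIRED.filter(
  (f) => payload[f] === undefined || payload[f] === null || payload[f] === ''
);

// Valid events continue to AI analysis; invalid ones go to the error
// handler, which posts `missing` and the full payload to Slack.
return [{ json: { valid: missing.length === 0, missing, payload } }];
```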
by JJ Tham
Struggling with inaccurate Meta Ads tracking due to iOS 14+ and ad blockers? 📉 This workflow is your solution. It provides a robust, server-side endpoint that reliably sends conversion events directly to the Meta Conversions API (CAPI). By bypassing the browser, you can achieve more accurate ad attribution and optimize your campaigns with better data. The template handles all the required data normalization, hashing, and formatting, so you can set up server-side tracking in minutes.

## ⚙️ How it works
This workflow provides a webhook URL that you can send your conversion data to (e.g., from a web form, CRM, or backend). Once it receives the data, it:
1. **Sanitizes user data:** Cleans and normalizes PII such as emails and phone numbers.
2. **Hashes PII:** Securely hashes the user data with SHA-256 to meet Meta's privacy requirements (see the sketch below).
3. **Formats the payload:** Assembles all the data, including click IDs (fbc, fbp) and user info, into the exact format required by the Meta CAPI.
4. **Sends the event:** Makes a direct, server-to-server call to Meta, reliably logging your conversion event.

## 👥 Who's it for?
- **Performance marketers:** Improve ad performance and ROAS with more accurate conversion data.
- **Lead generation businesses:** Reliably track form submissions as conversions.
- **E-commerce stores:** Send purchase events from your backend so nothing gets missed.
- **Developers:** A ready-to-use template for implementing server-side tracking without writing custom code from scratch.

## 🛠️ How to set up
Setup is straightforward. You'll need your Meta Pixel ID and a CAPI Access Token. For a complete walkthrough, check out the tutorial video for this workflow on YouTube: https://youtu.be/_fdMPIYEvFM

The basic steps: copy the webhook URL, configure your form or backend to send the correct data payload, and add your Meta Pixel ID and Access Token to the final HTTP Request node.

👉 For a detailed, step-by-step guide, please refer to the yellow sticky note inside the workflow.
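For reference, a minimal sketch of the normalization and hashing Meta requires for PII fields, using Node's built-in `crypto` module (Meta expects values trimmed and lowercased before SHA-256 hashing, and phone numbers reduced to digits including the country code):

```javascript
const crypto = require('crypto');

const sha256 = (value) => crypto.createHash('sha256').update(value).digest('hex');

// Emails: trim and lowercase before hashing.
const hashEmail = (email) => sha256(email.trim().toLowerCase());

// Phones: strip everything except digits (keep the country code).
const hashPhone = (phone) => sha256(phone.replace(/\D/g, ''));

// Hashed values populate the CAPI payload's user_data object; click IDs
// like fbc and fbp are passed through unhashed.
const user_data = {
  em: [hashEmail('  User@Example.COM ')],
  ph: [hashPhone('+1 (555) 010-0123')],
};

console.log(user_data);
```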
by Pixcels Themes
## Who's it for
This template is ideal for ecommerce founders, dropshippers, Shopify store owners, product managers, and agencies who want to automate product-listing creation. It removes manual work by generating titles, descriptions, tags, bullet points, alt text, and SEO metadata directly from a product image and a few basic input fields.

## What it does / How it works
The workflow starts with a webhook that receives product information along with an uploaded image. The image is uploaded to an online image host so it can be used inside Shopify. At the same time, the image is analyzed by Google Gemini using your provided product name, material type, and details. Gemini returns structured JSON containing:
- Title
- Description
- Tags
- Bullet points
- Alt text
- SEO title
- SEO description

The workflow cleans and parses the AI output, merges it with the uploaded image URL, and constructs a complete Shopify product payload. Finally, it creates a new product in Shopify automatically, using the generated content and the provided product variants, vendor, options, and product type (a sketch of the final request follows below).

## Requirements
- Google Gemini (PaLM) API credentials
- Shopify private access token
- Webhook endpoint for receiving data and files
- An imgbb (or any image hosting) API key

## How to set up
1. Connect your Gemini and Shopify credentials.
2. Replace the imgbb API key and configure the hosting node.
3. Provide vendor, product type, variants, and options in the webhook payload.
4. Ensure your source system sends file, product_name, material_type, and any extra fields.
5. Call the webhook URL and test with a sample product.

## How to customize the workflow
- Change the AI prompt for different product categories
- Add translation steps for multi-language stores
- Add price-calculation logic
- Push listings to multiple Shopify stores
- Save generated metadata into Google Sheets or Notion
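For reference, a minimal sketch of the final product-creation request against the Shopify Admin REST API; the store domain, token, API version, and input field names (`hosted_image_url`, etc.) are placeholders to adapt to your setup:

```javascript
// Build and send the Shopify product payload from the parsed AI output.
const ai = $input.first().json; // { title, description, tags, alt_text, ... }
const imageUrl = ai.hosted_image_url; // from the image-hosting step

const payload = {
  product: {
    title: ai.title,
    body_html: ai.description,
    vendor: ai.vendor,
    product_type: ai.product_type,
    tags: (ai.tags || []).join(', '),
    images: [{ src: imageUrl, alt: ai.alt_text }],
  },
};

const res = await fetch(
  'https://YOUR-STORE.myshopify.com/admin/api/2024-07/products.json',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Shopify-Access-Token': 'YOUR_ADMIN_TOKEN',
    },
    body: JSON.stringify(payload),
  }
);

console.log((await res.json()).product?.id); // new product ID on success
```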
by Amit Mehta
This workflow performs structured data extraction and data mining from a web page by combining the capabilities of Bright Data and Google Gemini.

## How it Works
This workflow extracts structured data from a web page using Bright Data's Web Unlocker product. It then uses n8n's AI capabilities, specifically Google Gemini Flash Exp, for information extraction and custom sentiment analysis. The results are sent to webhooks and saved as local files.

## Use Cases
- **Data Mining:** Automating the extraction and analysis of data from websites.
- **Web Scraping:** Gathering structured data for market research, competitive analysis, or content aggregation.
- **Sentiment Analysis:** Performing custom sentiment analysis on unstructured text.

## Setup Instructions
1. **Bright Data credentials:** You need a Bright Data account with a Web Unlocker zone. Update the Header Auth credentials in the Perform Bright Data Web Request node.
2. **Google Gemini credentials:** Provide your Google Gemini (PaLM) API credentials for the AI-related nodes.
3. **Configure URL and zone:** In the Set URL and Bright Data Zone node, set the web URL you want to scrape and your Bright Data zone.
4. **Update webhook:** Update the webhook notification URL in the relevant HTTP Request nodes.

## Workflow Logic
1. **Trigger:** The workflow is triggered manually.
2. **Set parameters:** It sets the target URL and the Bright Data zone.
3. **Web request:** The workflow requests the specified URL through Bright Data's Web Unlocker (see the sketch below). The output is formatted as markdown.
4. **Data extraction & analysis:** The markdown content is processed by multiple AI nodes to:
   - Extract textual data from the markdown.
   - Perform topic analysis with a structured response.
   - Analyze trends by location and category with a structured response.
5. **Output:** The extracted data and analysis are sent to webhooks and saved as JSON files on disk.

## Node Descriptions
| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger node to start the workflow. |
| Set URL and Bright Data Zone | A Set node that defines the URL to be scraped and the Bright Data zone to use. |
| Perform Bright Data Web Request | An httpRequest node that calls Bright Data's API to retrieve the content. |
| Markdown to Textual Data Extractor | An AI node that uses Google Gemini to convert markdown content into plain text. |
| Google Gemini Chat Model | A node representing the Google Gemini model used for the data extraction. |
| Topic Extractor with the structured response | An AI node that performs topic analysis and outputs structured JSON. |
| Trends by location and category with the structured response | An AI node that clusters emerging trends by location and category, outputting structured JSON. |
| Initiate a Webhook Notification... | These nodes send the output of the AI analysis to a webhook. |
| Create a binary file... | Function nodes that convert the JSON output into binary format for writing to a file. |
| Write the topics/trends file to disk | readWriteFile nodes that save the binary data to local files (d:\topics.json and d:\trends.json). |

## Customization Tips
- Change the web URL in the Set URL and Bright Data Zone node to scrape different websites.
- Modify the prompts in the AI nodes to customize the analysis (e.g., change the sentiment-analysis criteria).
- Adjust the output path in the readWriteFile nodes to save the files to a different location.
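For orientation, a minimal sketch of the call the Perform Bright Data Web Request node makes, assuming Bright Data's `/request` Web Unlocker API; the zone, token, and the `data_format` option follow Bright Data's documented request format, but verify them against your zone's docs:

```javascript
// Fetch a page through Bright Data Web Unlocker.
// BRIGHT_DATA_TOKEN and the zone name are placeholders for your own values.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer BRIGHT_DATA_TOKEN',
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone',
    url: 'https://example.com/page-to-scrape',
    format: 'raw',
    data_format: 'markdown', // ask for markdown output, per the workflow
  }),
});

console.log((await res.text()).slice(0, 200)); // first part of the page content
```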
## Suggested Sticky Notes for Workflow
- **Note:** "This workflow deals with the structured data extraction by utilizing Bright Data Web Unlocker Product... Please make sure to set the web URL of your interest within the 'Set URL and Bright Data Zone' node and update the Webhook Notification URL."
- **LLM Usages:** "Google Gemini Flash Exp model is being used... Information Extraction is being used for handling the custom sentiment analysis with the structured response."

## Required Files
- 1GOrjyc9mtZCMvCr_Structured_Data_Extract,Data_Mining_with_Bright_Data&_Google_Gemini.json: the main n8n workflow export for this automation.

## Testing Tips
- Run the workflow and check the webhook to verify that the extracted data is being sent correctly.
- Confirm that the d:\topics.json and d:\trends.json files are created on disk with the expected structured data.

## Suggested Tags & Categories
- Engineering
- AI
by Cheng Siong Chin
## How It Works
This workflow automates document authenticity verification by combining AI-based content analysis with immutable blockchain records. It is built for compliance teams, legal departments, supply chain managers, and regulators who need tamper-proof validation and auditable proof. The solution addresses the challenge of detecting forged or altered documents while producing verifiable evidence that meets legal and regulatory standards.

Documents are submitted via webhook and processed through PDF content extraction. Anthropic's Claude analyzes the content for authenticity signals such as inconsistencies, anomalies, and formatting issues, returning structured authenticity scores. Verified documents trigger blockchain record creation and publication to a distributed ledger, with cryptographic proofs shared automatically with carriers and regulators through HTTP APIs.

## Setup Steps
1. Configure the webhook endpoint URL for document submission
2. Add your Anthropic API key to the Chat Model node
3. Set up blockchain network credentials in the HTTP nodes for record preparation
4. Connect a Gmail account and specify the compliance team's email addresses
5. Customize the authenticity thresholds (see the sketch below)

## Prerequisites
- Anthropic API key
- Blockchain network access and credentials

## Use Cases
- Supply chain documentation verification for import/export compliance

## Customization
- Adjust the AI prompts for industry-specific authenticity criteria

## Benefits
- Eliminates manual document review time while improving fraud-detection accuracy
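For reference, a minimal sketch of the threshold gate mentioned in the setup steps, assuming Claude returns a structured object like `{ authenticity_score, flags }` (the score scale, field names, and 0.85 cutoff are all illustrative):

```javascript
// Gate blockchain publication on the AI authenticity score.
const THRESHOLD = 0.85; // tune to your compliance policy

const result = $input.first().json; // parsed Claude output
const verified = (result.authenticity_score ?? 0) >= THRESHOLD;

// Verified documents proceed to blockchain record creation; the rest
// are routed to manual review together with the flagged anomalies.
return [
  { json: { verified, score: result.authenticity_score, flags: result.flags || [] } },
];
```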