by Kalyxi Ai
## Automate News Discovery & Publishing with GPT-4, Google Search API & Slack

### Overview
Automated content publishing system that discovers industry news, transforms it into original articles using GPT-4, and publishes across multiple channels with SEO optimization and intelligent duplicate prevention.

### Key Features
- **Smart Query Generation** - AI agent generates unique search queries while checking Google Sheets to avoid duplicates
- **News Discovery** - Uses Google Custom Search API to find recent articles (last 7 days)
- **Content Intelligence** - Processes search results and skips anti-bot protected sites automatically
- **GPT-4 Article Generation** - Creates professional, SEO-optimized news articles in Reuters/Bloomberg style
- **Multi-Channel Publishing** - Publishes to CMS with automatic Slack notifications
- **Comprehensive Tracking** - Logs all activity to Google Sheets for analytics and duplicate prevention

### How It Works
1. **Scheduled Trigger** runs every 8 hours to maintain consistent content flow
2. **AI Agent** generates targeted search queries for your niche while checking historical data
3. **Google Search** finds recent articles and extracts metadata (title, snippet, source)
4. **Smart Content Handler** bypasses sites with anti-bot protection, using search snippets instead
5. **GPT-4 Processing** transforms snippets into comprehensive 2,000+ word articles with proper formatting
6. **Publishing Pipeline** formats content for the CMS with SEO metadata and publishes automatically
7. **Notification System** sends detailed Slack updates with article metrics
8. **Activity Logging** tracks all published content to prevent future duplicates

### Setup Requirements

#### Prerequisites
- Google Custom Search API key and Search Engine ID
- OpenAI GPT-4 API access
- Google account for the tracking spreadsheet
- Slack workspace for notifications
- CMS or website with an API endpoint for publishing

#### Step-by-Step Setup

**Step 1: Google Custom Search Configuration**
1. Go to Google Custom Search Engine
2. Create a new search engine
3. Configure it to search the entire web
4. Copy your Search Engine ID (`cx` parameter)
5. Get your API key from Google Cloud Console

**Step 2: Google Sheets Template Setup**
Create a Google Sheet with these required columns:
- **Column A:** `timestamp` - ISO date format (YYYY-MM-DD HH:MM:SS)
- **Column B:** `query` - The search query used
- **Column C:** `title` - Published article title
- **Column D:** `url` - Published article URL
- **Column E:** `status` - Publication status (success/failed)
- **Column F:** `word_count` - Final article word count

Template URL: Copy this Google Sheets template

**Step 3: Credential Configuration**
Set up the following credentials in n8n:
- **Google Sheets API** - OAuth2 connection to your Google account
- **OpenAI API** - Your GPT-4 API key
- **Slack Webhook** - Webhook URL for your notification channel
- **Custom Search API** - Your Google Custom Search API key

**Step 4: Workflow Customization**
Modify these key parameters to fit your needs:
- **Search Topic:** Edit the AI agent prompt to focus on your industry
- **Publishing Schedule:** Adjust the cron trigger (default: every 8 hours)
- **Article Length:** Modify the GPT-4 prompt for different word counts
- **CMS Endpoint:** Update the publishing node with your website's API

### Customization Options

**Content Targeting**
- Modify the AI agent's search query generation to focus on specific industries
- Adjust date restrictions (currently set to the last 7 days)
- Change the number of search results processed per run

**Article Style**
- Customize GPT-4 prompts for different writing styles (formal, casual, technical)
- Adjust article length requirements
- Modify SEO optimization parameters

**Publishing Channels**
- Add additional CMS endpoints for multi-site publishing
- Configure different notification channels (Discord, Teams, etc.)
- Set up social media auto-posting integration

### Use Cases
- Automated news websites
- Industry blog content generation
- SEO content pipeline automation
- News aggregation and republishing
- Content marketing automation

### Technical Notes
- Workflow includes error handling for anti-bot protection
- Duplicate prevention through Google Sheets tracking
- Rate-limiting considerations for API usage
- Automatic retry logic for failed requests

### Support
For setup assistance or customization help, refer to the workflow's internal documentation nodes or contact the template creator.
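Outside n8n, the News Discovery step amounts to one HTTP call to the Custom Search JSON API plus a little metadata extraction. A minimal sketch, assuming placeholder credentials and the documented `dateRestrict` parameter for the 7-day window:

```python
"""Sketch of the News Discovery step: build a Custom Search request
restricted to recent articles and keep only the metadata the workflow
uses (title, snippet, source URL). Key and engine ID are placeholders."""
import urllib.parse

API_ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def build_search_url(api_key: str, engine_id: str, query: str, days: int = 7) -> str:
    """Build a Custom Search URL restricted to the last `days` days."""
    params = {
        "key": api_key,
        "cx": engine_id,             # Search Engine ID from Step 1
        "q": query,
        "dateRestrict": f"d{days}",  # e.g. "d7" = last 7 days
        "num": 10,
    }
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)

def extract_metadata(response_json: dict) -> list[dict]:
    """Keep title, snippet, and source URL from each search result."""
    return [
        {"title": it["title"], "snippet": it["snippet"], "url": it["link"]}
        for it in response_json.get("items", [])
    ]

# Example with a mocked API response:
sample = {"items": [{"title": "AI chip sales surge",
                     "snippet": "Chipmakers report...",
                     "link": "https://example.com/a"}]}
print(extract_metadata(sample)[0]["url"])  # https://example.com/a
```

The snippets returned here are exactly what the Smart Content Handler falls back to when a source site blocks direct scraping.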
by bangank36
This workflow scrapes Trustpilot reviews for a given profile and saves them into Google Sheets.

### How It Works
Clone this Google Sheets template, which includes two sheets:
- **trustpilot** - A raw collection of Trustpilot reviews. You can customize it as needed.
- **helpfulcrowd** - This sheet follows the format from this HelpfulCrowd guide, with a slight modification: an added `review_id` column to support the upsert process.

Once the workflow is complete, export the sheet as a CSV and upload it to HelpfulCrowd. For detailed steps, see this post.

### Running the Workflow
You can trigger the workflow on demand or schedule it to run at a set interval.

### Requirements
- **Trustpilot business name** (e.g., `n8n.io` in `https://www.trustpilot.com/review/n8n.io`). Update this name and pagination settings in the Global node.
- **Google Sheets API credentials**

Check out my other templates: My n8n Templates
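The added `review_id` column is what makes re-runs safe: rows are merged by ID instead of appended. A minimal sketch of that upsert logic (field names are illustrative, not the exact sheet schema):

```python
"""Sketch of the upsert step: merge newly scraped reviews into existing
sheet rows, keyed by the review_id column, so repeated runs update
changed reviews instead of creating duplicates."""

def upsert_reviews(existing: list[dict], scraped: list[dict]) -> list[dict]:
    by_id = {row["review_id"]: row for row in existing}
    for review in scraped:
        by_id[review["review_id"]] = review  # insert new or overwrite changed
    return list(by_id.values())

rows = upsert_reviews(
    [{"review_id": "r1", "rating": 4}],
    [{"review_id": "r1", "rating": 5}, {"review_id": "r2", "rating": 3}],
)
print(len(rows))  # 2
```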
by Artem Boiko
Estimate embodied carbon (CO2e) for grouped BIM/CAD elements. The workflow accepts an existing XLSX (grouped element data) or, if one is missing, can trigger a local RvtExporter.exe to generate it. It detects category fields, filters out non-building elements, infers aggregation rules with AI, computes CO2 using densities and emission factors, and exports a multi-sheet Excel file plus a clean HTML report.

### What it does
- **Reads or builds the XLSX** (from your model via RvtExporter.exe when needed).
- **Finds category/volumetric fields**; separates building vs. annotation elements.
- Uses AI to infer aggregation rules (sum/mean/first) per header.
- **Groups** rows by your `group_by` field and aggregates totals.
- Prepares enhanced prompts and calls your LLM to classify materials and estimate CO2 (A1-A3 minimum).
- **Computes project totals** and generates a **multi-sheet XLSX** plus an **HTML** report with charts and hotspots.

### Prerequisites
- **LLM credentials** for one provider (e.g., OpenAI, Anthropic, Gemini, Grok/OpenRouter). Enable one chat node and connect credentials.
- **Windows host** only if you want to auto-extract from .rvt/.ifc via RvtExporter.exe. If you already have an XLSX, Windows is **not required**.
- Optional: mapping/classifier files (XLSX/CSV/PDF) to improve material classification.

### How to use
1. Import this JSON into n8n.
2. Open the Setup/Parameters node(s) and set:
   - `project_file` - path to your .rvt/.ifc or to an existing grouped `*_rvt.xlsx`
   - `path_to_converter` - `C:\DDC_Converter_Revit\datadrivenlibs\RvtExporter.exe` (optional)
   - `group_by` - e.g., Type Name / Category / IfcType
   - `sheet_name` - default `Summary` (if reading from XLSX)
3. Enable one LLM node and attach credentials; keep the others disabled.
4. Execute (Manual Trigger). The workflow detects or builds the XLSX, analyzes, classifies, estimates CO2, then writes the Excel file and opens the HTML report.

### Outputs
- **Excel** (`CO2_Analysis_Report_YYYY-MM-DD.xlsx`, ~8 sheets): Executive Summary, All Elements, Material Summary, Category Analysis, Impact Analysis, Top 20 Hotspots, Data Quality, Recommendations.
- **HTML**: executive report with key KPIs and charts.
- Per-group fields include: Material (EU/DE/US), Quantity & Unit, Density, Mass, CO2 Factor, Total CO2 (kg/tonnes), CO2 %, Confidence, Assumptions.

### Notes & tips
- Input quantities (volumes/areas) are already aggregated per group; do not multiply by element count.
- Use `-no-collada` upstream if you only need the XLSX during extraction.
- Prefer ASCII-safe paths and ensure write permissions to the output folder.

### Categories
Data Extraction · Files & Storage · ETL · CAD/BIM · Carbon/ESG

### Tags
cad-bim, co2, carbon, embodied-carbon, lca, revit, ifc, xlsx, html-report, llm

### Author
DataDrivenConstruction.io
info@datadrivenconstruction.io

### Consulting and Training
We work with leading construction, engineering, and consulting agencies and technology firms around the world to help them implement open-data principles, automate CAD/BIM processing, and build robust ETL pipelines. If you would like to test this solution with your own data, or are interested in adapting the workflow to real project tasks, feel free to contact us.

Docs & Issues: Full Readme on GitHub
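The per-group CO2 arithmetic behind the Mass and Total CO2 columns is simple: mass = volume x density, CO2 = mass x emission factor. A minimal sketch; the density and factor below are illustrative round numbers, not authoritative EPD data:

```python
"""Sketch of the per-group CO2 computation:
mass_kg = volume_m3 * density_kg_m3
co2_kg  = mass_kg * emission_factor_kgco2e_per_kg  (A1-A3 cradle-to-gate)
Density and factor values are illustrative assumptions."""

def group_co2(volume_m3: float, density_kg_m3: float,
              factor_kgco2e_per_kg: float) -> dict:
    mass = volume_m3 * density_kg_m3
    co2 = mass * factor_kgco2e_per_kg
    return {"mass_kg": mass, "co2_kg": co2, "co2_t": co2 / 1000.0}

# e.g. a group of 10 m3 of concrete (~2400 kg/m3, ~0.11 kgCO2e/kg):
result = group_co2(10.0, 2400.0, 0.11)
print(round(result["co2_kg"], 2))  # 2640.0
```

Note that, per the tips above, the 10 m3 is the group's already-aggregated volume; it must not be multiplied again by the element count.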
by Yaron Been
This workflow provides automated access to the Aihilums Sehatsanjha AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides seamless integration of generation tasks into your n8n automation workflows.

### Overview
This workflow automatically handles the complete generation process using the Aihilums Sehatsanjha model. It manages API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation.

Model description: Advanced AI model for automated processing and generation tasks.

### Key Capabilities
- **Specialized AI model with unique capabilities**
- **Advanced processing and generation features**
- **Custom AI-powered automation tools**

### Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Aihilums/sehatsanjha AI model
- **Aihilums Sehatsanjha**: The core AI model for generation tasks
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

### How to Install
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Replicate API**: Add your Replicate API token to the 'Set API Token' node
3. **Customize Parameters**: Adjust the model parameters in the 'Set Other Parameters' node
4. **Test the Workflow**: Run the workflow with your desired inputs
5. **Integrate**: Connect this workflow to your existing automation pipelines

### Use Cases
- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

### Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
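Under the hood, the 'Set API Token' and parameter nodes feed a standard Replicate predictions request: create a prediction, then poll until it reaches a terminal state. A hedged sketch of that request shape; the version hash and input fields are placeholders you would take from the model's Replicate page:

```python
"""Sketch of the Replicate call the workflow wraps: build the
create-prediction request, then check for a terminal status while
polling. Version hash and inputs are placeholder values."""
import json

def build_prediction_request(api_token: str, version: str, inputs: dict) -> dict:
    return {
        "url": "https://api.replicate.com/v1/predictions",
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"version": version, "input": inputs}),
    }

def is_settled(prediction: dict) -> bool:
    """Replicate predictions end in one of these terminal states."""
    return prediction.get("status") in {"succeeded", "failed", "canceled"}

req = build_prediction_request("r8_xxx", "abc123", {"prompt": "hello"})
print(is_settled({"status": "processing"}))  # False
```

The workflow's retry logic corresponds to re-polling while `is_settled` is false and retrying the create call on transient HTTP errors.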
by Rajeet Nair
### Overview
This workflow implements a self-healing Retrieval-Augmented Generation (RAG) maintenance system that automatically updates document embeddings, evaluates retrieval quality, detects embedding drift, and safely promotes or rolls back embedding updates.

Maintaining high-quality embeddings in production RAG systems is difficult. When source documents change or embedding models evolve, updates can accidentally degrade retrieval quality or introduce semantic drift. This workflow solves that problem by introducing an automated evaluation and rollback pipeline for embeddings.

It periodically checks for document changes, regenerates embeddings for updated content, evaluates the new embeddings against a set of predefined golden test questions, and compares the results with the currently active embeddings. Quality metrics such as Recall@K, keyword similarity, and answer variance are calculated, and embedding vectors are analyzed for semantic drift using cosine distance. If the new embeddings outperform the current ones and remain within acceptable drift limits, they are automatically promoted to production. Otherwise, the system safely rolls back or flags the update for manual review. This creates a robust, production-safe RAG lifecycle automation system.

### How It Works

**1. Workflow Trigger**
The workflow can start in two ways:
- **Scheduled trigger** running daily
- **Webhook trigger** when source documents change

Both paths lead to a centralized configuration node that defines parameters such as chunk size, thresholds, and notification settings.

**2. Document Retrieval & Change Detection**
Documents are fetched from the configured source (GitHub, Drive, Confluence, or other APIs). The workflow then:
- Splits documents into deterministic chunks
- Computes SHA-256 hashes for each chunk
- Compares them with previously stored hashes in Postgres

Only new or modified chunks proceed to embedding generation, which significantly reduces processing cost.

**3. Embedding Generation**
Changed chunks are processed through:
- Recursive text splitting
- Document loading
- OpenAI embedding generation

These embeddings are stored as a candidate vector store rather than immediately replacing the production embeddings. Metadata about the embedding version is stored in Postgres.

**4. Golden Question Evaluation**
A set of golden test questions stored in the database is used to evaluate retrieval quality. Two AI agents are used:
- One queries the candidate embeddings
- One queries the current production embeddings

Both generate answers using retrieved context.

**5. Quality Metrics Calculation**
The workflow calculates several evaluation metrics:
- **Recall@K** to measure retrieval effectiveness
- **Keyword similarity** between generated answers and expected answers
- **Answer length variance** to detect inconsistencies

These are combined into a weighted quality score.

**6. Embedding Drift Detection**
The workflow compares embedding vectors between versions using cosine distance. This identifies semantic drift, which may occur due to:
- embedding model updates
- chunking changes
- document structure changes

**7. Promotion or Rollback**
The workflow checks two conditions:
- Quality score exceeds the configured threshold
- Embedding drift remains below the drift threshold

If both conditions pass, the candidate embeddings are promoted to active. If not, the system rolls back to the previous embeddings or flags the update for human review.

**8. Notifications**
A webhook notification is sent with:
- update status
- quality score
- drift score
- timestamp

This allows teams to monitor embedding health automatically.

### Setup Instructions

**Configure Document Source**
Edit the Workflow Configuration node and set `documentSourceUrl` to the API endpoint or file source containing your documents.
Examples include:
- GitHub repository API
- Google Drive export API
- Confluence REST API

**Configure Postgres Database**
Create the following tables in your Postgres database:
- `document_chunks`
- `embeddings`
- `embedding_versions`
- `golden_questions`

These tables store chunk hashes, embedding vectors, version metadata, and evaluation questions. Connect the Postgres nodes using your database credentials.

**Add OpenAI Credentials**
Configure credentials for:
- **OpenAI Embeddings**
- **OpenAI Chat Model**

These are used for generating embeddings and answering evaluation questions.

**Populate Golden Questions**
Insert evaluation questions into the `golden_questions` table. Each record should include:
- `question_text`
- expected passages
- expected answer keywords

These questions represent critical queries your RAG system must answer correctly.

**Configure Notification Webhook**
Add a Slack or Teams webhook URL in the configuration node. Notifications will be sent whenever:
- embeddings are promoted
- embeddings are rolled back
- manual review is required

**Adjust Quality Thresholds**
In the configuration node you can modify:
- `qualityThreshold`
- `driftThreshold`
- `chunkSize`
- `chunkOverlap`

These parameters control the sensitivity of the evaluation system.

### Use Cases
- **Production RAG Monitoring** - Automatically evaluate and update embeddings in production knowledge systems without risking degraded results.
- **Continuous Knowledge Base Updates** - Keep embeddings synchronized with frequently changing documentation, repositories, or internal knowledge bases.
- **Safe Embedding Model Upgrades** - Test new embedding models against production data before promoting them.
- **AI System Reliability** - Detect retrieval regressions before they affect end users.
- **Enterprise AI Governance** - Provide automated evaluation and rollback capabilities for mission-critical RAG deployments.
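The heart of the drift detection and promotion logic (steps 6 and 7 above) is cosine distance between old and new embedding vectors, gated by two thresholds. A minimal sketch; the threshold defaults are illustrative, matching the configurable `qualityThreshold` and `driftThreshold` parameters:

```python
"""Sketch of embedding drift detection and the promote/rollback
decision: cosine distance as the drift signal, combined with the
weighted quality score. Threshold values are illustrative."""
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def decide(quality_score: float, drift: float,
           quality_threshold: float = 0.8, drift_threshold: float = 0.3) -> str:
    # Promote only if quality passes AND drift stays within limits
    if quality_score >= quality_threshold and drift <= drift_threshold:
        return "promote"
    return "rollback"

print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal = max drift)
print(decide(quality_score=0.9, drift=0.1))      # promote
```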
### Requirements
This workflow requires the following services:
- **n8n**
- **Postgres Database**
- **OpenAI API**

Recommended integrations: Slack or Microsoft Teams (for notifications).

Required nodes include: Schedule Trigger, Webhook, HTTP Request, Postgres, Compare Datasets, Code nodes, OpenAI Embeddings, OpenAI Chat Model, Vector Store nodes, and AI Agent nodes.

### Summary
This workflow provides a fully automated, self-healing RAG infrastructure for maintaining embedding quality in production systems. By combining change detection, golden-question evaluation, embedding drift analysis, and automatic rollback, it ensures that retrieval performance improves safely over time. It is ideal for teams running production AI assistants, knowledge bases, or internal search systems that depend on high-quality vector embeddings.
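To make the change-detection step (step 2 above) concrete, here is a minimal sketch of deterministic chunking plus SHA-256 hashing. In the workflow the stored hashes live in the `document_chunks` Postgres table; a plain dict stands in for it here:

```python
"""Sketch of change detection: split a document into deterministic
chunks, hash each with SHA-256, and return only the chunks whose hash
differs from the stored value, so only those get re-embedded."""
import hashlib

def chunk(text: str, size: int = 500) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def changed_chunks(doc_id: str, text: str, stored: dict) -> list:
    """Return (chunk_key, chunk_text) pairs whose hash changed."""
    out = []
    for i, piece in enumerate(chunk(text)):
        key = f"{doc_id}:{i}"
        digest = hashlib.sha256(piece.encode("utf-8")).hexdigest()
        if stored.get(key) != digest:
            out.append((key, piece))
            stored[key] = digest  # persist the new hash
    return out

store = {}
first = changed_chunks("doc1", "hello world", store)
second = changed_chunks("doc1", "hello world", store)
print(len(first), len(second))  # 1 0
```

The second call returns nothing because the content is unchanged, which is exactly the cost-saving behavior described above.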
by Yaron Been
This workflow provides automated access to the Digitalhera Herathaisbragatto AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides seamless integration of generation tasks into your n8n automation workflows.

### Overview
This workflow automatically handles the complete generation process using the Digitalhera Herathaisbragatto model. It manages API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation.

Model description: Advanced AI model for automated processing and generation tasks.

### Key Capabilities
- **Specialized AI model with unique capabilities**
- **Advanced processing and generation features**
- **Custom AI-powered automation tools**

### Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Digitalhera/herathaisbragatto AI model
- **Digitalhera Herathaisbragatto**: The core AI model for generation tasks
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

### How to Install
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Replicate API**: Add your Replicate API token to the 'Set API Token' node
3. **Customize Parameters**: Adjust the model parameters in the 'Set Other Parameters' node
4. **Test the Workflow**: Run the workflow with your desired inputs
5. **Integrate**: Connect this workflow to your existing automation pipelines

### Use Cases
- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

### Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
by Rosh Ragel
## Programmatically Pull Square Report Data Into n8n

### What It Does
This sub-workflow connects to the Square API and generates a daily sales summary report for all of your Square locations. The report matches the figures displayed in Square Dashboard > Reports > Sales Summary. It's designed to be reused in other workflows, ideal for reporting, data storage, accounting, or automation.

### Prerequisites
To use this workflow, you'll need:
- Square API credentials (configured as a Header Auth credential)

### How to Set Up Square Credentials
1. Go to Credentials > Create New
2. Choose Header Auth
3. Set the Name to "Authorization"
4. Set the Value to your Square Access Token (e.g., `Bearer <your-api-key>`)

### How It Works
1. **Trigger**: The workflow is triggered as a sub-workflow, requiring a `report_date` input.
2. **Fetch Locations**: An HTTP request gets all Square locations linked to your account.
3. **Fetch Orders**: For each location, an HTTP request pulls completed orders for the specified `report_date`.
4. **Filter Empty Locations**: Locations with no sales are ignored.
5. **Aggregate Sales Data**: A Code node processes the order data and produces a summary identical to Square's built-in Sales Summary report.
6. **Output**: A cleaned, consistent summary that can be consumed by parent workflows or other nodes.

### Example Use Cases
- Automatically store daily sales data in Google Sheets, MySQL, or PostgreSQL for analysis and historical tracking
- Automatically send daily email or Slack reports to managers or finance teams
- Build weekly/monthly reports by looping over multiple dates
- Push sales data into accounting software like QuickBooks or Xero for automated bookkeeping
- Calculate commissions or rent payments based on sales volume

### How to Use
1. Configure both HTTP Request nodes to use your Square API credential.
2. If you are not in the Toronto/New York timezone, change the `start_at` and `end_at` parameters in the second HTTP node from `-05:00` to your local UTC offset.
3. Use as a sub-workflow inside a main workflow.
4. Pass a `report_date` (formatted as YYYY-MM-DD) to the sub-workflow when you call it.

### Customization Options
- Add pagination to handle locations with more than 1,000 orders per day.
- Expand the workflow to save or send the report output via additional integrations (email, database, webhook, etc.).

### Why It's Useful
This workflow saves time, reduces manual report pulling from Square, and enables smarter automation around sales data, whether for operations, finance, or performance monitoring.
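The `start_at`/`end_at` values mentioned above are just RFC 3339 timestamps built from `report_date` plus a UTC offset. A small sketch of that window construction (the `-05:00` default mirrors the template's Toronto/New York setting):

```python
"""Sketch of how the order-search window is built from report_date.
Swap the utc_offset for your own local offset as described above."""

def order_search_window(report_date: str, utc_offset: str = "-05:00") -> dict:
    """report_date is YYYY-MM-DD; returns RFC 3339 bounds for that day."""
    return {
        "start_at": f"{report_date}T00:00:00{utc_offset}",
        "end_at": f"{report_date}T23:59:59{utc_offset}",
    }

window = order_search_window("2024-05-01")
print(window["start_at"])  # 2024-05-01T00:00:00-05:00
```

Looping this function over a list of dates is the basis of the weekly/monthly reporting use case above.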
by Usman Liaqat
This workflow listens for incoming WhatsApp messages that contain media (e.g., images) and automatically downloads the media file using WhatsApp's private media URL.

### How It Works
1. The trigger node activates when a WhatsApp message with media is received.
2. The media ID is extracted from the message payload.
3. A private media URL is retrieved using the media ID.
4. The media file is downloaded using an authenticated HTTP request.

Ideal for:
- Archiving WhatsApp media to external systems.
- Triggering further automations based on received media.
- Integrating with cloud storage like Google Drive, Dropbox, or Amazon S3.

### Set Up Steps
1. Connect your WhatsApp Business API account.
2. Add HTTP credentials for downloading media via the private URL.
3. Set up the webhook in your WhatsApp Business account.
4. Extend the workflow as needed for your use case (e.g., file storage, alerts).
by Oneclick AI Squad
This workflow transforms traditional REST APIs into structured, AI-accessible MCP (Model Context Protocol) tools. It provides a unified gateway that allows Claude AI to interact safely, granularly, and auditably with any business system (CRM, ERP, databases, SaaS) through a single MCP-compliant interface.

### How It Works
1. **Receive MCP Tool Request** - Webhook ingests the tool call from an AI agent or MCP client
2. **Validate & Authenticate** - Verifies the API key, checks the JWT token, validates the MCP schema
3. **Tool Registry Lookup** - Resolves the requested tool name to a backend API config and permission scope
4. **Claude AI Intent Verification** - Confirms tool call parameters are safe, well-formed, and within policy
5. **Rate Limit & Quota Check** - Enforces per-client tool call limits before execution
6. **Execute Backend API Call** - Routes to the correct business system API with mapped parameters
7. **Normalize & Enrich Response** - Standardizes the API response into the MCP tool result schema
8. **Audit & Log** - Writes an immutable access log for compliance and observability
9. **Return MCP Tool Result** - Delivers the structured response back to the AI agent

### Setup Steps
1. Import the workflow into n8n
2. Configure credentials:
   - **Anthropic API** - Claude AI for intent verification and parameter validation
   - **Google Sheets** - Tool registry, rate limit tracking, and audit log
   - **SMTP** - Alert notifications for policy violations
3. Populate the Tool Registry sheet with your API endpoints
4. Set your MCP gateway API key in the validation node
5. Activate the workflow and point your MCP client to the webhook URL

### Sample MCP Tool Call Payload
```json
{
  "mcpVersion": "1.0",
  "clientId": "agent-crm-001",
  "apiKey": "mcp-key-xxxx",
  "toolName": "crm.get_customer",
  "parameters": {
    "customerId": "CUST-10042",
    "fields": ["name", "email", "tier"]
  },
  "requestId": "req-abc-123",
  "callerContext": "User asked: show me customer details"
}
```

### Supported Tool Categories
- **CRM Tools** - get_customer, update_contact, list_deals
- **ERP Tools** - get_inventory, create_order, update_stock
- **Database Tools** - query_records, insert_record, update_record
- **Communication Tools** - send_email, post_slack, create_ticket
- **Analytics Tools** - run_report, fetch_metrics, export_data

### Features
- **MCP-compliant schema** - works with any MCP-compatible AI agent
- **Granular permission scopes** - read/write/admin per tool per client
- **Claude AI intent guard** - blocks malformed or policy-violating calls
- **Rate limiting** - per-client quota enforcement
- **Full audit trail** - every tool call logged for SOC 2 / ISO 27001

### Explore More Automation
Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.
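The validate, registry-lookup, and rate-limit steps can be pictured as one routing function over the sample payload. An illustrative sketch, where a dict stands in for the Google Sheets tool registry and the limit value is an arbitrary assumption:

```python
"""Sketch of the gateway path: authenticate the API key, resolve the
tool name against the registry, enforce a per-client quota, and emit
the backend routing decision. Registry rows and limit are stand-ins."""

TOOL_REGISTRY = {
    "crm.get_customer": {"endpoint": "https://crm.example.com/customers",
                         "scope": "read"},
}
RATE_LIMIT = 100  # assumed calls per client per window

def route_tool_call(payload: dict, valid_keys: set, call_counts: dict) -> dict:
    if payload.get("apiKey") not in valid_keys:
        return {"error": "unauthorized"}
    tool = TOOL_REGISTRY.get(payload.get("toolName", ""))
    if tool is None:
        return {"error": "unknown_tool"}
    client = payload["clientId"]
    if call_counts.get(client, 0) >= RATE_LIMIT:
        return {"error": "rate_limited"}
    call_counts[client] = call_counts.get(client, 0) + 1
    return {"endpoint": tool["endpoint"], "scope": tool["scope"],
            "parameters": payload.get("parameters", {})}

counts = {}
call = {"apiKey": "mcp-key-xxxx", "clientId": "agent-crm-001",
        "toolName": "crm.get_customer",
        "parameters": {"customerId": "CUST-10042"}}
print(route_tool_call(call, {"mcp-key-xxxx"}, counts)["scope"])  # read
```

In the real workflow the Claude AI intent check and the audit-log write sit between the quota check and the backend call.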
by Davide
This workflow automates the process of converting images from JPG/PNG format to WEBP using the APYHub API. It retrieves image URLs from a Google Sheet, converts the images, and uploads the converted files to Google Drive. It is a powerful tool for automating image conversion tasks, saving time and ensuring that images are efficiently converted and stored in the desired format.

Using WebP images on a website provides several SEO benefits:
- **Faster Loading Speed** - WebP files are smaller than JPG and PNG, reducing page load times and improving user experience.
- **Better Core Web Vitals** - Google prioritizes websites with good performance metrics like LCP (Largest Contentful Paint).
- **Improved Mobile Performance** - Smaller images consume less bandwidth, enhancing mobile usability.
- **Higher Search Rankings** - Faster sites tend to rank better on Google due to improved user experience.
- **Reduced Server Load** - Lighter images lower hosting and CDN costs while improving site efficiency.

Below is a breakdown of the workflow.

### 1. How It Works
The workflow converts images from JPG/PNG to WEBP format and manages the converted files:
1. **Manual Trigger**: The workflow starts with a Manual Trigger node, which initiates the process when the user clicks "Test workflow."
2. **Set API Key**: The Set API KEY node defines the API key required to access the APYHub API.
3. **Get Images**: The Get Images node retrieves a list of image URLs from a Google Sheet. The sheet contains columns for the original image URL (FROM), the converted image URL (TO), and a status flag (DONE).
4. **Get Extension**: The Get Extension node extracts the file extension (JPG, JPEG, or PNG) from the image URL and adds it to the JSON data.
5. **Determine Image Type**: The "JPG or PNG?" node checks the file extension and routes the workflow to the appropriate conversion node:
   - JPG/JPEG: routes to the From JPG to WEBP node.
   - PNG: routes to the PNG to WEBP node.
6. **Convert Image**: The From JPG to WEBP and PNG to WEBP nodes send POST requests to the APYHub API to convert the images to WEBP format. The API returns the URL of the converted image.
7. **Update Google Sheet**: The Update Sheet node updates the Google Sheet with the URL of the converted image and marks the row as done (DONE).
8. **Get Converted Image**: The Get File Image node downloads the converted WEBP image from the URL provided by the APYHub API.
9. **Upload to Google Drive**: The Upload Image node uploads the converted WEBP image to a specified folder in Google Drive.

### 2. Set Up Steps
To set up and use this workflow in n8n, follow these steps:
1. **APYHub API Key**: Obtain an API key from APYHub and define it in the Set API KEY node.
2. **Google Sheets Integration**: Set up Google Sheets credentials in n8n for the Get Images and Update Sheet nodes. Create a Google Sheet with columns for FROM (original image URL), TO (converted image URL), and DONE (status flag). Provide the Document ID and Sheet Name in the Get Images node.
3. **Google Drive Integration**: Set up Google Drive credentials in n8n for the Upload Image node. Specify the folder ID in Google Drive where the converted images will be uploaded.
4. **Test the Workflow**: Click the "Test workflow" button in n8n to trigger the workflow. The workflow will:
   - Retrieve image URLs from the Google Sheet.
   - Convert the images to WEBP format using the APYHub API.
   - Update the Google Sheet with the converted image URLs.
   - Upload the converted images to Google Drive.
5. **Optional Customization**: Modify the workflow to include additional features, such as adding more image formats for conversion, sending notifications when the conversion is complete, or integrating with other storage services (e.g., Dropbox, AWS S3).
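The Get Extension and "JPG or PNG?" steps boil down to parsing the FROM URL and branching on the extension. A minimal sketch, with branch names mirroring the node names in the description:

```python
"""Sketch of the extension-detection and routing logic: pull the file
extension from the image URL path (ignoring query strings) and pick
the matching conversion branch."""
from urllib.parse import urlparse

def conversion_route(image_url: str) -> str:
    ext = urlparse(image_url).path.rsplit(".", 1)[-1].lower()
    if ext in ("jpg", "jpeg"):
        return "From JPG to WEBP"
    if ext == "png":
        return "PNG to WEBP"
    raise ValueError(f"Unsupported extension: {ext}")

print(conversion_route("https://example.com/photos/hero.PNG"))  # PNG to WEBP
```

Adding more source formats (point 5 of the setup steps) would mean adding another branch here and a matching conversion node.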
by bangank36
This workflow retrieves all Shopify orders and saves them into a Google Sheets spreadsheet using the Shopify Admin REST API. It uses pagination to ensure all orders are collected efficiently.

I originally built this workflow for my own use and found it valuable for understanding how Shopify pagination works. Now, I'm sharing it to help others automate their order retrieval process.

### How It Works
- Instead of relying on the built-in Shopify node (Get Orders Many), this workflow leverages the HTTP Request node to fetch paginated chunks manually.
- Shopify uses cursor-based pagination (`page_info`) instead of traditional page numbers.
- Pagination data is stored in the response headers, so you need to enable Include Response Headers and Status in the HTTP Request node.
- You can modify the `limit` parameter to control batch sizes and optimize for rate limits.
- This workflow can be run on demand or scheduled to keep your data up to date.

### Parameters
You can adjust these parameters in the HTTP Request node:
- `limit` - The number of orders per request (default: 50, max: 250).
- `fields` - Comma-separated list of fields to retrieve.
- `page_info` - Used for pagination.

Note: when you query paginated chunks with `page_info`, only the `limit` and `fields` parameters are allowed.

### Credentials
- **Shopify API Key** - Required for authentication.
- **Google Sheets API credentials** - Needed to insert data into the spreadsheet.

Clone the Google Sheets template here.

### Who Is This For?
- Shopify store owners who need to export all orders to Google Sheets.
- Users who want full control over API parameters for optimized queries.
- Anyone looking for a flexible and scalable Shopify data extraction solution.

### Explore More Templates
Check out my other n8n templates.
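The pagination loop hinges on one detail: the `page_info` cursor for the next page lives in the `Link` response header (which is why Include Response Headers and Status must be enabled). A sketch of extracting that cursor, using the documented Link header shape:

```python
"""Sketch of Shopify cursor pagination: pull the next page_info token
out of the Link response header; None means the last page was reached."""
import re

def next_page_info(link_header):
    """Return the page_info cursor for rel="next", or None on the last page."""
    if not link_header:
        return None
    for part in link_header.split(","):
        if 'rel="next"' in part:
            match = re.search(r"[?&]page_info=([^&>]+)", part)
            return match.group(1) if match else None
    return None

header = ('<https://shop.myshopify.com/admin/api/2024-01/orders.json'
          '?limit=50&page_info=abc123>; rel="next"')
print(next_page_info(header))  # abc123
```

The workflow keeps requesting with the returned cursor (plus only `limit` and `fields`, per the note above) until this function yields `None`.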
by Yaron Been
This workflow automatically analyzes competitor content performance across various platforms to understand what content resonates with their audience. It saves you time by eliminating the need to manually track competitor content and provides insights into successful content strategies and engagement patterns.

### Overview
This workflow automatically scrapes competitor websites, blogs, and social media to analyze content performance metrics, including engagement rates, shares, comments, and audience response. It uses Bright Data to access competitor content without restrictions and AI to intelligently analyze performance data and extract actionable insights.

### Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping competitor content platforms without being blocked
- **OpenAI**: AI agent for intelligent content performance analysis
- **Google Sheets**: For storing competitor content analysis and performance metrics

### How to Install
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Bright Data**: Add your Bright Data credentials to the MCP Client node
3. **Set Up OpenAI**: Configure your OpenAI API credentials
4. **Configure Google Sheets**: Connect your Google Sheets account and set up your content analysis spreadsheet
5. **Customize**: Define competitor URLs and content performance tracking parameters

### Use Cases
- **Content Strategy**: Learn from high-performing competitor content to improve your own strategy
- **Competitive Analysis**: Track competitor content trends and audience engagement patterns
- **Content Optimization**: Identify content types and topics that drive the most engagement
- **Market Research**: Understand what content resonates with your target audience

### Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #competitoranalysis #contentperformance #brightdata #webscraping #contentmarketing #n8nworkflow #workflow #nocode #contentanalysis #competitormonitoring #contentresearch #engagementanalysis #marketresearch #contentintelligence #competitiveintelligence #contentoptimization #performancetracking #contentmetrics #marketanalysis #contentaudit #brandanalysis #contentstrategy #digitalmarketing #contentinsights #socialmediaanalysis #contentmonitoring #performanceanalysis #competitorresearch