by Jonathan
This workflow creates a project in Clockify that any user can track time against. Syncro should be set up with a webhook via a Notification Set for Ticket - created (for anyone). This workflow is part of an MSP collection; the original can be found here: https://github.com/bionemesis/n8nsyncro
by Jan Oberhauser
A simple workflow that lets you retrieve data from a Google Sheet via a "REST" endpoint: Wait for Webhook Call → Get data from Google Sheet → Return data. Example Sheet: https://docs.google.com/spreadsheets/d/17fzSFl1BZ1njldTfp5lvh8HtS0-pNXH66b7qGZIiGRU
by Tom
This easy-to-extend workflow automatically serves a static HTML page when a URL is accessed in a browser. Prerequisites Basic knowledge of HTML Nodes Webhook node triggers the workflow on an incoming request. Respond to Webhook node serves the HTML page in response to the webhook.
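As a sketch of what the Respond to Webhook node returns: the response body is simply an HTML string served with a text/html content type. The page markup below is an illustrative placeholder, not part of the template:

```javascript
// Sketch of the response the Respond to Webhook node produces.
// The page markup here is a made-up placeholder.
function buildHtmlResponse() {
  const html = [
    '<!DOCTYPE html>',
    '<html>',
    '<head><title>My Static Page</title></head>',
    '<body><h1>Hello from n8n</h1></body>',
    '</html>',
  ].join('\n');

  return {
    statusCode: 200,
    // Without this header, browsers may render the markup as plain text.
    headers: { 'Content-Type': 'text/html; charset=utf-8' },
    body: html,
  };
}

const response = buildHtmlResponse();
```

The Content-Type header is the important detail: it is what makes the browser render the page instead of displaying raw markup.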
by David Olusola
Overview
This workflow regularly backs up a Google Sheet by exporting its data and saving it as a new file (CSV or XLSX) in a specified folder within your Google Drive. This ensures data redundancy and historical versions.

Use Case: Critical business data backup, audit trails, historical data snapshots.

How It Works
This workflow operates in three main steps:
Scheduled Trigger: A Cron node triggers the workflow at a set interval (e.g., daily, weekly).
Read Google Sheet Data: A Google Sheets node reads all data from the specified tab of your target Google Sheet.
Upload to Google Drive: A Google Drive node takes the data read from the sheet. It converts the data into a file (e.g., CSV or XLSX format). It then uploads this file to a pre-defined folder in your Google Drive, with a dynamic filename including the date for versioning.

Setup Steps
To get this workflow up and running, follow these instructions:

Step 1: Create Google Sheets and Google Drive Credentials in n8n
In your n8n instance, go to Credentials in the left sidebar. Ensure you have a "Google Sheets OAuth2 API" credential set up. If not, create one. Ensure you have a "Google Drive OAuth2 API" credential set up. If not, create one. Make note of their Credential Names.

Step 2: Prepare Your Google Sheet and Drive Folder
Source Google Sheet: Identify the Google Sheet you want to back up. Copy its Document ID (from the URL). Note the Sheet Name (or GID) of the specific tab you want to back up.
Destination Google Drive Folder: Go to your Google Drive (drive.google.com). Create a new folder for your backups (e.g., Google Sheets Backups). Copy the Folder ID from its URL.

Step 3: Import the Workflow JSON

Step 4: Configure the Nodes
Read Google Sheet Data Node: Select your Google Sheets credential. Replace YOUR_SOURCE_GOOGLE_SHEET_ID with the ID of the Google Sheet you want to back up. Replace Sheet1 with the exact name of the tab you want to back up.
Upload Backup to Google Drive Node: Select your Google Drive credential. Replace YOUR_DESTINATION_GOOGLE_DRIVE_FOLDER_ID with the ID of the Google Drive folder where you want to store backups.
File Type: The fileType is set to csv. You can change this to xlsx if you prefer an Excel format for the backup (though CSV is often simpler for raw data backups).

Step 5: Activate and Test the Workflow
Click the "Activate" toggle button. To test immediately, click "Execute Workflow". Check your Google Drive backup folder. A new file named something like backup_Sheet1_2025-07-26.csv should appear.
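The date-stamped filename can be sketched as a small helper like the one below (in the actual node you would use an n8n expression with `$now`; the function here is just an illustration of the naming scheme):

```javascript
// Sketch: build a versioned backup filename such as backup_Sheet1_2025-07-26.csv.
// The sheet name and extension arguments are illustrative defaults.
function backupFilename(sheetName, date, extension) {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `backup_${sheetName}_${stamp}.${extension}`;
}

const name = backupFilename('Sheet1', new Date('2025-07-26T12:00:00Z'), 'csv');
// name === 'backup_Sheet1_2025-07-26.csv'
```

Including the date in the name is what gives you versioned snapshots instead of one file being overwritten on every run.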
by Baptiste Fort
Export Google Search Console Data to Airtable Automatically

If you’ve ever downloaded CSV files from Google Search Console, opened them in Excel, cleaned the weird formatting, and pasted them into a sheet just to get a simple report… this workflow is made for you.

Who Is This Workflow For?
This automation is perfect for:
SEO freelancers and consultants → who want to track client performance without wasting time on manual exports.
Marketing teams → who need fresh daily/weekly reports to check what keywords and pages are performing.
Website owners → who just want a clean way to see how their site is doing without logging into Google Search Console every day.
Basically, if you care about SEO but don't want to babysit CSV files, this workflow is your new best friend. If you need a professional n8n agency to build advanced data automation workflows like this, check out Vision IA's n8n automation services.

What Does It Do?
Here’s the big picture: It runs on a schedule (every day, or whenever you want). It fetches data directly from the Google Search Console API. It pulls 3 types of reports: By Query (keywords people used). By Page (URLs that ranked). By Date (daily performance). It splits and cleans the data so it’s human-friendly. It saves everything into Airtable, organized in three tables.
End result: every time you open Airtable, you have a neat SEO database with clicks, impressions, CTR, and average position — no manual work required.

Prerequisites
You’ll need a few things to get started: Access to Google Search Console. A Google Cloud project with the Search Console API enabled. An Airtable account to store the data. An automation tool that can connect APIs (like the one we’re using here). That’s it!

Step 1: Schedule the Workflow
The very first node in the workflow is the Schedule Trigger.
Why? → So you don’t have to press “Run” every day.
What it does → It starts the whole workflow at fixed times.
In the JSON, you can configure things like: Run every day at a specific hour (e.g., 8 AM). Or run every X hours/minutes if you want more frequent updates. This is the alarm clock of your automation ⏰.

Step 2: Set Your Domain and Time Range
Next, we define the site and the time window for the report. In the JSON, there’s a Set node with two important parameters:
domain → your website (example: https://www.vvv.fr/).
days → how many days back you want the data (default: 30).
👉 Changing these two values updates the whole workflow. Super handy if you want 7-day reports instead of 30.

Step 3: Fetch Data from Google Search Console
This is where the workflow talks to the API. There are 3 HTTP Request nodes:
Get Query Report: Pulls data grouped by search queries (keywords). Parameters in the JSON: startDate = today - 30 days, endDate = today, dimensions = "query", rowLimit = 25000 (maximum rows the API can return).
Get Page Report: Same idea, but grouped by page URLs. Parameters: dimensions = "page". Same dates and row limit.
Get Date Report: This one groups performance by date. Parameters: dimensions = "date". You get a day-by-day performance view.
Each request returns rows like this:
{ "keys": ["example keyword"], "clicks": 42, "impressions": 1000, "ctr": 0.042, "position": 8.5 }

Step 4: Split the Data
The API sends results in a big array (rows). That’s not very usable directly. So we add a Split Out node for each report. What it does: breaks the array into single items → 1 item per keyword, per page, or per date. This way, each line can be saved individually into Airtable. 👉 Think of it like opening a bag of candy and laying each one neatly on the table 🍬.

Step 5: Clean and Rename Fields
After splitting, we use Edit Fields nodes to make the data human-friendly. For example:
In the Query report → rename keys[0] into Keyword.
In the Page report → rename keys[0] into page.
In the Date report → rename keys[0] into date.
This is also where we keep only the useful fields: Keyword / page / date, clicks, impressions, ctr, position.

Step 6: Save Everything into Airtable
Finally, the polished data is sent into Airtable. In the JSON, there are 3 Airtable nodes:
Queries table → stores all the keywords.
Pages table → stores all the URLs.
Dates table → stores day-by-day metrics.
Each node is set to:
Operation = Create → adds a new record.
Base = Search Console Reports.
Table = Queries, Pages, or Dates.

Field Mapping
For Queries:
Keyword → {{ $json.Keyword }}
clicks → {{ $json.clicks }}
impressions → {{ $json.impressions }}
ctr → {{ $json.ctr }}
position → {{ $json.position }}
👉 Same logic for Pages and Dates, just replace Keyword with page or date.

Expected Output
Every time this workflow runs:
Queries table fills with fresh keyword performance data.
Pages table shows how your URLs performed.
Dates table tracks the evolution day by day.
In Airtable, you now have a complete SEO database with no manual exports.

Why This Is Awesome
🚫 No more messy CSV exports. 📈 Data is always up-to-date. 🎛 You can build Airtable dashboards, filters, and interfaces. ⚙️ Easy to adapt → just change domain or days to customize. And the best part? You can spend the time you saved on actual SEO improvements instead of spreadsheet gymnastics 💃.

Need Help Automating Your Data Workflows?
This n8n workflow is perfect for automating SEO reporting and data collection. If you want to go further with document automation, file processing, and data synchronization across your tools, our agency specializes in building custom automation systems. 👉 Explore our document automation services: Vision IA – Document Automation Agency. We help businesses automate their data workflows—from collecting reports to organizing files and syncing information across CRMs, spreadsheets, and databases—all running automatically. Questions about this workflow or other automation solutions?
Visit Vision IA or reach out for a free consultation.
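The split-and-rename steps described above (Steps 4 and 5) amount to one small transformation per report. A sketch for the Query report, matching the field mapping shown earlier (the standalone function is illustrative, not an actual n8n node):

```javascript
// Sketch: turn the Search Console API's rows array into one flat
// record per keyword, matching the Airtable field mapping above.
function rowsToQueryRecords(rows) {
  return rows.map((row) => ({
    Keyword: row.keys[0], // rename keys[0] into Keyword
    clicks: row.clicks,
    impressions: row.impressions,
    ctr: row.ctr,
    position: row.position,
  }));
}

const records = rowsToQueryRecords([
  { keys: ['example keyword'], clicks: 42, impressions: 1000, ctr: 0.042, position: 8.5 },
]);
```

For the Page and Date reports, the same transformation applies with `keys[0]` renamed to `page` or `date` instead of `Keyword`.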
by Ian Kerins
Overview This n8n template automates daily monitoring of AppSumo lifetime deals. Using ScrapeOps Proxy with JavaScript rendering to reliably fetch pages and a structured parsing pipeline, the workflow tracks new and updated deals — saving everything to Google Sheets with full deduplication and change tracking. Who is this for? SaaS enthusiasts and deal hunters who follow AppSumo regularly Founders and marketers tracking competitor tools available as lifetime deals Agencies managing software budgets who want alerts on new tools Investors or analysts monitoring the AppSumo marketplace What problem does it solve? Manually checking AppSumo for new lifetime deals is easy to forget and inconsistent. This workflow runs every day, automatically fetches the latest listings, filters by categories you care about, and keeps your Google Sheet updated — appending new deals and refreshing existing ones — so you never miss a deal. How it works A daily schedule triggers the workflow at 09:00 automatically. ScrapeOps Proxy fetches the AppSumo browse page with JavaScript rendering to bypass dynamic content blocks. The HTML is parsed into structured JSON: name, URL, prices, discount, category, rating, reviews, image, and timestamps. A filter node keeps only deals matching your target categories or keywords. Each deal is looked up in Google Sheets by its URL to check if it already exists. New deals are appended as fresh rows; existing deals have their data updated in place. Set up steps (~10–15 minutes) Register for a free ScrapeOps API key: https://scrapeops.io/app/register/n8n Install the ScrapeOps community node and add credentials. Docs: https://scrapeops.io/docs/n8n/overview/ Duplicate the Google Sheet template and paste your Sheet URL into the Lookup, Append, and Update nodes. Edit the category/keyword list inside Filter Relevant Deal Categories to match what you want to track. Run the workflow once manually to confirm results, then activate. 
Pre-conditions Active ScrapeOps account (free tier available): https://scrapeops.io/app/register/n8n n8n instance with the ScrapeOps community node installed. Docs: https://scrapeops.io/docs/n8n/overview/ Google Sheets credentials configured in n8n A duplicated Google Sheet with correct column headers matching the parser output Disclaimer This template uses ScrapeOps as a community node. You are responsible for complying with AppSumo's Terms of Use, robots.txt directives, and applicable laws in your jurisdiction. Scraping targets may change at any time; adjust render, scroll, and wait settings and parsers as needed. Use responsibly and only for legitimate business purposes.
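The append-or-update decision at the heart of this workflow can be sketched as a partition keyed on the deal URL (the `url` field name below is an assumption about the sheet's column headers):

```javascript
// Sketch: decide, per scraped deal, whether to append a new row or
// update an existing one, using the deal URL as the unique key.
// Field names here are illustrative assumptions.
function partitionDeals(scrapedDeals, existingRows) {
  const known = new Set(existingRows.map((row) => row.url));
  const toAppend = [];
  const toUpdate = [];
  for (const deal of scrapedDeals) {
    (known.has(deal.url) ? toUpdate : toAppend).push(deal);
  }
  return { toAppend, toUpdate };
}

const { toAppend, toUpdate } = partitionDeals(
  [{ url: 'https://appsumo.com/products/a' }, { url: 'https://appsumo.com/products/b' }],
  [{ url: 'https://appsumo.com/products/a' }]
);
```

Keying on the URL rather than the deal name is what makes the deduplication robust to price or discount changes: the same deal is updated in place, while genuinely new listings get fresh rows.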
by Sean Spaniel
Predict Housing Prices with a Neural Network This n8n template demonstrates how a simple Multi-Layer Perceptron (MLP) neural network can predict housing prices. The prediction is based on four key features, processed through a three-layer model. Input Layer Receives the initial data via a webhook that accepts four query parameters. Hidden Layer Composed of two neurons. Each neuron calculates a weighted sum of the inputs, adds a bias, and applies the ReLU activation function. Output Layer Contains one neuron that calculates the weighted sum of the hidden layer's outputs, adds its bias, and returns the final price prediction. Setup This template works out-of-the-box and requires no special configuration or prerequisites. Just import the workflow to get started. How to Use Trigger this workflow by sending a GET request to the webhook endpoint. Include the house features as query parameters in the URL. Endpoint: /webhook/regression/house/price Query Parameters square_feet: The total square footage of the house. number_rooms: The total number of rooms. age_in_years: The age of the house in years. distance_to_city_in_km: The distance to the nearest city center in kilometers. Example Here’s an example curl request for a 1,500 sq ft, 3-room house that is 10 years old and 5 km from the city. Request curl "https://your-n8n-instance.com/webhook/regression/house/price?square_feet=1500&number_rooms=3&age_in_years=10&distance_to_city_in_km=5" Response JSON { "price": 53095.832123960805 }
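The three-layer forward pass described above can be sketched in plain code. The weights and biases below are made-up stand-ins, not the values shipped in the template, so the output will differ from the example response:

```javascript
// Sketch of the MLP forward pass: 4 inputs -> 2 hidden ReLU neurons -> 1 output.
// All weights and biases are illustrative placeholders.
const relu = (x) => Math.max(0, x);
const dot = (w, x) => w.reduce((sum, wi, i) => sum + wi * x[i], 0);

function predictPrice(features) {
  // features = [square_feet, number_rooms, age_in_years, distance_to_city_in_km]
  const hiddenWeights = [
    [0.03, 5.0, -1.0, -2.0],
    [0.01, 2.0, -0.5, -1.0],
  ];
  const hiddenBiases = [10, 5];
  const outputWeights = [1.5, 0.8];
  const outputBias = 100;

  // Each hidden neuron: weighted sum of inputs, plus bias, through ReLU.
  const hidden = hiddenWeights.map((w, i) => relu(dot(w, features) + hiddenBiases[i]));
  // Output neuron: weighted sum of hidden outputs, plus bias, no activation.
  return dot(outputWeights, hidden) + outputBias;
}

const price = predictPrice([1500, 3, 10, 5]);
```

Note how age and distance carry negative weights here, so older or more remote houses pull the prediction down, while the ReLU clamps each hidden neuron at zero rather than letting it go negative.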
by Stephan Koning
🌊 What it Does This workflow automatically classifies uploaded files (PDFs or images) as floorplans or non‑floorplans. It filters out junk files, then analyzes valid floorplans to extract room sizes and measurements. 👥 Who it’s For Built for real estate platforms, property managers, and automation builders who need a trustworthy way to detect invalid uploads while quickly turning true floorplans into structured, reusable data. ⚙️ How it Works User uploads a file (PDF, JPG, PNG, etc.). Workflow routes the file based on type for specialized processing. A two‑layer quality check is applied using heuristics and AI classification. A confidence score determines if the file is a valid floorplan. Valid floorplans are passed to a powerful OCR/AI for deep analysis. Results are returned as JSON and a user-friendly HTML table. 🧠 The Technology Behind the Demo This MVP is a glimpse into a more advanced commercial system. It runs on a custom n8n workflow that leverages Mistral AI's latest OCR technology. Here’s what makes it powerful: Structured Data Extraction: The AI is forced to return data in a clean, predictable JSON Schema. This isn't just text scraping; it’s a reliable data pipeline. Intelligent Data Enrichment: The workflow doesn't just extract data—it enriches it. A custom script automatically calculates crucial metrics like wall surface area from the floor dimensions, even using fallback estimates if needed. Automated Aggregation: It goes beyond individual rooms by automatically calculating totals per floor level and per room type, providing immediate, actionable insights. While this demo shows the core classification and measurement (Step 1), the full commercial version includes Step 2 & 3 (Automated Offer Generation), currently in use by a client in the construction industry. 
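As a sketch of the wall-area enrichment described above (the exact formula in the commercial system is not published, so the square-room fallback and the 2.6 m default ceiling height here are assumptions):

```javascript
// Sketch: estimate wall surface area for a room. If wall lengths are
// known, use the real perimeter; otherwise fall back to treating the
// room as a square derived from the floor area. The 2.6 m default
// ceiling height is an assumed fallback, not a value from the template.
function estimateWallArea(room) {
  const height = room.ceilingHeightM ?? 2.6;
  let perimeter;
  if (room.lengthM && room.widthM) {
    perimeter = 2 * (room.lengthM + room.widthM);
  } else {
    // Fallback estimate: a square room with the same floor area.
    const side = Math.sqrt(room.floorAreaM2);
    perimeter = 4 * side;
  }
  return perimeter * height;
}

const exact = estimateWallArea({ lengthM: 5, widthM: 4, ceilingHeightM: 2.5 });
const fallback = estimateWallArea({ floorAreaM2: 16 });
```

The fallback branch is what lets the pipeline still produce a usable estimate when the OCR only recovers a room's area, not its individual wall lengths.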
Test the Live MVP 📋 Requirements Jigsaw Stack API Key n8n Instance Webhook Endpoint 🎨 Customization Adjust thresholds, fine‑tune heuristics, or swap OCR providers to better match your business needs and downstream integrations.
by PinBridge
This workflow automatically turns WordPress posts into Pinterest publish jobs using PinBridge as the publishing layer. It is designed for bloggers, publishers, affiliate sites, and content teams that already publish to WordPress and want a repeatable way to distribute that content to Pinterest without manually copying titles, descriptions, links, and images every time a post goes live. The workflow starts by querying PinBridge for existing Pins, aggregates their titles, then fetches published WordPress posts from the WordPress REST API. From there, it filters out posts that do not have featured media and skips posts whose titles already exist in PinBridge, which gives the workflow a simple duplicate-protection layer. For posts that pass that filter, the workflow builds a publish-ready payload, validates the required fields, downloads the featured image, uploads that image through PinBridge, submits the Pinterest publish job, and returns a structured success or invalid result. The goal is not just to publish. The goal is to publish in a way that is operationally clean, easy to review, and safer to run repeatedly. What problem this workflow solves A common WordPress-to-Pinterest workflow usually looks like this: a post is published in WordPress the title already exists the link already exists the featured image already exists but Pinterest publishing still happens manually That manual step creates several problems: publishing is inconsistent some posts are missed completely metadata gets copied differently every time duplicate publishing becomes easy when you rerun the process scaling beyond a handful of posts becomes annoying This workflow solves that by turning WordPress into the content source and PinBridge into the publishing layer, with n8n sitting in the middle as the orchestrator. 
When you run the workflow, it does the following: Lists existing Pins from PinBridge Aggregates their titles into a reference list Fetches published posts from WordPress Skips posts that do not have featured media Skips posts whose titles already exist in PinBridge Builds a Pinterest-ready payload from the post Validates that the required fields are present Downloads the featured image Uploads the image to PinBridge Submits the Pin publish job through PinBridge Returns a success or invalid result That gives you a practical operational loop with a built-in first layer of duplicate protection. Why PinBridge is used here This workflow is intentionally built around PinBridge instead of direct Pinterest API plumbing. PinBridge is the publishing layer in the middle. In this workflow: WordPress is the content source n8n is the orchestration layer PinBridge is the publishing layer that receives the image asset and submits the Pinterest publish job That separation keeps the workflow focused on the things n8n should be doing: reading content from the CMS filtering and validating data checking whether a post appears to have already been published downloading media passing a clean payload into the publishing layer handling the returned job submission result instead of forcing the workflow to become a full Pinterest delivery implementation. What this workflow does differently from the simpler version This updated version adds an important protection step before WordPress posts are processed: Existing Pin title check The workflow starts with the List pins node, which loads existing Pins from PinBridge. Then the Published Titles aggregate node collects those titles into a single list.
Later, inside Skip Posts Without Featured Media, the workflow checks two things at once: the WordPress post has featured media the cleaned WordPress post title is not already present in the aggregated PinBridge title list That means this workflow is no longer just “publish latest posts.” It is now closer to: publish posts that have images and do not already appear to be published to Pinterest This is still lightweight duplicate protection, not a perfect deduplication system, but it is a meaningful improvement for a community template. What you need before you begin You need five things before importing and running this workflow: An n8n instance A WordPress site with REST API access A WordPress credential in n8n A PinBridge account A connected Pinterest account and the target board ID you want to publish to Step 1: Create your PinBridge account Create your PinBridge account first. Go to pinbridge.io and register. A free account is enough to start testing the workflow. You will need: a PinBridge API key the connected Pinterest account you want to publish to To connect the Pinterest account inside PinBridge, go to: App > Accounts > Connect > Give Access Make sure the correct Pinterest account is connected before you continue. Step 2: Create a PinBridge API key Inside PinBridge, create an API key that will be used by the n8n workflow. To create a new key, go to: App > API Keys > Create When creating the key: give it a clear name such as n8n-wordpress-publish store it securely do not hardcode it into random HTTP nodes use the PinBridge n8n credential field instead After the key is created, go into n8n and create the PinBridge credential used by the PinBridge nodes in this workflow. Step 2.5: Install the PinBridge n8n community node Before this workflow can run, your n8n instance must have the PinBridge community node installed. 
This workflow uses the following PinBridge nodes: List pins** Upload Image to PinBridge** Publish to Pinterest** If the PinBridge node is not installed, these nodes will either be missing or show as unknown after import. Install from the n8n UI If your n8n instance allows community nodes: Open Settings Go to Community Nodes Click Install Enter the PinBridge package name: n8n-nodes-pinbridge Confirm the installation Restart n8n if your environment requires it After installation, re-open the workflow and confirm that the PinBridge nodes load correctly. Install in self-hosted n8n from the command line If you manage your own n8n instance, install the package in your n8n environment: npm install n8n-nodes-pinbridge Then restart your n8n instance. If you are running n8n in Docker, the exact installation method depends on how your container is built. In that case, add the package to your custom image or persistent community-node setup, then restart the container. Verify the installation After the node is installed, search for PinBridge when adding a new node in n8n. You should see the PinBridge node available. If you do not, the installation is not complete yet, or your n8n instance has not been restarted properly. Step 3: Prepare your WordPress site This workflow reads posts from the standard WordPress REST API using this pattern: /wp-json/wp/v2/posts?status=publish&_embed=wp:featuredmedia That means your WordPress site must allow your n8n credential to read: published posts embedded featured media At minimum, the workflow needs access to: post title rendered excerpt permalink featured image information If your WordPress site blocks REST API access, uses a custom security layer, or has media access restrictions, make sure your n8n credential can successfully read posts before continuing. Step 4: Import the workflow into n8n Import the workflow JSON into your n8n instance. After import, open the workflow and go through the credentialed nodes. 
You will need to connect: the WordPress credential the PinBridge credential Do not assume imported placeholder credential IDs will work automatically. They will not. Step 5: Configure the WordPress source URL Open the Get Latest WordPress Posts node and update the URL if needed. The current workflow uses a direct HTTP request to: https://www.nomadmouse.com/wp-json/wp/v2/posts?status=publish&_embed=wp:featuredmedia If you are using your own site, replace that domain with your own WordPress domain. This node is currently set up to fetch all published posts with embedded featured media data. Step 6: Configure the target Pinterest account and board Open the Publish to Pinterest node. You must configure: the correct accountId the correct boardId This workflow publishes to a single fixed board. That means every qualifying WordPress post will be submitted to the same Pinterest board unless you later add routing logic. The publish node also appends UTM parameters to the post link automatically: ?utm_source=pinterest&utm_medium=social That is useful if you want cleaner attribution in your analytics. Workflow logic, node by node This section explains exactly how the current workflow behaves. 1. Manual Trigger The workflow starts manually. This is the right choice for a community template because it makes first-run testing easier and keeps the setup predictable. You can later replace it with: a schedule trigger a webhook trigger a cron-based polling flow 2. List pins This PinBridge node loads the existing Pins that are already known to PinBridge. This is the first important difference from the earlier version of the template. The workflow now begins by checking what has already been published. 3. Published Titles This aggregate node collects the titles returned by List pins into a single list. That list is later used to decide whether a WordPress post should be skipped. 4. 
Get Latest WordPress Posts This node fetches published WordPress posts from the WordPress REST API with embedded featured media. This is the source of content for the rest of the workflow. 5. Skip Posts Without Featured Media Despite the name, this node now does two checks: it verifies that the WordPress post has featured media with a usable source URL it verifies that the cleaned WordPress post title is not already present in the PinBridge title list If either of those checks fails, the post is not processed further. This means the workflow now skips: posts without featured images posts that appear to already be published based on title matching 6. Build Pin Payload from Post This node maps WordPress fields into a Pinterest-ready payload. It builds: post_id title description link_url image_url alt_text A practical note: the actual image used for download later comes from the embedded media path in the Download Featured Image node, not from the image_url field created here. In this workflow, image_url mainly exists to support validation and payload completeness. 7. Validate Required Fields This node verifies that the workflow has the minimum data needed to continue. The required fields are: title description link_url image_url If any are missing, the workflow goes to the invalid branch instead of attempting the publish process. 8. Download Featured Image This node downloads the full-size WordPress featured image as a file. That file is what gets sent into PinBridge as the asset upload input. 9. Upload Image to PinBridge This sends the downloaded featured image into PinBridge. This is the handoff point between WordPress media and the Pinterest publishing layer. 10. Publish to Pinterest This submits the Pin publish job using PinBridge. 
The node uses: the fixed accountId the fixed boardId the post title the excerpt-based description the canonical link with appended UTM parameters the alt text the dominant color field if present At this stage, the workflow is recording successful job submission, not final downstream delivery confirmation. That distinction matters. 11. Build Success Result If the publish request succeeds, the workflow builds a clean result object containing: post_id title link_url job_id status = submitted submitted_at error_message = '' This gives you a structured output you can later log, store, or notify from. 12. Build Invalid Result If required fields are missing, the workflow creates an invalid result object instead of attempting submission. That protects the publishing path from clearly incomplete content. Why this workflow still stops at job submission This is deliberate. This template is focused on the WordPress-to-PinBridge submission phase, not the full async lifecycle. That keeps the first version understandable and easy to run. A separate workflow should handle: webhook verification final publish confirmation publish failure notifications retries post-submission auditing That separation is cleaner and more realistic operationally. Expected first-run behavior When you run the workflow for the first time: existing PinBridge titles are loaded first WordPress posts with no featured image are skipped WordPress posts whose titles already exist in PinBridge are skipped valid new posts are converted into publish jobs successful submissions return a PinBridge job_id invalid posts return an invalid result object This means not every WordPress post fetched from the API will go through the publish path. That is intentional. How to test safely Do not start by pointing this at a very large live content set and assuming the filter is perfect. 
Start with a small controlled test: one post with a featured image and a new title one post with no featured image one post whose title already matches an existing Pin title Then run the workflow manually. A good outcome looks like this: the no-image post is skipped the already-published-title post is skipped the new post is submitted successfully and returns a job_id That proves the key branches are working. Common setup mistakes The WordPress domain was not replaced If you import this template and leave the original WordPress URL in place, you will be reading the wrong site. The post title match logic is too strict or too loose This workflow uses title matching as a duplicate-protection shortcut. That is useful, but not perfect. If two unrelated posts share the same title, the newer one may be skipped. If the title changes slightly, the workflow may treat it as new. This is acceptable for a starter template, but you should know the limitation. The board ID is wrong A board name is not enough. Use the real board ID expected by PinBridge. The wrong PinBridge account ID is used If the account context is wrong, the publish node may fail even though the rest of the workflow looks fine. The post excerpt is empty This workflow builds the description from the rendered WordPress excerpt. If your site does not use excerpts consistently, you may want to add a fallback from post content later. The featured image path in the payload differs from the actual download source The Build Pin Payload from Post node currently stores image_url from _links, while the actual download node uses _embedded['wp:featuredmedia'][0].media_details.sizes.full.source_url. That works in the current flow because the download node uses the embedded media URL directly, but it is something you should be aware of if you refactor later.
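The title-matching check described above can be sketched as a normalization step plus a set lookup. The exact cleaning rules in the template may differ; lowercasing, trimming, and whitespace collapsing here are assumptions about what "cleaned title" means:

```javascript
// Sketch: lightweight duplicate protection by title. The normalization
// rules (lowercase, trim, collapse whitespace) are illustrative
// assumptions, not the template's exact implementation.
const cleanTitle = (title) => title.toLowerCase().replace(/\s+/g, ' ').trim();

function shouldPublish(post, existingPinTitles) {
  const published = new Set(existingPinTitles.map(cleanTitle));
  const hasImage = Boolean(post.featuredImageUrl);
  return hasImage && !published.has(cleanTitle(post.title));
}

const existing = ['10 Hidden Beaches in Portugal'];
const duplicate = shouldPublish(
  { title: '10 Hidden  Beaches in Portugal ', featuredImageUrl: 'https://example.com/a.jpg' },
  existing
);
const fresh = shouldPublish(
  { title: 'Packing Light for Winter', featuredImageUrl: 'https://example.com/b.jpg' },
  existing
);
```

This also illustrates the limitation noted above: any two posts that normalize to the same title are treated as duplicates, while a slight title rewording slips past the check.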
Recommended extensions after initial success Once the base workflow is working, the best next extensions are: Add schedule-based polling Run the workflow every 15 minutes, every hour, or on your editorial cadence. Add stronger duplicate protection Instead of title matching only, track: WordPress post ID canonical URL stored publish state in Google Sheets, Airtable, or a database Add board routing Map WordPress categories or tags to different Pinterest boards. Add final status handling Use a separate workflow to receive PinBridge webhook callbacks, verify signatures, and record the final publish state. Add notifications Send a Slack or Telegram message after each successful submission or failure. Add excerpt fallback logic If a post has no excerpt, derive a safe short description from the rendered content. Minimum data contract for successful runs A post is publishable only if all of the following are true: the post exists the post has a title the post has a usable excerpt-based description the post has a permalink the post has featured media the featured image can be downloaded the post title does not already appear in the existing PinBridge title list the PinBridge credential is valid the PinBridge account ID is correct the target board ID is correct If any of those are false, the post should not be treated as publish-ready. Summary This workflow is a practical WordPress-to-Pinterest starter template with a built-in lightweight duplicate check. It uses: WordPress as the content source n8n as the orchestration layer PinBridge as the Pinterest publishing layer Compared to the earlier version, this one adds an important operational improvement: it checks existing PinBridge titles before attempting to publish new WordPress posts. That makes it more useful as a real starter workflow, not just a happy-path demo.
If your WordPress site is reachable, your posts have featured images and excerpts, your PinBridge credential is configured, and your Pinterest account and board are correct, you should be able to run this workflow successfully from start to finish.

Quick setup checklist

Before running the workflow, confirm all of the following:

- PinBridge account created
- PinBridge API key created
- PinBridge credential added in n8n
- PinBridge community node installed
- WordPress credential added in n8n
- the WordPress domain in the HTTP request node is correct
- correct Pinterest account ID entered in the publish node
- correct target board ID entered in the publish node
- WordPress REST API is reachable
- the test post has a title, a permalink, a featured image, and a usable excerpt
- existing PinBridge titles can be listed successfully

If all of that is true, the workflow is ready to run.
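The minimum data contract above can be expressed as a single guard function. This is a hypothetical sketch, not the template's actual node: the field names follow the WordPress REST API response shape with `_embed` (`title.rendered`, `excerpt.rendered`, `link`, `_embedded['wp:featuredmedia']`), and the credential/board checks are left to the publish node itself.

```javascript
// Returns true only when a post satisfies the content-side parts of
// the data contract: title, excerpt, permalink, featured media, and
// a title not already present on Pinterest.
function isPublishReady(post, existingPinTitles) {
  const media = post._embedded && post._embedded["wp:featuredmedia"];
  const hasImage = Array.isArray(media) && media.length > 0;
  const title = post.title && post.title.rendered;
  const excerpt = post.excerpt && post.excerpt.rendered;
  const titleTaken = existingPinTitles.some(
    (t) => t.trim().toLowerCase() === String(title || "").trim().toLowerCase()
  );
  return Boolean(title && excerpt && post.link && hasImage && !titleTaken);
}
```

A guard like this makes the skip branches explicit instead of leaving them implicit in node wiring.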
by automedia
Monitor RSS Feeds, Extract Full Articles, and Save to Supabase

Overview

This workflow solves a common problem with RSS feeds: they often only provide a short summary or snippet of the full article. This template automatically monitors a list of your favorite blog RSS feeds, filters for new content, visits the article page to extract the entire blog post, and then saves the structured data into a Supabase database. It's designed for content creators, marketers, researchers, and anyone who needs to build a personal knowledge base, conduct competitive analysis, or power a content aggregation system without manual copy-pasting.

Use Cases

- **Content Curation**: automatically gather full-text articles for a newsletter or social media content.
- **Personal Knowledge Base**: create a searchable archive of articles from experts in your field.
- **Competitive Analysis**: track what competitors are publishing without visiting their blogs every day.
- **AI Model Training**: collect a clean, structured dataset of full-text articles to fine-tune an AI model.

How It Works

1. **Scheduled Trigger**: the workflow runs automatically on a set schedule (default is once per day).
2. **Fetch RSS Feeds**: it takes a list of RSS feed URLs you provide in the "blogs to track" node.
3. **Filter for New Posts**: it checks the publication date of each article and only continues if the article is newer than a specified age (e.g., published within the last 60 days).
4. **Extract Full Content**: for each new article, the workflow uses the Jina AI Reader URL (https://r.jina.ai/) to scrape the full, clean text from the blog post's webpage. This is a free and powerful way to get past the RSS snippet limit.
5. **Save to Supabase**: finally, it organizes the extracted data and saves it to your chosen Supabase table.
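The age filter in step 3 can be sketched in a few lines. This is an illustrative sketch, not the template's exact node logic: `isoDate` is assumed as the item's publication field (RSS items may expose `pubDate` instead), and the cutoff is computed from the configured maximum age in days.

```javascript
// Keep only feed items published within the last maxAgeDays.
// `now` is injectable so the filter is easy to test deterministically.
function filterRecentItems(items, maxAgeDays, now = new Date()) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  return items.filter((item) => new Date(item.isoDate).getTime() >= cutoff);
}
```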
The following data is saved by default:

- title
- source_url (the link to the original article)
- content_snippet (the full extracted article text)
- published_date
- creator (the author)
- status (a static value you can set, e.g., "new")
- content_type (a static value you can set, e.g., "blog")

Setup Instructions

You can get this template running in about 10-15 minutes.

1. **Set Up Your RSS Feed List**: Navigate to the "blogs to track" Set node. In the source_identifier field, replace the example URLs with the RSS feed URLs for the blogs you want to monitor. You can add as many as you like. Tip: the best way to find a site's RSS feed is to use a tool like Perplexity or a web-browsing enabled LLM.

   ```
   // Example list of RSS feeds
   ['https://blog.n8n.io/rss', 'https://zapier.com/blog/feeds/latest/']
   ```

2. **Configure the Content Age Filter**: Go to the "max_content_age_days" Set node. Change the value from the default 60 to your desired timeframe (e.g., 7 to only get articles from the last week).

3. **Connect Your Storage Destination**: The template uses the "Save Blog Data to Database" Supabase node. First, ensure you have a table in your Supabase project with columns to match the data (e.g., title, source_url, content_snippet, published_date, creator, etc.). In the n8n node, create new credentials using your Supabase Project URL and Service Role Key. Select your table from the list and map the data fields from the workflow to your table columns. Want to use something else? You can easily replace the Supabase node with a Google Sheets, Airtable, or the built-in n8n Table node. Just drag the final connection to your new node and configure the field mapping.

4. **Set Your Schedule**: Click on the first node, "Schedule Trigger". Adjust the trigger interval to your needs. The default is every day at noon.

5. **Activate Workflow**: Click the "Save" button, then toggle the workflow to Active. You're all set!
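The Jina AI Reader step works by simple URL prefixing: the article URL is appended to `https://r.jina.ai/`. A hypothetical helper (not part of the template) showing the shape of the request URL:

```javascript
// Build the Jina AI Reader URL for an article: the reader fetches the
// page at the given URL and returns its clean, readable text.
function jinaReaderUrl(articleUrl) {
  return "https://r.jina.ai/" + articleUrl;
}
```

In the workflow this is just an expression on an HTTP Request node's URL field, but the helper makes the mechanism explicit.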
by kartik ramachandran
Track Azure API failures with Application Insights correlation

Template Name
Track Azure API failures with App Insights, APIM, and Service Bus correlation

Description
Troubleshoot failed API calls by correlating Application Insights telemetry with API Management logs and Service Bus messages. Query failures from the last 24 hours to 30 days, identify root causes, and generate detailed failure reports with full context.

Who's it for
DevOps Engineers, Site Reliability Engineers, API developers, Support teams, and Platform engineers troubleshooting production incidents.

How it works

1. **Set Configuration**: stores credentials and query parameters
2. **Query Application Insights**: a single node retrieves APIM requests, Service Bus traces, and exceptions via OAuth2
3. **Correlate and Analyze Data**: links requests to Service Bus messages and exceptions via operation IDs
4. **Generate Report**: creates a detailed failure analysis with root cause data
5. **Output**: Markdown/HTML reports, Excel export, or JSON API response

Set up steps

Prerequisites

- Application Insights resource in Azure
- APIM and Service Bus logging to App Insights
- Service Principal with "Monitoring Reader" role

Setup

1. **Create Azure Service Principal**

   Azure CLI:

   ```
   # Create service principal
   az ad sp create-for-rbac --name "n8n-appinsights-tracker" \
     --role "Monitoring Reader" \
     --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Insights/components/{app-insights-name}
   ```

   Save the output: appId (client ID), password (client secret), tenant (tenant ID).

   Also get your Application Insights App ID:

   ```
   az monitor app-insights component show --app {name} -g {rg} --query appId -o tsv
   ```

2. **Configure Workflow**

   Open the "Set Configuration" node and update:

   - appId: Application Insights Application ID
   - clientId: Service Principal client ID (from step 1)
   - clientSecret: Service Principal password (from step 1)
   - tenantId: Azure AD tenant ID (from step 1)
   - timeRange: 24h, 7d, or 30d
   - includeSuccessful: false (failures only) or true (all requests)

3. **Verify Dependencies**

   Ensure diagnostic logging is configured:

   - **APIM**: Diagnostics → Application Insights enabled
   - **Service Bus**: Diagnostic settings → Send to App Insights
   - **Custom Dimensions**: capture MessageId and EntityName for Service Bus

4. **Test Workflow**

   Click Manual Trigger, review the "Generate Report" node output, and verify that correlated data appears.

5. **Enable Output Options (Optional)**

   - **Excel**: enable "Export to Excel" for downloadable reports
   - **Webhook**: enable "Respond to Webhook" for API integration

Requirements

Azure Requirements

- Azure Application Insights resource
- Service Principal with "Monitoring Reader" role
- APIM and Service Bus configured to log to Application Insights

n8n Requirements

- n8n instance (cloud or self-hosted, version 1.0+)
- Ability to configure credentials in the Set Configuration node

How to customize

Time Range Filters
Modify timeRange in "Set Configuration": 24h (last 24 hours, default), 7d (last 7 days), or 30d (last 30 days).

Filter by API or Operation
Edit the KQL queries in the "Query Application Insights" Code node.
Find the apimQuery variable and modify:

```
const apimQuery = `
requests
| where timestamp > ago(${timeRange})
| where name contains "specific-api-name"
| where customDimensions['apim-operation-name'] == "GetOrders"
| where resultCode >= 400
| take 500
| project timestamp, name, url, resultCode, duration, operation_Id, customDimensions
`;
```

Filter by Status Code
Modify the where resultCode condition:

```
| where resultCode == 500                        // Only 500 errors
| where resultCode >= 400 and resultCode < 500   // 4xx errors
```

Include Custom Dimensions
Extend the project statement in any query:

```
| extend customField = tostring(customDimensions['YourCustomField'])
| project ..., customField
```

Adjust Result Limits
Change take 500 in each query to retrieve more or fewer results:

```
| take 1000 // Get 1000 results
```

Add Additional Queries
Add new queries in the Code node, for example dependency tracking:

```
const dependencyQuery = `
dependencies
| where timestamp > ago(${timeRange})
| where success == false
| extend operationId = tostring(operation_Id)
| take 500
| project timestamp, name, type, target, duration, success, operation_Id
`;

const dependencyResponse = await fetch(
  `https://api.applicationinsights.io/v1/apps/${appId}/query?query=${encodeURIComponent(dependencyQuery)}`,
  { headers: { 'Authorization': `Bearer ${accessToken}` } }
);
```

Troubleshooting

- **"Query failed" or 401 error**: verify the Service Principal has the "Monitoring Reader" role; check clientId/clientSecret/tenantId in the Set Configuration node
- **"Token acquisition failed"**: confirm credentials are correct and verify the Service Principal is not expired
- **"No data returned"**: check the time range, verify APIM/Service Bus log to App Insights, confirm diagnostic settings are enabled, and ensure the Application Insights App ID is correct
- **"Operation IDs don't correlate"**: ensure custom dimensions are captured and verify telemetry initializers are configured in APIM and Service Bus
- **"Missing Service Bus data"**: enable Service Bus diagnostic logs and add MessageId to custom dimensions in the logging configuration

Use Cases

- **Incident Response**: query the last 24h of failures during an incident to identify affected APIs and root causes
- **Pattern Analysis**: use the 7d or 30d range to identify recurring error patterns and systemic issues
- **Performance Investigation**: sort by duration to find slow API calls and optimize bottlenecks
- **Customer Support**: search by operation ID from a customer complaint to trace the full request lifecycle
- **Compliance Reporting**: export to Excel for an audit trail of failed transactions

Data Structure

Summary Object

```
{
  "totalRequests": 150,
  "failedRequests": 12,
  "successfulRequests": 138,
  "requestsWithExceptions": 8,
  "requestsWithServiceBus": 45,
  "averageDuration": 234.5,
  "timeRange": "24h"
}
```

Failed Request Object

```
{
  "timestamp": "2026-01-19T10:30:00Z",
  "apiName": "POST /api/orders",
  "url": "https://api.example.com/orders",
  "resultCode": 500,
  "duration": 1523,
  "operationId": "abc-123-def",
  "apimServiceName": "prod-apim",
  "apimOperationName": "CreateOrder",
  "serviceBusMessageIds": ["msg-456", "msg-789"],
  "exceptionMessages": ["NullReferenceException: Object not set"],
  "hasException": true,
  "isFailure": true
}
```

KQL Query Examples

Failed requests with exceptions

```
requests
| where timestamp > ago(24h)
| where resultCode >= 400
| join kind=inner (
    exceptions
    | where timestamp > ago(24h)
  ) on operation_Id
| project timestamp, name, resultCode, operation_Id, outerMessage
```

Service Bus message failures

```
traces
| where timestamp > ago(24h)
| where message contains "Failed to process message"
| extend messageId = tostring(customDimensions['MessageId'])
| project timestamp, message, messageId
```

APIM throttling analysis

```
requests
| where timestamp > ago(7d)
| where resultCode == 429
| summarize count() by bin(timestamp, 1h), tostring(customDimensions['apim-subscription-id'])
```

Integration Examples

Python - Query API

```
import requests

url = "https://your-n8n.com/webhook/app-insights-tracker"
response = requests.get(url)
data = response.json()

print(f"Failed Requests: {data['data']['failedRequests']}")
for error in data['data']:
    print(f" {error['error']}: {error['count']} times")
```

PowerShell - Alert on Failures

```
$data = Invoke-RestMethod -Uri "https://your-n8n.com/webhook/app-insights-tracker"
$failures = $data.data.summary.failedRequests

if ($failures -gt 10) {
    Send-MailMessage -To "oncall@company.com" `
        -Subject "ALERT: $failures API failures detected" `
        -Body $data.data.report
}
```

Scheduled Monitoring
Run the workflow every hour with a Schedule Trigger to continuously monitor for failures and alert when thresholds are exceeded.

Resources
Application Insights API | KQL Reference | APIM Logging

Category: Monitoring, Observability, DevOps
Difficulty: Intermediate
Setup Time: 10 minutes
n8n Version: 1.0+
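The "Correlate and Analyze Data" step described in this template links requests, Service Bus traces, and exceptions by their shared operation ID. A minimal sketch of that join, assuming the field names used in the KQL queries above (`operation_Id`) and illustrative output field names:

```javascript
// Group telemetry rows by operation_Id, then attach the matching
// traces and exceptions to each request record.
function correlate(requests, traces, exceptions) {
  const byOp = (rows) => {
    const map = new Map();
    for (const row of rows) {
      if (!map.has(row.operation_Id)) map.set(row.operation_Id, []);
      map.get(row.operation_Id).push(row);
    }
    return map;
  };
  const traceMap = byOp(traces);
  const exMap = byOp(exceptions);
  return requests.map((req) => {
    const exs = exMap.get(req.operation_Id) || [];
    return {
      ...req,
      serviceBusTraces: traceMap.get(req.operation_Id) || [],
      exceptions: exs,
      hasException: exs.length > 0,
    };
  });
}
```

Using a Map keyed by operation ID keeps the join linear rather than scanning every trace per request.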
by David Olusola
🎓 n8n Learning Hub — AI-Powered YouTube Educator Directory

📋 Overview

This workflow demonstrates how to use n8n Data Tables to create a searchable database of educational YouTube content. Users can search for videos by topic (e.g., "voice", "scraping", "lead gen") and receive formatted recommendations from top n8n educators.

What This Workflow Does:

- **Receives search queries** via webhook (e.g., topic: "voice agents")
- **Processes keywords** using JavaScript to normalize search terms
- **Queries a Data Table** to find matching educational videos
- **Returns formatted results** with video titles, educators, difficulty levels, and links
- **Populates the database** with a one-time setup workflow

🎯 Key Features

- ✅ **Data Tables Introduction**: learn how to store and query structured data
- ✅ **Webhook Integration**: accept external requests and return JSON responses
- ✅ **Keyword Processing**: simple text normalization and keyword matching
- ✅ **Batch Operations**: use Split in Batches to populate tables efficiently
- ✅ **Frontend Ready**: easy to connect with Lovable, Replit, or custom UIs

🛠️ Setup Guide

Step 1: Import the Workflow

1. Copy the workflow JSON
2. In n8n, go to Workflows → Import from File or Import from URL
3. Paste the JSON and click Import

Step 2: Create the Data Table

The workflow uses a Data Table called n8n_Educator_Videos with these columns:

- **Educator** (text): creator name
- **video_title** (text): video title
- **Difficulty** (text): Beginner/Intermediate/Advanced
- **YouTubeLink** (text): full YouTube URL
- **Description** (text): video summary for search matching

To create it:

1. Go to Data Tables in your n8n instance
2. Click + Create Data Table
3. Name it n8n_Educator_Videos
4. Add the 5 columns listed above

Step 3: Populate the Database

1. Click on the "When clicking 'Execute workflow'" node (bottom branch)
2. Click Execute Node to run the setup
3. This will insert all 9 educational videos into your Data Table

Step 4: Activate the Webhook

1. Click on the Webhook node (top branch)
2. Copy the Production URL (looks like:
https://your-n8n.app.n8n.cloud/webhook/1799531d-...)
3. Click Activate on the workflow
4. Test it with a POST request:

```
curl -X POST https://your-n8n.app.n8n.cloud/webhook/YOUR-WEBHOOK-ID \
  -H "Content-Type: application/json" \
  -d '{"topic": "voice"}'
```

🔍 How the Search Works

Keyword Processing Logic

The JavaScript node normalizes search queries:

- **"voice", "audio", "talk"** → matches voice agent tutorials
- **"lead", "lead gen"** → matches lead generation content
- **"scrape", "data", "scraping"** → matches web scraping tutorials

The Data Table query uses LIKE matching on the Description field, so partial matches work great.

Example Queries:

```
{"topic": "voice"}     // Returns Eleven Labs Voice Agent
{"topic": "scraping"}  // Returns 2 scraping tutorials
{"topic": "avatar"}    // Returns social media AI avatar videos
{"topic": "advanced"}  // Returns all advanced-level content
```

🎨 Building a Frontend with Lovable or Replit

Option 1: Lovable (lovable.dev)

Lovable is an AI-powered frontend builder perfect for quick prototypes.

Prompt for Lovable:

```
Create a modern search interface for an n8n YouTube learning hub:
- Title: "🎓 n8n Learning Hub"
- Search bar with placeholder "Search for topics: voice, scraping, RAG..."
- Submit button that POSTs to webhook: [YOUR_WEBHOOK_URL]
- Display results as cards showing:
  - 🎥 Video Title (bold)
  - 👤 Educator name
  - 🧩 Difficulty badge (color-coded)
  - 🔗 YouTube link button
  - 📝 Description
- Design: dark mode, modern glassmorphism style, responsive grid layout
```

Implementation Steps:

1. Go to lovable.dev and start a new project
2. Paste the prompt above
3. Replace [YOUR_WEBHOOK_URL] with your actual webhook
4. Export the code or deploy directly

Option 2: Replit (replit.com)

Use Replit's HTML/CSS/JS template for more control.
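Independent of which frontend you pick, the keyword-processing step described under "Keyword Processing Logic" can be sketched roughly like this. The synonym table below is illustrative, not the template's exact mapping, and the function name is an assumption:

```javascript
// Map a free-text topic onto the canonical search term that is
// matched (via LIKE) against the Description column.
function normalizeTopic(topic) {
  const t = String(topic || "").trim().toLowerCase();
  const synonyms = {
    audio: "voice",
    talk: "voice",
    "lead gen": "lead",
    scrape: "scraping",
    data: "scraping",
  };
  return synonyms[t] || t;
}
```

In the workflow this logic lives in the Code node between the Webhook and the Data Table query.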
HTML Structure:

```
<!DOCTYPE html>
<html>
<head>
  <title>n8n Learning Hub</title>
  <style>
    body { font-family: Arial; max-width: 900px; margin: 50px auto; }
    #search { padding: 10px; width: 70%; font-size: 16px; }
    button { padding: 10px 20px; font-size: 16px; }
    .video-card { border: 1px solid #ddd; padding: 20px; margin: 20px 0; }
  </style>
</head>
<body>
  <h1>🎓 n8n Learning Hub</h1>
  <input id="search" placeholder="Search: voice, scraping, RAG..." />
  <button onclick="searchVideos()">Search</button>
  <div id="results"></div>

  <script>
    async function searchVideos() {
      const topic = document.getElementById('search').value;
      const response = await fetch('YOUR_WEBHOOK_URL', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({topic})
      });
      const data = await response.json();
      document.getElementById('results').innerHTML = data.Message || 'No results';
    }
  </script>
</body>
</html>
```

Option 3: Base44 (No-Code Tool)

If using Base44 or similar no-code tools:

1. Create a Form with a text input (name: topic)
2. Add a Submit Action → HTTP Request
3. Set Method: POST, URL: your webhook
4. Map form data: {"topic": "{{topic}}"}
5. Display the response in a Text Block using {{response.Message}}

📊 Understanding Data Tables

Why Data Tables?
- **Persistent Storage**: data survives workflow restarts
- **Queryable**: use conditions (equals, like, greater than) to filter
- **Scalable**: handle thousands of records efficiently
- **No External DB**: everything stays within n8n

Common Operations:

- **Insert Row**: add new records (used in the setup branch)
- **Get Row(s)**: query with filters (used in the search branch)
- **Update Row**: modify existing records by ID
- **Delete Row**: remove records

Best Practices:

- Use descriptive column names
- Include a searchable text field (like Description)
- Keep data normalized (avoid duplicate entries)
- Use the "Split in Batches" node for bulk operations

🚀 Extending This Workflow

Ideas to Try:

1. **Add More Educators**: expand the video database
2. **Category Filtering**: add a Category column (Automation, AI, Scraping)
3. **Difficulty Sorting**: let users filter by skill level
4. **Vote System**: add upvote/downvote columns
5. **Analytics**: track which topics are searched most
6. **Admin Panel**: build a form to add new videos via webhook

Advanced Features:

- **AI-Powered Search**: use OpenAI embeddings for semantic search
- **Thumbnail Scraping**: fetch YouTube thumbnails via API
- **Auto-Updates**: periodically check for new videos from educators
- **Personalization**: track user preferences in a separate table

🐛 Troubleshooting

- **Problem**: Webhook returns empty results.
  **Solution**: Check that the Description field contains searchable keywords.
- **Problem**: Database is empty.
  **Solution**: Run the "When clicking 'Execute workflow'" branch to populate data.
- **Problem**: Frontend not connecting.
  **Solution**: Verify the webhook is activated and the URL is correct (use Test mode first).
- **Problem**: Search too broad or too narrow.
  **Solution**: Adjust the keyword logic in the "Load Video DB" node.

📚 Learning Resources

Want to learn more about the concepts in this workflow?
- **Data Tables**: n8n Data Tables Documentation
- **Webhooks**: Webhook Node Guide
- **JavaScript in n8n**: "Every N8N JavaScript Function Explained" (see database)

🎓 What You Learned

By completing this workflow, you now understand:

- ✅ How to create and populate Data Tables
- ✅ How to query tables with conditional filters
- ✅ How to build webhook-based APIs in n8n
- ✅ How to process and normalize user input
- ✅ How to format data for frontend consumption
- ✅ How to connect n8n with external UIs

Happy Learning! 🚀

Built with ❤️ using n8n Data Tables