by Inga Kruger
GBP Exchange Rate Email Workflow

Sends an email containing a table of GBP exchange rates for a few currency values each day the user clicks the button to run it.

How it works:
- Exchange rate data is fetched from an API
- The data is converted into an HTML table
- The table is sent in the body of an email
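A minimal sketch of the "API JSON to HTML table" step, written for an n8n Code node in "Run Once for All Items" mode. It assumes the rate API returns something like `{ base: "GBP", rates: { USD: 1.27, EUR: 1.17 } }`; adjust the field names to whatever your provider actually returns.

```javascript
// Sketch: convert an exchange-rate API response into an HTML table for the email body.
// Assumes the previous node returned { base, rates } — adapt to your API's shape.
const { base, rates } = $input.first().json;

const rows = Object.entries(rates)
  .map(([currency, rate]) => `<tr><td>${currency}</td><td>${rate.toFixed(4)}</td></tr>`)
  .join('');

const html = `
  <table border="1" cellpadding="6">
    <tr><th>Currency</th><th>1 ${base} equals</th></tr>
    ${rows}
  </table>`;

// The Send Email node can then reference {{ $json.html }} as its HTML body.
return [{ json: { html } }];
```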
by keisha kalra
Try It Out!

This n8n template creates a fully automated Instagram content schedule using AI and Google Sheets. It is perfect for content creators, marketing teams, or local businesses looking to organize and scale their social media posting.

How it works

The workflow starts by reading two sets of inputs from a Google Sheet:
- Your content strategy inputs (Pillar, Objective, Frequency, Format, Structure, Examples).
- A list of scraped blog posts with title, URL, and description (fetched from your website).

Blog posts are scraped using Apify and parsed to extract key fields, which are stored in a tab labeled "Input (blog month)". You can assign a preferred posting month for each blog (e.g. fall blog posts get tagged for September). The workflow then merges both inputs and extracts the relevant information that ChatGPT uses for further enrichment.

AI Scheduling & Personalization

Once merged, the workflow loops through each content item and:
- Identifies if the scheduled post falls on or near a holiday (like Mother's Day) and adjusts the content accordingly.
- Attaches a reference tool to guide structure and tone, based on a library of post examples.
- Sends the content to an AI Agent (using GPT-4, but customizable) that generates:
  - A compelling Instagram caption
  - A visual description
  - Hashtags
  - Suggested post date, day, content pillar, and format (carousel, reel, image, etc.)

Output

All generated content, including captions, structure, dates, hashtags, and pillar, is exported into a tab titled Output in your Google Sheet. The final schedule is ready for manual review, editing, or publishing to social media.

How to use

The workflow uses a manual trigger to start, but you can replace it with a Webhook, cron job, or form submission. Add or edit your content strategy in Google Sheets.

How to Set Up

Initial Input Tab – Define your content pillars and structure
- Create a tab named "Input" or "Strategy"
- Include these columns:
  - Pillar: e.g., Family images
  - Objective: e.g., Showcase images
  - Frequency: e.g., Bi-weekly
  - Content Form: e.g., Images, Reels
  - Structure: brief description of the expected layout (e.g., carousel Q&A, single photo)
  - Examples: prompts or questions to guide the AI (e.g., Why do you think families should do a session?)

Input (blog month) Tab – Store scraped blog content
- Include these columns:
  - URL: direct link to the blog post
  - Title: blog post title
  - Description: short summary of the post
  - Preferred Month: month you want it posted (e.g., August, September)
- This sheet is partially auto-filled by the workflow (except for Preferred Month).

Output Tab – Final scheduled content
- Include these columns:
  - Date: scheduled posting date (YYYY-MM-DD)
  - Day: day of the week
  - Pillar: content category assigned
  - Format: e.g., Images, Reels, Carousel
  - Description: visual summary
  - Caption: Instagram-ready caption
  - Hashtags: complete hashtag block

To use the Apify HTTP Request node (a rough request sketch follows this list):
1. Drag an HTTP Request node into your n8n workflow.
2. Set the Method and URL based on how you're using Apify:
   - Use POST if you want to run an actor live with dynamic input (e.g. scrape blog posts in real time).
   - Use GET if you want to retrieve results from a completed or static dataset run (faster and cheaper if you're reusing previous data).
3. Configure query or body parameters:
   - Include your Apify API token for authentication (e.g. token=YOUR_API_KEY).
   - For POST: include an input object with any required actor settings (e.g., the blog URL to scrape).
   - For GET: specify the dataset ID in the URL.
4. Test the node to ensure you're retrieving the blog titles, descriptions, and URLs as expected.
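The snippet below is a rough Node.js (18+) equivalent of what the HTTP Request node sends to Apify, shown for both the POST and GET variants. The actor ID, dataset ID, token, and input fields are placeholders; check the API tab of your Apify actor for the exact endpoint and input schema.

```javascript
// Sketch of the two Apify request styles (run inside an async context or ES module).
const APIFY_TOKEN = 'YOUR_API_KEY';

// POST: run an actor live and wait for its dataset items
const run = await fetch(
  `https://api.apify.com/v2/acts/YOUR_ACTOR_ID/run-sync-get-dataset-items?token=${APIFY_TOKEN}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // "startUrls" is an illustrative input field — use your actor's real input schema
    body: JSON.stringify({ startUrls: [{ url: 'https://yourwebsite.com/blog' }] }),
  }
);
console.log(await run.json()); // e.g. [{ title, url, description }, ...]

// GET: reuse a dataset from a previous run (faster and cheaper)
const items = await fetch(
  `https://api.apify.com/v2/datasets/YOUR_DATASET_ID/items?token=${APIFY_TOKEN}`
);
console.log(await items.json());
```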
Requirements
- Apify account for scraping blog posts
- OpenAI key (e.g. GPT-4) or another model of your choice
- Google Sheets credentials

Example Use Cases
- A photographer repurposing blogs into Instagram carousels
- A nonprofit automatically generating seasonal posts
- A small team managing multi-pillar content across weeks or months

Need Help?

Join the n8n Discord or ask in the n8n Forum!

Happy Content Making! 📅✨
by Aymeric Besset
> 🛠️ Note: This workflow uses a custom Mastodon API request. Ensure your server supports bookmark access and that your access token has the right permissions. OAuth or token-based credentials must be configured.

🧑💼 Who is this for?

This workflow is ideal for digital researchers, social media users, and knowledge workers who want to automatically archive Mastodon bookmarks into their Raindrop.io collection for future reference and tagging.

🔧 What problem is this solving?

Mastodon users often bookmark posts they want to read or save for later, but there's no native integration to archive them outside the app. This workflow solves that by syncing bookmarked posts from Mastodon to Raindrop, making them more accessible, organized, and searchable long-term.

⚙️ What this workflow does

1. Triggers on a schedule (or manually).
2. Tracks the latest fetched min_id using workflow static data to avoid duplicates (see the sketch at the end of this description).
3. Sends an HTTP GET request to the Mastodon bookmarks API, using bearer token authentication.
4. Validates and processes the bookmarks if new entries exist.
5. Parses pagination metadata (e.g. min_id) from the response headers.
6. Splits the response array to handle individual bookmarks.
7. Filters out entries with missing data.
8. Saves each post to Raindrop.io, using its title and URL (the card URL is used if it exists).
9. Updates min_id to remember where it left off.

🚀 Setup

1. Create a Mastodon access token with access to bookmarks.
2. Add a credential in n8n of type HTTP Bearer Auth with your token.
3. Create and connect a Raindrop OAuth2 credential.
4. Replace {VOTRE SERVEUR MASTODON} with your Mastodon server's base URL.
5. (Optional) Adjust the scheduling interval under the "Schedule Trigger" node.
6. Make sure the Raindrop collection ID is correct, or leave it at the default (-1), which is the index for the `Unsorted` collection.

🧪 How to customize this workflow

- To save to a specific Raindrop collection, change the collectionId in both Raindrop nodes.
- You can extend the Code node to pull additional metadata like author, hashtags, or content excerpts.
- Add an Email or Slack node after Raindrop to notify you of saved bookmarks.
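A sketch of the de-duplication step, written for an n8n Code node. It assumes the preceding HTTP Request node was configured to include response headers (the option name varies slightly between n8n versions), and that the Mastodon Link header carries the `min_id` cursor in its `rel="prev"` entry; the field names below are illustrative.

```javascript
// Sketch: parse the pagination cursor from the Mastodon response headers and
// remember it in workflow static data so the next run only fetches new bookmarks.
const staticData = $getWorkflowStaticData('global');
const response = $input.first().json;

// Example Link header:
// <https://example.social/api/v1/bookmarks?min_id=1234>; rel="prev"
const linkHeader = response.headers?.link ?? '';
const match = linkHeader.match(/[?&]min_id=([^&>]+)[^>]*>;\s*rel="prev"/);

if (match) {
  staticData.lastMinId = match[1]; // where we left off
}

return [{ json: { lastMinId: staticData.lastMinId ?? null } }];
```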
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who is this for?

The Search Engine Intelligence Extractor is a powerful n8n automation that leverages Bright Data's MCP-based AI Agents to simulate human-like searches across Google, Bing, and Yandex, and then distills clean, structured insights using Google Gemini.

This workflow is tailored for:
- SEO analysts researching competitors or market trends
- Market researchers needing real-time search visibility
- Journalists & content writers gathering contextual insights
- AI developers creating intelligent assistants
- Digital marketers tracking brand mentions or news

What problem is this workflow solving?

Traditional scraping of search engines is often blocked, cluttered, or filled with irrelevant information. Manually analyzing and cleaning this data for insight is time-consuming. This workflow solves the problem by:
- Simulating real user search behavior via the Bright Data MCP-based AI Agent
- Performing multi-platform search (Google, Bing, Yandex) in one unified flow
- Extracting clean, human-readable results (stripping ads, navigation, etc.)
- Structuring the content using the Google Gemini LLM
- Automating delivery via Webhook or saving to disk

What this workflow does

Input Fields Node:
- Accepts the search query
- Accepts the action, for example "Perform a google search"; replace the action with bing, yandex, etc. for other search providers
- Accepts the Webhook notification URL

Bright Data MCP Agent Execution:
- Triggers Bright Data's intelligent search agent
- Handles search navigation, result loading, and pagination

Human Readable Data Extractor:
- Cleanses HTML; removes ads, footers, and irrelevant links
- Produces a readable narrative of the results

Final Output Handling:
- Saves the processed response to disk
- Sends the structured data to a Webhook for real-time use

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Set up n8n locally with MCP Servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server (see the sketch at the end of this description). Make sure to set the Bright Data API_TOKEN within the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs

Add Scheduled Execution
- Add a Cron trigger to run this workflow on a set schedule (e.g., daily/weekly keyword tracking).

Push Results to Custom Destinations
- Connect the output to:
  - Google Sheets (for analytics or dashboards)
  - PostgreSQL or MySQL databases (for structured storage)
  - Notion or Airtable (for content pipelines)
  - Slack or Email (for alerting teams)

Customize Webhook Notifications
- Update the Webhook URL in the notification node to push processed results to external APIs, CRMs, or real-time dashboards.
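For reference, here are illustrative values for the MCP Client (STDIO) credential (the original description referenced a screenshot that is not included here). The exact field labels may differ depending on your n8n-nodes-mcp version, so treat this as an assumption rather than an exact form layout.

```javascript
// Illustrative MCP Client (STDIO) credential values for the Bright Data MCP Server.
const mcpClientCredential = {
  command: 'npx',                                         // launches the MCP server locally
  arguments: '-y @brightdata/mcp',                        // the Bright Data MCP Server package
  environments: 'API_TOKEN=<your-bright-data-api-token>', // as described in the Setup step above
};
```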
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who is this for?

This workflow template enables intelligent data extraction from ProductHunt using Bright Data's Model Context Protocol (MCP) and processes search results with Google Gemini. It is designed for individuals and teams who need automated, intelligent discovery and analysis of new tech products. It's especially valuable for:
- Startup Analysts & VC Researchers
- Growth Hackers & Marketers
- Recruiters & Tech Scouts
- Product Managers & Innovation Teams
- AI & Automation Enthusiasts

What problem is this workflow solving?

Traditional product discovery on ProductHunt is constrained by limited descriptions and requires repeated manual validation through web searches. Manually extracting and enriching this data is slow, repetitive, and error-prone. This workflow solves the problem by:
- Extracting real-time ProductHunt data using Bright Data's MCP infrastructure to mimic real-user behavior and avoid blocks.
- Performing contextual Google searches for a specific ProductHunt product to gather use cases, reviews, and related information.
- Structuring results using the Google Gemini LLM to provide human-readable insights and reduce noise.
- Delivering results seamlessly by saving output to disk, updating Google Sheets, and sending Webhook alerts.

What this workflow does

Input Field Node
- Define the ProductHunt category with the search term(s) you want to target. This drives the extraction and search operations.

Agent Operation Node
The agent performs two major tasks:
- Extract from ProductHunt: retrieves trending products from ProductHunt using Bright Data MCP.
- Contextual Google Search: for each product, the agent searches Google for deeper context, including reviews, competitor mentions, and real-world usage examples.

LLM Node (Google Gemini)
- Analyzes and summarizes the extracted web content
- Removes noise (ads, menus, etc.)
- Structures the content into bullet points, insights, or JSON objects

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Set up n8n locally with MCP Servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to set the Bright Data API_TOKEN within the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs

This workflow is flexible and modular, allowing you to adapt it for various research, product discovery, or trend analysis use cases. Below are the key customization points and how to modify them.
Define Your Target Products or Topics
- Change the input parameter to a specific ProductHunt category, tag, or keyword (e.g., "AI tools", "SaaS", "DevOps").

Change Output Destinations
- Save to Disk: change the file format (.json, .csv, .md) or the directory path.
- Google Sheet: modify the sheet name and structure (columns like Product, Summary, Link).
- Webhook Notification: point to a Slack/Discord/CRM/Webhook URL with payload mapping (see the sketch below).
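A small sketch, for an n8n Code node, of how the Gemini output could be reshaped before it is written to the Google Sheet or posted to your webhook. The input field names (product, summary, link) are assumptions; align them with what your LLM node actually returns.

```javascript
// Sketch: map LLM output items to the columns expected downstream.
return $input.all().map(item => ({
  json: {
    Product: item.json.product,
    Summary: item.json.summary,
    Link: item.json.link,
    CapturedAt: new Date().toISOString(), // timestamp for dashboards or deduplication
  },
}));
```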
by Vijayeta Sinel
Automate document translation and ensure translation accuracy using Straker Verify, Google Drive, and Slack.

How it works

A workflow step is set up to "watch" a Google Drive folder. When your team members place new files in this folder, they are downloaded. Straker Verify then translates them and provides a quality score. Once Straker Verify has completed this, the job info is fetched, the translation is saved to an output folder, and you are notified via Slack.

What problem does this solve?

When using AI to translate documents, you have no idea about the quality and accuracy of the output. This template answers the question "How good is my translation?" so you have a high level of confidence before you publish.

Who is this for?

This workflow template is designed for businesses needing translation and localization of documents such as text docs, presentations, web pages, transcripts, video subtitles, and others. Use it to build workflows that localize your content at scale while maintaining translation quality, accuracy, and compliance.

Set up instructions

Connect to Straker Verify: obtain your API key
1. Sign up/log in: visit https://verify.straker.ai/ to create an account or log in.
2. Navigate to API keys: go to Verify → Settings → API Keys.
3. Copy your key: find and copy your API key.
4. Add the key to n8n: in n8n, go to Settings → Credentials → Straker Verify.

Set up credentials for the Straker Verify node
1. Open n8n and go to "Credentials" (from the left sidebar, click "Credentials").
2. Search for "Straker Verify" and select "Straker Verify API" from the dropdown.
3. Paste the copied access token from the Verify app and save it.

Assign credentials to the node
1. Go to your workflow and open the Straker Verify node.
2. Select the "Straker Verify API" credentials you just created from the dropdown (Credentials to connect with), and save.

✅ You are now ready to use the workflow.

Workflow Steps

Step 1: Initiate the workflow – upload files to Google Drive
Upload one or more files to the designated Google Drive folder. This triggers the "New File in Google Drive" node in n8n.

Step 2: Verify token balance
The workflow checks your token balance via Get Current User Balance and User Has Enough Tokens. Not enough tokens? You will receive a Slack message ("Not enough tokens"); please top up and re-upload your files.

Step 3: Select workflow
The system fetches available workflows via Fetch All Users Workflows. No matching workflow found? You'll be notified via Slack: "No workflow found".

Step 4: Create a project in Straker Verify
The following steps are handled automatically:
- Fetch language/project options
- Download files from Google Drive
- Create a new project using Create Straker Project
✅ You'll receive a Slack confirmation: "Project created – ID: xxxxxx"

Step 5: Process completion & file return
Once files are processed by Straker, the Incoming Translation Result webhook is triggered. The workflow then downloads the processed files (Get File Content from Straker) and uploads them to Google Drive (Upload File to Google Drive).

Step 6: Confirmation – workflow complete
You'll receive a final Slack message: "Workflow complete – Files are now available in Google Drive"
by Alex Emerich
Convert PostgreSQL table to CSV

CSV is a super useful and universal way to transfer data between different tools. This workflow gives an example of how to take data from PostgreSQL and convert it easily into a CSV.

What you need

Before running the workflow, please make sure you have access to a remote PostgreSQL server and have table data such as:

book_title,book_author,read_date
Demons,Fyodor Dostoyevsky,2022-09-08
Ulysses,James Joyce,2022-05-06
Catch-22,Joseph Heller,2023-01-04
The Bell Jar,Sylvia Plath,2023-01-21
Frankenstein,Mary Shelley,2023-02-14

How it works

1. Trigger the workflow on click.
2. Declare the name of the Excel file and sheet names.
3. Remotely connect to the PostgreSQL database and specify the query to execute.
4. Write the query data to CSV.

The detailed process is explained further in the tutorial: https://blog.n8n.io/postgres-export-to-csv/
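The workflow's spreadsheet/CSV node handles the conversion for you, but as a point of reference, here is a hedged Code-node sketch of the same transformation in plain JavaScript: PostgreSQL rows in, one CSV string out. Column names match the sample table above.

```javascript
// Sketch: turn the PostgreSQL node's output items into a single CSV string.
const items = $input.all();
const header = 'book_title,book_author,read_date';

const lines = items.map(({ json }) =>
  [json.book_title, json.book_author, json.read_date]
    .map(v => `"${String(v).replace(/"/g, '""')}"`) // quote values defensively
    .join(',')
);

return [{ json: { csv: [header, ...lines].join('\n') } }];
```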
by Ibrahim Malick
⚠️ This template uses only official n8n nodes. No community nodes required.

🧑💼 Who is this for?

This workflow is designed for:
- Legal tech founders
- Marketing freelancers or consultants
- Agencies supporting lawyers and small law firms
- Anyone doing outbound outreach in the legal niche

❓ What problem is this solving?

LinkedIn is a goldmine for targeting legal professionals, but scraping and personalizing outreach is tedious and expensive. Most tools either:
- Require paid LinkedIn Sales Navigator
- Can't personalize at scale
- Violate LinkedIn's TOS

This workflow solves that by using free Google Search, OpenRouter AI, and GPT-4o to find, enrich, and message up to 1,000 solo lawyers per day, without using browser automation or scrapers.

⚙️ What this workflow does

1. Uses Google Programmable Search to find solo lawyers and small firm founders on LinkedIn (a request sketch follows this description)
2. Parses each profile's name, title, profile URL, and snippet
3. Saves raw lead data to Google Sheets
4. Uses OpenRouter Sonar Pro to enrich each profile with external content
5. Generates a personalized, one-line message using GPT-4o
6. Appends the final message into Google Sheets for outreach

🛠️ Setup

Estimated time: 15–20 minutes

✅ Google Programmable Search
- Enable the Custom Search API on Google Cloud
- Create a programmable search engine set to search the full web
- Copy your API key and CX ID

✅ Google Sheets
- Create a sheet with columns: Name, Title, Profile URL, Outreach Message
- Share the sheet with your OAuth-connected Google account

✅ OpenRouter
- Sign up at openrouter.ai
- Fund with at least $5 and generate your API key
- Use the model perplexity/sonar-pro for real-time research

✅ GPT-4o (optional)
- You can use your OpenAI key or route GPT-4o via OpenRouter

All setup-specific values are marked clearly in sticky notes and placeholders.

🛠️ How to customize this workflow to your needs

- Change the Google search query to match your industry (e.g., "founder" AND "therapist" site:linkedin.com/in)
- Modify the AI prompt to match your tone (formal, casual, humorous)
- Connect the final output to your CRM (like HubSpot, Airtable, etc.)
- Add a second outreach message variant to A/B test performance

📌 Sticky Notes & Annotations

- All nodes are clearly renamed for understandability (e.g., Find Lawyer Profiles, Parse LinkedIn Search Results)
- Color-coded sticky notes explain setup instructions, required credentials, and the use case

🗂 Category

AI, Sales, Marketing
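A rough Node.js (18+) equivalent of the "Find Lawyer Profiles" request against the Google Custom Search JSON API. The key, CX ID, and query string are placeholders; the workflow's HTTP Request node builds the same URL.

```javascript
// Sketch: query Google Programmable Search for LinkedIn profiles (run in an async context).
const params = new URLSearchParams({
  key: 'YOUR_GOOGLE_API_KEY',
  cx: 'YOUR_SEARCH_ENGINE_ID',
  q: '"founder" AND "attorney" site:linkedin.com/in', // adapt the query to your niche
  num: '10', // max results per page; paginate with the "start" parameter
});

const res = await fetch(`https://www.googleapis.com/customsearch/v1?${params}`);
const { items = [] } = await res.json();

// Each result exposes title, link, and snippet — the fields parsed into the sheet.
console.log(items.map(i => ({ name: i.title, url: i.link, snippet: i.snippet })));
```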
by Jay Hartley
What this template does

This workflow collects order data as it is produced, then sends a summary email of all orders at the end of every day, formatted in a table.

1. It receives new orders via webhook and stores them in Airtable.
2. At 7PM every day, it sends a summary email with the day's orders in an HTML table (a sketch of this step follows below).

Setup: Instructions Video

1. Create a new table in Airtable and give it a field time with type date, orderID with type number, and orderPrice also with type number.
2. Create a new access token if you haven't already at https://airtable.com/create/tokens/new. Make sure to give the token the scopes data.records:read, data.records:write, schema.bases:read and access to whichever table you choose to store the orders. A pop-up window appears with the token. Use this token to create a new Access Token credential for Airtable in the Store Order and Airtable Get Today's Orders nodes.
3. Create access credentials for your Gmail as described here: https://developers.google.com/workspace/guides/create-credentials. Use the credentials from your client_secret.json in the Send to Gmail node.
4. In the Store Order node, change Base and Table to the database and table in your Airtable account you wish to use to store orders. Make sure to use these same values in the Airtable Get Today's Orders node.
5. Every time an order is created in your system, send a POST request to the Webhook from your order software. Each request must contain a single order with the fields 'orderID' and 'orderPrice' (or edit Set Order Fields to select which incoming fields you wish to save).
6. Change the schedule time for sending the email from Everyday at 7PM to whichever time you choose.

Test

1. Activate the workflow.
2. From the Webhook node, copy the Production URL.
3. Send the following cURL request to the URL given to you:

curl -X POST -H "Content-Type: application/json" -d '{"orderID": 12345, "orderPrice": 99.99}' YOUR_URL_HERE

It should say Node executed successfully. Now check your Airtable and confirm the order was stored in the right place.
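A minimal sketch of the summary-table step for an n8n Code node, assuming the Airtable node returns one item per order with orderID, orderPrice, and time fields (the same fields defined in the setup above). Adjust field names if your Set Order Fields node maps them differently.

```javascript
// Sketch: build the HTML table that goes into the daily summary email.
const orders = $input.all().map(i => i.json);

const rows = orders
  .map(o => `<tr><td>${o.orderID}</td><td>$${Number(o.orderPrice).toFixed(2)}</td><td>${o.time}</td></tr>`)
  .join('');

const total = orders.reduce((sum, o) => sum + Number(o.orderPrice), 0);

const html = `
  <table border="1" cellpadding="6">
    <tr><th>Order ID</th><th>Price</th><th>Time</th></tr>
    ${rows}
    <tr><td><b>Total</b></td><td><b>$${total.toFixed(2)}</b></td><td></td></tr>
  </table>`;

return [{ json: { html } }]; // reference {{ $json.html }} in the Send to Gmail node
```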
by Sam Robertson
Generate Summaries from Uploaded Files using OpenAI Assistants API

📑 Overview

Upload a document (PDF, DOCX, PPTX, TXT, CSV, JSON, or Markdown) and receive an AI-generated summary containing:
- **title** – 5-10 words
- **summary** – 1-2 sentences
- **bullets** – 3-5 key points
- **tags** – 3-6 short keywords

The workflow:
1. Stores the file in OpenAI.
2. Runs an Assistant with File Search and Code Interpreter enabled.
3. Polls until the run finishes.
4. Retrieves the summary JSON.

✅ Prerequisites

OpenAI Assistant
- Create one at <https://platform.openai.com/assistants>
- Enable File Search and Code Interpreter
- Note: the assistant ID starts with asst_

OpenAI API credential setup in n8n
- Go to Credentials → New → HTTP Header Auth
- Header name: Authorization
- Value: Bearer YOUR-OPENAI-API-KEY (replace YOUR-OPENAI-API-KEY with your OpenAI API secret key, which starts with sk-)
- Name it: openAIApiHeader

🔧 Setup

1. Import the workflow JSON.
2. When n8n prompts for a credential, choose openAIApiHeader for every HTTP Request node.
3. Open Run Assistant → Body and replace "assistant_id": "REPLACE_WITH_YOUR_ASSISTANT_ID" with your real ID (starts with asst_…).
4. Save.

🚀 How it works

| # | Node | Purpose |
|---|------|---------|
| 1 | On form submission | User uploads a file (File). |
| 2 | Upload File | POST /v1/files (multipart) → returns file_id. |
| 3 | Create Thread | Creates a thread and attaches the uploaded file. |
| 4 | Run Assistant | Starts the run using your assistant_id. |
| 5 | Poll Run Status → Wait 2 s → IF | Loops until status = completed (see the sketch after this description). |
| 6 | Fetch Summary | GET /v1/threads/{thread_id}/messages → summary JSON. |

🖌️ Customisation ideas

- Edit the user prompt in Create Thread to change summary length, tone, or language.
- Add an HTTP Response node after Fetch Summary to return plaintext to the uploader.
- Replace the polling loop with OpenAI's forthcoming wait-for-run endpoint when available.

No community nodes required. Works on any n8n Cloud plan (Starter, Pro, Enterprise) or self-hosted Community Edition.
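The Poll Run Status → Wait 2 s → IF loop, expressed as a plain Node.js (18+) sketch for clarity. The thread and run IDs are placeholders supplied by the earlier nodes, and the endpoint and headers match the Assistants API as documented at the time of writing.

```javascript
// Sketch: poll an Assistants API run until it reaches a terminal state.
const headers = {
  Authorization: 'Bearer YOUR-OPENAI-API-KEY',
  'OpenAI-Beta': 'assistants=v2',
};
const threadId = 'thread_XXXX'; // from the Create Thread node
const runId = 'run_XXXX';       // from the Run Assistant node

let status = 'queued';
while (!['completed', 'failed', 'cancelled', 'expired'].includes(status)) {
  await new Promise(r => setTimeout(r, 2000)); // same 2 s pause as the Wait node
  const res = await fetch(
    `https://api.openai.com/v1/threads/${threadId}/runs/${runId}`,
    { headers }
  );
  ({ status } = await res.json());
}
// Once completed, GET /v1/threads/{thread_id}/messages returns the summary JSON.
```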
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

How It Works

Typeform Quiz Integration: This workflow streamlines the process of handling quiz answers submitted via Typeform. It ensures the data is correctly formatted and seamlessly integrates with KlickTipp.

Data Transformation: Input data is validated and transformed to meet KlickTipp's API requirements, including formatting phone numbers and converting dates (see the sketch at the end of this description).

Key Features

Typeform Trigger:
- Captures new quiz submissions from Typeform, including user details and quiz responses.

Data Processing and Transformation:
- Formats phone numbers to a numeric-only format with international prefixes.
- Converts dates (e.g., birthdays) to UNIX timestamps.
- Maps multiple-choice quiz answers to string values for API compatibility.
- Scales numeric quiz responses for tailored use cases.

Subscriber Management in KlickTipp:
- Adds participants as subscribers to a designated KlickTipp list, with custom field mappings for:
  - Personal details (e.g., name, email, phone number, birthday).
  - Quiz responses (e.g., intended usage of KlickTipp, company location, and team size).
- Tags contacts for segmentation: adds fixed and dynamic tags to contacts.

Error Handling:
- Handles empty or malformed data gracefully, ensuring clean submissions to KlickTipp.

Setup Instructions

Install and Configure Nodes:
- Set up the Typeform and KlickTipp nodes in your n8n instance.
- Authenticate your Typeform and KlickTipp accounts.

Prepare Custom Fields in KlickTipp:
Create custom fields to store quiz answers and personal details, such as:

| Name | Data type (Datentyp) |
|----------------------------------|-----------------|
| Typeform_URL_Linkedin | URL |
| Typeform_Frage1_klicktipp_nutzen | Zeile |
| Typeform_Frage2_klicktipp_sitz | Zeile |
| Typeform_Frage3_mitglieder_CHT | Dezimalzahl |

After creating fields, allow 10-15 minutes for them to sync. If fields don't appear, reconnect your KlickTipp credentials.

Field Mapping and Adjustments:
- Verify and customize field assignments in the workflow to align with your specific form and subscriber list setup.

Workflow Logic

1. Trigger via Typeform submission: the workflow initiates upon receiving a new quiz submission.
2. Transform data for KlickTipp: converts and validates data from Typeform to match KlickTipp's API requirements.
3. Add to KlickTipp subscriber list: submits the cleaned data to KlickTipp, including all relevant quiz answers.
4. Get all tags from KlickTipp and create a list: fetches all existing tags and turns them into an array.
5. Define tags to dynamically set for contacts: defines the variables received from the form submission that should be converted into tags.
6. Merge tags of both lists: checks whether the list of existing tags in KlickTipp contains the tags that should be dynamically set based on the form submission.
7. Tag creation and tagging contacts: creates a new tag if it did not previously exist and then tags the contact.

Benefits

- Efficient lead generation: contacts from forms are automatically imported into KlickTipp and can be used immediately, saving time and increasing the conversion rate.
- Automated processes: experts can start workflows directly, such as welcome emails or course admissions, reducing administrative effort.
- Error-free data management: the template ensures precise data mapping, avoids manual corrections, and reinforces a professional appearance.

Testing and Deployment

Test the workflow by filling out the form on Typeform and verifying that the data updates in KlickTipp.

Notes

Customization: update field mappings within the KlickTipp nodes to align with your account setup. This ensures accurate data syncing.

Resources:
- Typeform
- KlickTipp Knowledge Base help article
- Use KlickTipp Community Node in n8n
- Automate Workflows: KlickTipp Integration in n8n
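A sketch of the two transformations described under "Data Processing and Transformation", written for an n8n Code node. The Typeform field names and the "49" country prefix are assumptions for illustration; map them to your actual quiz answers and target country.

```javascript
// Sketch: normalize phone numbers and convert a birthday to a UNIX timestamp.
const answers = $input.first().json;

// 1. Phone number: digits only, with an international prefix
//    (here "49" is assumed when the number starts with a leading 0).
const digits = String(answers.phone ?? '').replace(/\D/g, '');
const phone = digits.startsWith('0') ? '49' + digits.slice(1) : digits;

// 2. Birthday: ISO date (e.g. "1990-05-17") to a UNIX timestamp in seconds,
//    which is the format KlickTipp date fields expect.
const birthday = Math.floor(new Date(answers.birthday).getTime() / 1000);

return [{ json: { ...answers, phone, birthday } }];
```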
by Joachim Hummel
This n8n workflow automates posting Amazon affiliate products to Mastodon, complete with image upload, description, and a shortened tracking URL using Shlink.

🔧 How it works

Input Source: The workflow starts by reading from a connected Google Sheet that contains:
- SHlink (short link)
- Amazon Link
- Description (optional)
- PicURL
- Send (NO or YES) – a flag used to check whether the row was already posted

Image Upload: It fetches the product image via HTTP and uploads it directly to a Mastodon instance via the /media API endpoint.

URL Shortening (Shlink): The original Amazon URL is shortened using your self-hosted or cloud-hosted Shlink instance to enable click tracking and better presentation (see the sketch below).

Text Generation: A two-line promotional text is automatically generated using a Language Model (LLM), based on the product description.

Posting to Mastodon: The post is then published on Mastodon with the image, the generated text, and the shortened Shlink URL.

Row Update: Once published, the Send column in the Google Sheet is updated to "YES" to prevent duplicates.

Requirements

✅ Shlink – required for shortening and tracking Amazon URLs
✅ Google Sheet – used as the product queue and post tracker
✅ Google Sheet example: https://link.unixweb.home64.de/w7VqY
✅ Mastodon account – OAuth2 credentials with write scope
✅ Product image URL – must be valid and accessible
✅ n8n credentials – set up for Google Sheets, Mastodon, and optionally OpenRouter or other LLM providers

This workflow is ideal for content creators, affiliate marketers, and automation fans who want to save time and optimize reach across the Fediverse.

#affiliate #amazon #mastodon #advertisment
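A rough Node.js (18+) sketch of the Shlink shortening call that the workflow's HTTP Request node performs. The base URL, API key, and Amazon URL are placeholders, and older Shlink installs may expose the endpoint under /rest/v2/ instead of /rest/v3/.

```javascript
// Sketch: create a tracked short URL for the Amazon affiliate link via the Shlink REST API.
const res = await fetch('https://your-shlink-instance.example/rest/v3/short-urls', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': 'YOUR_SHLINK_API_KEY',
  },
  body: JSON.stringify({
    longUrl: 'https://www.amazon.de/dp/XXXXXXXXXX?tag=your-affiliate-tag', // placeholder
  }),
});

const { shortUrl } = await res.json();
console.log(shortUrl); // appended to the Mastodon status text
```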