by Tristan V
Quickstart Guide: Facebook Messenger Chatbot with Pinecone RAG

Step-by-step instructions to get this workflow running in n8n.

Prerequisites

- Self-hosted n8n instance (v1.113.0+ recommended)
- Community nodes enabled in n8n
- Facebook Page (you must be an admin)
- OpenAI account
- Pinecone account (free Starter plan works)

Workflow Architecture

This workflow uses two webhooks with the same URL path but different HTTP methods:

| Webhook | Method | Purpose |
|---------|--------|---------|
| Facebook Verification Webhook | GET | Handles Facebook's webhook verification |
| Facebook Message Webhook | POST | Receives incoming messages |

Both webhooks share the same URL: https://your-n8n.com/webhook/facebook-messenger-webhook

n8n automatically routes requests based on HTTP method:

- **GET** requests → Verification flow
- **POST** requests → Message processing flow

RAG Enhancement

User Message → Batching → AI Agent ─┬─ OpenAI Chat Model
                                    ├─ Conversation Memory
                                    └─ Pinecone Assistant Tool (RAG)
                                            │
                                            ▼
                                   Search your documents
                                            │
                                            ▼
                                   Answer with citations

Step 1: Install the Pinecone Assistant Community Node

The Pinecone Assistant node is a community node that must be installed separately.

1. In n8n, go to Settings → Community Nodes
2. Click Install a community node
3. Enter: @pinecone-database/n8n-nodes-pinecone-assistant
4. Click Install
5. Restart n8n if prompted

> Note: Community nodes must be enabled in your n8n instance. For Docker, set N8N_COMMUNITY_PACKAGES_ALLOW_INSTALL=true.

Step 2: Create Pinecone Account & Assistant

2.1 Create Pinecone Account

1. Go to Pinecone
2. Sign up for a free account (Starter plan includes 100 files per assistant)
3. Complete the onboarding

2.2 Create an Assistant

1. In the Pinecone console, go to Assistants
2. Click Create Assistant
3. Name it n8n-assistant (or choose your own name)
4. Select your preferred region
5. Click Create

2.3 Upload Your Documents

1. Click on your newly created assistant
2. Go to the Files tab
3. Click Upload Files
4. Upload your documents (PDFs, text files, etc.)
5. Wait for processing to complete

2.4 Get Your Pinecone API Key

1. In the Pinecone console, click on your profile/account
2. Go to API Keys
3. Copy your API key (or create a new one)

Step 3: Get Your OpenAI API Key

1. Go to OpenAI Platform
2. Sign in with your OpenAI account
3. Click Create new secret key
4. Copy and save the API key

Step 4: Create Facebook App & Get Page Access Token

4.1 Create Facebook App

1. Go to Facebook Developers
2. Click My Apps → Create App
3. Select Other → Next
4. Select Business → Next
5. Enter app name and contact email
6. Click Create App

4.2 Add Messenger Product

1. In your app dashboard, scroll to Add products to your app
2. Find Messenger and click Set up

4.3 Connect Your Facebook Page

1. In Messenger settings, find the Access Tokens section
2. Click Add or Remove Pages
3. Select your Facebook Page and grant permissions
4. Click Done

4.4 Generate Page Access Token

1. Back in Messenger settings, find your page in the list
2. Click Generate Token
3. Copy and save the token

Step 5: Create Your Verify Token

The verify token is a secret string for Facebook webhook verification.
1. Create a random string (e.g., my-secret-token-12345)
2. Save this value - you'll need it in Steps 7 and 10

Step 6: Create n8n Credentials

6.1 Pinecone Credential

1. In n8n, go to Credentials → Add Credential
2. Search for "Pinecone" (or "Pinecone API")
3. Configure:
   - Name: Pinecone API
   - API Key: Paste your Pinecone API key from Step 2.4
4. Click Save

6.2 OpenAI API Credential

1. In n8n, go to Credentials → Add Credential
2. Search for "OpenAI API"
3. Configure:
   - Name: OpenAI API
   - API Key: Paste your OpenAI API key from Step 3
4. Click Save

6.3 Facebook Graph API Credential

1. In n8n, go to Credentials → Add Credential
2. Search for "Facebook Graph API"
3. Configure:
   - Name: Facebook Page Access Token
   - Access Token: Paste your Page Access Token from Step 4.4
4. Click Save

Step 7: Import the Workflow

1. In n8n, click Add Workflow → Import from File
2. Select the workflow.json file from this folder
3. The workflow will open in the editor

7.1 Configure the Verify Token

1. Find the "Is Token Valid?" node
2. Click on the node to open its settings
3. In the conditions, find Value 2 that shows YOUR_VERIFY_TOKEN_HERE
4. Replace it with your verify token from Step 5

7.2 Configure the Pinecone Assistant Name

1. Find the "Get context snippets in Pinecone Assistant" node
2. Click on the node to open its settings
3. Change Assistant Name from n8n-assistant to your actual assistant name (from Step 2.2)

Step 8: Connect Credentials to Nodes

8.1 Connect Facebook Credential

Update these 3 nodes with your Facebook credential:

1. Click on "Send Seen Indicator" → Select your Facebook Page Access Token credential
2. Click on "Send Typing Indicator" → Select your Facebook Page Access Token credential
3. Click on "Send Response to User" → Select your Facebook Page Access Token credential

8.2 Connect OpenAI Credential

Click on "OpenAI Chat Model" → Select your OpenAI API credential

8.3 Connect Pinecone Credential

Click on "Get context snippets in Pinecone Assistant" → Select your Pinecone API credential

Step 9: Publish the Workflow

1. Click Save to save the workflow
2. Click the
Publish button to make the workflow live
3. Copy the webhook URL (e.g., https://your-n8n.com/webhook/facebook-messenger-webhook)

Step 10: Configure Facebook Webhook

1. Go to Facebook Developers → Your App → Messenger Settings
2. Find the Webhooks section
3. Click Add Callback URL
4. Enter:
   - Callback URL: Your n8n webhook URL from Step 9
   - Verify Token: Same value from Step 5
5. Click Verify and Save
6. After verification, subscribe to webhook fields:
   - messages (required)
   - messaging_postbacks (recommended)

Step 11: Test Your Chatbot

11.1 Add Test Users (if needed)

With Standard Access, only users with app roles can message the bot:

1. Go to your Facebook App → App Roles → Roles
2. Add users as Testers
3. Those users must accept the invitation

11.2 Send a Test Message

1. Open Facebook Messenger
2. Search for your Facebook Page
3. Try these test messages:
   - "Hello!" - Should get a friendly greeting
   - "What information do you have?" - Should search your documents
   - "Tell me about [topic in your docs]" - Should return relevant information with context

How Pinecone RAG Works

User asks: "What are your return policies?"
              │
              ▼
┌─────────────────────────┐
│ AI Agent receives msg   │
└─────────┬───────────────┘
          │
          ▼
┌─────────────────────────┐
│ Calls Pinecone Tool     │ → Searches your documents
└─────────┬───────────────┘
          │
          ▼
┌─────────────────────────┐
│ Gets relevant snippets  │ ← "Return Policy.pdf: Items can be..."
└─────────┬───────────────┘
          │
          ▼
┌─────────────────────────┐
│ AI formulates answer    │ → "According to our policy, items can be..."
└─────────────────────────┘

Troubleshooting

| Problem | Solution |
|---------|----------|
| "Pinecone Assistant Tool" not found | Ensure community node is installed (Step 1) |
| "No relevant information found" | Upload more documents to your Pinecone Assistant |
| Webhook verification fails | Check verify token matches in n8n and Facebook |
| No response from bot | Check n8n execution logs for errors |
| "Error validating access token" | Regenerate Page Access Token in Facebook |
| AI Agent not using Pinecone tool | After import, open the "AI Agent1" node, make a small edit to the system message (e.g., add a space), and save. This re-initializes the tool bindings. |

Customization

Change the AI Behavior

Edit the "AI Agent1" node's system message to:

- Adjust how it cites sources
- Change personality/tone
- Add specific instructions for your use case

Change the Assistant

Update the "Get context snippets in Pinecone Assistant" node to use a different assistant name.

Adjust Response Length

The workflow truncates responses to 1900 characters for Messenger. Edit the "Format Response" node to change this.

Resources

- Pinecone Assistant Documentation
- Pinecone Assistant n8n Node (GitHub)
- Facebook Messenger Platform
- OpenAI API Documentation
- n8n Documentation
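For reference, the verification handshake from Step 10 boils down to the following logic (a plain-JavaScript sketch of what the GET branch does, not the node's actual internals): Facebook sends `hub.mode`, `hub.verify_token`, and `hub.challenge` as query parameters, and the endpoint must echo back `hub.challenge` only when the token matches.

```javascript
// Sketch of Facebook's webhook verification handshake.
// `query` holds the parsed GET query parameters; `expectedToken` is the
// verify token you created in Step 5.
function handleVerification(query, expectedToken) {
  if (
    query['hub.mode'] === 'subscribe' &&
    query['hub.verify_token'] === expectedToken
  ) {
    // Echoing the challenge tells Facebook the webhook is yours.
    return { status: 200, body: query['hub.challenge'] };
  }
  // Any mismatch must be rejected, or verification will appear to succeed
  // for an attacker who guesses your URL.
  return { status: 403, body: 'Verification failed' };
}
```

If verification fails in Step 10, comparing your n8n "Is Token Valid?" condition against this logic is a quick way to spot a token mismatch.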
by Dahiana
AI Content Summarizer Suite

This n8n template collection demonstrates how to build a comprehensive AI-powered content summarization system that handles multiple input types: URLs, raw text, and PDF files. Built as 4 separate workflows for maximum flexibility.

Use cases: Research workflows, content curation, document processing, meeting prep, social media content creation, or integrating smart summarization into any app or platform.

How it works

- Multi-input handling: Separate workflows for URLs (web scraping), direct text input, and PDF file processing
- Smart PDF processing: Attempts text extraction first, falls back to OCR.Space for image-based PDFs
- AI summarization: Uses OpenAI's GPT-4.1-mini with customizable length (brief/standard/detailed) and focus areas (key points/numbers/conclusions/action items)
- Language support: Multi-language summaries with automatic language detection
- Flexible output: Returns clean markdown-formatted summaries via webhook responses
- Unified option: The all-in-one workflow automatically detects input type and routes accordingly

How to use

- Replace webhook triggers with your preferred method (manual, form, API endpoint)
- Each workflow accepts different parameters: URL, text content, or file upload
- Customize summary length and focus in the AI prompt nodes
- Authentication is optional - switch to "none" if running internally
- Perfect for integration with Bubble, Zapier, or any platform that can make HTTP requests

Requirements

- OpenAI or OpenRouter API key
- OCR.Space API key (for PDF fallback processing)
- n8n instance (cloud or self-hosted)
- Any platform that can make HTTP requests

Setup Steps

1. Replace "Dummy OpenAI" with your OpenAI credentials
2. Add your OCR.Space API key in the OCR nodes (optional; only needed for the PDF OCR fallback)
3. Update webhook authentication as needed
4. Test each workflow path individually
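The unified workflow's input-type detection can be approximated with a routing function like the one below. This is an illustrative sketch (the payload field names `url`, `text`, and `file` are assumptions about the webhook body, not the template's exact schema):

```javascript
// Decide which summarization branch a webhook payload should take.
// Priority: an uploaded file wins, then a valid URL, then raw text.
function detectInputType(payload) {
  if (payload.file) return 'pdf';                                     // file upload → PDF branch
  if (payload.url && /^https?:\/\//i.test(payload.url)) return 'url'; // web page → scraping branch
  if (payload.text && payload.text.trim().length > 0) return 'text';  // raw text → direct branch
  return 'unknown';                                                   // nothing usable → error response
}
```

A Switch node (or a Code node returning this value) can then route each request to the matching sub-workflow.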
by Piotr Sikora
Who’s it for This workflow is perfect for content managers, SEO specialists, and website owners who want to easily analyze their WordPress content structure. It automatically fetches posts, categories, and tags from a WordPress site and exports them into a Google Sheet for further review or optimization. What it does This automation connects to the WordPress REST API, collects data about posts, categories, and tags, and maps the category and tag names directly into each post. It then appends all this enriched data to a Google Sheet — providing a quick, clean way to audit your site’s content and taxonomy structure. How it works Form trigger: Start the workflow by submitting a form with your website URL and the number of posts to analyze. Fetch WordPress data: The workflow sends three API requests to collect posts, categories, and tags. Merge data: It combines all the data into one stream using the Merge node. Code transformation: A Code node replaces category and tag IDs with their actual names. Google Sheets export: Posts are appended to a Google Sheet with the following columns: URL Title Categories Tags Completion form: Once the list is created, you’ll get a confirmation message and a link to your sheet. If the WordPress API isn’t available, the workflow automatically displays an error message to help you troubleshoot. Requirements A WordPress site with the REST API enabled (/wp-json/wp/v2/). A Google account connected to n8n with access to Google Sheets. A Google Sheet containing the columns: URL, Title, Categories, Tags. How to set up Import this workflow into n8n. Connect your Google Sheets account under credentials. Make sure your WordPress site’s API is accessible publicly. Adjust the Post limit (per_page) in the form node if needed. Run the workflow and check your Google Sheet for results. How to customize Add additional WordPress endpoints (e.g., authors, comments) by duplicating and modifying HTTP Request nodes. 
Replace Google Sheets with another integration (like Airtable or Notion). Extend the Code node to include SEO metadata such as meta descriptions or featured images.
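The Code node's "replace category and tag IDs with their actual names" step reduces to a lookup like the following. Field names (`link`, `title.rendered`, `categories`, `tags`) follow the WordPress REST API's post objects; the output shape matches the sheet columns described above:

```javascript
// Map WordPress category/tag IDs on a post to their human-readable names.
// `categories` and `tags` are the arrays returned by
// /wp-json/wp/v2/categories and /wp-json/wp/v2/tags.
function enrichPost(post, categories, tags) {
  const catNames = new Map(categories.map(c => [c.id, c.name]));
  const tagNames = new Map(tags.map(t => [t.id, t.name]));
  return {
    url: post.link,
    title: post.title.rendered,
    // Fall back to the raw ID if a name is missing, so no data is lost.
    categories: (post.categories || []).map(id => catNames.get(id) || id).join(', '),
    tags: (post.tags || []).map(id => tagNames.get(id) || id).join(', '),
  };
}
```

Each enriched object maps directly onto one appended Google Sheets row (URL, Title, Categories, Tags).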
by Atik
Automate multi-document handling with AI-powered extraction that adapts to any format and organizes it instantly.

What this workflow does

- Monitors Google Drive for new uploads (receipts, resumes, claims, physician orders, blueprints, or any doc type)
- Automatically downloads and prepares files for analysis
- Identifies the document type using Google Gemini
- Parses structured data via the trusted VLM Run node with OCR + layout parsing
- Stores records in Google Sheets — AI Agent maps values to the correct sheet dynamically

Setup

Prerequisites: Google Drive & Google Sheets accounts, VLM Run API credentials, n8n instance.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can integrate it directly for high-accuracy data extraction.

Quick Setup:

1. Configure Google Drive OAuth2 and select a folder for uploads
2. Add VLM Run API credentials
3. Create a Master Reference Google Sheet with the following structure:

| Document_Name | Spreadsheet_ID |
| ---------------------- | ----------------------------- |
| Receipt | your-receipt-sheet-id |
| Resume | your-resume-sheet-id |
| Physician Order | your-physician-order-sheet-id |
| Claims Processing | your-claims-sheet-id |
| Construction Blueprint | your-blueprint-sheet-id |

The first column holds the document type, and the second column holds the target sheet ID where extracted data should be appended.
In the AI Agent node, edit the agent prompt to: Analyze the JSON payload from VLM Run Look up the document type in the Master Reference Sheet If a matching sheet exists → fetch headers, then append data accordingly If headers don’t exist → create them from JSON keys, then insert values If no sheet exists → add the new type to the Master Reference with an empty Spreadsheet ID Test with a sample upload and activate the workflow How to customize this workflow to your needs Extend functionality by: Adjusting the AI Agent prompt to support any new document schema (just update field mappings) Adding support for multi-language OCR or complex layouts in VLM Run Linking Sheets data to BI dashboards or reporting tools Triggering notifications when new entries are stored This workflow leverages the VLM Run node for flexible, precision extraction and the AI Agent for intelligent mapping, creating a powerful system that adapts to any document type with minimal setup changes.
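Conceptually, the agent's routing decision against the Master Reference Sheet reduces to a lookup with three outcomes, mirroring the prompt rules above. This is an illustrative sketch using hypothetical row objects, not the agent's actual tool calls:

```javascript
// Resolve a detected document type against the Master Reference Sheet.
// `masterRows` mirrors the sheet: [{ Document_Name, Spreadsheet_ID }, ...]
function resolveTargetSheet(docType, masterRows) {
  const row = masterRows.find(
    r => r.Document_Name.toLowerCase() === docType.toLowerCase()
  );
  if (row && row.Spreadsheet_ID) {
    return { action: 'append', sheetId: row.Spreadsheet_ID };   // matching sheet → append data
  }
  if (row) {
    return { action: 'await-sheet', sheetId: null };            // type known, Spreadsheet_ID still empty
  }
  return { action: 'register-new-type', sheetId: null };        // unknown type → add row with empty ID
}
```

The case-insensitive match keeps "receipt" and "Receipt" from registering as two separate types.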
by Amit Mehta
This workflow performs structured data extraction and data mining from a web page by combining the capabilities of Bright Data and Google Gemini.

How it Works

This workflow focuses on extracting structured data from a web page using Bright Data's Web Unlocker Product. It then uses n8n's AI capabilities, specifically Google Gemini Flash Exp, for information extraction and custom sentiment analysis. The results are sent to webhooks and saved as local files.

Use Cases

- **Data Mining**: Automating the process of extracting and analyzing data from websites.
- **Web Scraping**: Gathering structured data for market research, competitive analysis, or content aggregation.
- **Sentiment Analysis**: Performing custom sentiment analysis on unstructured text.

Setup Instructions

1. Bright Data Credentials: You need an account and a Web Unlocker zone with Bright Data. Update the Header Auth account credentials in the Perform Bright Data Web Request node.
2. Google Gemini Credentials: Provide your Google Gemini (PaLM) API account credentials for the AI-related nodes.
3. Configure URL and Zone: In the Set URL and Bright Data Zone node, set the web URL you want to scrape and your Bright Data zone.
4. Update Webhook: Update the Webhook Notification URL in the relevant HTTP Request nodes.

Workflow Logic

1. Trigger: The workflow is triggered manually.
2. Set Parameters: It sets the target URL and the Bright Data zone.
3. Web Request: The workflow performs a web request to the specified URL using Bright Data's Web Unlocker. The output is formatted as markdown.
4. Data Extraction & Analysis: The markdown content is then processed by multiple AI nodes to:
   - Extract textual data from the markdown.
   - Perform topic analysis with a structured response.
   - Analyze trends by location and category with a structured response.
5. Output: The extracted data and analysis are sent to webhooks and saved as JSON files on disk.
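The web-request step is essentially a POST to Bright Data's request API. The sketch below builds such a request; treat the endpoint and body fields as assumptions to verify against Bright Data's current Web Unlocker documentation for your account (in particular whether markdown output is requested via `data_format`):

```javascript
// Build the HTTP request the "Perform Bright Data Web Request" node sends.
// `zone` and `url` come from the "Set URL and Bright Data Zone" node.
function buildUnlockerRequest(zone, url) {
  return {
    method: 'POST',
    endpoint: 'https://api.brightdata.com/request', // assumed Web Unlocker API endpoint
    headers: { 'Content-Type': 'application/json' }, // plus your Header Auth credential
    body: {
      zone,                     // your Web Unlocker zone name
      url,                      // the page to scrape
      format: 'raw',            // return the page body directly
      data_format: 'markdown',  // assumption: markdown output, as this workflow expects
    },
  };
}
```

The response body then flows into the Markdown to Textual Data Extractor node.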
Node Descriptions

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger node to start the workflow. |
| Set URL and Bright Data Zone | A Set node to define the URL to be scraped and the Bright Data zone to be used. |
| Perform Bright Data Web Request | An httpRequest node that performs the web request to Bright Data's API to retrieve the content. |
| Markdown to Textual Data Extractor | An AI node that uses Google Gemini to convert markdown content into plain text. |
| Google Gemini Chat Model | A node representing the Google Gemini model used for the data extraction. |
| Topic Extractor with the structured response | An AI node that performs topic analysis and outputs the results in a structured JSON format. |
| Trends by location and category with the structured response | An AI node that analyzes and clusters emerging trends by location and category, outputting a structured JSON. |
| Initiate a Webhook Notification... | These nodes send the output of the AI analysis to a webhook. |
| Create a binary file... | Function nodes that convert the JSON output into binary format for writing to a file. |
| Write the topics/trends file to disk | readWriteFile nodes that save the binary data to a local file (d:\topics.json and d:\trends.json). |

Customization Tips

- Change the web URL in the Set URL and Bright Data Zone node to scrape different websites.
- Modify the AI prompts in the AI nodes to customize the analysis (e.g., change the sentiment analysis criteria).
- Adjust the output path in the readWriteFile nodes to save the files to a different location.

Suggested Sticky Notes for Workflow

- **Note**: "This workflow deals with the structured data extraction by utilizing Bright Data Web Unlocker Product... Please make sure to set the web URL of your interest within the 'Set URL and Bright Data Zone' node and update the Webhook Notification URL".
- **LLM Usages**: "Google Gemini Flash Exp model is being used... Information Extraction is used to handle the custom sentiment analysis with the structured response".

Required Files

1GOrjyc9mtZCMvCr_Structured_Data_Extract,Data_Mining_with_Bright_Data&_Google_Gemini.json: The main n8n workflow export for this automation.

Testing Tips

- Run the workflow and check the webhook to verify that the extracted data is being sent correctly.
- Confirm that the d:\topics.json and d:\trends.json files are created on your disk with the expected structured data.

Suggested Tags & Categories

Engineering
AI
by LeeWei
⚙️ Proposal Generator Template (Automates proposal creation from JotForm submissions) 🧑💻 Author: [LeeWei] 🚀 Steps to Connect: JotForm Setup Visit JotForm to generate your API key and connect to the JotForm Trigger node. Update the form field in the JotForm Trigger node with your form ID (default: 251206359432049). Google Drive Setup Go to Google Drive and set up OAuth2 credentials ("Google Drive account") with access to the folder containing your template. Update the fileId field in the Google Drive node with your template file ID (default: 1DSHUhq_DoM80cM7LZ5iZs6UGoFb3ZHsLpU3mZDuQwuQ). Update the name field in the Google Drive node with your desired output file name pattern (default: ={{ $json['Company Name'] }} | Ai Proposal). OpenAI Setup Visit OpenAI and generate your API key. Paste this key into the OpenAI and OpenAI1 nodes under the "OpenAi account 3" credentials. Update the modelId field in the OpenAI1 node if needed (default: gpt-4.1-mini). Google Docs Setup Set up OAuth2 credentials ("Google Docs account") with edit permissions for the generated documents. No fields need editing as the node dynamically updates based on previous outputs. Google Drive2 Setup Ensure the same Google Drive credentials ("Google Drive account") are used. No fields need editing as the node handles PDF conversion automatically. Gmail Setup Go to Gmail and set up OAuth2 credentials ("Gmail account"). No fields need editing as the node dynamically uses the prospect's email from JotForm. How it works The workflow triggers on JotForm submissions, copies a Google Drive template, downloads an audio call link, transcribes it with OpenAI, generates a tailored proposal, updates a Google Docs file, converts it to PDF, and emails it to the prospect. Set up steps Setup time: Approximately 15-20 minutes. Detailed instructions are available in sticky notes within the workflow.
by M Ayoub
Who is this for? Crypto traders, investors, and enthusiasts who want automated daily market analysis delivered to Discord without manually checking multiple data sources. What it does Fetches real-time cryptocurrency data from 6 free APIs, analyzes market sentiment and indicators using Google Gemini AI, and sends beautifully formatted investment recommendations to your Discord channel. ✅ Uses Free APIs Only — CoinGecko, Yahoo Finance, Alternative.me, and OKX public endpoints require no paid subscriptions or API keys. How it works Triggers daily at scheduled time (default: 5PM) Fetches BTC/ETH/USDT prices and global market metrics from CoinGecko Gets DXY (US Dollar Index) from Yahoo Finance Retrieves Fear & Greed Index from Alternative.me Pulls BTC and ETH funding rates from OKX Combines all data and builds comprehensive analysis prompt Gemini AI analyzes correlations, sentiment, and provides investment stance Formats rich Discord embed with market overview, dominance metrics, indicators, and AI insights Sends alert to your Discord webhook Set up steps Get a free Google Gemini API key from Google AI Studio Create a Discord webhook in your server (Server Settings → Integrations → Webhooks) Connect your Gemini API credentials to the Gemini AI Analysis node Update the webhook URL in the Send to Discord Webhook node with your Discord webhook URL Optionally adjust the trigger time in the Daily Schedule Trigger (5PM) node Setup time: ~5 minutes
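The "rich Discord embed" step amounts to posting a JSON payload to the webhook URL. The sketch below shows a minimal embed builder; the `embeds`/`fields` structure follows Discord's webhook API, while the `data` field names are assumptions standing in for the values gathered earlier in the workflow:

```javascript
// Build a Discord webhook payload summarizing the day's market snapshot.
// `data` is assumed to hold values collected from CoinGecko, Alternative.me,
// and the Gemini analysis step.
function buildDiscordEmbed(data) {
  return {
    embeds: [{
      title: '📊 Daily Crypto Market Analysis',
      color: 0x5865f2, // Discord blurple accent bar
      fields: [
        { name: 'BTC', value: `$${data.btcPrice}`, inline: true },
        { name: 'ETH', value: `$${data.ethPrice}`, inline: true },
        { name: 'Fear & Greed', value: String(data.fearGreed), inline: true },
        { name: 'AI Stance', value: data.aiStance, inline: false },
      ],
    }],
  };
}
```

Posting this object as the JSON body of a POST to your webhook URL renders the embed in the channel; extra fields (dominance, funding rates, DXY) slot in the same way.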
by Nansen
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. How it works This workflow listens for an incoming chat message and routes it to an AI Agent. The agent is powered by your preferred Chat Model (such as OpenAI or Anthropic) and extended with the Nansen MCP tool, which enables it to retrieve onchain wallet data, token movements, and address-level insights in real time. The Nansen MCP tool uses HTTP Streamable transport and requires API Key authentication via Header Auth. Read the Documentation: https://docs.nansen.ai/nansen-mcp/overview Set up steps Get your Nansen MCP API key Visit: https://app.nansen.ai/account?tab=api Generate and copy your personal API key. Create a credential for authentication From the homepage, click the dropdown next to "Create Workflow" → "Create Credential". Select Header Auth as the method. Set the Header Name to: NANSEN-API-KEY Paste your API key into the Value field. Save the credential (e.g., Nansen MCP Credentials). Configure the Nansen MCP tool Endpoint: https://mcp.nansen.ai/ra/mcp/ Server Transport: HTTP Streamable Authentication: Header Auth Credential: Select Nansen MCP Credentials Tools to Include: Leave as All (or restrict as needed) Configure the AI Agent Connect your preferred Chat Model (e.g., OpenAI, Anthropic) to the Chat Model input. Connect the Nansen MCP tool to the Tool input. (Optional) Add a Memory block to preserve conversational context. Set up the chat trigger Use the "When chat message received" node to start the flow when a message is received. Test your setup Try sending prompts like: What tokens are being swapped by 0xabc...123? Get recent wallet activity for this address. Show top holders of token XYZ.
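Under the hood, every call the MCP tool makes carries the Header Auth credential. A rough sketch of the request shape, using the endpoint and header name from the steps above (the payload field is a placeholder, not Nansen's wire format):

```javascript
// Shape of an authenticated request to the Nansen MCP endpoint.
// `apiKey` is the key from app.nansen.ai; `payload` stands in for the
// MCP message body, which the n8n tool constructs for you.
function buildNansenRequest(apiKey, payload) {
  return {
    url: 'https://mcp.nansen.ai/ra/mcp/',
    headers: {
      'NANSEN-API-KEY': apiKey,           // the Header Auth credential value
      'Content-Type': 'application/json',
    },
    body: payload,
  };
}
```

If tool calls fail with authentication errors, checking that the credential's header name is exactly NANSEN-API-KEY is the first thing to verify.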
by Julian Kaiser
Scan Any Workout Plan into the Hevy App with AI

This workflow automates the creation of workout routines in the Hevy app by extracting exercise information from an uploaded PDF or Image using AI.

What problem does this solve?

Tired of manually typing workout plans into the Hevy app? Whether your coach sends them as Google Docs, PDFs, or you have a screenshot of a routine, entering every single exercise, set, and rep is a tedious chore. This workflow ends the madness. It uses AI to instantly scan your workout plan from any file, intelligently extract the exercises, and automatically create the routine in your Hevy account. What used to take 15 minutes of mind-numbing typing now happens in seconds.

How it works

1. Trigger: The workflow starts when a PDF file is submitted through an n8n form.
2. Data Extraction: The PDF is converted to a Base64 string and sent to an AI model to extract the raw text of the workout plan.
3. Context Gathering: The workflow fetches a complete list of available exercises directly from the Hevy API. This list is then consolidated.
4. AI Processing: A Google Gemini model analyzes the extracted text, compares it against the official Hevy exercise list, and transforms the raw text into a structured JSON format that matches the Hevy API requirements.
5. Routine Creation: The final structured data is sent to the Hevy API to create the new workout routine in your account.

Set up steps

**Estimated set up time:** 15 minutes.

1. Configure the On form submission trigger or replace it with your preferred trigger (e.g., Webhook). Ensure it's set up to receive a file upload.
2. Add your API credentials for the AI service (in this case, OpenRouter.ai) and the Hevy app. You will need to create 'Hevy API' and OpenRouter API credentials in your n8n instance.
3. In the Structured Data Extraction node, review the prompt and the json schema in the Structured Output Parser. You may need to adjust the prompt to better suit the types of files you are uploading.
4. Activate the workflow.
Test it by uploading a sample workout plan document.
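The Base64 conversion in the Data Extraction step is a one-liner in an n8n Code node. A sketch (the binary-property access is illustrative; in n8n you would read the uploaded file's buffer from the item's binary data):

```javascript
// Convert an uploaded file's bytes to a Base64 string so it can be
// embedded in the JSON body of the AI model request.
function toBase64(buffer) {
  return Buffer.from(buffer).toString('base64');
}
```

The resulting string is what gets sent to the model alongside the extraction prompt; decoding it with `Buffer.from(s, 'base64')` recovers the original bytes exactly.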
by Marcelo Abreu
What this workflow does This workflow takes any website URL, extracts its HTML content, and uses an AI Agent (Claude Opus 4.6) to perform a comprehensive SEO analysis. The AI evaluates the page structure, meta tags, heading hierarchy, link profile, image optimization, and more — then generates a beautifully formatted HTML report. Finally, it converts the report into a PDF using Gotenberg, a free and open-source HTML-to-PDF engine. Workflow steps: Form submission — pass the URL you want to analyze HTML extraction — fetches the full HTML content from the URL AI SEO analysis — Claude Opus 4.6 analyzes the HTML and generates a detailed SEO report in HTML format File conversion — converts the HTML output into a file (index.html) for Gotenberg PDF generation — sends the file to Gotenberg and returns the final PDF Setup Guide Gotenberg — Choose one of 3 options: Option 1 — Demo URL (testing only): Use https://demo.gotenberg.dev as the URL in the HTTP Request node. This is a public instance with rate limits — do not use in production. Option 2 — Docker Compose (self-hosted n8n): Add Gotenberg to the same docker-compose.yml where your n8n service is defined: services: ... your n8n service ... gotenberg: image: gotenberg/gotenberg:8 restart: always Run docker compose up -d to restart your stack. Gotenberg will be available at http://gotenberg:3000 from inside your n8n container. Option 3 — Google Cloud Run (n8n Cloud or no Docker access): Deploy gotenberg/gotenberg:8 as a Cloud Run service via the Google Cloud Console. Set the container port to 3000, memory to 1 GiB, and use the generated URL as your endpoint. 📖 Full Gotenberg docs: gotenberg.dev/docs AI Model This workflow uses Claude Opus 4.6 via the Anthropic API. You can swap it for OpenAI, Google, or Ollama — just replace the Chat Model node. 
Requirements Anthropic API key (or alternative LLM provider) Gotenberg instance (demo URL included for quick testing) No other external services or paid tools required Feel free to contact me via LinkedIn if you have any questions! 👋🏻
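Since Compose files are whitespace-sensitive, here is the Option 2 snippet from the setup guide laid out as it would appear in docker-compose.yml (your existing n8n service definition stays wherever it already is):

```yaml
services:
  # ... your existing n8n service ...
  gotenberg:
    image: gotenberg/gotenberg:8
    restart: always
```

After `docker compose up -d`, the HTTP Request node can reach Gotenberg at http://gotenberg:3000 because both containers share the Compose network.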
by Cheng Siong Chin
How It Works This workflow automates document authenticity verification by combining AI-based content analysis with immutable blockchain records. It is built for compliance teams, legal departments, supply chain managers, and regulators who need tamper-proof validation and auditable proof. The solution addresses the challenge of detecting forged or altered documents while producing verifiable evidence that meets legal and regulatory standards. Documents are submitted via webhook and processed through PDF content extraction. Anthropic’s Claude analyzes the content for authenticity signals such as inconsistencies, anomalies, and formatting issues, returning structured authenticity scores. Verified documents trigger blockchain record creation and publication to a distributed ledger, with cryptographic proofs shared automatically with carriers and regulators through HTTP APIs. Setup Steps Configure webhook endpoint URL for document submission Add Anthropic API key to Chat Model node for AI Set up blockchain network credentials in HTTP nodes for record preparation Connect Gmail account and specify compliance team email addresses Customize authenticity thresholds Prerequisites Anthropic API key, blockchain network access and credentials Use Cases Supply chain documentation verification for import/export compliance Customization Adjust AI prompts for industry-specific authenticity criteria Benefits Eliminates manual document review time while improving fraud detection accuracy
by Pixcels Themes
Who’s it for This template is ideal for ecommerce founders, dropshippers, Shopify store owners, product managers, and agencies who want to automate product listing creation. It removes manual work by generating titles, descriptions, tags, bullet points, alt text, and SEO metadata directly from a product image and basic input fields. What it does / How it works This workflow starts with a webhook that receives product information along with an uploaded image. The image is uploaded to an online image host so it can be used inside Shopify. At the same time, the image is analyzed by Google Gemini using your provided product name, material type, and details. Gemini returns structured JSON containing: Title Description Tags Bullet points Alt text SEO title SEO description The workflow cleans and parses the AI output, merges it with the uploaded image URL, and constructs a complete Shopify product payload. Finally, it creates a new product in Shopify automatically using the generated content and the provided product variants, vendor, options, and product type. Requirements Google Gemini (PaLM) API credentials Shopify private access token Webhook endpoint for receiving data and files An imgbb (or any image hosting) API key How to set up Connect your Gemini and Shopify credentials. Replace the imgbb API key and configure the hosting node. Provide vendor, product type, variants, and options in the webhook payload. Ensure your source system sends file, product_name, material_type, and extra fields. Run the webhook URL and test with a sample product. How to customize the workflow Change the AI prompt for different product categories Add translation steps for multi-language stores Add price calculation logic Push listings to multiple Shopify stores Save generated metadata into Google Sheets or Notion
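The final "construct a complete Shopify product payload" step maps the parsed Gemini JSON onto Shopify's product shape. A rough sketch (field names follow Shopify's REST Admin API product resource; the `ai` key names are assumptions about the structured JSON Gemini returns, so adjust them to your prompt's schema):

```javascript
// Merge the AI-generated listing content and the hosted image URL into a
// Shopify product-create payload. `ai` is the parsed Gemini JSON;
// `input` is the original webhook data (vendor, variants, options, ...).
function buildShopifyProduct(ai, imageUrl, input) {
  return {
    product: {
      title: ai.title,
      body_html: `<p>${ai.description}</p>`,
      vendor: input.vendor,
      product_type: input.product_type,
      tags: ai.tags.join(', '),                       // Shopify expects a comma-separated string
      images: [{ src: imageUrl, alt: ai.alt_text }],  // the imgbb-hosted image
      variants: input.variants,
      options: input.options,
      metafields_global_title_tag: ai.seo_title,      // SEO page title
      metafields_global_description_tag: ai.seo_description,
    },
  };
}
```

This object becomes the JSON body of the product-create call authenticated with your Shopify access token.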