by Sona Labs
Generate Sora videos, stitch clips, and post to Twitter

Generate creative ASMR cutting video concepts with GPT-5.1, create high-quality video clips using Sora v2, stitch them together with Cloudinary, and automatically post to Twitter/X, turning ideas into viral content without manual video editing.

## How it works

### Step 1: Generate Video Concepts
- The Schedule Trigger activates the workflow automatically
- A GPT-5.1 AI agent generates 3 unique ASMR cutting scene prompts featuring unusual objects
- Creates structured video prompts optimized for Sora v2 (frontal camera angle, cutting actions)
- Generates Twitter-ready captions with relevant hashtags
- Saves all concepts and scripts to Google Sheets for tracking

### Step 2: Create Video Clips with Sora v2
- Generates 3 separate Sora v2 video clips in parallel (8-12 seconds each)
- Each clip uses a unique prompt from the GPT-5.1 output
- Videos render at 720x1280 resolution (vertical format for social media)
- The system waits 30 seconds for rendering to complete

### Step 3: Monitor & Download Videos
- Loops through all 3 video generation requests
- Checks the Sora API status every 30 seconds until rendering completes
- Automatically skips failed renders (the workflow continues with the successful videos)
- Downloads completed videos from the Sora API
- Uploads each clip to Cloudinary for storage and processing

### Step 4: Stitch Videos Together
- Collects all uploaded Cloudinary video IDs
- Builds a Cloudinary transformation URL to stitch the 3 clips into one seamless video
- Applies Twitter-compatible encoding (H.264 baseline, AAC audio, MP4 format)
- Downloads the final stitched video
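Step 4's URL building happens in the "Build Stitch URL" Code node (configured in the setup section below). Here is a minimal sketch of what such a node might look like, using Cloudinary's fl_splice overlay to concatenate clips. The transformation string and public IDs are illustrative assumptions, so verify them against your own node and Cloudinary's documentation rather than treating this as the template's literal code.

```javascript
// Sketch of a "Build Stitch URL" Code node (illustrative, not the template's
// literal code). Cloudinary's fl_splice overlay concatenates video clips.
const cloudName = 'YOUR_CLOUD_NAME'; // replaces the hard-coded cloud name
const [first, ...rest] = $input.all().map((item) => item.json.public_id);

// Each extra clip is spliced onto the base video, then Twitter-friendly
// encoding is applied: H.264 baseline video, AAC audio, MP4 container.
const splices = rest
  .map((id) => `l_video:${id},fl_splice/fl_layer_apply`)
  .join('/');
const encoding = 'vc_h264:baseline,ac_aac';

const stitchedUrl =
  `https://res.cloudinary.com/${cloudName}/video/upload/` +
  `${splices}/${encoding}/${first}.mp4`;
return [{ json: { stitchedUrl } }];
```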
### Step 5: Upload to Twitter/X
- Prepares the video file data and calculates the total file size
- Uses Twitter's chunked upload API (INIT → APPEND → FINALIZE)
- Waits for Twitter's video processing to complete
- Checks the processing status until the video is ready
- Posts the tweet with the AI-generated caption and attached video
- Updates the Google Sheets status to "Posted"
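Step 5's chunked upload follows Twitter's v1.1 media upload protocol. The sketch below shows its shape; the OAuth 1.0a signing that every request needs is reduced to a plain unsigned helper here (in the workflow, the node credential signs each request), so treat this as an outline rather than a working uploader.

```javascript
// Shape of Twitter's chunked media upload (v1.1 endpoint). In the workflow,
// OAuth 1.0a signing is handled by the node credentials; this helper just
// sends unsigned multipart/form-data to show the request structure.
const MEDIA_URL = 'https://upload.twitter.com/1.1/media/upload.json';
const CHUNK = 5 * 1024 * 1024; // 5 MB segments

async function twitterCall(url, fields) {
  const form = new FormData();
  for (const [k, v] of Object.entries(fields)) {
    form.append(k, v instanceof Blob ? v : String(v));
  }
  const res = await fetch(url, { method: 'POST', body: form });
  return res.status === 204 ? {} : res.json(); // APPEND returns no body
}

async function uploadVideo(buffer) {
  // INIT: declare size and type, get a media_id back
  const init = await twitterCall(MEDIA_URL, {
    command: 'INIT',
    total_bytes: buffer.length,
    media_type: 'video/mp4',
    media_category: 'tweet_video',
  });
  const mediaId = init.media_id_string;

  // APPEND: send the file in segments
  for (let i = 0; i * CHUNK < buffer.length; i++) {
    await twitterCall(MEDIA_URL, {
      command: 'APPEND',
      media_id: mediaId,
      segment_index: i,
      media: new Blob([buffer.subarray(i * CHUNK, (i + 1) * CHUNK)]),
    });
  }

  // FINALIZE: Twitter then processes the video asynchronously
  await twitterCall(MEDIA_URL, { command: 'FINALIZE', media_id: mediaId });
  return mediaId; // poll with command=STATUS until processing succeeds
}
```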
## What you'll get
- **AI-Generated Concepts**: Creative ASMR cutting ideas with unusual objects (glass avocados, lava rocks, rainbow soap)
- **Professional Video Clips**: Three 8-12 second Sora v2 videos per concept at 720x1280 resolution
- **Seamless Stitching**: A single combined video optimized for Twitter/X specifications
- **Engaging Captions**: GPT-5.1-generated tweets with hashtags designed for virality
- **Automated Posting**: Direct upload to Twitter/X without manual intervention
- **Cloud Backup**: All videos stored in Cloudinary with metadata
- **Progress Tracking**: Google Sheets integration shows the workflow status (In Progress → Posted)
- **Error Handling**: Failed Sora renders are automatically skipped

## Why use this
- **Save 4+ hours per video**: Eliminate scripting, shooting, editing, and posting time
- **Consistent posting schedule**: Set it and forget it with the Schedule Trigger
- **Scale content creation**: Generate multiple video variations in 20-30 minutes
- **Professional quality**: Leverage Sora v2's AI video generation for realistic cutting scenes
- **Optimize for virality**: GPT-5.1 creates concepts and captions designed for engagement
- **Reduce creative burnout**: AI handles ideation, execution, and distribution
- **No video editing skills needed**: Complete automation from concept to post
- **Test multiple concepts**: Generate 3 variations per run to see what resonates

## Setup instructions

Required accounts and credentials:

1. **OpenAI API Key** (GPT-5.1 and Sora v2 access required)
   - Sign up at https://platform.openai.com
   - Ensure your account has Sora v2 API access enabled
   - Generate an API key from the API Keys section
   - Note: Sora v2 is currently in limited beta
2. **Google Sheets OAuth** (for tracking video ideas and status)
   - A free Google account is required
   - Create a spreadsheet with the columns: Category, Scene 1, Scene 2, Scene 3, Status
   - n8n will request OAuth permissions during setup
3. **Cloudinary Account** (for video storage and stitching)
   - Sign up at https://cloudinary.com (free tier available)
   - Note your cloud name from the dashboard
   - Create an upload preset named n8n_integration and enable unsigned uploads for it
4. **Twitter OAuth 1.0a Credentials** (for automated posting)
   - Apply for Twitter Developer access at https://developer.twitter.com
   - Create a new app in the Developer Portal
   - Generate an API Key, API Secret, Access Token, and Access Token Secret
   - Enable "Read and Write" permissions (not just Read)
   - OAuth 1.0a is required for media uploads (OAuth 2.0 won't work)

Configuration steps:

1. **Update the OpenAI API key.** Add your key to the "OpenAI Chat Model" credentials and to the Authorization header of these nodes: "Create Sora Video Scene - 1", "Create Sora Video Scene - 2", "Create Sora Video Scene - 3", "Check Video Status", and "Download Completed Video". Replace `Bearer API KEY` with `Bearer YOUR_ACTUAL_API_KEY`.
2. **Configure Google Sheets.** Open the "Save Category and Clip Scripts" and "Update Status" nodes, authenticate with your Google account (OAuth 2.0), and select your spreadsheet and sheet name. Ensure the columns match: Category, Scene 1, Scene 2, Scene 3, Status. The workflow updates Status from "In Progress" to "Posted".
3. **Update the Cloudinary settings.** In the "Upload to Cloudinary" node, replace `{Cloud name here}` in the URL with your Cloudinary cloud name and verify the upload preset is set to n8n_integration. In the "Build Stitch URL" node, open the Code node and replace `dph9n4uei` on line 1 with your cloud name; this node builds the video stitching transformation URL.
4. **Add Twitter OAuth 1.0a credentials.** Configure OAuth 1.0a in the "Twitter Upload - INIT", "Twitter Upload - APPEND", "Finalize Upload", "Check Twitter Processing Status", and "Post a Tweet" nodes. Use the same OAuth 1.0a credential for all of them, and ensure your Twitter app has "Read and Write" permissions.
5. **Adjust the Schedule Trigger (optional).** By default it runs on every interval; modify the "Schedule Trigger" node to set specific times. Recommended: once per day or every few hours to avoid rate limits.
6. **Test the workflow.** Click "Execute Workflow" to test manually first. Verify that GPT-5.1 generates 3 video concepts, that Sora v2 creates all 3 videos, that Cloudinary stitches them correctly, and that the Twitter post appears with the video and caption.

## Important notes
- **Sora API Rate Limits**: Sora v2 may have rendering quotas; monitor your usage
- **Video Rendering Time**: Each Sora clip takes 2-5 minutes; the total workflow takes 15-25 minutes
- **Failed Videos**: The workflow automatically skips failed renders and continues
- **Twitter Video Limits**: Maximum 512MB per video, MP4 format required
- **Cloudinary Free Tier**: 25 credits/month includes video transformations
- **Cost Estimate**: ~$1-3 per run (Sora API pricing varies)

## Troubleshooting
- **"Sora API access required"**: Contact OpenAI to enable the Sora v2 API on your account
- **Twitter upload fails**: Verify that the OAuth 1.0a credentials have "Read and Write" permissions
- **Cloudinary upload fails**: Check the cloud name and ensure the upload preset exists
- **Videos don't stitch**: Verify that all 3 videos uploaded successfully to Cloudinary
- **Google Sheets not updating**: Confirm the OAuth permissions and that the sheet column names match

## Next steps
- Enable the Schedule Trigger to automate daily or weekly posts
- Monitor Google Sheets to track posted content
- Adjust the GPT-5.1 prompts in "ASMR Cutting Ideas" for different content themes
- Experiment with different video durations (8 vs 12 seconds)
- Add error notifications using Email or Slack nodes
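Finally, for anyone wiring up or debugging the "Check Video Status" node, the monitor step (Step 3 above) reduces to a poll-and-wait loop. A minimal sketch follows, assuming a GET endpoint that returns a status field such as queued/in_progress/completed/failed; take the exact OpenAI video API path and field names from your workflow's HTTP Request node rather than from this example.

```javascript
// Minimal polling sketch for a video render job (assumed endpoint and
// fields). In the workflow this is split across Wait / HTTP Request / If
// nodes rather than living in one function.
async function waitForRender(videoId, apiKey, intervalMs = 30000, maxTries = 20) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const res = await fetch(`https://api.openai.com/v1/videos/${videoId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const job = await res.json();
    if (job.status === 'completed') return job;          // ready to download
    if (job.status === 'failed') return null;            // skip this clip
    await new Promise((r) => setTimeout(r, intervalMs)); // wait 30s, retry
  }
  return null; // give up after maxTries; treat it like a failed render
}
```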
by Trung Tran
Cloudflare Incident Monitoring & Escalation Workflow

🚀 Try Decodo — Web Scraping & Data API (Coupon: TRUNG)

Decodo is a public data access platform offering managed web scraping APIs and proxy infrastructure to collect structured web data at scale. It handles proxies, anti-bot protection, JavaScript rendering, retries, and global IP rotation, so you can focus on data, not scraping complexity.

Why Decodo:
- Managed Web Scraping API with anti-bot bypass and high success rates
- Works with JS-heavy sites; outputs JSON/HTML/CSV
- Easy integration (Python, Node.js, cURL) for eCommerce, SERP, social, and general web data

🎟️ Special discount: use coupon TRUNG to get the Advanced Scraping API plan (23,000 requests for $5).

## Who this workflow is for

DevOps, SRE, IT Ops, and Platform teams running production traffic behind Cloudflare who need reliable incident awareness without alert fatigue. Use it if you want:
- Continuous Cloudflare incident monitoring
- Clear severity-based routing
- Automatic escalation into JIRA
- Clean Slack & Telegram notifications
- Deduplicated, noise-controlled alerts

## What this workflow does

This workflow polls the Cloudflare Status API, detects unresolved incidents, scores their impact, and routes them to the right channels. High-impact incidents are escalated to JIRA; lower-impact updates are notified (or skipped) to reduce noise.

## How it works (high level)
1. Runs on a fixed schedule (e.g., every 5 minutes)
2. Fetches current Cloudflare incidents
3. Stops early if no active issues exist
4. Normalizes and scores incidents (severity, impact, affected service)
5. Deduplicates previously alerted incidents
6. Builds human-readable notification payloads
7. Routes by impact: high → create JIRA incident + notify; low → notify or suppress
8. Sends alerts to Slack and Telegram

## Requirements
- Decodo Scraper API credential
- n8n (self-hosted or Cloud)
- Cloudflare Status API (public)
- Slack bot (chat:write)
- Telegram bot + chat ID
- JIRA project with issue-create permission
- Optional LLM credentials (summarization/classification)

## Notes
- All secrets are stored in n8n Credentials
- The workflow is idempotent and safe to rerun
- No assumptions are made about root cause or remediation

Built for production-grade incident visibility with n8n.
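To make the fetch-and-score step concrete, here is a small sketch. It assumes the public Statuspage feed behind cloudflarestatus.com (the unresolved-incidents JSON endpoint) and a toy scoring threshold; the workflow's actual normalization and routing rules may differ, so check them in your own nodes.

```javascript
// Sketch of the fetch → score → route step. The endpoint follows the public
// Statuspage convention used by cloudflarestatus.com; verify it in your node.
const STATUS_URL =
  'https://www.cloudflarestatus.com/api/v2/incidents/unresolved.json';

const IMPACT_SCORE = { critical: 3, major: 2, minor: 1, none: 0 };

async function fetchAndRoute() {
  const res = await fetch(STATUS_URL);
  const { incidents = [] } = await res.json();
  if (incidents.length === 0) return []; // stop early: nothing active

  return incidents.map((inc) => ({
    id: inc.id,
    name: inc.name,
    impact: inc.impact,                          // Statuspage impact level
    score: IMPACT_SCORE[inc.impact] ?? 0,
    route: (IMPACT_SCORE[inc.impact] ?? 0) >= 2  // toy threshold:
      ? 'jira+notify'                            //  major/critical → escalate
      : 'notify-or-suppress',                    //  minor/none → low-noise path
  }));
}
```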
by Jay Emp0
## Overview

Fetch multiple Google Analytics GA4 metrics daily, post them to Discord, and update the previous days' entries as GA data finalizes over seven days.

## Benefits
- Automates daily traffic reporting
- Maintains a single message per day, avoiding channel clutter
- Provides near-real-time updates by editing prior messages

## Use Case

Teams tracking website performance via Discord (or any chat tool) without manual copy-paste: marketing managers, community moderators, growth hackers. If your manager asks you for a daily marketing report every morning, you can now automate it.

## Notes
- The Google Analytics node in n8n does not provide real-time data; GA keeps updating previous values for the next 7 days
- The Discord node in n8n cannot update an existing message by message ID, so the workflow calls the Discord API directly for that
- Most businesses use multiple Google Analytics properties across their digital platforms

## Core Logic
1. A Schedule Trigger fires once a day.
2. The Google Analytics node retrieves metrics for the date range (past 7 days).
3. An Aggregate node collates all records.
4. The Discord node fetches the last 10 messages in the broadcast channel.
5. A Code node maps existing Discord messages to the Google Analytics data using the date fields.
6. For each GA record: if no message exists, a new POST is sent to the Discord channel; if a message exists and the metrics changed, a PATCH updates the existing Discord message (see the sketch after this section).
7. Batch loops and Wait nodes prevent rate limiting.

## Setup Instructions
1. Import the workflow JSON into n8n.
2. Follow the n8n guide to create a Google Analytics OAuth2 credential with access to all required GA accounts.
3. Follow the n8n guide to create a Discord OAuth2 credential for the "Get Messages" operations.
4. Follow the Discord guide to create an HTTP Header Auth credential named "Discord-Bot" with header key Authorization and value Bot <your-bot-token>.
5. In the two Set nodes at the beginning of the flow, assign discord_channel_id and google_analytics_id.
   - Get your Discord channel ID by sending a message in your channel and copying the message link. It has the form https://discord.com/channels/server_id/channel_id/message_id; you want the channel_id, the number in the middle.
   - Find your Google Analytics ID in the Google Analytics dashboard: open the properties selector in the top right and copy that number into the flow.
6. Adjust the Schedule Trigger times to your preferred report hour.
7. Activate the workflow.

## Customization

Replace the Discord HTTP Request nodes with Slack, ClickUp, WhatsApp, or Telegram integrations by swapping the POST/PATCH endpoints and authentication.
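Here is a sketch of the create-or-edit decision from step 6 of the core logic, using Discord's REST endpoints for creating and editing channel messages. The GA field names and the report line format are illustrative placeholders.

```javascript
// Create-or-edit logic for one GA daily record (illustrative field names).
// Discord REST: POST creates a message, PATCH edits an existing one by ID.
const API = 'https://discord.com/api/v10';

async function upsertReport(channelId, botToken, record, existingByDate) {
  const content = `📊 ${record.date}: ${record.sessions} sessions, ` +
                  `${record.users} users`; // illustrative report line
  const existing = existingByDate[record.date];

  // Skip the PATCH when nothing changed, to stay under rate limits.
  if (existing && existing.content === content) return;

  const url = existing
    ? `${API}/channels/${channelId}/messages/${existing.id}` // edit in place
    : `${API}/channels/${channelId}/messages`;               // post new

  await fetch(url, {
    method: existing ? 'PATCH' : 'POST',
    headers: {
      Authorization: `Bot ${botToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ content }),
  });
}
```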
by Airtop
Extract Facebook Group Posts with Airtop

## Use Case

Extracting content from Facebook Groups allows community managers, marketers, and researchers to gather insights, monitor discussions, and collect engagement metrics efficiently. This automation streamlines the process of retrieving non-sponsored post data from group feeds.

## What This Automation Does

This automation extracts key post details from a Facebook Group feed using the following input parameters:
- **Facebook Group URL**: The URL of the Facebook Group feed you want to scrape.
- **Airtop Profile**: The name of your Airtop Profile authenticated to Facebook.

It returns up to 5 non-sponsored posts with the following attributes for each:
- Post text
- Post URL
- Page/profile URL
- Timestamp
- Number of likes
- Number of shares
- Number of comments
- Page or profile details
- Post thumbnail

## How It Works
1. Form Trigger: collects the Facebook Group URL and Airtop Profile via a form.
2. Browser Automation: initiates a new browser session using Airtop, navigates to the provided Facebook Group feed, and uses an AI prompt to extract post data, including interaction metrics and profile information.
3. Structured Output: the results are returned in a defined JSON schema, ready for downstream use.

## Setup Requirements
- Airtop API Key — free to generate.
- An Airtop Profile logged into Facebook.

## Next Steps
- **Integrate With Analytics Tools**: Feed the output into dashboards or analytics platforms to monitor community engagement.
- **Automate Alerts**: Trigger notifications for posts matching certain criteria (e.g., high engagement, keywords).
- **Combine With Comment Automation**: Extend this to reply to posts or engage with users using other Airtop automations.

Read more about how to extract posts from Facebook groups
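To illustrate the "defined JSON schema" mentioned above, here is a plausible shape for one extracted post. Every field name here is an assumption for illustration; the actual schema is whatever your Airtop extraction prompt specifies.

```javascript
// Illustrative output shape for one extracted post (field names assumed;
// match them to the schema defined in your Airtop extraction prompt).
const examplePost = {
  post_text: 'Excited to share our community meetup photos!',
  post_url: 'https://www.facebook.com/groups/123456789/posts/987654321/',
  profile_url: 'https://www.facebook.com/some.page',
  timestamp: '2024-05-01T14:32:00Z',
  likes: 128,
  shares: 14,
  comments: 37,
  profile_details: { name: 'Some Page', type: 'page' },
  thumbnail_url: 'https://example.com/thumb.jpg',
};
```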
by Anna Bui
Automatically monitor LinkedIn posts from your community members and create AI-powered content digests for efficient social media curation.

This template is perfect for community managers, content creators, and social media teams who need to track LinkedIn activity from their network without spending hours manually checking profiles. It fetches recent posts, extracts key information, and creates digestible summaries using AI.

## Good to know
- **API costs apply**: LinkedIn API calls ($0.01-0.05 per profile check) and OpenAI processing ($0.001-0.01 per post)
- **Rate limiting included**: Built-in random delays prevent API throttling issues
- **Flexible scheduling**: Easy to switch from a daily schedule to webhook triggers for real-time processing
- **Requires API setup**: Needs RapidAPI access for LinkedIn data and OpenAI for content processing

## How it works
- **Daily profile scanning**: Automatically checks each LinkedIn profile in your Airtable for posts from yesterday
- **Smart data extraction**: Pulls post content, engagement metrics, author information, and timestamps
- **AI-powered summarization**: Creates 30-character previews of posts for quick content scanning
- **Duplicate prevention**: Checks existing records to avoid storing the same post multiple times
- **Structured storage**: Saves all processed data to Airtable with clean formatting and metadata
- **Batch processing**: Handles multiple profiles efficiently with proper error handling and delays

## How to use
1. **Set up the Airtable base**: Create tables for LinkedIn profiles and processed posts using the provided structure
2. **Configure API credentials**: Add your RapidAPI LinkedIn access and OpenAI API key to n8n credentials
3. **Import LinkedIn profiles**: Add community members' LinkedIn URLs and URNs to your profiles table
4. **Test the workflow**: Run manually with a few profiles to ensure everything works correctly
5. **Activate the schedule**: Enable daily automation or switch to webhook triggers for real-time processing

## Requirements
- **Airtable account**: For storing profile lists and managing processed posts with the proper field structure
- **RapidAPI Professional Network Data API**: Access to LinkedIn post data (requires a subscription)
- **OpenAI API account**: For intelligent content summarization and preview generation
- **LinkedIn profile URNs**: Properly formatted LinkedIn profile identifiers for API calls

## Customising this workflow
- **Change monitoring frequency**: Switch from daily to hourly checks or use webhook triggers for real-time updates
- **Expand data extraction**: Add company information, hashtag analysis, or engagement trending
- **Integrate notification systems**: Add Slack, email, or Discord alerts for high-engagement posts
- **Connect content tools**: Link to Buffer, Hootsuite, or other social media management platforms for direct publishing
- **Add filtering logic**: Set up conditions to only process posts with minimum engagement thresholds
- **Scale with multiple communities**: Duplicate the workflow for different LinkedIn communities or industry segments
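A sketch of the duplicate-prevention step: before inserting a post, look it up in Airtable with a filterByFormula query against its REST API. The base, table, and field names below are placeholders for your own setup.

```javascript
// Duplicate check against Airtable before saving a post (placeholder base,
// table, and field names). Uses Airtable's REST filterByFormula lookup.
async function postAlreadyStored(apiKey, baseId, postUrl) {
  const formula = encodeURIComponent(`{Post URL} = '${postUrl}'`);
  const res = await fetch(
    `https://api.airtable.com/v0/${baseId}/Posts` +
      `?filterByFormula=${formula}&maxRecords=1`,
    { headers: { Authorization: `Bearer ${apiKey}` } },
  );
  const { records = [] } = await res.json();
  return records.length > 0; // true → skip the insert
}
```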
by Vadym Nahornyi
## How it works

Automatically sends Telegram notifications when any n8n workflow fails. Includes the workflow name, error message, and execution ID in the alert.

## Setup

Complete setup instructions are included in the workflow's sticky note in 5 languages:
- 🇬🇧 English
- 🇪🇸 Español
- 🇩🇪 Deutsch
- 🇫🇷 Français
- 🇷🇺 Русский

## Features
- Monitors all workflows 24/7
- Instant Telegram notifications
- Zero configuration needed
- Just add your bot token and chat ID

## Important

⚠️ Keep this workflow active 24/7 to capture all errors.
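For reference, the alert this kind of error workflow sends boils down to a single Telegram Bot API call. A minimal sketch, assuming the commonly used fields the n8n Error Trigger exposes (verify them against your own n8n version):

```javascript
// Minimal sketch of the alert call (n8n's Telegram node does this for you).
// The Error Trigger fields shown are the commonly used ones.
async function sendErrorAlert(botToken, chatId, errorData) {
  const text =
    `🚨 Workflow failed: ${errorData.workflow.name}\n` +
    `Error: ${errorData.execution.error.message}\n` +
    `Execution ID: ${errorData.execution.id}`;

  await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ chat_id: chatId, text }),
  });
}
```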
by Alex Kim
Printify Automation - Update Title and Description Workflow

This n8n workflow automates the process of retrieving products from Printify, generating optimized product titles and descriptions, and updating them back on the platform. It leverages OpenAI for content generation and integrates with Google Sheets for tracking and managing updates.

## Features
- **Integration with Printify**: Fetch shops and products through Printify's API.
- **AI-Powered Optimization**: Generate engaging product titles and descriptions using OpenAI's GPT model.
- **Google Sheets Tracking**: Log and manage updates in Google Sheets.
- **Custom Brand Guidelines**: Ensure a consistent tone by incorporating brand-specific instructions.
- **Loop Processing**: Iteratively process each product in batches.

## Workflow Structure

Nodes overview:
1. Manual Trigger: manually start the workflow for testing purposes.
2. Printify - Get Shops: retrieves the list of shops from Printify.
3. Printify - Get Products: fetches product details for each shop.
4. Split Out: breaks down the product list into individual items for processing.
5. Loop Over Items: iteratively processes products in manageable batches.
6. Generate Title and Desc: uses OpenAI GPT to create optimized product titles and descriptions.
7. Google Sheets integration: a trigger monitors Google Sheets for changes, and a logging node records product updates, including old and new titles/descriptions.
8. Conditional logic: If nodes ensure products are ready for updates and stop processing once completed.
9. Printify - Update Product: sends updated titles and descriptions back to Printify.
10. Brand Guidelines + Custom Instructions: sets the brand tone and seasonal instructions.

## Setup Instructions

Prerequisites:
- **n8n Instance**: Ensure n8n is installed and configured.
- **Printify API Key**: Obtain an API key from your Printify account and add it to n8n under HTTP Header Auth.
- **OpenAI API Key**: Obtain an API key from OpenAI and add it to n8n under OpenAI API.
- **Google Sheets Integration**: Share your Google Sheets with the Google API service account and configure Google Sheets credentials in n8n.

Workflow configuration:
1. Set brand guidelines: update the Brand Guidelines + Custom Instructions node with your brand name, tone, and seasonal instructions.
2. Batch size: configure the Loop Over Items node for an optimal batch size.
3. Google Sheets configuration: set the correct Google Sheets document and sheet names in the integration nodes.
4. Run the workflow: start manually or configure the workflow to trigger automatically.

## Key Notes
- **Customization**: Modify the API calls to support other platforms like Printful or Vistaprint.
- **Scalability**: Use batch processing for efficient handling of large product catalogs.
- **Error Handling**: Configure retries or logging for any failed nodes.

## Output Examples
- Input title: "Classic White T-Shirt"
- Generated title: "Stylish Classic White Tee for Everyday Wear"
- Input description: "Plain white T-shirt made of cotton."
- Generated description: "Discover comfort and style with our classic white tee, crafted from premium cotton for all-day wear. Perfect for casual outings or layering."

## Next Steps
- Monitor updates: use Google Sheets to review logs of updated products.
- Expand integration: add support for more Printify shops or integrate with other platforms.
- Enhance AI prompts: customize prompts for different product categories or seasonal needs.

Feel free to reach out for additional guidance or troubleshooting!
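For orientation, here is a sketch of the product-update call the "Printify - Update Product" node performs, using Printify's v1 REST endpoint. Treat the exact path and payload fields as things to verify against Printify's API documentation rather than as the node's literal configuration.

```javascript
// Sketch of updating a product's title/description via Printify's v1 API.
// Verify the endpoint and payload against Printify's API documentation.
async function updateProduct(apiKey, shopId, productId, title, description) {
  const res = await fetch(
    `https://api.printify.com/v1/shops/${shopId}/products/${productId}.json`,
    {
      method: 'PUT',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ title, description }),
    },
  );
  if (!res.ok) throw new Error(`Printify update failed: ${res.status}`);
  return res.json();
}
```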
by lin@davoy.tech
This workflow template, "n8n Error Report to LINE," is designed to streamline error handling by sending real-time notifications to your LINE account whenever an error occurs in any of your n8n workflows. By integrating with the LINE Messaging API , this template ensures you stay informed about workflow failures, allowing you to take immediate action and minimize downtime. Whether you're a developer managing multiple workflows or a business owner relying on automation, this template provides a simple yet powerful way to monitor and resolve errors efficiently. Who Is This Template For? Developers: Who manage complex n8n workflows and need real-time error notifications. DevOps Teams: Looking to enhance monitoring and incident response for automated systems. Business Owners: Who rely on n8n workflows for critical operations and want to ensure reliability. Automation Enthusiasts: Seeking tools to simplify error tracking and improve workflow performance. What Problem Does This Workflow Solve? When automating processes with n8n, errors can occur due to various reasons such as misconfigurations, API changes, or unexpected inputs. Without proper error handling, these issues may go unnoticed, leading to delays or disruptions. This workflow solves that problem by: 1) Automatically detecting errors in your n8n workflows. 2) Sending instant notifications to your LINE account with details about the failed workflow, including its name and execution URL. Allowing you to quickly identify and resolve issues, ensuring smooth operation of your automated systems. What This Workflow Does 1) Error Trigger: The workflow is triggered automatically whenever an error occurs in any n8n workflow configured to use this error-handling flow. 2) Send Notification via LINE: Using the LINE Push API , the workflow sends a message to your LINE account with key details about the error, such as the workflow name and execution URL. You can also customize the notification message to include additional information or format it to suit your preferences. Setup Guide Pre-Requisites Access to the LINE Developers Console with a registered bot and access to the Push API. https://developers.line.biz/console/ [API Reference]( https://developers.line.biz/en/reference/messaging-api/#send-narrowcast-message) Basic knowledge of n8n workflows and JSON formatting. An active n8n instance where you can configure error workflows. Step-by-Step Setup Configure the Error Trigger: Set this workflow as the default error workflow in your n8n instance. https://docs.n8n.io/flow-logic/error-handling/ Set Up LINE Push API: Replace <UID HERE> in the HTTP Request node with your LINE user ID to ensure notifications are sent to the correct account.
by Dustin
Short and simple: this workflow will sync (add and delete) your Liked Songs to a custom playlist that can be shared.

## Setup
1. Create an app on the Spotify Developer Dashboard.
2. Create Spotify credentials: click on one of the Spotify nodes in the workflow, click "create new credentials", and follow the guide.
3. Create the Spotify playlist that you want to sync to.
4. Copy the exact name of your playlist, go into the "Edit set Vars" node, and replace the value "CHANGE MEEEE" with your playlist name.
5. Set your Spotify credentials on every Spotify node. (These should be marked with yellow and red notes.)
6. Do you use Gotify?
   - No: delete the Gotify nodes (all the way to the right end of the workflow).
   - Yes: customize the Gotify nodes to your needs.
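The heart of the sync is a two-way diff between your Liked Songs and the target playlist. A sketch of that comparison, assuming both lists have already been reduced to plain arrays of track IDs (in the workflow, the Spotify nodes handle the actual API calls):

```javascript
// Two-way diff between Liked Songs and the target playlist, given plain
// arrays of Spotify track IDs. The Spotify nodes do the actual add/remove.
function diffTracks(likedIds, playlistIds) {
  const liked = new Set(likedIds);
  const inPlaylist = new Set(playlistIds);
  return {
    toAdd: likedIds.filter((id) => !inPlaylist.has(id)),  // newly liked
    toRemove: playlistIds.filter((id) => !liked.has(id)), // un-liked
  };
}

// Example: liked = [a, b, c], playlist = [b, c, d] → add [a], remove [d]
```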
by David
## Who might benefit from this workflow?

Do you have to record your working hours yourself? Then this n8n workflow in combination with an iOS Shortcut will definitely help you. Once set up, you can use a shortcut, which can be stored as an app icon on your home screen, to record your start time, end time, and break duration.

## How it works

Once set up, you can tap the iOS Shortcut on your iPhone. You will see a menu containing three options: "Track Start", "Track Break" and "Track End". After the time is tracked, iOS will display a notification about the successful operation.

## How to set it up
1. Copy the Notion database to your Notion workspace (top right corner).
2. Copy the n8n workflow to your n8n workspace.
3. In the Notion nodes in the n8n workflow, add your Notion credentials and select the copied Notion database.
4. Download the iOS Shortcut from our documentation page.
5. Edit the shortcut and paste the URL of your n8n Webhook trigger node into the first "Text" node of the iOS Shortcut flow.

It is a best practice to use authentication. You can do so by adding "Header" auth to the webhook node and to the shortcut.

You need help implementing this or any other n8n workflow? Feel free to contact me via LinkedIn or my business website. You want to start using n8n? Use this link to register for n8n (this is an affiliate link).
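For illustration, the shortcut's call to the webhook is just a small POST. Below is a hedged sketch of a payload the shortcut might send and how the workflow could branch on it; the field names are hypothetical, so match them to your own webhook and routing nodes.

```javascript
// Hypothetical payload the iOS Shortcut POSTs to the n8n webhook:
// { "action": "Track Start" }  (or "Track Break" / "Track End")
// Sketch of routing it in a Code node; real workflows often use a Switch node.
const { action } = $input.first().json.body ?? $input.first().json;

const routes = {
  'Track Start': { event: 'start', timestamp: new Date().toISOString() },
  'Track Break': { event: 'break', timestamp: new Date().toISOString() },
  'Track End':   { event: 'end',   timestamp: new Date().toISOString() },
};

if (!routes[action]) {
  throw new Error(`Unknown action: ${action}`); // guard against bad input
}
return [{ json: routes[action] }];
```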
by Davide
## How It Works

This workflow creates an AI-powered Telegram chatbot with session management, allowing users to:
- **Start new conversations** (/new)
- **Check the current session** (/current)
- **Resume past sessions** (/resume)
- **Get summaries** (/summary)
- **Ask questions** (/question)

Key components:
- **Session Management**: Uses Google Sheets to track active/expired sessions (storing SESSION IDs and STATE). /new creates a session; /resume reactivates past ones.
- **AI Processing**: OpenAI GPT-4 generates responses with contextual memory (via the Simple Memory node). Summarization condenses past conversations when requested.
- **Data Logging**: All interactions (prompts/responses) are saved to Google Sheets for audit and retrieval.
- **User Interaction**: Telegram commands trigger specific actions (e.g., /question [query] fetches answers from the session history).

## Main Advantages
1. **Multi-session handling**: Each user can create, manage, and switch between multiple sessions independently, perfect for organizing different conversations without confusion.
2. **Persistent memory**: Conversations are stored in Google Sheets, ensuring that chat history and session states are preserved even if the server or n8n instance restarts.
3. **Commands for full control**: With intuitive commands like /new, /current, /resume, /summary, and /question, users can manage sessions easily without needing a web interface.
4. **Smart summarization and Q&A**: Thanks to OpenAI models, the workflow can summarize entire conversations or answer specific questions about past discussions, saving time and improving the chatbot's usability.
5. **Easy setup and scalability**: By using Google Sheets instead of a database, the workflow is easy to clone, modify, and deploy, ideal for quick prototyping or lightweight production uses.
6. **Modular and extensible**: Each logical block (new session, get current session, resume, summarize, ask question) is modular, making it easy to extend the workflow with additional features like analytics, personalized greetings, or integrations with CRM systems.

## Setup Steps

Prerequisites:
- **Telegram Bot Token**: Create one via BotFather.
- **Google Sheets**: Duplicate this template. It has two sheets: Session (active/inactive sessions) and Database (chat logs).
- **OpenAI API Key**: For GPT-4 responses.

Configuration:
1. Telegram integration: add your bot token to the Telegram Trigger and Telegram Send nodes.
2. Google Sheets setup: authenticate the Google Sheets nodes with OAuth, and ensure the sheet names (Session, Database) and column mappings match the template.
3. OpenAI & memory: add your API key to the OpenAI Chat Model nodes, and adjust contextWindowLength in the Simple Memory node to set the conversation history length.
4. Testing: use Telegram commands to test:
   - /new: starts a session.
   - /question [query]: tests AI responses.
   - /summary: checks summarization.
5. Deployment: activate the workflow; the bot will respond to Telegram messages in real time.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
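Here is a sketch of the command-routing step that sits behind those slash commands. This is illustrative Code-node logic, not the template's literal implementation; the action names are placeholders for the workflow's actual branches.

```javascript
// Illustrative command router for the Telegram message text (not the
// template's literal code). Downstream branches handle each action.
const text = ($input.first().json.message?.text ?? '').trim();
const [command, ...rest] = text.split(' ');
const argument = rest.join(' ');

const actions = {
  '/new': 'create_session',
  '/current': 'get_current_session',
  '/resume': 'resume_session',
  '/summary': 'summarize_session',
  '/question': 'answer_from_history',
};

return [{
  json: {
    action: actions[command] ?? 'chat', // plain messages go to the AI agent
    argument,                           // e.g. the query after /question
  },
}];
```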
by Mark Shcherbakov
## Video Guide

I prepared a detailed guide that shows the whole process of integrating the Binance API and storing data in Airtable to manage funding statements associated with tokens in a wallet.

Youtube Link

## Who is this for?

This workflow is ideal for developers, financial analysts, and cryptocurrency enthusiasts who want to automate the process of managing funding statements and token prices. It's particularly useful for those who need a systematic approach to track and report funding fees associated with tokens in their wallets.

## What problem does this workflow solve?

Managing funding statements and token prices across multiple platforms can be cumbersome and error-prone. This workflow automates the process, allowing users to seamlessly fetch funding fees from Binance and record them alongside token prices in Airtable, minimizing manual data entry and potential discrepancies.

## What this workflow does

This workflow integrates the Binance API with an Airtable database, facilitating the storage and management of funding statements linked to tokens in a wallet. It can:
- Fetch funding fees and current positions from Binance.
- Aggregate data to create structured funding statements.
- Insert records into Airtable, ensuring proper linkage between funding data and tokens.

In more detail:
1. API authentication: the workflow authenticates with the Binance API using a Crypto node to handle API keys and signatures, ensuring secure and verified requests.
2. Data collection: it retrieves the necessary data, including funding fees and current positions, with properly formatted API requests to ensure seamless communication with Binance.
3. Airtable integration: the workflow inserts aggregated funding statements and token data into the corresponding Airtable records, checking for existing tokens to avoid duplicate entries.

## Setup
1. Set up the Airtable database: create an Airtable base with tables for Funding Statements and Tokens.
2. Generate a Binance API key: log in and create an API key with the appropriate permissions.
3. Set up authentication in n8n: use a Crypto node for Binance API authentication.
4. Configure the API request to Binance: set the request method and headers for communication with the Binance API.
5. Fetch funding fees and current positions: retrieve funding data and current positions efficiently.
6. Aggregate and create statements: aggregate the data to create detailed funding statements.
7. Insert data into Airtable: input the structured data into Airtable and manage the token records.
8. Use a Get Price node: implement a Get Price node to maintain current token price tracking without additional setup.
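To clarify the signing step handled by the Crypto node: Binance signs the request's query string with HMAC-SHA256 over your secret key. A sketch follows, using the futures income endpoint filtered to funding fees as an assumed example; confirm the exact endpoint and parameters against Binance's API documentation before relying on it.

```javascript
// Sketch of Binance request signing (what the Crypto node produces).
// Endpoint shown is the futures income history filtered to funding fees;
// verify it against Binance's API documentation.
const crypto = require('crypto');

async function fetchFundingFees(apiKey, apiSecret) {
  const params = new URLSearchParams({
    incomeType: 'FUNDING_FEE',
    timestamp: Date.now().toString(),
  });
  // Binance requires an HMAC-SHA256 signature of the full query string.
  const signature = crypto
    .createHmac('sha256', apiSecret)
    .update(params.toString())
    .digest('hex');
  params.append('signature', signature);

  const res = await fetch(
    `https://fapi.binance.com/fapi/v1/income?${params.toString()}`,
    { headers: { 'X-MBX-APIKEY': apiKey } },
  );
  return res.json();
}
```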