by ankitkansaldev
# TikTok Influencer Scraper (URL Input) via Bright Data + n8n & Sheets

A comprehensive n8n automation that scrapes TikTok influencer profiles using Bright Data's TikTok dataset and automatically saves detailed profile information to Google Sheets.

## Overview

This workflow provides an automated TikTok influencer data collection solution that scrapes comprehensive profile information and saves it to Google Sheets. Perfect for influencer marketing research, competitor analysis, social media monitoring, and marketing campaign planning.

## Key Features

- **Form-Based Input**: Simple web form to submit TikTok profile URLs
- **Bright Data Integration**: Uses Bright Data's TikTok dataset for reliable scraping
- **Status Monitoring**: Intelligent polling system to check scraping progress
- **Retry Logic**: Automatic retry mechanism with 30-second intervals
- **Data Extraction**: Comprehensive profile data including engagement metrics
- **Google Sheets Storage**: Automatic data storage and organization
- **Error Handling**: Built-in error handling and status reporting
- **Custom Fields**: Configurable output fields for specific data needs

## What This Workflow Does

### Input

- **Profile URLs**: TikTok profile URLs submitted through the web form
- **Custom Fields**: Configurable data fields for extraction
- **Country Settings**: Geo-targeting for accurate data collection

### Processing

1. **Form Submission**: User submits a TikTok profile URL through the web form
2. **API Trigger**: Sends profile data to Bright Data for scraping
3. **Status Polling**: Continuously checks scraping progress
4. **Wait & Retry**: Implements 30-second delays between status checks
5. **Data Retrieval**: Fetches complete profile data when ready
6. **Sheet Update**: Saves extracted data to Google Sheets
7. **Status Reporting**: Provides completion status and messages

### Output Data Points

| Field | Description | Example |
|-------|-------------|---------|
| Account ID | Unique TikTok account identifier | @username123 |
| Nickname | Display name on profile | "John Doe" |
| Biography | Profile bio/description | "Content creator & influencer" |
| Followers | Number of followers | 1,250,000 |
| Following | Number of accounts followed | 500 |
| Likes | Total likes across all videos | 50,000,000 |
| Videos Count | Total number of videos posted | 1,200 |
| Profile URL | Direct link to TikTok profile | https://www.tiktok.com/@username |
| Profile Picture | Profile image URL | https://p16-sign-sg.tiktokcdn.com/... |
| Profile Picture HD | High-definition profile image | https://p16-sign-sg.tiktokcdn.com/... |
| Is Verified | Verification status | true/false |
| Bio Link | External link in bio | https://linktr.ee/username |
| Like Engagement Rate | Engagement rate based on likes | 5.2% |
| Comment Engagement Rate | Engagement rate based on comments | 2.1% |
| Top Videos | List of top-performing videos | [video_objects] |
| Region | Geographic region | "US" |
| Is Under Age 18 | Age status indicator | true/false |

## Setup Instructions

### Prerequisites

- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with TikTok dataset access
- Valid TikTok profile URLs for testing
- 10-15 minutes for setup

### Step 1: Import the Workflow

1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

### Step 2: Configure Bright Data

1. Set up Bright Data credentials:
   - In n8n: Credentials → + Add credential → HTTP Request Generic Credential
   - Name: "Bright Data API"
   - Authentication: Bearer Token
   - Token: your Bright Data API key
   - Test the connection
2. Configure the dataset:
   - Ensure you have access to the TikTok dataset (gd_l1villgoiiidt09ci)
   - Verify dataset permissions in the Bright Data dashboard
   - Check dataset limits and pricing

### Step 3: Configure Google Sheets Integration

1. Create a Google Sheet:
   - Go to Google Sheets and create a new spreadsheet named "TikTok Influencer Data"
   - Create a sheet tab named "TikTok profile by url"
   - Copy the Sheet ID from the URL: `https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit`
2. Set up Google Sheets credentials:
   - In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
   - Complete the OAuth setup and test the connection
3. Prepare your data sheet with columns:
   - Column A: Account ID
   - Column B: Nickname
   - Column C: Biography
   - Column D: Followers
   - Column E: Following
   - Column F: Likes
   - Column G: Videos Count
   - Column H: Profile URL
   - Column I: Is Verified
   - Column J: Bio Link
   - Column K: Like Engagement Rate
   - Column L: Comment Engagement Rate
   - Column M: Region
   - Column N: Status
   - Column O: Message

### Step 4: Update Workflow Settings

1. Update API credentials:
   - Open the "Sends profile URLs to Bright Data to trigger scraping" node
   - Replace `BRIGHT_DATA_API_KEY` with your actual API key
   - Update the dataset ID if different
2. Update the Google Sheets node:
   - Open the "Google Sheets" node
   - Replace the document ID: `1OeqtCFm4Wek9DI5YFOWQXTpQJS-SJxC10iAPKEKkmiY`
   - Select your Google Sheets credential
   - Choose the correct sheet/tab name
3. Configure form settings:
   - Open the "Search by Profile URL" node
   - Customize the form title and field labels as needed
   - Note the webhook URL for form access

### Step 5: Test & Activate

1. Add test profiles:
   - Access the form using the webhook URL
   - Submit 1-2 TikTok profile URLs for testing
   - Use full URLs (e.g., https://www.tiktok.com/@username)
2. Test the workflow:
   - Submit a test profile through the form
   - Monitor the execution in n8n
   - Verify data appears in the Google Sheet
   - Check for any error messages
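For reference, the API trigger configured in Steps 2 and 4 boils down to a single HTTP request. The script below is a minimal sketch of what that node sends, assuming Bright Data's dataset-trigger endpoint and Bearer authentication; the exact URL, query parameters, and response field names should be confirmed against your Bright Data dashboard before relying on them.

```javascript
// Minimal sketch of the "Sends profile URLs to Bright Data" request.
// Endpoint path and field names are assumptions - verify them in your Bright Data account.
const BRIGHT_DATA_API_KEY = 'YOUR_API_KEY';     // placeholder
const DATASET_ID = 'gd_l1villgoiiidt09ci';      // TikTok dataset from Step 2

const response = await fetch(
  `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${BRIGHT_DATA_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([
      {
        url: 'https://www.tiktok.com/@username', // profile URL captured by the form
        country: 'US',                           // geo-targeting input
      },
    ]),
  }
);

// The response contains an identifier for the scraping job, which the
// status-polling step checks every 30 seconds until the data is ready.
console.log(await response.json());
```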
## Usage Guide

### Submitting TikTok Profiles

1. Navigate to your form URL (found in the Form Trigger node)
2. Enter the TikTok profile URL in the format: https://www.tiktok.com/@username
3. Click Submit to start the scraping process
4. Wait for processing (typically 1-3 minutes)

### Understanding the Process

The workflow follows this sequence:

1. **Form Submission** → profile URL captured
2. **API Trigger** → scraping job submitted to Bright Data
3. **Status Polling** → checks every 30 seconds whether data is ready
4. **Data Retrieval** → fetches complete profile information
5. **Sheet Update** → saves data to Google Sheets

### Monitoring Progress

- Check n8n execution logs for real-time status
- The Bright Data dashboard shows scraping progress
- Google Sheets will populate when the data is ready
- The Status column shows "ready" when complete

### Reading the Results

Your Google Sheet will show:

- Complete TikTok profile information
- Engagement metrics and statistics
- Profile verification status
- Bio links and external connections
- Timestamp of data collection

## Customization Options

### Adding More Data Points

Edit the JSON body in the "Sends profile URLs to Bright Data" node to include additional fields:

    "custom_output_fields": [
      "account_id",
      "nickname",
      "biography",
      "followers",
      "following",
      "likes",
      "videos_count",
      "language",
      "creation_time",
      "last_post_time",
      "avg_video_duration",
      "hashtags_used",
      "music_used"
    ]

### Modifying Input Parameters

Customize the scraping parameters:

- **Country targeting**: change the "country" field in the input
- **Search limits**: adjust the "limit_per_input" value
- **Discovery method**: modify the "discover_by" parameter
- **Error handling**: toggle the "include_errors" setting

### Batch Processing Multiple Profiles

To process multiple profiles simultaneously (see the sketch after this section):

- Modify the input array in the API call
- Add multiple profile URLs in a single request
- Implement loop logic for processing results
- Add rate limiting between requests

### Custom Form Fields

Enhance the form with additional inputs:

1. Open the "Search by Profile URL" node
2. Add form fields for:
   - Country selection
   - Number of videos to analyze
   - Specific date ranges
   - Custom tags or categories
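As a rough illustration of the batch option above, the payload sent to Bright Data can carry several profile objects in one trigger call. This is a minimal sketch assuming the same input fields used elsewhere in this template (`url`, `country`); adjust the names to match your dataset's schema.

```javascript
// Build a multi-profile trigger payload from a list of TikTok URLs.
// Field names mirror the single-profile input; confirm against your dataset.
const profileUrls = [
  'https://www.tiktok.com/@creator_one',
  'https://www.tiktok.com/@creator_two',
  'https://www.tiktok.com/@creator_three',
];

const payload = profileUrls.map((url) => ({ url, country: 'US' }));

// In n8n this array becomes the JSON body of the Bright Data trigger request;
// the results come back as one snapshot containing all profiles, so the
// Google Sheets node should append a row per returned item.
console.log(JSON.stringify(payload, null, 2));
```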
## Troubleshooting

### Common Issues & Solutions

**"Bright Data connection failed"**
- Cause: invalid API credentials or missing dataset access
- Solution: verify the API key in the Bright Data dashboard and check dataset permissions

**"Profile not found or private"**
- Cause: invalid TikTok URL or private profile
- Solution: verify the profile URL format and ensure the profile is public

**"Google Sheets permission denied"**
- Cause: incorrect credentials or sheet permissions
- Solution: re-authenticate Google Sheets and check the sheet's sharing settings

**"Scraping timeout"**
- Cause: profile data taking too long to process
- Solution: increase the wait time or implement longer polling intervals

**"Invalid dataset ID"**
- Cause: incorrect or expired dataset configuration
- Solution: check the Bright Data dashboard for the correct dataset ID

**"Form submission failed"**
- Cause: webhook configuration issues
- Solution: verify the webhook URL and form trigger settings

### Advanced Troubleshooting

- **Check execution logs** in n8n for detailed error messages
- **Test individual nodes** by running them separately
- **Verify data formats**: ensure URLs are properly formatted
- **Monitor API limits**: check Bright Data usage quotas
- **Add error handling**: implement try/catch logic for robust operation

## Use Cases & Examples

### 1. Influencer Marketing Research

Goal: identify and analyze potential influencers for campaigns

- Research influencers in specific niches
- Analyze engagement rates and audience size
- Compare multiple influencers for campaign selection
- Track influencer growth over time

### 2. Competitive Analysis

Goal: monitor competitors' TikTok presence and performance

- Track competitor follower growth
- Analyze content strategies and engagement
- Monitor posting frequency and timing
- Identify trending content themes

### 3. Social Media Monitoring

Goal: track brand mentions and user-generated content

- Monitor branded hashtag usage
- Track brand advocates and micro-influencers
- Analyze sentiment and engagement patterns
- Identify trending topics in your industry

### 4. Market Research Pipeline

Goal: gather social media intelligence for business decisions

- Analyze target audience behavior
- Study content preferences and trends
- Generate reports for stakeholders
- Support marketing strategy development

## Advanced Configuration

### Rate Limiting and Performance

To optimize for large-scale scraping:

- Adjust wait times between status checks
- Implement exponential backoff for retries
- Add batch processing for multiple profiles
- Monitor API usage to avoid limits

### Data Validation and Cleaning

Enhance data quality with validation:

- Add data type validation for numeric fields
- Implement URL format checking
- Clean and standardize text fields
- Add data completeness checks

### Integration with Business Tools

Connect the workflow to your existing systems:

- **CRM Integration**: update customer records with influencer data
- **Slack Notifications**: send alerts when new data is available
- **Database Storage**: store data in PostgreSQL/MySQL for analysis
- **BI Tools**: connect to Tableau/Power BI for visualization

### Webhook Integration

For real-time updates:

- Add webhook triggers for immediate profile checks
- Integrate with external systems via webhooks
- Create API endpoints for programmatic access
- Implement authentication for secure access

## Performance & Limits

### Expected Performance

- **Single profile**: 30-60 seconds average processing time
- **Concurrent requests**: 5-10 simultaneous (depends on your Bright Data plan)
- **Data accuracy**: 95%+ for public TikTok profiles
- **Success rate**: 90%+ for accessible profiles
- **Daily capacity**: 100-1,000 profiles (depends on rate limits)

### Resource Usage

- **Memory**: ~50 MB per execution
- **Storage**: minimal (data stored in Google Sheets)
- **API calls**: 3-5 Bright Data calls per profile (including status checks)
- **Bandwidth**: ~1-2 MB per profile scraped
- **Execution time**: 1-2 minutes per profile

### Scaling Considerations

- **Rate limiting**: add delays for high-volume scraping
- **Error handling**: implement retry logic for failed requests
- **Data validation**: add checks for malformed profile data
- **Monitoring**: track success/failure rates over time
- **Cost optimization**: monitor API usage to control costs

## Support & Community

### Getting Help

- **n8n Community Forum**: community.n8n.io
- **Documentation**: docs.n8n.io
- **Bright Data Support**: contact through your dashboard
- **GitHub Issues**: report bugs and feature requests

### Contributing

- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

## Quick Setup Checklist

### Before You Start

- [ ] n8n instance running (self-hosted or cloud)
- [ ] Google account with Sheets access
- [ ] Bright Data account with TikTok dataset access
- [ ] Valid TikTok profile URLs for testing
- [ ] 15 minutes for setup

### Setup Steps

- [ ] **Import workflow**: copy the JSON and import it into n8n
- [ ] **Configure Bright Data**: set up API credentials and test
- [ ] **Create Google Sheet**: new sheet with the proper column structure
- [ ] **Set up Google Sheets credentials**: OAuth setup and test
- [ ] **Update workflow settings**: replace the sheet ID and API keys
- [ ] **Test with sample profiles**: submit 1-2 URLs and verify results
- [ ] **Activate workflow**: enable the form trigger for production use

### Ready to Use!

Your form URL: `https://your-n8n-instance.com/form/[webhook-id]`

Happy TikTok scraping! This workflow provides a solid foundation for automated TikTok influencer data collection. Customize it to fit your specific needs and use cases for influencer marketing, competitive analysis, and social media research.
by ist00dent
This n8n template allows you to automatically create shortened URLs using the TinyURL API by simply sending a webhook request. It's a quick and efficient way to integrate URL shortening into your automated workflows, ideal for sharing long links in social media, emails, or other applications.

## How it works

- **Receive Link Webhook**: This node acts as the entry point for the workflow. It listens for incoming POST requests and expects a JSON body containing the `url` to be shortened and your `api_key` for TinyURL.
- **Create TinyURL**: This node sends a POST request to the TinyURL API, passing the long URL and your API key. It can also accept optional parameters like `domain`, `alias`, and `description` to customize the shortened link.
- **Respond with Shortened URL**: This node sends the response from the TinyURL API (which includes the new shortened URL) back to the service that initiated the webhook.

## Who is it for?

This workflow is ideal for:

- **Content managers & marketers**: quickly shorten links for campaigns, social media posts, or tracking.
- **Developers**: automate link shortening within applications or scripts.
- **Automation enthusiasts**: integrate a URL shortener into various n8n workflows (e.g., after generating a report, before sending a notification).
- **Anyone needing on-demand short links**: a flexible solution for ad-hoc link shortening.

## Data structure

When you trigger the webhook, send a POST request with a JSON body structured as follows:

    {
      "api_key": "YOUR_TINYURL_API_KEY",
      "url": "https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2",
      "domain": "tinyurl.com",          // Optional: defaults to tinyurl.com
      "alias": "myCustomAlias",         // Optional: desired custom alias for the link
      "description": "My project link"  // Optional: description for the link
    }

The workflow returns the JSON response directly from the TinyURL API, which includes the `short_url` and other details about the newly created link.

## Setup instructions

1. **Obtain a TinyURL API key**: Before importing, make sure you have an API key from TinyURL. You can typically get this by signing up for an account on their website.
2. **Import the workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure the webhook path**: Double-click the Receive Link Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., `/shorten-link`).
4. **Activate the workflow**: Save and activate the workflow.

## Tips

- **Dynamic inputs**: The workflow dynamically uses the `url`, `api_key`, `alias`, and `description` from the incoming webhook data, which makes it highly flexible.
- **Error handling**: You can add an Error Trigger node to catch any issues (e.g., invalid API key, malformed URL) during the TinyURL creation process. Configure it to send notifications or log errors for easy troubleshooting.
- **Post-shortening actions**: After generating the shortened URL, you can insert additional nodes before the Respond with Shortened URL node to perform other actions. For example, you could:
  - Save to a database: store the original and shortened URLs in Airtable, Google Sheets, or a PostgreSQL database.
  - Send a message: automatically send the shortened URL via Slack, Discord, email, or SMS.
  - Update a record: update a CRM record or project management task with the new shortened link.
- **Custom domains**: If you have a custom domain configured with your TinyURL account, you can change the `domain` parameter in the Create TinyURL node to use it.
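As a usage sketch, here is how a client might call the webhook once the workflow is active. The host and the `/webhook/shorten-link` path are only examples; they depend on your n8n instance URL and the path you configured in step 3.

```javascript
// Hypothetical client call to the n8n webhook (host and path are placeholders).
const response = await fetch('https://your-n8n-instance.com/webhook/shorten-link', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    api_key: 'YOUR_TINYURL_API_KEY',
    url: 'https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2',
    alias: 'myCustomAlias', // optional
  }),
});

// The workflow responds with the TinyURL API payload, including the shortened
// URL (short_url) and other details about the newly created link.
console.log(await response.json());
```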
by Matthieu
# LinkedIn Profile Enrichment Automation

## Who is this for?

This template is perfect for sales teams, recruiters, marketing professionals, and business development specialists who need to gather comprehensive LinkedIn profile data at scale. It is ideal for lead generation teams building prospect databases, recruiters sourcing candidate information, sales professionals researching prospects, and marketing teams creating targeted outreach campaigns.

## What problem does this workflow solve?

Manually collecting detailed information from LinkedIn profiles is incredibly time-consuming and prone to inconsistency. Visiting each profile individually to extract names, job titles, experience, education, skills, and contact details can take hours for even small prospect lists. This automation eliminates the tedious manual data entry while ensuring consistent, comprehensive profile enrichment.

## What this workflow does

This workflow automatically enriches a list of LinkedIn profile URLs by extracting comprehensive professional data, including:

- **Personal details**: first name, last name, headline, location
- **Professional status**: hiring status, open-to-work indicators
- **Network metrics**: connections and followers count
- **Work experience**: up to 5 most recent positions with company details, dates, and roles
- **Education background**: up to 3 educational institutions with degrees and dates
- **Skills and languages**: complete skill sets and language proficiencies
- **Professional summary**: profile description and bio

The enriched data is automatically organized and updated in your Google Sheets database with structured formatting for easy analysis and outreach.

## Setup

1. Create a Ghost Genius API account and obtain your API key for cookieless LinkedIn profile scraping.
2. Configure HTTP Request credentials with Header Auth using your Ghost Genius API key.
3. Set up your Google Sheets database using the provided template with columns: URL, Firstname, Lastname, Tagline, Location, Connections, Followers, Hiring?, Open to work?, Summary, Languages, Skills, Experience 1-5, Education 1-3.
4. Configure Google Sheets OAuth2 credentials following n8n's authentication setup guide.
5. Add LinkedIn profile URLs to the first column of your Google Sheet to begin enrichment.
6. Test the workflow with a small batch before processing larger lists.

## How to customize this workflow

- **Adjust batch processing settings** to handle larger volumes by modifying the batch size and interval timing.
- **Add data validation rules** to filter out incomplete or invalid profiles before processing.
- **Integrate with CRM systems** like HubSpot or Salesforce to automatically sync enriched data.
- **Set up automated scheduling** to regularly re-enrich profiles and capture profile updates.
- **Add email notifications** to alert you when enrichment batches complete or encounter errors.
- **Customize data mapping** to include additional profile fields or reorganize the output structure.
- **Add duplicate detection** to prevent re-processing the same profiles multiple times.
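A simple way to implement the validation and duplicate-detection ideas above is a small n8n Code node placed between the Google Sheets read and the enrichment request. This is only a sketch: the `URL` field name mirrors the sheet template above and may differ in your setup.

```javascript
// n8n Code node sketch: keep only well-formed, not-yet-seen LinkedIn profile URLs.
// Assumes each incoming item carries a "URL" field, as in the sheet template above.
const seen = new Set();
const valid = [];

for (const item of $input.all()) {
  const url = (item.json.URL || '').trim();
  const isLinkedInProfile = /^https?:\/\/(www\.)?linkedin\.com\/in\/[^/?#]+/i.test(url);
  if (isLinkedInProfile && !seen.has(url.toLowerCase())) {
    seen.add(url.toLowerCase());
    valid.push({ json: { ...item.json, URL: url } });
  }
}

return valid; // only these items continue to the enrichment step
```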
by Don Jayamaha Jr
This advanced agent analyzes long-term price action in the Binance Spot Market using 1-day candles. It calculates key macro indicators such as RSI, MACD, BBANDS, EMA, SMA, and ADX to identify high-confidence trend setups and market momentum. It is used by the Quant AI system for directional bias and macro-level signal validation.

Watch Tutorial:

## Purpose

- Detect major trend reversals, consolidation zones, and macro bias
- Support long-term swing trading decisions
- Provide reliable 1-day signals for downstream agents

## Core Features

| Feature | Description |
| --- | --- |
| Trigger | Called by parent workflows via Execute Workflow |
| Input Format | `{ "message": "MATICUSDT", "sessionId": "telegram_id" }` |
| Webhook Call | Sends a request to the internal 1d indicators webhook |
| Technical Indicators | RSI, MACD, BBANDS, EMA, SMA, ADX (based on 40 daily candles) |
| GPT (gpt-4.1-mini) Agent | Interprets numerical data into human-readable trend signals |
| Output | Summary suitable for Telegram or further agent consumption |

## External Tools Called

https://treasurium.app.n8n.cloud/webhook/1d-indicators

Sends: `{ "symbol": "SOLUSDT" }`

## Indicator Calculations

| Indicator | Purpose |
| --- | --- |
| RSI (14) | Overbought / oversold signals |
| MACD (12,26,9) | Trend reversals / momentum |
| BBANDS (20, 2) | Volatility expansion |
| EMA (20) | Short-term trend confirmation |
| SMA (20) | Macro-level support/resistance |
| ADX (14) | Trend strength + directional DI |

## Setup

1. Import the JSON into n8n.
2. Add your OpenAI API credentials.
3. Ensure the /1d-indicators webhook is connected and working.
4. Use this agent as a sub-workflow in:
   - Binance SM Financial Analyst Tool
   - Binance Spot Market Quant AI Agent

## Output Example

1D Overview: MATICUSDT
- RSI: 71 (Overbought)
- MACD: Bearish cross forming
- BBANDS: Widening volatility
- EMA < SMA: Downtrend momentum
- ADX: 33 (High trend strength)

## Notes

- Not user-facing: outputs are structured JSON or Telegram-style summaries.
- Pairs well with shorter-timeframe tools (15m-4h) for confidence stacking.

## Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

Need help? Reach out on LinkedIn: Don Jayamaha
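For clarity, the webhook call the agent makes is a plain POST with a single symbol field, as documented above. A minimal sketch follows; it assumes the endpoint accepts JSON and requires no additional authentication, which should be verified against your own deployment.

```javascript
// Sketch of the internal 1d-indicators webhook call described above.
const res = await fetch('https://treasurium.app.n8n.cloud/webhook/1d-indicators', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ symbol: 'SOLUSDT' }),
});

// The response carries the computed RSI, MACD, BBANDS, EMA, SMA, and ADX values,
// which the GPT agent then turns into a Telegram-style summary.
console.log(await res.json());
```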
by Joseph LePage
# MCP AI Chatbot using Brave Search

Disclaimer: This workflow only works with local installations of n8n because it uses a community MCP node.

## Who is this for?

This workflow is ideal for developers, automation enthusiasts, and businesses looking to integrate AI-powered chat capabilities into their workflows. It's particularly useful for those leveraging Brave Search and MCP tools to enhance user interactions and streamline data retrieval.

## What problem is this workflow solving?

This workflow addresses the challenge of creating an intelligent chatbot that can process user queries, execute searches using Brave Search, and provide responses enriched by AI. It simplifies the integration of multiple tools into a cohesive system, saving time and effort for users who need a robust conversational AI solution.

## What this workflow does

1. Listens for incoming chat messages using the Chat Trigger node.
2. Processes user input with an AI Agent powered by GPT-4o.
3. Retrieves relevant tools using the MCP Get Brave Tools node.
4. Executes specific search queries via the MCP Execute Brave Search node.
5. Maintains short-term memory of conversations with the Simple Memory node.

## Setup

Prerequisites:

- Access to a self-hosted n8n instance.
- API credentials for OpenAI and MCP Client Tools.
- A Brave Search API key.

Steps:

1. Import the workflow JSON into your n8n instance.
2. Configure the API credentials for OpenAI and MCP Client Tools in their respective nodes.
3. Set up your Brave Search API key in the MCP nodes: https://brave.com/search/api/

Testing:

- Use the built-in chat interface to send test messages.
- Verify that the chatbot processes queries and returns results as expected.

## How to customize this workflow to your needs

- Modify the AI Agent's prompt settings to tailor responses to your specific use case.
- Adjust the memory buffer in the Simple Memory node to retain more or less conversational context.
- Replace or add additional tools in the MCP nodes to expand functionality.
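The MCP client handles the Brave Search calls inside the workflow, but a quick way to confirm your API key works before wiring it into the MCP nodes is to hit the Web Search endpoint directly. The endpoint and header below reflect Brave's public Search API; double-check them against the current Brave documentation.

```javascript
// Standalone check that a Brave Search API key is valid (not part of the workflow).
const query = encodeURIComponent('n8n workflow automation');
const res = await fetch(`https://api.search.brave.com/res/v1/web/search?q=${query}`, {
  headers: {
    Accept: 'application/json',
    'X-Subscription-Token': 'YOUR_BRAVE_SEARCH_API_KEY',
  },
});

console.log(res.ok ? 'Key works' : `Request failed: ${res.status}`);
```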
by Deborah
## How it works

This workflow shows how to set credentials dynamically using expressions. It accepts an API key via a form, then uses it in the NASA node to authenticate a request.

## Setup steps

First, set up your NASA credential:

1. Create a new NASA credential.
2. Hover over API Key.
3. Toggle Expression on.
4. In the API Key field, enter `{{ $json["Enter your NASA API key"] }}`.

Then, test the workflow:

1. Get an API key from NASA.
2. Select Test workflow.
3. Enter your key using the form. The workflow runs and sends you to the NASA picture of the day.

For more information on expressions, refer to n8n documentation | Expressions.
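The expression works because the form field's label becomes the JSON key on the item that reaches the NASA node. A minimal illustration of the data the expression reads (the field label here matches the one used above; if you rename the form field, rename the expression key too):

```javascript
// Item produced by the form trigger (shape assumed from the field label above).
const item = { json: { 'Enter your NASA API key': 'DEMO_KEY' } };

// What the credential expression {{ $json["Enter your NASA API key"] }} resolves to:
const apiKey = item.json['Enter your NASA API key'];
console.log(apiKey); // "DEMO_KEY"
```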
by n8n Team
This template shows how you can create reports on data in an app and share a summary in another app. Specifically, this example checks a Notion database for new submissions, filters for submissions with a specific tag, and then sends a Slack message with the number created this week. Setup instructions are located inside the workflow template.
by Hunyao
## What it does

Pulls up to 700 Amazon reviews per product (recent and top-rated) and writes them straight into a Google Sheet tab you choose.

## Perfect for

- Brand and product managers tracking sentiment
- Marketplace sellers analysing competitor feedback
- Agencies building product-review dashboards

## Apps used

RapidAPI Real-Time Amazon Data, Google Sheets, n8n Form Trigger

## How it works

1. Form Trigger collects brand, product and sheet info.
2. Code node extracts the ASIN and builds 70 API requests (10 pages × star-rating filters).
3. Split-in-batches loops through the request list, throttled by two Wait nodes.
4. HTTP Request fetches reviews from RapidAPI.
5. IF node drops empty or error responses.
6. Split Out breaks arrays into single reviews.
7. Google Sheets appends every review to the target tab.
8. The loop continues until all pages finish.

## Setup

1. Fill in Brand name, Product / Model Name, Amazon Product URL, and the tab URL to insert reviews into in the form.
2. Grab your X-RapidAPI-Key from RapidAPI and add it as an httpHeaderAuth credential.
3. Connect Google Sheets OAuth2 and set the spreadsheet to "Anyone with the link can edit".
4. Open Workflow Settings and set the timezone if you plan to schedule runs.
5. Hit Execute workflow or share the form link.

## Credentials

- Real-Time Amazon Data (RapidAPI HTTP Header Auth)
- Google Sheets OAuth2

## Limits and notes

- The free RapidAPI plan allows roughly 100 calls; plan your quota accordingly.
- Assumes Amazon returns 10 pages per star rating; fewer pages are skipped silently.
- Large sheets may hit Google API write quotas.

If you have any questions about running the workflow, feel free to reach out on my YouTube channel: https://www.youtube.com/@lifeofhunyao
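A condensed sketch of what the Code node in step 2 does: pull the ASIN out of the product URL and expand it into one request descriptor per page and star-rating filter. The regex and the list of rating filters are assumptions for illustration; adjust them to match the RapidAPI endpoint's actual parameters.

```javascript
// Sketch of the request-building Code node (parameter names are illustrative).
const productUrl = 'https://www.amazon.com/dp/B0EXAMPLE1';

// ASINs are 10 alphanumeric characters, usually found after /dp/ or /gp/product/.
const asin = (productUrl.match(/(?:dp|gp\/product)\/([A-Z0-9]{10})/i) || [])[1];
if (!asin) throw new Error('Could not extract ASIN from URL');

// Seven illustrative star-rating filters x 10 pages = 70 requests.
const starRatings = ['ALL', '5_STARS', '4_STARS', '3_STARS', '2_STARS', '1_STARS', 'CRITICAL'];
const requests = [];
for (const rating of starRatings) {
  for (let page = 1; page <= 10; page++) {
    requests.push({ json: { asin, page, star_rating: rating } });
  }
}

return requests; // 70 items, looped over by Split In Batches
```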
by Jay Emp0
# MCP Tool: Replicate (Flux) Image Generator for WordPress/Twitter

Generates images via Replicate Flux models and uploads them to WordPress (and optionally Twitter/X). Built to act as an MCP module that other agents/workflows call for on-demand image creation.

Models configured in this workflow: black-forest-labs/flux-schnell, black-forest-labs/flux-dev, black-forest-labs/flux-1.1-pro

Switch rationale: lower cost, broader model choice, and full control of parameters. Leonardo API credits cannot be used in the web UI, so API and UI spend are separate.

Links:
- Prior Leonardo-based workflow: https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/
- Blog automation consuming these images: https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/

## Inputs

| Field | Type | Description |
| --- | --- | --- |
| prompt | string | Text description for the image |
| slug | string | Filename slug for WP media |
| model | string | One of the configured Flux models |

Example:

    {
      "prompt": "Joker watching a Batman movie on his laptop",
      "slug": "joker-watching-batman",
      "model": "black-forest-labs/flux-dev"
    }

## Output

    {
      "public_image_url": "https://your-wp.com/wp-content/uploads/2025/08/img-joker-watching-batman.webp",
      "wordpress": {...},
      "twitter": {...}
    }

## Flow

1. Trigger with prompt, slug, model
2. Build the model payload (quality/steps/ratio/output format)
3. Call Replicate: POST /v1/models/{model}/predictions (Prefer: wait)
4. Download the generated image URL
5. Upload to WordPress (returns the public URL)
6. Optional: upload to Twitter/X
7. Return the URL + metadata

## MCP Use at Scale (emp0.com)

Operational pattern: I currently use this setup for my blog, where I generate 300 posts/month, each with 4 images (banner + 2-3 inline images), for roughly 1,000 images/month produced by this MCP.

Hybrid cost-optimized setup:

- **High-priority images** (banners, main visuals): generated with **Flux Dev** on Leonardo for slightly better prompt adherence.
- **Low-priority images** (inline blog visuals): generated with **Flux Schnell** on Replicate for maximum cost efficiency.

## Pricing Comparison (per image)

Leonardo per-image cost uses basic API math: $9 / 3,500 credits = $0.0025714 per credit.

- **Flux Schnell (Leonardo)** = 7 credits
- **Flux Dev (Leonardo)** = 7 credits
- **Flux 1.1 Pro equivalent in Leonardo** = **Leonardo Phoenix** (based on my experience) = 10 credits

| Flux Model | Replicate | Leonardo API |
| --- | --- | --- |
| flux-schnell | $0.0030 (=$3/1,000) | $0.0180 (7 × $0.0025714) |
| flux-dev | $0.0250 | $0.0180 (7 × $0.0025714) |
| flux-1.1-pro / Phoenix | $0.0400 | $0.0257 (10 × $0.0025714) |

- Replicate pricing: https://replicate.com/pricing
- Leonardo pricing: https://leonardo.ai/pricing/
- Leonardo API usage: https://docs.leonardo.ai/docs/commonly-used-api-values

## Monthly Cost Example (1,000 images/month)

Mix: 300 × flux-dev on Leonardo, 700 × flux-schnell on Replicate.

| Platform/Model | Images | Price per Image | Total |
| --- | --- | --- | --- |
| Leonardo flux-dev | 300 | $0.0180 | $5.40 |
| Replicate flux-schnell | 700 | $0.0030 | $2.10 |
| **Total monthly spend** | 1,000 | - | **$7.50** |

If using Leonardo for both:

- 300 × $0.0180 = $5.40
- 700 × $0.0180 = $12.60
- Total = $18.00

Savings: $10.50/month (about 58% lower) with the hybrid setup.

## Notes

- More Replicate models can be added in the Code1 node.
- Parameters are tuned for aspect ratio, inference steps, quality, and guidance.
- Leonardo's credit model is API-only; credits are not spendable in Leonardo's web UI.
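The Replicate call in step 3 of the flow is the only external dependency worth sketching. Below is a minimal illustration of that request, assuming token-based auth and the synchronous Prefer: wait mode mentioned above; the input parameters beyond `prompt` are placeholders, so check Replicate's current API docs for exact names and the response shape.

```javascript
// Sketch of the Replicate prediction call used in the Flow section above.
const model = 'black-forest-labs/flux-schnell';

const res = await fetch(`https://api.replicate.com/v1/models/${model}/predictions`, {
  method: 'POST',
  headers: {
    Authorization: 'Bearer YOUR_REPLICATE_API_TOKEN', // auth scheme: verify in Replicate docs
    'Content-Type': 'application/json',
    Prefer: 'wait', // block until the prediction finishes, as this workflow does
  },
  body: JSON.stringify({
    input: {
      prompt: 'Joker watching a Batman movie on his laptop',
      aspect_ratio: '16:9',  // illustrative parameter
      output_format: 'webp', // illustrative parameter
    },
  }),
});

const prediction = await res.json();
// prediction.output typically holds the generated image URL(s) to download
// before uploading to WordPress.
console.log(prediction.output);
```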
by Custom Workflows AI
## Introduction

This workflow offers a streamlined solution for uploading multiple files to a GitHub repository simultaneously using GitHub's REST API. It addresses a significant limitation of n8n's native GitHub node, which only supports single-file uploads. By leveraging GitHub's Git Data API, this workflow creates a new Git tree containing multiple files, commits this tree, and updates the target branch, all in a single automated process.

The workflow is particularly valuable for automation scenarios that require batch file operations, such as deploying website updates, publishing documentation, or maintaining configuration files across repositories. It eliminates the need for multiple separate API calls when working with multiple files, making your automation more efficient and less prone to partial-update issues. By abstracting the complexities of GitHub's Git Data API into a reusable workflow, it provides a practical solution for developers, content managers, and DevOps professionals who need to programmatically manage repository content at scale.

## Who is this for?

This workflow is designed for:

- Developers and DevOps engineers who need to automate file updates in GitHub repositories
- Content managers who regularly publish multiple files to GitHub-hosted websites or documentation
- Automation specialists looking to integrate GitHub operations into larger workflows
- Teams using n8n for CI/CD processes who need to push code or configuration changes

Users should have basic familiarity with GitHub concepts (repositories, branches, commits) and should be comfortable obtaining and using GitHub Personal Access Tokens. While the workflow handles the API complexity, users should understand the fundamentals of version control to effectively utilize and customize it.

## What problem is this workflow solving?

This workflow addresses several key challenges:

- **Limited batch operations**: n8n's native GitHub node only supports uploading one file at a time, making multi-file operations cumbersome and inefficient.
- **API complexity**: GitHub's Git Data API requires multiple sequential calls with interdependent data to create commits with multiple files, which is complex to implement manually.
- **Automation bottlenecks**: Without this workflow, automating multi-file updates would require either multiple separate API calls (risking partial updates) or custom scripting outside of n8n.
- **Consistency issues**: When files need to be updated together (e.g., code and corresponding documentation), this workflow ensures they are committed in a single atomic operation.

By solving these issues, the workflow enables reliable, atomic updates of multiple files, maintaining repository consistency and simplifying automation processes.

## What this workflow does

### Overview

This workflow uses GitHub's REST API to push multiple files to a repository in a single operation. It follows Git's internal model by:

1. Retrieving the current state of the repository
2. Creating a new tree with the files to be added or updated
3. Creating a new commit with this tree
4. Updating the branch reference to point to the new commit
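For readers who want to see the same four steps outside of n8n, the sketch below walks through the Git Data API sequence with plain REST calls. It assumes a fine-grained token with Contents read/write permission, as described in the Setup section; the repository name and file paths (file1.txt, file2.txt) mirror the example files this workflow uses.

```javascript
// Minimal sketch of the Git Data API sequence this workflow automates.
const token = 'YOUR_GITHUB_PAT';
const repo = 'your-user/your-repo';
const branch = 'main';
const gh = (path, options = {}) =>
  fetch(`https://api.github.com/repos/${repo}${path}`, {
    ...options,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github+json',
      'Content-Type': 'application/json',
    },
  }).then((r) => r.json());

// 1. Latest commit SHA on the branch, and its base tree
const ref = await gh(`/git/ref/heads/${branch}`);
const parentSha = ref.object.sha;
const parentCommit = await gh(`/git/commits/${parentSha}`);

// 2. New tree containing both files
const tree = await gh('/git/trees', {
  method: 'POST',
  body: JSON.stringify({
    base_tree: parentCommit.tree.sha,
    tree: [
      { path: 'file1.txt', mode: '100644', type: 'blob', content: 'Content of file 1' },
      { path: 'file2.txt', mode: '100644', type: 'blob', content: 'Content of file 2' },
    ],
  }),
});

// 3. New commit pointing at the tree, with the previous commit as parent
const commit = await gh('/git/commits', {
  method: 'POST',
  body: JSON.stringify({ message: 'Add two files', tree: tree.sha, parents: [parentSha] }),
});

// 4. Move the branch reference to the new commit
await gh(`/git/refs/heads/${branch}`, {
  method: 'PATCH',
  body: JSON.stringify({ sha: commit.sha }),
});
```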
### Process

1. **Initialization**: The workflow starts with a manual trigger and sets up GitHub credentials and repository information.
2. **File content definition**: Two "Set" nodes define the content for the files to be uploaded.
3. **Repository state retrieval**: The workflow fetches the latest commit SHA for the specified branch, then retrieves the base tree SHA from this commit.
4. **Tree creation**: A new Git tree is created that includes both files (file1.txt and file2.txt), specifying their paths and content.
5. **Commit creation**: A new commit is created with the specified commit message, referencing the new tree and the parent commit.
6. **Branch update**: Finally, the branch reference is updated to point to the new commit, making the changes visible in the repository.

## Setup

To use this workflow:

1. **Import the workflow**: Download the workflow JSON and import it into your n8n instance.
2. **Create a GitHub Personal Access Token**:
   - Go to GitHub Settings → Developer Settings → Personal Access Tokens → Fine-grained tokens
   - Create a new token with "Contents" permission (Read and write) for your target repository
3. **Configure the workflow**: Update the "Set Github Info" node with:
   - Your GitHub Personal Access Token
   - Your GitHub username
   - Your repository name
   - The target branch (default is "main")
   - A commit message
4. **Define file content**: Modify the "File 1" and "File 2" nodes with the content you want to upload.
5. **Adjust file paths if needed**: In the "Create new tree" node, update the file paths if you want to change where the files are stored in the repository.
6. **Save and run the workflow**: Click "Test workflow" to execute the process.

## How to customize this workflow to your needs

This workflow can be adapted in several ways:

- **Add more files**:
  - Create additional "Set" nodes for more file content
  - In the "Create new tree" node, add more tree entries following the same pattern (path, mode, type, content)
- **Change file locations**: Modify the "path" parameters in the "Create new tree" node to place files in different directories.
- **Dynamic file content**:
  - Replace the static content in the "File" nodes with data from other sources
  - Use previous nodes or HTTP requests to generate file content dynamically
- **Conditional file updates**:
  - Add IF nodes to determine which files should be updated based on certain conditions
  - Create separate branches in your workflow for different update scenarios
- **Scheduled updates**:
  - Replace the manual trigger with a Schedule node to run the workflow at specific intervals
  - Combine with other triggers like Webhook or database events to push files when certain events occur
- **Error handling**:
  - Add Error Trigger nodes to handle potential API failures
  - Implement notification nodes to alert you of successful pushes or failures
by Paul
# Gmail AI Email Manager - Setup Guide

## Workflow Overview

This workflow creates an intelligent Gmail email manager that can:

- Monitor incoming emails via webhook
- Analyze email content using AI
- Categorize emails automatically
- Generate smart responses
- Take actions based on email content
- Send notifications for important emails

## Pre-Setup Checklist

Before building the workflow, gather the necessary information and validate the approach.

Phase 1: Discovery & Planning
- [ ] Search for Gmail nodes
- [ ] Find AI analysis nodes
- [ ] Identify webhook trigger options
- [ ] Check notification nodes

Phase 2: Configuration Requirements
- [ ] Gmail API credentials
- [ ] AI service (OpenAI/Claude) API key
- [ ] Webhook URL setup
- [ ] Email classification rules

## Setup Instructions

### Step 1: Gmail API Setup

1. Go to Google Cloud Console
2. Create a new project or select an existing one
3. Enable the Gmail API
4. Create OAuth 2.0 credentials
5. Add the authorized redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback

### Step 2: AI Service Setup

Choose one of the following:

- **OpenAI**: get an API key from platform.openai.com
- **Claude**: get an API key from console.anthropic.com
- **Local AI**: set up Ollama or similar

### Step 3: n8n Credentials

- Gmail OAuth2: add client ID, secret, and scopes
- AI Service: add the API key
- Webhook: configure the webhook URL

## Quick Setup Checklist

1. Google Cloud Console
   - [ ] Enable Gmail API
   - [ ] Create OAuth2 credentials
   - [ ] Add redirect URI: https://your-n8n.com/rest/oauth2-credential/callback
   - [ ] Set up Gmail push notifications with Pub/Sub
2. API Keys
   - [ ] Get an OpenAI API key from platform.openai.com
   - [ ] Create a Google Sheet for logging (optional)
3. n8n Credentials
   - [ ] Gmail OAuth2: Client ID, Secret, Scopes: gmail.readonly, gmail.modify, gmail.compose
   - [ ] OpenAI API: your API key
4. Gmail Labels (create these)
   - [ ] URGENT (red)
   - [ ] IMPORTANT (orange)
   - [ ] PROMOTIONAL (purple)
   - [ ] PERSONAL (green)
   - [ ] WORK (blue)
   - [ ] SPAM (gray)
5. Update Workflow Values
   - [ ] High Priority Alert: change the notification email
   - [ ] Spreadsheet Log: update the sheet ID (if using)
   - [ ] Webhook: copy the URL after saving the workflow
6. Test
   - [ ] Save & activate the workflow
   - [ ] Send a test email to Gmail
   - [ ] Check the execution log
   - [ ] Verify auto-categorization works

That's it! Your AI email manager is ready!
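The trickiest item in the checklist is usually the Gmail push-notification setup. Conceptually it is a single users.watch call that points Gmail at your Pub/Sub topic; the sketch below shows the idea, assuming an already-authorized OAuth2 access token and an existing topic (both names are placeholders), and the watch must be renewed roughly every seven days.

```javascript
// Illustrative users.watch call for Gmail push notifications via Pub/Sub.
// The OAuth token and topic name are placeholders for your own project values.
const accessToken = 'ya29.YOUR_OAUTH2_ACCESS_TOKEN';

const res = await fetch('https://gmail.googleapis.com/gmail/v1/users/me/watch', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    topicName: 'projects/your-gcp-project/topics/gmail-incoming', // placeholder topic
    labelIds: ['INBOX'], // only watch the inbox
  }),
});

// On success Gmail returns a historyId and an expiration timestamp;
// new-message notifications are then pushed to the topic and on to your webhook.
console.log(await res.json());
```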
by Airtop
# Automating Company Data Enrichment and HubSpot Integration

## Use Case

This automation enriches company data based on email domain and LinkedIn profile, calculates an ICP (Ideal Customer Profile) score, and updates the corresponding company record in HubSpot. It's ideal for onboarding, qualification, and CRM enrichment.

## What This Automation Does

### Input Parameters

- **Contact email**: used to derive the company domain.
- **Company domain**: primary web domain of the company.
- **Company LinkedIn** (optional): LinkedIn URL for enrichment accuracy.
- **Airtop Profile (connected to LinkedIn)**: an authenticated Airtop Profile.

### What It Outputs

- Full company profile (name, tagline, website, headquarters)
- Employee count
- ICP score based on AI/tech profile, scale, agency type, and location
- Updated/created record in HubSpot with all enriched attributes

## How It Works

1. **Input validation**: Filters out non-corporate domains like Gmail, Yahoo, or .edu.
2. **Enrichment trigger**: Launches Airtop workflows to extract and analyze data from LinkedIn and calculate the ICP score.
3. **Data mapping**: Compiles structured fields including:
   - Overview and location (city, state, country)
   - Company website and domain
   - LinkedIn URL and employee count
   - ICP score
4. **HubSpot sync**: Sends all enriched fields to the designated HubSpot object for upsert.

## Setup Requirements

- Airtop API key
- Airtop Profile with active LinkedIn authentication
- HubSpot integration enabled for object updates

## Next Steps

- **Use in web forms**: trigger this on signup to auto-populate CRM records.
- **Enrich manually entered contacts**: use with list-based workflows for batch enrichment.
- **Sync to other CRMs**: replace the HubSpot step with Salesforce, Pipedrive, etc. for flexible integration.

Read more about company data enrichment.
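The input-validation step in "How It Works" amounts to a small domain filter. A sketch of that check, assuming the domain arrives as a plain string field; the list of personal-email providers here is illustrative and can be extended.

```javascript
// Sketch of the non-corporate domain filter used in step 1 of "How It Works".
const PERSONAL_PROVIDERS = ['gmail.com', 'yahoo.com', 'hotmail.com', 'outlook.com'];

function isCorporateDomain(domain) {
  const d = (domain || '').trim().toLowerCase();
  if (!d.includes('.')) return false;     // not a valid domain
  if (d.endsWith('.edu')) return false;   // educational domains are excluded
  if (PERSONAL_PROVIDERS.includes(d)) return false;
  return true;
}

console.log(isCorporateDomain('acme.io'));   // true  -> continue to enrichment
console.log(isCorporateDomain('gmail.com')); // false -> skipped
```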