by explorium
# HubSpot Contact Enrichment with Explorium Template

Download the following JSON file and import it into a new n8n workflow: hubspot_flow.json

## Overview

This n8n workflow monitors your HubSpot instance for newly created contacts and automatically enriches them with additional contact information. When a contact is created, the workflow:

1. Detects the new contact via the HubSpot webhook trigger
2. Retrieves recent contact details from HubSpot
3. Matches the contact against Explorium's database using name, company, and email
4. Enriches the contact with professional emails and phone numbers
5. Updates the HubSpot contact record with the discovered information

This automation ensures your sales and marketing teams have complete contact information, improving outreach success rates and data quality.

## Key Features

- **Real-time Webhook Trigger**: Instantly processes new contacts as they're created
- **Intelligent Matching**: Uses multiple data points (name, company, email) for accurate matching
- **Comprehensive Enrichment**: Adds both professional and work emails, plus phone numbers
- **Batch Processing**: Efficiently handles multiple contacts to optimize API usage
- **Smart Data Mapping**: Intelligently maps multiple emails and phone numbers
- **Profile Enrichment**: Optional additional enrichment for deeper contact insights
- **Error Resilience**: Continues processing other contacts if some fail to match

## Prerequisites

Before setting up this workflow, ensure you have:

- An n8n instance (self-hosted or cloud)
- A HubSpot account with:
  - Developer API access (for webhooks)
  - A Private App or OAuth2 app created
  - Contact object permissions (read/write)
- Explorium API credentials (Bearer token) - get your Explorium API key
- An understanding of HubSpot contact properties

## HubSpot Requirements

### Required Contact Properties

The workflow uses these HubSpot contact properties:

- firstname - Contact's first name
- lastname - Contact's last name
- company - Associated company name
- email - Primary email (read and updated)
- work_email - Work email (updated by workflow)
- phone - Phone number (updated by workflow)

### API Access Setup

1. Create a Private App in HubSpot:
   - Navigate to Settings → Integrations → Private Apps
   - Create a new app with Contact read/write scopes
   - Copy the Access Token
2. Set up webhooks (for the Developer API):
   - Create an app in the HubSpot Developers portal
   - Configure a webhook for contact.creation events
   - Note the App ID and Developer API Key
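If you prefer to create the webhook subscription programmatically rather than through the developers portal UI, here is a minimal sketch. It assumes HubSpot's v3 webhooks API (authenticated with the developer API key); the endpoint and payload shape are an assumption for illustration, not part of the template itself:

```javascript
// Sketch: subscribe an app to contact.creation events via HubSpot's v3 webhooks API.
// APP_ID and DEVELOPER_API_KEY are placeholders from your HubSpot developer account.
const APP_ID = "YOUR_APP_ID";
const DEVELOPER_API_KEY = "YOUR_DEVELOPER_API_KEY";

async function createContactCreationSubscription() {
  const res = await fetch(
    `https://api.hubapi.com/webhooks/v3/${APP_ID}/subscriptions?hapikey=${DEVELOPER_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ eventType: "contact.creation", active: true }),
    }
  );
  if (!res.ok) throw new Error(`Subscription failed: ${res.status}`);
  return res.json();
}

createContactCreationSubscription().then(console.log);
```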
### Custom Properties (Optional)

Consider creating custom properties for:

- Multiple email addresses
- Mobile vs. office phone numbers
- Data enrichment timestamps
- Match confidence scores

## Installation & Setup

### Step 1: Import the Workflow

1. Copy the workflow JSON from the template
2. In n8n: Navigate to Workflows → Add Workflow → Import from File
3. Paste the JSON and click Import

### Step 2: Configure HubSpot Developer API (Webhook)

1. Click on the HubSpot Trigger node
2. Under Credentials, click Create New
3. Enter your HubSpot Developer credentials:
   - App ID: From your HubSpot app
   - Developer API Key: From your developer account
   - Client Secret: From your app settings
4. Save as "HubSpot Developer account"

### Step 3: Configure HubSpot App Token

1. Click on the HubSpot Recently Created node
2. Under Credentials, click Create New (App Token)
3. Enter your Private App access token
4. Save as "HubSpot App Token account"
5. Apply the same credentials to the Update HubSpot node

### Step 4: Configure Explorium API Credentials

1. Click on the Explorium Match Prospects node
2. Under Credentials, click Create New (HTTP Header Auth)
3. Configure the authentication:
   - Name: Authorization
   - Value: Bearer YOUR_EXPLORIUM_API_TOKEN
4. Save as "Header Auth Connection"
5. Apply to all Explorium nodes:
   - Explorium Enrich Contacts Information
   - Explorium Enrich Profiles

### Step 5: Configure Webhook Subscription

In the HubSpot Developers portal:

1. Go to your app's webhook settings
2. Add a subscription for contact.creation events
3. Set the target URL from the HubSpot Trigger node
4. Activate the subscription

### Step 6: Activate the Workflow

1. Save the workflow
2. Toggle the Active switch to ON
3. The webhook is now listening for new contacts

## Node Descriptions

- **HubSpot Trigger**: Webhook that fires when new contacts are created
- **HubSpot Recently Created**: Fetches details of recently created contacts
- **Loop Over Items**: Processes contacts in batches of 6
- **Explorium Match Prospects**: Finds the matching person in Explorium's database
- **Filter**: Validates successful matches
- **Extract Prospect IDs**: Collects matched prospect identifiers
- **Enrich Contacts Information**: Fetches emails and phone numbers
- **Enrich Profiles**: Gets additional profile data (optional)
- **Merge**: Combines all enrichment results
- **Split Out**: Separates individual enriched records
- **Update HubSpot**: Updates the contact with new information

## Data Mapping Logic

The workflow maps Explorium data to HubSpot properties:

| Explorium Data | HubSpot Property | Notes |
| --- | --- | --- |
| professions_email | email | Primary professional email |
| emails[].address | work_email | All email addresses joined |
| phone_numbers[].phone_number | phone | All phones joined with commas |
| mobile_phone | phone (fallback) | Used if no other phones found |

### Data Processing

The workflow handles complex data scenarios:

- **Multiple emails**: Joins all discovered emails with commas
- **Phone numbers**: Combines all phone numbers into a single field
- **Missing data**: Uses "null" as a placeholder for empty fields
- **Name parsing**: Cleans sample data and special characters
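A minimal sketch of this joining logic as it might appear in an n8n Code node set to "Run Once for Each Item". The field names follow the mapping table above; the template's actual node implementation may differ:

```javascript
// Sketch: flatten Explorium enrichment data into HubSpot-ready fields.
// Assumes the enrichment payload shape shown in the mapping table above.
const data = $json.data ?? {};

const workEmails = (data.emails ?? [])
  .map((e) => e.address)
  .filter(Boolean)
  .join(", ");

let phones = (data.phone_numbers ?? [])
  .map((p) => p.phone_number)
  .filter(Boolean)
  .join(", ");

// Fall back to the mobile number when no other phones were found.
if (!phones && data.mobile_phone) phones = data.mobile_phone;

return {
  json: {
    email: data.professions_email ?? "null", // "null" placeholder for empty fields
    work_email: workEmails || "null",
    phone: phones || "null",
  },
};
```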
## Usage & Operation

### Automatic Processing

Once activated:

1. Every new contact triggers the webhook immediately
2. The contact is enriched within seconds
3. The HubSpot record is updated automatically
4. The process repeats for each new contact

### Manual Testing

To test the workflow:

1. Use the pinned test data in the HubSpot Trigger node, or create a test contact in HubSpot
2. Monitor the execution in n8n
3. Verify the contact was updated in HubSpot

### Monitoring Performance

Track workflow health:

1. Go to Executions in n8n
2. Filter by this workflow
3. Monitor success rates
4. Review any failed executions
5. Check webhook delivery in HubSpot

## Troubleshooting

### Common Issues

**Webhook not triggering**

- Verify the webhook subscription is active in HubSpot
- Check the webhook URL is correct and accessible
- Ensure the workflow is activated in n8n
- Test webhook delivery in the HubSpot developers portal

**Contacts not matching**

- Verify the contact has firstname, lastname, and company
- Check for typos or abbreviations in company names
- Some individuals may not be in Explorium's database
- Email matching improves accuracy significantly

**Updates failing in HubSpot**

- Check the API token has contact write permissions
- Verify the property names exist in HubSpot
- Ensure rate limits haven't been exceeded
- Check for validation rules on properties

**Missing enrichment data**

- Not all prospects have all data types
- Phone numbers may be less available than emails
- Profile enrichment is optional and may not always return data

### Error Handling

Built-in error resilience:

- Failed matches don't block other contacts
- Each batch processes independently
- Partial enrichment is possible
- All errors are logged for review

### Debugging Tips

- **Check webhook logs**: HubSpot shows delivery attempts
- **Review executions**: n8n logs show detailed error messages
- **Test with pinned data**: Use the sample data for isolated testing
- **Verify API responses**: Check that the Explorium API returns the expected data

## Best Practices

### Data Quality

- **Complete contact records**: Ensure name and company are populated
- **Standardize company names**: Use official names, not abbreviations
- **Include existing emails**: Improves match accuracy
- **Regular data hygiene**: Clean up test and invalid contacts

### Performance Optimization

- **Batch size**: 6 is optimal for rate limits
- **Webhook reliability**: Monitor delivery success
- **API quotas**: Track usage in both platforms
- **Execution history**: Regularly clean old executions

### Compliance & Privacy

- **GDPR compliance**: Ensure a lawful basis for enrichment
- **Data minimization**: Only enrich necessary fields
- **Access controls**: Limit who can modify enriched data
- **Audit trail**: Document enrichment for compliance

## Customization Options

### Additional Enrichment

Extend with more Explorium data:

- Job titles and departments
- Social media profiles
- Professional experience
- Skills and interests
- Company information

### Enhanced Processing

Add workflow logic for:

- Lead scoring based on enrichment
- Routing based on data quality
- Notifications for high-value matches
- Custom field mapping

### Integration Extensions

Connect to other systems:

- Sync enriched data to a CRM
- Trigger marketing automation
- Update a data warehouse
- Send notifications to Slack

## API Considerations

### HubSpot Limits

- **API calls**: Monitor daily limits
- **Webhook payload**: Max 200 contacts per trigger
- **Rate limits**: 100 requests per 10 seconds
- **Property limits**: Max 1,000 custom properties

### Explorium Limits

- **Match API**: Batched for efficiency
- **Enrichment calls**: Two parallel enrichments
- **Rate limits**: Based on your plan
- **Data freshness**: Real-time matching

## Architecture Considerations

This workflow integrates with:

- HubSpot workflows and automation
- Marketing campaigns and sequences
- Sales engagement tools
- Reporting and analytics
- Other enrichment services

## Security Best Practices

- **Webhook validation**: Verify requests are from HubSpot
- **Token security**: Rotate API tokens regularly
- **Access control**: Limit workflow modifications
- **Data encryption**: All API calls use HTTPS
- **Audit logging**: Track all enrichments

## Advanced Configuration

### Custom Field Mapping

Modify the Update HubSpot node to map to custom properties:

```javascript
// Example custom mapping
{
  "custom_mobile": "{{ $json.data.mobile_phone }}",
  "custom_linkedin": "{{ $json.data.linkedin_url }}",
  "enrichment_date": "{{ $now.toISO() }}"
}
```
### Conditional Processing

Add logic to process only certain contacts:

- Filter by contact source
- Check for specific properties
- Validate email domains
- Exclude test contacts

## Support Resources

For assistance:

- **n8n issues**: Check the n8n documentation and forums
- **HubSpot API**: Reference the HubSpot developers documentation
- **Explorium API**: Contact Explorium support
- **Webhook issues**: Use HubSpot's webhook testing tools
by Incrementors
# Twitter Profile Scraper via Bright Data API with Google Sheets Output

A comprehensive n8n automation that scrapes Twitter profile data using Bright Data's Twitter dataset and stores tweet analytics, user metrics, and engagement data directly in Google Sheets.

## Overview

This workflow provides an automated Twitter data collection solution that extracts profile information and tweet data from specified Twitter accounts within custom date ranges. Perfect for social media analytics, competitor research, brand monitoring, and content strategy analysis.

## Key Features

- **Form-Based Input**: Easy-to-use form for Twitter URL and date range selection
- **Twitter Integration**: Uses Bright Data's Twitter dataset for accurate data extraction
- **Comprehensive Data**: Captures tweets, engagement metrics, and profile information
- **Google Sheets Storage**: Automatically stores all data in an organized spreadsheet format
- **Progress Monitoring**: Real-time status tracking with automatic retry mechanisms
- **Fast & Reliable**: Professional scraping with built-in error handling
- **Date Range Control**: Flexible time period selection for targeted data collection
- **Customizable Fields**: Advanced data field selection and mapping

## What This Workflow Does

### Input

- **Twitter Profile URL**: Target Twitter account for data scraping
- **Date Range**: Start and end dates for the tweet collection period
- **Custom Fields**: Configurable data points to extract

### Processing

1. **Form Trigger**: Collects the Twitter URL and date range from user input
2. **API Request**: Sends a scraping request to Bright Data with the specified parameters
3. **Progress Monitoring**: Continuously checks the scraping job status until completion
4. **Data Retrieval**: Downloads the complete dataset when scraping is finished
5. **Data Processing**: Formats and structures the extracted information
6. **Sheet Integration**: Automatically populates Google Sheets with the organized data
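For orientation, here is a minimal sketch of the trigger-and-poll pattern the workflow's HTTP nodes implement. The endpoint paths follow Bright Data's dataset API as an assumption; the template's nodes encapsulate equivalent calls:

```javascript
// Sketch: trigger a Bright Data dataset scrape, poll until ready, then download.
// API_KEY is a placeholder; the dataset ID is the one referenced in the template.
const API_KEY = "BRIGHT_DATA_API_KEY";
const DATASET_ID = "gd_lwxkxvnf1cynvib9co";
const headers = { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" };

async function scrapeProfile(url, startDate, endDate) {
  // 1. Trigger the scraping job for one profile and date range.
  const trigger = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}`,
    { method: "POST", headers, body: JSON.stringify([{ url, start_date: startDate, end_date: endDate }]) }
  ).then((r) => r.json());

  // 2. Poll the job status roughly once a minute, as the workflow does.
  let status = "running";
  while (status === "running") {
    await new Promise((r) => setTimeout(r, 60_000));
    ({ status } = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${trigger.snapshot_id}`,
      { headers }
    ).then((r) => r.json()));
  }

  // 3. Download the finished snapshot as JSON records.
  return fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${trigger.snapshot_id}?format=json`,
    { headers }
  ).then((r) => r.json());
}

// Example: scrapeProfile("https://x.com/elonmusk", "2025-01-01", "2025-01-31").then(console.log);
```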
### Output Data Points

| Field | Description | Example |
|-------|-------------|---------|
| user_posted | Username who posted the tweet | @elonmusk |
| name | Display name of the user | Elon Musk |
| description | Tweet content/text | "Exciting updates coming soon..." |
| date_posted | When the tweet was posted | 2025-01-15T10:30:00Z |
| likes | Number of likes on the tweet | 1,234 |
| reposts | Number of retweets | 567 |
| replies | Number of replies | 89 |
| views | Total view count | 12,345 |
| followers | User's follower count | 50M |
| following | Users they follow | 123 |
| is_verified | Verification status | true/false |
| hashtags | Hashtags used in tweet | #AI #Technology |
| photos | Image URLs in tweet | image1.jpg, image2.jpg |
| videos | Video content URLs | video1.mp4 |
| user_id | Unique user identifier | 12345678 |
| timestamp | Data extraction timestamp | 2025-01-15T11:00:00Z |

## Setup Instructions

### Prerequisites

- n8n instance (self-hosted or cloud)
- Bright Data account with Twitter dataset access
- Google account with Sheets access
- Valid Twitter profile URLs to scrape
- 10-15 minutes for setup

### Step 1: Import the Workflow

1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

### Step 2: Configure Bright Data

1. Set up Bright Data credentials:
   - In n8n: Credentials → + Add credential → HTTP Header Auth
   - Enter your Bright Data API credentials
   - Test the connection
2. Configure the dataset:
   - Ensure you have access to the Twitter dataset (gd_lwxkxvnf1cynvib9co)
   - Verify dataset permissions in the Bright Data dashboard

### Step 3: Configure Google Sheets Integration

1. Create a Google Sheet:
   - Go to Google Sheets
   - Create a new spreadsheet named "Twitter Data" or similar
   - Copy the Sheet ID from the URL: https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit
2. Set up Google Sheets credentials:
   - In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
   - Complete the OAuth setup and test the connection
3. Prepare your data sheet with columns:
   - Use the column headers from the data points table above
   - The workflow will automatically populate these fields

### Step 4: Update Workflow Settings

1. Update the Bright Data nodes:
   - Open the "Trigger Twitter Scraping" node
   - Replace BRIGHT_DATA_API_KEY with your actual API token
   - Verify the dataset ID is correct
2. Update the Google Sheets node:
   - Open the "Store Twitter Data in Google Sheet" node
   - Replace YOUR_GOOGLE_SHEET_ID with your Sheet ID
   - Select your Google Sheets credential
   - Choose the correct sheet/tab name

### Step 5: Test & Activate

1. Add test data:
   - Use the form trigger to input a Twitter profile URL
   - Set a small date range for testing (e.g., the last 7 days)
2. Test the workflow:
   - Submit the form to trigger the workflow
   - Monitor progress in the n8n execution logs
   - Verify data appears in the Google Sheet
   - Check that all expected columns are populated

## Usage Guide

### Running the Workflow

1. Access the workflow form trigger URL (available when the workflow is active)
2. Enter the Twitter profile URL you want to scrape
3. Set the start and end dates for tweet collection
4. Submit the form to initiate scraping
5. Monitor progress - the workflow automatically checks status every minute
6. Once complete, data will appear in your Google Sheet

### Understanding the Data

Your Google Sheet will show:

- **Real-time tweet data** for the specified date range
- **User engagement metrics** (likes, replies, retweets, views)
- **Profile information** (followers, following, verification status)
- **Content details** (hashtags, media URLs, quoted tweets)
- **Timestamps** for each tweet and data extraction

### Customizing Date Ranges

- **Recent data**: Use the last 7-30 days for current activity analysis
- **Historical analysis**: Select specific months or quarters for trend analysis
- **Event tracking**: Focus on specific date ranges around events or campaigns
- **Comparative studies**: Use consistent time periods across different profiles
## Customization Options

### Modifying Data Fields

Edit the custom_output_fields array in the "Trigger Twitter Scraping" node to add or remove data points:

```json
"custom_output_fields": [
  "id",
  "user_posted",
  "name",
  "description",
  "date_posted",
  "likes",
  "reposts",
  "replies",
  "views",
  "hashtags",
  "followers",
  "is_verified"
]
```

### Changing Google Sheet Structure

Modify the column mapping in the "Store Twitter Data in Google Sheet" node to match your preferred sheet layout, and add custom formulas or calculations.

### Adding Multiple Recipients

To process multiple Twitter profiles:

1. Modify the form to accept multiple URLs
2. Add a loop node to process each URL separately
3. Implement delays between requests to respect rate limits

## Troubleshooting

### Common Issues & Solutions

**"Bright Data connection failed"**
- Cause: Invalid API credentials or dataset access
- Solution: Verify credentials in the Bright Data dashboard and check dataset permissions

**"No data extracted"**
- Cause: Invalid Twitter URLs or private/protected accounts
- Solution: Verify URLs are valid public Twitter profiles; test with different accounts

**"Google Sheets permission denied"**
- Cause: Incorrect credentials or sheet permissions
- Solution: Re-authenticate Google Sheets and check sheet sharing settings

**"Workflow timeout"**
- Cause: Large date ranges or high-volume accounts
- Solution: Use smaller date ranges; implement pagination for high-volume accounts

**"Progress monitoring stuck"**
- Cause: Scraping job failed or API issues
- Solution: Check the Bright Data dashboard for job status; restart the workflow if needed

### Advanced Troubleshooting

- Check execution logs in n8n for detailed error messages
- Test individual nodes by running them separately
- Verify data formats and ensure consistent field mapping
- Monitor rate limits if scraping multiple profiles consecutively
- Add error handling and implement retry logic for robust operation

## Use Cases & Examples

### 1. Social Media Analytics

Goal: Track engagement metrics and content performance

- Monitor tweet engagement rates over time
- Analyze hashtag effectiveness and reach
- Track follower growth and audience interaction
- Generate weekly/monthly performance reports

### 2. Competitor Research

Goal: Monitor competitor social media activity

- Track competitor posting frequency and timing
- Analyze competitor content themes and strategies
- Monitor competitor engagement and audience response
- Identify trending topics and hashtags in your industry

### 3. Brand Monitoring

Goal: Track brand mentions and sentiment analysis

- Monitor specific Twitter accounts for brand mentions
- Track hashtag campaigns and user-generated content
- Analyze sentiment trends and audience feedback
- Identify influencers and brand advocates

### 4. Content Strategy Development

Goal: Analyze successful content patterns

- Identify high-performing tweet formats and topics
- Track optimal posting times and frequencies
- Analyze hashtag performance and reach
- Study audience engagement patterns
### 5. Market Research

Goal: Collect social media data for market analysis

- Gather consumer opinions and feedback
- Track industry trends and discussions
- Monitor product launches and market reactions
- Support product development with social insights

## Advanced Configuration

### Batch Processing Multiple Profiles

To monitor multiple Twitter accounts efficiently:

1. Create a master sheet with profile URLs and date ranges
2. Add a loop node to process each profile separately
3. Implement delays between requests to respect rate limits
4. Use separate sheets or tabs for different profiles

### Adding Data Analysis

Enhance the workflow with analytical capabilities:

- Create additional sheets for processed data and insights
- Add formulas to calculate engagement rates and trends
- Implement data visualization with charts and graphs
- Generate automated reports and summaries

### Integration with Business Tools

Connect the workflow to your existing systems:

- **CRM Integration**: Update customer records with social media data
- **Slack Notifications**: Send alerts when data collection is complete
- **Database Storage**: Store data in PostgreSQL/MySQL for advanced analysis
- **BI Tools**: Connect to Tableau/Power BI for comprehensive visualization

## Performance & Limits

### Expected Performance

- **Single profile**: 30 seconds to 5 minutes (depending on date range)
- **Data accuracy**: 95%+ for public Twitter profiles
- **Success rate**: 90%+ for accessible accounts
- **Daily capacity**: 10-50 profiles (depends on rate limits and data volume)

### Resource Usage

- **Memory**: ~200 MB per execution
- **Storage**: Minimal (data stored in Google Sheets)
- **API calls**: 1 Bright Data call + multiple Google Sheets calls per profile
- **Bandwidth**: ~5-10 MB per profile scraped
- **Execution time**: 2-10 minutes for typical date ranges

### Scaling Considerations

- **Rate limiting**: Add delays for high-volume scraping
- **Error handling**: Implement retry logic for failed requests
- **Data validation**: Add checks for malformed or missing data
- **Monitoring**: Track success/failure rates over time
- **Cost optimization**: Monitor API usage to control costs

## Support & Community

### Getting Help

- **n8n Community Forum**: community.n8n.io
- **Documentation**: docs.n8n.io
- **Bright Data Support**: Contact through your dashboard
- **GitHub Issues**: Report bugs and feature requests

### Contributing

- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

## Quick Setup Checklist

### Before You Start

- [ ] n8n instance running (self-hosted or cloud)
- [ ] Bright Data account with Twitter dataset access
- [ ] Google account with Sheets access
- [ ] Valid Twitter profile URLs ready for scraping
- [ ] 10-15 minutes available for setup

### Setup Steps

- [ ] Import Workflow - Copy JSON and import to n8n
- [ ] Configure Bright Data - Set up API credentials and test
- [ ] Create Google Sheet - New sheet with the proper column structure
- [ ] Set up Google Sheets credentials - OAuth setup and test
- [ ] Update workflow settings - Replace API keys and sheet IDs
- [ ] Test with sample data - Add one Twitter URL and a small date range
- [ ] Verify data flow - Check that data appears in the Google Sheet correctly
- [ ] Activate workflow - Enable the form trigger for production use

### Ready to Use!

Your workflow URL: Access the form trigger when the workflow is active.

Happy Twitter scraping! This workflow provides a solid foundation for automated Twitter data collection. Customize it to fit your specific social media analytics and research needs.
For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by David Olusola
## Overview

This workflow watches for new rows in a Google Sheet (e.g., where you manually log customer reviews), uses a Code node to perform a simple sentiment analysis, then updates the same row with the detected sentiment.

Use case: Quickly gauge customer satisfaction, identify positive/negative trends, and prioritize follow-ups based on sentiment.

## How It Works

This workflow operates in three main steps:

1. **Google Sheets Trigger (New Row)**: The workflow starts with a Google Sheets Trigger node configured to monitor a specific Google Sheet for new rows. It fires whenever a new review is added.
2. **Code Node (Sentiment Analysis)**: A Code node receives the new row data (containing the review text). JavaScript code inside this node performs a basic sentiment analysis by checking for keywords (e.g., "great", "excellent" for positive; "bad", "problem" for negative) and assigns a "Positive", "Negative", or "Neutral" sentiment. A sketch of this logic appears at the end of this template.
3. **Update Google Sheet Row**: A Google Sheets node updates the same row that triggered the workflow, adding the sentiment result (and potentially other analysis data) to a new column in that row.

## Setup Steps

### Step 1: Create Google Sheets Credentials in n8n

1. In your n8n instance, click Credentials in the left sidebar.
2. Click New Credential.
3. Search for and select "Google Sheets OAuth2 API", then follow the authentication steps with your Google account.
4. Save it, and make note of the credential name (e.g., "My Google Sheets Account").

### Step 2: Prepare Your Google Sheet (or, better, make a copy of the one provided in the template)

1. Create a new Google Sheet in your Google Drive (e.g., Customer Reviews).
2. In the first row, add these column headers:
   - Timestamp
   - Customer Name
   - Review Text
   - Sentiment (this column will be updated by the workflow)
   - Review ID (optional, for tracking)
3. Copy the Sheet ID from the URL (e.g., https://docs.google.com/spreadsheets/d/YOUR_GOOGLE_SHEET_ID_HERE/edit).
4. Copy the GID of the specific sheet tab (e.g., https://docs.google.com/spreadsheets/d/YOUR_GOOGLE_SHEET_ID_HERE/edit#gid=YOUR_GID_HERE). This is the sheetName value.

### Step 3: Import the Workflow JSON

### Step 4: Activate and Test the Workflow

1. Click the "Activate" toggle in the top right corner of the n8n workflow editor.
2. Go to your Google Sheet and manually add a new row with a "Review Text" (e.g., "This product is great, I love it!"). Leave the "Sentiment" column empty.
3. The workflow should trigger automatically (it polls every minute by default), analyze the sentiment, and update the "Sentiment" column in your Google Sheet. You can also click "Execute Workflow" to test immediately.
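A minimal sketch of the keyword check the Code node performs, for a Code node set to "Run Once for All Items". The keywords "great", "excellent", "bad", and "problem" come from the description above; the others are illustrative additions you can extend for your domain:

```javascript
// Sketch: basic keyword sentiment for the "Review Text" column.
const positive = ["great", "excellent", "love", "amazing"]; // "love"/"amazing" are illustrative
const negative = ["bad", "problem", "terrible", "awful"];   // "terrible"/"awful" are illustrative

return items.map((item) => {
  const text = (item.json["Review Text"] || "").toLowerCase();
  const posHits = positive.filter((w) => text.includes(w)).length;
  const negHits = negative.filter((w) => text.includes(w)).length;

  let sentiment = "Neutral";
  if (posHits > negHits) sentiment = "Positive";
  else if (negHits > posHits) sentiment = "Negative";

  item.json.Sentiment = sentiment;
  return item;
});
```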
by Gleb D
This n8n workflow automatically retrieves recent Reuters news articles related to a user-specified keyword, summarizes the main findings using Google Gemini, formats the output into styled HTML, and sends a clean email report to a specified address.

## What It Does

- Collects news data from Bright Data's Reuters dataset.
- Sorts and filters the 10 most recent news articles by publication_date.
- Sends the structured news data to Gemini Flash for summarization.
- Converts Gemini's response (in Markdown) into styled HTML.
- Delivers a concise news briefing via email, including clickable source links and topic highlights.

## Step-by-Step Setup

1. **User Form**: Accepts a keyword from the user via an n8n form trigger.
2. **Bright Data API**: Posts a discover_new request to Bright Data's Reuters dataset using the keyword.
3. **Snapshot Polling**: Waits and checks for dataset readiness using the snapshot ID.
4. **Data Retrieval**: Downloads the news data once the snapshot is complete.
5. **Parsing**: Filters and sorts the latest 10 articles using a Python Code node (a sketch of the same logic appears below).
6. **AI Analysis**: Google Gemini summarizes the filtered content into one briefing.
7. **Markdown → HTML**: Converts the AI response into styled HTML using Markdown + Code nodes.
8. **Email Delivery**: Sends the briefing as an email to a predefined recipient.

## How It Works

- **Polling Control**: Uses Wait and If nodes to handle Bright Data snapshot readiness.
- **Date Sorting**: Publication dates (ISO 8601 format) are parsed and used for sorting.
- **AI Summarization**: Gemini condenses multiple articles into one cohesive summary.
- **Formatting**: Clean HTML with readable styles is generated dynamically before sending.

## Final Output

The email includes:

- A brief summary of the most important developments
- The date range of the collected news
- The topics covered

## Credentials Used

- Bright Data API (replace YOUR_API_KEY in the HTTP nodes)
- Google Gemini (Flash) API
- Email SMTP (configured in the Email Send node)

## Notes

- You must replace all YOUR_API_KEY placeholders in the Bright Data request headers with your actual Bright Data API key.
- You can customize the keyword prompt and output style freely.
- I recommend keeping the sort = relevance option; chronological sorting by date is handled later in the workflow.
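The template's parsing step uses a Python Code node; for orientation, here is an equivalent sketch of the sort-and-filter logic written for a JavaScript Code node in "Run Once for All Items" mode (n8n Code nodes support either language). The publication_date field name follows the description above:

```javascript
// Sketch: keep the 10 most recent articles by publication_date (ISO 8601).
return items
  .filter((item) => item.json.publication_date) // drop records without a date
  .sort(
    (a, b) =>
      new Date(b.json.publication_date) - new Date(a.json.publication_date)
  )
  .slice(0, 10);
```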
by n8n Team
This workflow combines customers' details with their payment data and passes the result to Pipedrive as a note on the organization.

## Prerequisites

- Stripe account and Stripe credentials
- Pipedrive account and Pipedrive credentials

## How it works

1. A Cron node triggers the workflow every day at 8 a.m.
2. An HTTP Request node searches for payments in Stripe.
3. The Item Lists node creates separate items from the list of payment data.
4. A Merge node takes in the payment data as input 1.
5. The Stripe node gets all the customer data.
6. A Set node renames the customer-related data fields and keeps only the needed fields.
7. The Merge node takes in the customer data as input 2 and combines the payment data with the customer data.
8. The Pipedrive node searches for the organization and creates a note with the payment data.
by ist00dent
This n8n template allows you to automatically create shortened URLs using the TinyURL API by simply sending a webhook request. It's a quick and efficient way to integrate URL shortening into your automated workflows, ideal for sharing long links in social media, emails, or other applications.

## How it works

1. **Receive Link Webhook**: This node is the entry point for the workflow. It listens for incoming POST requests and expects a JSON body containing the url to be shortened and your api_key for TinyURL.
2. **Create TinyURL**: This node sends a POST request to the TinyURL API, passing the long URL and your API key. It can also accept optional parameters such as domain, alias, and description to customize the shortened link.
3. **Respond with Shortened URL**: This node sends the response from the TinyURL API (which includes the new shortened URL) back to the service that initiated the webhook.

## Who is it for?

This workflow is ideal for:

- **Content managers & marketers**: Quickly shorten links for campaigns, social media posts, or tracking.
- **Developers**: Automate link shortening within applications or scripts.
- **Automation enthusiasts**: Integrate a URL shortener into various n8n workflows (e.g., after generating a report, before sending a notification).
- **Anyone needing on-demand short links**: A flexible solution for ad-hoc link shortening.

## Data Structure

When you trigger the webhook, send a POST request with a JSON body structured as follows:

```json
{
  "api_key": "YOUR_TINYURL_API_KEY",
  "url": "https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2",
  "domain": "tinyurl.com", // Optional: defaults to tinyurl.com
  "alias": "myCustomAlias", // Optional: desired custom alias for the link
  "description": "My project link" // Optional: description for the link
}
```

The workflow returns the JSON response directly from the TinyURL API, which includes the short_url and other details about the newly created link.

## Setup Instructions

1. **Obtain a TinyURL API key**: Before importing, make sure you have an API key from TinyURL. You can typically get this by signing up for an account on their website.
2. **Import the workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure the webhook path**: Double-click the Receive Link Webhook node and set a unique, descriptive path in the 'Path' field (e.g., /shorten-link).
4. **Activate the workflow**: Save and activate the workflow.

## Tips

- **Dynamic inputs**: The workflow dynamically uses the url, api_key, alias, and description from the incoming webhook data, which makes it highly flexible.
- **Error handling**: Add an Error Trigger node to catch issues (e.g., an invalid API key or malformed URL) during TinyURL creation. Configure it to send notifications or log errors for easy troubleshooting.
- **Post-shortening actions**: After generating the shortened URL, you can insert additional nodes before the Respond with Shortened URL node to perform other actions. For example, you could:
  - Save to a database: Store the original and shortened URLs in Airtable, Google Sheets, or a PostgreSQL database.
  - Send a message: Automatically send the shortened URL via Slack, Discord, email, or SMS.
  - Update a record: Update a CRM record or project management task with the new shortened link.
- **Custom domains**: If you have a custom domain configured with your TinyURL account, change the domain parameter in the Create TinyURL node to use it.
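A quick way to exercise the workflow once it's active; the base URL is a placeholder for your n8n instance, and the /shorten-link path follows the setup example above:

```javascript
// Sketch: call the active workflow's webhook to shorten a URL.
async function shorten() {
  const res = await fetch("https://YOUR_N8N_INSTANCE/webhook/shorten-link", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      api_key: "YOUR_TINYURL_API_KEY",
      url: "https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2",
      alias: "myCustomAlias", // optional
    }),
  });
  // The workflow relays TinyURL's response, including the shortened link.
  console.log(await res.json());
}

shorten();
```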
by Matthieu
# LinkedIn Profile Enrichment Automation

## Who is this for?

This template is perfect for sales teams, recruiters, marketing professionals, and business development specialists who need to gather comprehensive LinkedIn profile data at scale. It is ideal for lead generation teams building prospect databases, recruiters sourcing candidate information, sales professionals researching prospects, and marketing teams creating targeted outreach campaigns.

## What problem does this workflow solve?

Manually collecting detailed information from LinkedIn profiles is incredibly time-consuming and prone to inconsistency. Visiting each profile individually to extract names, job titles, experience, education, skills, and contact details can take hours for even small prospect lists. This automation eliminates the tedious manual data entry while ensuring consistent, comprehensive profile enrichment.

## What this workflow does

This workflow automatically enriches a list of LinkedIn profile URLs by extracting comprehensive professional data, including:

- **Personal details** (first name, last name, headline, location)
- **Professional status** (hiring status, open-to-work indicators)
- **Network metrics** (connections, followers count)
- **Work experience** (up to 5 most recent positions with company details, dates, and roles)
- **Education background** (up to 3 educational institutions with degrees and dates)
- **Skills and languages** (complete skill sets and language proficiencies)
- **Professional summary** (profile description and bio)

The enriched data is automatically organized and updated in your Google Sheets database with structured formatting for easy analysis and outreach.

## Setup

1. Create a Ghost Genius API account and obtain your API key for cookieless LinkedIn profile scraping.
2. Configure HTTP Request credentials with Header Auth using your Ghost Genius API key.
3. Set up your Google Sheets database using the provided template with these columns:
   - URL, Firstname, Lastname, Tagline, Location
   - Connections, Followers, Hiring?, Open to work?
   - Summary, Languages, Skills
   - Experience 1-5, Education 1-3
4. Configure Google Sheets OAuth2 credentials following n8n's authentication setup guide.
5. Add LinkedIn profile URLs to the first column of your Google Sheet to begin enrichment.
6. Test the workflow with a small batch before processing larger lists.

## How to customize this workflow

- **Adjust batch processing settings** to handle larger volumes by modifying the batch size and interval timing
- **Add data validation rules** to filter out incomplete or invalid profiles before processing
- **Integrate with CRM systems** like HubSpot or Salesforce to automatically sync enriched data
- **Set up automated scheduling** to regularly re-enrich profiles and capture profile updates
- **Add email notifications** to alert when enrichment batches are completed or encounter errors
- **Customize data mapping** to include additional profile fields or reorganize the output structure
- **Add duplicate detection** to prevent re-processing the same profiles multiple times
by Don Jayamaha Jr
This advanced agent analyzes long-term price action in the Binance Spot Market using 1-day candles. It calculates key macro indicators such as RSI, MACD, BBANDS, EMA, SMA, and ADX to identify high-confidence trend setups and market momentum. It is used by the Quant AI system for directional bias and macro-level signal validation.

Watch Tutorial:

## Purpose

- Detect major trend reversals, consolidation zones, and macro bias
- Support long-term swing trading decisions
- Provide reliable 1-day signals for downstream agents

## Core Features

| Feature | Description |
| --- | --- |
| Trigger | Called by parent workflows via Execute Workflow |
| Input Format | { "message": "MATICUSDT", "sessionId": "telegram_id" } |
| Webhook Call | Sends a request to the internal 1d indicators webhook |
| Technical Indicators | RSI, MACD, BBANDS, EMA, SMA, ADX (based on 40 daily candles) |
| GPT (gpt-4.1-mini) Agent | Interprets numerical data into human-readable trend signals |
| Output | Summary suitable for Telegram or further agent consumption |

## External Tools Called

https://treasurium.app.n8n.cloud/webhook/1d-indicators

Sends: { "symbol": "SOLUSDT" }

## Indicator Calculations

| Indicator | Purpose |
| --- | --- |
| RSI (14) | Overbought / oversold signals |
| MACD (12,26,9) | Trend reversals / momentum |
| BBANDS (20, 2) | Volatility expansion |
| EMA (20) | Short-term trend confirmation |
| SMA (20) | Macro-level support/resistance |
| ADX (14) | Trend strength + directional DI |

## Setup

1. Import the JSON into n8n.
2. Add your OpenAI API credentials.
3. Ensure the /1d-indicators webhook is connected and working.
4. Use this agent as a sub-workflow in:
   - Binance SM Financial Analyst Tool
   - Binance Spot Market Quant AI Agent

## Output Example

1D Overview – MATICUSDT

- RSI: 71 → Overbought
- MACD: Bearish cross forming
- BBANDS: Widening volatility
- EMA < SMA → Downtrend momentum
- ADX: 33 → High trend strength

## Notes

- Not user-facing: outputs are structured JSON or Telegram-style summaries.
- Pairs well with shorter-timeframe tools (15m-4h) for confidence stacking.

## Licensing & Attribution

© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

Need help? Reach out on LinkedIn: Don Jayamaha
by Joseph LePage
# MCP AI Chatbot using Brave Search

Disclaimer: This workflow only works with local installations of n8n because it uses a community MCP node.

## Who is this for?

This workflow is ideal for developers, automation enthusiasts, and businesses looking to integrate AI-powered chat capabilities into their workflows. It's particularly useful for those leveraging Brave Search and MCP tools to enhance user interactions and streamline data retrieval.

## What problem is this workflow solving?

This workflow addresses the challenge of creating an intelligent chatbot that can process user queries, execute searches using Brave Search, and provide responses enriched by AI. It simplifies the integration of multiple tools into a cohesive system, saving time and effort for users who need a robust conversational AI solution.

## What this workflow does

1. Listens for incoming chat messages using the Chat Trigger node.
2. Processes user input with an AI Agent powered by GPT-4o.
3. Retrieves relevant tools using the MCP Get Brave Tools node.
4. Executes specific search queries via the MCP Execute Brave Search node.
5. Maintains short-term conversation memory with the Simple Memory node.

## Setup

Prerequisites:

- Access to an n8n instance (self-hosted).
- API credentials for OpenAI and MCP Client Tools.
- Brave Search API key.

Steps:

1. Import the workflow JSON into your n8n instance.
2. Configure the API credentials for OpenAI and MCP Client Tools in their respective nodes.
3. Set up your Brave Search API key in the MCP nodes: https://brave.com/search/api/

Testing:

- Use the built-in chat interface to send test messages.
- Verify that the chatbot processes queries and returns results as expected.

## How to customize this workflow to your needs

- Modify the AI Agent's prompt settings to tailor responses to your specific use case.
- Adjust the memory buffer in the Simple Memory node to retain more or less conversational context.
- Replace or add tools in the MCP nodes to expand functionality.
by n8n Team
This template shows how you can create reports on data in an app and share a summary in another app. Specifically, this example checks a Notion database for new submissions, filters for submissions with a specific tag, and then sends a Slack message with the number created this week. Setup instructions are located inside the workflow template.
by Automate With Marc
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# AI-Powered Blog Post Generator

Category: Content Automation / AI Writing / Marketing

## Description

This automated workflow helps you generate fresh, SEO-optimized blog posts daily using AI tools, perfect for solo creators, marketers, and content teams who want to stay on top of the latest AI trends without manual research or writing.

For more builds like this and step-by-step tutorial guides, check out: https://www.youtube.com/@Automatewithmarc

Here's how it works:

1. A Schedule Trigger kicks off the workflow daily (or at your preferred interval).
2. A Perplexity AI node researches the most interesting recent AI news, tailored for a non-technical audience.
3. An AI Agent (Claude via Anthropic) turns that news into a full-length blog post based on a structured prompt that includes a title, intro, 3+ section headers, takeaway, and meta description, designed for clarity, engagement, and SEO.
4. Optional Memory and Perplexity Tool nodes enhance the agent's responses by letting it clarify facts or fetch more context.
5. A Google Docs node automatically saves the final blog post to your selected document, ready for review, scheduling, or publishing.

## Key Features

- Combines Perplexity AI + Claude AI (Anthropic) for research + writing
- Built-in memory and retrieval logic for deeper contextual accuracy
- Non-technical, friendly writing style ideal for general audiences
- Output saved directly to Google Docs
- Fully no-code, customizable, and extendable

## Use Cases

- Automate weekly blog content for your newsletter or site
- Repurpose content into social posts or scripts
- Keep your brand relevant in the fast-moving AI landscape

## Setup Requirements

- Perplexity API key
- Anthropic API key
- Google Docs (OAuth2 connected)
by Joseph
This workflow automates invoice generation from form submissions, ensuring unique order IDs, creating PDF invoices, storing files, emailing customers, and logging invoice data, all seamlessly integrated.

## Workflow Overview

1. **Trigger (Webhook)**: Starts when an order form is submitted, capturing customer and order details.
2. **Generate Random Order ID**: A Function node creates a unique alphanumeric invoice ID (e.g., INV-X92B7D).
3. **Check for Duplicate Order ID**: Google Sheets looks up the generated order ID in your invoice log sheet to prevent duplicates.
4. **Conditional Check (IF Node)**:
   - If the ID already exists, a new ID is generated (loops back).
   - If unique, the workflow proceeds to invoice creation.
5. **Prepare Invoice Data**: A Set node formats customer info, date, order items, and the unique order ID to fit your invoice template.
6. **Convert HTML to PDF**: An HTTP Request node sends your invoice HTML to the RapidAPI HTML-to-PDF service and receives the PDF file.
7. **Upload PDF to Cloud Storage**: Saves the PDF in Google Drive or Dropbox with a clear file name such as Invoice-INV-X92B7D.pdf.
8. **Send Invoice Email to Customer**: An Email node attaches the PDF and includes the order ID in the email subject/body.
9. **Log Invoice Details**: Appends invoice data (customer info, order ID, total, PDF link) to your Google Sheet for tracking.

## Node Details & Setup

1. **Webhook Trigger**: Configure it to receive form submissions (order details like name, email, items, total).
2. **Function: Generate Random Order ID**: Sample JS code generates unique IDs prefixed with INV- (see the sketch at the end of this template).
3. **Google Sheets: Lookup Row**: Set up the connection to your invoice log sheet and search for the existing order ID to avoid duplicates.
4. **IF Node: Check Order ID Existence**: If the order ID is found, loop back to regenerate; otherwise, continue the workflow.
5. **Set Node: Prepare Invoice HTML**: Define variables such as customer name, date, items, and order ID. This data populates your HTML invoice template.
6. **HTTP Request: Convert HTML to PDF**: Use the API URL to get your key, send the invoice HTML in the request body, and receive the PDF file blob or download URL.
7. **Google Drive (or Dropbox) Upload**: Upload the PDF file using the file name format Invoice-{{$json["order_id"]}}.pdf
8. **Email Node**: Send to the customer email from the form data, attach the generated PDF, and include the order ID in the email subject or body for reference.
9. **Google Sheets: Append Row**: Log invoice metadata to keep records updated.

## Google Sheets Template

You can make a copy of the invoice log template here. The sheet includes columns for order_id, customer name, email, total, and invoice PDF link. Customize it as needed.

## Additional Notes

- Customize the invoice HTML template inside the Set node to match your branding.
- Ensure API credentials for RapidAPI, Google Drive/Dropbox, and email are properly set up in your n8n credentials.
- You can expand this workflow by adding payment processing or SMS notifications.

Need help or want a custom workflow? Reach out via email at joseph@uppfy.com.
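A minimal sketch of the order-ID Function node referenced in step 2 above; the INV- prefix and six-character alphanumeric format follow the INV-X92B7D example, while the exact code in the template may differ:

```javascript
// Sketch for a Function/Code node ("Run Once for All Items"):
// generate an ID like "INV-X92B7D" and attach it to each item.
const chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
const randomId = () =>
  "INV-" +
  Array.from({ length: 6 }, () =>
    chars[Math.floor(Math.random() * chars.length)]
  ).join("");

return items.map((item) => ({
  json: { ...item.json, order_id: randomId() },
}));
```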