by phil
Generate royalty-free sound effects for all your projects: ASMR, YouTube videos, podcasts, and more. This workflow generates unique AI-powered sound effects using the ElevenLabs Sound Effects API. Enter a text description of the sound you envision, and the workflow will generate it, save the MP3 file to your Google Drive, and instantly provide a link to listen to your creation. It is a powerful tool for quickly producing unique ASMR triggers, ambient sounds, or specific audio textures without any complex software.

Who's it for
This template is ideal for:
- **Content Creators**: Generate royalty-free sound effects for videos, podcasts, and games on the fly.
- **Sound Designers & Foley Artists**: Quickly prototype and generate specific audio clips for projects from a simple text prompt.
- **Developers & Hobbyists**: Integrate AI sound effect generation into projects or simply experiment with the capabilities of the ElevenLabs API.

How to set up
1. Configure the API key: Sign up for an ElevenLabs account and get your API key. In the "ElevenLabs API" node, create new credentials and add your ElevenLabs API key.
2. Connect Google Drive: Select the "Upload mp3" node and create new credentials to connect your Google Drive account.
3. Activate the workflow: Save and activate the workflow, then use the Form Trigger's production URL to access the AI ASMR Sound Generator web form.

Requirements
- An active n8n instance.
- An ElevenLabs account for the API key.
- A Google Drive account.

How to customize this workflow
- **Change Storage**: Replace the Google Drive node with another storage service node such as Dropbox, AWS S3, or an FTP server to save your sound effects elsewhere.
- **Modify Sound Parameters**: In the "elevenlabs_api" node, you can adjust the JSON body to control the output (see the example body below). Key parameters include:
  - loop (boolean, optional, default: false): Creates a sound effect that loops smoothly. Note: only available for the 'eleven_text_to_sound_v2' model.
  - duration_seconds (number, optional, default: auto): Sets the sound's duration in seconds (from 0.5 to 30). If not set, the AI infers the optimal duration from the prompt.
  - prompt_influence (number, optional, default: 0.3): A value between 0 and 1 that controls how strictly the generation follows the prompt. Higher values result in less variation.
- **Customize Confirmation Page**: Edit the "prepare reponse" node to change the design and text of the final page shown to the user.

Phil | Inforeole | LinkedIn. 🇫🇷 Contact us to automate your processes.
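For reference, a minimal JSON body for the "elevenlabs_api" node might look like the sketch below. The prompt text is invented, and the endpoint (a POST to https://api.elevenlabs.io/v1/sound-generation with your key in the xi-api-key header) reflects ElevenLabs' public docs at the time of writing, so verify it against your account; as noted above, loop is only honored by the eleven_text_to_sound_v2 model.

```json
{
  "text": "Gentle rain tapping on a tin roof, soft and rhythmic",
  "duration_seconds": 10,
  "prompt_influence": 0.3,
  "loop": false
}
```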
by Hassan
AI-Powered Personalized Cold Email Icebreaker Generator

Overview
This intelligent automation system transforms generic cold outreach into highly personalized email campaigns by automatically scraping prospect websites, analyzing their content with AI, and generating unique, conversational icebreakers that reference specific, non-obvious details about each business. The workflow integrates seamlessly with Instantly.ai to deliver campaigns that achieve significantly higher response rates than traditional cold email approaches. The system processes leads from your n8n data table, validates contact information, scrapes multiple pages from each prospect's website, uses GPT-4.1 to synthesize insights, and crafts personalized openers that make recipients believe you've done deep research on their business, all without manual intervention.

Key Benefits
- 🎯 Hyper-Personalization at Scale: Generate unique icebreakers for 30+ leads per execution that reference specific details about each prospect's business, creating the impression of manual research while automating 100% of the process.
- 💰 Dramatically Higher Response Rates: Personalized cold emails using this system typically achieve 4-5% response rates, directly translating to more booked meetings and closed deals.
- ⏱️ Massive Time Savings: What would take 10-15 minutes of manual research per prospect (website review, note-taking, icebreaker writing) now happens in 30-45 seconds automatically, freeing your team to focus on conversations instead of research.
- 🧠 AI-Powered Intelligence: A dual GPT model approach uses GPT-4.1-mini for efficient content summarization and GPT-4.1 for creative icebreaker generation, ensuring both cost efficiency and high-quality output with a distinctive "spartan" tone that converts.
- 🔄 Built-In Error Handling: Comprehensive retry logic (5 attempts with 5-second delays) and graceful failure management keep the workflow processing even when websites are down or inaccessible, automatically removing problem records from your queue.
- 🗃️ Clean Data Management: Automatically removes processed leads from your database after successful campaign addition, preventing duplicate outreach and maintaining organized lead lists for future campaigns.
- 📊 Batch Processing Control: Processes leads in configurable batches (default 30) to manage API costs and rate limits while maintaining efficiency, with easy scaling for larger lists.
- 🔌 Instantly.ai Integration: Direct API integration pushes leads with custom variables into your campaigns automatically, supporting skip_if_in_campaign logic to prevent duplicate additions and maintain clean campaign lists.

How It Works

Stage 1: Lead Acquisition & Validation
The workflow begins with a manual trigger, allowing you to control when processing starts. It queries your n8n data table and retrieves records filtered by Email_Status, and the Limit node caps the result at 30 items to control processing costs and API usage. Records then pass through the "Only Websites & Emails" filter, which uses strict validation to ensure both organization_website_url and email fields exist and contain data, eliminating invalid records before expensive AI processing occurs.

Stage 2: Intelligent Web Scraping
Valid leads enter the Loop Over Items batch processor, which handles them sequentially to manage API rate limits.
For each lead, the workflow fetches their website homepage using the HTTP Request node with retry logic (5 attempts, 5-second waits) and "always output data" enabled to capture even failed requests. The If node checks the response for error indicators; if errors are detected, the problematic record is immediately deleted from the database via Delete row(s) to prevent future processing waste. Successfully scraped HTML content passes through the Markdown converter, which transforms it into clean markdown that AI models can analyze more effectively.

Stage 3: AI Content Analysis
The markdown content flows into the first AI node, "Summarize Website Page," which uses GPT-4.1-mini (cost-efficient for summarization tasks) with a specialized system prompt. The AI reads the scraped content and generates a comprehensive two-paragraph abstract, similar in detail to an academic paper abstract, focusing on what the business does, their projects, services, and unique differentiators. The output is structured JSON with an "abstract" field. Multiple page summaries (if the workflow is extended to scrape additional pages) are collected by the Aggregate node, which combines all abstracts into a single array for comprehensive analysis.

Stage 4: Personalized Icebreaker Generation
The aggregated summaries, along with prospect profile data (name, headline, company), flow into the "Generate Multiline Icebreaker" node powered by GPT-4.1 (higher intelligence for creative writing). This node uses an advanced system prompt with specific rules: write in a spartan, laconic tone; avoid special characters and hyphens; use the format "Really Loved {thing}, especially how you're {doing/managing/handling} {otherThing}"; reference small, non-obvious details (never generic compliments like "Love your website!"); and shorten company names and locations naturally. The prompt includes a few-shot example teaching the AI the exact style and depth expected. Temperature is set to 0.5 for creative but consistent output.

Stage 5: Campaign Deployment & Cleanup
The generated icebreaker is formatted into Instantly.ai's API structure and sent via HTTP POST by the "Sending ice breaker to instantly" node. The payload includes the lead's email, first name, last name, company name, the personalized icebreaker as the "personalization" field, and the website URL, and it supports custom_variables for additional personalization fields. The API call uses skip_if_in_campaign: true to prevent duplicate additions. After successful campaign addition, the Delete row(s)1 node removes the processed record from your data table, maintaining a clean queue. The Loop Over Items node then processes the next lead until all 30 are complete.
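To make Stage 5 concrete, here is a hedged sketch of the JSON body the "Sending ice breaker to instantly" node posts. The field names come from the description above and from Step 6 of the setup guide; the nesting of lead fields under a leads array follows Instantly's lead-add format as an assumption, and using Headline for company_name is only one of the two options mentioned below, so adapt both to your account and data table.

```json
{
  "api_key": "YOUR_INSTANTLY_API_KEY",
  "campaign_id": "00000000-0000-0000-0000-000000000000",
  "skip_if_in_workspace": false,
  "skip_if_in_campaign": true,
  "leads": [
    {
      "email": "{{ $('Loop Over Items').item.json.email }}",
      "first_name": "{{ $('Loop Over Items').item.json.first_name }}",
      "last_name": "{{ $('Loop Over Items').item.json.last_name }}",
      "company_name": "{{ $('Loop Over Items').item.json.Headline }}",
      "personalization": "{{ $json.message.content.icebreaker }}",
      "website": "{{ $('Loop Over Items').item.json.organization_website_url }}",
      "custom_variables": {}
    }
  ]
}
```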
Required Setup & Database Structure

n8n Data Table Requirements:
- Table Name: Configurable (default "Real estate")
- Required Columns:
  - id (unique identifier for each record)
  - first_name (prospect's first name)
  - last_name (prospect's last name)
  - email (valid email address)
  - organization_website_url (full URL with https://)
  - Headline (job title/company descriptor)
  - Email_Status (filter field for processing control)

API Credentials:
- OpenAI API Key (connected as the "Sycorda" credential)
  - Access to the GPT-4.1-mini model
  - Access to the GPT-4.1 model
  - Sufficient credits for batch processing (approximately $0.01-0.03 per lead)
- Instantly.ai API Key
  - Campaign ID (replace the placeholder "00000000-0000-0000-0000-000000000000")
  - Active campaign with proper email accounts configured

Environment Setup:
- n8n instance with the @n8n/n8n-nodes-langchain package installed
- Stable internet connection for web scraping
- Adequate execution timeout limits (recommended 5+ minutes for 30 leads)

Business Use Cases
- B2B Service Providers: Agencies, consultancies, and professional services firms can personalize outreach by referencing a prospect's specific service offerings, client types, or operational approach to book discovery calls and consultations.
- SaaS Companies: Software vendors across any vertical can use this to demonstrate product value through highly relevant cold outreach that references prospect pain points, tech stack, or business model visible on their websites.
- Marketing & Creative Agencies: Agencies offering design, content creation, SEO, or digital marketing services can personalize outreach by referencing prospects' current marketing approach, website quality, or brand positioning.
- E-commerce & Retail: Online retailers and D2C brands can reach potential wholesale partners, distributors, or B2B clients by mentioning their product lines, target markets, or unique value propositions.
- Financial Services: Fintech companies, accounting firms, and financial advisors can personalize cold outreach by referencing a prospect's business size, industry focus, or financial complexity to offer relevant solutions.
- Recruitment & Staffing: Agencies can reach potential clients by mentioning their hiring needs, company growth, team structure, or industry specialization visible on career pages and about sections.
- Technology & Development: Software development agencies, IT consultancies, and tech vendors can reference a prospect's current technology stack, digital transformation initiatives, or technical challenges to position relevant solutions.
- Education & Training: Corporate training providers, coaching services, and educational platforms can personalize outreach by mentioning company culture, team development focus, or learning initiatives referenced on websites.

Revenue Potential
The same icebreaker approach used by leading cold email experts delivers reply rates 4-5% higher than generic outreach templates. By investing approximately $0.11-0.18 per personalized lead (AI processing plus email sending costs), businesses achieve response rates of 4-5%, well above the industry standard for non-personalized campaigns.

Scalability: Process 30 leads (or any number you want; just replace 30 with your own) in minutes with minimal manual oversight, allowing sales teams to maintain high personalization quality while reaching hundreds of prospects weekly. The automation handles the research-intensive work, letting your team focus on high-value conversations with engaged prospects.
Difficulty Level & Build Time
- Difficulty: Intermediate
- Estimated Build Time: 2-3 hours for complete setup

Technical Requirements:
- Familiarity with n8n node configuration
- Basic understanding of API integrations
- JSON data structure knowledge
- OpenAI prompt engineering basics

Setup Complexity Breakdown:
- Data table creation and population: 30 minutes
- Workflow node configuration: 45 minutes
- OpenAI credential setup and testing: 20 minutes
- Instantly.ai API integration: 25 minutes
- Prompt optimization and testing: 45 minutes
- Error handling verification: 15 minutes

Maintenance Requirements: Minimal once configured. Monthly tasks include monitoring OpenAI costs, updating prompts based on performance data, and refilling the data table with new leads.

Detailed Setup Steps

Step 1: Create Your Data Table
1. In n8n, navigate to your project.
2. Create a new data table with a name relevant to your industry.
3. Add columns: id (auto), first_name (text), last_name (text), email (text), organization_website_url (text), Headline (text), Email_Status (text).
4. Import your lead list via CSV or manual entry.
5. Set Email_Status to blank or a specific value you'll filter by.

Step 2: Configure OpenAI Credentials
1. Obtain an OpenAI API key from platform.openai.com.
2. In n8n, go to Credentials → Add Credential → OpenAI.
3. Name it "Sycorda" (or update all OpenAI nodes with your credential name).
4. Paste your API key and test the connection.
5. Ensure your OpenAI account has access to GPT-4.1 models.

Step 3: Import and Configure the Workflow
1. Copy the provided workflow JSON.
2. In n8n, create a new workflow and paste the JSON.
3. Update the "Get row(s)" node: select your data table, configure the Email_Status filter condition, and adjust the limit if needed (default 30).
4. Verify the "Loop Over Items" node has reset: false.

Step 4: Configure Website Scraping
1. In the "Request web page for URL1" node, verify:
   - The URL expression references the correct field: {{ $('Get row(s)').item.json.organization_website_url }}
   - Retry settings: 5 attempts, 5000ms wait
   - "Always Output Data" is enabled
2. Test with a single lead to verify HTML retrieval.

Step 5: Customize AI Prompts for Your Industry
In the "Summarize Website Page" node:
- Review the system prompt.
- Adjust the abstract detail level if needed.
- Keep JSON output enabled.
In the "Generate Multiline Icebreaker" node:
- CRITICAL: Update the few-shot example with your target industry specifics.
- Customize the tone guidance to match your brand voice.
- Modify the icebreaker format template if desired.
- Adjust the temperature (0.5 default; lower for consistency, higher for variety).
- Update the profile format to match your industry (change the "Property Manager or Real estate" references).

Step 6: Set Up Instantly.ai Integration
1. Log into your Instantly.ai account.
2. Navigate to Settings → API Key and copy your key.
3. Create or select the campaign where leads will be added.
4. Copy the Campaign ID from the URL (format: 00000000-0000-0000-0000-000000000000).
5. In the "Sending ice breaker to instantly" node:
   - Update the JSON body with your api_key.
   - Replace the campaign_id placeholder.
   - Adjust the skip_if_in_workspace and skip_if_in_campaign flags.
   - Map the lead fields correctly:
     - email: {{ $('Loop Over Items').item.json.email }}
     - first_name: {{ $('Loop Over Items').item.json.first_name }}
     - last_name: {{ $('Loop Over Items').item.json.last_name }}
     - personalization: {{ $json.message.content.icebreaker }}
     - company_name: extract from Headline or add to the data table
     - website: {{ $('Loop Over Items').item.json.organization_website_url }}
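The personalization mapping in Step 6 reads {{ $json.message.content.icebreaker }}, which assumes the "Generate Multiline Icebreaker" node returns its JSON output roughly in the shape sketched below. The example sentence is invented and simply follows the format the prompt prescribes.

```json
{
  "message": {
    "content": {
      "icebreaker": "Really Loved the Brookside portfolio, especially how you're handling same-day maintenance requests for owners."
    }
  }
}
```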
Step 7: Test and Validate
1. Start with 3-5 test leads in your data table.
2. Execute the workflow manually.
3. Verify each stage: data retrieval from the table, website scraping success, AI summary generation, icebreaker quality and format, Instantly.ai lead addition, and database cleanup.
4. Check your Instantly.ai campaign to confirm leads appear with custom variables.
5. Review error handling by including one lead with an invalid website.

Step 8: Scale and Monitor
1. Increase the batch size in the Limit node (30 → 50+ if needed).
2. Add more leads to your data table.
3. Set up execution logs to monitor costs.
4. Track response rates in Instantly.ai.
5. A/B test prompt variations to optimize icebreaker performance.
6. Consider scheduling automatic execution with n8n's Schedule Trigger.

Advanced Customization Options
- Multi-Page Scraping: Extend the workflow to scrape additional pages (about, services, portfolio) by adding multiple HTTP Request nodes after the first scrape, then modify the Aggregate node to combine all page summaries before icebreaker generation.
- Industry-Specific Prompts: Create separate workflow versions with customized prompts for different verticals or buyer personas to maximize relevance and response rates for each segment.
- Dynamic Campaign Routing: Add Switch or If nodes after icebreaker generation to route leads to different Instantly.ai campaigns based on company size, location, or detected business focus from the AI analysis.
- Sentiment Analysis: Insert an additional OpenAI node after summarization to analyze the prospect's website tone and adjust your icebreaker style accordingly (formal vs. casual, technical vs. conversational).
- CRM Integration: Replace or supplement the data table with direct CRM integration (HubSpot, Salesforce, Pipedrive) to pull leads and push results back, creating a fully automated lead enrichment pipeline.
- Competitor Mention Detection: Add a specialized prompt to the summarization phase that identifies whether prospects mention competitors or specific pain points, then use this intelligence in the icebreaker for even higher relevance.
- LinkedIn Profile Enrichment: Add a Clay or Clearbit integration before the workflow to enrich email lists with LinkedIn profile data, then reference recent posts or career changes in the icebreaker alongside website insights.
- A/B Testing Framework: Duplicate the "Generate Multiline Icebreaker" node with different prompt variations and use a randomizer to split leads between versions, then track performance in Instantly.ai to identify the highest-converting approach.
- Webhook Trigger: Replace the manual trigger with a webhook that fires when new leads are added to your data table or CRM, creating a fully automated lead-to-campaign pipeline that requires zero manual intervention.
- Cost Optimization: Replace the GPT-4.1 models with GPT-4o-mini or Claude models for cost savings if response quality remains acceptable, or implement a tiered approach where only high-value leads get premium model processing.
by Dart
Automatically generate a meeting summary from your meetings through Fireflies.ai, save it to a Dart document, and create a review task with the meeting link attached.

What it does
This workflow activates when a Fireflies.ai meeting is processed (via a webhook). It retrieves the meeting transcript via the FirefliesAI transcript node and uses an AI model to generate a structured summary.

Who's it for
- Teams or individuals needing automatic meeting notes.
- Project managers tracking reviews and actions from meetings.
- Users of Fireflies.ai and Dart who want to streamline their documentation and follow-up process.

How to set up
1. Import the workflow into n8n.
2. Connect your Dart account (it will need workspace and folder access).
3. Add your PROD webhook link from the webhook node to your Fireflies.ai API settings.
4. Add your Fireflies.ai API key on the Fireflies Transcript node.
5. Replace the dummy Folder ID and Dartboard ID with your actual target IDs.
6. Choose your preferred AI model for generating the summaries.

Requirements
- n8n account
- Connected Dart account
- Connected Fireflies.ai account (with access to API key and webhooks)

How to customize the workflow
- Edit the AI prompt to adjust the tone, style, or format of the meeting summaries.
- Add, remove, or change the summary sections to match your needs (e.g., Key Takeaways, Action Items, Summary).
by The O Suite
The Bug Bounty Target Recon n8n workflow is a powerful automation tool for security professionals and ethical hackers. It efficiently automates the time-consuming process of external attack surface mapping. Given a domain, the workflow performs DNS lookups to identify all associated IP addresses, and then utilizes the Shodan API to query:
- Detailed service banners
- Open ports
- Technologies
- Known vulnerabilities

This system delivers crucial, organized OSINT data, saving the user hours of manual scripting and reconnaissance, and providing a clear, actionable map of a target's exposed infrastructure.
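For orientation, the Shodan host lookup behind this workflow (GET https://api.shodan.io/shodan/host/{ip}?key=YOUR_KEY) returns JSON along these lines. The field names follow Shodan's documented host schema; the values here are invented for illustration.

```json
{
  "ip_str": "198.51.100.23",
  "org": "ExampleHost LLC",
  "hostnames": ["mail.example.com"],
  "ports": [22, 80, 443],
  "vulns": ["CVE-2023-44487"],
  "data": [
    {
      "port": 443,
      "transport": "tcp",
      "product": "nginx",
      "data": "HTTP/1.1 200 OK ..."
    }
  ]
}
```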
by Amir Tadrisi
Direct Booking Site Generator Workflow

This workflow instantly transforms any Airbnb listing into a polished, mobile-ready direct booking site hosted on Netlify.

Requirements

1. Install the Airbnb Scraper Node
This workflow depends on the community package n8n-nodes-airbnb-scraper. Install it on your n8n instance (Settings → Community Nodes) before importing the workflow.

2. Generate the Required API Tokens

| Credential | Purpose | Where to create it |
| --- | --- | --- |
| Airbnb Scraper API Token | Authenticates the Airbnb Scraper node so it can fetch listing data. | Sign up at scraper.shortrentals.ai and copy your API token from the dashboard. |
| Netlify Personal Access Token | Allows the workflow to create sites and deploy ZIP assets through the Netlify API. | Go to Netlify User Settings → Applications → Personal access tokens and generate a token with Deploy sites permissions. |

Store both tokens as credentials in n8n (Airbnb Scraper API and Netlify API Token) before executing the workflow.

How the Workflow Works
1. Manual Trigger & Listing Input: Provide any Airbnb listingId and run the workflow.
2. Data Collection: n8n-nodes-airbnb-scraper pulls rich listing data (photos, amenities, host details, pricing, reviews, etc.).
3. Static Site Generation: The Generate HTML Site node transforms that data into a premium, mobile-responsive landing page with a sticky booking card, amenities grid, gallery, and shortrentals.ai credit.
4. ZIP Packaging: Prepare Binary and Create ZIP convert the HTML into a Netlify-ready archive (rooted index.html).
5. Netlify Deploy: Create Netlify Site spins up a new site (unique subdomain per run), and Deploy ZIP uploads the packaged site via Netlify's deploy API (see the API sketch at the end of this section).
6. Output: The final node returns the public URL, admin dashboard link, site ID, and deploy metadata so you can verify or reuse the site later.

Need More Functionality?
If you require conversion-ready sites with payments, calendar sync, or a booking engine, head to sitebuilder.shortrentals.ai to explore the full product suite.

Questions or Custom Builds?
Visit shortrentals.ai for product info and tutorials. Reach our team anytime at hello@shortrentals.ai. Happy hosting! 🚀
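As a reference for the Netlify steps above, the two calls behind Create Netlify Site and Deploy ZIP roughly take the shape below. This is a hedged sketch based on Netlify's public deploy API; the site name, the placeholder values, and the detail of sending the ZIP as the raw request body should all be verified against Netlify's current documentation.

```json
{
  "create_site": {
    "method": "POST",
    "url": "https://api.netlify.com/api/v1/sites",
    "headers": { "Authorization": "Bearer <NETLIFY_PERSONAL_ACCESS_TOKEN>" },
    "body": { "name": "direct-booking-<listing-id>" }
  },
  "deploy_zip": {
    "method": "POST",
    "url": "https://api.netlify.com/api/v1/sites/<site_id>/deploys",
    "headers": {
      "Authorization": "Bearer <NETLIFY_PERSONAL_ACCESS_TOKEN>",
      "Content-Type": "application/zip"
    },
    "body": "<raw ZIP bytes produced by the Create ZIP node>"
  }
}
```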
by rana tamure
Overview
This n8n workflow, named "Keyword Search for Blogs," automates the process of gathering and organizing keyword research data for SEO purposes. It integrates with Google Sheets and Google Drive to manage input and output data, and leverages the DataForSEO API to fetch comprehensive keyword-related information, including related keywords, keyword suggestions, keyword ideas, autocomplete suggestions, subtopics, and SERP (Search Engine Results Page) analysis. The workflow is designed to streamline SEO research by collecting, processing, and storing data in an organized manner for blog content creation.

Workflow Functionality
The workflow performs the following key functions:
- Trigger: Initiated manually via the "When clicking 'Test workflow'" node, allowing users to start the process on demand.
- Input Data Retrieval: Reads primary keywords, location, and language data from a specified Google Sheet ("SEO PRO").
- Spreadsheet Creation: Creates a new Google Sheet with a dynamic title based on the current date (e.g., "YYYY-MM-DD-seo pro") and predefined sheet names for organizing different types of keyword data (e.g., keyword, SERP, Content, related keyword, keyword ideas, suggested keyword, subtopics, autocomplete).
- Google Drive Integration: Moves the newly created spreadsheet to a designated folder ("seo pro") in Google Drive for organized storage.
- API Data Collection:
  - Related Keywords: Fetches related keywords using the DataForSEO API (/v3/dataforseo_labs/google/related_keywords/live), including SERP information and keyword metrics like search volume, CPC, and competition.
  - Keyword Suggestions: Retrieves keyword suggestions via the DataForSEO API (/v3/dataforseo_labs/google/keyword_suggestions/live).
  - Keyword Ideas: Collects keyword ideas using the DataForSEO API (/v3/dataforseo_labs/google/keyword_ideas/live).
  - Autocomplete Suggestions: Gathers Google autocomplete suggestions through the DataForSEO API (/v3/serp/google/autocomplete/live/advanced).
  - Subtopics: Generates subtopics for the primary keyword using the DataForSEO API (/v3/content_generation/generate_sub_topics/live).
  - People Also Ask & Organic Results: Pulls "People Also Ask" questions and organic SERP results via the DataForSEO API (/v3/serp/google/organic/live/advanced).
- Data Processing:
  - Uses Split Out nodes to break down API responses into individual items for processing.
  - Employs Edit Fields nodes to map and format data, extracting relevant fields like keyword, search intent, search volume, CPC, competition, keyword difficulty, and SERP item types.
  - Filters SERP results to separate "People Also Ask" and organic results for targeted processing.
- Data Storage: Appends processed data to multiple tabs in the destination Google Sheet (e.g., "2025-06-08-seo pro"):
  - Master Sheet: Stores comprehensive data including keywords, search intent, related keywords, SERP analysis, and more.
  - Related Keywords: Stores related keyword data with metrics.
  - Suggested Keywords: Stores suggested keyword data.
  - Keyword Ideas: Stores keyword ideas with relevant metrics.
  - Autocomplete: Stores autocomplete suggestions.
  - Subtopics: Stores generated subtopics.
  - Organic Results: Stores organic SERP data with details like domain, URL, title, and description.
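As a reference for the API Data Collection step above, DataForSEO's live endpoints accept a POST body containing an array of task objects. A minimal sketch for the related-keywords call follows; the expression references are placeholders for whatever your "SEO PRO" sheet columns are actually named, and the exact parameter set should be checked against DataForSEO's documentation.

```json
[
  {
    "keyword": "{{ $json.Keyword }}",
    "location_name": "{{ $json.Location }}",
    "language_name": "{{ $json.Language }}",
    "limit": 100,
    "include_serp_info": true
  }
]
```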
Key Features
- Automation: Eliminates manual keyword research by automating data collection and organization.
- Scalability: Processes multiple keywords and their related data in a single workflow run, with a limit of 100 related items per API call.
- Dynamic Organization: Creates and organizes data in a new Google Sheet with a timestamped title, ensuring easy tracking of research over time.
- Comprehensive SEO Insights: Collects diverse SEO metrics (e.g., keyword difficulty, search intent, SERP item types) to inform content strategy.
- Error Handling: Uses filters to ensure only relevant data (e.g., "people_also_ask" or "organic" results) is processed and stored.

Use Case
This workflow is ideal for SEO professionals, content creators, and digital marketers who need to perform in-depth keyword research for blog content. It provides a structured dataset that can be used to identify high-potential keywords, understand search intent, analyze SERP competition, and generate content ideas, all of which are critical for optimizing blog posts to rank higher on search engines.

Inputs
- Google Sheet ("SEO PRO"): Contains primary keywords, location names, and language names.
- Google Drive Folder: Destination folder ("seo pro") for storing the output spreadsheet.
- DataForSEO API Credentials: Requires HTTP Basic Authentication credentials for accessing DataForSEO API endpoints.

Outputs
A new Google Sheet titled with the current date (e.g., "2025-06-08-seo pro") containing multiple tabs:
- Master Sheet: Aggregated data for all keyword types.
- Related Keywords: Detailed metrics for related keywords.
- Suggested Keywords: Suggested keywords with metrics.
- Keyword Ideas: Keyword ideas with metrics.
- Autocomplete: Google autocomplete suggestions.
- Subtopics: Generated subtopics for content planning.
- Organic Results: Organic SERP data including domains, URLs, titles, and descriptions.

Benefits
- Time-Saving: Automates repetitive tasks, reducing manual effort in keyword research.
- Organized Data: Stores all data in a structured Google Sheet for easy access and analysis.
- Actionable Insights: Provides detailed SEO metrics to guide content creation and optimization strategies.
- Scalable and Reusable: Can be reused for different keywords by updating the input Google Sheet.

Technical Details
- Nodes: Utilizes n8n nodes including manualTrigger, googleSheets, googleDrive, httpRequest, splitOut, set, and filter.
- API Integration: Leverages the DataForSEO API for real-time keyword and SERP data.
- Credentials: Requires Google Sheets OAuth2 and Google Drive OAuth2 credentials, along with DataForSEO HTTP Basic Authentication.
- Data Mapping: Uses set nodes to map API response fields to desired output formats, ensuring compatibility with Google Sheets.

Potential Enhancements
- Add error handling for API failures or invalid inputs.
- Include additional DataForSEO API endpoints for more granular data (e.g., competitor analysis).
- Implement deduplication logic to avoid redundant keyword entries.
- Add a scheduling node to run the workflow automatically at regular intervals.

This workflow is a powerful tool for SEO-driven content planning, providing a robust foundation for creating optimized blog content.
by Anchor
Find Company LinkedIn URLs Directly in Google Sheets

This n8n template shows how to populate a Google Spreadsheet with LinkedIn company URLs automatically using the Apify LinkedIn Company URL Finder actor from Anchor. It creates a new sheet with the matched LinkedIn URLs. You can use it to speed up lead research, keep CRM records consistent, or prep outreach lists, all directly inside n8n.

Who is this for
- Sales Teams: Map accounts to their official LinkedIn pages fast.
- Recruiters: Locate company pages before sourcing.
- Growth Marketers: Clean and enrich account lists at scale.
- Researchers: Track competitors and market segments.
- CRM Builders: Normalize company records with an authoritative URL.
- Lead-Gen Agencies: Deliver verified company URLs at volume.

How it works
1. Write a list of company names in Google Sheets (one per row).
2. The Apify node resolves each name to its LinkedIn company page.
3. The results are stored in a new Google Sheet.

How to use
- In Google Sheets: Create a Google Sheet, rename the sheet "companies", and add all the company names you want to resolve (one per row).
- In this workflow: Open "Set google sheet URL & original sheet name" and replace the example Google Sheet URL and the name of the sheet where your company names are.
- In the n8n credentials: Connect your Google Sheets account with read and write privileges, and connect your Apify account.
- In Apify: Sign up for this Apify Actor.

Requirements
- Apify account with access to LinkedIn Company URL Finder.
- A list of company names to process.

Need help? Open an issue directly on Apify! Average answer time is under 24 hours.

Happy URL Finding!
by Sk developer
Threads Video Downloader & Google Drive Logger

Automate downloading Threads videos from URLs, upload them to Google Drive, and log results in Google Sheets using n8n.

API Source: Threads Downloader on RapidAPI

Workflow Explanation

| Node | Explanation |
| --- | --- |
| On form submission | Triggers the workflow when a user submits a Threads URL via a form. |
| Fetch Threads Video Data | Sends the submitted URL to the Threads Downloader API to get video info. |
| Check If Video Exists | Checks if the API returned a valid downloadable video URL. |
| Download Threads Video File | Downloads the video from the API-provided URL. |
| Upload Video to Google Drive | Uploads the downloaded video to a designated Google Drive folder. |
| Set Google Drive Sharing Permissions | Sets sharing permissions so the uploaded video is accessible via a link. |
| Log Success to Google Sheets | Records the original URL and Google Drive link in Google Sheets for successful downloads. |
| Wait Before Logging Failure | Adds a pause before logging failed downloads to avoid timing issues. |
| Log Failed Download to Google Sheets | Logs URLs with "N/A" for videos that failed to download. |

How to Obtain a RapidAPI Key
1. Go to the Threads Downloader API on RapidAPI.
2. Sign up or log in to RapidAPI.
3. Subscribe to the API (free or paid plan).
4. Copy the X-RapidAPI-Key from your dashboard and paste it into the n8n HTTP Request node (see the request sketch at the end of this template).

✅ Note: Keep your API key private.

How to Configure Google Drive & Google Sheets

Google Drive
1. Go to Google Drive and create a folder for videos.
2. In n8n, create Google Drive OAuth2 credentials and connect your account.
3. Configure the Upload Video node to target your folder.

Google Sheets
1. Create a spreadsheet with columns: URL | Drive_URL.
2. Create Google API credentials in n8n (service account or OAuth2).
3. Map the nodes to log successful or failed downloads.

Google Sheet Column Table Example

| URL | Drive_URL |
| --- | --- |
| https://www.threads.net/p/abc123 | https://drive.google.com/file/d/xyz/view |
| https://www.threads.net/p/def456 | N/A |

Use Case & Benefits
- **Use Case:** Automate downloading Threads videos for marketing, content archiving, or research.
- **Benefits:**
  - Saves time with automated downloads.
  - Centralized storage in Google Drive.
  - Keeps a clear log in Google Sheets.
  - Works with multiple Threads URLs without manual effort.
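For the "Fetch Threads Video Data" node, the HTTP Request configuration roughly follows the usual RapidAPI pattern sketched below. The X-RapidAPI-Key and X-RapidAPI-Host headers are standard for RapidAPI calls; the host value, endpoint path, and query parameter name are placeholders, so check the exact values on the API's RapidAPI page.

```json
{
  "method": "GET",
  "url": "https://<rapidapi-host-from-the-api-page>/<endpoint-path>",
  "qs": { "url": "{{ $json['Threads URL'] }}" },
  "headers": {
    "X-RapidAPI-Key": "YOUR_RAPIDAPI_KEY",
    "X-RapidAPI-Host": "<rapidapi-host-from-the-api-page>"
  }
}
```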
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

High-Volume PDF to Markdown Conversion

Convert multiple PDF documents to clean, structured Markdown format in bulk. Perfect for documentation teams, content managers, and anyone needing to process large volumes of PDFs.

Key Features:
- Process PDFs from multiple sources (URLs, Google Drive, Dropbox)
- Intelligent LLM-based parsing for complex layouts
- Preserve formatting, tables, and structure
- Export to various destinations

Workflow Components:
- Input Sources: Multiple file sources supported
- Batch Processing: Handle hundreds of PDFs efficiently
- Smart Parsing: Auto-detect when LLM parsing is needed
- Quality Check: Validate conversion results
- Export Options: Save to cloud storage or database

Ideal For:
- Converting technical documentation
- Migrating legacy PDF content
- Building searchable knowledge bases
by James Li
Summary
Onfleet is a last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template listens to an Onfleet event and communicates via a Discord message. You can easily streamline this with your Discord servers and users.

Configurations
- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the Discord node with your Discord server webhook URL and add your own expressions to the Text field (see the payload sketch below).
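Under the hood, a Discord webhook accepts a simple JSON payload. The sketch below shows the shape the Discord node ultimately sends; content and username are standard Discord webhook fields, while the expression is only an illustrative way to dump the incoming Onfleet event, since the exact fields available depend on the event you subscribe to.

```json
{
  "username": "Onfleet Bot",
  "content": "New Onfleet event: {{ JSON.stringify($json) }}"
}
```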
by Jan Oberhauser
This workflow allows creating a new Asana task via bash-dash.

Example usage:
- `asana My new task`

Example bash-dash config:
- `commands[asana]="http://localhost:5678/webhook/asana"`
by James Li
Summary
Onfleet is a last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template listens to a Google Drive update event and creates an Onfleet delivery task. You can easily change which Onfleet entity to interact with.

Configurations
- Connect to Google Drive with your own Google credentials.
- Specify the Poll Times and File URL or ID to your own preference. The poll time determines how often the check runs, while the file URL/ID specifies which file to monitor.
- Update the Onfleet node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.