by Daniel
## Apollo Lead Scraper to Airtable CRM

Automate your lead generation by scraping targeted prospects from Apollo.io, enriching them with contact details, and seamlessly syncing to Airtable for organized outreach, all without manual data entry.

### What It Does

This workflow pulls search URLs from Airtable, uses Apify to scrape Apollo leads (up to 50k), enriches them with emails and LinkedIn profiles, removes duplicates, filters valid entries, and categorizes contacts into Airtable tables based on email availability for efficient CRM management.

### Key Features

- **Apify Apollo Scraper** - Extracts up to 50k leads with personal/work emails
- **Smart Deduplication** - Removes duplicates based on key fields like email and name
- **Email Categorization** - Separates contacts with/without emails into dedicated tables
- **Field Mapping** - Customizable data transformation for Airtable compatibility
- **Configurable Limits** - Adjust total records and memory for optimal performance
- **Error Handling** - Built-in troubleshooting for common issues like invalid URLs

### Perfect For

- **Sales Teams**: Build targeted B2B pipelines for email campaigns
- **Recruiters**: Source candidates by job title, location, and skills
- **Marketers**: Create datasets for market research and analysis
- **Agencies**: Automate client prospecting from custom filters
- **Researchers**: Collect professional data for industry studies
- **CRM Managers**: Maintain clean, enriched contact databases

### Technical Highlights

Leveraging n8n's Airtable and Apify integrations, this workflow showcases:

- Dynamic data fetching from Airtable tables
- Actor-based web scraping with custom parameters
- Conditional branching for data routing
- Efficient data processing with Set, Filter, and If nodes
- Scalable design for large datasets with memory optimization

Ideal for automating lead workflows and scaling prospecting efforts. No advanced coding needed; just set up credentials and run!
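The "Smart Deduplication" step above can be sketched as a small n8n Code-node-style function. This is a minimal illustration, not the template's actual node configuration, and the field names (`email`, `firstName`, `lastName`) are assumptions; match them to whatever fields your Apify Apollo actor actually returns.

```javascript
// Drop leads that share the same email (case-insensitive), or, when email
// is missing, the same first/last name combination.
function dedupeLeads(leads) {
  const seen = new Set();
  return leads.filter((lead) => {
    const key = lead.email
      ? lead.email.toLowerCase()
      : `${lead.firstName} ${lead.lastName}`.toLowerCase();
    if (seen.has(key)) return false; // already kept an equivalent lead
    seen.add(key);
    return true;
  });
}

const unique = dedupeLeads([
  { firstName: 'Ada', lastName: 'Lovelace', email: 'ada@example.com' },
  { firstName: 'Ada', lastName: 'Lovelace', email: 'ADA@example.com' }, // duplicate email
  { firstName: 'Grace', lastName: 'Hopper', email: null },
]);
console.log(unique.length); // 2
```

The same idea extends to the email-categorization branch: after deduplication, route leads with a non-empty `email` to one Airtable table and the rest to another.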
by Guillaume Duvernay
This advanced template automates the creation of a Lookio Assistant populated with a specific corpus of text. Instead of uploading files one by one, you can simply upload a CSV containing multiple text resources. The workflow iterates through the rows, converts them to text files, uploads them to Lookio, and finally creates a new Assistant with strict access limited to these specific resources.

### Who is this for?

- **Knowledge Managers** who want to spin up specific "Topic Bots" (e.g., an "RFP Bot" or "HR Policy Bot") based on a spreadsheet of Q&As or articles.
- **Product Teams** looking to bulk-import release notes or documentation to test RAG (Retrieval-Augmented Generation) responses.
- **Automation Builders** who need a reference implementation for looping through CSV rows, converting text strings to binary files, and aggregating IDs for a final API call.

### What is Lookio, the RAG platform for knowledge retrieval?

Lookio is an API-first platform that solves the complexity of building RAG (Retrieval-Augmented Generation) systems. While tools like NotebookLM are great for individuals, Lookio is built for business automation. It handles the difficult backend work (file parsing, chunking, vector storage, and semantic retrieval) so you can focus on the workflow.

- **API-First:** Unlike consumer AI tools, Lookio allows you to integrate your knowledge base directly into n8n, Slack, or internal apps.
- **No "DIY" Headache:** You don't need to manage a vector database or write chunking algorithms.
- **Free to Start:** You can sign up without a credit card and get 100 free credits to test this workflow immediately.

### What problem does this workflow solve?

- **Bulk Ingestion:** Converts a CSV export (with columns for Title and Content) into individual text resources in Lookio.
- **Automated Provisioning:** Eliminates the manual work of creating an Assistant and selecting resources one by one.
- **Dynamic Configuration:** Allows the user to define the Assistant's specific name, context (system prompt), and output guidelines directly via the upload form.

### How it works

1. **Form Trigger:** The user uploads a CSV, specifies the Assistant details (Name, Context, Guidelines), and maps the CSV column names.
2. **Parsing:** The workflow converts the CSV to JSON and uses the Convert to File node to transform the raw text content of each row into a binary .txt file.
3. **Loop & Upload:** It loops through the items, uploading them via the Lookio Add Resource API (/webhook/add-resource), and collects the returned Resource IDs.
4. **Creation:** Once all files are processed, it aggregates the IDs and calls the Create Assistant API (/webhook/create-assistant), setting the resources_access_type to "Limited selection" so the bot relies only on the uploaded data.
5. **Completion:** Returns the new Assistant ID and a success message to the user.

### CSV File Requirements

Your CSV file should look like this (headers can be named anything, as you will map them in the form):

| Title | Content |
| --- | --- |
| How to reset password | Go to settings, click security, and press reset... |
| Vacation Policy | Employees are entitled to 20 days of PTO... |

### How to set up

1. **Lookio Credentials:** Get your API Key and Workspace ID from your Lookio API Settings (free to sign up).
2. **Configure HTTP Nodes:** Open the Import resource to Lookio node and update the headers (api_key) and body (workspace_id). Do the same in the Create Lookio assistant node.
3. **Form Configuration (Optional):** The form is pre-configured to ask for column mapping, but you can hardcode these values in the "Convert to txt" node if you always use the same CSV structure.
4. **Activate & Share:** Activate the workflow and use the Production URL from the Form Trigger to let your team bulk-create assistants.
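The Parsing and Creation steps above can be sketched in plain JavaScript. This is an illustrative sketch only: the payload field names (`resource_ids`, `resources_access_type`) are assumptions inferred from the description, not confirmed Lookio API fields, and the real workflow performs these steps with n8n's Convert to File and HTTP Request nodes.

```javascript
// Turn one CSV row into an in-memory .txt "file", mirroring the
// Convert to File node in the Parsing step.
function rowToTxtFile(row, titleColumn, contentColumn) {
  return {
    fileName: `${row[titleColumn]}.txt`,
    data: Buffer.from(String(row[contentColumn]), 'utf8'),
  };
}

// After the upload loop, aggregate the Resource IDs returned by
// /webhook/add-resource into the /webhook/create-assistant request body.
function buildCreateAssistantBody(name, context, guidelines, resourceIds) {
  return {
    name,
    context,
    guidelines,
    resource_ids: resourceIds,              // assumed field name
    resources_access_type: 'Limited selection', // bot answers only from these resources
  };
}

const file = rowToTxtFile(
  { Title: 'Vacation Policy', Content: 'Employees are entitled to 20 days of PTO...' },
  'Title',
  'Content'
);
const body = buildCreateAssistantBody(
  'HR Policy Bot',
  'You answer HR policy questions.',
  'Be concise.',
  ['res_1', 'res_2']
);
```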
by Juan Cristóbal Andrews
## Who's it for

This template is designed for filmmakers, content creators, social media managers, and AI developers who want to harness OpenAI's Sora 2 for creating physically accurate, cinematic videos with synchronized audio. Whether you're generating realistic scenes from text prompts or reference images with proper physics simulation, creating multi-shot sequences with persistent world state, or producing content with integrated dialogue and sound effects, this workflow streamlines the entire video generation process from prompt to preview and Google Drive upload.

## What it does

This workflow:

- Accepts a text prompt, optional reference image, OpenAI API key, and generation settings via form submission
- Validates the reference image format (jpg, png, or webp only)
- Sends the prompt and optional reference to the Sora 2 API endpoint to request video generation
- Continuously polls the video rendering status (queued → in progress → completed)
- Waits 30 seconds between status checks to avoid rate limiting
- Handles common generation errors with descriptive error messages
- Automatically fetches the generated video once rendering is complete
- Downloads the final .mp4 file
- Uploads the resulting video to your Google Drive
- Displays the download link and video preview/screenshot upon completion

## How to set up

### 1. Get Your OpenAI API Key

You'll need an OpenAI API key to use this workflow. Here's the general process:

1. Create an OpenAI account at https://platform.openai.com
2. Set up billing - add payment information to enable API access
3. Generate your API key through the API keys section in your OpenAI dashboard
4. Copy and save your key immediately - you won't be able to view it again!

⚠️ Important: Your API key will start with sk- and should be kept secure. If you lose it, you'll need to generate a new one.

### 2. Connect Google Drive

- Add your Google Drive OAuth2 credential to n8n
- Grant the necessary permissions for file uploads

### 3. Import and Run

1. Import this workflow into n8n
2. Execute the workflow via the form trigger
3. Enter your API key, prompt, and desired settings in the form
4. Optionally upload a **reference image** to guide the video generation

All generation settings are configured through the form, including:

- **Model**: Choose between sora-2 or sora-2-pro
- **Duration**: 4, 8, or 12 seconds
- **Resolution**: Portrait or Landscape options
- **Reference Image** (optional): Upload jpg, png, or webp matching your target resolution

## ⚠️ Sora 2 Pricing

The workflow supports two Sora models with the following API pricing:

- Sora 2 - $0.10/sec
  - Portrait: 720x1280
  - Landscape: 1280x720
- Sora 2 Pro - $0.30/sec (720p) or $0.50/sec (1080p)
  - 720p - Portrait: 720x1280, Landscape: 1280x720
  - 1080p - Portrait: 1024x1792, Landscape: 1792x1024

Duration options: 4, 8, or 12 seconds (default: 4).

Example costs:

- 4-second video with Sora 2: $0.40
- 12-second video with Sora 2 Pro (1080p): $6.00

## Requirements

- Valid OpenAI API key (starting with sk-)
- Google Drive OAuth2 credential connected to n8n
- **Reference image** (optional): jpg, png, or webp format; should match your selected video resolution for best results

## How to customize the workflow

### Modify generation parameters

Edit the form fields to include additional options:

- Style presets (cinematic, anime, realistic)
- Camera movement preferences
- Audio generation options
- Image reference strength/influence settings

It's recommended to visit the official documentation on prompting for a detailed Sora 2 guide.

### Adjust polling behavior

- Change the Wait node duration (default: 30 seconds)
- Modify the Check Status polling frequency based on typical generation times
- Add timeout logic for very long renders

### Customize error handling

- Extend error messages for additional failure scenarios
- Add retry logic for transient errors
- Configure notification webhooks for error alerts

### Alternative upload destinations

Replace the Google Drive node with:

- Dropbox
- AWS S3
- Azure Blob Storage
- YouTube direct upload
- Slack/Discord notification with video attachment

### Enhance result display

- Customize the completion form to show additional metadata
- Add video thumbnail generation
- Include generation parameters in the results page
- Enable direct playback in the completion form

## Workflow Architecture

Step-by-step flow:

1. Form Submission → User inputs text prompt, optional reference image, API key, and generation settings
2. Create Video → Sends request to Sora 2 API endpoint with all parameters and reference image (if provided)
3. Check Status → Polls the API for video generation status
4. Status Decision → Routes based on status:
   - Queued → Wait 30 seconds → Check Status again
   - In Progress → Wait 30 seconds → Check Status again
   - Completed → Proceed to download
   - Failed → Display descriptive error message
5. Wait → 30-second delay between status checks
6. Download → Fetches the generated video file
7. Google Drive → Uploads .mp4 to your Drive
8. Completion Form → Displays download link and video preview/screenshot

If you have any questions, just contact me on LinkedIn.

Ready to create cinematic AI videos with physics-accurate motion, synchronized audio, and optional image references? Import this workflow and start generating! 🎬✨
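Before running a batch of generations, it can help to estimate cost from the pricing table above. Here's a small sketch using the listed per-second rates; this is a rough estimate for planning, not an official OpenAI billing calculation.

```javascript
// Per-second rates from the pricing table above.
const RATES = {
  'sora-2': 0.10,           // $/sec (720p only)
  'sora-2-pro-720p': 0.30,  // $/sec
  'sora-2-pro-1080p': 0.50, // $/sec
};

function estimateCost(model, durationSeconds) {
  if (![4, 8, 12].includes(durationSeconds)) {
    throw new Error('Duration must be 4, 8, or 12 seconds');
  }
  const rate = RATES[model];
  if (rate === undefined) throw new Error(`Unknown model: ${model}`);
  return rate * durationSeconds;
}

console.log(estimateCost('sora-2', 4).toFixed(2));            // "0.40", matching the example above
console.log(estimateCost('sora-2-pro-1080p', 12).toFixed(2)); // "6.00"
```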
by Wevanta Infotech
# LinkedIn Auto-Post Agent for n8n

🚀 Automate your LinkedIn presence with AI-powered content generation

This n8n workflow automatically generates and publishes engaging LinkedIn posts using OpenAI's GPT models. Perfect for professionals and businesses who want to maintain an active LinkedIn presence without manual effort.

## ✨ Features

- 🤖 **AI-Powered Content**: Generate professional LinkedIn posts using OpenAI GPT-3.5-turbo or GPT-4
- ⏰ **Automated Scheduling**: Post content automatically on weekdays at 9 AM (customizable)
- 🎯 **Manual Trigger**: Generate and post content on demand
- 🔒 **Secure**: All credentials stored securely in n8n's encrypted credential system
- 📊 **Error Handling**: Built-in retry logic and error notifications
- 🎨 **Customizable**: Easily modify prompts, scheduling, and content parameters

## 🏗️ Architecture

This workflow uses a streamlined 3-node architecture:

Schedule/Manual Trigger → OpenAI Content Generation → LinkedIn Post

### Node Details

- **Schedule Trigger**: Automatically triggers the workflow (default: weekdays at 9 AM)
- **Manual Trigger**: Allows on-demand content generation
- **OpenAI Content Generation**: Creates LinkedIn-optimized content using AI
- **LinkedIn Post**: Publishes the generated content to LinkedIn

## 📋 Prerequisites

- n8n instance (self-hosted or cloud)
- OpenAI API account and API key
- LinkedIn account with API access
- Basic familiarity with n8n workflows

## 🚀 Quick Start

### 1. Import the Workflow

1. Download the linkedin-auto-post-agent.json file
2. In your n8n instance, go to Workflows → Import from File
3. Select the downloaded JSON file
4. Click Import

### 2. Set Up Credentials

**OpenAI API Credentials**

1. Go to Credentials in your n8n instance
2. Click Create New Credential
3. Select OpenAI
4. Enter your OpenAI API key
5. Name it "OpenAI API" and save

**LinkedIn OAuth2 Credentials**

1. Create a LinkedIn App at the LinkedIn Developer Portal
2. Configure OAuth 2.0 settings:
   - Redirect URL: https://your-n8n-instance.com/rest/oauth2-credential/callback
   - Scopes: r_liteprofile, w_member_social
3. In n8n, create new LinkedIn OAuth2 credentials
4. Enter your LinkedIn App's Client ID and Client Secret
5. Complete the OAuth authorization flow

### 3. Configure the Workflow

1. Open the imported workflow
2. Click on the OpenAI Content Generation node and select your OpenAI credentials
3. Customize the content prompt if desired
4. Click on the LinkedIn Post node and select your LinkedIn OAuth2 credentials
5. Save the workflow

### 4. Test the Workflow

1. Click the Manual Trigger node
2. Click Execute Node to test content generation
3. Verify the generated content in the LinkedIn node output
4. Check your LinkedIn profile to confirm the post was published

### 5. Activate Automated Posting

Click the Active toggle in the top-right corner. The workflow will now run automatically based on the schedule.

## ⚙️ Configuration Options

### Scheduling

The default schedule posts content on weekdays at 9 AM. To modify, click the Schedule Trigger node and edit the cron expression:

- 0 9 * * 1-5: Weekdays at 9 AM (default)
- 0 12 * * *: Daily at noon
- 0 9 * * 1,3,5: Monday, Wednesday, Friday at 9 AM

### Content Customization

Modify the OpenAI prompt to change the content style:

1. Click the OpenAI Content Generation node
2. Edit the System Message to adjust tone and style
3. Modify the User Message to change the topic focus

### Example Prompts

- **Professional Development Focus**: Create a LinkedIn post about professional growth, skill development, or career advancement. Keep it under 280 characters and include 2-3 relevant hashtags.
- **Industry Insights**: Generate a LinkedIn post sharing an industry insight or trend in technology. Make it thought-provoking and include relevant hashtags.
- **Motivational Content**: Write an inspiring LinkedIn post about overcoming challenges or achieving goals. Keep it positive and engaging with appropriate hashtags.

### Model Selection

Choose between OpenAI models based on your needs:

- **gpt-3.5-turbo**: Cost-effective, good quality
- **gpt-4**: Higher quality, more expensive
- **gpt-4-turbo**: Latest model with improved performance

## 🔧 Advanced Configuration

### Error Handling

The workflow includes built-in error handling:

- **Retry Logic**: 3 attempts with 1-second delays
- **Continue on Fail**: Workflow continues even if individual nodes fail
- **Error Notifications**: Optional email/Slack notifications on failures

### Content Review Workflow (Optional)

To add manual content review before posting:

1. Add a Wait node between the OpenAI and LinkedIn nodes
2. Configure a webhook trigger for approval
3. Add conditional logic based on the approval status

### Rate Limiting

To respect API limits:

- OpenAI: 3 requests per minute (default)
- LinkedIn: 100 posts per day per user
- Adjust scheduling frequency accordingly

## 📊 Monitoring and Analytics

### Execution History

1. Go to Executions in your n8n instance
2. Filter by workflow name to see all runs
3. Click on individual executions to see detailed logs

### Key Metrics to Monitor

- **Success Rate**: Percentage of successful executions
- **Content Quality**: Review generated posts periodically
- **API Usage**: Monitor OpenAI token consumption
- **LinkedIn Engagement**: Track post performance on LinkedIn

## 🛠️ Troubleshooting

### Common Issues

**OpenAI Node Fails**

- Verify the API key is correct and has sufficient credits
- Check if you've exceeded rate limits
- Ensure the model name is spelled correctly

**LinkedIn Node Fails**

- Verify the OAuth2 credentials are properly configured
- Check if the LinkedIn app has the required permissions
- Ensure the content doesn't violate LinkedIn's posting policies

**Workflow Doesn't Trigger**

- Confirm the workflow is marked as "Active"
- Verify the cron expression syntax
- Check n8n's timezone settings

### Debug Mode

1. Enable Save Manual Executions in the workflow settings
2. Run the workflow manually to see detailed execution data
3. Check each node's input/output data

## 🔒 Security Best Practices

- Store all API keys in n8n's encrypted credential system
- Regularly rotate API keys (monthly recommended)
- Use environment variables for sensitive configuration
- Enable execution logging for audit trails
- Monitor for unusual API usage patterns

## 📈 Optimization Tips

### Content Quality

- Review and refine prompts based on output quality
- A/B test different prompt variations
- Monitor LinkedIn engagement metrics
- Adjust posting frequency based on audience response

### Cost Optimization

- Use gpt-3.5-turbo for cost-effective content generation
- Set appropriate token limits (200 tokens recommended)
- Monitor OpenAI usage in your dashboard

### Performance

- Keep workflows simple with minimal nodes
- Use appropriate retry settings
- Monitor execution times and optimize if needed

## 🤝 Contributing

We welcome contributions to improve this workflow:

1. Fork the repository
2. Create a feature branch
3. Make your improvements
4. Submit a pull request

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🆘 Support

If you encounter issues or have questions:

1. Check the troubleshooting section above
2. Review n8n's official documentation
3. Join the n8n community forum
4. Create an issue in this repository

## 🔗 Useful Links

- n8n Documentation
- OpenAI API Documentation
- LinkedIn API Documentation
- n8n Community Forum

Happy Automating! 🚀

This workflow helps you maintain a consistent LinkedIn presence while focusing on what matters most: your business and professional growth.
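The retry behavior described above (3 attempts with 1-second delays) can be sketched as a small helper, for example in an n8n Code node. This is an illustrative sketch only; in the actual workflow, retries are configured through the node's own retry settings, and `postToLinkedIn` below is a hypothetical placeholder, not a real API call.

```javascript
// Retry an async operation up to `attempts` times, pausing `delayMs`
// between failed attempts, and rethrow the last error if all fail.
async function withRetries(fn, attempts = 3, delayMs = 1000) {
  let lastError;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastError; // all attempts exhausted
}
```

Usage would look like `await withRetries(() => postToLinkedIn(content));`, where transient failures (rate limits, timeouts) get two more chances before the workflow's error branch takes over.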
by Guillaume Duvernay
This template processes a CSV of questions and returns an enriched CSV with RAG-based answers produced by your Lookio assistant. Upload a CSV that contains a column named Query, and the workflow will loop through every row, call the Lookio API, and append a Response column containing the assistant's answer. It's ideal for batch tasks like drafting RFP responses, pre-filling support replies, generating knowledge-checked summaries, or validating large lists of product/customer questions against your internal documentation.

### Who is this for?

- **Knowledge managers & technical writers:** Produce draft answers to large question sets using your company docs.
- **Sales & proposal teams:** Auto-generate RFP answer drafts informed by internal docs.
- **Support & operations teams:** Bulk-enrich FAQs or support ticket templates with authoritative responses.
- **Automation builders:** Integrate Lookio-powered retrieval into bulk data pipelines.

### What problem does this solve?

- **Automates bulk queries:** Eliminates the manual process of running many individual lookups.
- **Ensures answers are grounded:** Responses come from your uploaded documents via **Lookio**, reducing hallucinations.
- **Produces ready-to-use output:** Delivers an enriched CSV with a new **Response** column for downstream use.
- **Simple UX:** Users only need to upload a CSV with a **Query** column and download the resulting file.

### How it works

1. **Form submission:** The user uploads a CSV via the Form Trigger.
2. **Extract & validate:** Extract all rows reads the CSV, and Aggregate rows checks for a Query column.
3. **Per-row loop:** Split Out and Loop Over Queries iterate over the rows; Isolate the Query column normalizes the data.
4. **Call Lookio:** Lookio API call posts each query to your assistant and returns the answer.
5. **Build output:** Prepare output appends the Response values, and Generate enriched CSV creates the downloadable file delivered by Form ending and file download.

### Why use Lookio for high-quality RAG?

While building a native RAG pipeline in n8n offers granular control, achieving consistently high-quality and reliable results requires significant effort in data processing, chunking strategy, and retrieval logic optimization. Lookio is designed to address these challenges by providing a managed RAG service accessible via a simple API. It handles the entire backend pipeline, from processing various document formats to employing advanced retrieval techniques, allowing you to integrate a production-ready knowledge source into your workflows. This approach lets you focus on building your automation in n8n, rather than managing the complexities of a RAG infrastructure.

### How to set up

1. **Create a Lookio assistant:** Sign up at https://www.lookio.app/, upload documents, and create an assistant.
2. **Get credentials:** Copy your Lookio API Key and Assistant ID.
3. **Configure the workflow nodes:** In the Lookio API call HTTP Request node, replace the api_key header value with your Lookio API Key and update assistant_id with your Assistant ID (replace placeholders like <your-lookio-api-key> and <your-assistant-id>). Ensure the Form Trigger is enabled and accepts a .csv file.
4. **CSV format:** Ensure the input CSV has a column named Query (case-sensitive as configured).
5. **Activate the workflow:** Run a test upload and download the enriched CSV.

### Requirements

- An n8n instance with the ability to host Forms and run workflows
- A Lookio account (API Key) and an Assistant ID

### How to take it further

- **Add rate limiting / retries:** Insert error handling and delay nodes to respect API limits for large batches.
- **Improve the speed:** You could drastically reduce processing time by parallelizing the queries instead of running them one after another in the loop. For that, you could use HTTP Request nodes that trigger a sub-workflow.
- **Store results:** Add an **Airtable** or **Google Sheets** node to archive questions and responses for audit and reuse.
- **Post-process answers:** Add an LLM node to summarize or standardize responses, or to add confidence flags.
- **Trigger variations:** Replace the **Form Trigger** with a **Google Drive** or **Airtable** trigger to process CSVs automatically from a folder or table.
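The "Build output" step above (append a Response column, then serialize back to CSV) can be sketched as follows. This is a minimal hand-rolled illustration; the actual workflow uses n8n's built-in Extract from File and Convert to File nodes rather than custom code.

```javascript
// Pair each parsed row with its Lookio answer, adding a Response column.
function enrichRows(rows, answers) {
  return rows.map((row, i) => ({ ...row, Response: answers[i] }));
}

// Minimal CSV writer: quote fields containing commas, quotes, or newlines.
function toCsv(rows) {
  const headers = Object.keys(rows[0]);
  const escape = (v) =>
    /[",\n]/.test(String(v)) ? `"${String(v).replace(/"/g, '""')}"` : String(v);
  const lines = [headers.join(',')];
  for (const row of rows) lines.push(headers.map((h) => escape(row[h])).join(','));
  return lines.join('\n');
}

const enriched = enrichRows(
  [{ Query: 'What is our refund policy?' }, { Query: 'How do I reset my password?' }],
  ['Refunds are accepted within 30 days.', 'Go to settings, click security, and press reset.']
);
console.log(toCsv(enriched));
```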
by franck fambou
# Extract and Convert PDF Documents to Markdown with the LlamaIndex Cloud API

## Overview

This workflow automatically converts PDF documents to Markdown format using the LlamaIndex Cloud API. LlamaIndex is a powerful data framework that specializes in connecting large language models with external data sources, offering advanced document processing capabilities with high accuracy and intelligent content extraction.

## How It Works

Automatic processing pipeline:

1. **Form Submission Trigger**: The workflow initiates when a user submits a document through a web form
2. **Document Upload**: PDF files are automatically uploaded to LlamaIndex Cloud for processing
3. **Smart Status Monitoring**: The system continuously checks the processing status and adapts the workflow based on the results
4. **Conditional Content Extraction**: Upon successful processing, the extracted Markdown content is retrieved for further use

## Setup Instructions

Estimated setup time: 5-10 minutes

### Prerequisites

- LlamaIndex Cloud account and API credentials
- Access to an n8n instance (cloud or self-hosted)

### Configuration Steps

1. **Configure the Form Trigger**
   - Set up the webhook form trigger with file upload capability
   - Add required fields to capture document metadata and processing preferences
2. **Set up the LlamaIndex API Connection**
   - Obtain your API key from the LlamaIndex Cloud dashboard
   - Configure the HTTP Request node with your credentials and endpoint URL
   - Set the proper authentication headers and request parameters
3. **Configure Status Verification**
   - Define polling intervals for status checks (recommended: 10-30 seconds)
   - Set maximum retry attempts to avoid infinite loops
   - Configure success/failure criteria based on API response codes
4. **Set up the Content Extractor**
   - Configure output format preferences (Markdown styling, headers, etc.)
   - Set up error handling for failed extractions
   - Define content storage or forwarding destinations

## Use Cases

- **Document Digitization**: Convert legacy PDF documents to editable Markdown format
- **Content Management**: Prepare documents for CMS integration or static site generators
- **Knowledge Base Creation**: Transform PDF manuals and guides into searchable Markdown content
- **Academic Research**: Convert research papers and publications for analysis and citation
- **Technical Documentation**: Process PDF specifications and manuals for developer documentation

## Key Features

- Fully automated PDF-to-Markdown conversion
- Intelligent content structure preservation
- Error handling and retry mechanisms
- Status monitoring with real-time feedback
- Scalable processing for batch operations

## Requirements

- LlamaIndex Cloud API key
- n8n instance (v0.200.0 or higher recommended)
- Internet connectivity for API access

## Support

For issues related to the LlamaIndex API, consult the official LlamaIndex documentation. For n8n-specific questions, refer to the n8n community forum.
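The Status Verification step above can be sketched as a bounded polling loop. This is an illustrative sketch only: `checkStatus` is a hypothetical stand-in for the HTTP Request node that queries LlamaIndex Cloud, and the status strings used here are assumptions that may differ from the actual API response values.

```javascript
// Poll a status-check function until the job succeeds or fails, with a
// maximum number of attempts to avoid the infinite loops mentioned above.
async function pollUntilDone(checkStatus, { intervalMs = 15000, maxAttempts = 20 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status === 'SUCCESS') return 'SUCCESS'; // proceed to content extraction
    if (status === 'ERROR') throw new Error('Processing failed');
    // Still pending: wait before the next check (10-30s recommended above).
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`Gave up after ${maxAttempts} attempts`);
}
```

In the actual workflow, the same logic is expressed with a Wait node and an If node feeding back into the status-check HTTP Request node.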