by Guillaume Duvernay
This advanced template automates the creation of a Lookio Assistant populated with a specific corpus of text. Instead of uploading files one by one, you can simply upload a CSV containing multiple text resources. The workflow iterates through the rows, converts them to text files, uploads them to Lookio, and finally creates a new Assistant with strict access limited to these specific resources.

**Who is this for?**
- **Knowledge Managers** who want to spin up specific "Topic Bots" (e.g., an "RFP Bot" or "HR Policy Bot") based on a spreadsheet of Q&As or articles.
- **Product Teams** looking to bulk-import release notes or documentation to test RAG (Retrieval-Augmented Generation) responses.
- **Automation Builders** who need a reference implementation for looping through CSV rows, converting text strings to binary files, and aggregating IDs for a final API call.

**What is the RAG platform Lookio for knowledge retrieval?**
Lookio is an API-first platform that solves the complexity of building RAG (Retrieval-Augmented Generation) systems. While tools like NotebookLM are great for individuals, Lookio is built for business automation. It handles the difficult backend work—file parsing, chunking, vector storage, and semantic retrieval—so you can focus on the workflow.
- **API-first:** Unlike consumer AI tools, Lookio allows you to integrate your knowledge base directly into n8n, Slack, or internal apps.
- **No "DIY" headache:** You don't need to manage a vector database or write chunking algorithms.
- **Free to start:** You can sign up without a credit card and get 100 free credits to test this workflow immediately.

**What problem does this workflow solve?**
- **Bulk ingestion:** Converts a CSV export (with columns for Title and Content) into individual text resources in Lookio.
- **Automated provisioning:** Eliminates the manual work of creating an Assistant and selecting resources one by one.
- **Dynamic configuration:** Allows the user to define the Assistant's specific name, context (system prompt), and output guidelines directly via the upload form.

**How it works**
1. **Form Trigger:** The user uploads a CSV, specifies the Assistant details (Name, Context, Guidelines), and maps the CSV column names.
2. **Parsing:** The workflow converts the CSV to JSON and uses the Convert to File node to transform the raw text content of each row into a binary .txt file.
3. **Loop & Upload:** It loops through the items, uploading them via the Lookio Add Resource API (/webhook/add-resource), and collects the returned Resource IDs.
4. **Creation:** Once all files are processed, it aggregates the IDs and calls the Create Assistant API (/webhook/create-assistant), setting the resources_access_type to "Limited selection" so the bot relies only on the uploaded data.
5. **Completion:** Returns the new Assistant ID and a success message to the user.

**CSV file requirements**
Your CSV file should look like this (headers can be named anything, as you will map them in the form):

| Title | Content |
| --- | --- |
| How to reset password | Go to settings, click security, and press reset... |
| Vacation Policy | Employees are entitled to 20 days of PTO... |

**How to set up Lookio**
1. **Credentials:** Get your API Key and Workspace ID from your Lookio API Settings (free to sign up).
2. **Configure HTTP nodes:** Open the Import resource to Lookio node and update the headers (api_key) and body (workspace_id); do the same in the Create Lookio assistant node.
3. **Form configuration (optional):** The form is pre-configured to ask for column mapping, but you can hardcode these values in the "Convert to txt" node if you always use the same CSV structure.
4. **Activate & share:** Activate the workflow and use the Production URL from the Form Trigger to let your team bulk-create assistants.
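Outside n8n, the Loop & Upload and Creation steps can be sketched in a few lines of Python. The /webhook/add-resource and /webhook/create-assistant paths, the api_key header, and the workspace_id and resources_access_type fields come from the description above; the base URL, the multipart field name, and the response key names (resource_id) are assumptions you should verify against your Lookio API settings.

```python
import csv
import io

LOOKIO_BASE = "https://api.lookio.app/webhook"  # base URL assumed
API_KEY = "<your-lookio-api-key>"
WORKSPACE_ID = "<your-workspace-id>"


def rows_to_txt_files(csv_text, title_col="Title", content_col="Content"):
    """Mirror the Convert to File step: one (filename, bytes) pair per CSV row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(f"{row[title_col]}.txt", row[content_col].encode("utf-8"))
            for row in reader]


def upload_resources(files):
    """Loop & Upload: POST each .txt file to /webhook/add-resource, collect IDs."""
    import requests  # imported lazily so the pure helper has no dependency
    resource_ids = []
    for name, data in files:
        resp = requests.post(
            f"{LOOKIO_BASE}/add-resource",
            headers={"api_key": API_KEY},
            data={"workspace_id": WORKSPACE_ID},
            files={"file": (name, data, "text/plain")},  # field name assumed
        )
        resp.raise_for_status()
        resource_ids.append(resp.json()["resource_id"])  # key name assumed
    return resource_ids


def create_assistant(name, context, guidelines, resource_ids):
    """Creation: restrict the new Assistant to exactly the uploaded resources."""
    import requests
    resp = requests.post(
        f"{LOOKIO_BASE}/create-assistant",
        headers={"api_key": API_KEY},
        json={
            "workspace_id": WORKSPACE_ID,
            "name": name,
            "context": context,
            "guidelines": guidelines,
            "resources_access_type": "Limited selection",
            "resource_ids": resource_ids,
        },
    )
    resp.raise_for_status()
    return resp.json()
```

The same shape applies inside n8n: the Convert to File node does what `rows_to_txt_files` does, and the two HTTP Request nodes correspond to the two POST calls.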
by Juan Cristóbal Andrews
**Who's it for**
This template is designed for filmmakers, content creators, social media managers, and AI developers who want to harness OpenAI's Sora 2 for creating physically accurate, cinematic videos with synchronized audio. Whether you're generating realistic scenes from text prompts or reference images with proper physics simulation, creating multi-shot sequences with persistent world state, or producing content with integrated dialogue and sound effects, this workflow streamlines the entire video generation process from prompt to preview and Google Drive upload.

**What it does**
This workflow:
- Accepts a text prompt, optional reference image, OpenAI API key, and generation settings via form submission
- Validates the reference image format (jpg, png, or webp only)
- Sends the prompt and optional reference to the Sora 2 API endpoint to request video generation
- Continuously polls the video rendering status (queued → in progress → completed)
- Waits 30 seconds between status checks to avoid rate limiting
- Handles common generation errors with descriptive error messages
- Automatically fetches the generated video once rendering is complete
- Downloads the final .mp4 file
- Uploads the resulting video to your Google Drive
- Displays the download link and video preview/screenshot upon completion

**How to set up**
1. **Get your OpenAI API key.** You'll need an OpenAI API key to use this workflow:
   - Create an OpenAI account at https://platform.openai.com
   - Set up billing: add payment information to enable API access
   - Generate your API key through the API keys section in your OpenAI dashboard
   - Copy and save your key immediately - you won't be able to view it again!
   - ⚠️ Important: Your API key will start with sk- and should be kept secure. If you lose it, you'll need to generate a new one.
2. **Connect Google Drive.** Add your Google Drive OAuth2 credential to n8n and grant the necessary permissions for file uploads.
3. **Import and run.** Import this workflow into n8n, execute it via the form trigger, and enter your API key, prompt, and desired settings in the form. Optionally upload a **reference image** to guide the video generation. All generation settings are configured through the form, including:
   - **Model:** choose between sora-2 or sora-2-pro
   - **Duration:** 4, 8, or 12 seconds
   - **Resolution:** portrait or landscape options
   - **Reference image** (optional): jpg, png, or webp matching your target resolution

**⚠️ Sora 2 pricing**
The workflow supports two Sora models with the following API pricing:
- **Sora 2** - $0.10/sec. Portrait: 720x1280; Landscape: 1280x720.
- **Sora 2 Pro** - $0.30/sec (720p) or $0.50/sec (1080p). 720p - Portrait: 720x1280, Landscape: 1280x720. 1080p - Portrait: 1024x1792, Landscape: 1792x1024.

Duration options: 4, 8, or 12 seconds (default: 4). Example costs: a 4-second video with Sora 2 costs $0.40; a 12-second video with Sora 2 Pro (1080p) costs $6.00.

**Requirements**
- Valid OpenAI API key (starting with sk-)
- Google Drive OAuth2 credential connected to n8n
- **Reference image** (optional): jpg, png, or webp format - should match your selected video resolution for best results

**How to customize the workflow**
- **Modify generation parameters:** Edit the form fields to include additional options such as style presets (cinematic, anime, realistic), camera movement preferences, audio generation options, or image reference strength/influence settings. It's recommended to visit the official documentation on prompting for a detailed Sora 2 guide.
- **Adjust polling behavior:** Change the Wait node duration (default: 30 seconds), modify the Check Status polling frequency based on typical generation times, or add timeout logic for very long renders.
- **Customize error handling:** Extend error messages for additional failure scenarios, add retry logic for transient errors, or configure notification webhooks for error alerts.
- **Alternative upload destinations:** Replace the Google Drive node with Dropbox, AWS S3, Azure Blob Storage, YouTube direct upload, or a Slack/Discord notification with the video attached.
- **Enhance result display:** Customize the completion form to show additional metadata, add video thumbnail generation, include generation parameters in the results page, or enable direct playback in the completion form.

**Workflow architecture**
Step-by-step flow:
1. **Form Submission** → user inputs the text prompt, optional reference image, API key, and generation settings
2. **Create Video** → sends the request to the Sora 2 API endpoint with all parameters and the reference image (if provided)
3. **Check Status** → polls the API for video generation status
4. **Status Decision** → routes based on status: Queued or In Progress → wait 30 seconds → check status again; Completed → proceed to download; Failed → display a descriptive error message
5. **Wait** → 30-second delay between status checks
6. **Download** → fetches the generated video file
7. **Google Drive** → uploads the .mp4 to your Drive
8. **Completion Form** → displays the download link and video preview/screenshot

If you have any questions, just contact me on LinkedIn. Ready to create cinematic AI videos with physics-accurate motion, synchronized audio, and optional image references? Import this workflow and start generating! 🎬✨
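The create → poll → wait 30 seconds → download loop above can be sketched in Python. The status routing and the 30-second wait match the workflow description; the endpoint paths, parameter names (`seconds`, `size`), and status strings follow my understanding of OpenAI's video API and should be checked against the current API reference before relying on them.

```python
import time

OPENAI_BASE = "https://api.openai.com/v1"
POLL_SECONDS = 30  # matches the workflow's Wait node


def route_status(status):
    """Status Decision node: map an API status to the next branch."""
    if status in ("queued", "in_progress"):
        return "wait"
    if status == "completed":
        return "download"
    return "error"  # "failed" or anything unexpected


def generate_video(api_key, prompt, model="sora-2", seconds="4", size="1280x720"):
    """Create → poll → download, mirroring the workflow's loop."""
    import requests  # lazy import so route_status stays dependency-free
    headers = {"Authorization": f"Bearer {api_key}"}
    resp = requests.post(f"{OPENAI_BASE}/videos", headers=headers,
                         data={"model": model, "prompt": prompt,
                               "seconds": seconds, "size": size})
    resp.raise_for_status()
    video_id = resp.json()["id"]

    while True:
        status = requests.get(f"{OPENAI_BASE}/videos/{video_id}",
                              headers=headers).json()["status"]
        branch = route_status(status)
        if branch == "wait":
            time.sleep(POLL_SECONDS)  # avoid hammering the API / rate limits
            continue
        if branch == "error":
            raise RuntimeError(f"Generation ended with status: {status}")
        break

    # Completed: fetch the final .mp4 bytes for the Google Drive upload
    return requests.get(f"{OPENAI_BASE}/videos/{video_id}/content",
                        headers=headers).content
```

Adding timeout logic, as suggested above, would mean capping the number of loop iterations rather than polling forever.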
by Noriwal AlMa Jr
**WhatsApp Audio Transcriber Bot**

**Overview**
Automatically transcribe WhatsApp audio messages to text using AI-powered speech recognition. This workflow receives audio messages via webhook, processes them through Groq's Whisper API, and replies with the transcribed text in the same conversation.

**Use cases**
- **Accessibility:** Help users with hearing impairments access audio content
- **Workplace communication:** Quickly scan audio messages in professional settings
- **Language learning:** Get text versions of audio for better comprehension
- **Meeting notes:** Convert voice messages to searchable text format
- **Multilingual support:** Transcribe audio in Portuguese (configurable for other languages)

**How it works**
1. **Message reception:** Webhook receives WhatsApp messages in real time
2. **Audio detection:** Filters only audio messages using a Switch node
3. **Format conversion:** Converts base64 audio to MP3 file format
4. **AI transcription:** Processes audio through the Groq API with the Whisper Large V3 model
5. **Response delivery:** Sends the transcribed text back to the original conversation

**Key features**
- ✅ **Real-time processing:** Instant transcription of incoming audio messages
- ✅ **High accuracy:** Uses the Whisper Large V3 model for reliable transcription
- ✅ **Auto-reply:** Automatically responds in the same WhatsApp conversation
- ✅ **Message quoting:** References the original audio message in the reply
- ✅ **Portuguese optimized:** Configured for Brazilian Portuguese transcription
- ✅ **Self-message filtering:** Ignores messages sent by the bot itself

**Prerequisites**
Required services:
- **Evolution API:** WhatsApp integration service
- **Groq API:** AI transcription service (Whisper model)
- **n8n instance:** Workflow automation platform

API keys & configuration:
- Groq API key (set as environment variable: GROQ_API_KEY)
- Evolution API instance properly configured
- Webhook URL configured in Evolution API

**Setup instructions**
1. **Import workflow:** Import the JSON workflow into your n8n instance
2. **Configure environment:** Set the GROQ_API_KEY environment variable
3. **Set up webhook:** Configure Evolution API to send messages to the webhook endpoint
4. **Test connection:** Send a test audio message to verify the workflow

**Workflow nodes**
- **Webhook:** Receives WhatsApp messages from Evolution API
- **Edit Fields:** Extracts relevant data (number, name, message, audio)
- **Switch:** Filters only audio messages (audioMessage type)
- **Convert to File:** Transforms base64 audio to MP3 format
- **HTTP Request:** Sends audio to the Groq API for transcription
- **Evolution API:** Sends the transcribed text back to WhatsApp

**Configuration options**
Groq API settings:
- **Model:** whisper-large-v3
- **Language:** pt (Portuguese)
- **Temperature:** 0 (maximum accuracy)
- **Response format:** json

Customization options:
- Change the language by modifying the language parameter
- Adjust the temperature for a different accuracy/creativity balance
- Modify the response format for different output styles

**Response format**
Mensagem transcrita automaticamente.
[Transcribed text content]

**Technical specifications**
- **Input:** Base64-encoded audio from WhatsApp
- **Output:** Plain-text transcription
- **Processing time:** Typically 2-5 seconds per audio message
- **Supported audio:** MP3 format (converted from WhatsApp audio)
- **Language:** Portuguese (configurable)

**Troubleshooting**
- **No response:** Check the Groq API key and webhook configuration
- **Poor transcription:** Ensure good audio quality and check the language settings
- **Error messages:** Monitor n8n execution logs for detailed error information

**Version history**
- **v0.0.1:** Initial release with basic transcription functionality
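The Convert to File and HTTP Request pair reduces to a few lines of Python. The endpoint, model, language, temperature, and response format match the Groq API settings listed above (Groq exposes an OpenAI-compatible transcription API); the multipart field names and the `text` response key follow that convention but are worth double-checking against Groq's docs.

```python
import base64
import os

GROQ_URL = "https://api.groq.com/openai/v1/audio/transcriptions"


def decode_whatsapp_audio(b64_audio: str) -> bytes:
    """Convert to File step: base64 payload from Evolution API → raw bytes."""
    return base64.b64decode(b64_audio)


def format_reply(transcript: str) -> str:
    """Build the auto-reply in the workflow's response format."""
    return f"Mensagem transcrita automaticamente.\n\n{transcript}"


def transcribe(audio_bytes: bytes, language: str = "pt") -> str:
    """HTTP Request step: send the audio to Groq's Whisper endpoint."""
    import requests  # lazy import so the pure helpers above are stdlib-only
    resp = requests.post(
        GROQ_URL,
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        files={"file": ("audio.mp3", audio_bytes, "audio/mpeg")},
        data={"model": "whisper-large-v3", "language": language,
              "temperature": "0", "response_format": "json"},
    )
    resp.raise_for_status()
    return resp.json()["text"]
```

Changing the `language` argument is the code-level equivalent of the language customization option described above.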
by Evoort Solutions
**Automated SEO Competitor Analysis with Google Sheets Logging**

**Description**
This n8n workflow automates SEO competitor analysis by integrating the Competitor Analysis API. It captures website domains via a simple user form, sends them to the API to retrieve competitor data, and logs the results directly into Google Sheets. The workflow intelligently handles empty or missing data and incorporates a wait node to respect API rate limits, making your competitor tracking seamless and reliable.

**Node-by-node explanation**
1. **On Form Submission:** Triggers the workflow when a user submits a website domain.
2. **Global Storage:** Stores the submitted website domain for use in subsequent nodes.
3. **Competitor Analysis Request:** Sends a POST request to the Competitor Analysis API to fetch organic competitors, pages, and keyword data.
4. **If (Condition Check):** Verifies that the API response contains valid, non-empty data.
5. **Google Sheets (Insert Success Data):** Appends the fetched competitor data to a specified Google Sheet.
6. **Wait:** Adds a 5-second delay to avoid exceeding API rate limits.
7. **Google Sheets (Insert 'Not Found' Record):** Logs a "Not data found." entry into Google Sheets if the API response lacks relevant data.

**Use cases & benefits**
- **SEO professionals:** Automate competitor insights collection for efficient SEO research.
- **Marketing teams:** Maintain consistent, automated logs of competitor data across multiple websites.
- **Agencies:** Manage organic search competitor tracking for many clients effortlessly.
- **Reliability:** Conditional checks and wait nodes ensure smooth API interaction and data integrity.
- **Scalable & user-friendly:** Simple form input and Google Sheets integration enable easy adoption and scalability.

**🔐 How to get your API key for the Competitor Keyword Analysis API**
1. Go to 👉 Competitor Keyword Analysis API.
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab. Your API key will be visible in the "x-rapidapi-key" header.
5. 🔑 Copy and paste this key into the httpRequest node in your workflow.

Create your free n8n account and set up the workflow in just a few minutes using the link below: 👉 Start Automating with n8n
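The If-node routing, the fallback record, and the 5-second pacing described above can be sketched as plain Python. The "Not data found." text and the 5-second delay come from the node descriptions; the API response key (`competitors`) is a placeholder assumption, since the actual field names depend on the Competitor Analysis API's response schema.

```python
import time


def branch_for(api_response: dict) -> str:
    """If node: route to the success or 'Not found' branch based on the payload."""
    data = api_response.get("competitors")  # response key is an assumption
    return "success" if data else "not_found"


def row_for(domain: str, api_response: dict) -> list:
    """Shape the row appended to Google Sheets on either branch."""
    if branch_for(api_response) == "success":
        return [domain, str(api_response["competitors"])]
    return [domain, "Not data found."]


def process(domains, fetch):
    """Run the whole loop: fetch(domain) -> dict from the API.

    Sleeps 5 seconds between calls, mirroring the Wait node that keeps the
    workflow under the API's rate limits.
    """
    rows = []
    for domain in domains:
        rows.append(row_for(domain, fetch(domain)))
        time.sleep(5)
    return rows
```

In n8n the same branching is expressed declaratively by the If node's condition on the API response, with the two Google Sheets nodes on its outputs.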
by Evoort Solutions
**🔍 SERP Keyword Ranking Checker with RapidAPI & Google Sheets**

Automate keyword SERP ranking lookups and log the data into Google Sheets using this no-code n8n workflow. Perfect for SEO professionals, digital marketers, or anyone tracking keyword visibility across regions.

**🧰 Tools used**
- **SERP Keyword Ranking Checker API:** Fetch real-time keyword SERP data
- **Google Sheets:** Log keyword data for tracking, analysis, or client reporting

**📌 Workflow overview**
1. A user submits a keyword and country code via an n8n form.
2. The workflow sends a request to the SERP Keyword Ranking Checker API.
3. The API response is checked: if valid data is found, it is logged to Google Sheets; if no results are found, a fallback message is saved instead.
4. Optional delays space out operations.

**⚙️ Node-by-node breakdown**
1. **🟢 Form Trigger (On form submission):** Accepts user input for keyword (e.g. "labubu") and country (e.g. "us").
2. **📦 Set node (Global Storage):** Stores the form input in variables (keyword, country) for use in API requests and logging.
3. **🌐 HTTP Request (SERP Keyword Ranking Checker):** Sends a POST request to the SERP Keyword Ranking Checker API with the keyword and country. Includes the headers x-rapidapi-host: serp-keyword-ranking-checker.p.rapidapi.com and x-rapidapi-key: YOUR_RAPIDAPI_KEY.
4. **⚖️ If node (Condition Checking):** Checks whether the serpResults array returned by the API is non-empty. ✅ True branch: proceeds if valid SERP data is available. ❌ False branch: proceeds if no SERP data is returned (e.g., an empty result).
5. **⏳ Wait node (5-second delay):** Adds a 5-second delay before the Google Sheets insertion. This helps control execution pace and smooths over API rate limits or spreadsheet latency. Used on both branches for consistency.
6. **📊 Google Sheets (success path):** Appends a new row to the selected Google Sheet with Keyword (the submitted keyword), Country (the selected country code), and Json data (the full serpResults JSON array returned by the API). 💡 Ideal for tracking keyword rankings over time or populating live dashboards.
7. **📊 Google Sheets (failure path):** Appends a fallback row when no SERP results are found: Keyword, Country, and Json data set to "No result found. Please try another keyword...". 🔍 Helps maintain a log of unsuccessful queries for debugging or auditing.

**💡 Use cases**
- **SEO audits:** Automate keyword performance snapshots across different markets to identify opportunities and gaps.
- **Competitor analysis:** Track keyword rankings of rival brands in real time to stay ahead of the competition.
- **Client reporting:** Feed live SERP data into dashboards or reports for transparent, real-time agency deliverables.
- **Content strategy:** Discover which keywords surface top-ranking pages to guide content creation and optimization efforts.

**🔑 How to obtain your API key for the SERP Keyword Ranking Checker API**
1. **Sign up or log in:** Visit RapidAPI and create a free account using your email or a social login.
2. **Go to the API page:** Navigate to the SERP Keyword Ranking Checker API.
3. **Subscribe to the API:** Click Subscribe to Test, then choose a pricing plan that fits your needs (Free, Basic, Pro).
4. **Get your API key:** After subscribing, go to the Security tab on the API page to find your X-RapidAPI-Key.
5. **Use your API key:** Add the API key to your HTTP request headers: X-RapidAPI-Key: YOUR_API_KEY.

Create your free n8n account and set up the workflow in just a few minutes using the link below: 👉 Start Automating with n8n
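The If node's two branches map directly onto the two Google Sheets rows described above. This small sketch shows that mapping; the serpResults key and the fallback message come from the workflow description, while the exact endpoint path of the RapidAPI service is not shown here and should be taken from the API's Endpoints tab.

```python
import json


def sheet_row(keyword: str, country: str, api_response: dict) -> list:
    """Map the If node's branches onto the row appended to Google Sheets."""
    serp_results = api_response.get("serpResults") or []
    if serp_results:
        # True branch: log the full serpResults JSON array
        return [keyword, country, json.dumps(serp_results)]
    # False branch: fallback message for unsuccessful queries
    return [keyword, country, "No result found. Please try another keyword..."]
```

Keeping the fallback row (rather than silently skipping empty responses) is what makes the sheet double as an audit log of unsuccessful queries.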
by Wevanta Infotech
**LinkedIn Auto-Post Agent for n8n**

🚀 Automate your LinkedIn presence with AI-powered content generation. This n8n workflow automatically generates and publishes engaging LinkedIn posts using OpenAI's GPT models. Perfect for professionals and businesses who want to maintain an active LinkedIn presence without manual effort.

**✨ Features**
- 🤖 **AI-powered content:** Generate professional LinkedIn posts using OpenAI GPT-3.5-turbo or GPT-4
- ⏰ **Automated scheduling:** Post content automatically on weekdays at 9 AM (customizable)
- 🎯 **Manual trigger:** Generate and post content on demand
- 🔒 **Secure:** All credentials stored securely in n8n's encrypted credential system
- 📊 **Error handling:** Built-in retry logic and error notifications
- 🎨 **Customizable:** Easily modify prompts, scheduling, and content parameters

**🏗️ Architecture**
This workflow uses a streamlined three-node architecture: Schedule/Manual Trigger → OpenAI Content Generation → LinkedIn Post.

Node details:
- **Schedule Trigger:** Automatically triggers the workflow (default: weekdays at 9 AM)
- **Manual Trigger:** Allows on-demand content generation
- **OpenAI Content Generation:** Creates LinkedIn-optimized content using AI
- **LinkedIn Post:** Publishes the generated content to LinkedIn

**📋 Prerequisites**
- n8n instance (self-hosted or cloud)
- OpenAI API account and API key
- LinkedIn account with API access
- Basic familiarity with n8n workflows

**🚀 Quick start**
1. **Import the workflow**
   - Download the linkedin-auto-post-agent.json file
   - In your n8n instance, go to Workflows → Import from File
   - Select the downloaded JSON file and click Import
2. **Set up credentials**
   - OpenAI API credentials: go to Credentials in your n8n instance, click Create New Credential, select OpenAI, enter your OpenAI API key, name it "OpenAI API", and save.
   - LinkedIn OAuth2 credentials: create a LinkedIn app at the LinkedIn Developer Portal; configure the OAuth 2.0 settings with the redirect URL https://your-n8n-instance.com/rest/oauth2-credential/callback and the scopes r_liteprofile and w_member_social; in n8n, create new LinkedIn OAuth2 credentials, enter your LinkedIn app's Client ID and Client Secret, and complete the OAuth authorization flow.
3. **Configure the workflow**
   - Open the imported workflow
   - Click the OpenAI Content Generation node, select your OpenAI credentials, and customize the content prompt if desired
   - Click the LinkedIn Post node, select your LinkedIn OAuth2 credentials, and save the workflow
4. **Test the workflow**
   - Click the Manual Trigger node, then click Execute Node to test content generation
   - Verify the generated content in the LinkedIn node output
   - Check your LinkedIn profile to confirm the post was published
5. **Activate automated posting**
   - Click the Active toggle in the top-right corner; the workflow will now run automatically based on the schedule

**⚙️ Configuration options**

Scheduling: the default schedule posts content on weekdays at 9 AM. To modify it, click the Schedule Trigger node and edit the cron expression:
- `0 9 * * 1-5`: weekdays at 9 AM
- `0 12 * * *`: daily at noon
- `0 9 * * 1,3,5`: Monday, Wednesday, Friday at 9 AM

Content customization: to change the content style, click the OpenAI Content Generation node, edit the System Message to adjust tone and style, and modify the User Message to change the topic focus.

Example prompts:
- Professional development focus: "Create a LinkedIn post about professional growth, skill development, or career advancement. Keep it under 280 characters and include 2-3 relevant hashtags."
- Industry insights: "Generate a LinkedIn post sharing an industry insight or trend in technology. Make it thought-provoking and include relevant hashtags."
- Motivational content: "Write an inspiring LinkedIn post about overcoming challenges or achieving goals. Keep it positive and engaging with appropriate hashtags."

Model selection: choose between OpenAI models based on your needs:
- **gpt-3.5-turbo:** cost-effective, good quality
- **gpt-4:** higher quality, more expensive
- **gpt-4-turbo:** latest model with improved performance

**🔧 Advanced configuration**

Error handling: the workflow includes built-in error handling:
- **Retry logic:** 3 attempts with 1-second delays
- **Continue on fail:** the workflow continues even if individual nodes fail
- **Error notifications:** optional email/Slack notifications on failures

Content review workflow (optional): to add manual content review before posting, add a Wait node between the OpenAI and LinkedIn nodes, configure a webhook trigger for approval, and add conditional logic based on the approval status.

Rate limiting: to respect API limits, note that OpenAI allows 3 requests per minute (default) and LinkedIn allows 100 posts per day per user; adjust the scheduling frequency accordingly.

**📊 Monitoring and analytics**

Execution history: go to Executions in your n8n instance, filter by workflow name to see all runs, and click individual executions to see detailed logs.

Key metrics to monitor:
- **Success rate:** percentage of successful executions
- **Content quality:** review generated posts periodically
- **API usage:** monitor OpenAI token consumption
- **LinkedIn engagement:** track post performance on LinkedIn

**🛠️ Troubleshooting**

Common issues:
- **OpenAI node fails:** verify the API key is correct and has sufficient credits, check whether you've exceeded rate limits, and ensure the model name is spelled correctly.
- **LinkedIn node fails:** verify the OAuth2 credentials are properly configured, check that the LinkedIn app has the required permissions, and ensure the content doesn't violate LinkedIn's posting policies.
- **Workflow doesn't trigger:** confirm the workflow is marked as "Active", verify the cron expression syntax, and check n8n's timezone settings.

Debug mode: enable Save Manual Executions in the workflow settings, run the workflow manually to see detailed execution data, and check each node's input/output data.

**🔒 Security best practices**
- Store all API keys in n8n's encrypted credential system
- Regularly rotate API keys (monthly recommended)
- Use environment variables for sensitive configuration
- Enable execution logging for audit trails
- Monitor for unusual API usage patterns

**📈 Optimization tips**
- **Content quality:** review and refine prompts based on output quality, A/B test different prompt variations, monitor LinkedIn engagement metrics, and adjust posting frequency based on audience response.
- **Cost optimization:** use gpt-3.5-turbo for cost-effective content generation, set appropriate token limits (200 tokens recommended), and monitor OpenAI usage in your dashboard.
- **Performance:** keep workflows simple with minimal nodes, use appropriate retry settings, and monitor execution times, optimizing if needed.

**🤝 Contributing**
We welcome contributions to improve this workflow: fork the repository, create a feature branch, make your improvements, and submit a pull request.

**📄 License**
This project is licensed under the MIT License; see the LICENSE file for details.

**🆘 Support**
If you encounter issues or have questions: check the troubleshooting section above, review n8n's official documentation, join the n8n community forum, or create an issue in this repository.

**🔗 Useful links**
- n8n Documentation
- OpenAI API Documentation
- LinkedIn API Documentation
- n8n Community Forum

Happy automating! 🚀 This workflow helps you maintain a consistent LinkedIn presence while focusing on what matters most: your business and professional growth.
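The OpenAI Content Generation node boils down to a single chat-completions request. This sketch uses OpenAI's documented /v1/chat/completions endpoint with the 200-token limit recommended in the optimization tips; the system and user messages are illustrative examples, not the template's exact prompts.

```python
def build_payload(system_msg, user_msg, model="gpt-3.5-turbo", max_tokens=200):
    """Request body for OpenAI chat completions (200-token cap per the tips)."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    }


def generate_post(api_key, topic_prompt):
    """What the OpenAI Content Generation node does: one API call, one post."""
    import requests  # lazy import keeps build_payload dependency-free
    payload = build_payload(
        "You write concise, professional LinkedIn posts with 2-3 hashtags.",
        topic_prompt,
    )
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Swapping `model` to gpt-4 or gpt-4-turbo is the code-level equivalent of the model-selection option above.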
by Guillaume Duvernay
This template processes a CSV of questions and returns an enriched CSV with RAG-based answers produced by your Lookio assistant. Upload a CSV that contains a column named Query, and the workflow will loop through every row, call the Lookio API, and append a Response column containing the assistant's answer. It's ideal for batch tasks like drafting RFP responses, pre-filling support replies, generating knowledge-checked summaries, or validating large lists of product/customer questions against your internal documentation.

**Who is this for?**
- **Knowledge managers & technical writers:** Produce draft answers to large question sets using your company docs.
- **Sales & proposal teams:** Auto-generate RFP answer drafts informed by internal docs.
- **Support & operations teams:** Bulk-enrich FAQs or support ticket templates with authoritative responses.
- **Automation builders:** Integrate Lookio-powered retrieval into bulk data pipelines.

**What problem does this solve?**
- **Automates bulk queries:** Eliminates the manual process of running many individual lookups.
- **Ensures answers are grounded:** Responses come from your uploaded documents via Lookio, reducing hallucinations.
- **Produces ready-to-use output:** Delivers an enriched CSV with a new Response column for downstream use.
- **Simple UX:** Users only need to upload a CSV with a Query column and download the resulting file.

**How it works**
1. **Form submission:** The user uploads a CSV via the Form Trigger.
2. **Extract & validate:** Extract all rows reads the CSV and Aggregate rows checks for a Query column.
3. **Per-row loop:** Split Out and Loop Over Queries iterate over the rows; Isolate the Query column normalizes the data.
4. **Call Lookio:** Lookio API call posts each query to your assistant and returns the answer.
5. **Build output:** Prepare output appends the Response values, and Generate enriched CSV creates the downloadable file delivered by Form ending and file download.

**Why use Lookio for high-quality RAG?**
While building a native RAG pipeline in n8n offers granular control, achieving consistently high-quality and reliable results requires significant effort in data processing, chunking strategy, and retrieval logic optimization. Lookio is designed to address these challenges by providing a managed RAG service accessible via a simple API. It handles the entire backend pipeline, from processing various document formats to employing advanced retrieval techniques, allowing you to integrate a production-ready knowledge source into your workflows. This approach lets you focus on building your automation in n8n rather than managing the complexities of a RAG infrastructure.

**How to set up**
1. **Create a Lookio assistant:** Sign up at https://www.lookio.app/, upload documents, and create an assistant.
2. **Get credentials:** Copy your Lookio API Key and Assistant ID.
3. **Configure the workflow nodes:** In the Lookio API call HTTP Request node, replace the api_key header value with your Lookio API Key and update assistant_id with your Assistant ID (replace placeholders like <your-lookio-api-key> and <your-assistant-id>). Ensure the Form Trigger is enabled and accepts a .csv file.
4. **CSV format:** Ensure the input CSV has a column named Query (case-sensitive, as configured).
5. **Activate the workflow:** Run a test upload and download the enriched CSV.

**Requirements**
- An n8n instance with the ability to host Forms and run workflows
- A Lookio account (API Key) and an Assistant ID

**How to take it further**
- **Add rate limiting / retries:** Insert error handling and delay nodes to respect API limits for large batches.
- **Improve the speed:** You could drastically reduce processing time by running queries in parallel instead of sequentially in the loop; for example, use HTTP Request nodes that each trigger a sub-workflow per query.
- **Store results:** Add an Airtable or Google Sheets node to archive questions and responses for audit and reuse.
- **Post-process answers:** Add an LLM node to summarize or standardize responses, or to add confidence flags.
- **Trigger variations:** Replace the Form Trigger with a Google Drive or Airtable trigger to process CSVs automatically from a folder or table.
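The validate → loop → append-Response pipeline can be sketched in Python. The case-sensitive Query column check and the appended Response column come from the description above; the Lookio query endpoint path and the response key (`answer`) are assumptions, since this section only documents the api_key header and assistant_id field.

```python
import csv
import io


def enrich(csv_text: str, ask) -> str:
    """Append a Response column to the CSV; ask(query) -> answer string."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if "Query" not in (reader.fieldnames or []):
        # Mirrors the Aggregate rows validation step
        raise ValueError("CSV must contain a 'Query' column (case-sensitive)")
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames + ["Response"])
    writer.writeheader()
    for row in reader:  # the per-row loop (sequential, like Loop Over Queries)
        row["Response"] = ask(row["Query"])
        writer.writerow(row)
    return out.getvalue()


def ask_lookio(query: str, api_key: str, assistant_id: str) -> str:
    """One Lookio API call per row; endpoint path and keys are assumptions."""
    import requests
    resp = requests.post(
        "https://www.lookio.app/webhook/query",  # assumed endpoint path
        headers={"api_key": api_key},
        json={"assistant_id": assistant_id, "query": query},
    )
    resp.raise_for_status()
    return resp.json()["answer"]  # response key assumed
```

Parallelizing the loop (the "Improve the speed" suggestion) would mean replacing the sequential `for` with concurrent requests, e.g. a thread pool, at the cost of hitting rate limits sooner.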
by franck fambou
**Extract and Convert PDF Documents to Markdown with LlamaIndex Cloud API**

**Overview**
This workflow automatically converts PDF documents to Markdown format using the LlamaIndex Cloud API. LlamaIndex is a powerful data framework that specializes in connecting large language models with external data sources, offering advanced document processing capabilities with high accuracy and intelligent content extraction.

**How it works**
Automatic processing pipeline:
1. **Form submission trigger:** The workflow initiates when a user submits a document through a web form.
2. **Document upload:** PDF files are automatically uploaded to LlamaIndex Cloud for processing.
3. **Smart status monitoring:** The system continuously checks processing status and adapts the workflow based on the results.
4. **Conditional content extraction:** Upon successful processing, the extracted Markdown content is retrieved for further use.

**Setup instructions**
Estimated setup time: 5-10 minutes.

Prerequisites:
- LlamaIndex Cloud account and API credentials
- Access to an n8n instance (cloud or self-hosted)

Configuration steps:
1. **Configure the form trigger:** Set up the webhook form trigger with file upload capability, and add the required fields to capture document metadata and processing preferences.
2. **Set up the LlamaIndex API connection:** Obtain your API key from the LlamaIndex Cloud dashboard, configure the HTTP Request node with your credentials and endpoint URL, and set the proper authentication headers and request parameters.
3. **Configure status verification:** Define polling intervals for status checks (recommended: 10-30 seconds), set maximum retry attempts to avoid infinite loops, and configure success/failure criteria based on API response codes.
4. **Set up the content extractor:** Configure output format preferences (Markdown styling, headers, etc.), set up error handling for failed extractions, and define content storage or forwarding destinations.

**Use cases**
- **Document digitization:** Convert legacy PDF documents to editable Markdown format
- **Content management:** Prepare documents for CMS integration or static site generators
- **Knowledge base creation:** Transform PDF manuals and guides into searchable Markdown content
- **Academic research:** Convert research papers and publications for analysis and citation
- **Technical documentation:** Process PDF specifications and manuals for developer documentation

**Key features**
- Fully automated PDF-to-Markdown conversion
- Intelligent content structure preservation
- Error handling and retry mechanisms
- Status monitoring with real-time feedback
- Scalable processing for batch operations

**Requirements**
- LlamaIndex Cloud API key
- n8n instance (v0.200.0 or higher recommended)
- Internet connectivity for API access

**Support**
For issues related to the LlamaIndex API, consult the official LlamaIndex documentation. For n8n-specific questions, refer to the n8n community forum.
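As a rough sketch, the upload → poll → extract pipeline against LlamaIndex Cloud's parsing service looks like the following. The polling interval and retry cap follow the recommendations above (10-30 second intervals, bounded retries); the endpoint paths, status strings, and response keys reflect my understanding of the LlamaParse REST API and must be verified against the official documentation.

```python
import time

BASE = "https://api.cloud.llamaindex.ai/api/parsing"  # assumed base path
POLL_SECONDS = 15   # within the recommended 10-30 second window
MAX_ATTEMPTS = 40   # cap retries to avoid an infinite loop


def next_action(status: str, attempt: int) -> str:
    """Status-verification logic: keep polling, extract, or give up."""
    if status == "SUCCESS":
        return "extract"
    if status in ("PENDING", "RUNNING") and attempt < MAX_ATTEMPTS:
        return "wait"
    return "fail"


def pdf_to_markdown(pdf_path: str, api_key: str) -> str:
    """Upload a PDF, poll until processed, then fetch the Markdown result."""
    import requests  # lazy import so next_action stays dependency-free
    headers = {"Authorization": f"Bearer {api_key}"}
    with open(pdf_path, "rb") as f:
        job = requests.post(f"{BASE}/upload", headers=headers,
                            files={"file": f}).json()
    job_id = job["id"]  # response key assumed
    for attempt in range(MAX_ATTEMPTS + 1):
        status = requests.get(f"{BASE}/job/{job_id}",
                              headers=headers).json()["status"]
        action = next_action(status, attempt)
        if action == "extract":
            return requests.get(f"{BASE}/job/{job_id}/result/markdown",
                                headers=headers).json()["markdown"]
        if action == "fail":
            raise RuntimeError(f"Parsing ended with status {status}")
        time.sleep(POLL_SECONDS)
    raise RuntimeError("Exceeded maximum polling attempts")
```

In the workflow, `next_action` corresponds to the conditional node that routes between the wait loop, the content extractor, and the error path.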
by Sk developer
**🎵 Spotify to MP3 → Upload to Google Drive**

Automate the process of converting Spotify track URLs into MP3 files, uploading them to Google Drive, and instantly generating shareable links, all triggered by a simple form.

**✅ What this workflow does**
1. Accepts a Spotify URL from a form.
2. Sends the URL to the Spotify Downloader MP3 API on RapidAPI.
3. Waits briefly for the conversion.
4. Downloads the resulting MP3 file.
5. Uploads it to Google Drive.
6. Sets public sharing permissions for easy access.

**🧩 Workflow structure**

| Step | Node Name | Description |
| --- | --- | --- |
| 1 | On form submission | Collects the Spotify track URL via an n8n Form Trigger node. |
| 2 | Spotify Rapid API | Calls the Spotify Downloader MP3 API to generate the MP3 download link. |
| 3 | Wait | Ensures the download link is processed before proceeding. |
| 4 | Downloader | Downloads the MP3 using the generated link. |
| 5 | Upload MP3 to Google Drive | Uploads the file using Google Drive credentials. |
| 6 | Update Permission | Makes the uploaded file publicly accessible via a shareable link. |

**🔧 Requirements**
- n8n instance (self-hosted or cloud)
- RapidAPI account & subscription to the Spotify Downloader MP3 API
- Google Cloud service account with Drive API access
- Active Google Drive (root or a specified folder)

**🚀 How to use**
1. Set up Google API credentials in n8n.
2. Subscribe to the Spotify Downloader MP3 API on RapidAPI.
3. Insert your RapidAPI key into the HTTP Request node.
4. Deploy the workflow and access the webhook form URL.
5. Submit a Spotify URL: the MP3 gets downloaded, uploaded, and shared.

**🎯 Use cases**
- 🎧 Music collectors automating downloads
- 🧑‍🏫 Teachers creating music-based lessons
- 🎙 Podcasters pulling music samples
- 📥 Anyone who needs quick Spotify → MP3 conversion

**🛠 Tech stack**
- **n8n:** Visual workflow automation
- **RapidAPI:** Spotify Downloader MP3 API
- **Google Drive:** File storage and sharing
- **Form Trigger:** Input collection interface
- **HTTP Request node:** Handles API communication

**🔐 Notes on security**
- Do not expose your x-rapidapi-key publicly.
- Use environment variables or n8n credentials for secure storage.
- Adjust sharing permissions (reader, writer, or restricted) per your needs.

**🔗 API reference**
🎵 Spotify Downloader MP3 API – skdeveloper

**📦 Tags**
spotify, mp3, google-drive, automation, rapidapi, n8n
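The request → wait → download sequence and the Update Permission step can be sketched in Python. The RapidAPI host name matches the API named above, but the endpoint path, query parameter, and `downloadUrl` response key are assumptions to check on the API's Endpoints tab; the permission body is the standard Drive API shape for link-sharing.

```python
import time


def public_permission() -> dict:
    """Update Permission step: Drive API body that makes a file link-shareable."""
    return {"role": "reader", "type": "anyone"}


def drive_share_link(file_id: str) -> str:
    """Shareable-link format for a public Google Drive file."""
    return f"https://drive.google.com/file/d/{file_id}/view"


def spotify_to_mp3_bytes(spotify_url: str, rapidapi_key: str) -> bytes:
    """Call the Spotify Downloader MP3 API, wait, then fetch the MP3 bytes."""
    import requests  # lazy import keeps the pure helpers stdlib-only
    resp = requests.get(
        "https://spotify-downloader-mp3.p.rapidapi.com/download",  # path assumed
        headers={
            "x-rapidapi-key": rapidapi_key,
            "x-rapidapi-host": "spotify-downloader-mp3.p.rapidapi.com",
        },
        params={"url": spotify_url},
    )
    resp.raise_for_status()
    download_url = resp.json()["downloadUrl"]  # response key assumed
    time.sleep(5)  # mirror the Wait node before fetching the file
    return requests.get(download_url).content
```

The returned bytes would then go to a Drive upload, followed by a permissions call using `public_permission()` so the link in `drive_share_link()` works for anyone.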