by Christian Moises
Since the Get Many Subreddit node often blocks requests (Reddit requires proper authentication headers), this workflow provides a reliable alternative. It uses the Reddit OAuth2 API through the HTTP Request node, processes the results, and outputs cleaned subreddit data. Use it if the Get Many Subreddit node gives you this error: *"You've been blocked by network security. To continue, log in to your Reddit account or use your developer token."*

**Use case:** This is especially useful if you want to search multiple subreddits programmatically and apply filtering for members, descriptions, and categories.

## How It Works

1. **Trigger Input**: The workflow is designed to be called by another workflow using the Execute Workflow Trigger node. Input is passed in JSON format:

```json
{
  "Query": "RealEstateTechnology",
  "min_members": 0,
  "max_members": 20000,
  "limit": 50
}
```

2. **Fetch Subreddits**: The HTTP Request (Reddit OAuth2) node queries the Reddit API (`/subreddits/search`) with the given keyword and limit. Because it uses OAuth2 credentials, the request is properly authenticated and accepted by Reddit. (A sketch of these API calls appears at the end of this entry.)

3. **Process Results**:
   - **Split Out:** iterates over each subreddit entry (`data.children`).
   - **Edit Fields:** extracts the following fields for clarity: subreddit, URL, description, 18+ flag, member count.
   - **Aggregate:** recombines the processed data into a structured output array.

4. **Output**: Returns a cleaned dataset with only the relevant subreddit details (this saves tokens if the workflow is attached to an AI Agent).

## How to Use

1. Import this workflow into n8n.
2. In your main workflow, replace the Get Many Subreddit node with an Execute Workflow node and select this workflow.
3. Pass in the required query parameters (`Query`, `min_members`, `max_members`, `limit`).
4. Run your main workflow — results will now come through authenticated API requests without being blocked.

## Requirements

- **Reddit OAuth2 API credentials** (must be set up in n8n under **Credentials**).
- Basic understanding of JSON parameters in n8n.
- An existing workflow that calls this one using Execute Workflow.

## Customizing This Workflow

You can adapt this workflow to your specific needs by:

- **Filtering by member range:** add logic to exclude subreddits outside `min_members`–`max_members`.
- **Expanding extracted fields:** include additional subreddit properties such as `created_utc`, `lang`, or `active_user_count`.
- **Changing authentication:** switch to different Reddit OAuth2 credentials if managing multiple Reddit accounts.
- **Integrating downstream apps:** send the processed subreddit list to Google Sheets, Airtable, or a database for storage.
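As referenced above, here is a minimal sketch of the two HTTP calls the Fetch Subreddits step relies on, assuming an app-only ("client credentials") Reddit OAuth2 app. The credential placeholders are hypothetical; the extracted fields (`display_name`, `public_description`, `over18`, `subscribers`) are Reddit's standard listing fields. In n8n, all of this is handled by the HTTP Request node with Reddit OAuth2 credentials.

```typescript
// Sketch of the authenticated subreddit search, runnable on Node 18+.
const CLIENT_ID = "YOUR_CLIENT_ID";       // from https://www.reddit.com/prefs/apps
const CLIENT_SECRET = "YOUR_CLIENT_SECRET";
const USER_AGENT = "n8n-subreddit-search/1.0"; // Reddit expects a descriptive User-Agent

async function searchSubreddits(query: string, limit: number) {
  // 1) Exchange client credentials for a bearer token.
  const tokenRes = await fetch("https://www.reddit.com/api/v1/access_token", {
    method: "POST",
    headers: {
      Authorization: "Basic " + Buffer.from(`${CLIENT_ID}:${CLIENT_SECRET}`).toString("base64"),
      "Content-Type": "application/x-www-form-urlencoded",
      "User-Agent": USER_AGENT,
    },
    body: "grant_type=client_credentials",
  });
  const { access_token } = await tokenRes.json();

  // 2) Call the authenticated search endpoint (oauth.reddit.com, not www).
  const searchRes = await fetch(
    `https://oauth.reddit.com/subreddits/search?q=${encodeURIComponent(query)}&limit=${limit}`,
    { headers: { Authorization: `Bearer ${access_token}`, "User-Agent": USER_AGENT } },
  );
  const json = await searchRes.json();

  // 3) The same field extraction the Edit Fields node performs.
  return json.data.children.map((c: any) => ({
    subreddit: c.data.display_name,
    url: `https://www.reddit.com${c.data.url}`,
    description: c.data.public_description,
    over_18: c.data.over18,
    members: c.data.subscribers,
  }));
}

searchSubreddits("RealEstateTechnology", 50).then(console.log);
```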
by Robert Breen
This n8n workflow pulls campaign data from Google Sheets and creates two pivot tables automatically each time it runs.

## ✅ Step 1: Connect Google Sheets

1. In n8n, go to **Credentials** → click **New Credential**.
2. Select **Google Sheets OAuth2 API**.
3. Log in with your Google account and authorize access.
4. Use this sheet: 📄 Campaign Data Sheet

Make sure the sheet includes:

- A **Data** tab (row 1 = headers, rows 2+ = campaign data)
- A tab for each pivot view (e.g. by Channel, by Campaign); a sketch of the pivot logic follows below

## 📬 Need Help?

Feel free to reach out:

- 📧 robert@ynteractive.com
- 🔗 LinkedIn
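For illustration, here is a minimal sketch of what each pivot view amounts to, written as it might run in an n8n Code node. The column names `Channel`, `Campaign`, and `Spend` are hypothetical; substitute the headers from row 1 of your Data tab.

```typescript
// Group rows by a key column and total a numeric column per group.
type Row = { Channel: string; Campaign: string; Spend: number };

function pivotBy(rows: Row[], key: keyof Row) {
  const totals = new Map<string, number>();
  for (const row of rows) {
    const group = String(row[key]);
    totals.set(group, (totals.get(group) ?? 0) + Number(row.Spend));
  }
  // One output item per group, ready to write to the corresponding pivot tab.
  return [...totals].map(([group, totalSpend]) => ({ [key]: group, totalSpend }));
}

// Example: aggregate spend by Channel and by Campaign.
const data: Row[] = [
  { Channel: "Email", Campaign: "Spring", Spend: 120 },
  { Channel: "Email", Campaign: "Summer", Spend: 80 },
  { Channel: "Social", Campaign: "Spring", Spend: 200 },
];
console.log(pivotBy(data, "Channel"));  // [{ Channel: "Email", totalSpend: 200 }, ...]
console.log(pivotBy(data, "Campaign"));
```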
by Đỗ Thành Nguyên
Get Long-Lived Facebook Page Access Token with Data Table

> Set up n8n self-hosted via Tino.vn VPS — use code VPSN8N for up to 39% off (affiliate link).

## Good to Know

This workflow automatically solves the common issue of Facebook Page Access Tokens expiring. It proactively renews your Page Tokens and stores them in an n8n Data Table. It runs every two months, ensuring your Page Access Tokens remain valid and your Facebook API integrations run uninterrupted.

## How It Works

The workflow performs the following steps to keep your tokens up to date (a sketch of the underlying Graph API calls appears at the end of this entry):

1. **Schedule Trigger:** The workflow runs on a set schedule — every two months by default.
2. **Set Parameters:** It initializes the required credentials: `client_id`, `client_secret`, a short-lived `user_access_token`, and the `app_scoped_user_id` (all obtained from Facebook Developer Tools).
3. **Get Long-Lived User Token:** It exchanges the short-lived User Access Token for a long-lived one.
4. **Get Page Tokens:** Using the long-lived User Token, it fetches all pages you manage and their corresponding Page Access Tokens.
5. **Update Data Table:** For each page, it extracts the `access_token`, `name`, and `id`, then performs an Upsert operation to update or insert rows in your n8n Data Table, ensuring the stored tokens are always current.

## How to Use

1. **Import:** Import this JSON file into your n8n instance.
2. **Configure Credentials:** Open the **Set Parameters** node and replace the placeholder values for `client_id`, `client_secret`, `user_access_token`, and `app_scoped_user_id` with your actual credentials from Facebook.
3. **Configure Data Table:** Open the **Upsert row(s)** node. Select or create an n8n Data Table to store your tokens. Make sure the column mapping (`token`, `name_page`, `id_page`) matches your table schema.
4. **Activate:** Save and activate the workflow. It will now run automatically based on your configured schedule.

## Requirements

- **n8n instance:** self-hosted (see the Tino.vn link above).
- **Facebook App:** A Facebook Developer App to generate the following credentials: `client_id` and `client_secret`, a short-lived `user_access_token`, and the `app_scoped_user_id`.
- **Data Table:** An n8n Data Table configured with columns to store token information (e.g., `token`, `name_page`, `id_page`).

## Customizing This Workflow

- **Change schedule:** To modify how often tokens are renewed, edit the **Schedule Trigger** node. You can change the interval from 2 months to 1 month, or schedule it for a specific day.
- **Filter pages:** If you only want to store tokens for specific pages, insert a **Filter** node right after **Split Out**. Use the page name or ID to filter before sending data to **Upsert row(s)**.
- **Alternative storage:** Instead of an n8n Data Table, you can replace the **Upsert row(s)** node with another option (e.g., Google Sheets, a database, or a **Set** node) to store tokens elsewhere.
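As referenced above, here is a hedged sketch of the two Graph API calls behind steps 3 and 4, using Facebook's documented `/oauth/access_token` exchange and `/{user-id}/accounts` edge. The Graph version `v19.0` and all placeholder values are assumptions; in the workflow itself, HTTP Request nodes make these calls.

```typescript
// Sketch of the token refresh, runnable on Node 18+.
const CLIENT_ID = "YOUR_APP_ID";
const CLIENT_SECRET = "YOUR_APP_SECRET";
const SHORT_LIVED_TOKEN = "SHORT_LIVED_USER_TOKEN";
const USER_ID = "APP_SCOPED_USER_ID";
const GRAPH = "https://graph.facebook.com/v19.0"; // version is an assumption

async function refreshPageTokens() {
  // 1) Exchange the short-lived user token for a long-lived one (~60 days).
  const exchange = await fetch(
    `${GRAPH}/oauth/access_token?grant_type=fb_exchange_token` +
      `&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}` +
      `&fb_exchange_token=${SHORT_LIVED_TOKEN}`,
  );
  const { access_token: longLivedToken } = await exchange.json();

  // 2) List managed pages; each entry carries its own page access token.
  const pages = await fetch(`${GRAPH}/${USER_ID}/accounts?access_token=${longLivedToken}`);
  const { data } = await pages.json();

  // 3) Shape rows for the Data Table upsert (token, name_page, id_page).
  return data.map((page: { access_token: string; name: string; id: string }) => ({
    token: page.access_token,
    name_page: page.name,
    id_page: page.id,
  }));
}

refreshPageTokens().then(console.log);
```

Because page tokens derived from a long-lived user token inherit its extended lifetime, re-running this exchange on a schedule keeps every stored token fresh.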
by Jenny
Index Legal Dataset to Qdrant for Hybrid Retrieval

*This pipeline is the first part of "Hybrid Search with Qdrant & n8n, Legal AI". The second part, "Hybrid Search with Qdrant & n8n, Legal AI: Retrieval", covers retrieval and simple evaluation.*

## Overview

This pipeline transforms a Q&A legal corpus from Hugging Face (isaacus) into vector representations and indexes them into Qdrant, providing the foundation for running Hybrid Search, which combines:

- **Dense vectors** (embeddings) for semantic similarity search;
- **Sparse vectors** for keyword-based exact search.

After running this pipeline, you will have a Qdrant collection with your legal dataset ready for hybrid retrieval over BM25 and dense embeddings: either `mxbai-embed-large-v1` or `text-embedding-3-small`. (A sketch of a hybrid-ready collection setup appears at the end of this entry.)

## Options for Embedding Inference

This pipeline equips you with two approaches for generating dense vectors:

- **Qdrant Cloud Inference:** conversion to vectors is handled directly in Qdrant;
- **External provider:** e.g., OpenAI for generating embeddings.

## Prerequisites

- A cluster on Qdrant Cloud:
  - a paid cluster in the US region if you want to use Qdrant Cloud Inference;
  - a Free Tier cluster if using an external provider (here, OpenAI).
- Qdrant cluster credentials: you'll be guided on how to obtain both the URL and API key from the Qdrant Cloud UI when setting up your cluster.
- An OpenAI API key (if you're not using Qdrant Cloud Inference).

P.S. To ask Qdrant-related retrieval questions, join the Qdrant Discord. Star the Qdrant n8n community node repo <3
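As referenced above, here is a minimal sketch of creating a collection configured for hybrid retrieval via Qdrant's REST API. The collection name `legal_qa`, the vector names `dense`/`bm25`, and the 1536-dimension size (matching `text-embedding-3-small`) are assumptions; the pipeline itself configures this through its Qdrant nodes.

```typescript
// Sketch of a hybrid-ready collection setup, runnable on Node 18+.
const QDRANT_URL = "https://YOUR-CLUSTER.cloud.qdrant.io:6333"; // placeholder
const QDRANT_API_KEY = "YOUR_API_KEY";                           // placeholder

async function createHybridCollection() {
  const res = await fetch(`${QDRANT_URL}/collections/legal_qa`, {
    method: "PUT",
    headers: { "api-key": QDRANT_API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      // Named dense vector for semantic similarity search.
      vectors: { dense: { size: 1536, distance: "Cosine" } },
      // Named sparse vector for BM25-style keyword search; the IDF
      // modifier tells Qdrant to weight rare terms more heavily.
      sparse_vectors: { bm25: { modifier: "idf" } },
    }),
  });
  console.log(await res.json());
}

createHybridCollection();
```

Named vectors let a single collection hold both representations, so a single query can later fetch dense and sparse candidates and fuse their scores at search time.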