by Jon Doran
### Summary

Engage multiple, uniquely configured AI agents (using different models via OpenRouter) in a single conversation. Trigger specific agents with @mentions, or let them all respond. Easily scalable by editing simple JSON settings.

### Overview

This workflow is for users who want to experiment with or utilize multiple AI agents with distinct personalities, instructions, and underlying models within a single chat interface, without complex setup. It solves the problem of managing and interacting with diverse AI assistants simultaneously for tasks like brainstorming, comparative analysis, or role-playing scenarios. It enables dynamic conversations with multiple AI assistants within a single chat interface. You can:

- Define multiple unique AI agents.
- Configure each agent with its own name, system instructions, and LLM model (via OpenRouter).
- Interact with specific agents using @AgentName mentions.
- Have all agents respond (in random order) if no specific agents are mentioned.
- Maintain conversation history across multiple turns.

It's designed for flexibility and scalability, allowing you to easily add or modify agents without complex workflow restructuring.

### Key Features

- **Multi-Agent Interaction:** Chat with several distinct AI personalities at once.
- **Individual Agent Configuration:** Customize the name, system prompt, and LLM for each agent.
- **OpenRouter Integration:** Access a wide variety of LLMs compatible with OpenRouter.
- **Mention-Based Triggering:** Direct messages to specific agents using @AgentName.
- **All-Agent Fallback:** Engages all defined agents in random order if no mentions are used.
- **Scalable Setup:** Agent configuration is centralized in a single Code node (as JSON).
- **Conversation Memory:** Remembers previous interactions within the session.

### How to Set Up

1. **Configure settings (Code nodes):**
   - Open the **Define Global Settings** Code node and edit the JSON to set user details (name, location, notes) and any system message instructions that all agents should follow.
   - Open the **Define Agent Settings** Code node and edit the JSON to define your agents (see the sketch below). Add or remove agent objects as needed. For each agent, specify:
     - `"name"`: The unique name for the agent (used for @mentions).
     - `"model"`: The OpenRouter model identifier (e.g., `"openai/gpt-4o"`, `"anthropic/claude-3.7-sonnet"`).
     - `"systemMessage"`: Specific instructions or a persona for this agent.
2. **Add OpenRouter credentials:**
   - Locate the **AI Agent** node and click the **OpenRouter Chat Model** node connected below it via the Language Model input.
   - In the 'Credential for OpenRouter API' field, select or create your OpenRouter API credentials.
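A minimal sketch of what the **Define Agent Settings** Code node might contain, following the standard n8n convention of returning an array of items. The agent names, models, and personas below are illustrative placeholders, not the template's defaults:

```javascript
// Define Agent Settings (Code node) - a minimal sketch. The agents below
// are placeholders; swap in your own names, OpenRouter models, and personas.
const agents = [
  {
    name: "Chad",
    model: "openai/gpt-4o",
    systemMessage: "You are Chad, a pragmatic product strategist. Keep answers short.",
  },
  {
    name: "Claude",
    model: "anthropic/claude-3.7-sonnet",
    systemMessage: "You are Claude, a careful analyst. Weigh trade-offs explicitly.",
  },
  {
    name: "Gemma",
    model: "google/gemma-2-27b-it",
    systemMessage: "You are Gemma, a creative brainstormer. Offer unconventional ideas.",
  },
];

// n8n Code nodes emit an array of items, each wrapping its data in `json`.
return agents.map((agent) => ({ json: agent }));
```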
### How to Use

- Start a conversation using the Chat Trigger input.
- To address specific agents, include @AgentName in your message. Agents respond sequentially in the order they are mentioned. Example: "@Gemma @Claude, please continue the count: 1" triggers Gemma first, followed by Claude.
- If your message contains no @mentions, all agents defined in **Define Agent Settings** respond in a randomized order. Example: "What are your thoughts on the future of AI?" triggers Chad, Claude, and Gemma (based on the default settings) in a random sequence.
- The workflow collects responses from all triggered agents and returns them as a single, formatted message.

### How It Works (Technical Details)

- **Settings nodes:** **Define Global Settings** and **Define Agent Settings** load your configurations.
- **Mention extraction:** The **Extract mentions** Code node parses the user's input (`chatInput`) from the **When chat message received** trigger, looking for @AgentName patterns that match the names defined in **Define Agent Settings**.
- **Agent selection:** If mentions are found, it creates a list of the corresponding agent configurations in the order they were mentioned. If no mentions are found, it creates a list of all defined agent configurations and shuffles it randomly (a sketch of this selection logic appears at the end of this section).
- **Looping:** The **Loop Over Items** node iterates through the selected agent list.
- **Dynamic agent execution:** Inside the loop, an If node (**First loop?**) checks whether this is the first agent responding. If yes (true path → **Set user message as input**), it passes the original user message to the agent. If no (false path → **Set last Assistant message as input**), it passes the previous agent's formatted output (`lastAssistantMessage`) to the next agent, creating a sequential chain. The **AI Agent** node receives the input message; its System Message and the model in the connected **OpenRouter Chat Model** node are dynamically populated using expressions referencing the current agent's data from the loop (`{{ $('Loop Over Items').item.json.* }}`). The **Simple Memory** node provides conversation history to the AI Agent, and the agent's response is formatted (e.g., `AgentName:\n\nResponse`) in the **Set lastAssistantMessage** node.
- **Response aggregation:** After the loop finishes, the **Combine and format responses** Code node gathers all the `lastAssistantMessage` outputs and joins them into a single text block, separated by horizontal rules (`---`), ready to be sent back to the user.

### Benefits

- **Scalability & flexibility:** Instead of complex branching logic, adding, removing, or modifying agents only requires editing simple JSON in the **Define Agent Settings** node, making setup and maintenance significantly easier, especially when managing multiple assistants.
- **Model choice:** Use the best model for each agent's specific task or persona via OpenRouter.
- **Centralized configuration:** Keeps agent setup tidy and manageable.

### Limitations

- **Sequential responses:** Agents respond one after another based on mention order (or randomly), not in parallel.
- **No direct agent-to-agent interaction (within a turn):** Agents cannot directly call or reply to each other *during* the processing of a single user message. Agent B sees Agent A's response only because the workflow passes it as input in the next loop iteration.
- **Delayed output:** The user receives the combined response only *after* all triggered agents have completed their generation.
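A hedged sketch of the mention-extraction and agent-selection logic described under How It Works, as it might look inside the **Extract mentions** Code node. The node references and regex follow the description above; everything else is an illustrative assumption:

```javascript
// Extract mentions (Code node) - a sketch of the selection logic described
// above. The node names follow the description; details are assumptions.
const chatInput = $('When chat message received').first().json.chatInput;
const agents = $('Define Agent Settings').all().map((item) => item.json);

// Collect @AgentName mentions in the order they appear in the message.
const mentioned = [];
for (const match of chatInput.matchAll(/@(\w+)/g)) {
  const agent = agents.find((a) => a.name.toLowerCase() === match[1].toLowerCase());
  if (agent && !mentioned.includes(agent)) mentioned.push(agent);
}

// Fall back to all agents, shuffled (Fisher-Yates), when nothing is mentioned.
let selected = mentioned;
if (selected.length === 0) {
  selected = [...agents];
  for (let i = selected.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [selected[i], selected[j]] = [selected[j], selected[i]];
  }
}

return selected.map((agent) => ({ json: agent }));
```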
by Eduard
This workflow creates a documentation system for n8n instances using Docsify.js. It serves a dynamic documentation website that allows users to:

- View an overview of all workflows in a tabular format
- Filter workflows by tags
- Access automatically generated documentation for each workflow
- Edit documentation with a live Markdown preview
- Visualize workflow structures using Mermaid.js diagrams

> 📺 Check out the short 2-min demonstration on LinkedIn. Don't forget to connect!

### 🔧 Key Components

- **Main Documentation Portal**
  - Serves a Docsify-powered website
  - Provides a navigation sidebar with workflow tags
  - Displays workflow status, creation date, and documentation links
- **Documentation Generator**
  - Uses a GPT model to auto-generate workflow descriptions
  - Creates Mermaid.js diagrams of workflow structures
  - Maintains a consistent documentation format
- **Live Editor**
  - Split-screen Markdown editor with preview
  - Real-time Mermaid diagram rendering
  - Save/Cancel functionality

### ⚙️ Technical Details

**Environment Setup**

- Requires write access to the specified project directory
- Uses environment variables for n8n instance URL configuration
- Implements webhook endpoints for serving documentation (see the sketch at the end of this section)

**⚠️ Security Considerations**

> Note: The current implementation doesn't include authentication for editing. Consider adding authentication for production use.

**Dependencies**

- Docsify.js for documentation rendering
- Mermaid.js for workflow visualization
- OpenAI GPT for documentation generation

### 🔍 Part of the n8n Observability Series

This workflow is part of a broader series focused on n8n instance observability. Check out these related workflows:

- **Workflow Dashboard**: Get comprehensive analytics of your n8n instance
- **Visualize Your n8n Workflows with Mermaid.js**: Create beautiful workflow visualizations

Each workflow in this series helps you better understand and manage your n8n automation ecosystem!
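To make the webhook-serving approach concrete, here is a minimal, hypothetical sketch of a Code node that builds a Docsify index page for a Respond to Webhook node to return. The environment variable name and markup are assumptions, not this template's actual implementation:

```javascript
// Hypothetical sketch: build a minimal Docsify index page that a
// "Respond to Webhook" node could return. The env variable name and
// markup are assumptions about this template, not its real code.
const baseUrl = $env.N8N_INSTANCE_URL || 'http://localhost:5678';

const html = `<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>n8n Workflow Docs</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/docsify@4/themes/vue.css">
  </head>
  <body>
    <div id="app"></div>
    <script>
      window.$docsify = {
        name: 'n8n Workflows',
        loadSidebar: true,                    // sidebar with workflow tags
        basePath: '${baseUrl}/webhook/docs/', // docs served from webhook endpoints
      };
    </script>
    <script src="https://cdn.jsdelivr.net/npm/docsify@4"></script>
    <script src="https://cdn.jsdelivr.net/npm/docsify-mermaid@2"></script>
    <script src="https://cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.js"></script>
  </body>
</html>`;

return [{ json: { html } }];
```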
by Don Jayamaha Jr
📈 Get daily and on-demand Tesla (TSLA) trading signals via Telegram—powered by GPT-4.1 and real-time market data. This is the central AI supervisor that orchestrates seven sub-agents for technical analysis, price pattern recognition, and news sentiment. Reports are delivered in structured Telegram-ready HTML, optimized for traders seeking fast, intelligent decision-making signals.

⚠️ This master agent requires 7 connected sub-workflows to function. One of them, the News & Sentiment Agent, also requires a DeepSeek Chat API key for language processing.

### 🔌 Required Sub-Workflows

You must download and publish the following workflows:

1. Tesla Financial Market Data Analyst Tool
2. Tesla News and Sentiment Analyst Tool (requires a DeepSeek Chat API key)
3. Tesla 15min Indicators Tool
4. Tesla 1hour Indicators Tool
5. Tesla 1day Indicators Tool
6. Tesla 1hour & 1day Klines Tool
7. Tesla Quant Technical Indicators Webhooks Tool (requires an Alpha Vantage Premium API key)

📍 See all tools at: 🔗 https://n8n.io/creators/don-the-gem-dealer/

### 🔍 What This Agent Does

- Listens to your trading query via Telegram
- Calls the Financial Analyst and News & Sentiment Analyst, which aggregate:
  - RSI, MACD, BBANDS, SMA, EMA, ADX
  - Candlestick pattern + volume divergence analysis
  - News summaries and sentiment scoring via DeepSeek Chat
- GPT-4.1 composes the final structured TSLA trade report with:
  - Spot and leverage setups
  - Signal rationale
  - Confidence score
  - Sentiment tag
  - News summary

### 🧠 Output Example

**TSLA Trading Report (Daily Summary)**

**Spot Trade**
- Action: Buy
- Entry: 172.45
- TP: 182.00
- SL: 169.80
- Signal: RSI bounce + Bullish Engulfing
- Sentiment: Neutral

**Leveraged Position**
- Position: Long
- Leverage: 3x
- TP: 186
- SL: 170
- Confidence: High (83/100)

**📰 Top News**
- Tesla Model Y delivery surge – Electrek
- Options market pricing in upside – Bloomberg
- FSD delayed in Canada – TeslaNorth

### 🛠️ Setup Instructions

1. **Import all 8 workflows.** Ensure all sub-agents above are published in your n8n instance.
2. **Create your Telegram bot.** Use @BotFather to generate the token and connect it to the trigger/send nodes.
3. **Connect OpenAI GPT-4.1.** Add your OpenAI credentials for GPT-4.1 in the designated node.
4. **Add a DeepSeek Chat API key.** Sign up at https://deepseek.com and insert your DeepSeek Chat credentials in the News Agent.
5. **Add an Alpha Vantage Premium API key.** Sign up at https://www.alphavantage.co/premium/ and use HTTP Header Auth for the webhook-based indicator fetchers.
6. **Replace the Telegram ID.** Update the placeholder <<replace your ID here>> with your actual Telegram numeric ID in the auth node (see the sketch below).

### 📌 Included Sticky Notes

✅ Telegram Bot Setup
✅ Agent Routing & Memory
✅ Financial vs. Sentiment Trigger Flow
✅ Report Formatting (HTML)
✅ API Requirements (GPT-4.1, DeepSeek, Alpha Vantage)
✅ Troubleshooting & Licensing

### 🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: LinkedIn – Don Jayamaha

🚀 Deploy the Tesla Quant Trading AI system with GPT-4.1, DeepSeek Chat, and Alpha Vantage Premium—right into Telegram. All 8 workflows are required.

🎥 **Tesla Quant AI Agent – Live Demo**: Experience the power of the Tesla Quant Trading AI Agent in action.
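Setup step 6 refers to an auth gate keyed on your Telegram numeric ID. A minimal sketch of what such a check might look like in a Code node follows; the placeholder ID and field paths follow the standard Telegram update schema but are assumptions about this template:

```javascript
// Hypothetical sketch of the Telegram auth gate from setup step 6:
// only messages from your own numeric Telegram ID pass through.
const AUTHORIZED_ID = 123456789; // <<replace your ID here>>

const update = $input.first().json;
const senderId = update.message?.from?.id;

if (senderId !== AUTHORIZED_ID) {
  // Drop unauthorized traffic; downstream nodes never run for it.
  return [];
}

return $input.all();
```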
by Marian Tcaciuc
### Manage Calendar with Voice & Text Commands using GPT-4, Telegram & Google Calendar

This n8n workflow transforms your Telegram bot into a personal AI calendar assistant, capable of understanding both voice and text commands in Romanian, and managing your Google Calendar using the GPT-4 model via LangChain. Whether you want to create, update, fetch, or delete events, you can simply speak or write your request to your Telegram bot, and the assistant takes care of the rest.

### 🚀 Features

- Voice command support using Telegram voice messages (.ogg); see the routing sketch at the end of this section
- Transcription using OpenAI Whisper
- Natural language understanding with GPT-4 via LangChain
- Google Calendar integration:
  - ✅ Create events
  - 🔁 Update events
  - ❌ Delete events
  - 📅 Fetch events
- Responses sent back via Telegram

### 🛠️ Step-by-Step Setup Instructions

1. **Create a Telegram bot**
   - Go to @BotFather on Telegram.
   - Send /newbot and follow the instructions.
   - Save the bot token.
2. **Configure the Telegram Trigger node**
   - Paste the Telegram token into the Telegram Trigger and Telegram nodes.
   - Set updates to ["message"].
3. **Set up OpenAI credentials**
   - Get an OpenAI API key from https://platform.openai.com
   - Create a credential in n8n for OpenAI. This is used for both transcription and AI reasoning.
4. **Set up Google Calendar**
   - In Google Cloud Console: enable the Google Calendar API, set up OAuth2 credentials, and add your n8n redirect URI (usually https://yourdomain/rest/oauth2-credential/callback).
   - Create a credential in n8n using Google Calendar OAuth2.
   - Grant access to your calendar (e.g., "Family" calendar).

### ⚙️ Customization Options

- 🗣️ **Change language or locale**: The transcription node uses "en" for English. Change it to another locale if needed.
- ✏️ **Edit the prompt**: Modify the prompt in the AI Agent node to include your name, work schedule, or specific behavior expectations.
- 📆 **Change calendar logic**: Adjust time ranges or filters in the Get Events node, or add custom logic before Create Event (e.g., validation, conflict checks).

### 📚 Helpful Tips

- Make sure n8n has HTTPS enabled to receive Telegram updates.
- Test the flow first using only text, then voice.
- Use AI memory or vector stores (like Supabase) if you want context-aware planning in the future.
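To illustrate how voice and text messages take different paths, here is a hedged sketch of a Code node deciding whether an incoming Telegram update needs Whisper transcription first. The routing flag is an illustrative assumption; the actual template may use an If/Switch node instead:

```javascript
// Sketch: route Telegram updates to transcription (voice) or straight to
// the agent (text). The `needsTranscription` flag is an assumption.
const msg = $input.first().json.message ?? {};

if (msg.voice) {
  // Telegram voice notes arrive as .ogg files referenced by file_id;
  // a later node downloads the file and sends it to OpenAI Whisper.
  return [{ json: { needsTranscription: true, fileId: msg.voice.file_id } }];
}

return [{ json: { needsTranscription: false, text: msg.text ?? '' } }];
```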
by Jay Emp0
### 🔥 Upgrade to V3: Longer blogs, higher SEO ranking with images, charts, and tables

We've released Version 3 of our AI-Powered Blog Automation workflow. We heard your complaints and made a complete redesign built for serious content creators.

📝 Read the new articles generated by v3
🛒 View the workflow on n8n.io

- ✅ **Longer blog contents**: a 3-agent AI architecture (Planner, Writer, Editor) simulates a full content team for structure, writing, and QC, with a more continuous flow.
- 📈 **2x bump in SEO ranking**: an SEO scoring system grades every article on keyword density, readability, structure, backlinks, and uniqueness. If quality doesn't meet the threshold, the content is revised again.
- 🖼️ **Smarter visuals**: in-blog images via Leonardo, charts via QuickChart, plus tables and web-scraped outbound links.
- 🕸️ **Multi-platform publishing**: auto-posts to WordPress, Twitter (X), and Dev.to.
- 🕵️‍♂️ **Research Agent**: adds quotes, stats, facts, outbound links, entities, and references to improve article credibility.

### Content Farming V2: AI-Powered Blog Automation for WordPress

This workflow automatically generates and publishes 10 blog posts per day to a WordPress site. It collects tech-related news articles, filters and analyzes them for relevance, expands them with research, generates SEO-optimized long-form articles using AI, creates a matching image using Leonardo AI, and publishes them via the WordPress REST API. Every step is tracked and stored in MongoDB for reference and performance tracking. You can see demo results for the AI-generated articles here: Emp0 Articles

### How it works

1. A scheduler runs daily to fetch the latest news from RSS feeds including BBC, TechCrunch, Wired, MIT Tech Review, HackerNoon, and others.
2. The RSS data is normalized and filtered to include only articles published within the past 24 hours (see the sketch at the end of this section).
3. Each article is passed through an OpenAI-powered classifier to check for relevance to predefined user topics like AI, robotics, or tech policy.
4. Relevant articles are then aggregated, researched, and summarized with supporting sources and citations.
5. An AI agent generates five long-tail SEO blog title ideas, ranks them by uniqueness and performance score, and selects the top one.
6. A blog outline is created, including H1 and H2 headers, keyword targeting, content structure, and featured-snippet optimization.
7. A full-length article (1000 to 1500 words) is generated based on the outline, with analogies, citations, examples, and keyword density maintained.
8. SEO metadata is produced, including meta title, description, image alt text, slug, and a readability audit.
9. An AI-generated image is created based on the blog theme using Leonardo AI, enhanced for emotional storytelling and visual consistency.
10. The blog article, metadata, and image are uploaded to WordPress as a draft, the image is attached, Yoast SEO metadata is set, and the article is published.
11. All outputs, including article versions, metadata, generation steps, and final blog URLs, are stored in MongoDB to allow for future analytics and feedback.

### Requirements

To run this project, you need accounts and API access for the following:

| Tool | Purpose | Notes |
|-------------|---------------------------------------------------------------|------------------------------------------------------------------------|
| OpenAI | Used for blog classification, generation, summarization, SEO | Around $0.20 per day using GPT-4o-mini. Estimated monthly: $6 |
| MongoDB | Stores data flexibly, including drafts, titles, metadata, logs | Free tier on MongoDB Atlas offers 512 MB, enough for 64,000 articles |
| Leonardo AI | Generates featured images for blog articles | $9 for 3500 credits, $5 monthly top-up needed for 300 images |
| WordPress | Final publishing platform via REST API | Hosted on Hostinger for $15/year including domain |

### Setup Instructions

1. Import the provided JSON file into your n8n instance.
2. Configure these credentials in n8n: OpenAI API key, MongoDB Atlas connection string, HTTP Header Auth for Leonardo AI, and WordPress REST API credentials.
3. Modify the classifier and prompt nodes to reflect your preferred content themes.
4. Adjust the scheduler nodes if you want to change post frequency or publishing times.
5. Run the n8n instance continuously using Docker, PM2, or a hosted automation platform.

### Cost Estimate

| Component | Daily Usage | Monthly Cost Estimate |
|-------------|-------------------------------------|----------------------------|
| OpenAI | 10 posts per day | ~$6 |
| Leonardo AI | 10 images per day (15 credits each) | ~$14 ($9 base + $5 top-up) |
| MongoDB | Free up to 512 MB | $0 |
| WordPress | Hosting and domain | ~$1.25 |
| **Total** | | ~$21/month |

### Observations and Learnings

This system can scale daily article publishing with zero manual effort. However, current limitations include inconsistent blog length and occasional coherence issues. To address this, I plan to build a feedback loop within the workflow:

- An SEO Commentator Agent will assess keyword strength, structure, and discoverability.
- An Editor-in-Chief Agent will review tone, clarity, and narrative structure.

Both agents will loop back suggestions to the content generator, improving each draft until it meets human-level standards. The final goal is to consistently produce high-quality, readable, SEO-optimized content that is indistinguishable from human writing.
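As a hedged illustration of step 2 of "How it works" (normalizing RSS items and keeping only those from the past 24 hours), here is a Code-node sketch. Field names like `pubDate` vary per feed and are assumptions:

```javascript
// Sketch of the freshness filter in step 2: keep only RSS items published
// within the past 24 hours. Field names (`title`, `link`, `pubDate`) are
// typical RSS node outputs but may differ per feed.
const DAY_MS = 24 * 60 * 60 * 1000;
const cutoff = Date.now() - DAY_MS;

const fresh = $input.all()
  .map((item) => item.json)
  .map((a) => ({
    title: a.title?.trim() ?? '',
    link: a.link,
    publishedAt: new Date(a.pubDate ?? a.isoDate ?? 0).getTime(),
  }))
  .filter((a) => a.publishedAt >= cutoff);

return fresh.map((a) => ({ json: a }));
```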
by Naveen Choudhary
### Who is this for?

Marketing, content, and enablement teams that need a quick, human-readable summary of every new video published by the YouTube channels they care about—without leaving Slack.

### What problem does this workflow solve?

Manually checking multiple channels, skimming long videos, and pasting the highlights into Slack wastes time. This template automates the whole loop: detect a fresh upload from your selected channels → pull subtitles → distill the key take-aways with GPT-4o-mini → drop a neatly-formatted digest in Slack.

### What this workflow does

1. A Schedule Trigger fires every 10 min, then grabs a list of YouTube RSS feeds from a Google Sheet.
2. HTTP + XML nodes fetch and parse each feed; only brand-new videos continue.
3. The YouTube API fetches the title/description; RapidAPI grabs English subtitles.
4. Code nodes build an AI payload; OpenAI returns a JSON summary + article.
5. A formatter turns that JSON into Slack Block Kit, and Slack posts it (see the sketch at the end of this section).
6. Processed links are appended back to the "Video Links" sheet to prevent dupes.

### Setup

1. Make a copy of this Google Sheet and connect a Google Sheets OAuth2 credential with edit rights.
2. Slack app: create it → add chat:write, channels:read, app_mention; enable Event Subscriptions; install it and store the Bot OAuth token in an n8n Slack credential.
3. RapidAPI key for https://yt-api.p.rapidapi.com/subtitles (300 free calls/mo) → save as HTTP Header Auth.
4. OpenAI key → save in an OpenAI credential.
5. Add your RSS feed URLs to the "RSS Feed URLs" tab; press Execute Workflow.

### How to customise

- Adjust the schedule interval or freshness window in "If newly published".
- Swap the OpenAI model or prompt for shorter/longer digests.
- Point the Slack node at a different channel or DM.
- Extend the AI payload to include thumbnails or engagement stats.

### Use-case ideas

- **Product marketing**: Instantly brief sales & CS teams when a competitor uploads a feature demo.
- **Internal learning hub**: Auto-summarise conference talks and share bullet-point notes with engineers.
- **Social media managers**: Get ready-to-post captions and key moments for re-purposing across platforms.
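Step 5 converts the model's JSON summary into Slack Block Kit. A minimal hedged sketch of that formatter follows; the summary field names (`title`, `bullets`, `url`) are assumptions about the AI payload, not the template's exact schema:

```javascript
// Sketch of step 5: turn the AI's JSON summary into Slack Block Kit.
// The summary shape ({ title, bullets, url }) is an illustrative assumption.
const summary = $input.first().json;

const blocks = [
  {
    type: 'header',
    text: { type: 'plain_text', text: summary.title ?? 'New video digest' },
  },
  {
    type: 'section',
    text: {
      type: 'mrkdwn',
      text: (summary.bullets ?? []).map((b) => `• ${b}`).join('\n'),
    },
  },
  {
    type: 'context',
    elements: [{ type: 'mrkdwn', text: `<${summary.url}|Watch on YouTube>` }],
  },
];

// The Slack node's Blocks field expects a JSON string.
return [{ json: { blocks: JSON.stringify(blocks) } }];
```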
by Jimleuk
This n8n template demonstrates one approach to achieving more natural, less frustrating conversations with AI agents: reducing interruptions by predicting the end of user utterances.

When we text or chat casually, it's not uncommon to break our sentences over multiple messages, or, with voice, to break our speech with the odd pause or umms and ahhs. If an agent replies to every message, it's likely to interrupt us before we finish our thoughts, and that can get very annoying! Previously, I demonstrated a simple technique of buffering each incoming message for 5 seconds, but that approach still suffers in scenarios where more time is needed. This technique has no arbitrary time limit; instead, it uses AI to figure out when it's the agent's turn based on the user's messages, allowing the user to take all the time they need.

### How it works

- Telegram messages are received, but no reply is generated for them by default. Instead, they are sent to the prediction subworkflow to determine whether a reply should be generated.
- The prediction subworkflow begins by checking Redis for the current user's prediction session state.
- If this is a new "utterance", it kicks off the "predict end of utterance" loop, whose purpose is to buffer messages in a smart way (sketched at the end of this section).
- New user messages can continue to be accepted by the workflow until enough is collected for the prediction classifier to determine that the end of the utterance has been reached.
- The loop is then broken, and the buffered chat messages are combined and sent to the AI agent to generate a response, which is sent to the user via the Telegram node.
- The prediction session state is then deleted, signalling that the workflow is ready to start again with a new message.

### How to use

- This system sits between your preferred chat platform and the AI agent, so all you need to do is replace the Telegram nodes as required.
- Where LLM-only prediction isn't working well enough, consider more traditional code-based checks of heuristics to improve detection.
- Ideally you'll want a fast but accurate LLM so your user isn't waiting longer than they have to. At the time of writing, Gemini-2.5-flash-lite was the fastest in testing, but keep a lookout for smaller and more powerful LLMs in the future.

### Requirements

- Gemini for LLM
- Redis for session management
- Telegram for chat platform
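A hedged, standalone sketch of the Redis-backed buffering described above: append each incoming message to the user's open utterance session, ask a fast LLM classifier whether the utterance looks complete, and flush the buffer once it does. The template itself uses n8n's Redis nodes; the key names, TTL, and classifier prompt here are illustrative assumptions:

```javascript
// Standalone sketch of the "predict end of utterance" buffering logic.
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

// Append an incoming chat message to the user's open utterance buffer.
async function bufferMessage(userId, text) {
  const key = `utterance:${userId}`;
  await redis.rPush(key, text);
  await redis.expire(key, 300); // stale sessions clean themselves up
}

// Ask a fast LLM whether the buffered messages form a complete utterance.
// `classify` stands in for a Gemini call returning 'complete' | 'incomplete'.
async function isUtteranceComplete(userId, classify) {
  const parts = await redis.lRange(`utterance:${userId}`, 0, -1);
  const verdict = await classify(
    `Has the user finished their thought? Answer "complete" or "incomplete".\n\n${parts.join('\n')}`
  );
  return verdict.trim().toLowerCase() === 'complete';
}

// Once complete: combine the buffer for the agent and reset the session.
async function flushUtterance(userId) {
  const key = `utterance:${userId}`;
  const combined = (await redis.lRange(key, 0, -1)).join(' ');
  await redis.del(key);
  return combined;
}
```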
by Mihai Farcas
### Who is this for?

This workflow is for anyone who wants easier access to their Odoo sales data without writing complex queries.

### Use case

To get a clear overview of your sales data in Odoo, you typically need to extract it manually in order to analyse it. This workflow uses OpenAI's language models to create an intelligent chatbot that provides conversational access to your Odoo sales opportunity data.

### How it works

- Creates a summary of all Odoo sales opportunities using OpenAI (see the sketch below)
- Uses that summary as context for the OpenAI chat model
- Keeps the summary up to date using a schedule trigger

### Set up steps

1. Configure the Odoo credentials.
2. Configure OpenAI credentials.
3. Toggle "Make Chat Publicly Available" in the Chat Trigger node.
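A hedged sketch of how a Code node might compress Odoo opportunity records into a context block for the chat model. The field names (`name`, `expected_revenue`, `stage_id`, `probability`) follow Odoo's crm.lead model but are assumptions about what this template's Odoo node returns:

```javascript
// Sketch: compress Odoo sales opportunities into a compact context block
// for the chat model's system prompt. Field names are assumptions based
// on Odoo's crm.lead model.
const opportunities = $input.all().map((item) => item.json);

const lines = opportunities.map((o) =>
  `- ${o.name} | stage: ${o.stage_id?.[1] ?? 'unknown'} | ` +
  `expected revenue: ${o.expected_revenue ?? 0} | probability: ${o.probability ?? 0}%`
);

const summaryContext =
  `Current Odoo sales opportunities (${lines.length} total):\n` + lines.join('\n');

return [{ json: { summaryContext } }];
```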
by Femi Ad
"Ade Technical Analyst" is a dual-workflow AI system combining conversational intelligence with visual chart analysis through Telegram. The system features 11 primary nodes for conversation management and 8 secondary nodes for chart generation and analysis. Core Components: Telegram Integration: Message handling with dynamic typing indicators AI Personality: "Ade" - a financial analyst with 50+ years NYSE/LSE experience using Claude 3.5 Sonnet Chart Generation: TradingView integration via Chart-IMG API with MACD and volume indicators Visual Analysis: GPT-4O vision for technical pattern recognition Memory System: Session-based conversation context retention Target Users Individual traders seeking professional-grade analysis without subscription costs Financial advisors wanting 24/7 AI-powered client support Investment educators needing interactive learning tools Fintech companies requiring white-label analysis solutions Setup Requirements Critical Security Fix Needed: Remove hardcoded API key from Chart-IMG node immediately Store all credentials securely in n8n credential manager Required APIs: OpenRouter (Claude 3.5 Sonnet) OpenAI (GPT-4O vision) Chart-IMG API Telegram Bot Token Technical Prerequisites: n8n version 1.7+ with Langchain nodes Webhook configuration for Telegram Dual-workflow setup with proper ID referencing Workflow Requirements Security Compliance: Never hardcode API keys in workflow JSON files Use n8n credential manager for all sensitive data Implement proper session isolation for user data Include mandatory financial disclaimers Performance Specifications: Model temperature: 0.8 for balanced responses Token limit: 500 for optimized performance Dark theme charts with professional indicators Session-based memory management Need help customizing? Contact me for consulting and support or add me on LinkedIn
by Oneclick AI Squad
Transform your meetings into actionable insights automatically! This workflow captures meeting audio, transcribes conversations, generates AI summaries, and emails the results to participants—all without manual intervention.

### What's the Goal?

- **Auto-record meetings** when they start, and stop when they end
- **Transcribe audio** to text using the Vexa Bot integration
- **Generate intelligent summaries** with AI-powered analysis
- **Email summaries** to meeting participants automatically
- **Eliminate manual note-taking** and post-meeting admin work
- **Never miss important discussions** or action items again

### Why Does It Matter?

- **Save 90% of post-meeting time**: No more manual transcription or summary writing
- **Never lose key information**: Automatic capture ensures nothing falls through the cracks
- **Improve team productivity**: Focus on discussions, not note-taking
- **Perfect meeting records**: Searchable transcripts and summaries for future reference
- **Instant distribution**: Summaries reach all participants immediately after meetings

### How It Works

**Step 1: Meeting Detection & Recording**

- **Start Meeting Trigger**: Detects when a meeting begins via a Google Meet webhook
- **Launch Vexa Bot**: Automatically joins the meeting and starts recording
- **End Meeting Trigger**: Detects the meeting end and stops recording

**Step 2: Audio Processing & Transcription**

- **Stop Vexa Bot**: Ends recording and retrieves the audio file
- **Fetch Meeting Audio**: Downloads the recorded audio from Vexa Bot
- **Transcribe Audio**: Converts speech to text using AI transcription

**Step 3: AI Summary Generation**

- **Prepare Transcript**: Formats the transcribed text for AI processing
- **Generate Summary**: An AI model creates a concise meeting summary with key discussion points, decisions made, action items assigned, and next steps identified (see the prompt sketch at the end of this section)

**Step 4: Distribution**

- **Send Email**: Automatically emails the summary to all meeting participants

### Setup Requirements

- **Google Meet integration**: Configure the Google Meet webhook and API credentials, set up meeting detection triggers, and test with a sample meeting.
- **Vexa Bot configuration**: Add Vexa Bot API credentials for recording, configure audio file retrieval settings, and set recording quality and format preferences.
- **AI model setup**: Configure an AI transcription service (e.g., OpenAI Whisper, Google Speech-to-Text), set up AI summary generation with custom prompts, and define summary format and length preferences.
- **Email configuration**: Set up SMTP credentials for email distribution, create email templates for meeting summaries, and configure participant-list extraction from meeting metadata.

### Import Instructions

1. **Get the workflow JSON**: Copy the workflow JSON code.
2. **Open the n8n editor**: Navigate to your n8n dashboard.
3. **Import the workflow**: Click the menu (⋯) → "Import from Clipboard" → paste the JSON → Import.
4. **Configure credentials**: Add API keys for Google Meet, Vexa Bot, AI services, and SMTP.
5. **Test the workflow**: Run a test meeting to verify end-to-end functionality.

Your meetings will now automatically transform into actionable summaries delivered to your inbox!
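A hedged sketch of the Step 3 "Prepare Transcript" / "Generate Summary" handoff: building a summarization prompt from the transcript. The prompt wording and expected output shape are illustrative assumptions, not the template's exact configuration:

```javascript
// Sketch of Step 3: assemble a summarization prompt from the transcript.
// The prompt text and expected JSON shape are illustrative assumptions.
const transcript = $input.first().json.transcript ?? '';

const prompt = [
  'Summarize the following meeting transcript. Return JSON with:',
  '- keyPoints: main discussion points',
  '- decisions: decisions made',
  '- actionItems: tasks with owners where stated',
  '- nextSteps: agreed follow-ups',
  '',
  transcript,
].join('\n');

return [{ json: { prompt } }];
```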
by Julian Kaiser
This automated workflow scrapes and processes the monthly "Who is Hiring" thread from Hacker News, transforming raw job listings into structured data for analysis or integration with other systems. Perfect for job seekers, recruiters, or anyone looking to monitor tech job market trends.

### How it works

1. Automatically fetches the latest "Who is Hiring" thread from Hacker News
2. Extracts and cleans relevant job posting data using the HN API
3. Splits and processes individual job listings into a structured format
4. Parses key information like location, role, requirements, and company details
5. Outputs clean, structured data ready for analysis or export

### Set up steps

1. Configure API access to [Hacker News](https://github.com/HackerNews/API) (no authentication required).
2. Follow the steps to get your cURL command from https://hn.algolia.com/ (see the sketch at the end of this section).
3. Set up the desired output format (JSON structured data or a custom format).
4. Optional: Configure additional parsing rules for specific job listing information.
5. Optional: Set up integration with your preferred storage or analysis tools.

The workflow transforms unstructured job listings into clean, structured data following this pattern:

- **Input**: Raw HN thread comments
- **Process**: Extract, clean, and parse text
- **Output**: Structured job listing data

This template saves hours of manual work collecting and organizing job listings, making it easier to track and analyze tech job opportunities from Hacker News's popular monthly hiring threads.
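A hedged sketch of the fetch-and-split stage: locate the latest thread via the Algolia HN search API (the same service behind the cURL command in setup step 2), then pull its top-level comments from the official HN API. The query parameters are assumptions; build your own query at hn.algolia.com:

```javascript
// Hedged sketch: find the latest "Who is Hiring" thread via Algolia's HN
// search API, then fetch its top-level comments from the official HN API.
const search = await fetch(
  'https://hn.algolia.com/api/v1/search_by_date?' +
  new URLSearchParams({ query: '"Ask HN: Who is hiring?"', tags: 'story' })
).then((r) => r.json());

const thread = search.hits[0]; // most recent monthly thread
const story = await fetch(
  `https://hacker-news.firebaseio.com/v0/item/${thread.objectID}.json`
).then((r) => r.json());

// Each kid is one job listing; fetch a few and strip the HTML for parsing.
const listings = [];
for (const id of (story.kids ?? []).slice(0, 5)) {
  const comment = await fetch(
    `https://hacker-news.firebaseio.com/v0/item/${id}.json`
  ).then((r) => r.json());
  if (comment?.text) {
    listings.push(comment.text.replace(/<[^>]+>/g, ' ').trim());
  }
}

console.log(listings);
```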
by Mutasem
### How it works

This workflow assigns a priority to each Todoist item in your inbox, based on a list of projects that you define in the workflow.

### Setup

1. Add your Todoist credentials.
2. Add your OpenAI credentials.
3. Set your project names and the priority for each (see the sketch below).
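A hedged sketch of the project-to-priority mapping from setup step 3. The project names are placeholders, and the matching-by-keyword approach is an illustrative assumption; note that the Todoist API's priority scale runs from 1 (normal) to 4 (urgent):

```javascript
// Sketch: map inbox tasks to priorities by project keyword. Project names
// are placeholders; Todoist API priority runs 1 (normal) to 4 (urgent).
const priorityByProject = {
  'Client Work': 4,
  'Health': 3,
  'Household': 2,
};

return $input.all().map((item) => {
  const task = item.json;
  const match = Object.keys(priorityByProject)
    .find((name) => (task.content ?? '').toLowerCase().includes(name.toLowerCase()));
  return { json: { ...task, priority: match ? priorityByProject[match] : 1 } };
});
```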