by Panth1823
Keep your job listings database clean without manual checks. Every three days, this workflow fetches all active jobs from your Postgres database, runs each application URL through a validation check, identifies dead links via HTTP status codes and soft-404 redirect detection, then marks failed entries as inactive in both Supabase and Google Sheets simultaneously.

## Who it's for

Teams running a job aggregator, career platform, or internal hiring tracker who store job listings in Postgres and want stale or broken apply links removed automatically — without waiting for user reports.

## How it works

1. A Schedule Trigger fires every 3 days
2. All active jobs are fetched from your Postgres (Supabase) database via a SQL query
3. A Prepare URLs node filters out any rows with missing, malformed, or non-HTTP URLs before they're checked
4. An HTTP Request node sends a HEAD request to each apply_url
5. A Find Dead Jobs code node analyzes each response and flags a job as dead if:
   - Status code is 404 or 410
   - DNS resolution fails (ENOTFOUND)
   - Connection is refused (ECONNREFUSED)
   - A 301/302/307 redirect points to a different path — indicating the job was removed and the ATS is silently redirecting (soft-404 detection)
6. If dead jobs are found, an IF node routes them to both update nodes in parallel:
   - Supabase (Postgres) — status set to inactive via parameterized SQL
   - Google Sheets — row updated to reflect the new status
7. If no dead jobs are detected, the workflow exits cleanly with no writes

## Setup

1. Connect your Postgres credentials and confirm the query in the Fetch Active Jobs node matches your table and column names (apply_url, job_hash, job_title)
2. Connect your Google Sheets credentials and set the Resource ID and Sheet Name in the Mark Inactive node
3. Confirm the inactive status value in the Postgres update query matches what your app expects
4. (Optional) Adjust the soft-404 redirect detection logic in the Find Dead Jobs node if your ATS platforms use non-standard redirect patterns

## Database columns expected

job_hash (unique identifier), apply_url, job_title, status

## Requirements

- Self-hosted or cloud n8n instance
- Supabase (or any Postgres-compatible) database with an active jobs table
- Google Sheets with a matching jobs log
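The dead-link rules above boil down to a small classifier. This is an illustrative Python sketch, not the template's actual Code node; the field names (`status_code`, `error_code`, `final_url`) are assumptions about what the HTTP Request node would hand over:

```python
from urllib.parse import urlparse

def is_dead(check: dict) -> bool:
    """Classify one URL check result as dead or alive.

    `check` holds the outcome of a HEAD request (hypothetical shape):
      status_code  - HTTP status, or None if the request failed outright
      error_code   - network error string, e.g. "ENOTFOUND"
      original_url - the apply_url that was requested
      final_url    - where a redirect pointed (assumes redirects are not
                     auto-followed, so 3xx statuses are visible)
    """
    # Hard failures: the posting is gone or the host is unreachable.
    if check.get("status_code") in (404, 410):
        return True
    if check.get("error_code") in ("ENOTFOUND", "ECONNREFUSED"):
        return True
    # Soft-404: a redirect landing on a different path usually means the
    # ATS silently sent us to a careers home page.
    if check.get("status_code") in (301, 302, 307):
        old_path = urlparse(check["original_url"]).path
        new_path = urlparse(check.get("final_url", "")).path
        if old_path != new_path:
            return True
    return False
```

A redirect that keeps the same path (for example, an HTTP-to-HTTPS upgrade) is deliberately treated as alive, which is why the comparison is on paths rather than full URLs.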
by Yaron Been
## Description

This workflow automatically scores B2B leads by fetching and analyzing real company website data. It helps marketing and sales teams qualify inbound leads without manually researching each company.

## Overview

A marketing employee submits a company name, LinkedIn URL, website URL, and scoring criteria through a Tally form. The workflow fetches the company's actual homepage, strips it to readable text, and runs it through a chain of three AI agents: one to normalize the input, one to extract intelligence from the real website content (tech stack, industry, company size signals), and one to score the lead against the submitted criteria. The scored, enriched lead is logged to a Google Sheet with a grade and recommended next action for the sales rep.

## Tools Used

- **n8n**: The automation platform that orchestrates the workflow.
- **Tally**: Free form builder used as the input layer where marketing employees submit companies to research. Create forms at tally.so.
- **OpenAI (GPT-5.4)**: Powers three chained AI agents for data normalization, website content analysis, and lead scoring.
- **Google Sheets**: Logs every scored lead with timestamp, company details, tech stack, score, grade, reasoning, and recommended action.

## How it works

1. A marketing employee fills out a Tally form with the company name, LinkedIn URL, website URL, and which scoring criteria matter (company size, industry fit, tech stack, geography, funding stage, hiring signals).
2. The Tally Trigger fires and sends the raw submission to Agent 1, which normalizes the payload into clean structured fields.
3. An HTTP Request node fetches the company's homepage HTML directly.
4. A Code node strips the HTML down to plain readable text (removes scripts, styles, navigation, footers) and truncates it to 6,000 characters.
5. Agent 2 reads the actual homepage content and extracts structured intelligence: what the company does, their tech stack, company size signals, industry, and signals relevant to the buyer's criteria.
6. Agent 3 scores the lead 1-10 against the submitted criteria using the real website intelligence, assigns a grade (A-D), and writes a recommended action for the sales rep.
7. The enriched lead is appended to a Google Sheet.

## How to Install

1. **Import the Workflow**: Download the .json file and import it into your n8n instance.
2. **Configure Tally**: Add your Tally API key in n8n credentials (get it from Tally > Settings > Integrations > API). Select your Tally form in the Tally Trigger dropdown.
3. **Configure OpenAI**: Add your OpenAI API key in n8n credentials.
4. **Configure Google Sheets**: Add Google Sheets OAuth2 credentials.
5. **Create a Tally Form**: Create a lead scoring form in Tally with these fields:
   - Company Name (short text)
   - LinkedIn URL (URL field)
   - Company Website URL (URL field)
   - Scoring Criteria (checkboxes: Company Size, Industry Fit, Tech Stack Match, Geography, Funding Stage, Hiring Signals)
   - Additional Notes (long text, optional)
6. **Set Up Google Sheets**: Create a spreadsheet with a tab named exactly `leads` and these column headers in Row 1 (all lowercase, copy-paste ready):

   | timestamp | company | website_url | linkedin_url | criteria | company_summary | tech_stack | score | grade | reasoning | recommended_action |
   |-----------|---------|-------------|--------------|----------|-----------------|------------|-------|-------|-----------|--------------------|

7. **Connect the Sheet**: Paste your spreadsheet ID into the Append Lead to Sheet node.
8. **Test**: Submit a test entry through your Tally form to verify data flows through all stages.

## Use Cases

- **Sales Teams**: Qualify inbound leads instantly instead of spending 15 minutes researching each company manually.
- **SDRs and BDRs**: Prioritize outreach by grade. Focus on A-leads first, batch-reject D-leads.
- **Marketing Ops**: Score MQLs at the point of capture before they enter the CRM.
- **Partnership Teams**: Evaluate potential partner companies against tech stack and industry fit criteria.
- **Agency New Business**: Screen prospect companies submitted by account managers before investing in a pitch.

## Notes

- The HTTP Request fetches the company homepage with a 15-second timeout. If a site blocks the request, the workflow continues with whatever data is available.
- Homepage text is truncated to 6,000 characters to keep AI costs predictable.
- GPT-5.4 costs approximately $2.50 per million input tokens and $15 per million output tokens. A typical lead scoring run uses 3 API calls, costing approximately $0.02-0.05 per lead depending on homepage size.
- To adjust scoring thresholds or add your own criteria weighting, edit the Score the Lead agent's system prompt. The workflow scores against whatever criteria the user submits in the Tally form, so different team members can use different criteria without editing the workflow.
- Tally's checkbox question type maps well to scoring criteria — each checked option arrives as a separate value in the API payload, making it easy for the AI to evaluate against specific dimensions.

## Connect with Me

- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/

#n8n #automation #tally #openai #gpt5 #leadscoring #b2bsales #salesautomation #googlesheets #leadsqualification #aileadscoring #marketingautomation #n8nworkflow #workflow #nocode #salespipeline #prospecting #leadenrichment #techstack #tallyforms #inboundleads
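The Code node's HTML-stripping step described above can be sketched as follows. This is an illustrative Python version, not the template's actual node code:

```python
import re

MAX_CHARS = 6000  # truncation limit, to keep AI costs predictable

def html_to_text(html: str, max_chars: int = MAX_CHARS) -> str:
    """Reduce a homepage's HTML to plain readable text."""
    # Drop whole blocks that carry no readable content: scripts, styles,
    # navigation, and footers.
    for tag in ("script", "style", "nav", "footer"):
        html = re.sub(rf"<{tag}\b.*?</{tag}>", " ", html, flags=re.S | re.I)
    # Strip the remaining tags, then collapse whitespace.
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip()
    return text[:max_chars]
```

A real homepage may use nested or malformed markup that regexes handle poorly; an HTML parser would be more robust, but this captures the strip-and-truncate idea.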
by Santhej Kallada
In this tutorial, I’ll show how to create UGC (User Generated Content) videos automatically using n8n and Sora 2. This workflow uses OpenAI to generate detailed prompts and Sora 2 to produce realistic UGC-style videos that look natural and engaging.

## Who is this for?

- Marketers and social media managers scaling short-form video content
- Agencies producing branded or influencer-style content
- Content creators and freelancers automating their video workflows
- Anyone exploring AI-driven video generation and automation

## What problem is this workflow solving?

Creating authentic, human-like UGC videos manually takes time and effort. This workflow automates the entire process by:

- Generating engaging scripts or prompts via OpenAI
- Sending those prompts to Sora 2 for automatic video generation
- Managing rendering and delivery inside n8n
- Eliminating manual editing and production steps

## What this workflow does

This workflow connects n8n, OpenAI, and Sora 2 to fully automate the creation of short-form UGC videos. The steps include:

1. Taking user input (topic, tone, niche).
2. Using OpenAI to create a detailed video prompt.
3. Sending the prompt to Sora 2 via HTTP Request to generate the video.
4. Handling video rendering and storing or sending results automatically.

By the end, you’ll have a complete UGC video pipeline running on autopilot — producing content for under $1.50 per video.

## Setup

1. **Create Accounts**: Sign up for n8n.io (cloud or self-hosted). Get access to the OpenAI API and Sora 2.
2. **Generate API Keys**: Retrieve API keys from OpenAI and Sora 2. Store them securely in n8n credentials.
3. **Create the Workflow**:
   - Add a Form Trigger or Webhook Trigger for input (topic, target audience).
   - Add an OpenAI Node to generate script prompts.
   - Connect an HTTP Request Node to send the prompt to Sora 2.
   - Use a Wait Node or delay logic for video rendering completion.
   - Store or send the output video file via Gmail, Telegram, or Google Drive.
4. **Test the Workflow**: Run a test topic. Confirm that Sora 2 generates and returns a video automatically.

## How to customize this workflow to your needs

- Adjust OpenAI prompts for specific video styles (tutorials, product demos, testimonials).
- Integrate video output with social media platforms via n8n nodes.
- Add text-to-speech layers for voiceover automation.
- Schedule automatic content creation using Cron triggers.
- Connect with Notion or Airtable to manage content ideas.

## Notes

- You’ll need valid API keys for both OpenAI and Sora 2.
- Sora 2 may charge per render (approx. $1–$1.50 per video).
- Ensure your workflow includes sufficient delay/wait handling for video rendering.
- Works seamlessly on n8n Cloud or self-hosted setups.

Want a Video Tutorial on How to Set Up This Automation? 👉 Watch on YouTube
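The wait/delay handling for rendering boils down to poll-until-done logic. A generic Python sketch under stated assumptions: it deliberately avoids Sora 2's API specifics, so `check_status` stands in for whatever HTTP call you poll, and the `status` values are hypothetical:

```python
import time

def wait_for_render(check_status, poll_seconds=10, max_attempts=60, sleep=time.sleep):
    """Poll a render job until it finishes or the attempt budget runs out.

    `check_status` is any callable returning a dict like
    {"status": "queued" | "processing" | "completed" | "failed", ...}.
    """
    for _ in range(max_attempts):
        job = check_status()
        if job["status"] == "completed":
            return job
        if job["status"] == "failed":
            raise RuntimeError("render failed")
        sleep(poll_seconds)  # back off between polls
    raise TimeoutError("render did not finish within the polling budget")
```

In n8n terms, this is the Wait Node plus an IF node in a loop; capping `max_attempts` is what keeps the workflow from hanging on a stuck render.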
by Lachlan
## Who’s it for

This workflow is for:

- People who want to quickly launch simple landing pages without paying monthly fees to landing page builders. It’s ideal for rapid prototyping, generating large batches of landing pages, testing campaign ideas, or producing quick web mockups with AI.
- People building products that compete in some way with complete landing page solutions, who want to understand the basic building blocks of landing page creators.

## How it works / What it does

1. Retrieves or creates session data from n8n Tables
2. Generates a vivid scene description for the hero image using GPT
3. Creates a custom AI-generated hero image (using Gemini (PaLM) or your preferred model)
4. Builds a responsive landing page layout with GPT-4o-mini
5. Saves the generated HTML to an n8n data table
6. Deploys the landing page to Vercel automatically
7. Returns the public live URL of the generated site

This workflow combines OpenAI, Google Gemini, Cloudinary, Vercel, and n8n Tables to create, store, and publish your webpage seamlessly from a single prompt.

## How to set up

1. Create an n8n Table with the following columns:
   - sessionID (text)
   - html (long text)
2. Add your credentials:
   - OpenAI (for text and image generation)
   - Google Gemini (PaLM) - through the Google Cloud Platform (for text and image generation)
   - Cloudinary (for image upload)
   - Vercel (for live deployment)
3. Update the placeholders as noted inside the workflow:
   - Cloudinary cloud name and upload preset
   - OpenAI model and API key
   - n8n table name and column mapping (sessionID, html)
   - Vercel Header Auth token
4. Run the workflow. After configuration, it will generate, upload, deploy, and return the live landing page URL automatically.

Inline notes are included throughout the workflow indicating where you must update values such as credentials, table names, or API keys to make the flow work end to end.

## Requirements

- OpenAI API key
- Google Gemini API key
- Cloudinary account
- Vercel account
- n8n Table with sessionID and html columns

## How to customize the workflow

- Modify the OpenAI model or prompt to change the tone, layout, or visual style of the generated landing page.
- Replace Vercel deployment with your preferred hosting platform (e.g., Netlify or GitHub Pages) if desired.
- Add extra input fields (e.g., title, CTA, description) to collect richer context before generating the page.
- Add the ability to integrate with databases to turn this into a full Lovable/Base44 competitor.

## Result

After setup, this workflow automatically converts any idea into a fully designed and live landing page within seconds. It generates the hero image, builds the HTML layout, deploys it to Vercel, and provides the final shareable URL instantly.

## Optional Cleanup Subflow

An additional utility subflow is included to help keep your Vercel project clean by deleting older deployments. It preserves the two most recent deployments and deletes the rest. Use with caution — only run it if you want to remove previous test pages and free up space in your Vercel account.
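The cleanup subflow's keep-the-two-newest rule is simple to express. This is a hedged Python sketch of the selection logic only; the deployment records are simplified stand-ins, and Vercel's API returns richer objects:

```python
def deployments_to_delete(deployments, keep=2):
    """Return the deployments to delete, keeping the `keep` most recent.

    Each deployment is modeled as a dict with `uid` and `created`
    (a creation timestamp); sort newest-first and skip the survivors.
    """
    newest_first = sorted(deployments, key=lambda d: d["created"], reverse=True)
    return newest_first[keep:]
```

The actual subflow would fetch the deployment list, run a selection like this, then issue one delete call per returned record.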
by Takumi Oku
## Who is this for

- **Space Enthusiasts & Music Lovers**: Discover new music paired with stunning cosmic visuals.
- **Community Managers**: Keep specific Slack channels lively with engaging, creative daily content.
- **n8n Learners**: Learn how to chain image analysis (vision), logic, and API integrations (Spotify/Slack).

## How it works

1. **Schedule**: The workflow runs every night at 10 PM.
2. **Mood Logic**: It checks the day of the week to adjust the energy level (e.g., higher energy for Friday nights, calmer vibes for Mondays).
3. **Visual Analysis**: OpenAI (GPT-4o) analyzes the NASA APOD image to determine its color palette, mood, and subject matter, converting these into musical parameters (valence, energy).
4. **Curation**: Spotify searches for a track that matches these specific parameters.
5. **Creative Writing**: OpenAI generates a short poem or caption linking the image to the song.
6. **Delivery**: The image, track link, and poem are posted to Slack, and the track is automatically saved to a designated Spotify playlist.

## Requirements

- **NASA API Key** (free)
- **OpenAI API Key** (must have access to the GPT-4o model)
- **Spotify Developer Credentials** (Client ID and Client Secret)
- **Slack** workspace and bot token

## How to set up

1. Set up your credentials for NASA, OpenAI, Spotify, and Slack in n8n.
2. Create a specific playlist in Spotify and copy its Playlist ID.
3. Copy the Channel ID from the Slack channel where you want to post.
4. Paste these IDs into the respective nodes (marked with `<PLACEHOLDER>`) or use the Set Fields node to manage them globally.
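The Mood Logic step can be sketched like this. The specific valence and energy numbers are illustrative assumptions, not values taken from the template:

```python
def mood_for_day(weekday: int) -> dict:
    """Map a weekday (0=Monday .. 6=Sunday) to Spotify search parameters.

    Higher energy heading into the weekend, calmer vibes early in the week.
    """
    if weekday in (4, 5):   # Friday and Saturday nights
        return {"target_energy": 0.8, "target_valence": 0.7}
    if weekday in (0, 6):   # Monday start, Sunday wind-down
        return {"target_energy": 0.3, "target_valence": 0.5}
    return {"target_energy": 0.5, "target_valence": 0.6}
```

The Visual Analysis step would then nudge these baseline values using what GPT-4o reads from the APOD image before the Spotify search runs.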
by Oneclick AI Squad
This workflow creates a self-improving AI agent inside n8n that can understand natural language tasks, plan steps, use tools (HTTP, code, search, …), reflect on results, and continue until the goal is reached — then deliver the final answer.

## How it works

1. A webhook or manual trigger receives a task description.
2. The LLM creates an initial plan plus the first tool call (or finishes immediately).
3. Loop:
   - Execute the chosen tool
   - Send the observation back to the LLM
   - The LLM reflects → decides the next action or finishes
4. When finished → format the final answer, save the result, send a Slack notification.

## Setup steps

1. Connect an OpenAI (or Anthropic/Groq/Gemini) credential.
2. (Optional) Connect a Slack credential for notifications.
3. Replace the placeholder “Other Tools” Code node with real tool nodes (Switch + HTTP Request, Google Sheets, Code node, etc.).
4. Test with simple tasks first:
   - “What is the current weather in Ahmedabad?”
   - “Calculate 17×42 and explain the steps”
5. Adjust max iterations (via SplitInBatches or a custom counter) to prevent infinite loops.
6. Activate the workflow and send a POST request to the webhook with JSON: `{"task": "your task here"}`

## Requirements

- LLM API access (gpt-4o-mini works well for testing)
- Optional: Slack workspace for alerts

## Customization tips

- Upgrade to stronger reasoning models (o1-preview, Claude 3.5/3.7 Sonnet, Gemini 2.0)
- Add real tools: browser automation, vector DB lookup, file read/write, calendar
- Improve memory: append the full history or use an external vector store
- Add cost/safety guardrails (max iterations, forbidden actions)

## Contact Us

If you need help setting up this workflow, want custom modifications, or have questions about integrating specific tools/services:

🌐 Website: https://www.oneclickitsolution.com/contact-us/
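The plan-act-reflect loop at the heart of the workflow can be sketched as follows. The `llm` and `run_tool` callables stand in for the LLM and tool nodes, and the action dictionary shape is an assumption, not the template's exact contract:

```python
def run_agent(task, llm, run_tool, max_iterations=10):
    """Plan → act → observe → reflect until the LLM signals it is done.

    `llm(history)` returns either {"finish": "<final answer>"} or
    {"tool": "<name>", "input": {...}} (hypothetical shape).
    `run_tool(name, input)` executes the chosen tool and returns its output.
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_iterations):  # guardrail against infinite loops
        action = llm(history)
        if "finish" in action:
            return action["finish"]
        observation = run_tool(action["tool"], action["input"])
        # Feed the observation back so the LLM can reflect on it.
        history.append({"role": "tool", "content": str(observation)})
    raise RuntimeError("max iterations reached without an answer")
```

This is exactly the role SplitInBatches or a custom counter plays in the workflow: bounding the loop so a confused model cannot spin forever.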
by Anas Chahid Ksabi
## How it works

1. Fetches all open sprint tickets daily from your Jira project
2. Analyzes each ticket for overdue days and blocked status
3. Routes to the right escalation level: assignee email → team Google Chat alert → manager escalation

## Set up steps

1. Add your Jira Software Cloud credentials in n8n
2. Add your Gmail OAuth2 credentials in n8n
3. Open the ⚙️ CONFIG node and fill in your 4 values (Jira domain, project key, manager email, Google Chat webhook)
4. Test with the Manual Trigger, then enable the Schedule Trigger
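The three-tier routing can be sketched like this. The thresholds are illustrative assumptions; the real values live in the ⚙️ CONFIG node and the workflow's branching logic:

```python
def escalation_level(overdue_days: int, is_blocked: bool) -> str:
    """Pick an escalation channel for one sprint ticket.

    Thresholds here are placeholders; tune them to your team's cadence.
    """
    if is_blocked or overdue_days >= 5:
        return "manager_email"       # manager escalation
    if overdue_days >= 2:
        return "google_chat_alert"   # team-wide visibility
    if overdue_days >= 1:
        return "assignee_email"      # gentle nudge to the assignee
    return "no_action"
```

In n8n this maps to a Switch or chained IF nodes feeding the Gmail and Google Chat webhook nodes.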
by Guillaume Duvernay
Create truly authoritative articles that blend your unique, internal expertise with the latest, most relevant information from the web. This template orchestrates an advanced "hybrid research" content process that delivers unparalleled depth and credibility.

Instead of a simple prompt, this workflow first uses an AI planner to deconstruct your topic into key questions. Then, for each question, it performs a dual-source query: it searches your trusted Lookio knowledge base for internal facts and simultaneously uses Linkup to pull fresh insights and sources from the live web. This comprehensive "super-brief" is then handed to a powerful AI writer to compose a high-quality article, complete with citations from both your own documents and external web pages.

## 👥 Who is this for?

- **Content Marketers & SEO Specialists:** Scale the creation of authoritative content that is both grounded in your brand's facts and enriched with timely, external sources for maximum credibility.
- **Technical Writers & Subject Matter Experts:** Transform complex internal documentation into rich, public-facing articles by supplementing your core knowledge with external context and recent data.
- **Marketing Agencies:** Deliver exceptional, well-researched articles for clients by connecting the workflow to their internal materials (via Lookio) and the broader web (via Linkup) in one automated process.

## 💡 What problem does this solve?

- **The Best of Both Worlds:** Combines the factual reliability of your own knowledge base with the timeliness and breadth of a web search, resulting in articles with unmatched depth.
- **Minimizes AI "Hallucinations":** Grounds the AI writer in two distinct sets of factual, source-based information—your internal documents and credible web pages—dramatically reducing the risk of invented facts.
- **Maximizes Credibility:** Automates the inclusion of source links from *both* your internal knowledge base and external websites, boosting reader trust and demonstrating thorough research.
- **Ensures Comprehensive Coverage:** The AI-powered "topic breakdown" ensures a logical structure, while the dual-source research for each point guarantees no stone is left unturned.
- **Fully Automates an Expert Workflow:** Mimics the entire process of an expert research team (outline, internal review, external research, consolidation, writing) in a single, scalable workflow.

## ⚙️ How it works

This workflow orchestrates a sophisticated, multi-step "Plan, Dual-Research, Write" process:

1. **Plan (Decomposition):** You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of logical sub-questions.
2. **Dual Research (Knowledge Base + Web Search):** The workflow loops through each sub-question and performs two research actions in parallel:
   - It queries your Lookio assistant to retrieve relevant information and source links from your uploaded documents.
   - It queries Linkup to perform a targeted web search, gathering up-to-date insights and their source URLs.
3. **Consolidate (Brief Creation):** All the retrieved information—internal and external—is compiled into a single, comprehensive research brief for each sub-question.
4. **Write (Final Generation):** The complete, source-rich brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article based only on the provided research and integrate all source links as hyperlinks.

## 🛠️ Setup

1. **Set up your Lookio assistant:** Sign up at Lookio, upload your documents to create a knowledge base, and create a new assistant. In the Query Lookio Assistant node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend a Bearer Token credential).
2. **Connect your Linkup account:** In the Query Linkup for AI web-search node, add your Linkup API key for authentication (we recommend a Bearer Token credential). Linkup's free plan is very generous.
3. **Connect your AI provider:** Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
4. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first hybrid-research article!

## 🚀 Taking it further

- **Automate Publishing:** Connect the final *Article result* node to a *Webflow* or *WordPress* node to automatically create draft posts in your CMS.
- **Generate Content in Bulk:** Replace the *Form Trigger* with an *Airtable* or *Google Sheet* trigger to generate a batch of articles from your content calendar.
- **Customize the Writing Style:** Tweak the system prompt in the final *New content - Generate the AI output* node to match your brand's tone of voice, prioritize internal vs. external sources, or add SEO keywords.
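The Consolidate step, merging internal and web findings into one brief per sub-question, might look like this in outline. The snippet field names (`text`, `source`) are assumptions about the shape of the Lookio and Linkup responses:

```python
def build_brief(sub_questions, internal_results, web_results):
    """Assemble the "super-brief" handed to the final AI writer.

    `internal_results` and `web_results` map each sub-question to a list
    of {"text": ..., "source": ...} snippets (hypothetical shape).
    """
    sections = []
    for question in sub_questions:
        lines = [f"## {question}"]
        for label, results in (("Internal", internal_results), ("Web", web_results)):
            for item in results.get(question, []):
                lines.append(f"- [{label}] {item['text']} (source: {item['source']})")
        sections.append("\n".join(lines))
    return "\n\n".join(sections)
```

Labeling each snippet's origin is what lets the writer prompt ask for hyperlinks to both internal documents and external pages.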
by Evgeny Agronsky
## What it does

Automates code review by listening for a comment trigger on GitLab merge requests, summarising the diff, and using an LLM to post constructive, line‑specific feedback. If a JIRA ticket ID is found in the MR description, the ticket’s summary is used to inform the AI review.

## Use cases

- Quickly obtain high‑quality feedback on MRs without waiting for peers.
- Highlight logic, security or performance issues that might slip through cursory reviews.
- Incorporate project context by pulling in related JIRA ticket summaries.

## Good to know

- Triggered by commenting ai-review on a merge request.
- The LLM returns only high‑value findings; if nothing critical is detected, the workflow posts an “all clear” message.
- You can swap out the LLM (Gemini, OpenAI, etc.) or adjust the prompt to fit your team’s guidelines.
- AI usage may incur costs or be geo‑restricted depending on your provider.

## How it works

1. **Webhook listener:** A Webhook node captures GitLab note events and filters for the trigger phrase.
2. **Fetch & parse:** The workflow retrieves MR details and diffs, splitting each change into “original” and “new” code blocks.
3. **Optional JIRA context:** If your MR description includes a JIRA key (e.g., PROJ-123), the workflow fetches the ticket (and the parent ticket for subtasks) and composes a brief context summary.
4. **LLM review:** The parsed diff and optional context are sent to an LLM with instructions to identify logic, security or performance issues and suggest improvements.
5. **Post results:** Inline comments are posted back to the MR at the appropriate file/line positions; if no issues are found, a single “all clear” note is posted.

## How to use

1. Import the template JSON and open the Webhook node. Replace the REPLACE_WITH_UNIQUE_PATH placeholder with your desired path and configure a GitLab project webhook to send MR comments to that URL.
2. Select your LLM credentials in the Gemini (or other LLM) node, and optionally add JIRA credentials in the JIRA nodes.
3. Activate the workflow and comment ai-review on any merge request to test it. For each review, the workflow posts status updates (“AI review initiated…”) and final comments.

## Requirements

- A GitLab project with a generated Personal Access Token (PAT) stored as an environment variable (GITLAB_TOKEN).
- LLM credentials (e.g., Google Gemini) and optional JIRA credentials.

## Customising this workflow

- Change the trigger phrase in the Trigger Phrase Filter node.
- Modify the LLM prompt to focus on different aspects (e.g., style, documentation).
- Filter out certain file types or directories before sending diffs to the LLM.
- Integrate other services (Slack, email) to notify teams when reviews are complete.
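The optional JIRA context step hinges on spotting a ticket key such as PROJ-123 in the MR description. A minimal Python sketch of that extraction (the template's actual node may use a different expression):

```python
import re

# JIRA issue keys look like PROJECT-123: an uppercase project key
# followed by a hyphen and the issue number.
JIRA_KEY = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def extract_jira_key(description: str):
    """Return the first JIRA issue key in an MR description, or None."""
    match = JIRA_KEY.search(description or "")
    return match.group(1) if match else None
```

The workflow would feed a found key into the JIRA fetch nodes and skip the context step entirely when this returns nothing.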
by Moe Ahad
## How it works

1. A user enters the name of a city for which current weather information will be gathered.
2. Custom Python code processes the weather data and generates a custom email about the weather.
3. An AI agent further customizes the email and adds a related joke about the weather.
4. The recipient gets the custom email for the city.

## Set up instructions

1. Enter a city to get the weather data.
2. Add the OpenWeather API and replace `<your_API_key>` with your actual API key.
3. Add your OpenAI API key in the OpenAI Chat Model node.
4. Add your Gmail credentials and specify a recipient for the custom email.
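The Python Code node's job, turning an OpenWeather response into an email draft, might look like this. The field names follow OpenWeather's current-weather JSON; the wording and structure of the email are purely illustrative:

```python
def draft_weather_email(weather: dict) -> str:
    """Build a plain-text email body from an OpenWeather current-weather payload."""
    city = weather["name"]
    description = weather["weather"][0]["description"]
    temp_c = weather["main"]["temp"]        # assumes units=metric was requested
    feels_c = weather["main"]["feels_like"]
    return (
        f"Subject: Today's weather in {city}\n\n"
        f"Right now it's {temp_c:.0f}°C (feels like {feels_c:.0f}°C) "
        f"with {description} in {city}."
    )
```

The AI agent then takes a draft like this and rewrites it with a weather-related joke before the Gmail node sends it.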
by Supira Inc.
## 💡 How It Works

This workflow automatically detects new YouTube uploads, retrieves their transcripts, summarizes them in Japanese using GPT-4o mini, and posts the results to a selected Slack channel. It’s ideal for teams who follow multiple creators, internal training playlists, or corporate webinars and want concise Japanese summaries in Slack without manual work.

Here’s the flow at a glance:

1. **YouTube RSS Trigger** — monitors a specific channel’s RSS feed.
2. **HTTP Request via RapidAPI** — fetches the video transcript (supports both English & Japanese).
3. **Code Node** — merges segmented transcript text into one clean string.
4. **OpenAI (GPT-4o-mini)** — generates a natural-sounding, 3-line Japanese summary.
5. **Slack Message** — posts the title, link, and generated summary to #youtube-summary.

## ⚙️ Requirements

- n8n (v1.60 or later)
- RapidAPI account + youtube-transcript3 API key
- OpenAI API key (GPT-4o-mini recommended)
- Slack workspace with OAuth connection

## 🧩 Setup Instructions

1. Replace YOUR_RAPIDAPI_KEY_HERE with your own RapidAPI key.
2. Add your OpenAI credential under Credentials → OpenAI.
3. Set your target Slack channel (e.g., #youtube-summary).
4. Enter the YouTube channel ID in the RSS Trigger node.
5. Activate the workflow and test with a recent video.

## 🎛️ Customization Tips

- Modify the OpenAI prompt to change summary length or tone.
- Duplicate the RSS Trigger for multiple channels → merge before summarization.
- Localize Slack messages using Japanese or English templates.

## 🚀 Use Case

Perfect for marketing teams, content curators, and knowledge managers who want to stay updated on YouTube content in Japanese without leaving Slack.
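The Code Node's merge of segmented transcript text can be sketched as follows. The segment shape (a `text` field per chunk) is an assumption based on typical transcript APIs, not the youtube-transcript3 response verbatim:

```python
def merge_transcript(segments) -> str:
    """Join transcript segments into one clean string.

    Each segment is assumed to be a dict with a `text` field,
    e.g. {"text": "...", "offset": 0}; empty chunks are dropped.
    """
    parts = (seg.get("text", "").strip() for seg in segments)
    return " ".join(p for p in parts if p)
```

The resulting single string is what gets handed to GPT-4o mini for the 3-line Japanese summary.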
by Robert Breen
## 🧑💻 Description

This workflow integrates Slack with an OpenAI Chat Agent to create a fully interactive chatbot inside your Slack workspace. It works in a bidirectional loop:

1. A user sends a message in Slack.
2. The workflow captures the message and logs it back into Slack (so you can monitor what’s being passed into the agent).
3. The message is sent to an OpenAI-powered agent (e.g., GPT-4o).
4. The agent generates a response.
5. The response is formatted and posted back to Slack in the same channel or DM thread.

This allows you to monitor, test, and interact with the agent directly from Slack.

## 📌 Use Cases

- **Team Support Bot**: Provide quick AI-generated answers to FAQs in Slack.
- **E-commerce Example**: The default prompt makes the bot act like a store assistant, but you can swap in your own domain knowledge.
- **Conversation Monitoring**: Log both user and agent messages in Slack for visibility and review.
- **Custom AI Agents**: Extend with RAG, external APIs, or workflow automations for specialized tasks.

## ⚙️ Setup Instructions

### 1️⃣ OpenAI Setup

1. Sign up at OpenAI.
2. Generate an API key from the API Keys page.
3. In n8n → Credentials → New → OpenAI → paste your key and save.
4. In the OpenAI Chat node, select your credential and configure the system prompt. Example included: “You are an ecommerce bot. Help the user as if you were working for a mock store.” You can edit this prompt to fit your use case (support bot, HR assistant, knowledge retriever, etc.).

### 2️⃣ Slack Setup

1. Go to Slack API Apps → click Create New App.
2. Under OAuth & Permissions, add the following scopes:
   - Read: channels:history, groups:history, im:history, mpim:history, channels:read, groups:read, users:read
   - Write: chat:write
3. Install the app to your workspace → copy the Bot User OAuth Token.
4. In n8n → Credentials → New → Slack OAuth2 API → paste the token and save.
5. In the Slack nodes (e.g., Send User Message in Slack, Send Agent’s Response in Slack), select your credential and specify the Channel ID or User ID to send/receive messages.

## 🎛️ Customization Guidance

- **Change Agent Behavior**: Update the system message in the Chat Agent node.
- **Filter Channels**: Limit listening to a specific channel by adjusting the Slack node’s Channel ID.
- **Format Responses**: The Format Response node shows how to structure agent replies before posting back to Slack.
- **Extend Workflows**: Add integrations with databases, CRMs, or APIs for dynamic data-driven responses.

## 🔄 Workflow Flow (Simplified)

Slack User Message → Send User Message in Slack → Chat Agent → Format Response → Send Agent Response in Slack

## 📬 Contact

Need help customizing this workflow (e.g., multi-channel listening, advanced AI logic, or external integrations)?

📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
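The Format Response step might be sketched like this. The 40,000-character cap follows Slack's documented chat.postMessage text limit; the emoji prefix and payload shape are illustrative assumptions, not the template's exact node output:

```python
def format_agent_reply(reply: str, channel: str) -> dict:
    """Shape an agent reply into the payload a Slack send-message node expects.

    Slack rejects message text over 40,000 characters, so very long
    replies are trimmed with an ellipsis.
    """
    text = reply.strip()
    if len(text) > 40000:
        text = text[:39997] + "..."
    return {"channel": channel, "text": f":robot_face: {text}"}
```

Keeping formatting in one place like this makes it easy to later switch to threaded replies or Block Kit layouts without touching the agent itself.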