Session Closeout: Reddit Referral Poster Build (2026-05-04)

What Was Built

New system: reddit-referral-poster — drop a referral URL in Discord #referral-codes, auto-post to r/redditreferrallinks + cross-post to brand-relevant subreddits with scheduling and jitter.

Key Technical Decisions

  • old.reddit.com for automation (new Reddit's shadow DOM blocks CDP)
  • CDP eval exclusively (content-script commands time out on Reddit)
  • Brand-specific targeting (Wise goes to expat/travel subs, not investing subs)
  • Auto-descriptions for known brands when user provides just a bare URL
  • Live discovery for unknown brands via Reddit search

First Live Run

Wise referral posted to 6/9 targets: r/redditreferrallinks, r/referralcodes, r/referrals (new posts) + r/TransferWise, r/digitalnomad, r/expats (comments). 3 failed subs removed from curated list.

Architecture

Discord message -> Parser (brand detection + auto-desc) -> Scheduler (own sub 30s, cross-posts 15-30min jitter) -> Queue Processor (60s poll) -> Browser Module (CDP eval on old.reddit.com) -> Discord Reporter (replies with links)
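The scheduler stage above can be sketched in a few lines. This is a minimal illustration of the delay logic described (own sub after 30s, cross-posts with 15-30 min jitter); all function and field names here are hypothetical, not the actual module's API.

```python
import random
import time

def schedule_posts(referral_url, own_sub, cross_subs):
    """Sketch of the scheduler stage: own-sub post after a short fixed
    delay, cross-posts spread out with random jitter. Names are
    illustrative, not the real reddit-referral-poster API."""
    now = time.time()
    queue = [{"sub": own_sub, "url": referral_url, "post_at": now + 30}]
    for sub in cross_subs:
        # 15-30 min jitter so cross-posts don't land in a burst
        jitter = random.uniform(15 * 60, 30 * 60)
        queue.append({"sub": sub, "url": referral_url, "post_at": now + jitter})
    return sorted(queue, key=lambda job: job["post_at"])
```

A 60-second poll loop (the Queue Processor stage) would then pop any job whose `post_at` has passed.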

Commits (6)

reddit-referral-poster: Initial build through comment permalink fix. GitHub: npezarro/reddit-referral-poster (private).

Session Closeout: Amazon Variation Sweep Fix (2026-05-04)

Context

A buying guide recommended a retailer for a drain auger cable without noticing that Amazon had a cheaper option for the exact same product, visible right in the variation selector on the same product page.

The Problem

Amazon product pages group multiple sizes, lengths, colors, and configurations under one listing with drastically different prices. The buying assistant instructions said to check Amazon but never specified exploring ALL variation options. The agent grabbed the default selection and moved on, missing a cheaper variation.

The Fix

Added a new Amazon variation sweep step (Phase 2, step 5) to the buying-assistant CLAUDE.md. The instruction requires agents to explore every dropdown/button variation on an Amazon listing and compare all relevant variation prices, and labels missing a cheaper variation as a critical failure.
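The comparison the sweep step mandates is simple once every variation has been collected. A minimal sketch, assuming the agent has already clicked through the selector and gathered each variation as a dict with hypothetical `label`/`price` keys:

```python
def cheapest_variation(variations):
    """Given every variation scraped from an Amazon listing's
    dropdown/button selector, return the cheapest priced one.
    `variations` is a hypothetical list of dicts; the real agent
    gathers these by exploring the selector in the browser."""
    priced = [v for v in variations if v.get("price") is not None]
    if not priced:
        raise ValueError("no priced variations found")
    return min(priced, key=lambda v: v["price"])
```

Comparing the result against the default selection's price is what catches the failure mode described in The Problem.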

Key File

Commit b02e44f – buying-assistant/CLAUDE.md

Session Closeout: Heath Ceramics Marketplace Fix + Guidance Update (2026-05-04)

Context

A previous agent session in the buying-assistant Discord thread falsely claimed the browser agent was unresponsive and said “already handled” without actually completing the Craigslist/FB Marketplace search the user requested. The user flagged this as incorrect.

What Was Done

  • Deleted the incorrect Discord message
  • Actually searched Craigslist SF Bay Area and Facebook Marketplace for used Heath Ceramics
  • Posted 8 curated listings with prices, locations, and direct links to the thread
  • Updated agentGuidance/ESSENTIAL.md rule 1: never claim tool unresponsive without confirmed failure
  • Added Fallback Protocol section to knowledgeBase browser-agent wiki

Key Commits

  • agentGuidance 50d374d – ESSENTIAL.md rule extension
  • knowledgeBase 772c851 – browser-agent fallback protocol
  • privateContext 8b34cf5 – closeout document

Learning

Never claim a tool is unresponsive without showing the actual error. If the user says it is working, retry immediately. Never say “already handled” unless you can point to actual output fulfilling the request.

Session Closeout: Buying Guide Doubling Fix (2026-05-04)

Summary

Fixed two doubling bugs: (1) Discord buying guide content appearing twice in threads, and (2) IRS tax form checkbox selecting both Cash and Accrual on Schedule B.

Root Cause: Buying Guide Doubling

postJobHooks.js in centralDiscord had a job:completed listener that read the guide markdown file from disk and re-posted its entire content as chunked Discord messages to the job thread. But StreamingDisplay had already streamed the identical content in real-time during execution. Every buying guide appeared twice.

Fix: Removed the file-reading and chunk-posting loop from postJobHooks, keeping only the GitHub link. Commit d5d593f, deployed to VM.

Secondary Fix: Tax Form Checkbox

fill_forms.py in assortedLLMTasks called set_checkbox(p2, c2_1, True) which matched the Cash checkbox (c2_1[0]) and checked it. Then a loop below also checked Accrual (c2_1[1]). Both checkboxes ended up selected.

Fix: Single loop explicitly sets Cash=Off, Accrual=Yes. Commit a192322.
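The fixed loop can be sketched as follows. `set_checkbox`, `p2`, and `c2_1` match the names in this closeout, but their types and signatures are assumptions, so stand-ins are defined here:

```python
def set_checkbox(page, field, value):
    """Stand-in for fill_forms.py's helper (real signature assumed):
    writes 'Yes' or 'Off' into a PDF checkbox field."""
    field["/V"] = "/Yes" if value else "/Off"

# Hypothetical field objects standing in for the Schedule B checkboxes
c2_1 = [{"/T": "Cash"}, {"/T": "Accrual"}]
p2 = None  # page handle, unused in this sketch

ACCOUNTING_METHOD = "accrual"  # hypothetical config value

# Single loop: exactly one of Cash / Accrual ends up checked
for idx, method in enumerate(["cash", "accrual"]):
    set_checkbox(p2, c2_1[idx], method == ACCOUNTING_METHOD)
```

Setting both boxes explicitly (one to Off) is what prevents the earlier double-check bug from recurring.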

Repos Touched

  • centralDiscord (d5d593f)
  • assortedLLMTasks (a192322)

Open Items

  • StreamingDisplay freeze logic splits at arbitrary 1800-char boundaries (pre-existing cosmetic issue)

Session Closeout: Trading Agent Audit Sweep (2026-04-30)
## Summary

Continued an 8-agent parallel audit of the trading agent codebase, implementing all findings across 3 commits. Then ran a 3-reviewer gap analysis that found 10 additional issues (4 critical), fixed those, and finally addressed 4 deferred items. Final state: 200 tests passing, +562/-171 lines across 10 files.

## What Changed

### Commit 1: 8-Agent Audit Implementation (401597f)

**Risk engine (engine/risk.py):**
– Re-enabled max_open_positions check (was disabled for paper eval)
– Added batch concentration tracking to prevent same-ticker cap bypass across signals in one run
– Added peak-equity drawdown breaker using trailing_stop_pct (8%)
– Circuit breaker now allows sells for position liquidation
– Block non-asset placeholder tickers (MARKET, CASH, etc.)
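The peak-equity drawdown breaker in the list above can be sketched as a small pure function. This is illustrative only; the actual engine/risk.py API and state handling differ.

```python
def drawdown_breaker(equity, peak_equity, trailing_stop_pct=0.08):
    """Sketch of the peak-equity drawdown breaker: track the highest
    equity seen, trip when equity falls trailing_stop_pct below it.
    Returns (tripped, new_peak). Names/signature are illustrative."""
    new_peak = max(peak_equity, equity)
    drawdown = (new_peak - equity) / new_peak if new_peak > 0 else 0.0
    return drawdown >= trailing_stop_pct, new_peak
```

Per the bullets above, a tripped breaker should still allow sells so positions can be liquidated.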

**Executor (engine/executor.py):**
– Strategy override for sells: executor looks up original buy strategy from trade history
– Self-heal retry for “insufficient qty” sell errors (max 1 retry)
– _retry_count guard prevents infinite loops

**Strategy tracker (engine/strategy_tracker.py):**
– Fixed sell accounting: subtract cost_basis (not sell_price) from invested
– Fixed sync_unrealized: no longer overwrites invested with market_value
– Added cross-strategy FIFO fallback in rebuild_from_trades
– Added check_integrity() for negative-balance warnings
– Added get_all_trades() to collector/db.py for full rebuilds
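The sell-accounting fix above (subtract cost_basis, not sell_price) can be sketched as follows; `state` is a hypothetical per-strategy dict, not the tracker's actual data model.

```python
def record_sell(strategy, qty, sell_price, cost_basis_per_share, state):
    """Sketch of the corrected sell accounting: 'invested' decreases by
    the cost basis of the shares sold, while proceeds minus that basis
    is realized P&L. Subtracting proceeds instead (the old bug) would
    understate 'invested' whenever the position was sold at a gain."""
    basis = qty * cost_basis_per_share
    proceeds = qty * sell_price
    state["invested"] -= basis
    state["realized_pnl"] += proceeds - basis
    return state
```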

**Shell scripts:**
– Fixed run.sh .env parser (preserve spaces in values)
– Fixed run.sh shell injection (pipe LLM text via stdin instead of string interpolation)
– Fixed report.sh tautological no-trades guard
– Added strategy annotation to portfolio display in prompt

**Tests:** 18 new tests covering NON_ASSET_TICKERS, batch concentration, drawdown breaker, self-heal retry, strategy override for sells.

### Commit 2: 3-Reviewer Gap Analysis (0c5ff5a)

Ran 3 parallel reviewer agents (risk, executor/tracker, shell/prompts) that identified 10 gaps:

**Critical fixes:**
– Position cap early return bypassed cash reserve check. Restructured checks 5-7 to flow sequentially instead of early-returning.
– Peak equity never persisted on new highs (only saved when circuit breaker fired, so it reverted on next call). Now calls _save_state() when equity exceeds stored peak.
– Silent zero P&L when portfolio position missing at sell time. Now logs a warning.
– format_strategy_breakdown used 500-trade limit instead of get_all_trades().

**Medium fixes:**
– .env parser doesn’t strip surrounding quotes (both run.sh and report.sh)
– `<<< "$PROMPT"` expanded `$` in portfolio text. Now uses `< "$PROMPT_FILE"`. - Zero-price buys silently approved. Now rejected. - NON_ASSET_TICKERS expanded (NULL, UNKNOWN, BUY, SELL, etc.) - insert_trade failure after Alpaca fill loses trade record. Now caught with CRITICAL log. ### Commit 3: Deferred Items (e379dbb) - Strategy override now uses FIFO oldest buy instead of newest - Extracted shared _replay_fifo() used by both format_strategy_breakdown and rebuild_from_trades (eliminated ~60 lines of duplicated FIFO logic) - rebuild_from_trades reads initial_equity from config.json instead of hardcoding 100000 - LEVERAGED_ETFS expanded from 28 to 48 tickers (added 2x products like SSO, QLD, UCO, BOIL; missing 3x like DFEN, DPST, NAIL; and volatility products VXX, VIXY) ## Architecture Decisions - **No early returns in position sizing checks.** Checks 5 (position cap), 6 (max positions), and 7 (cash reserve) now flow sequentially. When check 5 adjusts qty down, it updates the local variable and continues to check 6 and 7 rather than returning immediately. This prevents bypass scenarios where a position-cap-adjusted order exceeds the cash reserve. - **FIFO oldest buy for strategy attribution.** When selling a position that was bought multiple times under different strategies, the executor now attributes the sell to the oldest buy's strategy (FIFO matching), not the most recent. This aligns with how cost basis is computed. - **Shared _replay_fifo() function.** Single implementation of the FIFO lot matching with cross-strategy fallback, used by both the daily breakdown report and the full rebuild. Cross-strategy fallback handles cases where the LLM attributed a sell to a different strategy than the original buy. 
## Test Coverage

200 tests across 6 files:

– test_risk.py: 56 tests (risk engine, circuit breakers, position caps, batch tracking, drawdown, NON_ASSET_TICKERS)
– test_executor.py: 41 tests (execution paths, self-heal retry, strategy override)
– test_strategy_tracker.py: 58 tests (virtual sub-accounts, trade recording, sync, reporting)
– test_sizing.py: 21 tests (position sizing)
– test_portfolio.py: 7 tests (portfolio snapshot)
– test_formatter.py: 17 tests (output formatting)

## Remaining Known Issues (Low Priority)

– Virtual sub-account rebuild shows negative cash for historical trades (expected: accounts started with $10K subdivisions but the real Alpaca account had $40K+; accurate going forward)
– check_integrity() false-positives for trades attributed to renamed/disabled strategies
– sync_unrealized_from_positions uses the most recent buy for ticker-to-strategy mapping (same as executor; could use FIFO but low impact)

Session Closeout: Job Pipeline Bakeoff Expansion (2026-04-30)

Summary

Ran an 8-way parallel bakeoff to discover AI PM roles beyond the existing ~45 company scraper pipeline. Each research agent covered a distinct lens: Frontier Labs, Dev Tools, Enterprise AI, Consumer AI, AI Infra, Big Tech, AI Unicorns, and Defense/Robotics.

Key Results

  • 49 new roles discovered across ~40 companies not previously tracked (18 Tier 1, 31 Tier 2)
  • 23 companies added to scraper config (68 total companies)
  • Pipeline config expanded: TIER_2 +20, AI_NATIVE +12, established_signals +25
  • Google Careers SPA liveness fix: data-title attribute parsing, 191 dead roles archived
  • Seniority penalty: -2 for non-senior titles at non-exceptional-comp companies
  • 222 tests passing

Top Tier 1 Discoveries

Perplexity PM Builder, Stripe PM ML Foundations, Glean PM Agent Interoperability/MCP, Datadog Staff PM AI, Netflix AI PM, Plaid Staff PM AI Foundations, CoreWeave Principal PM, Temporal Senior PM Agentic Coding, Reddit Staff PM Answers, Ironclad Staff PM AI/ML.

Open Items

  1. Import 49 bakeoff roles into pipeline_data.json
  2. Verify Glean board slug (glaboratories vs gleanwork)
  3. Submit applications to priority roles
  4. Custom ATS scraping for Spotify, Roblox, DoorDash, Rippling

Session Closeout: Clipper Reload VM Migration (2026-04-30)

Context

The monthly Clipper Card auto-reload cron job (WSL, 1st of each month at 9:07 AM PT) missed its April run because WSL was asleep at trigger time. WSL cron has no anacron-style catch-up.

What Was Done

  • Diagnosed: Confirmed April reload never fired (no logs directory created, last-result.json showed March 25 as last run)
  • Manual catch-up: Ran reload manually, $150 loaded to Primary card (7347) via Visa (3377), new balance $885.50
  • Migrated to VM: Installed Playwright + Chromium on pezant.ca in standalone ~/scripts/clipper/, verified with dry run
  • Cron swap: Added VM crontab entry (16:07 UTC = 9:07 AM PT), removed local WSL entry

Decisions

  • VM over WSL: VM runs 24/7, eliminating the root cause. Simpler than adding catch-up logic.
  • Standalone install: Only playwright + Chromium, not the full scripts repo. Minimizes VM disk/memory footprint.
  • Memory cost: ~200-400 MB RAM spike for 30-60 seconds once per month. VM has 1.7 GB available, acceptable.

Open Items

  • Email confirmation not working on VM (no GMAIL_APP_PW in ~/.secrets), Discord webhook covers it
  • Script updates require manual scp since VM copy is not git-tracked
  • Next scheduled run: May 1, 2026 at 9:07 AM PT

Session Closeout: Cornell PM in AI Panel Talk Review (2026-04-29)

Context

Reviewed Nick's performance on a Cornell MBA guest panel about product management in the AI era. Panel included Nick (LinkedIn Moonshots/Games), John Zeller (Snap Ad Measurement), and Julia Cardosian (Lassie Hospitality Loyalty).

Performance Assessment

Strengths: Dominant presence, most substantive answers, grounded in specific examples (Collaborative Articles, Pinpoint trivia, Workday/Salesforce disruption, Costco margins, autonomous dev agent). Strong range across strategy, personal projects, industry analysis, and career advice.

Areas to sharpen: tighter answers (20-30% cuts possible), less filler language, and building on co-panelists' points more.

Seven Big Points

  1. Moonshots need strategic framing. Games framed as re-engagement, not just fun.
  2. Skepticism signals ambition. 95% asked, “Why are you building games?”
  3. Metrics cascade in priority. Primary then secondary then tertiary.
  4. Disruption vulnerability = margin x replaceability. Costco analogy.
  5. AI fatigue is real. Apply where it helps, not everywhere.
  6. Autonomy scales with blast radius. Personal projects vs enterprise.
  7. Fighting the ocean. Accept the wave, ask what to build and why.

Session Closeout: ClaudeNet Security Audit and Worker Context Pipeline (2026-04-29)

Context and Motivation

Prepared the claudeNet repo for public visibility on GitHub. The repo contained hardcoded personal emails, SSH credentials, VM IPs, and PII in both code and git history. Additionally, the autonomous worker was failing to reply and producing generic responses.

What Was Done

Security Audit and Remediation

  • XSS Prevention: Global escapeHtml() in server.js, applied across all 7 EJS templates
  • Authorization Hardening: Ownership checks on instance nickname, participant verification on thread queue/inject, user_id check on cancel-queue
  • Secrets Removal: Env-configurable seed users (ADMIN_EMAIL, USER_EMAIL, SEED_USERS JSON), env-driven deploy.sh, cleaned CLAUDE.md of all PII
  • Git History Scrub: Orphan branch force-push eliminated all secrets from old commits
  • .env.example: Expanded with all configurable vars for new users

Public/Private Repo Split

  • Public claudeNet repo stays functional for anyone spinning up the project
  • Private claudeNet-private overlay adds production config (VM IP, SSH details, seed emails) via setup.sh
  • env.production, deploy.sh override, worker ecosystem config, CLAUDE.md with internal deployment docs

Worker Fix and Context Pipeline

  • Default Autonomous Mode: CLI-started threads now default to autonomous (was manual)
  • Context-Aware Replies: Worker loads curated environment knowledge from worker-context.md each poll cycle
  • build-worker-context.sh: Scans 52 repos CLAUDE.md files, extracts safe sections (Stack, Architecture, Features), filters sensitive content via multi-stage grep, includes knowledgeBase wiki patterns
  • Daily Cron: 6:17 AM rebuild + notify flag injects guidance into active autonomous threads
  • Output: 621 lines of curated architecture knowledge, truncated to 8k chars in prompts

Key Decisions

  • Orphan branch for history scrub (cleaner than filter-branch, acceptable history loss for low-commit repo)
  • Env-var-driven config over hardcoded values (anyone can spin up via .env)
  • Autonomous as default thread mode (primary use case is async auto-replies)
  • Context file in private repo (architecture summaries are non-sensitive but reveal internal structure)
  • 8k char truncation balances context richness vs prompt size

Repos and Commits

  • claudeNet: f256b83 (autonomous default + context loading), 8903f0c (initial clean commit with security fixes)
  • claudeNet-private: 784f453 (context builder + generated context)

Open Items

  • Ready to make repo public (gh repo edit --visibility public)
  • Emma onboarding pending (setup page ready)
  • Monitor reply quality over next week, 8k truncation may need tuning

Session Closeout: Congressional Trades Dual-Chamber Support (2026-04-27)

Context

Built a custom scraper for congressional stock trade disclosures covering both House and Senate for the trading agent insider-following strategy. Previous data sources — FMP, Stock Watcher S3, Finnhub free tier — were all dead or paywalled.

What Was Built

Capitol Trades RSC Parser — Primary, Both Chambers

  • Parses React Server Components stream from capitoltrades.com
  • Single HTTP request fetches 96 trades with full metadata
  • Extracts: ticker, member name, party, chamber, amount range, date, sector
  • No API key or headless browser needed

House Clerk PDF Scraper — Fallback

  • Downloads the annual FD ZIP filing index from disclosures-clerk.house.gov
  • Fetches individual PTR PDFs, extracts transactions via pdfplumber
  • House only, but provides granular transaction detail

Three-Tier Architecture

Capitol Trades RSC, then House Clerk PDFs, then Finnhub API as last resort. Each tier degrades gracefully.

Key Decisions

  • Capitol Trades RSC over direct government sites: Senate EFD returns 503 — site maintenance. House Clerk works but is House-only and slow. Capitol Trades covers both chambers in one fast request.
  • Balanced brace JSON extraction: RSC stream embeds trade objects as serialized JSON. Balanced brace matching isolates each object reliably.
  • Value-to-range mapping: Capitol Trades provides midpoint values (e.g. 8000) that map back to STOCK Act disclosure ranges (e.g. 1001-15000).
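The balanced-brace extraction decision can be sketched as a depth-tracking scan that ignores braces inside string literals. This is an illustration of the technique, not the actual parser in trading-agent:

```python
import json

def extract_json_objects(stream):
    """Walk the text, track brace depth (skipping braces inside JSON
    strings), and collect each complete top-level {...} object that
    parses as JSON. Sketch of balanced-brace matching over an RSC stream."""
    objects, depth, start, in_str, escape = [], 0, None, False, False
    for i, ch in enumerate(stream):
        if in_str:
            if escape:
                escape = False
            elif ch == "\\":
                escape = True
            elif ch == '"':
                in_str = False
            continue
        if ch == '"':
            in_str = True
        elif ch == "{":
            if depth == 0:
                start = i
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0 and start is not None:
                try:
                    objects.append(json.loads(stream[start:i + 1]))
                except ValueError:
                    pass  # balanced braces but not valid JSON; skip
                start = None
    return objects
```

Running this over the raw RSC payload isolates each serialized trade object without needing to understand the surrounding Next.js framing.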

Data Quality

  • 1040+ total records — House: 1031, Senate: 9
  • 72+ unique tickers
  • Notable: Jim Banks R-Senate selling SBUX, Boozman buying NVDA, Biggs purchasing 100-250K IBIT
  • Prompt shows House/Senate labels with party affiliations and committee relevance

Commits

da8df04 on master in trading-agent

Open Items

  • Monitor Senate EFD for when it comes back online
  • Capitol Trades RSC format could change if they update their Next.js rendering
  • FINRA short interest endpoint discovery still pending — token works, paths changed