AI Agents Development

Automation casefile for turning research intake, monitoring, retrieval, and publishing into faster analyst operations with explicit human review.

Origin: AI as a productivity multiplier

The work started with heavy AI use in daily tasks, then pushed toward more reliable, repeatable research workflows.

Stack: Local RAG and workflow tooling

Ollama, Postgres or Supabase, Docker, browser automation, and webhooks tied into one analyst-support stack.

Guardrail: Human review stays in the loop

The point is better throughput and cleaner monitoring, not blind autonomy for investment-facing work.

Summary

Role

Workflow designer, integrator, and evaluator for research-support systems.

Use cases

Monitoring, retrieval, drafting, publishing, and local knowledge-base support.

Focus

Reduce research drag and context switching while keeping confidence and failure modes visible.

Why this started

  • AI became a real productivity multiplier in daily work, especially for coding, drafting, and structuring messy tasks.
  • The first lesson was that generic chatbots were useful but not persistent or contextual enough for recurring professional workflows.
  • The next step was local and semi-local systems: retrieval, monitoring, and task-specific agents that fit an analyst desk better than one general chat window.

Public versus local AI

Public AI agents

Easy to access, lightweight, and usable without local compute, but often constrained by older models and paywalled features.

Local RAG stack

Open-source models, local knowledge bases, and no per-call token cost, with better control over prompts and retrieval.

Tradeoff

Local systems require setup skill and hardware, but they support far more tailored research workflows.

Practical outcome

The useful middle ground is targeted automation with a human gate, not a fantasy of full self-running investment research.

Local workflow stack

Ingest: Pull notes, websites, screenshots, and structured data into a consistent collection layer.
Retrieve: Use vector storage and database-backed context to fetch the smallest useful slice for the task.
Draft or monitor: Generate summaries, detect page changes, or scaffold memo-ready output with task-specific prompts.
Review and publish: Keep a human checkpoint before anything decision-relevant leaves the workflow.
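The four steps above can be sketched as one minimal pipeline with a hard human gate at the end. Every function name here is a hypothetical placeholder, not code from the repos below; the toy keyword-overlap ranking stands in for real vector retrieval.

```python
# Sketch of ingest -> retrieve -> draft -> review. All names are
# illustrative stand-ins for the real collectors, vector store, and
# model calls; the human checkpoint is the part that matters.

def ingest(sources):
    """Normalize raw inputs (notes, pages, screenshots) into text records."""
    return [{"source": s, "text": s.strip()} for s in sources]

def retrieve(records, query, k=2):
    """Toy retrieval: rank records by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(records,
                  key=lambda r: len(words & set(r["text"].lower().split())),
                  reverse=True)[:k]

def draft(context, query):
    """Stand-in for a model call that scaffolds a first-pass summary."""
    lines = [f"Draft answer to: {query}"]
    lines += [f"- context: {c['text']}" for c in context]
    return "\n".join(lines)

def review_gate(draft_text, approved):
    """Nothing decision-relevant leaves the workflow without human sign-off."""
    return draft_text if approved else None

records = ingest(["Q3 revenue grew 12%", "New plant opens in 2025", "CEO interview notes"])
memo = draft(retrieve(records, "revenue growth"), "revenue growth")
blocked = review_gate(memo, approved=False)  # stays None until a human approves
```

The gate is deliberately dumb: it returns nothing rather than a degraded answer, which keeps failure modes visible instead of silently passing unreviewed output downstream.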

Repo cluster

Local stack

deepseek-local-ai-starter-kit

Local-first base for model serving, retrieval, and self-hosted AI experiments, updated to track newer open models.

Monitoring

gemini-vision-AI-website-monitor

Website and visual-change monitoring workflow for recurring corporate or coverage checks.
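The repo pairs vision models with screenshots; a much simpler text-level sketch of the same recurring-check idea is to hash a normalized snapshot and only surface pages whose digest changed. This is an illustrative simplification, not the repo's actual visual-diff approach.

```python
import hashlib

def snapshot_digest(page_text: str) -> str:
    """Hash a normalized page snapshot so recurring checks compare digests, not pages."""
    normalized = " ".join(page_text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def has_changed(previous_digest: str, current_text: str) -> bool:
    """Flag a page for human review only when its digest differs from the baseline."""
    return snapshot_digest(current_text) != previous_digest

baseline = snapshot_digest("Investor Relations: no new filings.")
# Whitespace-only churn is ignored; a real content change is flagged.
noise = has_changed(baseline, "Investor  Relations:  no new filings.")
signal = has_changed(baseline, "Investor Relations: Q3 report published.")
```

Normalizing before hashing is the key design choice: it keeps cosmetic churn (whitespace, case) from generating false alerts during coverage checks.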

Retrieval

n8n-template-and-documentation-for-RAG

Reusable retrieval and automation templates for research support tasks.

Publishing

obsidian-post-webhook

Bridge between working notes and public or shared research surfaces.
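A note-to-webhook bridge reduces to building a JSON payload and POSTing it to an endpoint. The sketch below uses only the standard library; the URL and field names are assumptions for illustration, not obsidian-post-webhook's actual schema.

```python
import json
from urllib import request

WEBHOOK_URL = "https://example.com/hooks/publish"  # hypothetical endpoint

def build_payload(title: str, body: str, tags: list) -> bytes:
    """Serialize a finished note; field names are assumed, not the repo's schema."""
    return json.dumps({"title": title, "content": body, "tags": tags}).encode("utf-8")

def prepare_publish(payload: bytes) -> request.Request:
    """Build the POST request; sending it would be request.urlopen(req)."""
    return request.Request(WEBHOOK_URL, data=payload,
                           headers={"Content-Type": "application/json"},
                           method="POST")

req = prepare_publish(build_payload("Weekly coverage note", "Summary text...", ["coverage"]))
# request.urlopen(req)  # uncomment to actually send
```

Separating payload construction from sending keeps the publish step inspectable, which fits the human-checkpoint pattern used elsewhere in the stack.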

Research workflow applications

Monitoring

Browser and screenshot workflows for recurring website or company-update checks.

Retrieval

Local-first context retrieval from notes, docs, and structured databases.
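Local-first retrieval typically embeds each note as a vector and ranks candidates by cosine similarity to the query embedding. A pure-Python sketch with toy 3-d vectors standing in for real embeddings (which would come from an embedding model and live in Postgres/pgvector or similar):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy 3-d "embeddings"; a real stack produces these with an embedding model.
notes = {
    "q3-earnings": [0.9, 0.1, 0.0],
    "plant-opening": [0.1, 0.8, 0.2],
    "ceo-interview": [0.2, 0.2, 0.9],
}

def top_k(query_vec, k=1):
    """Fetch the smallest useful slice: the k nearest notes by cosine similarity."""
    ranked = sorted(notes, key=lambda n: cosine(query_vec, notes[n]), reverse=True)
    return ranked[:k]
```

Capping retrieval at the smallest useful `k` is what keeps the model's context focused; dumping the whole knowledge base into a prompt defeats the purpose of retrieval.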

Drafting

Memo scaffolds and first-pass summaries that reduce blank-page time without replacing judgment.

Publishing

Workflow support for moving completed notes into cleaner public or shared surfaces.

Operating constraints

  • High-end local models depend heavily on GPU VRAM and setup discipline.
  • Retrieval quality, confidence, and failure visibility matter more than raw prompt cleverness.
  • Anything investment-facing still needs explicit human review before it is trusted.
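The VRAM constraint in the first bullet can be roughed out from parameter count and quantization level. These are back-of-envelope figures that ignore KV-cache growth with context length, so treat them as lower bounds; the 20% overhead factor is an assumption, not a measured value.

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough weight memory: params * bytes-per-weight, plus ~20% runtime overhead.
    Ignores KV cache, so real usage at long contexts will be higher."""
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return round(weight_bytes * overhead / 1e9, 1)

# A 7B model at 4-bit quantization (~4.2 GB) fits on an 8 GB GPU;
# the same model at fp16 (~16.8 GB) does not.
q4 = vram_estimate_gb(7, 4)
fp16 = vram_estimate_gb(7, 16)
```

This is why quantized open models dominate desk-side setups: the same weights at 4-bit need roughly a quarter of the memory of fp16.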

Outcomes

  • Lower context-switching overhead during research-heavy periods.
  • Faster first-pass synthesis on recurring market or company topics.
  • Cleaner pipeline from monitoring to notes to public documentation.

See also