OpenCode has become part of my day-to-day engineering workflow, not as a replacement for judgment, but as a high-leverage assistant that helps me move faster with better consistency.

What made it actually useful was wiring it into the way I already work — especially the conventions in my dotfiles.

Why this setup works

Most assistant tools underperform because they lack context about your environment:

  • command aliases
  • editor behavior
  • shell defaults
  • project bootstrap habits

My dotfiles encode those defaults, so OpenCode can reason with the same constraints I use every day.
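To make that concrete, here is the shape of what I mean, a minimal dotfile sketch. The alias names and the `mkcd` helper are my own conventions, not a standard; adjust to taste (the `set -o pipefail` line assumes bash).

```shell
# Hypothetical dotfile excerpt; every name here is illustrative.

# Command aliases: short, predictable verbs for routine actions.
alias gs='git status --short'
alias gl='git log --oneline -10'

# Shell defaults: dedupe history, fail fast in pipelines (bash).
export HISTCONTROL=ignoredups
set -o pipefail

# Project bootstrap habit: create a directory and enter it in one step.
mkcd() {
  mkdir -p "$1" && cd "$1"
}
```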

Dotfile patterns I reference with OpenCode

I keep the assistant aligned with four practical context layers.

1) Shell conventions

I rely on shell aliases/functions for routine actions like status checks, clean builds, and navigation shortcuts.

When I ask OpenCode to run checks or suggest commands, I frame prompts around my shell conventions so output maps cleanly to my terminal habits.
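Two examples of the conventions I mean, a status alias and a navigation helper. Both names (`st`, `up`) are assumptions from my own setup, not anything OpenCode requires.

```shell
# Short status check: current branch plus a compact diff summary.
alias st='git status --short --branch'

# Navigation shortcut: climb N directories instead of chaining cd ../..
up() {
  local n="${1:-1}" path=""
  while [ "$n" -gt 0 ]; do
    path="../${path}"
    n=$((n - 1))
  done
  cd "$path" || return 1
}
```

When the assistant knows these exist, "run a status check" resolves to `st` rather than some long invocation I would never type by hand.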

2) Git workflow defaults

My dotfiles include a consistent Git style (branch naming, short status checks, commit hygiene).
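A sketch of what "consistent Git style" looks like in practice. The alias names and the commit template wording are my own choices; I apply them per-repository here only to keep the example self-contained (in my dotfiles they are global).

```shell
# Set up a throwaway repo so the config below has somewhere to live.
cd "$(mktemp -d)" && git init -q demo-repo && cd demo-repo

# Short status and log aliases matching the team's review habits.
git config alias.s  'status --short --branch'
git config alias.lg 'log --oneline --decorate -15'

# Commit hygiene: a template nudges scoped, conventional messages.
printf '%s\n' 'type(scope): summary' '' 'Why:' > .gitmessage
git config commit.template .gitmessage
```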

I use OpenCode for:

  • commit message drafting
  • concise PR summaries
  • release note scaffolding

The key is to keep the assistant inside existing team guardrails rather than inventing a new process.

3) Editor ergonomics

I keep formatting and linting expectations predictable through editor and CLI tooling.

OpenCode helps most with refactors when it understands:

  • preferred file structure
  • formatting conventions
  • the validation commands I run before commit
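The validation step is the part worth pinning down. A minimal sketch of the pre-commit gate I keep in my dotfiles; the function name and the example check commands are assumptions, substitute whatever your project actually runs.

```shell
# Run each check command in order; stop at the first failure.
precommit() {
  local cmd
  for cmd in "$@"; do
    printf '==> %s\n' "$cmd"
    eval "$cmd" || { printf 'FAILED: %s\n' "$cmd" >&2; return 1; }
  done
  echo "all checks passed"
}

# Example invocation (illustrative targets):
# precommit "make fmt-check" "make lint" "make test"
```

Because the gate is one named function, I can tell OpenCode "make it pass `precommit`" instead of re-listing the checks in every prompt.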

4) Reproducible command history

Instead of one-off AI suggestions, I convert useful commands into repeatable scripts/aliases where appropriate.

That creates a feedback loop: OpenCode helps discover improvements, and dotfiles preserve what works.
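An example of that promotion step: a one-off pipeline an assistant might suggest, turned into a small reusable function. The name `biggest` is my own, and `sort -h` assumes GNU-style human-numeric sorting.

```shell
# Before (one-off): du -ah . | sort -rh | head -n 10
# After (dotfile function): largest N entries under a path
# (defaults: top 10 of the current directory).
biggest() {
  du -ah "${2:-.}" 2>/dev/null | sort -rh | head -n "${1:-10}"
}
```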

My current OpenCode workflow

  1. Start with repository context (what changed, what failed, what outcome I want).
  2. Constrain commands to project-safe checks first (build, tests, lint, targeted verification).
  3. Request structured output (summary, patch rationale, and validation list).
  4. Promote repeatable wins into dotfiles/scripts once they prove they save time.
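Steps 1 and 3 can be sketched as a tiny helper that assembles repository context and spells out the structured output I want. The function name and prompt wording are my own, not part of OpenCode.

```shell
# Assemble repo context plus an explicit request for structured output.
oc_context() {
  cat <<EOF
## Context
Changed files:
$(git status --short 2>/dev/null || echo "(not a git repo)")

## Request
1. Summary of the change
2. Patch rationale
3. Validation checklist (build, tests, lint)
EOF
}
```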

What improved for me

  • Faster first-pass implementation on repetitive tasks.
  • Better consistency in PR quality and technical communication.
  • Less context switching during triage and maintenance work.
  • More time for architecture decisions and mentoring.

Final thought

The real value is not "AI writes code." The value is creating a system where your assistant can operate inside your engineering standards.

For me, dotfiles are the bridge between personal workflow and assistant reliability.

If you're trying OpenCode, start there: make your environment explicit, then let the assistant amplify it.
