About Me

What happens after the merge, and why the invisible half of software is the part I keep coming back to.

Minh Pham

For a while I only cared about what showed up on the screen. Slowly I started asking what happens underneath: deployments, containers, pipelines, what we monitor, what we log, what breaks when nobody is watching, and why keeping systems reliable is harder than it looks.

That curiosity turned into a habit. I started reading runbooks, staring at dashboards, and caring about the gap between a green build and a calm production week.

I'm Minh Pham. I'm a Computer Science and Mathematics student at the University of Minnesota, Twin Cities. The work I want to grow into sits in DevOps, DevSecOps, MLOps, and the infrastructure side of things: how software is kept alive, secured, and observable once it leaves the repo.

Over time I cared less about “did I ship a feature” and more about how software is tested, packaged, deployed, watched, patched, scaled, and handed off. A feature that cannot be deployed safely is not really done. That is the shift for me: not abandoning building, but widening the frame to the whole system around the code.

I like dependable systems. I like automation because it removes fragile manual steps people forget under stress. I like observability because it tells the truth when assumptions are wrong. Infrastructure pushes me to think past one file or one service: limits, dependencies, failure domains, and how pieces fail together. Reliability, to me, means designing for conditions that are messy, concurrent, and real.

School helped with that. Operating systems and networking gave me a feel for what the machine is actually doing. Algorithms and mathematics trained me to notice trade-offs, costs, and edge cases. I think about failure modes and efficiency more than I used to, and I want explanations that are clear enough to hold up when something breaks at night.

Speed without safety is brittle. If we ship fast, we should know what we are trusting.

When I work, I think about the full loop: build, test, deploy, observe, improve. I work in small steps because big surprises rarely help anyone. I expect failures to happen; the point is to learn from them. Logs, a broken build, a long debugging session, a noisy metric: those are often where I learn the most, even when they are painful.

To me, DevSecOps is where policy meets the pipeline: scans, secrets, and review have to travel with the same rhythm as the deploy. MLOps sits nearby: models need reproducibility, deployment discipline, and monitoring like anything else we run in production, sometimes with harder data problems on top.
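That "same rhythm" idea can be sketched in a few lines of Python. This is not any real CI system, and the stage names and functions are hypothetical; it only shows the shape of the gate: a security check sits in the same ordered sequence as build and deploy, so a failed scan blocks the release instead of being a report someone reads later.

```python
# Minimal sketch of pipeline gating (hypothetical stages, not a real CI tool):
# stages run in order, and the first failure stops everything after it.
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run stages in order; stop at the first failure and record results."""
    results = []
    for name, step in stages:
        if not step():
            results.append(f"{name}: FAILED")
            break  # nothing downstream runs, including deploy
        results.append(f"{name}: ok")
    return results

# Hypothetical stage functions for illustration only.
stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("dependency scan", lambda: False),  # the scan is a gate, not a report
    ("deploy", lambda: True),
]

print(run_pipeline(stages))
# → ['build: ok', 'unit tests: ok', 'dependency scan: FAILED']
```

The point of the sketch is the ordering: because the scan is just another stage, it cannot be skipped without skipping the pipeline itself, which is the property I want security checks to have.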

I was a soccer captain for a while. It was less about being loud and more about communication, trust, and responsibility when the game was tight. That carries over: teams, clear signals, and owning outcomes together.

I try to build in public in a simple way. My GitHub, projects, and blog are where I share the work in progress, not only the tidy ending. I would rather someone see what I tried than pretend I never had a wrong turn.

On a personal level, I stay curious about how systems fail. I have spent hours debugging something small because I needed to know why. I still sketch ideas on paper before I touch a terminal when the problem feels fuzzy. I want to understand what is happening under the hood, not just the surface.

Right now I am exploring CI/CD automation, Kubernetes, infrastructure as code, observability, secure delivery pipelines, and MLOps workflows. Long term I want to be the kind of engineer who builds systems that are resilient, maintainable, and actually useful when things go wrong.

I do not have everything figured out. I am still learning, still building, still trying to get better at the parts that matter in production. If you care about DevOps, systems, infrastructure, or how things run when they leave the laptop, I would love to hear from you.