Thinking about how I take notes on all of my learnings makes me feel that an LLM could be trained fairly easily to replicate how I think.

But then, I think about how many situations I am exposed to.

Also, we still write security groups manually. An AI should do this. The prompt is easy: “I only want X, Y and Z to be able to talk to A on port 8888.”
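
To make the contrast concrete, here is a minimal sketch of the manual version using boto3. The group IDs are placeholders, and I’m assuming X, Y, Z and A each already have a security group attached:

```python
import boto3

# Manual version of "I only want X, Y and Z to talk to A on port 8888".
# All security group IDs below are made-up placeholders.
ec2 = boto3.client("ec2")

SERVICE_A_SG = "sg-0aaaaaaaaaaaaaaaa"  # group attached to service A
CALLER_SGS = {
    "X": "sg-0xxxxxxxxxxxxxxxx",
    "Y": "sg-0yyyyyyyyyyyyyyyy",
    "Z": "sg-0zzzzzzzzzzzzzzzz",
}

for name, sg_id in CALLER_SGS.items():
    ec2.authorize_security_group_ingress(
        GroupId=SERVICE_A_SG,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8888,
            "ToPort": 8888,
            # Reference the caller's group instead of an IP range, so
            # instances can come and go without rule churn.
            "UserIdGroupPairs": [{
                "GroupId": sg_id,
                "Description": f"Allow {name} -> A on 8888",
            }],
        }],
    )
```

One English sentence, a dozen lines of plumbing.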

All of infrastructure design should be automated. It’s a solved problem. It doesn’t require much “deep thought”. Heck, even writing applications should be automated. The prompt for these is: “Run an application X in Rust, using this web framework, on Kubernetes on AWS and expose these ports. Make sure it can talk to these other applications.”
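
For comparison, here is roughly what that one-sentence prompt expands into by hand today: a sketch using the official Kubernetes Python client, with a hypothetical service called `app-x`, a placeholder image, and port 8888 standing in for “these ports”:

```python
from kubernetes import client, config

# Deploy a hypothetical Rust service ("app-x") and expose port 8888.
# Image name, labels, and namespace are placeholders.
config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="app-x"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "app-x"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "app-x"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="app-x",
                    image="registry.example.com/app-x:latest",
                    ports=[client.V1ContainerPort(container_port=8888)],
                ),
            ]),
        ),
    ),
)

# A Service so the other applications can actually talk to it.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="app-x"),
    spec=client.V1ServiceSpec(
        selector={"app": "app-x"},
        ports=[client.V1ServicePort(port=8888, target_port=8888)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

And that still says nothing about the Rust code itself, the Dockerfile, or the AWS side.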

Then, what is software engineering?

Then, finally, maybe we would be free to actually think.

We could be done shaving yaks. Instead of working many layers deep, worrying about how to manage network connections or how to structure our code, we would be solving the actual first-level problems.

Or, maybe we could stop solving problems and enjoy life.