I looked up what the term means; new terms and names come up faster than I bother to keep up with these days. Local-first software is software whose users retain control and ownership: the software works offline, enables real-time collaboration (presumably when not offline), and supports interoperability (you can port to different services and are not tied to one).
It speaks of the moral and ethical integrity that software used to have way back when: when you bought something, you HAD it until you chose to upgrade or port away; the choice always lay with the consumer. There were no parasitic tendencies of entrapment; it was a free market. Technology is a means to an end, not supposed to be a new form of entrapment.
I loved what Adam Wiggins stood for: he wanted to make life better and easier for developers.
I did try Heroku before their new AI facelift. It was a good idea; I just was not sure it was cost-optimal for scaling. I also did not trust it enough to give up that much control, especially when I could set up the permissions and environment the way I wanted. Excessively reducing friction does gimp growth - we all choose where we place our pivots.
I do like the way he thinks, and I agree that there is a bigger frontier to using AI. One idea I got from watching that podcast is all the overhead admin work that is based on data we enter - to track our lives, business requirements, and the why of it all. The issue, though, is comparing the on-paper cost of using LLMs vs a human being. Counting the cost of the LLM (the datacenters, the cooling taking water that communities do need, the power that communities do need, the rare earths being mined that we could have put to more urgent needs, the economic impact to society in terms of jobs), it is not worth it. But it is one good use case for AI, the kind of work humans would love to relinquish.

There is also the realization (drawing from my implementation of the reddit web crawler) that the methodologies used to associate meaning to English (one language) mostly boil down to algorithms encoding human interpretations of definitions, and those interpretations change with the passing of time (trends with generations and new technology - everyone should now be programming in the definition of “67” that arose with Gen Alpha as a response), so we are endlessly programming to keep up with human trends; a small sketch below makes that concrete. Ignoring all of recorded (and debated) human history of how things came to be and why, ignoring philosophical origins and the progression of how definitions and human beliefs came to be, ignoring religion and what it means to each human individual, ignoring the impacts on human psychology - all of it gets compacted and meshed into a single definition of one word: what it means to the person receiving that word. We trust the 20-something, 30-something PhD students who have singularly dedicated their lives to one vein of academic pursuit to associate accurate definitions to language (as data scientists, as ML engineers, as AI engineers, as test engineers) for AI to generate an answer.

It’s really my only gripe about AI and how I choose to use it. I need to learn about it, but I don’t choose to use AI unnecessarily, out of respect for humanity; not every use case is suited to AI use. I like Adam Wiggins’s “lightweight” reasoning for AI use - I totally agree.
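To make the drift point concrete, here is a minimal, hedged sketch of the distributional idea those methodologies rest on: a word’s “meaning” is just the company it keeps in the training data, so when a generation changes that company, the learned definition changes with it. Everything here is invented for illustration - the toy corpora, the token 67, and the comparison words are assumptions, not anything from the actual crawler project.

```python
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(sentences, window=2):
    # The crude core of distributional semantics: represent each token
    # by a bag of the neighbors it appears next to in the corpus.
    vectors = defaultdict(Counter)
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, token in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[token][tokens[j]] += 1
    return vectors

def cosine(a, b):
    # Cosine similarity over sparse Counter vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two invented toy corpora, standing in for training data from two eras.
corpus_2015 = [
    "the bus arrives at 67 minutes past the hour",
    "route 67 runs along the coast",
    "page 67 covers the appendix",
]
corpus_2025 = [
    "he said 67 and the whole class laughed",
    "the 67 meme took over every feed",
    "kids shout 67 as a joke reply",
]

old, new = cooccurrence_vectors(corpus_2015), cooccurrence_vectors(corpus_2025)

# Same token, different company, different learned "definition".
print("67 ~ route (2015 data):", round(cosine(old["67"], old["route"]), 3))
print("67 ~ meme (2025 data): ", round(cosine(new["67"], new["meme"]), 3))
print("67 (2015) ~ 67 (2025): ", round(cosine(old["67"], new["67"]), 3))
```

The last line is the point: the 2015 model and the 2025 model barely agree on what 67 means, because neither ever saw a definition, only usage - which is exactly why the programming never ends.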
One thing AI does cleanly and flawlessly: it is a clean interface, free of human biases, emotion, and whatever baggage of life weighs on a mind trying to focus singularly on a task. When we work with an AI, we are not thinking about its feelings, its emotional state, or the impact of this interaction on future ones (nice to have a small context window compared to a human mind); we don’t have to be considerate, and we can be utter assholes if our mental talk is that of one. That does free up quite a bit of mental work. It is also lazy - and is it best for everyone to be in pursuit of that vein?