Why AI Proves That the Liberal Arts Are More Important Than Ever
6 March 2026
I started learning Swift when I was 13. I have spent the years since then building apps, submitting to Apple's Swift Student Challenge, getting rejected, learning from the rejections, and eventually winning. I run a software company called Tetrix Technologies. I teach kids how to build iPhone apps. By most measures, I am a technical person.
But the app that won me a place at WWDC was not a technical achievement. Not really. Tremor Check uses on-device machine learning to track vocal tremors in Parkinson's patients. I built it because I watched my grandfather carry a binder to every medical appointment, manually logging his symptoms in handwriting that was becoming harder to control. The technology in the app is real and it matters. But the reason the app exists has nothing to do with technology. It exists because I love my grandfather and I wanted to give him something more dignified than a binder.
That distinction, between the capacity to build something and the reason for building it, is what I want to write about. Because I think we are living through a moment where the technology industry is so consumed by what it can build that it has largely stopped asking why it should. And I think the liberal arts, the disciplines most people consider irrelevant in a technical age, are actually the tradition best equipped to answer that question.
The Bicycle for the Mind
In 1990, Steve Jobs described the personal computer as "a bicycle for the mind." It is one of the most cited phrases in the history of the technology industry, and it deserves more careful attention than it usually receives.
Jobs was drawing on a specific piece of data. In a study comparing the locomotion efficiency of various species, humans ranked poorly. We are not efficient movers. But a human on a bicycle becomes the most efficient moving creature on Earth, surpassing even the condor. The bicycle does not replace human locomotion. It does not move for you. What it does is take the energy you already produce through your own effort and make it dramatically more effective. Jobs saw the personal computer in exactly these terms: a tool that would take your existing cognitive effort and multiply its reach.
This was not the dominant vision of computing at the time. A lot of people in the early personal computing era imagined the computer as something closer to an oracle. A machine that would think for you. A machine that would solve the problems you could not solve yourself. A machine that would, eventually, make human thought unnecessary in certain domains. Jobs rejected that framing entirely. He was not interested in replacing human cognition. He was interested in extending it.
What made Jobs capable of that distinction was not his engineering ability. He was not the strongest engineer, and he never claimed to be. What made him capable of it was that he had a philosophy of technology that preceded and informed his practice of it. He had studied calligraphy at Reed College, not because it had any vocational application but because the forms interested him. He had spent time with Zen Buddhism, with the Bauhaus design tradition, with the history of typography and graphic communication. When the original Macintosh shipped with proportionally spaced fonts and elegant typography, at a time when every competing personal computer displayed monospaced characters on a black screen, that decision was not the product of engineering optimisation or market research. It was the product of a man who had studied how the presentation of information shapes the way people process it. He had learned that from letterforms, not from code.
Jobs understood something that I think the technology industry has largely forgotten: the purpose of a tool determines its value. A hammer is not inherently good or bad. A hammer designed to build a house serves a fundamentally different function than a hammer swung at a window, even though the physics of both actions are identical. The liberal arts gave Jobs his sense of purpose. They told him what the hammer was for.
The absence of that purpose is visible everywhere in contemporary technology. Consider the infinite scroll, which is probably the most widespread design pattern of the last fifteen years. From an engineering perspective, it is elegant. It loads content dynamically as the user reaches the bottom of the page, eliminating the friction of manual pagination and keeping the experience seamless. But from a human perspective, the infinite scroll is a mechanism designed to eliminate the natural stopping points that would otherwise allow a person to decide they have seen enough. Its explicit function is to override human judgment about attention. It captures effort without producing forward motion. No one who had seriously studied psychology, or philosophy, or even basic rhetoric would describe that as a bicycle for the mind. It is closer to a treadmill.
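To make the mechanism concrete, here is a minimal SwiftUI sketch of the pattern. The view and data are hypothetical, but the shape is faithful: a handful of lines is all it takes to remove every natural stopping point from a feed.

```swift
import SwiftUI

// A minimal sketch of the infinite-scroll pattern in SwiftUI.
// When the final visible row appears, another page is appended,
// so the feed never presents a natural stopping point.
struct InfiniteFeed: View {
    @State private var items: [Int] = Array(0..<20)

    var body: some View {
        List(items, id: \.self) { item in
            Text("Post \(item)")
                .onAppear {
                    // Reaching the last row quietly triggers the next page.
                    if item == items.last {
                        loadNextPage()
                    }
                }
        }
    }

    private func loadNextPage() {
        items.append(contentsOf: items.count..<(items.count + 20))
    }
}
```

Nothing in that code is malicious. The values it encodes only become visible when you ask what the absence of a stopping point does to the person scrolling.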
I think about this a lot when I am building software. I think about it because the line between augmentation and exploitation is not always obvious from the engineering side. You have to bring something else to the table. You have to have a reason for building that exists outside the code itself.
AI as the Ultimate Mirror
Large language models are a particularly interesting case study in this dynamic, because they force a confrontation with what we actually mean when we talk about knowledge and understanding.
Here is what a large language model actually is, stripped of the marketing language. It is a statistical model trained on an enormous corpus of human-generated text. The training data includes literature, philosophy, scientific papers, legal documents, religious texts, journalism, forum posts, technical manuals, poetry, screenplays, and essentially every other category of written expression that has been digitised and made available for training. The model learns to predict the probability distribution of the next token in a sequence, given all the tokens that preceded it. Through this process, it develops internal representations that capture patterns in how humans use language to express ideas, construct arguments, tell stories, and reason about the world.
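Reduced to its essentials, the prediction step looks something like the toy sketch below. The "model" here is a hard-coded table I invented purely for illustration, not a neural network, but the shape of the operation is the same: given a context, produce a probability for every candidate token, then choose one.

```swift
// A toy illustration of next-token prediction. The "model" below is a
// hard-coded table, not a neural network; it exists only to show the shape
// of the prediction step.
func nextToken(
    context: [String],
    distribution: ([String]) -> [String: Double]
) -> String {
    let probabilities = distribution(context)
    // Greedy decoding for brevity: pick the single most probable token.
    // Real systems usually sample from the distribution instead.
    return probabilities.max(by: { $0.value < $1.value })?.key ?? ""
}

// Hypothetical usage with a two-entry "model".
let toyModel: ([String]) -> [String: Double] = { context in
    context.last == "bicycle"
        ? ["for": 0.7, "of": 0.2, "ride": 0.1]
        : ["the": 0.5, "a": 0.3, "an": 0.2]
}
let token = nextToken(context: ["a", "bicycle"], distribution: toyModel) // "for"
```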
What this means, stated plainly, is that a large language model is a compressed representation of the liberal arts tradition. Not a perfect one. Not a conscious one. But a functional distillation of how humanity has expressed itself across centuries of written thought. When you prompt a language model and it produces a coherent response about the ethical implications of surveillance technology, it is not reasoning from first principles. It is drawing on patterns derived from thousands of human thinkers who have wrestled with questions of privacy, power, autonomy, and the relationship between the individual and the state. The model has never read Foucault or Bentham in any meaningful sense of the word "read." But it has absorbed the statistical fingerprints of their influence on the discourse that followed them.
This creates a significant irony. The technology most frequently cited as evidence that the humanities are obsolete is itself a product of the humanities. Without centuries of philosophical inquiry, literary experimentation, legal argumentation, and scientific writing, there would be nothing for these models to train on. The raw material of artificial intelligence is human thought. And the richest, most structured, most carefully reasoned human thought has historically been produced within the liberal arts tradition.
But the irony does not stop there. Understanding what these models actually do, and understanding what they get wrong, requires precisely the skills that a liberal arts education develops.
Evaluating the accuracy of an AI-generated historical claim requires historical literacy. You need to know enough about the period, the sources, and the historiographical debates to recognise when the model is producing a plausible-sounding but fabricated narrative. Recognising when an AI's summary of a philosophical argument distorts the original requires familiarity with the source material and the ability to read critically, to notice where a paraphrase has subtly shifted the meaning of a claim. Noticing when an AI produces text that is fluent but logically incoherent requires training in argumentation and logic, the ability to follow the structure of a claim and identify where the reasoning breaks down even when the sentences flow smoothly. Identifying when an AI's output reflects systematic bias requires awareness of how bias operates in language, culture, and institutional practice, an awareness that comes from studying history, sociology, and critical theory.
The people who work most effectively with AI tools are not necessarily the most technically sophisticated users. They are the people who can think clearly about what they are asking, evaluate the quality of what they receive, and place the output in a broader context of meaning and purpose. These are fundamentally liberal arts competencies. The ability to read carefully, write precisely, reason logically, and consider perspectives other than your own is not a supplement to technical skill. In the context of AI, it is the primary mechanism by which humans remain in meaningful control of the tools they use.
There is a practical dimension to this that I find worth emphasising. Prompting an AI effectively is, at its core, an exercise in communication. You have to know what you want. You have to articulate it in a way that is specific enough to be useful and flexible enough to allow for unexpected results. You have to anticipate how your words might be interpreted, and you have to refine your approach based on what comes back. This is a writing skill. It is the same skill you develop when you learn to construct a thesis, tailor an argument to an audience, or revise a draft until it says what you actually mean. Rhetoric, which has been a foundational liberal arts discipline since Aristotle first systematised it, is suddenly one of the most practically relevant skills in the AI toolkit. I do not think most people have fully absorbed how strange and significant that is.
Replace or Excel: Two Competing Visions of Technology
There is a fork in the road that every technology eventually reaches, and artificial intelligence has arrived at it now. The fork presents two paths, and the difference between them is philosophical, not technical.
The first path is substitution. This is the vision of technology as labour replacement. The machine does what the human used to do, but faster, cheaper, and at a scale no human workforce could match. The human is removed from the process. The value proposition is efficiency: fewer people, lower costs, higher throughput. In this model, the ideal endpoint is full automation. The human is a bottleneck to be engineered away.
The second path is augmentation. This is the vision of technology as capability extension. The machine does not replace the human. It makes the human more capable. A writer working with an AI assistant does not become unnecessary. The writer becomes a writer who can research faster, explore more variations of an idea, and iterate more rapidly on structure and argument. A doctor working with an AI diagnostic tool does not become redundant. The doctor becomes a doctor who can consider a wider range of differential diagnoses and catch patterns that might otherwise be missed in a time-pressured consultation. In this model, the ideal endpoint is not automation but amplification. The human remains at the centre. The technology extends what they can do.
These two visions produce fundamentally different products, even when the underlying technology is identical. Consider two hypothetical AI writing tools built on the same language model, with the same capabilities, the same training data, the same technical architecture. The substitution tool accepts a topic and produces a finished article. The user's role is to click a button and receive output. The augmentation tool reads your draft, identifies the places where your argument is weakest, suggests counterarguments you have not addressed, flags unsupported claims, and helps you find better evidence. The user's role is to think, to engage, to make the final decisions. The first tool makes the writer unnecessary. The second tool makes the writer better. The difference between them is not a difference of engineering. It is a difference of philosophy.
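One way to see how sharply the two philosophies diverge is to sketch the interfaces they would expose. The protocol and type names below are hypothetical, but the shape of the contract is the point: the first tool returns a replacement for the writer's work, the second returns material the writer still has to think about.

```swift
// Hypothetical interfaces for the two tools described above. Same model
// underneath; the philosophy lives in the shape of the API.

// Substitution: the user supplies a topic and receives a finished article.
protocol SubstitutionWritingTool {
    func writeArticle(about topic: String) -> String
}

// Augmentation: the user supplies a draft and receives material to think with.
struct Critique {
    let weakestClaims: [String]           // places where the argument needs support
    let missingCounterarguments: [String] // objections the draft never addresses
    let suggestedEvidence: [String]       // sources worth checking before publishing
}

protocol AugmentationWritingTool {
    func review(draft: String) -> Critique
}
```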
This connects directly to what the liberal arts actually do. The purpose of studying literature is not to memorise plots or accumulate reading lists. It is to develop the capacity to read with depth, to inhabit perspectives that are not your own, and to recognise the ways that narrative shapes understanding. The purpose of studying philosophy is not to memorise the positions of historical thinkers. It is to develop the capacity to reason carefully, to identify hidden assumptions in an argument, and to construct and evaluate chains of reasoning that hold up under scrutiny. The purpose of studying history is not to memorise dates and battles. It is to develop the capacity to recognise patterns across time, to understand how present conditions were shaped by past decisions, and to appreciate that the world as it currently exists is not inevitable but contingent.
None of these disciplines exist to give you answers. They exist to make you better at asking questions. And better questions are exactly what the current AI moment demands.
I think about this in the context of my own work. When I am deciding what to build next at Tetrix, the most important phase of the process is not the engineering. It is the phase where I decide what the product is for. Who is it serving? What problem does it solve? Does it make the person using it more capable, or does it just automate something they used to do themselves? Those are not technical questions. They are questions about values, about what kind of tool I want to put into the world. They are, whether I use the term or not, liberal arts questions. One of the clearest examples of this self-questioning came when I was considering adding an Agent Mode to Scribe. I was initially sceptical, because in one sense it attempted to replace the writer. In another sense, though, it amplified the writer's ability to play with ideas, do research faster, and get instant feedback from a differing perspective.
What Happens When Engineers Lack the Why
The consequences of building without a clear sense of purpose are well documented, and in many cases they are still actively unfolding.
The most instructive example is probably social media recommendation algorithms. The engineering problem these systems were designed to solve was straightforward: given a user's history of interactions on the platform, predict which pieces of content they are most likely to engage with, and surface that content in their feed. The engineers who built these systems were technically excellent. The algorithms performed exactly as specified. Engagement metrics went up. Time spent on platform went up. Advertising revenue went up. By every measure the system was designed to optimise, it succeeded.
But engagement, it turned out, was not a neutral metric. The content that generated the highest engagement was not the content that informed users, connected them with people they cared about, or enriched their understanding of the world. It was the content that provoked the strongest emotional reactions: outrage, moral indignation, anxiety, tribal solidarity, and fear. The algorithms learned this pattern rapidly and began surfacing content that was increasingly extreme, divisive, and emotionally manipulative. Not because anyone wrote a line of code instructing the system to radicalise its users, but because radicalising content reliably produced the engagement signal the system had been built to maximise. Engagement was the only value the system understood, and the system optimised for it with extraordinary efficiency.
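Reduced to a sketch, the ranking logic is almost trivially simple, which is exactly the problem. The types and the predicted score below are hypothetical, but they show where the value judgment hides: the single number being sorted on is the only value the system knows.

```swift
// A deliberately simplified sketch of engagement-ranked feed ordering.
// The types and the predicted score are hypothetical; the point is that the
// number being sorted on is the only value the system understands.
struct Post {
    let id: Int
    let predictedEngagement: Double // clicks, comments, shares, watch time
}

func rankFeed(_ candidates: [Post]) -> [Post] {
    // Nothing here asks whether the content is true, healthy, or worth seeing.
    // Whatever maximises the engagement signal rises to the top.
    candidates.sorted { $0.predictedEngagement > $1.predictedEngagement }
}
```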
The downstream effects have been documented by researchers in psychology, sociology, political science, and public health. Adolescent mental health outcomes have deteriorated significantly in populations with high social media usage, with correlations between platform engagement and rates of anxiety, depression, and self-harm that are strong enough to have attracted regulatory attention in multiple countries. Political polarisation has intensified in patterns that track closely with algorithmic content distribution, as users are progressively shown material that reinforces and radicalises their existing positions. Misinformation has proliferated because false claims tend to be more novel and more emotionally charged than accurate ones, and novelty and emotional charge are precisely what engagement-optimised algorithms reward.
None of this was a technical failure. The engineering was excellent. What failed was the absence of any serious inquiry into whether the optimisation target was compatible with human wellbeing. No one with training in behavioural psychology raised the alarm about variable-ratio reinforcement schedules, which have been understood since B.F. Skinner's research in the 1950s to be the most effective mechanism for producing compulsive, addictive behaviour in humans. No one with training in moral philosophy pointed out that a system optimising for engagement without any constraint on the quality or effects of that engagement is functionally equivalent to a system optimising for addiction. No one with training in history noted that every major new communication technology, from the printing press to broadcast radio to television, has gone through a period of destabilising social disruption before institutions and norms develop to manage it, and that social media was exhibiting exactly this pattern.
The same structural failure appears in AI hiring tools. Several large companies deployed machine learning models to screen job applicants, training those models on historical hiring data from within their organisations. The models learned the patterns in the data faithfully. They also learned the biases embedded in that data, because decades of historical hiring decisions reflected systemic patterns of discrimination. The models did not introduce bias into the hiring process. They automated existing bias. They gave it the appearance of computational objectivity. They scaled it to thousands of decisions per day. A student of institutional history or sociology would have predicted this outcome immediately, because the idea that historical data is neutral and objective is one of the most persistent and dangerous myths in quantitative social science. Data reflects the world that produced it, including the inequities of that world.
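A toy sketch makes the mechanism visible. Everything below is hypothetical and deliberately oversimplified, but it shows how a proxy feature in historical data can carry the old bias straight through a model that looks objective.

```swift
// A toy sketch of how historical bias passes straight through a "neutral" model.
// Everything here is hypothetical and deliberately oversimplified.
struct Applicant {
    let yearsExperience: Double
    let resemblesPastHires: Bool // a proxy feature the historical data happens to encode
}

// Weights "learned" from past hiring decisions. If those decisions favoured one
// population, the proxy feature carries most of the weight, and the model
// reproduces the old pattern at scale while appearing objective.
func historicalScore(_ applicant: Applicant) -> Double {
    let experienceWeight = 0.4
    let proxyWeight = 2.5
    return experienceWeight * applicant.yearsExperience
        + (applicant.resemblesPastHires ? proxyWeight : 0)
}
```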
Facial recognition systems deployed in law enforcement contexts present a similar pattern. Independent audits by researchers at MIT and the National Institute of Standards and Technology have repeatedly demonstrated that commercial facial recognition systems exhibit significantly higher error rates for darker-skinned individuals and for women than for lighter-skinned men. The technical explanation is that the training datasets used to build these systems were disproportionately composed of lighter-skinned male faces, which means the models learned the distinguishing features of that demographic with much greater fidelity than others. The practical consequence is that a technology marketed as objective and impartial produces systematically discriminatory outcomes when deployed in the real world. A student of the history of policing and technology would have recognised this pattern instantly, because the promise of technological objectivity masking and legitimating structural bias has been a recurring feature of criminal justice innovation for over a century.
In every one of these cases, the core problem was not that the engineers lacked technical ability. The problem was that the development process did not include the kinds of questions that a liberal arts education trains people to ask. Questions like: What are the second-order consequences of optimising for this particular metric? Whose experience and whose perspective is absent from this training dataset? What historical patterns does this technology reproduce? What does it even mean for a system to be "fair" when the data it learns from was generated by an unfair world?
These are not engineering questions. They are questions drawn from ethics, sociology, history, philosophy, and political theory. And the failure to ask them has produced some of the most consequential and damaging technological outcomes of the past two decades.
The Builders Should Be the Thinkers
If the argument I have been making holds, then the practical implication is more radical than simply adding ethicists to product teams. The builders themselves need to become the thinkers. Not in some superficial sense of taking a philosophy course as an elective, but in the fundamental sense that engineering education and practice must integrate the capacity for non-technical reasoning as a core competency rather than an optional supplement.
The problem with the phrase "the builders need the thinkers" is that it preserves a false dichotomy. It suggests that technical minds and reflective minds belong to separate categories of people, that we need to import humanistic thinking into engineering contexts because engineers themselves are constitutionally incapable of it. This is both historically inaccurate and practically counterproductive. The history of technology is filled with engineers who were also philosophers, designers who understood aesthetics not as decoration but as integral to function, builders who understood that the question of why something should exist precedes and determines how it can be built.
What we need is not collaboration between separate domains of expertise. What we need is a generation of builders who have developed the specific capacity to think in ways that are explicitly not technical. To hold multiple competing values in tension without immediately resolving them into optimisation targets. To recognise that human problems rarely have single correct answers, that the richness of human experience cannot be captured by efficiency metrics, that the purpose of a tool is not self-evident from its capabilities but must be deliberated within frameworks that account for human flourishing in all its complexity.
This requires developing a different kind of engineering mind. One that can move fluidly between the precision required to build reliable systems and the ambiguity required to understand human contexts. One that can hold both the engineer's commitment to solving problems and the philosopher's commitment to questioning whether those problems should be solved in the first place. One that understands technology not as a neutral force that can be directed toward any end, but as a set of choices that encode particular values about what kinds of human activity are worth amplifying and what kinds are worth resisting.
This is also an argument about education, and it is one I feel strongly about as someone who is still in school. Computer science education at most institutions treats the humanities as distribution requirements. Something you endure to check a box. A student can earn a degree in computer science from a highly ranked university without ever engaging seriously with ethical theory, rhetorical analysis, historical method, or psychological research beyond an introductory survey. This produces graduates who are technically skilled and contextually illiterate. They can build virtually anything. They cannot always tell you whether they should.
I am not arguing that every engineer needs a double major in philosophy. I am arguing that the engineering disciplines need to treat the liberal arts as genuine partners in the work of building technology. Not as cultural enrichment for well-rounded individuals, but as essential bodies of knowledge for building technology that works for human beings in all the ways that actually matter.
The Why Endures
Technology changes quickly. The specific technical skills that the market values today will be partially obsolete within a decade. The programming languages, the frameworks, the platforms, the paradigms that define this particular moment will give way to new ones, as they always have. This is not a flaw. It is simply the nature of technical knowledge. It is powerful and it is perishable.
But the capacity to think critically about the relationship between technology and human life does not expire. The ability to read carefully, write with precision, reason rigorously, and take seriously the perspectives of people whose experience differs from your own is as valuable today as it was when the printing press upended the information landscape of fifteenth-century Europe. These capacities are not tied to any specific technological paradigm. They are tied to the permanent features of human existence: the need to communicate, to cooperate, to make decisions under uncertainty, and to construct some shared understanding of what a good life looks like.
The liberal arts have never been opposed to technology. They have been the tradition through which societies decide what technology is for. Every major technological revolution has eventually demanded the kinds of questions that the liberal arts are trained to ask. The printing press raised questions about authority, the democratisation of knowledge, and the nature of truth in an environment of information abundance. The industrial revolution raised questions about the relationship between productivity and human dignity, about what happens when efficiency is pursued without regard for the people who bear its costs. The internet raised questions about identity, privacy, and the architecture of public discourse.
Artificial intelligence raises all of these questions simultaneously, and introduces new ones that do not have clean precedents. What does it mean to understand something if a machine can produce a convincing simulation of understanding? What does it mean to create something if a machine can generate novel outputs by recombining patterns from existing human work? What does authorship mean when the boundaries between human and machine contribution become blurred? What does expertise mean when a system trained on the writing of experts can replicate the surface patterns of expertise without possessing any of the underlying comprehension?
These are not technical questions. They are among the deepest questions the liberal arts tradition has ever been asked to address. And the way we answer them will determine whether artificial intelligence becomes what Steve Jobs imagined the computer could be, a bicycle for the mind, a tool that makes humans more capable and more creative and more fully themselves, or whether it becomes something else entirely. A treadmill for the attention. A mechanism that captures human effort and converts it into engagement metrics and advertising revenue without producing anything that makes anyone's life meaningfully better.
I built Tremor Check because I wanted to help my grandfather. That motivation came from love, not from engineering. The engineering was the means. The love was the reason. And I think that distinction, between the means and the reason, between the how and the why, is the most important distinction in technology today.
The liberal arts are where the why lives. They always have been. And in the age of artificial intelligence, they have never been more necessary.