LX Is the New UX

AI, UX, LX, agentic interfaces, design, developer-tools, MCP

The nascent computer industry of the late 80s and throughout the 90s tried to make personal computers accessible to less technical people by making them easier to use. The advent of the GUI, with its simple, familiar office metaphors (files, folders, desktops, etc.), was so entirely successful that we no longer even notice them.

UX is not simply about making an effective GUI; it’s about letting the user work as fluidly as possible. And there is probably no more demanding a cohort than text-based interface users: developers and systems admins.

There’s a new user emerging, one that thrives on text interfaces and reads blazingly fast… but it’s not actually reading, it’s language processing, and the distinction is critically important.

The new user is AI.

I think I will always remember the first time I used GitHub’s Copilot: watching code stream forth in the middle of my project was nothing short of magic, then. Now I have three agents writing software for me as I write this, and another teaching me to write. It’s moving very fast.

Right now the frontier of AI use is digital assistants: taking action on our behalf, in pursuit of our goals. We’re using them to operate our computers for us, and we want them to do more, have more access, be more useful. To help them, we created crude single-use tools: get the weather, download a webpage, read a file. This primitive period lasted barely a few months.

Before long, the need to organise, share, and install these tools drove the development of MCP: a way to format instructions that makes it easier for us to explain to AI what tools are available and how to use them. MCP thrives because it allows quick, repeatable interfaces from AI to other systems. However, it carries a significant overhead for procedural knowledge, a gap that was addressed almost overnight by a technique that’s breathtakingly simple: skills. Skills are literally just text files containing instructions about how to do stuff. Finally, developers are writing documentation, for AI users.
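A skill really can be that simple. Here is a hypothetical sketch of one; the filename, numbering style, and the use of wttr.in’s one-line format are illustrative choices, not taken from any particular project:

```
# Skill: fetch-weather
When the user asks for the current weather:
1. Run `curl "https://wttr.in/<city>?format=3"` with the requested city.
2. Return the one-line summary verbatim.
3. If the request fails, say so; do not invent a forecast.
```

No schema, no server, no registration step; the instructions are the interface.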

There’s a clear progression: tell AI how to do something -> provide AI with knowledge about how to do something -> enable AI to find the knowledge it needs. Each step improves the experience of the AI. We get out of the way and let AI be productive by making the interface better suited to this new user type, one optimised for consuming text.

LX is the deliberate crafting of affordances that optimise an AI’s capacity to discover how to operate something, in ways that minimise its inherent tendency to guess.

I have been working on a project called Ductile. It started as a lightweight job scheduler, something I could use to queue automation tasks, and grew into an entire toolbox for AI. Its interface is a simple CLI, but there’s something different about this CLI: it’s optimised for AI operation.

Everything I can do with Ductile, an AI can also do, by design. All of Ductile’s affordances and capabilities are discoverable by an AI, in a format an AI can understand with minimal ambiguity: AI-resonant instructions.

What it means for an AI to be a user of Ductile is contextual. An AI can use Ductile plugins to carry out a task, and each tool comes with its own AI-resonant how-to. An AI can be the admin operator of Ductile, and the information needed to configure it is baked into the application. An AI can read the plugins and understand how to build a new one. Use, Operate, Extend. In each context, the LLM needs just enough information.

AI-native apps need to produce AI-optimised information: structured, hierarchical, and deterministic. An AI cannot value a colourful, good-looking interface; any interface presented to an AI collapses to a stream of tokens. Where humans use visual cues and design motifs to connect their intent with their interface, we have to give AI strong semantic anchoring through the operational information itself, connecting it to the AI’s task or goal.

Ductile has two capability interfaces: --help for humans, --skills for AI.

system watch      Real-time diagnostic monitoring TUI
system skills     Export capability registry as LLM-readable Markdown

- system.status tier=READ mut=0 out=human|json flags="[--json]" d="Check gateway health and PID lock."
- system.watch tier=READ mut=0 out=tui d="Open real-time diagnostic dashboard (Overwatch)."
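Part of what makes the dense format machine-friendly is that it can be consumed mechanically, not just read. A minimal sketch of a parser for lines shaped like the examples above (the field names tier, mut, out, flags, and d come from that output; everything else here is my own illustration):

```python
import re

# Pattern for one skills line; optional fields like flags= may be absent.
LINE = re.compile(
    r'-\s+(?P<name>[\w.]+)'              # command name, e.g. system.status
    r'\s+tier=(?P<tier>\w+)'             # access tier
    r'\s+mut=(?P<mut>\d)'                # mutates state? 0 or 1
    r'\s+out=(?P<out>[\w|]+)'            # output modes, pipe-separated
    r'(?:\s+flags="(?P<flags>[^"]*)")?'  # optional flag list
    r'\s+d="(?P<d>[^"]*)"'               # one-line description
)

def parse_skill(line: str) -> dict:
    """Turn a dense skills line into a field dictionary."""
    m = LINE.match(line.strip())
    if not m:
        raise ValueError(f"unparseable skills line: {line!r}")
    return m.groupdict()

entry = parse_skill(
    '- system.status tier=READ mut=0 out=human|json '
    'flags="[--json]" d="Check gateway health and PID lock."'
)
print(entry["name"], entry["tier"], entry["out"])
```

Every fact an agent needs sits in a named field; nothing has to be inferred from layout or whitespace.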

These examples share the same goals but cater to two different information-processing styles. The human --help conforms to normal CLI styling, uses space to create structure, and leads the user to their next action. The AI-optimised version is dense, definitional, and explicit. It’s deterministic and it crystallises expectations: if you use this, this is what will happen. That’s what it’s designed to do, tighten the inputs to temper the probabilistic nature of AI outputs. The options are mapped out up front, so the AI can take it all in in one pass. I find it harder to parse, but it’s not written for me. Slower for me, faster for AI. That’s LX.
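One way to keep the two views from drifting apart is to render both from a single capability registry. A minimal sketch, using a hypothetical registry shaped like the example entries above (the real Ductile internals may look nothing like this):

```python
# A hypothetical capability registry: one source of truth for both views.
REGISTRY = [
    {"name": "system.status", "tier": "READ", "mut": 0,
     "out": "human|json", "flags": "[--json]",
     "d": "Check gateway health and PID lock."},
    {"name": "system.watch", "tier": "READ", "mut": 0,
     "out": "tui", "flags": None,
     "d": "Open real-time diagnostic dashboard (Overwatch)."},
]

def render_help(reg):
    # Human view: spacious, aligned columns, short summaries.
    return "\n".join(f"{c['name'].replace('.', ' '):<18}{c['d']}" for c in reg)

def render_skills(reg):
    # AI view: dense, definitional lines with every fact explicit.
    lines = []
    for c in reg:
        flags = f' flags="{c["flags"]}"' if c["flags"] else ""
        lines.append(f'- {c["name"]} tier={c["tier"]} mut={c["mut"]} '
                     f'out={c["out"]}{flags} d="{c["d"]}"')
    return "\n".join(lines)

print(render_help(REGISTRY))
print(render_skills(REGISTRY))
```

Because both renderers read the same data, adding a capability updates the human docs and the AI docs in one move; the two audiences can never see contradictory information.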

In a few years, it may be mainstream for software to be designed with AI operation as an expected mode of use. Web pages could well carry embedded instructions, hidden from traditional users, directing AI to APIs. Developers will very likely expect their CLI utilities to be used by AI agents. Rich application clients could have adjacent AI interface services. As agentic operation becomes more common, more the default, software that provides these interfaces should be preferred.

LX as a mainstream movement may not happen in any obvious way (and we won’t see it anyway; it’s not for us). There will be pockets of effort in areas such as automation, where tool choices matter more and reliability is important, because failure has a cost.

The new class of user is already here, and we’re increasingly going to design with them in mind.