LPC: Life after X
Keith Packard has probably done more work to put the X Window System onto our desks than just about anybody else. With some 25 years of history, X has had a good run, but nothing is forever. Is that run coming to an end, and what might come after? In his Linux Plumbers Conference talk, Keith claimed to have no control over how things might go, but he did have some ideas. Those ideas add up to an interesting vision of our graphical future.
We have reached a point where we are running graphical applications on a wide variety of systems. There is the classic desktop environment that X was born into, but that is just the beginning. Mobile systems have become increasingly powerful and are displacing desktops in a number of situations. Media-specific devices have display requirements of their own. We are seeing graphical applications in vehicles, and in a number of other embedded situations.
Keith asked: how many of these applications care about network transparency, which was one of the original headline features of X? How many of them care about ICCCM compliance? How many of them care about X at all? The answer to all of those questions, of course, is "very few." Instead, developers designing these systems are more likely to resent X for its complexity, for its memory and CPU footprint, and for its contribution to lengthy boot times. They would happily get rid of it. Keith said that he means to accommodate them without wrecking things for the rest of us.
Toward a non-X future
For better or for worse, there is currently a wide variety of rendering APIs to choose from when writing graphical libraries. According to Keith, only two of them are interesting. For video rendering, there's the VDPAU/VAAPI pair; for everything else, there's OpenGL. Nothing else really matters going forward.
In the era of direct rendering, neither of those APIs really depends on X.
So what is X good for? There is still a lot which is done in the X server, starting with video mode setting. Much of that work has been moved into the kernel, at least for graphics chipsets from the "big three," but X still does it for the rest. If you still want to do boring 2D graphics, X is there for you - as Keith put it, we all love ugly lines and lumpy text. Input is still very much handled in X; the kernel's evdev interface does some of it but falls far short of doing the whole job. Key mapping is done in X; again, what's provided by the kernel in this area is "primitive." X handles clipping when application windows overlap each other; it also takes care of 3D object management via the GLX extension.
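To see just how "primitive" the kernel's input interface is, consider a minimal sketch (in C; the device path is only an example) that reads raw events straight from an evdev node. The kernel hands applications nothing more than numeric event codes; key mapping, pointer acceleration, and higher-level gestures are all left to user space.

    /* Minimal sketch: reading raw input events from the kernel's evdev
     * interface.  The device path is an example; real code would
     * enumerate /dev/input/ or use udev to find devices. */
    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/input/event0", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct input_event ev;
        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
            /* The kernel reports only (type, code, value) triples;
             * turning KEY_* codes into characters is user space's job. */
            if (ev.type == EV_KEY)
                printf("key code %u %s\n", (unsigned) ev.code,
                       ev.value ? "pressed" : "released");
        }
        close(fd);
        return 0;
    }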
These tasks have a lot to do with why the X server is still in charge of our screens. Traditionally, mode setting has been a big and hairy task, with the requisite code buried deep within the X server; that has put up a big barrier to entry for any competing window system. The clipping job had to be done somewhere. The management of video memory was done in the X server, leading to a situation where only the server gets to take advantage of any sort of persistent video memory. X is also there to make external window managers (and, later, compositing managers) work.
But things have changed in the 25 years or so since work began on X. Back in 1985, Unix systems did not support shared libraries; if the user ran two applications linked to the same library, there would be two copies of that library in memory, which was a scarce resource in those days. So it made a lot of sense to put graphics code into a central server (X), where it could be shared among applications. We no longer need to do things that way; our systems have gotten much better at sharing code which appears in different address spaces.
We also have much more complex applications - back then xterm was just about all there was. These applications manipulate a lot more graphical data, and almost every operation involves images. Remote applications are implemented with protocols like HTTP; there is little need to use the X protocol for that purpose anymore. We have graphical toolkits which can implement dynamic themes, so it is no longer necessary to run a separate window manager to impose a theme on the system. It is a lot easier to make the system respond "quickly enough"; a lot of hackery in the X server (such as the "mouse ahead" feature) was designed for a time when systems were much less responsive. And we have color screens now; they were scarce and expensive in the early days of X.
Over time, the window system has been split apart into multiple pieces - the X server, the window manager, the compositing manager, etc. All of these pieces are linked by complex, asynchronous protocols. Performance suffers as a result; for example, every keystroke must pass through at least three processes: the application, the X server, and the compositing manager. But we don't need to do things that way any more; we can simplify the architecture and improve responsiveness. There are some unsolved problems associated with removing all these processes - it's not clear how all of the fancy 3D bling provided by window/compositing managers like compiz can be implemented - but maybe we don't need all of that.
What about remote applications in an X-free world? Keith suggests that there is little need for X-style network transparency anymore. One of the early uses for network transparency was applications oriented around forms and dialog boxes; those are all implemented with web browsers now. For other applications, tools like VNC and rdesktop work and perform better than native X. Technologies like WiDi (Intel's Wireless Display) can also handle remote display needs in some situations.
Work to do
So maybe we can get rid of X, but, as described above, there are still a number of important things done by the X server. If X goes, those functions need to be handled elsewhere. Mode setting is moving into the kernel, but there are still a lot of devices without kernel mode setting (KMS) support. Somebody will have to implement KMS drivers for those devices, or they may eventually stop working. Input device support is partly handled by evdev. Graphical memory management is now handled in the kernel by GEM in a number of cases. In other words, things are moving into the kernel - Keith seemed pleased at the notion of making all of the functionality be somebody else's problem.
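To illustrate what "mode setting in the kernel" looks like from user space, here is a minimal sketch using libdrm's KMS calls to list the modes advertised by connected outputs. The device path is an example and error handling is trimmed; a real program would discover the device via udev.

    /* Minimal sketch: enumerating display connectors and modes through
     * the kernel's KMS interface via libdrm (link with -ldrm). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) {
            fprintf(stderr, "no KMS support on this device\n");
            return 1;
        }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn =
                drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            if (conn->connection == DRM_MODE_CONNECTED) {
                /* Each connected output reports the modes it supports. */
                for (int m = 0; m < conn->count_modes; m++)
                    printf("connector %u: %s @ %u Hz\n",
                           conn->connector_id,
                           conn->modes[m].name,
                           conn->modes[m].vrefresh);
            }
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        return 0;
    }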
Some things are missing, though. Proper key mapping is one of them; that cannot (or should not) all be done in the kernel. Work is afoot to create a "libxkbcommon" library so that key mapping can be incorporated into applications directly. Accessibility work - mouse keys and sticky keys, for example - also needs to be handled in user space somewhere. The input driver problem is not completely solved; complicated devices (like touchpads) need user-space support. Some things need to be made cheaper, a task that can mostly be accomplished by replacing APIs with more efficient variants: GLX can be replaced by EGL, GLES can be used instead of OpenGL in many cases, and VDPAU is an improvement over Xv. There is also the little problem of mixing X and non-X applications while providing a unified user experience.
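As a rough illustration of key mapping done directly in the application, the sketch below uses libxkbcommon to turn an evdev keycode into a keysym. The calls shown follow the API the library eventually settled on, not necessarily what was on the table at the time of the talk.

    /* Sketch: translating an evdev keycode to a keysym in the
     * application, using libxkbcommon (link with -lxkbcommon). */
    #include <stdio.h>
    #include <xkbcommon/xkbcommon.h>

    int main(void)
    {
        struct xkb_context *ctx = xkb_context_new(XKB_CONTEXT_NO_FLAGS);
        /* NULL names select the system default rules/model/layout. */
        struct xkb_keymap *keymap =
            xkb_keymap_new_from_names(ctx, NULL, XKB_KEYMAP_COMPILE_NO_FLAGS);
        struct xkb_state *state = xkb_state_new(keymap);

        /* evdev keycodes are offset by 8 in XKB; KEY_A (30) becomes 38. */
        xkb_keycode_t keycode = 30 + 8;
        xkb_keysym_t sym = xkb_state_key_get_one_sym(state, keycode);

        char name[64];
        xkb_keysym_get_name(sym, name, sizeof(name));
        printf("keycode %u maps to keysym %s\n", keycode, name);

        xkb_state_unref(state);
        xkb_keymap_unref(keymap);
        xkb_context_unref(ctx);
        return 0;
    }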
Keith reflected on some of the unintended benefits that have come from the development work done in recent years; many of these will prove helpful going forward. Compositing, for example, was added as a way of adding fancy effects to 2D applications. Once the X developers had compositing, though, they realized that it enabled the rendering of windows without clipping, simplifying things considerably. It also separated rendering from changing on-screen content - two tasks which had been tightly tied before - making rendering more broadly useful. The GEM code had a number of goals, including making video memory pageable, enabling zero-copy texture creation from pixmaps, and the management of persistent 3D objects. Along with GEM came lockless direct rendering, improving performance and making it possible to run multiple window systems with no performance hit. Kernel mode setting was designed to make graphical setup more reliable and to enable the display of kernel panic messages, but KMS also made it easy to implement alternative window systems - or to run applications with no window system at all. EGL was designed to enable porting of applications between platforms; it also enabled running those applications on non-X window systems and allowed the expensive GLX buffer-sharing scheme to be dumped.
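To make the EGL point concrete, here is a minimal sketch that brings up an OpenGL ES context through EGL with no GLX involved. The window-system surface (X, Wayland, or bare KMS/GBM) is deliberately left out, since that part is platform-specific; only the EGL side is shown.

    /* Sketch: creating an OpenGL ES context through EGL instead of GLX
     * (link with -lEGL).  Surface creation is platform-specific and
     * omitted here. */
    #include <EGL/egl.h>
    #include <stdio.h>

    int main(void)
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLint major, minor;
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor)) {
            fprintf(stderr, "no EGL display\n");
            return 1;
        }
        printf("EGL %d.%d\n", major, minor);

        static const EGLint config_attribs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_NONE
        };
        EGLConfig config;
        EGLint num_configs;
        eglChooseConfig(dpy, config_attribs, &config, 1, &num_configs);

        static const EGLint context_attribs[] = {
            EGL_CONTEXT_CLIENT_VERSION, 2,
            EGL_NONE
        };
        eglBindAPI(EGL_OPENGL_ES_API);
        EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT,
                                          context_attribs);
        if (ctx == EGL_NO_CONTEXT) {
            fprintf(stderr, "context creation failed\n");
            return 1;
        }

        /* A real application would now create a window-system surface
         * and call eglMakeCurrent() before issuing GLES commands. */
        eglDestroyContext(dpy, ctx);
        eglTerminate(dpy);
        return 0;
    }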
Keith put up two pictures showing the organization of graphics on Linux. In the "before" picture, a pile of rendering interfaces can be seen all talking to the X server, which is at the center of the universe. In the "after" scene, instead, the Linux kernel sits in the middle, and window systems like X and Wayland are off in the corner, little more than special applications. When we get to "after," we'll have a much-simplified graphics system offering more flexibility and better performance.
Getting there will require getting a few more things done, naturally. There is still work to be done to fully integrate GL and VDPAU into the system. The input driver problem needs to be solved, as does the question of KMS support for video adapters from vendors other than the "big three." If we get rid of window managers, somebody else has to do that work; Windows and Mac OS push that task into applications, and maybe we should too. But, otherwise, this future is already mostly here. It is possible, for example, to run X as a client of Wayland - or vice versa. The post-X era is beginning.
Index entries for this article
Conference: Linux Plumbers Conference/2010