People sometimes complain about my attempts at being
compatible with GNU C, so I'll share some of the reasons
behind what I do, as well as a few factors that led me
to my current point of view.
Binary and source code compatibility is often a
subject of heavy debate. The question is how to deal with
existing software: Is it worthwhile to go out of our way
to support - or to "be compatible with" - crude existing
software, or should we instead only bless software that
was, to our minds, sensibly written?
As you can see, the introductory paragraph already implies
that the question of compatibility only arises
when we are confronted with code that is bad, arcane,
kludged, or wrong. However, that is not always the case.
Here are some properties of code that can hinder compatibility:
- The code is unambiguously buggy. It relies on
compiler, library, and operating system semantics that were
never correct, but used to work the way they did by chance
or by accident
- The code is old. It uses compiler, library and
OS features that have been succeeded by more modern and
(hopefully) superior replacements
- The code was written for a particular older
processor, or is only available in binary form
- The code was written for a completely different
environment. For example, it may be tied to the mindset
of a Windows, VMS, or Unix system
- The code was written for an environment which
only differs in some - possibly minor - respects, such as
the programming language dialect or system library
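To make the "old code" category above concrete, here is a small
sketch of my own (not taken from any particular application):
BSD-era sources commonly used bzero(), which POSIX has since
marked legacy in favor of the ISO C memset().

```c
#include <string.h>
#include <strings.h>   /* legacy home of bzero() on most Unix systems */

/* Old style: bzero() was widespread in BSD-derived code, but
 * POSIX marked it legacy and later revisions dropped it. */
static void clear_old(char *buf, size_t n)
{
    bzero(buf, n);
}

/* Modern replacement: memset() from ISO C. */
static void clear_new(char *buf, size_t n)
{
    memset(buf, 0, n);
}
```

Both functions do the same thing; the point is that a system which
wants to run such old code must keep shipping the superseded
interface alongside its modern replacement.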
Genuinely buggy code may be worth supporting if there is
no way around it. The overwhelming majority of Windows
applications are distributed in binary-only form. You
cannot port, fix, and recompile such an application, so
most new Windows versions try to support as many such
existing apps as possible.
Old code has in many cases been replaced with new
versions, or simply does not fit into today's computing
environment anymore. That kind of code can be ignored.
However, many organizations depend on outdated software
infrastructures that cannot realistically be replaced now,
and must therefore continue to be maintained using
compatible hardware, operating systems, and programming
languages.
Code that was written for a particular processor, or that
is only available in binary form, makes a strong case for
processor vendors to support old binaries even on new
architectures. Chipmakers like Intel, AMD, HP, IBM, and
Sun all do this, as have many others before. Architectures
with little or no compatibility - such as the Itanium -
tend not to be received well by the market.
Code for a completely different environment often isn't
feasible to support fully. However,
emulation and compatibility layers - usually added
on top of the operating system - may still be sufficient
to run a considerable number of such applications. For
example, Linux provides some Windows compatibility using Wine and
ndiswrapper, and Windows provides some Linux/Unix
compatibility using Cygwin and Interix.
Code for a slightly different environment is where a
handful of changes and additions to the target environment -
which may require less than a few hundred or thousand lines of
code to implement - can get us in business without
an insane amount of work. This is the type of code that
mostly affects questions about nwcc's compatibility. I'll
describe the issues at length later on.
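To illustrate how small such additions can be - this is a hedged
sketch of my own, not nwcc's actual code - a compiler or its
support headers can map a couple of common GNU C constructs to
portable equivalents in just a few lines:

```c
/* gnu_compat.h (hypothetical name): a tiny shim that lets some
 * code written for GNU C build on a compiler lacking these
 * extensions. */
#ifndef __GNUC__
/* __builtin_expect() is only a branch-prediction hint, so
 * dropping the hint while keeping the value is semantically safe */
#define __builtin_expect(expr, expected) (expr)
/* __attribute__() carries optimization and diagnostic metadata,
 * not program semantics, so it can be defined away entirely */
#define __attribute__(spec)
#endif
```

With such a shim, a declaration like
`void die(const char *msg) __attribute__((noreturn));` or a test like
`if (__builtin_expect(ptr == NULL, 0))` compiles unchanged on both
kinds of compiler.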
But before I go into details, I'll say a few words
about two different mindsets: implementor and user.
Implementor vs. user
Plain users of development tools are most likely to
fix source code problems by changing the offending
code itself. If you confront them with an application
that uses a nonportable (e.g. C99 or GNU C) language
feature, they'll say
Fix the app! It's broken!
There is some truth to this statement, and it may
well be the correct approach for a few individual
applications under one's own control.
For third party applications, they may propose to
"fix" it and send a patch to the original
author. This approach is unlikely to yield a lot of
success in general, but let's not get into that.
As implementors, we are not usually concerned
with the particular application that was first
found to use such a nonportable feature. Even if we
are able to fix a particular application easily, and
to convince its authors to accept our proposal not
to use those features, it still isn't feasible to
rely on this approach, because:
- It will probably be someone else who finds
that the app doesn't work with our tools
- For any application we "fix" to conform to
our point of view, there will be 100 or 1000 more
that also have to be fixed
In short, implementors are more concerned with fixing
incompatibilities in a general way, so that their stuff
works as well as possible with unknown code "in the
wild". Think of it as being analogous to treating an
injury so that it heals instead of merely suppressing
the pain with medicine: patch 100 apps which use a
particular feature to not use it, or implement the
feature in your compiler to solve the problem once
and for all.
To me, it is obvious that I cannot change the entire
world of C software around me. Instead, I recognize
that there is a huge number of applications which
range from "good" through "normal" and
"slightly difficult" to "nasty", and that
I, like everyone else, have my own definition of
what these terms mean.
Naturally I try to get along with as many of these
as reasonably possible, because responding to every
single complication with "this code sucks, fix the
code" will not get us anywhere. It's a non-starter and
a cop-out. It does not advance our technology, nor
does it help the users.
We have to find a middle way between purity and chaos, as
far as compatibility with established compilers, libraries
and operating systems is concerned.
nwcc and its environment
I'll probably write some stuff about language
dialects, open-source systems and ABI compatibility between compilers here,
but not now!