Are language minutiae necessary?

Why clog a programming language spec with minutiae that don't offer any technical benefits? On the other hand, is an "anti-minutiae revolution" really necessary?


I am slowly but surely working my way through learning Ruby in depth. The book that I am reading (The Ruby Programming Language) does a very good job of describing the various nuances of the language.
Along the way, I noticed that Ruby is filled with minutiae: those little parts of a programming language that few people use on a regular basis, and that many developers can go years without seeing. Ruby is hardly alone in this; in fact, I think its share of these details is relatively low (the king is probably Perl). But I am curious whether it would be beneficial to streamline languages by removing these items and simply having the developer write more verbose code on occasion.

My favorite example of these types of minutiae is the ternary operator. Many developers are not familiar with it (or maybe have never heard it called that), so let's take a look at it.
In most C-style languages, there are unary operators (such as ++) that take only one operand: the variable they operate on. Most operators are binary: they require two operands to do anything useful (+ and -, to name two). Because most languages have only one operator that takes three operands, it is known as the ternary operator, although in theory there could be more. It is a conditional expression: if the first operand evaluates to true, the second operand is evaluated; otherwise, the third operand is evaluated.
Typically, it looks like this:
(condition) ? true_expression : false_expression;
In a nutshell, the ternary operator is a one-line if-then-else construct. I am not a big fan of it, for two reasons. First, every time I use it, I find myself returning to the line a few minutes later and turning it into an if-then-else construct so that I can add additional statements to one of the blocks. Second, I feel that it reduces readability; some could argue that by cutting needless clutter, it increases readability. That is purely a matter of personal opinion and cannot be measured.
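That conversion, for what it's worth, is trivial in Ruby, because if/else is itself an expression and can sit on the right-hand side of an assignment. A minimal sketch (the variable names and values here are my own, purely for illustration):

count = 15
limit = 10

# Ternary form: compact, but awkward to extend with more statements.
label = count > limit ? "over" : "under"

# Equivalent if-then-else form; in Ruby the whole construct is an
# expression, so the assignment works the same way.
label = if count > limit
          "over"
        else
          "under"
        end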
Ruby has a lot of methods that are aliased to different names. In some ways, this makes sense: maybe the method foobar does the same thing as fizzbuzz, but foobar is a better description of the operation in some circumstances. That has a certain logic to it; at the same time, it depends heavily on appropriately chosen names.
In many cases, I would rather see an expression that looks out of place but is functionally correct than have to remember that the same operation goes by multiple names, with the right one depending on context. Again, this is purely a matter of personal preference; please do not think that I am knocking Ruby for taking this approach. If you don't like the aliases, you never need to learn them unless you are reading someone else's code.
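A few pairs from Ruby's core library show what this looks like in practice; map/collect, length/size, and inject/reduce are each two names for the exact same method:

# map and collect are aliases: identical behavior, different names.
[1, 2, 3].map     { |n| n * 2 }   # => [2, 4, 6]
[1, 2, 3].collect { |n| n * 2 }   # => [2, 4, 6]

# length and size, likewise.
[1, 2, 3].length                  # => 3
[1, 2, 3].size                    # => 3

# inject and reduce, likewise.
[1, 2, 3].inject(:+)              # => 6
[1, 2, 3].reduce(:+)              # => 6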
All of that being said, is it necessary to clog the language spec and add one more item to the "features to learn" list, particularly for something that does not offer any technical benefits? The ternary operator is hardly the only example of this, and I don't mean to turn this into a debate regarding its merits. But this is exactly the point: Most languages are filled with these kinds of items. Many are leftovers from their philosophical roots in C or some other language that was created in another day and age. Others are just cruft that developed along the way.
These minutiae are rarely removed from a language once they are in there: doing so would break backward compatibility for just enough people to cause complaints, and a few folks would insist that these items make them more efficient programmers in one way or another.
Do these minutiae harm a language? Not usually. Since you rarely see anyone using them, they are not a terribly big deal. In addition, they rarely perform major tasks, so it is not as if you are missing out on important functionality by ignoring them.
And yet, I find it very distracting, when learning a language, to get tied up in these things. Maybe it is a documentation issue: the minutiae are mixed in with the really important material, and it is difficult to tell what you need to know from what is merely nice to know.
My personal take is that, if I were designing a new language from the ground up, I would not include the minutiae. I just don't think these items add much value, and they can be a distraction. For existing languages, though, I do not think we need any kind of "anti-minutiae revolution" or anything along those lines; a slow phasing out of these items may be in order.
Justin James is an employee of Levit & James, Inc. in a multi-disciplinary role that combines programming, network management, and systems administration. He has been blogging at TechRepublic since 2005.