Walter: Walter Bright (via Skype)
Andreas: Andreas Zwinkau (table host)
Other: other participants
Andreas: Thank you for joining us.
Walter: Thanks for inviting me.
Andreas: As far as I know, you are the only person who can claim to have written a C++ compiler all by yourself.
Walter: Yes, I think I am the only one. Everyone else has collaborators or they use somebody else's code generator or something like that.
Andreas: Ok. The last topic we were talking about was the sanitizers that the LLVM people, or Google, built: the address sanitizer, the thread sanitizer, UBSan [UndefinedBehaviorSanitizer] -- are you familiar with them?
Walter: I have never looked at them. I have used PVS-Studio to check my C++ code, and of course I use clang's warnings, but I haven't actually used their sanitizers.
Andreas: Well, we had some pretty big fans here, because the sanitizers scale much better to large code bases than what was previously available commercially.
Walter: Yeah. Are you talking about Coverity?
Andreas: I guess so, but those fans are at another table now.
Walter: I told the guy from Coverity that my job as designer of the D programming language was to put Coverity out of business. So what I do is look at the kinds of problems that people have in C++, and C, and other programming languages, and I try to design the language so those problems are impossible, by changing the language semantics. Things like the Joint Strike Fighter C++ Coding Standards: that was a very interesting document to read about things to avoid, things known to cause problems. So I try to simply design those things out of the language.
(At 3:00)
Andreas: Another topic we were talking about was performance optimization. And one thing we pretty much agreed on was: Even if you have a high level language, you still have to benchmark very precisely and you still have to look at the assembly. Do you think that a good compiler could change that?
Walter: No. I actually wrote an article about that several years ago: there are A programmers, and B programmers, and C programmers. The C programmers really haven't mastered the language yet and are still working on it. The B programmers have mastered the language, but the A programmers, the people who are able to get the most out of their compiler and generate the fastest code, are people who look at the assembly coming out of their compiler. And although many may not actually write any assembly code, they are very familiar with what constructs generate what kinds of code, and so they are able to select the constructs that work best. For example, I knew a guy who figured out a faster way to convert integers to strings. He was very excited about this technique and he even wrote an article about it, with benchmarks and all, proving that his technique was faster. Well, if you disassembled his code, it turned out that what he had found was a way of writing the code that caused the compiler to do one instead of two divide instructions. Normally, if you convert integers to strings, you have two divide operations: you do a divide by 10 and then you do a modulo by 10 to get the remainder. And you repeatedly divide and modulo. Well, it turns out that most compilers can recognize a divide and a modulo with the same operands, and will only do one divide instruction, because the divide instruction in the CPU gives you both the remainder and the quotient. They will do this optimization for you. His rewriting trick inadvertently missed that optimization in the first version and tripped it in his optimized version. It really had nothing to do with his algorithm whatsoever; he simply happened to write it in a form the compiler recognized, so it generated only one divide instruction for him. And this would have been immediately obvious if he had looked at the assembly code. But he had never looked at the assembly code, so he wrote an entire article about it. I have seen this happen many times: if you are not looking at the assembler code, you are just missing expensive operations that you don't realize you are doing. And you don't really develop an intuition about how expensive certain constructs are and how efficient other constructs are without looking at the assembler. People are often shocked when they do look at the assembler, and are like: "Oh, I didn't know this was happening". You will never be one of those A programmers if you are unwilling to look at the assembler output in the bottlenecks of your code. I have just seen that over and over again.
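(A minimal sketch, in C++, of the kind of conversion loop being discussed. The function name and details are illustrative, not the code from the article Walter mentions; the point is that the `/` and `%` pair below can be folded into a single divide instruction by the compiler, which you only notice by reading the generated assembly.)

```cpp
#include <string>

// Convert an unsigned integer to a decimal string, one digit per iteration.
// Both n % 10 (the digit) and n / 10 (the remaining value) use the same
// operands, so many compilers emit a single divide instruction for the pair,
// since the CPU's divide produces the quotient and the remainder together.
std::string to_decimal(unsigned n) {
    std::string s;
    do {
        s.insert(s.begin(), static_cast<char>('0' + n % 10)); // remainder -> digit
        n /= 10;                                              // quotient -> rest
    } while (n != 0);
    return s;
}
```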
Other: So, do you think that it would be a good idea to support programmers by showing them the assembler code that their code generates? Or to help them understand what their code is generally doing?
Walter: That's a good question. Well, we do ship a disassembler. And the assembly is the ultimate truth teller. And if you set a breakpoint at a function, most debuggers have a command to show you the source code and assembly mixed.
Other: Yes.
Walter: Those are extremely useful. So, it's not like it's difficult to find the assembler code for a function; it's just that there is this peculiar resistance people have to looking at it. For example, I had a friend years ago, and she taught remedial algebra at the local university, because a lot of students would come into the university with an insufficient math background to take the normal university classes. She said she would write things on the board like "x + 3 = 5" and ask the students to solve for x. And they couldn't do it. They would see "x", they would think "algebra" and "I don't understand algebra", and then they would freeze up. She found that if she instead wrote "_ + 3 = 5" and asked what goes into the underline, everyone goes "2"! So it wasn't that they did not know algebra; it was that the word "algebra" and having something called "x" would trigger this "I don't get it" response, and they couldn't get past it. Assembler language is actually really simple. I mean, how simple can you get -- take two registers and add them. But a lot of programmers seem to think that it's this really difficult thing, and so they kind of avoid looking at it and will try all kinds of things to avoid dealing with assembler. It just winds up hurting them in the long run. But I am kind of a competitive programmer; I like the things I write to be faster than anybody else's, and being willing to look at the assembler and use a profiler gives me a consistent edge. I am consistently able to write faster code, and I think that's why: I use a profiler, I look at the assembler, and I adjust the code until the produced assembly is the result I want.
Andreas: But we had one guy who said that the tooling is not perfect yet. He knew which function to optimize and he repeatedly wanted to look at its assembly, but to use the debugger, you have to run the program, which is overcomplicated when you just want to compile the code and look at the assembly. And to use objdump, or some other disassembler, you still need to find the function, and it's usually not easy to guess the mangled name if it is a template function or something. So there might be a little bit of tooling which could improve this; ideally it could be integrated into the IDE so you get the assembly with one click.
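(A small, hypothetical example of how one might inspect the generated code for a single function today, assuming a gcc/clang-style toolchain; the file and function names are made up for illustration.)

```cpp
// sum.cpp -- a toy function whose generated code we want to inspect.
int sum_of_squares(int n) {
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i * i;
    return total;
}

// Two common ways to see the assembly without running a debugger:
//   g++ -O2 -S sum.cpp                           # writes the assembly for this file to sum.s
//   g++ -O2 -c sum.cpp && objdump -d -C sum.o    # disassembles the object file;
//                                                # -C demangles C++ names, which helps
//                                                # with template functions
```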
Walter: That sounds actually pretty cool. Of course I am kind of an old fashioned programmer and I still use an Emacs clone for my editing, so I can't really do these advanced things.
(At 11:25)
Andreas: We had another discussion. It was cryptography related, where they often have this constant-time requirement: they want the compiler to produce a function that always takes the same time to compute, so you can't do a timing attack on it. As far as I know, there is no compiler that would support this, via annotation or something.
Walter: I think that is definitely a situation where you want to look at the assembler, because the compiler can generate branches even for things you might not expect. For example, if you are comparing longs and they do not fit into one register but need two, the compiler has to generate compare instructions with a conditional branch in between them. It isn't obvious from looking at the code that you are going to be generating branches, so if you want branch-free code, you are definitely going to have to look at the assembler.
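(A minimal sketch of the constant-time idea under discussion, in C++. This is an assumption-laden illustration, not a vetted cryptographic routine: a naive comparison that returns at the first mismatch leaks, through timing, how many leading bytes matched, so this version always walks the whole buffer and only combines differences with a bitwise OR. Even so, as Walter says, you would still check the generated assembly, since the compiler is free to transform the code.)

```cpp
#include <cstddef>
#include <cstdint>

// Compare two equal-length buffers without a data-dependent branch or early
// exit: every byte is visited, and mismatches accumulate into 'diff'.
bool equal_constant_time(const std::uint8_t* a, const std::uint8_t* b, std::size_t n) {
    std::uint8_t diff = 0;
    for (std::size_t i = 0; i < n; ++i)
        diff |= static_cast<std::uint8_t>(a[i] ^ b[i]);
    return diff == 0;
}
```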
Andreas: Yes. But I mean the compiler could support it: if you annotate a function as constant time and the programmer writes an "if" in it, then the code generator could try really hard not to emit any compares or branches inside the function.
Walter: That's right, it could, and that's an idea I have never thought of doing. That is kind of a cool idea.
(At 13:10)
Other: Why do we have so many different compilers? Wouldn't it be better if, instead of Microsoft and clang and gcc, everybody worked on the same compiler and made a really, really good one? Is there any chance of that?
Walter: Well, I agree totally with you and I think everyone should get behind the Digital Mars C++ compiler. But on a serious note, even with free compilers, if there is only one, then it stagnates. Like I've said, I am a competitive programmer, and the existence of competition spurs me to make my products better; and when I make something better, then the other competitors are spurred to make their products better, too. If there were only one, complacency would set in, like: "Oh, that's the way it is. Even if somebody submits patches to improve it, there is no real urgency to get them in, because what are people going to do? They have to use it anyway."
Other: So do you think it is an advantage that we have so many different compilers?
Walter: Absolutely. And also, even setting the competition aspect aside, there is the language standard, but the style of a compiler, what it does and how it works, is not specified by the standard and can be rather personal. Like, this compiler is more suitable for this person's style. It's just like why there are so many different sedans on the road. You can get into any of them and drive from A to B, but the color is different, whether one has a cupholder, how quiet it is inside, whether the doors sound nice when you close them or whether they sound kind of raggedy. They have different price points. I think that people don't really all want the same things. People are different, their styles are different, what they like is different. None of that is covered by the standard. Like compile speed: that is not covered by the standard. Quality of the code generator: that is not covered by the standard; or which variants of the CPU are supported; or the quality of error messages. All those things vary widely from compiler to compiler.
Other: So would it maybe be a good idea to use multiple compilers at the same time, and compare their results?
Walter: You know, I know people who do that, and they have had good success with it. I often compile my code with a different compiler and it actually finds bugs that I didn't know existed, because it compiles it differently. Both compilers are standard conforming, but a different compiler exposes different problems. It also helps to make your code more portable if you do that. The warnings emitted by clang and gcc and Digital Mars are all different. Sometimes the warnings are invalid on one compiler, but oftentimes they'll warn about something, and you look at it and go: "You know, I really should write that in a nicer way. Even though it is correct, it is not that nice." Like clang's analysis and warnings: oftentimes you are like "yeah, even though the code is correct, that is really kind of a nasty piece of work there and I should fix it". Or if you have a memory corruption problem: compiling something in a different way can expose the memory corruption in a way that was hidden with another compiler. So yes, definitely use multiple compilers. Your code will be more robust if you get it working successfully with multiple compilers.
Other: Yes, I have noticed that too. I have multiple projects now and I have to run them on both the Visual Studio compiler and the MinGW version of gcc. And I was astonished at how many different things can go wrong even though gcc was happy with my code.
Walter: Yes. It also protects people against extensionalism. I have seen that a lot when people only use one compiler; they tend to think that the extensions on that compiler and the way it works are the standard, and they are not. Their code becomes very difficult to port to another compiler because of that. That happened a lot in the old days, in the DOS world, where a lot of compilers invented their own extensions to handle the weird memory models on the DOS boxes, and so it became pretty hard to convert the code to another compiler or move it to another system. And this was all kind of unnecessary. It's just that programmers didn't understand what was standard and what was not.
(At 24:50)
Walter: So what are you guys working on? You were doing all the questions, now it's my turn.
Other: Currently, I am developing a library for virtual reality, to track devices in virtual reality, basically using Leap. That is part of my student job -- I am still a student here at KIT.
Walter: And which compilers are you using?
Other: For that, actually just the Visual Studio compiler on Windows, because the OpenVR library we are using only supports the Visual Studio compiler and only runs under Windows. It is a bit of a hassle. I'd like to use it under Linux as well, and maybe try gcc, but it doesn't run there. So I don't have a choice.
Walter: Oh, ok.
Other: Yes, that's a bit sad.
Walter: Are you satisfied with the quality of the compiler and tools that Visual Studio gives you?
Other: No, not really. I really hate it.
Walter: Really? Most people say they really like Visual Studio.
Other: Well, the IDE is great, but I don't like the compiler, so that's a problem. I am starting to use CLion, which uses gcc on my Linux machine, I think, and I have MinGW with the gcc compiler on Windows -- that is basically what I am running. That is better in some ways; I feel like the error messages, the general handling of what my code does, and debugging are in some ways better. That is why I asked about multiple compilers: I recently started doing it, and I actually like using more than one.
Walter: I think the D and DMD compilers are currently compiled with three different compilers, and that definitely helps improve the quality of the code base. No, actually with four different compilers: Digital Mars C++, Visual Studio C++, g++, and clang++. Between the four of them, I think it is pretty portable code. So I am satisfied with that. So you are looking at each other -- you can jump right in!
Other: We also use a lot of different compilers. We make a distributed system for signal analysis: someone (a military or some other organization) is sending messages over the air interface, and we try to receive and decode those messages. That is the problem domain. We make software for Windows and Linux, and we use Microsoft Visual Studio; in the current version we also use the gcc and clang compilers on Linux, and for Windows also gcc with MinGW. So a lot of compilers, a lot of different jobs, a lot of different virtual machines with different compilers and different operating systems, and a lot of overhead because of that. That's why I asked earlier whether it wouldn't be so much easier for everybody if we just had one compiler and one version of it. But that is the current state. It is true, though, that the Microsoft compiler compiles a lot more software with fewer warnings; and it is almost always true that a warning from gcc or another compiler points to a real problem, one that would stay undetected if we only used the Microsoft compiler.
Walter: Yes. Well, all that work they put you through is why you get paid the big bucks. They can't allow just anybody to do that kind of work; it can't be too easy to use.
Andreas: Well, I don't actually use C++, I am fortunate enough to use D in my free time.
Walter: Oh, cool. So you are ahead of the group here.
Andreas: I am using C++ 2007 or something.
Walter: I always find it interesting how many D features have crept into the C++ standard.
Andreas: Yes. We actually had one thing here about asserts: that they don't print the values that go into them. And I remember from the D forum that there is actually a library which does it now.
Walter: Yes, it does pretty much the same thing that assert does in C.
Andreas: But the thing is, if an assert fails, you not only want to see the message that you gave it, but also see the full expression and what values went into it, without running the debugger first.
Walter: Yeah, that's a commonly asked-for thing. I tend to be a bit more like: well, the debugger works fine. That isn't a terribly popular view. Most people want the assert to give as much diagnostic information as possible, without needing to do any extra work to get that information. That's actually something I have been working on right now, because we found in one project that the asserts were so expensive that people were just turning them off to do the build, and to build a release version. And then in practice there were these weird bugs that showed up in the field that should have been caught by asserts, but weren't, because the asserts were turned off. So I am currently looking at a way to make the asserts a lot less expensive; because ideally the asserts would have zero cost at runtime, to encourage people to use them as much as reasonable, as opposed to being parsimonious about how one uses asserts -- which kind of defeats the point of having them. So there is this tug of war: how much information does the assert put out vs. how costly is the assert to have in your code.
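(A minimal sketch, in C++, of the kind of value-reporting assert being discussed. The macro name and formatting are made up for illustration and assume the operands can be printed as long integers; this is not the D library Andreas mentions, nor Walter's zero-cost design.)

```cpp
#include <cstdio>
#include <cstdlib>

// Like assert, but also prints the two operands when the check fails,
// so you see the actual values without reaching for the debugger.
#define ASSERT_EQ(a, b)                                                              \
    do {                                                                             \
        if (!((a) == (b))) {                                                         \
            std::fprintf(stderr, "%s:%d: assertion failed: %s == %s (%ld vs %ld)\n", \
                         __FILE__, __LINE__, #a, #b, (long)(a), (long)(b));          \
            std::abort();                                                            \
        }                                                                            \
    } while (0)

int main() {
    int expected = 4, actual = 5;
    ASSERT_EQ(expected, actual);   // aborts, printing the expression and the values (4 vs 5)
}
```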
(At 26:55)
Andreas: One interesting discussion was about the difference between compilation and linking. If you get a linker error, which is just some "undefined reference to foo", you have the problem that it is not very helpful, because you don't know where the problem came from -- or anything else. I mean, the linker doesn't have the information, because it doesn't know much about the language going into it, and the compiler doesn't know about it either, because, well, "linking is not my job, what do I care". Do you have any ideas how this could be improved?
Walter: Well, the linker usually will say which module it was undefined in. So at least you have a starting point.
Andreas: It will give you the name of the thing, yeah.
Walter: It will give you the name of the module, or the name of the object file where it was being referenced from.
Andreas: Yeah.
Walter: That is usually a big help. What I usually do is simply take the name of the reference and grep for it. It is kind of an old-fashioned way of doing things, but it does work. But ideally, and I have thought this many times, the linker should be part of the compiler. As part of the compiler, it would have access to where the symbol is being referred from, and then diagnostics would be easy. It would also compile a lot faster if the linker were part of the compiler. So yeah, I have definitely thought about that, but it's kind of a daunting problem. Not that the idea of linking is complicated -- it's not. The whole reason that linkers exist is that people were trying to compile on machines without enough memory: you couldn't compile the whole program at once because there wasn't enough memory to do it. Well, you can these days, so there is a definite possibility that the compiler could go straight from source code to putting out an executable, with no intermediate linking step. And that sure would be fun to do. The main difficulty is: although linking is conceptually a simple process, it has been larded up with all kinds of other things, and special cases, and undocumented things. Trying to build that into the compiler is a major project at this point; that's why it isn't done. And also, when you do that, you run the risk of people adding more extensions to the compiler. And all of a sudden, you can't build your project any more because it doesn't have feature x that was added by some compiler and that some other piece of code depends on. I have actually worked on linkers in the bad old DOS days, and there is a lot of lore in the linker about how things work that isn't in any of the specifications or any of the documents. And you can't successfully build a linker, or link, without all that arcane lore in there. So it's a much harder problem than it should be, unfortunately. But it is definitely a cool idea; it would make your compile-and-link step much faster, and it would also get you a lot better diagnostics. So it's a big win to do that. Nobody's done it, though. Unless maybe -- I am not sure -- the early versions of Turbo Pascal actually did that, and that was the reason why they were so fast.
Andreas: You can turn on link-time optimization with gcc and such, but I don't know how much that improves the situation.
Walter: Well, you don't need the link-time optimization switch if the compiler is doing the linking. Link-time optimization is essentially inter-module optimization, done at link time because the compiler doesn't have the information; it's the same kind of thing the compiler already does within a module with its intra-module optimizations. So things like inlining functions across modules would be a link-time optimization, simply because the compiler doesn't know anything about the other modules. But if you give all the modules at once to the compiler, it will know about them and you don't need link-time optimizations any more. Also, the D compiler has an option where, unlike C, you can throw all the modules at once onto the command line, and the compiler will generate one giant object file with all of those modules compiled into it. Which is actually pretty cool, because a lot of the link-time optimizations are already done. It also makes for much faster compile and link times because, instead of connecting multiple objects, it essentially goes through what's known as a pre-link process: a lot of the linker's work is already done if you present it with one big object file into which everything you put on the command line has already been inserted.
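(A tiny illustration of the cross-module inlining point, in C++, with made-up file names. Built as two separate translation units, the compiler that sees only main.cpp cannot inline add(); letting the optimizer see both units at once, whether via link-time optimization or by handing the compiler all the sources in one invocation, is what makes that inlining possible.)

```cpp
// util.cpp -- compiled on its own, this definition is invisible to other
// translation units at compile time.
int add(int a, int b) { return a + b; }

// main.cpp -- only sees a declaration of add(), so a traditional separate
// compile emits a real call here; inlining it across the module boundary
// requires the optimizer to see both files at once.
int add(int a, int b);
int main() { return add(1, 2); }
```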
(At 33:05)
Other: So, talking about error messages and combining compilers and linkers: I often find that the error messages generated by my compiler are somewhat cryptic and have way too much information for what I really need. It's often not that helpful; it's just pattern matching: if I get this specific error message, it means I have something wrong in this part of my code and I need to do steps x and y to solve it; and if it is the other error message, I need to do maybe step z. Do you think there could be an improvement in generating error messages, maybe making compilers more intelligent or just sorting the information, providing better information for the coder afterwards?
Walter: You know, absolutely! Error messages get better and better and better over time. If you ran a compiler from 30 years ago, I think you would be appalled by the error messages. But it is like cars -- have you ever driven a car built in the 1960s? Oh, you guys probably haven't. You know, if you drive a car today and then drive a car from the '60s, you realize how much better they have gotten, how much better they drive; just everything is better about them. Year by year, as new car models come out, you don't really notice anything particularly better. But over time, the difference is dramatic. It is the same with compiler error messages: they are getting better and better. Stuff that used to be perfectly fine 10 or 20 years ago is not acceptable today in terms of the quality of the error message. That's one thing. Another thing is that error messages are kind of an art form; they are not really engineering. When you are originally writing the code, you diagnose an error and print a message. You think that's perfectly fine, because you understand exactly why that error occurred -- you wrote the code that detects it. It's only a while later, when you have sort of forgotten why you wrote that error message or where it came from, that you are compiling a piece of code and that error message comes out, and you are looking at it and going: "That doesn't make any sense. I don't really know what is actually wrong here". And then you go back and look at the code and you go "Oh!". Then you rewrite the error message so that it is clearer. So it's an art form and an iterative process, and I don't really know of any organized way to improve that. Aside from a few things: one of the innovations that clang had (I think they were the first to do it) is that they put a spell checker in the error messages. If you had an undefined identifier, it would go looking around and see if there was a similar identifier that is defined, and then it would put out a message saying: did you mean this one? That actually was a nice improvement, a general improvement, and all compilers have since adopted it.
Andreas: It is obvious in hindsight.
Walter: In hindsight, you are thinking: "Why didn't I think of that? Idiot." Another thing that's been a large improvement is colorizing the error messages: the text of the message is in one color and the symbols and pieces of code are shown in a different color. That's turned out to make the error messages much more readable. Especially if you have keywords that look like normal English words. They can make the sentences in the error message rather confusing. But if you colorize code differently from the regular text, then it becomes much more legible. So, there's that. There's an innovation I did in the '80s: you would actually print out the offending line and draw a little arrow under where it went wrong. That was kind of before its time, so it disappeared for years because nobody cared about it. And then clang started doing it and revived the whole thing. And now everybody is doing it: print out the offending line and put a caret under what went wrong. Oftentimes, that's worth more than the actual error message. If you combine the error message, the colorized syntax, printing out the line where it went wrong, it's actually gotten a lot better -- the combination of all those. One of them by itself is really inadequate.
(At 38:50)
Walter: One of the classic problems with error messages is with templates: when you have a deeply nested template expansion and it fails, famously, you can get pages and pages of the weirdest thing you have ever seen. That's gradually getting better. Like the D compiler: it will actually print out the stack of templates that's being instantiated, all the way back up to the guy who originally called the template in the first place. And that's very helpful.
Other: So it's constantly a work in progress, finding a good error message or providing good error messages in general?
Walter: The process? Yes, it's a work in progress. And it also helps when people post bug reports saying: here is my code and here is the ridiculous error message that came out for it. Those things are actually helpful. Since it is an art form and an iterative process, it helps us a lot when people submit the error message they found unhelpful together with the context that generated it. I know it's fun to come onto newsgroups and grumble about error messages, but that's not helpful. What is helpful is: here is my piece of code and here is its ridiculous error message, can you guys do better? Those are very helpful. Yeah, in this case I can do better, and I agree, that is a stupid, useless error message. And that applies to any other compiler you use: if an error message is unhelpful, let the developer know about it; they love that kind of feedback.
(At 41:15)
Andreas: Ok. So I think it's time to wrap up. Any last comment from you?
Walter: Well, you are using D, Andreas. So are you guys more interested in using D, or mostly just interested in sticking with C++?
Other: I have never used D. What's the main purpose of the language?
Walter: Well, it is hard to tell what the main purpose of the language is by reading the spec. The main purpose is your code just goes together faster and has fewer bugs in it. That's kind of the bottom line: You get the same result, but with less time developing the code and debugging it. And over time, that adds up, and that matters.
Andreas: Yes. I would say D is a more modern language, like Rust, and Go, and others, but if you are working with a C++ code base, then D is the best choice to gradually migrate to a better language, because it has the best C++ interoperability among the modern languages.
Walter: Yes. And we are certainly trying to improve that interoperability, to make it easier and easier to have a mixed D and C++ program, but that's a work in progress because C++ is a very complex language and interfacing to it is complicated. You know, having written a C++ compiler myself, I am sort of in an ideal situation to figure out ways to be more compatible with C++. So that's one instance where my experience pays off. Well, I guess your beer is finished, so... I guess that means the meeting is over.
Andreas: Ok. Thank you for joining us. It was an honor to have you here.
Walter: It was my pleasure and an honor to talk to you guys.
Other: Thank you.
Andreas: Thank you, good bye.