
Still worth it to learn C?

  • Thread starter: Zeuge
Especially the support for Higher Order Functions is crucial.
I don't understand the emphasis here for the case of F#. Don't all functional languages (q, Lisp, Haskell, OCaml to name a few) support Higher Order Functions?

I guess it's the only Microsoft-supported functional .NET language...
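
For anyone new to the term: a higher-order function is just one that takes or returns another function. A minimal C++ sketch of the idea (illustrative only; nothing F#-specific about it):

    #include <functional>
    #include <iostream>

    // Higher-order function: takes a function f and returns a new
    // function that applies f twice.
    std::function<double(double)> twice(std::function<double(double)> f) {
        return [f](double x) { return f(f(x)); };
    }

    int main() {
        auto addOne = [](double x) { return x + 1.0; };
        auto addTwo = twice(addOne);       // build a function from a function
        std::cout << addTwo(3.0) << '\n';  // prints 5
    }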
 
... stuff about D language...
I like this line of thinking: a better C. Still low-level and fast, but without the C baggage.

One poster, who doesn't know much about compiler optimization and code performance, suggested that C should be used for performance reasons. What this poster doesn't realize is that C is very difficult to optimize because its arrays decay to pointers, and pointers that may alias block many optimizations. The D language was designed by one of the best compiler designers I know of (Walter Bright). D can be optimized much more efficiently than C or C++. So if you really want code that is as fast as possible, D is, again, a better choice.
I don't think most people think about compiler optimizations when they think about C. More likely they are thinking about hand optimization.

I'm curious - what's your take on the "sufficiently smart compiler" meme? Myth?
 
Hopefully, after the debacle at Knight Capital, people will start to think about how important software quality is. You can design more reliable software in Java or in D.

Maybe, maybe not...
Software quality has less to do with a compiler and more with developer and organisational quality.

No one uses D, yes?
 
I don't understand the emphasis here for the case of F#. Don't all functional languages (q, Lisp, Haskell, OCaml to name a few) support Higher Order Functions?

I guess it's the only Microsoft-supported functional .NET language...

It seems that Haskell has influenced F# (Erik Meijer).
 
I'm curious - what's your take on the "sufficiently smart compiler" meme? Myth?

I spent many years working on compiler design and implementation. Modern compilers, given the right language, can do a much better job than "hand optimization". One reason for this is that compilers can optimize across functions; they can unroll loops and do branch optimization. Modern processors support parallelism, and the compiler can take advantage of this in a way that a hand coder cannot.

The problem with C and C++ is that the pointers obscure many of the possible optimizations. Java is a better language for optimization, as is D.
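
As a concrete sketch of the kind of loop this is about (my example, not the poster's): linear access, read-only inputs, no hidden control flow. With -O3 a modern compiler will unroll and vectorize this on its own (for floating-point reductions GCC also wants -ffast-math, since reassociation changes rounding):

    #include <cstddef>

    // The trip count is a simple induction variable and both inputs are
    // only read, so the compiler is free to unroll this loop and use
    // SIMD instructions -- the processor parallelism mentioned above.
    double dot(const double* a, const double* b, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }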
 
Maybe, maybe not...
Software quality has less to do with a compiler and more with developer and organizational quality.

No one uses D, yes?

Software quality has a lot to do with memory usage and whether memory errors can be avoided. Large C++ programs tend to have memory errors, even when the only data structures used are objects. Allocating memory by hand is error-prone.

The fact that D is not widely used doesn't mean that it should not be used. The syntax of D is closer to C++ than some languages, it compiles to native code, and it can interface to C libraries (e.g., any POSIX library or the Windows libraries). The fact is, we should use the best tools that are practical.

Certainly there will always be legacy code and we can't just get rid of C++ by fiat. But if there is an opportunity to develop a new code base, then Java or D are good choices. Neither of these languages has the memory issues of C++.

There are no silver bullets. D will not be the solution to bad software design. However, I have years of experience developing object-oriented software. Unfortunately I make mistakes. I want tools to catch these mistakes (this is why we have type checking, for example). I can't catch as many mistakes in C++.
 
Software quality has a lot to do with memory usage and whether memory errors can be avoided. Large C++ programs tend to have memory errors, even when the only data structures used are objects. Allocating memory by hand is error-prone.

The fact that D is not widely used doesn't mean that it should not be used. The syntax of D is closer to C++ than some languages, it compiles to native code, and it can interface to C libraries (e.g., any POSIX library or the Windows libraries). The fact is, we should use the best tools that are practical.

Certainly there will always be legacy code and we can't just get rid of C++ by fiat. But if there is an opportunity to develop a new code base, then Java or D are good choices. Neither of these languages has the memory issues of C++.

There are no silver bullets. D will not be the solution to bad software design. However, I have years of experience developing object-oriented software. Unfortunately I make mistakes. I want tools to catch these mistakes (this is why we have type checking, for example). I can't catch as many mistakes in C++.

Software quality is much broader than _just_ memory. I would say that 99% of all embedded systems are written in C/C++. There are tools to find leaks, and smart pointers are in C++11.
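
For concreteness, a minimal sketch of what the C++11 smart pointers buy you (Order is a made-up type):

    #include <memory>
    #include <utility>

    struct Order { double price; int qty; };

    int main() {
        // C++11: the Order is deleted automatically when the owning
        // pointer goes out of scope, even if an exception is thrown.
        std::unique_ptr<Order> order(new Order{101.5, 200});

        // Ownership can be moved, but never silently duplicated.
        std::unique_ptr<Order> other = std::move(order);
    }   // 'other' releases the memory here; nothing to forget.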

D might be great, but there are many great languages. The ISO languages (COBOL, Fortran, C, C++) are the ones that have survived and, it seems, will continue to.


Critical, hard real-time applications tend to be written in C++.
 
Critical, hard real-time applications tend to be written in C++.
Depends... Critical, hard real-time applications (and I'm talking about real time) tend to be written in a subset of C++ that is closer to C than to C++.

Of course, I'm not talking about aeronautics. That's still Ada territory.
 
I spent many years working on compiler design and implementation. Modern compilers, given the right language, can do a much better job than "hand optimization". One reason for this is that compilers can optimize across functions; they can unroll loops and do branch optimization. Modern processors support parallelism, and the compiler can take advantage of this in a way that a hand coder cannot.

The problem with C and C++ is that the pointers obscure many of the possible optimizations. Java is a better language for optimization, as is D.

Of course the daddy for optimisation is FORTRAN. The most valuable thing you can learn in HPC is to unroll loops by hand and structure code so that it gives the compiler the best chance of optimising. Unfortunately, due to pointer aliasing in C the compiler has little chance. However, C and C++ now give you ways to opt into strict aliasing and to tell the compiler that array bounds/pointer memory addresses are non-overlapping. Apparently you can get C to go as fast as FORTRAN in that case, but I've never tried it myself.
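
A minimal sketch of that non-overlap promise. __restrict is a common compiler extension (GCC, Clang, MSVC) mirroring C99's restrict keyword; it is not standard C++:

    #include <cstddef>

    // Without the promise, out[] could overlap in[], so the compiler must
    // either add a runtime overlap check or give up on vectorizing.
    // __restrict asserts the ranges don't overlap, so the loop can be
    // unrolled and vectorized unconditionally.
    void scale(double* __restrict out, const double* __restrict in,
               double k, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = k * in[i];
    }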
 
I find it puzzling that someone mentions Java here as easier to optimize due to lack of pointers; as opposed to C++, it's a language with reference semantics (and if you think there are no pointers, look here), so there's even more indirection (and all the related implications apply, including but not limited to aliasing). If anything, Fortran is indeed a good example.

Naturally, pointer aliasing is more of a problem for another language, called "C". C++ is a different language, with different typing discipline, which enables type-based alias analysis, with obvious implications for compiler optimization.

Lastly, folks who mention GC in the context of memory leaks/releasing memory seem to be somewhat misinformed as to what GC's advantages are and what it's for:

But it also has advantages of being more convenient for the user that they don’t have to worry about object ownership (not “releasing the memory” – if you hear that GC solves the problem of “releasing the memory” it means you talk to an idiot) and it’s also faster than manual (heap-based) memory allocation, as well as can lead to less memory fragmentation and stronger object localization.


I haven't seen "delete" (or "delete[]", for that matter) used "by hand" in a modern C++ code base for well over a decade, and "forgetting to delete" was never really a problem outside of last-century marketing materials for the managed languages. And if one really wants to go that route, then C++ with its unified treatment of resources (be it memory, files, locks, network connections, etc.) via RAII is by far easier and more automatic than having to remember try-catch-finally or try-with-resources constructs "by hand". I mean, if the C++ alternative for one resource is a problem for a programmer coding in C-style (and it really isn't when writing C++ in C++), then having to manually remember the various try constructs for all kinds of resources must be an insurmountable challenge for that same programmer ;]
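
To make the RAII point concrete, a minimal sketch (the file name is made up): one destructor-based pattern covers locks, C file handles, memory, and anything else, with no finally blocks to remember:

    #include <cstdio>
    #include <memory>
    #include <mutex>
    #include <stdexcept>

    std::mutex m;

    void process() {
        // Both resources are released automatically, in reverse order of
        // acquisition, on normal return *and* on an exception.
        std::lock_guard<std::mutex> lock(m);
        std::unique_ptr<std::FILE, int (*)(std::FILE*)>
            file(std::fopen("quotes.csv", "r"), &std::fclose);
        if (!file) throw std::runtime_error("cannot open quotes.csv");
        // ... read via file.get() ...
    }   // fclose runs, then the mutex unlocks -- automatically.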
 
I would probably blame the programmer, not Java.
As I understand it, he was talking about the unpredictability of GC (when does it happen, how long does it take) in the context of soft/hard realtime tasks.

Java has "stop the world" GC. But I see there are Real Time libraries that have threads which are protected from this.

Still, it demonstrates why one might look at manual memory allocation as desirable.
 
BTW I agree with Polter's post.

Discussion of memory management is old hat at this stage. It is a remnant of 90s marketing blurb.
 
AFAIK GC in Java and C# is non-deterministic, so how is it the fault of the programmer?
Simply because you need to know your tools. If you decide to do RT programming in Java, you better use a subset that can be deterministic or a Java implementation that is suitable for RT.
 
I don't understand the emphasis here for the case of F#. Don't all functional languages (q, Lisp, Haskell, OCaml to name a few) support Higher Order Functions?

I guess it's the only Microsoft-supported functional .NET language...

No emphasis intended :)
 
Simply because you need to know your tools. If you decide to do RT programming in Java, you better use a subset that can be deterministic or a Java implementation that is suitable for RT.
Some developers claim that RT Java is impossible. Can GC be made deterministic in Java?
 
Some developers claim that RT Java is impossible. Can GC be made deterministic in Java?
I can't comment on either RT Java development or RT C++, since I have never done it. I have done C and assembly. However, there is a Real-Time Specification for Java, and there are implementations for VxWorks and QNX. From a quick Google search you can find they are used by the usual suspects (Boeing, BAE, Raytheon, etc.).
 