What programming languages should I study?

I want to work for Novell some day, so which programming languages should I study up on? Probably C, C++ and Java?

Good way to start; you’ll also need to gain a lot of experience. The more experience you have, the better you’ll earn :wink:

If you are new to programming I suggest you start with learning PHP, which is a scripting language. Learning this will give you the basic knowledge of programming.

Later on you can switch to a real programming language such as Java, Python, C and C++.

EarthMind wrote:
> Good way to start; you’ll also need to gain a lot of experience. The
> more experience you have, the better you’ll earn :wink:
>
> If you are new to programming I suggest you start with learning PHP,
> which is a scripting language. Learning this will give you the basic
> knowledge of programming.

Unless the OP is (going to be) a web developer, PHP is definitely
not a good choice for a first language.
See http://www.codinghorror.com/blog/archives/001119.html for why not.

“PHP isn’t so much a language as a random collection of arbitrary stuff,
a virtual explosion at the keyword and function factory.”

> Later on you can switch to a real programming language such as Java,
> Python, C and C++

There’s no such thing as “real” or “toy” programming languages. Every
language has its niche, strong points and uses, even e.g. (.*)BASIC or LOGO.
A good programmer isn’t someone who knows one language inside and out,
but someone who knows how to translate requirements for input, output
and the stuff in between into the most efficient logic.
That logic can then be translated into the most efficient programming
language for that particular job. Sometimes that language is a compiled
language like e.g. C; other times an interpreted language, e.g. AWK or
Python, is more appropriate. Or something in between, like Java.

I wish I had Python available 25 years ago; learning to apply efficient
logic would have been a lot easier with Python than it was with GW-BASIC
on a TRS-80 clone.

For the OP: Python is my recommendation for a first language. It has

  • some good, free IDEs (I use WingIDE),
  • (almost) all the libraries you’ll ever need,
  • great support in Usenet groups, mailing lists, IRC etc.
  • It lets you concentrate on the logic of your problem instead of housekeeping like declaring variable types or allocating memory.
  • Its more advanced features can be picked up gradually; the language ‘grows’ on you.
  • It’s a very general language; it can be used in a broad range of applications.
  • It’s not platform dependent: the same code (most of the time) runs on Unix-like, Windows or OS X PCs without changes.

After gaining experience, C, C++, Java, .NET (or Mono) etc. are just
‘more of the same’, with some new tricks and new gotchas.

…I suggest you start with learning PHP…

Sorry, but that is extremely poor advice. PHP is nowadays used almost exclusively as a web scripting language. As such, it has limited object-oriented support and is bundled with libraries aimed primarily at web developers. For beginning programmers, I recommend either Python or C++. Python is the more basic of the two, and bypasses many problems associated with compilation, linking, and header files. C++ is a more realistic language used by professionals, but is correspondingly more difficult to learn and requires a bit more patience. I personally first began programming with C++, and enjoyed the extra challenge. Either way, both are great languages with large and helpful communities.

Note: If you use both Windows and Linux, you might want to consider C# as an alternative to C++ or Python. Mono has reached a fair degree of stability, and the MonoDevelop IDE is very user-friendly for beginners. The downside is that Mono still has limited support for .NET 3.x. While it might be a good idea to hold off on Mono right now, it is definitely another option worth considering in the near future. :wink:

Start with C++: Learn object-oriented programming from the start.

Then learn C: Learn pointers, memory management, and data structures.

Then learn Java: Master a full-featured API and hone your object-oriented skills with large programs.

Learn Scheme or Lisp to understand functional programming.

Finally, learn a scripting language such as Python, Ruby, or Scala.

  1. (optionally) do BASIC/PHP to get the basic idea of programming
  2. do Perl to get to know the basic idea of programming and the notion of pointers (Perl and Java wrongly call these references)
  3. do C to get to know char*-based string handling (see the sketch after this list)
  4. do Perl again to get an introduction to namespaces and, incidentally, objects
  5. do C++ to get the rest of OO (C++/Java-style objects, templates, polymorphism, overloading, upcasting, multiple inheritance)
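
As an illustration of step 3, a minimal sketch of the kind of char*-based string handling C makes you do by hand (buffer size and names are made up for the example):


#include <stdio.h>
#include <string.h>

int main(void)
{
        char buf[32];            /* fixed-size buffer; sizing it is the caller's job */
        strcpy(buf, "Hello");    /* copies the characters plus the trailing '\0' */
        strcat(buf, ", world");  /* appends in place; no bounds checking happens here */
        printf("%s has %d chars\n", buf, (int)strlen(buf));
        return 0;
}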

There is not really a need for Java. When done right, once you have the concepts at heart, any (non-esoteric) programming language is almost directly usable; a bit of extra time on each language may still be necessary to get used to its peculiarities.

You have to blame C++ for that terminology. And indeed they are closer to C++ references (&) than to C/C++ pointers, as you cannot create a null reference.

As for the original question, you don’t so much study programming languages as practice them. After a while you’ll see the similarities but also appreciate their peculiarities.

Sorry, I have to correct myself here. Java does allow you to assign null to a reference, and therefore null pointer exceptions can happen. But compared to C pointers, Java references are tame.

That is a totally bogus argument. Java references can be null. And actually, C++ references can be null too, but that is something very implementation-specific and hence not the point here now. But:

Why do I think they are pointers? Because pointers you have to dereference, whereas references (aliases) you can just use…


int n = 5;
int &r = n;
int *p = &n;
printf("%d %d
", r, *p);

The same in Perl…


$n = 5;
# NOTE: You should not use *r on a daily basis, as globs can be deadly
*r = \$n;
$p = \$n;
printf "%d %d
", $r, $$p;

See? I have to dereference $p first before getting the value out of it.
And for great justice, in Java too:


int n = 5;
int r = 

Oh wait, Java is too inferior to even support this.

Now you still need to explain what makes Java pointers “tame”.

Re Java “pointers”: No pointer arithmetic. No correspondence of arrays with pointers. No pointers into structure elements. No pointers on primitive types. And so forth.

“Those who do not understand it are doomed to reinvent it”—or to leave it out entirely. Sucks to be Java, does not even have memcpy for use with an array of primitive types.
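
To make that list concrete, a minimal sketch (C-flavoured C++) of exactly the operations named above, none of which Java references offer:


#include <stdio.h>
#include <string.h>

struct point { int x, y; };

int main(void)
{
        int a[4] = {1, 2, 3, 4};
        int *p = a;               /* an array decays to a pointer to its first element */

        p += 2;                   /* pointer arithmetic */
        printf("%d\n", *p);       /* prints 3 */

        struct point pt = {5, 7};
        int *py = &pt.y;          /* a pointer into a structure element */
        *py = 9;                  /* a pointer on a primitive type, dereferenced */

        int b[4];
        memcpy(b, a, sizeof(a));  /* raw copy of an array of primitive types */
        printf("%d %d\n", pt.y, b[3]);
        return 0;
}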

It’s interesting that to explain pointers you have to use the r-word, as in dereference. This probably indicates that our distinction between pointer and reference is more linguistic than any hard and fast definition of what a pointer is and what a reference is. I suspect the inventors of Perl and Java were trying to distance themselves from the connotations of C-style pointers when they adopted the word reference. Probably also a nod to the pass-by-value/pass-by-reference distinction for function arguments.

Speaking of calling conventions: only C++ &references, and Perl in some contexts, pass by reference; all the others actually use pass-by-value. (One more reason not to call them references but pointers :wink: )


static void foo(Object p)
{
    p = null; /* does not affect caller */
}
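
For contrast, a minimal C++ sketch: assignment through a & reference does reach the caller.


#include <stdio.h>

static void foo(int &p)
{
        p = 42; /* DOES affect the caller */
}

int main(void)
{
        int n = 0;
        foo(n);
        printf("%d\n", n); /* prints 42 */
        return 0;
}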

Back to the OP’s question, I suggest learning enough C and C++ to know the basic differences between the two and then pick some source code to read and try to understand.

In industry, being able to figure out existing code is almost more important than writing new code.

From the start, get in the habit of writing good comments in a consistent format. Either pick an existing format or make up your own as you learn; it doesn’t matter which.

Try to get in the habit of commenting every variable you declare, even int loop counters. This may seem like overkill and a bit of a drag at first, but this habit will save your bacon in the future.

Consider this simple example (my syntax might be a bit off - my most recent coding was Fortran about a year ago):


(int)fib(int i)
int j=1,t=0;
{while (j<=i,t=t+j,j++);return t;}

vs:


(int)FibonnacciSum(int Index) // function to calculate Fibonacci sum of [Index] repetitions
   int i=1,sum=0; // i is loop counter, sum is value to be returned
   {
      while (i<=Index,sum=sum+i,i++); // calculate Fibonacci sum
      return sum;
   } // end of function

In 5 years it will be much easier to read the second example. Remember, in industry, you aren’t writing code for yourself, but for the poor guy coming into the project fresh to find a bug no one noticed for 5 years.

Your second example is as bad as the first. Both have a ridiculous three-part expression as the condition of the ‘while’ clause, and both actually run in an endless loop. Well, I can congratulate you on showing twice how to NOT do it. The second one does not add anything; expanding i to Index and t to sum hardly buys anything in a function this small. Oh, and mixing styles (“Index” vs “index”) is quite a bad habit. Obvious comments like “i is loop counter” are redundant, as is one of the two “calculate Fibonacci sum” comments.

For great justice, a proper version. Maybe not the fastest, but one of the cleanest.


static int fib(int i)
{
        if (i <= 1)
                return i;
        return fib(i - 1, i - 2);
}

/* See second identity of Fibonacci numbers on Wikipedia */
static int fibsum_identity2(int i)
{
        return fib(i + 2) - 1;
}

/* fibsum_identity2(7) = 33. Q.E.D. */

But in fact, in Java there isn’t even a distinction between referrer and referent. There is no other way to get to the object except by that “handle”. So you can’t do this, which even PHP can:

function foo(&$result) {
  $result = 42;
}

This inability to pass back a result in arguments leads to the creation of auxiliary classes for results (or some ugly tricks with arrays), making code more drawn out.

You define fib as getting one variable passed to it, and then as part of this definition you return a call to fib passing two variables. You claim this logic is better than what I used? I at least disclaimed that my syntax might be wrong; turns out my logic was wrong too. Good thing I was talking about the importance of commenting and not answering a question on how to calculate Fibonacci numbers.

My example was an effort to stress the importance of good commenting, which you have, ironically, further reinforced by providing another example that appears to have errors but with no documentation whatsoever. At least my example made it easy to figure out what I had done wrong.

One could read what I was trying to accomplish with my code and realize that the function was wrong. This is the very definition of a bug: the syntax and/or logic do not agree with what was intended. Easier to find and fix with good comments.

From your example, I have no idea why you are defining the fib function to accept one variable but then, as part of its own definition, calling it with two. Perhaps if “obvious comments” had been added to your example, this disagreement between accepted and passed arguments would have been noticed. For that matter, isn’t the way you return from a function with a call to itself an endless loop too?

To a point I’ll agree that in such a small snippet of code, perhaps it is intuitively obvious what is going on. However, good commenting habits need to start somewhere. It is better to comment extremely well, bordering on overkill, all the time than to be in the habit of not commenting well and then have a tangled mess for a source as a deadline approaches and you still can’t find what is wrong.

BNG22908 wrote:

>
> I want to work for Novell some day, so which programming languages
> should I study up on? Probably C, C++ and Java?

A powerful programming language is more than just a means for instructing a
computer to perform tasks. The language also serves as a framework within
which we organize our ideas about processes. – Structure and
Interpretation of Computer Programs.

I did not need any comments because it is obvious what a fibsum is defined as; this was not clear in your code at all. (I might note that I had to look up what a fibsum actually is, because no one told me about it in university.)

> From your example, I have no idea why you are defining the fib function
> to accept one variable but then, as part of its own definition, calling
> it with two.

Something must have slipped horribly during compilation. :smiley:
But at least the compiler will tell you; your previous code compiled, ran, and output a wrong result, and the unclueful mathematician would not know why.

> For that matter, isn’t the way you return from a function with a call
> to itself an endless loop too?

if (i <= 1) return i; is the stop condition.
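
For completeness, a sketch with the recursion repaired the way it was presumably meant, i.e. fib(i - 1) + fib(i - 2); with that one fix the pair compiles and does what the comments promise:


static int fib(int i)
{
        if (i <= 1)
                return i;                   /* stop condition */
        return fib(i - 1) + fib(i - 2);     /* the Fibonacci recurrence */
}

/* See second identity of Fibonacci numbers on Wikipedia */
static int fibsum_identity2(int i)
{
        return fib(i + 2) - 1;
}

/* fibsum_identity2(7) = fib(9) - 1 = 34 - 1 = 33. Q.E.D. */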