
I know that most computer science students learn as if some basic knowledge of programming is assumed, and they gradually pick up more rigorous reasoning and ways to produce more reliable code. On the other hand, I trained as a mathematician and found it very interesting to learn the axioms and build the whole theory from just a few assumptions and rigorous definitions; I could be sure that everything works as long as there is no contradiction between the axioms.

Is there a similar learning curve for math majors studying computer science? Can one first learn Turing machines, then build the core of an OS, then a hex editor, then a C compiler, then some theorem-proving software like Coq, and then prove that the OS is free of bugs, and so on?


4 Answers


Yes, it is possible to do that, but it is terribly inefficient. It will take you approximately forever, partly because it is no longer true that any single person can understand all of Computer Science. That boundary was passed somewhere around the turn of this century, just as in mathematics it was passed around the turn of the previous century.

If you start from the very beginning, ignoring everything that came later, you deny yourself the chance to benefit from the big ideas that developed along the way. One of the biggest ideas in computing is abstraction, and understanding it buys you a lot, both in comprehension and in guiding your learning.

A Turing Machine, just like pure machine language, has no abstraction facilities whatsoever. Assembly language adds a bit. Fortran and Cobol added more. But it was the Algol family of languages that really solidified the importance of abstraction, and it was languages like Lisp and Smalltalk that extended it to modern levels of understanding. If you have a good grasp of abstraction, such as you would gain from programming in Java or Python, then you can more easily move down to simpler (less abstract) languages and up to higher levels (databases, AI, etc.).

I think you may actually have a misunderstanding and are conflating Computer Programming with Computer Science. They are not the same. Programming, at a high or low level, is just a tool for understanding more important and more interesting things. (I'm not claiming you definitely have this misunderstanding, but your question at least hints at it.)

Most students of computer science start with some high(er) level language, often Java or Python. From there they move up and down the abstraction scale. I've built (finite) Turing Machines and explored them deeply when studying Algorithms. But I didn't start there. I'm old enough that Fortran and Basic were my first languages. But then Pascal taught me about a certain kind of abstraction (procedural) and later Smalltalk taught me object abstraction.
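
To give a concrete taste of what "building a (finite) Turing Machine" might look like, here is a minimal simulator sketched in Python. It is purely illustrative: the function name run_turing_machine and the toy flip_bits machine (which just flips the bits of a binary string) are made up for this example.

# Minimal Turing machine simulator (illustrative sketch).
# transitions maps (state, symbol) -> (symbol to write, head move, next state).
def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head < 0:                      # grow the tape with blanks on the left
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):             # grow the tape with blanks on the right
            tape.append(blank)
        symbol = tape[head]
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Toy machine: walk right, flipping 0s and 1s, halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_bits))   # prints 01001_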

I've never built a full OS, but I have built a kernel. A microkernel is really only a few lines of code, but to understand that you have to take a layered view of an OS (abstraction again).
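
To make the "few lines of code" claim a little more concrete, here is a toy sketch in Python of that layered view. The kernel and ticker names are made up for this sketch, and a real microkernel obviously does far more, but the core really is just a dispatch loop, with everything else layered on top as ordinary tasks.

# Toy "microkernel": the core is just a scheduler that dispatches tasks.
# Tasks are Python generators; yield plays the role of handing control back.
from collections import deque

def kernel(tasks):
    ready = deque(tasks)              # run queue
    while ready:
        task = ready.popleft()
        try:
            next(task)                # run the task until its next yield
            ready.append(task)        # still alive: put it back in the queue
        except StopIteration:
            pass                      # task finished; drop it

def ticker(name, n):
    for i in range(n):
        print(f"{name}: tick {i}")
        yield                         # give control back to the "kernel"

# Drivers, file systems, shells, etc. would just be more tasks on top of the loop.
kernel([ticker("clock", 3), ticker("logger", 2)])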

I've built a lot of compilers, but always in higher level languages (C and higher). I've built editors, but not text editors. Graphical drag and drop editors are more interesting in today's world. Think Scratch, though that wasn't me. But my higher level view of what constitutes a program (abstraction) enables me to build any of those things as well as future things for which the need might arise.

So, again, while a purely historical, low level to high level, approach is possible, it isn't, in my well-formed and long-held opinion, the best way. In particular, you don't have to independently re-develop the idea of abstraction to become a good programmer and to use abstraction effectively to get to the "interesting" things.

But that is just the programming part. There is much more in Computer Science and you won't get to that soon enough if you recapitulate the entire history of computing.


On high and low level

First, let me say that low-to-high level in programming is very different from (and often the opposite of) low-to-high level in education.

Educationally you want to start with what you know and move one step at a time to new lands. You want to start concrete and move to the abstract.

In programming, you should not move from low level to high level (hardware -> machine code -> OS -> ...). What is technically low level is educationally abstract (high level).

A suggestion

From reading your background, and depending on your age, I would recommend Structure and Interpretation of Computer Programs, both the videos and the book. You will learn many, many things, including but not limited to: writing compilers, OS topics (streams, concurrency, etc.), and low-level machines.

https://mitpress.mit.edu/sites/default/files/sicp/index.html

https://www.youtube.com/watch?v=2Op3QLzMgSY

I would also consider other languages such as Snap, if there are good resources for teaching it well.


About "Computer Programming from Bottom to Top", I recommend the book By Nisan and Shoken.

The Elements of Computing Systems: Building a Modern Computer from First Principles

See https://www.nand2tetris.org/

(Computer Science is a rather different story)


Actually, mathematics was never built by laying down definitions and axioms and then deducing theorems.

Look at numbers:

  • Complex numbers appeared in 1545 as tricks to deal with radicals in the solution of cubic equations. The correspondence with points in the plane R^2 became clear in 1799.
  • Speaking of R, the first formal definition of the real numbers was given by Cantor in 1871.
  • And the (apparently not so) natural numbers 0, 1, 2, ... only got an axiomatization in the period 1860-1890 (Peano and others).

Anyway, that did not prevent mathematicians from playing with questions about numbers for centuries, even millennia, such as showing that the square root of 2 is not a rational number.
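
For the record, that classical argument fits in a few lines. Suppose $\sqrt{2} = p/q$ with $p$ and $q$ integers and the fraction in lowest terms. Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even, say $p = 2k$. Substituting gives $4k^2 = 2q^2$, i.e. $q^2 = 2k^2$, so $q$ is even as well, which contradicts the assumption that $p/q$ was in lowest terms. Hence $\sqrt{2}$ is irrational.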

Definitions, axioms and formal theories are introduced when they help to clarify things and solve problems.

  • True, the axiomatic method was mostly a product of late 19th and early 20th century thought. It unified, rather than created, (some) mathematics. But today, a lot of students learn with methods informed by that development.
    – Buffy
    Commented Oct 19, 2019 at 20:25
  • -1 for the unnecessary and debatable frame challenge. “The concept of axiomatic development in mathematics must be ranked as one of the very greatest of the Great Moments in Mathematics.” – Howard Eves, Great Moments in Mathematics Vol. 1, in regards to the development in Greece 350 BC. Commented Oct 20, 2019 at 18:46
  • @DanielR.Collins This does not contradict the fact that mathematicians mostly work from top to bottom. You start from a conjecture (often false, as you'll discover later), try to reduce it to sub-problems (lemmas and properties), and work down to commonly accepted statements: axioms if you are inclined to formality, or "proof left to the reader", or "obviously". Commented Oct 21, 2019 at 13:24

Can one first learn Turing machines, then build the core of an OS, then a hex editor, then a C compiler, then some theorem-proving software like Coq, and then prove that the OS is free of bugs, and so on?

Can you learn anything in any arbitrary order if you put in enough effort? Sure. But there are things you're glossing over.

You never really mentioned what your learning goal is, but I'm going to assume your goal is to learn skills that you could build a career out of, rather than just having a look around the history of CS as a recreational exercise.

If it is a recreational exercise, then you can just do what you want. It's the equivalent of freely walking around in a museum and looking at whatever you're interested in.


0. Computer science vs computer science

The answer depends on what you mean by "computer science". If you're purely focused on the algorithmic theory of it all, then it follows the same general approach as math, and therefore you can/should learn it the same way.

But the examples you use in your question seem to be much more focused on practical applications of CS than on the theoretical, algorithmic side. There, the same logic does not hold true, and the rest of this answer is written to address "software development" rather than theoretical computer science.

1. Obsolete means useless

For example, if you want to learn how to build a car, you could start by learning to build a cart, and you could take a deep look at the intricacies of minimizing wear and tear on a wooden axle. But it'd be a lot of effort to learn something that is no longer relevant today, and it won't meaningfully contribute to your experience as a car mechanic.

Similarly, if you want to be a C# developer, there's no point in learning Assembler first. Just because Assembler came before C# and somewhat inspired the C# creators does not mean it's beneficial to learn it now that C# is a well established platform on its own.

2. Catching up to current technology

Don't forget how fast the industry moves. If you spend time and effort learning the old and outdated things, then by the time you're done with them, some of the things that were current when you started will have become outdated themselves, and you'll have even more to learn to catch up.

The software development field moves at breakneck speed, and it's already hard for professionals at the forefront to keep up with new developments. As a learner, you will be slower (due to inexperience) and you have more ground to cover (because you're not at the forefront yet).

Sure, if you focus on a particular specialization of software development, you'll have less ground to cover, but I'd advise against starting out with tunnel vision. Programmers are incredibly susceptible to the "when you have a hammer, everything looks like a nail" adage, and if you teach yourself a limited skillset, you're likely to end up applying it in inefficient ways because you don't know of a better one.

3. Conventionalism over universalism

Math is different. Math is universal, and the core principles have not changed for millennia. You can't really learn the newer things without understanding what they are based on.

Good mathematics is a matter of universal correctness. You could independently rediscover math and end up with the exact same end result.

Programming paradigms, however, are fashionable. It's not about what's universally correct, but rather about what works. In a different ecosystem, different paradigms prevail.

Good programming is a matter of convention. If you rebuild programming paradigms from the ground up, you are likely to end up with completely different paradigms. Even if they are as correct, they will still be fundamentally different from the current paradigms and you won't be a compatible employee for most (if not all) companies.

Way back when processing power was the biggest bottleneck, low-level programming reigned supreme as it gave developers control over every minor aspect of the application in order to squeeze every bit of performance out of it.

Nowadays, with processors being everyday powerhouses, the bottleneck has shifted towards maintainability of the codebase. The prime focus now is human-readable and change-friendly code.
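
As a toy illustration of that shift (a made-up example, not one from any particular codebase): the classic XOR-swap trick avoids a temporary variable, a saving that once mattered; the straightforward version below it is what maintainability-first code looks like, because the intent is obvious at a glance.

# Old-school micro-optimization: swap two integers without a temporary.
a, b = 6, 9
a ^= b
b ^= a
a ^= b
print(a, b)        # 9 6

# Maintainability-first style: just say what you mean.
a, b = 6, 9
a, b = b, a
print(a, b)        # 9 6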

The rift between the old and new lines of thinking is massive. They share little to no common ground anymore, which means that learning one way mostly teaches you things that the other way labels as "bad practice". If you err on the side of learning the old way first, you're going to end up making every mistake that is considered bad practice today.

4. Divide and conquer

Essentially, every mathematician needs to understand math from the ground up, from the axioms to the specialized applications thereof. You can't be a mathematician if you don't understand 1 + 1 = 2.

But software development specifically aims at encapsulation of responsibilities. I don't know how a compiler works internally, but it doesn't matter, because someone already made it. I don't need to know how it works to do my job; I just need to use it.

What you're suggesting is akin to saying that you need to understand chemical combustion, oil refining, the vulcanization of rubber, ... and car manufacturing before you can properly drive a car, and that's simply not the case. Knowing how something is built and knowing how to use it are two different things, often important to different people.

Therefore, you don't need to know how things like the C compiler or your OS were built in order to use them (for software development), and it's much better not to look under the hood until you are experienced enough to actually understand what's happening there.

5. Historic knowledge is a nice-to-have

Don't get me wrong, I still like looking at older technologies once in a while, but I do not rely on that for understanding modern technologies. It's always interesting to know more about the evolution of a field and how things have changed.

But if you're focusing on current-day-employability (which is what I assume your learning goal is), then obsolete technologies are not relevant for acquiring the needed skills.
