Not really an answer to your question, but I think it fits here to give some technical background on what the book is talking about:
Most language implementations nowadays (C++ is the most important counterexample¹) store objects not right "in place" where they are used (i.e. on the function-call stack), but in some random place on the heap. All that's stored on the stack is a pointer/reference² to the actual object.
One consequence of this is that, with mutability, you can easily switch a reference to point to another object, without needing to meddle with that object itself. You don't even need to have an object yet: instead, you can consider the reference a "promise" that there will be an object by the time somebody dereferences it. If you've learned some Haskell, that will heavily remind you of lazy evaluation, and indeed it's at the heart of how laziness is implemented. This enables not only Haskell's infinite lists etc., it also lets you structure both code and data more nicely – in Haskell we call it "tying the knot", and in OO languages it's quite central to the way you construct objects.
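As an illustration (the names here are my own, not anything from the question): laziness lets a list definition refer to itself, which is the simplest form of tying the knot:

```haskell
-- Tying the knot: the definition of fibs refers to fibs itself.
-- Laziness means each element is only a "promise" until demanded.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- take forces only finitely many of those promises:
firstEight :: [Integer]
firstEight = take 8 fibs  -- [0,1,1,2,3,5,8,13]
```

In an imperative language without referential transparency, evaluating such a self-referencing definition on demand would be a minefield of side-effect ordering; in Haskell it's just a value.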
Imperative languages can't do this "promise" thing as nicely and automatically as Haskell can: lazy evaluation without referential transparency would lead to awfully unpredictable side effects. So when you promise a value, you need to keep track of where it will first be used, and make sure you manually construct the promised object before that point. Obviously, it's a disaster to use a reference that just points to some random place in memory (as can easily happen in C), so it's now standard to point to a conventional spot that can't be a valid memory location: the null pointer. That way you at least get a predictable error message, but an error it is nevertheless; Tony Hoare therefore now calls this idea of his a "billion-dollar mistake".
So null references are evil, Haskell cleverly avoids them. Or does it?
Actually, it can be perfectly reasonable not to have an object at all, namely when you explicitly allow empty data structures. The classic example is linked lists, and thus Haskell has a specialised `null` function that checks just this: is the list empty, or is there a head I can safely refer to?
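A minimal sketch of guarding a partial function with such a check (`safeHead` is just an illustrative name, not a standard function):

```haskell
-- Only call the partial function head when null has confirmed
-- the list is non-empty; otherwise report the absence explicitly.
safeHead :: [a] -> Maybe a
safeHead xs
  | null xs   = Nothing        -- no object to refer to
  | otherwise = Just (head xs) -- a head we can safely use
```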
Still – this isn't really idiomatic, for Haskell has a far more general and convenient way of safely resolving such "optional" scenarios: pattern matching. The idiomatic version of your example would be
    mySecond' :: [a] -> a
    mySecond' (_:x2:_) = x2
    mySecond' _        = error "list too short"
Note that this is not just more concise but also safer than your code: you check `null (tail xs)`, but if `xs` itself is empty then `tail` will already throw an error.
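If you want to avoid the runtime error altogether, the same pattern match can return a `Maybe` instead (`mySecondMaybe` is my own name for this sketch):

```haskell
-- Total version: instead of crashing on short lists, make the
-- absence of a second element explicit in the result type.
mySecondMaybe :: [a] -> Maybe a
mySecondMaybe (_:x2:_) = Just x2
mySecondMaybe _        = Nothing
```

This is Haskell's type-level answer to the null problem: the caller is forced to pattern-match on the `Maybe`, so the "no value" case can't be accidentally dereferenced.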
¹C++ certainly allows storing stuff on the heap too, but experienced programmers like to avoid it, for performance reasons. Conversely, Java stores "built-in" types (you can recognise them by their lowercase names, e.g. `int`) on the stack, to avoid the cost of the indirection.
²Consider the words pointer and reference synonyms here, though some languages prefer one or the other, or give them slightly different meanings.