The SHA-3 functions are defined over arbitrary bit strings as input, whose length in bits might not be a multiple of eight. Implementations are allowed to skip support for this and restrict themselves to inputs that are a whole number of bytes, and in practice they routinely do.
After the enc8 bytes are concatenated, are they transformed back into a number or binary or...?
Nope, although the definitions take great care that you could do so, unambiguously. This particular NIST SP defines constructions that involve concatenating multiple inputs. For example, the second clause of the definition of cSHAKE (for non-empty $N$ or non-empty $S$) is:
$$\mathrm{KECCAK}[256](\mathrm{bytepad}(\mathrm{encode\_string}(N) \| \mathrm{encode\_string}(S), 168) \| X \| 00, L)$$
And one of the key concerns in these definitions is that the resulting bitstrings be unambiguously parseable—that you could, from any of the bitstrings and knowledge of the context in which it was used, unambiguously recover the values of $X$, $L$, $N$ and $S$ that went into it. This isn't because there's a practical requirement to parse these values after you've concatenated them, but rather because the designers want the encoding to be an injective function—meaning that every distinct combination of inputs will encode to a different bit string. The advantage of this is that we never have to worry that two distinct input combinations will "collide" and produce the same encoded string. (Note that if an attacker could find such an encoding collision that'd count as a break of the derived function!)
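To make the injectivity concrete, here's a byte-level sketch of `left_encode` and `encode_string` from the SP, ignoring the bit-reversal that $enc_8$ applies inside each byte (so this shows the structure of the encoding, not its exact bit-level output):

```python
def left_encode(x):
    """Byte-level left_encode: the base-256 digits of x, most significant
    first, prefixed by a byte giving the number of digits."""
    digits = []
    while True:
        digits.append(x % 256)
        x //= 256
        if x == 0:
            break
    digits.reverse()
    return bytes([len(digits)]) + bytes(digits)

def encode_string(s):
    """Prefix a byte string with left_encode of its length in bits, so a
    parser always knows where the string ends."""
    return left_encode(8 * len(s)) + s

# Because each field carries its own length prefix, distinct (N, S) pairs
# can never concatenate to the same encoded string:
assert encode_string(b"AB") + encode_string(b"C") \
    != encode_string(b"A") + encode_string(b"BC")
```

Without the length prefixes, both sides of that final assertion would be the bare bytes `ABC` and the two input pairs would collide.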
What is meant by the base256 encoding of an integer?
The same thing that's meant by a base-2, base-8, base-10 or base-16 encoding of an integer, except with base 256. The integer $m$ is represented by a sequence of digits $x_0, \dots, x_{n-1}$ such that
$$m = 256^0 x_0 + 256^1 x_1 + 256^2 x_2 + \cdots + 256^{n-1} x_{n-1}$$
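A few lines of Python make the digit extraction concrete (the function name is mine, just for illustration):

```python
def base256_digits(m):
    """Base-256 digits of a non-negative integer, least significant first."""
    if m == 0:
        return [0]
    digits = []
    while m > 0:
        digits.append(m % 256)  # next digit x_i
        m //= 256
    return digits

# 500 = 244 * 256^0 + 1 * 256^1
print(base256_digits(500))  # [244, 1]
```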
Do $x_1, x_2, \dots, x_n$ all contain the same bytes, or do they contain different bytes split across them?
The $x_i$'s are not bytes, they're base-256 digits, abstractly. These are of course in a one-to-one correspondence with bytes, but the definition doesn't assume any specific representation of the digits—however they are represented, the $enc_8$ function is going to encode them into bitstrings.
Does enc8 effectively mean little-endian?
Kelakala answers this with "no" based on a definition where endianness is about byte order. Not an unreasonable point to make, but I think the important thing to remark is that, again, SHA-3 is defined over arbitrary bitstrings, and the definition of $enc_8$ is making sure that the base-256 digits get encoded into 8-digit bitstrings with the less significant bits first—the same idea as little-endianness, except over bits instead of bytes.
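A sketch of that bit-level reading of $enc_8$ (the LSB-first convention here reflects my reading above, not a verbatim quote of the spec):

```python
def enc8(d):
    """Encode one base-256 digit as a list of 8 bits, least significant
    bit first--'little-endian over bits'."""
    assert 0 <= d < 256
    return [(d >> i) & 1 for i in range(8)]

print(enc8(1))    # [1, 0, 0, 0, 0, 0, 0, 0]
print(enc8(128))  # [0, 0, 0, 0, 0, 0, 0, 1]
```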
Remember how I said above that many SHA-3 implementations will skip support for input bitstrings that don't fit evenly into bytes? All this business here is about catering to such implementations:
- Integers are encoded into bit strings with lengths divisible by 8, so that byte-oriented implementations aren't forced to cope with arbitrary bitstrings;
- The encoding takes care to specify the ordering of each individual bit of the input, whereas byte-oriented implementations won't expose any such order.