In a pipeline, commands run concurrently. That's the whole point: the output of one is fed to the other in real time.
You only know the exit status of a command when it returns. If you wanted awk to process the output of foo and also have access to its exit status, you'd need to run awk after foo, after having stored foo's output somewhere, like:
foo > file
awk -v "rc=$?" '{print rc, $0}' < file
Alternatively, you could have awk run foo itself (well, still via a shell to interpret the command line), read its output (through a pipe, via its cmd | getline interface to popen()), and get its exit status with:
awk -v cmd=foo '
  BEGIN {
    # read and forward each line of output from cmd
    while ((cmd | getline) > 0) {
      print
    }
    # close() reports the exit status of cmd
    rc = close(cmd)
    print rc
  }'
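For instance, with a stand-in command line (the value printed for the status depends on the implementation, as noted below):

awk -v cmd='echo hi; exit 5' '
  BEGIN {
    while ((cmd | getline) > 0) print
    print close(cmd)
  }'

which prints hi followed by some encoding of the exit status 5 (for example 5, or 1280).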
However, note that the way awk encodes the exit status varies from one awk implementation to the next. In some, it's the status straight as returned by waitpid() or pclose(); in others, it's that value divided by 256 (even when foo is killed by a signal). In any case, you should be able to rely on rc being 0 if and only if the command was successful.
In the case of gawk, that behaviour did change recently (in version 4.2).
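So, if all you need is to know whether foo succeeded, a portable sketch is to test the return value of close() against 0 and not interpret it further:

awk -v cmd=foo '
  BEGIN {
    while ((cmd | getline) > 0) print
    if (close(cmd) != 0) {
      print "cmd failed" > "/dev/stderr"
      exit 1
    }
  }'

(Most awk implementations understand /dev/stderr, and on Linux it exists as a file in any case.)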
Or you could have the exit status fed at the end through the pipe:
(foo; echo "$?") | awk '
{saved = $0}
NR > 1 {
# process the previous line
$0 = prev
print "output:", $0
}
{prev = saved}
END{rc = prev; print rc}'
(assuming foo's output, when not empty, ends in a newline character (i.e. is valid text)).
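For example, with a stand-in for foo that prints two lines and exits with status 3:

(sh -c 'echo one; echo two; exit 3'; echo "$?") | awk '
  {saved = $0}
  NR > 1 {
    $0 = prev
    print "output:", $0
  }
  {prev = saved}
  END {rc = prev; print rc}'

which outputs:

output: one
output: two
3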
Or fed through a separate pipe. For instance on Linux and with a shell other than ksh93:
{ : extra pipe | { (foo 3<&-; echo "$?" > /dev/fd/3) | awk '
    {print}
    END {getline rc < "/dev/fd/3"; print rc}'
  } 3<&0 <&4 4<&-; } 4<&0
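That one-liner is dense, so here is the same plumbing spelled out, with comments describing each redirection (a sketch; the /dev/fd/3 behaviour it relies on is Linux-specific):

{                                  # fd 4 holds a copy of the original stdin (see last line)
  : extra pipe | {                 # the no-op : gives us an extra pipe; this group
                                   # starts with its read end on stdin
    (
      foo 3<&-                     # run foo with the extra pipe closed (not leaked to it)
      echo "$?" > /dev/fd/3        # then send the exit status down the extra pipe
                                   # (on Linux, opening /dev/fd/3 reopens the pipe,
                                   # here for writing)
    ) | awk '
      {print}                      # forward the output of foo
      END {
        getline rc < "/dev/fd/3"   # read the exit status back from the extra pipe
        print rc
      }'
  } 3<&0 <&4 4<&-                  # fd 3 := the extra pipe read end, stdin restored
                                   # from fd 4, fd 4 closed
} 4<&0                             # duplicate the original stdin onto fd 4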