>>155
i see it intuitively as a difference in input granularity. any deterministic device will give exactly one lifetime-sum output for exactly one lifetime-sum input. that is to say, if you treat every bit of input a computer ever receives during its lifetime of operation as a single input (ignoring unintended inputs like electronic interference or whatever), you could build an equivalent "conversion device" that produces the exact same output for that same input.
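a minimal sketch of that in python, assuming the computer can be modelled as a pure function from one lifetime input string to one lifetime output string (the `device` function and the example bitstring are made up for illustration):

```python
# hypothetical stand-in for a deterministic computer:
# one lifetime-sum input bitstring -> one lifetime-sum output bitstring
def device(lifetime_input: str) -> str:
    out, acc = [], 0
    for bit in lifetime_input:
        acc ^= int(bit)          # running parity, just to be deterministic
        out.append(str(acc))
    return "".join(out)

# the equivalent "conversion device": a one-entry lookup table
lifetime_input = "1011001"
conversion_device = {lifetime_input: device(lifetime_input)}

# exactly the same output for exactly the same input
assert conversion_device[lifetime_input] == device(lifetime_input)
```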
the difference comes when that input is broken down into smaller pieces. every continuous, start-aligned subset of this summed input (i.e. every prefix) could be considered in the same way, and thus emulated in the same way with a conversion device, but the complexity of a conversion device designed to perform all of these conversions becomes the sum of the complexities of all those prefixes (you could optimise it down with currying and maybe some other fancy things, sure, but that's not fundamentally different for what i'm getting at; see the sketch below).
the point is that, as this summed input approaches infinite length over the lifetime of the machine, the complexity of an emulating conversion device necessarily approaches infinity too, while the complexity of the computing device never changes.
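a toy comparison of the two, reusing the made-up `device` function from above: the table needs one entry per prefix of one lifetime input (and, to cover every possible input up to length n, exponentially many), while the program itself stays the same size forever:

```python
def device(lifetime_input: str) -> str:
    # same made-up stand-in computation as before: running parity
    out, acc = [], 0
    for bit in lifetime_input:
        acc ^= int(bit)
        out.append(str(acc))
    return "".join(out)

lifetime_input = "1011001"

# conversion device covering every start-aligned prefix of one lifetime input
prefix_lut = {
    lifetime_input[:n]: device(lifetime_input[:n])
    for n in range(1, len(lifetime_input) + 1)
}

print(len(prefix_lut))  # grows with the lifetime of the machine
# covering every *possible* input of length <= n instead needs
# 2 + 4 + ... + 2**n = 2**(n + 1) - 2 entries, while device() never grows
```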
you can make a LUT which matches a function with an infinite domain, but only over some finite selection out of that domain.
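concretely (a made-up example; `square` and the cutoff of 1000 are arbitrary): the function is defined for every integer, but any table of it only pins down a finite slice:

```python
def square(n: int) -> int:
    return n * n  # defined for every integer: infinite domain

# LUT agreeing with square() on a finite selection out of that domain
square_lut = {n: square(n) for n in range(1000)}

assert square_lut[42] == square(42)  # matches inside the covered slice
assert 10**6 not in square_lut       # says nothing outside of it
```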
so yeh, that feels like an inherent difference to me.