The question has no general answer.
In interpreted mode, local variables and operand-stack slots probably perform about the same, but that is of course up to the interpreter's implementation.
In JIT mode, it depends mostly on the target architecture. If the target CPU uses a register-file programming model (say, x86/x64 or PPC), there will probably be no operand stack at all in the resulting machine code; it will have been transformed into register assignments, competing with local variables for the same register set. If it is a stack-oriented architecture (SPARC, with its register windows), the operand stack should be fast anyway, since the architecture is built around it.
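A minimal sketch of that transformation, assuming a simple two-operand method. The bytecode in the comments is what javac typically emits; the x86-64 register choices are hypothetical, since a real JIT (e.g. HotSpot C2) decides register allocation per compilation.

```java
// Illustrative only: how a stack-based bytecode sequence might be
// register-allocated by a JIT on x86-64. Register names are hypothetical.
public class StackToRegisters {
    // javac compiles the body to roughly:
    //   iload_0   // push a          -> JIT: a already in a register, say esi
    //   iload_1   // push b          -> JIT: b in edx
    //   imul      // pop 2, push a*b -> JIT: imul esi, edx (no stack traffic)
    //   ireturn   // return top      -> JIT: mov eax, esi; ret
    static int mul(int a, int b) {
        return a * b;
    }

    public static void main(String[] args) {
        System.out.println(mul(6, 7)); // prints 42
    }
}
```

The point is that the "pushes" and "pops" vanish: the JIT tracks what each stack slot holds at each point and maps those values straight onto registers.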
You will only get a definitive answer by looking at the JIT-compiled code for a particular bytecode sequence, and that code can change with each VM version. It's probably a waste of time to optimize your bytecode at this level.
Emit bytecode using the same idioms javac produces. That way, there is a good chance the JIT will recognize the idiom and optimize it with a code path handcrafted for that javac pattern.
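As a concrete example of such an idiom, here is the counting-loop shape javac emits, which JITs are heavily tuned to recognize (e.g. for loop unrolling and bounds-check elimination). The bytecode in the comments is typical javac output; verify against your compiler version with `javap -c`.

```java
// Sketch: the standard javac loop idiom. The bytecode comments show the
// usual compiled form, including the iinc instruction for i++.
public class JavacIdiom {
    static int sum(int n) {
        int s = 0;                   // iconst_0, istore_1
        for (int i = 0; i < n; i++)  // iconst_0, istore_2;
                                     // loop: iload_2, iload_0, if_icmpge end
            s += i;                  // iload_1, iload_2, iadd, istore_1;
                                     // iinc 2, 1; goto loop
        return s;                    // end: iload_1, ireturn
    }

    public static void main(String[] args) {
        System.out.println(sum(5)); // 0+1+2+3+4 = 10
    }
}
```

If your bytecode generator produces an equivalent but unusual shape (say, incrementing via iload/iconst_1/iadd/istore instead of iinc), it is still correct, but it may miss the fast paths built for the javac form.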