For good or ill, duplicate names in JSON are permitted by the spec.
The problem is that the behaviour of decoders when faced with such duplicates is undefined.
Some parsers will reject such JSON as invalid (imho, this is the only behaviour that can really be said to be "wrong"). Most others will return the last value encountered. At least one that I know of (because I wrote it :)) treats JSON strictly as a data structure, independent of any JavaScript parsing rules or execution results: it allows access to each named value separately by ordinal index within the containing object, as an alternative to access via the key name (in which case the first occurrence is returned).
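To make this concrete (the question isn't language-specific, but Python's standard `json` module happens to illustrate two of these behaviours): `json.loads` silently keeps the last value for a duplicate name, while `object_pairs_hook` lets you see every pair in document order, much like the ordinal-index access described above.

```python
import json

# A JSON object with a duplicate name -- permitted by the spec.
doc = '{"name": "first", "name": "last"}'

# Default behaviour: the last value for a duplicate name wins.
print(json.loads(doc))  # {'name': 'last'}

# object_pairs_hook receives every (key, value) pair in document order,
# so duplicates remain visible and can be accessed by ordinal position.
pairs = json.loads(doc, object_pairs_hook=list)
print(pairs)  # [('name', 'first'), ('name', 'last')]
```

The same split shows up across other languages and libraries, so it's worth checking what your particular decoder does rather than assuming.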
Some people argue that a decoder should replicate the behaviour of a JavaScript parser and execution environment when constructing an object described by JSON (that is, the last named value should over-write any earlier values of the same name). But the simple fact is that JSON is only a data-structure standard; although inspired by and drawing on the syntax of JavaScript, it does not demand JavaScript execution or behaviours that would reflect such execution.
Accordingly, neither the RFC nor the ECMA standard dictates how a decoder must (or even should) behave when faced with duplicates. So, with the exception of parsers that reject duplicate names entirely, none of the various behaviours that accept duplicates can be said to be the "correct" one.
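If you'd rather be in the strict camp and reject duplicates outright, many decoders let you hook into object construction. A minimal sketch with Python's `json` module (the hook name `no_duplicates` is my own; `object_pairs_hook` is the real parameter):

```python
import json

def no_duplicates(pairs):
    # The hook is called with every (key, value) pair in document order,
    # before the dict is built, so duplicates are still detectable here.
    seen = {}
    for key, value in pairs:
        if key in seen:
            raise ValueError(f"duplicate name in JSON object: {key!r}")
        seen[key] = value
    return seen

print(json.loads('{"a": 1, "b": 2}', object_pairs_hook=no_duplicates))
# {'a': 1, 'b': 2}

# json.loads('{"a": 1, "a": 2}', object_pairs_hook=no_duplicates)
# would raise ValueError for the duplicate name "a".
```

Rejecting loudly at the boundary is often preferable to silently losing data you didn't know was there.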
If you are producing and consuming JSON between processes under your own control, it might be tempting to simply find a JSON encoder/decoder that works in the way that suits you and go with that. But I would advise against it: the moment your JSON is read by some other parser (a different library, a different language, or a future version of your own tooling), the duplicates may be resolved differently, and silently.
Which brings me to the bottom line:
Although the JSON standard allows duplicate names, it does not require you to use them, so the wisest path is simply to avoid them and sidestep the whole problem entirely. :)