Why are datatypes used that are larger than needed?
The number of line-of-business applications where you're doing a calculation in integers and can guarantee that the result will fit into a byte or short is vanishingly small. The number of line-of-business applications where the result of an integer calculation fits into an int is enormous.
Why does the specification have this rule for literals?
Because it is a perfectly sensible rule. It is consistent, clear and understandable. It strikes a good compromise among many language goals, such as reasonable performance, interoperability with existing unmanaged code, familiarity to users of other languages, and treating numbers as numbers rather than as bit patterns. The vast majority of C# programs use numbers as numbers.
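For example (a minimal sketch; the variable name is illustrative), an unsuffixed integer literal such as 10 is of type int, so the best common type the compiler infers for the array elements is int:

    using System;

    var widths = new[] { 10, 20, 30 };    // each literal is typed as int
    Console.WriteLine(widths.GetType());  // prints System.Int32[]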
What are the advantages, given that the huge downside is giving up future (SIMD) optimizations?
I assure you that not one C# programmer in a thousand would list "difficulty of taking advantage of SIMD optimizations" as a "huge downside" of C#'s array type inference semantics. You may in fact be the only one. It certainly would not have occurred to me. If you're the kind of person who cares that much about it, then make the type manifest in the array initializer.
C# was not designed to wring every last ounce of performance out of machines that might be invented in the future, and in particular it was not designed to do so when type inference is involved. It was designed to increase the productivity of line-of-business developers, and line-of-business developers don't think of

    columnWidths = new[] { 10, 20, 30 };

as being an array of bytes.
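If the element type does matter to you, making it manifest is a one-token change. A minimal sketch (variable names are illustrative):

    var inferred = new[] { 10, 20, 30 };       // int[]: inference sees int literals
    var manifest = new byte[] { 10, 20, 30 };  // byte[]: the creation expression says so
    byte[] declared = { 10, 20, 30 };          // byte[]: the declared type is manifest

The constant literals convert implicitly to byte because each fits in the range 0 to 255; a literal such as 300 in the last two lines would be a compile-time error.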