This may be a lexer issue. A user-defined literal is tokenized as a single chunk: the number plus its suffix is one whole token. In the case of numeric literals, the set of characters the lexer is allowed to slurp up includes the decimal point '.'. Look up pp-number ([lex.ppnumber], section 2.10 in the latest draft of the standard) and walk through how the preprocessor (lexer) would scan the token:
30_au.to_light_years()

3 0              digit, digit
_ a u            identifier-nondigit x 3
.                .
to_light_years   identifier-nondigit x 14
(                breaks the spell
So the preprocessor sees 30_au.to_light_years as one big freaky (floating-point) number. Then later, during the number-parsing phase, we see digit, digit, identifier-nondigit...
At that point, the remainder from the '_' onwards is handed off as a suffix identifier. Here that would be _au.to_light_years, which names no literal operator, hence the error.
Remember that number literals are interpreted only after the preprocessor has tokenized the source.
I think this is actually not a defect.