Go ahead with option two; it is generally considered a best practice.
The reason is that a compiler usually works in several passes (also called stages or phases): parsing is the first, type resolution another, and you can later add an optimization pass, a code-generation pass, and so on. Usually a single data structure, the abstract syntax tree (AST, or parse tree), is maintained along these passes.
The idea that “passing around objects with null fields invites NullReferenceExceptions” is an unfounded worry. You have to handle invalid cases and introduce counter-measures to validate inputs and outputs anyway. Compilers, including simple expression processors, are complex programs driven by complicated rules; they involve a degree of data-structure and application-logic complexity that you simply cannot avoid.
It is perfectly normal for an AST to carry uninitialized data. After the parser's initial construction of the AST, each compilation pass manipulates the tree and computes additional information (as your type-resolution phase does). The AST may even change substantially, e.g. during an optimization pass.
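To make this concrete, here is a minimal sketch (in Java, with hypothetical class names) of the mutable approach: the parser leaves the `type` field null, and a later type-resolution pass walks the tree, validates it, and fills the field in.

```java
// Sketch of a mutable AST whose `type` field starts out null
// and is filled in by a later type-resolution pass.
public class AstSketch {
    // Base node: the parser leaves `type` null; the resolver fills it in.
    static abstract class Expr {
        String type; // null until the type-resolution pass runs
    }

    static class IntLit extends Expr {
        final int value;
        IntLit(int value) { this.value = value; }
    }

    static class Add extends Expr {
        final Expr left, right;
        Add(Expr left, Expr right) { this.left = left; this.right = right; }
    }

    // Type-resolution pass: walks the tree and annotates each node.
    static void resolveTypes(Expr e) {
        if (e instanceof IntLit) {
            e.type = "int";
        } else if (e instanceof Add) {
            Add a = (Add) e;
            resolveTypes(a.left);
            resolveTypes(a.right);
            // Validate sub-results instead of hoping fields are non-null:
            if (!"int".equals(a.left.type) || !"int".equals(a.right.type)) {
                throw new IllegalStateException("type error in +");
            }
            a.type = "int";
        }
    }

    public static void main(String[] args) {
        Expr tree = new Add(new IntLit(1), new IntLit(2)); // parser output
        resolveTypes(tree);                                // second pass
        System.out.println(tree.type); // prints "int"
    }
}
```

Note that the null check lives in exactly one place, the pass that consumes the field, which is where a compiler has to validate its invariants anyway.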
Side note: some modern compilers, such as the current C# compiler (Roslyn), enforce an immutability policy over the AST and other internal data structures. In that case each pass builds its own new data structure. You could likewise design a separate set of data structures for each pass, but that may turn into overly complex code to maintain. Someone from the C# compiler team could elaborate more on this topic.
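A rough sketch of that immutable style (again in Java, with invented names, not Roslyn's actual design): the resolver never mutates the parsed tree but rebuilds it, producing a parallel set of typed nodes.

```java
// Sketch of the immutable alternative: the type-resolution pass
// returns a new, fully typed tree; the parser's tree is untouched.
public class ImmutableSketch {
    // Untyped nodes produced by the parser.
    interface Expr {}
    record IntLit(int value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    // Typed nodes produced by the type-resolution pass.
    interface TypedExpr { String type(); }
    record TypedIntLit(int value, String type) implements TypedExpr {}
    record TypedAdd(TypedExpr left, TypedExpr right, String type) implements TypedExpr {}

    // Each pass builds a new structure instead of filling in null fields.
    static TypedExpr resolve(Expr e) {
        if (e instanceof IntLit lit) {
            return new TypedIntLit(lit.value(), "int");
        } else if (e instanceof Add add) {
            TypedExpr l = resolve(add.left());
            TypedExpr r = resolve(add.right());
            if (!"int".equals(l.type()) || !"int".equals(r.type())) {
                throw new IllegalStateException("type error in +");
            }
            return new TypedAdd(l, r, "int");
        }
        throw new IllegalArgumentException("unknown node");
    }

    public static void main(String[] args) {
        Expr parsed = new Add(new IntLit(1), new IntLit(2));
        TypedExpr typed = resolve(parsed); // new tree; `parsed` is unchanged
        System.out.println(typed.type()); // prints "int"
    }
}
```

The upside is that no node ever has a half-initialized field; the downside, as noted above, is a second (or third, or fourth) family of node types to maintain.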