I'm guessing you want to reduce the memory footprint during the load. Right now you read the entire file into an array and then copy everything into a dictionary, so until the original array goes out of scope and is garbage collected you're holding roughly twice the data in memory. For a very large file that could matter; if it's only a few megabytes, it's not a big deal.
If you want to do this more efficiently you can read the data from a stream like so:
string fileName = @"C:\...";
var dict = new Dictionary<string, int[]>();

using (var fs = new FileStream(fileName, FileMode.Open))
using (var reader = new StreamReader(fs))
{
    string line;
    // Read one line at a time so only the current line is held in memory
    while ((line = reader.ReadLine()) != null)
    {
        var values = line.Split(',');
        dict.Add(values[0], values.Skip(1).Select(x => Convert.ToInt32(x)).ToArray());
    }
}
Or you can use the shortcut Jim suggested:
string fileName = @"C:\...";
var dict = new Dictionary<string, int[]>();

// File.ReadLines enumerates lazily, unlike File.ReadAllLines
foreach (string line in File.ReadLines(fileName))
{
    var values = line.Split(',');
    dict.Add(values[0], values.Skip(1).Select(x => Convert.ToInt32(x)).ToArray());
}
This makes some strict assumptions about the file format: each line must be in the form `key,int1,int2,int3,int4,...`, the key must not contain a comma, and everything after the key must parse as an integer (`Convert.ToInt32` throws on anything else). Note that `ReadLine` and `File.ReadLines` recognize the standard line terminators (`\r`, `\n`, or `\r\n`) automatically, so the last line does not need a trailing newline.
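If you can't fully trust the input, a small helper can validate a line instead of letting `Convert.ToInt32` throw. This is just a sketch of that idea (the `TryParseLine` name and its shape are my own, not from your code), using `int.TryParse` to reject malformed lines:

```csharp
using System;

static class LineParser
{
    // Parses one "key,int1,int2,..." line; returns false for malformed
    // input (too few fields, or a non-integer value) rather than throwing.
    public static bool TryParseLine(string line, out string key, out int[] values)
    {
        key = null;
        values = null;

        var parts = line.Split(',');
        if (parts.Length < 2)
            return false;

        var numbers = new int[parts.Length - 1];
        for (int i = 1; i < parts.Length; i++)
        {
            if (!int.TryParse(parts[i], out numbers[i - 1]))
                return false;
        }

        key = parts[0];
        values = numbers;
        return true;
    }
}
```

In the read loop you'd then call `TryParseLine` and decide whether to skip or log bad lines instead of aborting the whole load.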
It's also worth noting that while your current code isn't terribly efficient, it's probably not your main bottleneck: file read speed usually is. If you're actually seeing performance problems, the most likely cause is that you're reading the file synchronously. In an application with a user interface, file I/O should be done asynchronously so it doesn't block the UI thread.
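As a rough sketch of the asynchronous version (the `LoadAsync` name is mine), the same loop works with `StreamReader.ReadLineAsync`, which you would `await` from an async event handler so the UI thread stays responsive:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

static class DictionaryLoader
{
    // Reads the file line by line without blocking the calling thread.
    public static async Task<Dictionary<string, int[]>> LoadAsync(string fileName)
    {
        var dict = new Dictionary<string, int[]>();
        using (var reader = new StreamReader(fileName))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                var values = line.Split(',');
                dict.Add(values[0], values.Skip(1).Select(int.Parse).ToArray());
            }
        }
        return dict;
    }
}
```

From a WinForms or WPF handler you'd call it as `var dict = await DictionaryLoader.LoadAsync(fileName);` and the UI keeps pumping messages while the file is read.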