If I recall correctly, IQ 15.x has known bugs where packetsize is effectively ignored for insert...location...select and the default 512 is always used.
The insert...location...select is typically a bulk TDS operation; however, we found it to be of limited value when working with gigabytes of data, and instead built an extract/LOAD TABLE process that is significantly faster.
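As a rough sketch of what such an extract/LOAD TABLE approach can look like (table names, file path, and delimiter here are hypothetical, and exact options vary by IQ version and configuration):

```sql
-- On the source server: bulk-export to a delimited flat file using
-- IQ's temporary extract options (server-side, much faster than client fetch).
set temporary option temp_extract_name1 = '/tmp/mytable.dat';
set temporary option temp_extract_column_delimiter = '|';
select col1, col2 from remotetablename;
-- Turn extraction back off so later selects behave normally.
set temporary option temp_extract_name1 = '';

-- On the target server: bulk-load the flat file.
load table localtablename (col1 '|', col2 '\x0a')
from '/tmp/mytable.dat'
escapes off quotes off;
```

The file has to be visible to the target server (shared filesystem or copied across), which is the main operational cost of this approach.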
I know it's not the answer you want, but performance appears to degrade as the data size grows; some tables, if they are large enough, will simply never finish.
Just a thought: you might want to specify the exact columns and wrap the statement in an execute() with dynamic SQL. Dynamic SQL is usually a no-no, but if you need the proc to run in dev/qa and prod environments, there really isn't another option. I'm assuming this will be called in a controlled environment anyway, but here's what I mean:
declare @cmd varchar(2500), @location varchar(255)
set @location = 'SOMEDEVHOST.database_name'
-- note the location keyword before the quoted server name; without it the
-- generated statement is a syntax error
set @cmd = 'insert into localtablename (col1, col2, coln...) location ' +
'''' + trim(@location) + '''' +
' { select col1, col2, coln... from remotetablename }'
select @cmd    -- echo the generated statement for review before running it
execute(@cmd)
go
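To make the dev/qa/prod switch explicit, the same idea can be wrapped in a stored procedure that takes the location as a parameter (procedure and table names here are hypothetical):

```sql
-- Hypothetical wrapper: the remote server/database becomes a parameter,
-- so one procedure body works unchanged across environments.
create procedure copy_remote_table (@location varchar(255))
as
begin
    declare @cmd varchar(2500)
    set @cmd = 'insert into localtablename (col1, col2) location ' +
               '''' + trim(@location) + '''' +
               ' { select col1, col2 from remotetablename }'
    execute(@cmd)
end
go

-- Dev/qa call:
execute copy_remote_table 'SOMEDEVHOST.database_name'
-- Prod call would pass the production server.database string instead.
```

Since the location is just a string at that point, keep it out of end-user reach (config table or deployment script) rather than passing arbitrary input into the dynamic statement.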