Edit: Thanks to Brad's comment, a more efficient solution that avoids swapping channels afterwards is to replace "gl_FragColor = vec4(vec3(mag), 1.0);" with "gl_FragColor = vec4(mag);" in the edge detection filter's fragment shader source.
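For context, the change applies to the last line of the Sobel filter's fragment shader, where the computed edge magnitude is written out. A sketch of the relevant lines (assuming "mag" holds the edge magnitude, as in the GPUImage shader):

```glsl
// Before: edge strength in RGB only, alpha fixed at 1.0
gl_FragColor = vec4(vec3(mag), 1.0);

// After: edge strength written to all four channels, including alpha
gl_FragColor = vec4(mag);
```

With the magnitude already in the alpha channel, no separate channel-swapping pass is needed.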
Alternatively, you can play with the parameters of GPUImageColorMatrixFilter to map channels from one to another. Sample code:
GPUImagePicture *gpuImage = [[GPUImagePicture alloc] initWithImage:image];
GPUImageSobelEdgeDetectionFilter *edgeFilter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
GPUImageColorMatrixFilter *conversionFilter = [[GPUImageColorMatrixFilter alloc] init];
conversionFilter.colorMatrix = (GPUMatrix4x4){
    {0.0, 0.0, 0.0, 1.0},
    {0.0, 0.0, 0.0, 1.0},
    {0.0, 0.0, 0.0, 1.0},
    {1.0, 0.0, 0.0, 0.0},
};
[gpuImage addTarget:edgeFilter];
[edgeFilter addTarget:conversionFilter];
[gpuImage processImage];
return [conversionFilter imageFromCurrentlyProcessedOutputWithOrientation:orientation];
The values specified in colorMatrix say: replace the RGB channels with the previous alpha channel (255 everywhere for a non-transparent image), and replace the alpha channel with the previous R channel (for a black-and-white image, R, G, and B all hold the same value).
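As an illustration of the matrix math only (not GPUImage code), here is the channel mapping this matrix performs on one normalized RGBA pixel, sketched in Python with NumPy and assuming row-major, output = M · input semantics:

```python
import numpy as np

# The color matrix above: each row selects the input channels that
# feed one output channel (rows produce R, G, B, A in order).
color_matrix = np.array([
    [0.0, 0.0, 0.0, 1.0],  # output R = input A
    [0.0, 0.0, 0.0, 1.0],  # output G = input A
    [0.0, 0.0, 0.0, 1.0],  # output B = input A
    [1.0, 0.0, 0.0, 0.0],  # output A = input R
])

# A hypothetical edge-detected pixel: grayscale value 0.8, fully opaque.
pixel = np.array([0.8, 0.8, 0.8, 1.0])  # (R, G, B, A)

result = color_matrix @ pixel
print(result)  # [1.  1.  1.  0.8] -- RGB now white, edge strength in alpha
```

The edge strength ends up in the alpha channel, while the RGB channels become fully white, which is what makes the edges usable as a transparency mask.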