Question

I am trying to implement a remap filter using GPUImage. It resembles the OpenCV `remap` function, which takes an input image, an x map, and a y map. So I subclassed GPUImageThreeInputFilter and wrote my own shader code. When the input to the filter is a still image, I get the correct output image. The code is as follows:

    GPUImageRemap *remapFilter=[[GPUImageRemap alloc] init];
    [remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
    UIImage *inputImage = [UIImage imageNamed:@"test.jpg"];
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
    [stillImageSource addTarget:remapFilter atTextureLocation:0];
    GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
    [stillImageSource1 processImage];
    [stillImageSource1 addTarget:remapFilter atTextureLocation:1];
    GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
    [stillImageSource2 processImage];
    [stillImageSource2 addTarget:remapFilter atTextureLocation:2];
    [stillImageSource processImage];
    UIImage *filteredImage=[remapFilter imageFromCurrentlyProcessedOutput];

However, when the input is switched to camera input, I get a wrong output image. I did some debugging and found that the x map and y map are not loaded into the 2nd and 3rd textures: the pixel values of those two textures are all 0.

    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh cameraPosition:AVCaptureDevicePositionFront];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    GPUImageRemap *remapFilter=[[GPUImageRemap alloc] init];
    [remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
    [videoCamera addTarget:remapFilter atTextureLocation:0];
    GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
    [stillImageSource1 processImage];
    [stillImageSource1 addTarget:remapFilter atTextureLocation:1];
    GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
    [stillImageSource2 processImage];
    [stillImageSource2 addTarget:remapFilter atTextureLocation:2];

    GPUImageView *camView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    [remapFilter addTarget:camView];
    [videoCamera startCameraCapture];

The header file:

    #import <GPUImage.h>
    #import <GPUImageThreeInputFilter.h>

    @interface GPUImageRemap : GPUImageThreeInputFilter

    @end

The main file:

    #import "GPUImageRemap.h"

    NSString *const kGPUImageRemapFragmentShaderString = SHADER_STRING
    (
     varying highp vec2 textureCoordinate;
     varying highp vec2 textureCoordinate2;
     varying highp vec2 textureCoordinate3;

     uniform sampler2D inputImageTexture;
     uniform sampler2D inputImageTexture2;
     uniform sampler2D inputImageTexture3;

     /*
      The x and y maps originally store floating-point numbers in
      [0, imageWidth] and [0, imageHeight]. They are divided by
      imageWidth - 1 and imageHeight - 1 to lie in [0, 1], then
      converted to integers by multiplying by 1000000, and each
      integer is packed into the 4 bytes of an RGBA pixel. Each
      unsigned RGBA byte arrives in the fragment shader normalized
      to [0, 1], so the shader performs the inverse to recover the
      original x, y coordinates.
      */
     void main()
     {
         highp vec4 xAry0_1 = texture2D(inputImageTexture2, textureCoordinate2);
         highp vec4 xAry0_255 = floor(xAry0_1 * vec4(255.0) + vec4(0.5));
         // The largest integer we may see will not exceed 2000000,
         // so 3 bytes are enough to carry our integer values.
         highp float xint = xAry0_255.b * exp2(16.0) + xAry0_255.g * exp2(8.0) + xAry0_255.r;
         highp float x = xint / 1000000.0;

         highp vec4 yAry0_1 = texture2D(inputImageTexture3, textureCoordinate3);
         highp vec4 yAry0_255 = floor(yAry0_1 * vec4(255.0) + vec4(0.5));
         highp float yint = yAry0_255.b * exp2(16.0) + yAry0_255.g * exp2(8.0) + yAry0_255.r;
         highp float y = yint / 1000000.0;

         if (x < 0.0 || x > 1.0 || y < 0.0 || y > 1.0)
         {
             gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
         }
         else
         {
             highp vec2 imgTexCoord = vec2(y, x);
             gl_FragColor = texture2D(inputImageTexture, imgTexCoord);
         }
     }
    );

    @implementation GPUImageRemap

    - (id)init
    {
        if (!(self = [super initWithFragmentShaderFromString:kGPUImageRemapFragmentShaderString]))
        {
            return nil;
        }
        return self;
    }

    @end

Solution

I found the answer myself. GPUImagePicture can't be declared as a local variable: otherwise it is released by ARC as soon as the method returns, before its texture is ever uploaded to the GPU. That's why the map textures read as all 0. All the GPUImagePicture objects need to be kept alive for the lifetime of the filter chain, e.g. as instance variables or globals.
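In practice that means giving the map sources (and the camera and filter) object-level lifetime instead of local scope, for example as strong properties of the view controller (a sketch; the class and property names are illustrative, not from the original code):

    @interface CameraViewController ()
    // Strong references keep the sources alive for the lifetime of the
    // chain; as locals they would be deallocated before texture upload.
    @property (strong, nonatomic) GPUImageVideoCamera *videoCamera;
    @property (strong, nonatomic) GPUImageRemap *remapFilter;
    @property (strong, nonatomic) GPUImagePicture *xmapSource;
    @property (strong, nonatomic) GPUImagePicture *ymapSource;
    @end

Then assign `stillImageSource1` and `stillImageSource2` to `self.xmapSource` and `self.ymapSource` instead of leaving them as locals.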

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow