iPhone RGBA to ARGB
Question
I'm using glReadPixels to grab screenshots of my OpenGL scene and then turning them into a video with AVAssetWriter on iOS 4. My problem is that I need to pass the alpha channel through to the video, which only accepts kCVPixelFormatType_32ARGB, while glReadPixels retrieves RGBA. So basically I need a way to convert RGBA to ARGB, in other words, to put the alpha bytes first.
int depth = 4;
unsigned char buffer[width * height * depth];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// Wrap the raw RGBA bytes in a CGImage
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, width * height * depth, NULL);
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGImageRef image = CGImageCreate(width, height, 8, 32, width * depth, CGColorSpaceCreateDeviceRGB(), bitmapInfo, ref, NULL, true, kCGRenderingIntentDefault);

UIWindow *parentWindow = [self window];

// Create an ARGB pixel buffer and draw the image into it
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess);
NSParameterAssert(pxbuffer != NULL);

CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);

CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, depth * width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, parentWindow.transform);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer; // chuck pixel buffer into AVAssetWriter
Thought I'd post the whole code, since it may help someone else.
Cheers
Solution
Note: I'm assuming 8 bits per channel here. If that's not the case, adjust accordingly.
To move the alpha bits to the front, you need to perform a rotation. This is usually expressed most easily with bit shifting. In this case, you want to shift the RGB bits right by 8 bits and the A bits left by 24 bits, then put the two values back together with a bitwise OR, so it becomes argb = (rgba >> 8) | (rgba << 24).
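As a minimal C sketch of that rotation (assuming each pixel has been loaded as a single 32-bit value with R in the most significant byte; note that on the iPhone's little-endian ARM, the bytes glReadPixels writes to memory load in reversed order, so the shift directions mirror):

#include <stdint.h>

// Rotate one packed RGBA pixel (0xRRGGBBAA) into ARGB (0xAARRGGBB).
static inline uint32_t rgbaToARGB(uint32_t rgba)
{
    return (rgba >> 8) | (rgba << 24);
}

// On a little-endian CPU, the same bytes load as 0xAABBGGRR and the
// desired layout loads as 0xBBGGRRAA, so the rotation flips direction:
static inline uint32_t rgbaToARGBLittleEndian(uint32_t rgba)
{
    return (rgba << 8) | (rgba >> 24);
}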
Additional tips
Even better, don't encode your video using ARGB; send your AVAssetWriter BGRA frames instead. As I describe in this answer, doing so lets you encode 640x480 video at 30 FPS on an iPhone 4, and up to 20 FPS for 720p video. An iPhone 4S can go all the way up to 1080p video at 30 FPS with this.
Also, you'll want to make sure you use a pixel buffer pool instead of recreating a pixel buffer each time. Copying the code from that answer, you'd configure the AVAssetWriter like this:
NSError *error = nil;
assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}

NSMutableDictionary *outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.width] forKey:AVVideoWidthKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.height] forKey:AVVideoHeightKey];

assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                       [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
[assetWriter addInput:assetWriterVideoInput];
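One detail the snippet takes as given: the writer session has to be started before any frames are appended, along the lines of the following (kCMTimeZero assumes your first frame's presentation time starts at zero):

[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];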
then use this code to grab each rendered frame using glReadPixels():
CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
    // NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}

CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
CVPixelBufferRelease(pixel_buffer);
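Regarding the comment in the code above about duplicate timestamps: a hypothetical guard (previousFrameTime is an assumed ivar, initialized to kCMTimeInvalid) placed just before appendPixelBuffer: could look like this:

// Skip frames whose presentation time matches the previous one, rather
// than letting the writer abort the recording.
if (CMTIME_IS_VALID(previousFrameTime) && (CMTimeCompare(currentTime, previousFrameTime) == 0))
{
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
    return;
}
previousFrameTime = currentTime;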
When using glReadPixels(), you need to swizzle the colors of your frame, so I've employed an offscreen FBO and a fragment shader with the following code to do this:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;

void main()
{
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;
}
However, on iOS 5.0 there is an even faster route for grabbing screen content than glReadPixels(), which I describe in this answer. The nice thing about that process is that the textures already store their content in BGRA pixel format, so you can feed the encapsulating pixel buffers straight into an AVAssetWriter without any color conversion and still see great encoding speeds.
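For reference, a rough sketch of that iOS 5.0 texture-cache route (the variable names here are placeholders, `context` is assumed to be your EAGLContext, and error handling is omitted):

#import <CoreVideo/CoreVideo.h>

// Create a texture cache tied to the OpenGL ES context (once, at setup).
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

// The pixel buffer must be IOSurface-backed for the cache to accept it.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                           &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                                         &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, (size_t)videoSize.width, (size_t)videoSize.height,
                    kCVPixelFormatType_32BGRA, attrs, &renderTarget);

// Wrap the pixel buffer in an OpenGL ES texture and attach it to the FBO;
// after rendering, renderTarget already holds BGRA frames for the writer.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)videoSize.width, (GLsizei)videoSize.height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);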
I realize this question has already been answered, but I wanted to make sure people know about vImage, part of the Accelerate framework, which is available on both iOS and OS X. My understanding is that Core Graphics uses vImage to do CPU-bound vector operations on bitmaps.
The specific API you want for converting ARGB to RGBA is vImagePermuteChannels_ARGB8888. There are also APIs for converting RGB to ARGB/XRGB, flipping images, overwriting channels, and much more. It's a bit of a hidden gem!
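A minimal sketch of that call, going in the RGBA-to-ARGB direction asked about here (the rgba and argb buffer names are placeholders for tightly packed malloc'd buffers):

#import <Accelerate/Accelerate.h>

// vImage_Buffer fields are: data, height, width, rowBytes.
vImage_Buffer src  = { rgba, height, width, width * 4 };
vImage_Buffer dest = { argb, height, width, width * 4 };

// Destination channel i takes source channel permuteMap[i]:
// A <- src 3, R <- src 0, G <- src 1, B <- src 2
const uint8_t permuteMap[4] = { 3, 0, 1, 2 };
vImage_Error err = vImagePermuteChannels_ARGB8888(&src, &dest, permuteMap, kvImageNoFlags);
if (err != kvImageNoError)
{
    NSLog(@"vImagePermuteChannels_ARGB8888 failed: %ld", (long)err);
}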
Update: Brad Larson wrote a great answer to essentially the same question here.
Yep, it's 8 bits per channel, and I tried something like:
int depth = 4;
int width = 320;
int height = 480;

unsigned char buffer[width * height * depth];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        buffer[i*j] = (buffer[i*j] >> 8) | (buffer[i*j] << 24);
    }
}
I can't seem to get it working, though.
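The snippet above has two problems: i*j isn't a valid pixel index (it should step through the buffer in 4-byte strides), and the shifts act on a single unsigned char, which can't hold all four channels. A byte-wise rewrite that also sidesteps endianness entirely might look like:

#include <stdint.h>
#include <stddef.h>

// Convert a tightly packed RGBA8888 buffer to ARGB8888 in place.
static void rgbaToARGBInPlace(uint8_t *buffer, size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; i++)
    {
        uint8_t *p = buffer + i * 4; // one 4-byte pixel
        uint8_t alpha = p[3];        // save A
        p[3] = p[2];                 // B slides to the last byte
        p[2] = p[1];                 // G slides over
        p[1] = p[0];                 // R slides over
        p[0] = alpha;                // A moves to the front
    }
}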
I'm pretty sure the alpha values can be ignored, so you can just do a memcpy with the pixel buffer array shifted by one byte:
void *buffer = malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

…

// Shift everything over one byte: pxdata[0] (the first pixel's alpha) is
// left untouched, and the last pixel loses its final byte.
memcpy(pxdata + 1, buffer, width * height * 4 - 1);
Here's a method that redraws an RGBA UIImage through Core Graphics and reorders the channels into ARGB by hand:

+ (UIImage *) createARGBImageFromRGBAImage: (UIImage *)image {
    CGSize dimensions = [image size];

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * dimensions.width;
    NSUInteger bitsPerComponent = 8;

    unsigned char *rgba = malloc(bytesPerPixel * dimensions.width * dimensions.height);
    unsigned char *argb = malloc(bytesPerPixel * dimensions.width * dimensions.height);

    CGColorSpaceRef colorSpace = NULL;
    CGContextRef context = NULL;

    // Render the source image into a raw RGBA buffer
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgba, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGContextDrawImage(context, CGRectMake(0, 0, dimensions.width, dimensions.height), [image CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Reorder each pixel from RGBA to ARGB
    for (int x = 0; x < dimensions.width; x++) {
        for (int y = 0; y < dimensions.height; y++) {
            NSUInteger offset = ((dimensions.width * y) + x) * bytesPerPixel;
            argb[offset + 0] = rgba[offset + 3];
            argb[offset + 1] = rgba[offset + 0];
            argb[offset + 2] = rgba[offset + 1];
            argb[offset + 3] = rgba[offset + 2];
        }
    }

    // Build a new UIImage from the ARGB buffer
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(argb, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    image = [UIImage imageWithCGImage: imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    free(rgba);
    free(argb);

    return image;
}
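A hypothetical call site for the helper above (the class name ImageUtils and the image asset are placeholders):

UIImage *rgbaImage = [UIImage imageNamed:@"screenshot.png"]; // assumed RGBA source image
UIImage *argbImage = [ImageUtils createARGBImageFromRGBAImage:rgbaImage];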