if (UIInterfaceOrientationIsPortrait(self.interfaceOrientation)) {
    NSLog(@"Device is in portrait");
} else if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
    NSLog(@"Device is in landscape");
}
NSLog(@"View bounds are %@", NSStringFromCGRect(self.view.bounds));
I call this code from viewDidLoad and then again from a swipe gesture recognizer's handler. With the device in landscape, the output from viewDidLoad is:
2013-01-04 20:44:48.925 293 Calendar[2638:907] Device is in landscape
2013-01-04 20:44:48.926 293 Calendar[2638:907] View bounds are {{0, 0}, {748, 1024}}
When called from the swipe gesture recognizer the output is:
2013-01-04 20:44:58.002 293 Calendar[2638:907] Device is in landscape
2013-01-04 20:44:58.004 293 Calendar[2638:907] View bounds are {{0, 0}, {1024, 748}}
If the device is in landscape both times, why are the width and height swapped? Should I be reading the view's dimensions somewhere other than viewDidLoad? If so, in which method?
I tried viewWillAppear, and its output was the same as viewDidLoad's.
I tried viewDidAppear, and its output matched the gesture recognizer's. So I assume viewDidAppear is what I should use instead of viewDidLoad to get the correct view dimensions. Is that right?
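For reference, this is roughly where I'd move the bounds-dependent logic if reading the geometry after layout is the answer. I'm assuming viewDidLayoutSubviews runs after the view has its final (rotated) size — that method choice is my guess, not something I've confirmed solves this:

    // Sketch (untested): read the bounds after layout, on the assumption
    // that viewDidLayoutSubviews fires once the final geometry is in place.
    - (void)viewDidLayoutSubviews {
        [super viewDidLayoutSubviews];
        NSLog(@"Post-layout bounds are %@", NSStringFromCGRect(self.view.bounds));
    }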