Problem

When I started using an object-oriented language (Java), I pretty much just went "Cool" and started coding. I never really thought about it until recently, after having read lots of questions about OOP. The general impression I get is that people struggle with it. Since I haven't thought of it as hard, and I wouldn't say I'm any genius, I'm thinking that I must have missed something or misunderstood it.

Why is OOP difficult to understand? Is it difficult to understand?

Solution

I personally found the mechanics of OOP fairly easy to grasp. The hard part for me was the "why" of it. When I was first exposed to it, it seemed like a solution in search of a problem. Here are a few reasons why I think most people find it hard:

  1. IMHO teaching OO from the beginning is a terrible idea. Procedural coding is not a "bad habit" and is the right tool for some jobs. Individual methods in an OO program tend to be pretty procedural looking anyhow. Furthermore, before learning procedural programming well enough for its limitations to become visible, OO doesn't seem very useful to the student.

  2. Before you can really grasp OO, you need to know the basics of data structures and late binding/higher order functions. It's hard to grok polymorphism (which is basically passing around a pointer to data plus a bunch of functions that operate on that data) if you don't understand either structuring data instead of just using primitives, or passing around higher-order functions/pointers to functions (see the sketch after this list).

  3. Design patterns should be taught as something fundamental to OO, not something more advanced. Design patterns help you see the forest for the trees and give relatively concrete examples of where OO can simplify real problems, and you're going to want to learn them eventually anyhow. Furthermore, once you really get OO, most design patterns become obvious in hindsight.
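
A minimal Java sketch of the idea in point 2 (the names are illustrative, not from the answer): a polymorphic call amounts to passing data together with the functions that operate on it, with the right function chosen at runtime.

interface Shape {
    double area();   // which body runs is decided at runtime (late binding)
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class LateBindingDemo {
    public static void main(String[] args) {
        // The Shape reference carries both the data and the function that fits it.
        Shape s = (args.length > 0) ? new Circle(1.0) : new Square(2.0);
        System.out.println(s.area());
    }
}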

Other tips

I think there are a few factors that haven't been mentioned yet.

First of all, at least in "pure OOP" (e.g., Smalltalk) where everything is an object, you have to twist your mind into a rather unnatural configuration to think of a number (for only one example) as an intelligent object instead of just a value -- since in reality, 21 (for example) really is just a value. This becomes especially problematic when on one hand you're told that a big advantage of OOP is modeling reality more closely, but you start off by taking what looks an awful lot like an LSD-inspired view of even the most basic and obvious parts of reality.

Second, inheritance in OOP doesn't follow most people's mental models very closely either. For most people, mental classification doesn't have anywhere close to the absolute rules necessary to create a class hierarchy that works. In particular, creating a class D that inherits from another class B means that objects of class D share absolutely, positively all the characteristics of class B. Class D can add new and different characteristics of its own, but all the characteristics of class B must remain intact.
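
A minimal Java sketch of that rule, reusing the B and D names from the paragraph above; everything B promises must remain intact in D.

class B {
    int wheels() { return 4; }
    void honk()  { System.out.println("beep"); }
}

class D extends B {
    // D may add new and different characteristics of its own...
    void openSunroof() { System.out.println("sunroof open"); }
    // ...but it cannot remove wheels() or honk(): every characteristic of B
    // must still hold for any D. This is the absolute rule that loose
    // mental models of classification don't enforce.
}

class InheritanceDemo {
    public static void main(String[] args) {
        B b = new D();   // a D must be usable anywhere a B is expected
        b.honk();        // inherited characteristic, intact
    }
}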

By contrast, when people classify things mentally, they typically follow a much looser model. For one example, if a person makes some rules about what constitutes a class of objects, it's pretty typical that almost any one rule can be broken as long as enough other rules are followed. Even the few rules that can't really be broken can almost always be "stretched" a little bit anyway.

Just for example, consider "car" as a class. It's pretty easy to see that the vast majority of what most people think of as "cars" have four wheels. Most people, however, have seen (at least a picture of) a car with only three wheels. A few of us of the right age also remember a race car or two from the early '80s (or so) that had six wheels -- and so on. This leaves us with basically three choices:

  1. Don't assert anything about how many wheels a car has -- but this tends to lead to the implicit assumption that it'll always be 4, and code that's likely to break for another number.
  2. Assert that all cars have four wheels, and just classify those others as "not cars" even though we know they really are.
  3. Design the class to allow variation in the number of wheels, just in case, even though there's a good chance this capability will never be needed, used, or properly tested (this option is sketched just after this list).
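
A hedged sketch of choice 3 in Java (class and member names are hypothetical): the wheel count becomes data instead of a built-in assumption.

class Car {
    private final int wheelCount;

    Car(int wheelCount) {
        // Allow the three-wheelers and the six-wheeled race cars, at the cost
        // of carrying a capability that may never be needed, used, or tested.
        if (wheelCount < 3) {
            throw new IllegalArgumentException("a car needs at least 3 wheels");
        }
        this.wheelCount = wheelCount;
    }

    int wheelCount() { return wheelCount; }
}

class CarDemo {
    public static void main(String[] args) {
        System.out.println(new Car(4).wheelCount());   // the common case
        System.out.println(new Car(6).wheelCount());   // the early-'80s race car
    }
}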

Teaching about OOP often focuses on building huge taxonomies -- e.g., bits and pieces of what would be a giant hierarchy of all known life on earth, or something on that order. This raises two problems: first and foremost, it tends to lead many people toward focusing on huge amounts of information that's utterly irrelevant to the question at hand. At one point I saw a rather lengthy discussion of how to model breeds of dogs, and whether (for example) "miniature poodle" should inherit from "full sized poodle", or vice versa, or whether there should be an abstract base "Poodle" class, with "full-size poodle" and "miniature poodle" both inheriting from it. What they all seemed to ignore was that the application was supposed to deal with keeping track of licenses for dogs, and for the purpose at hand it was entirely adequate to have a single field named "breed" (or something on that order) with no modeling of the relationship between breeds at all.
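
A sketch of that single-field approach (all names hypothetical): for the licensing task, the breed is just text, with no modeling of the relationships between breeds at all.

class DogLicense {
    String ownerName;
    String dogName;
    String breed;    // "miniature poodle", "full-size poodle", ... plain text
    java.time.LocalDate expirationDate;
}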

Second, and almost as importantly, it leads to focusing on the characteristics of the items instead of on the characteristics that are important for the task at hand. It leads toward modeling things as they are, where (most of the time) what's really needed is building the simplest model that will fill our needs, and using abstraction so that the necessary sub-classes fit the abstraction we've built.

Finally, I'll say once again: we're slowly following the same path taken by databases over the years. Early databases followed the hierarchical model. Other than focusing exclusively on data, this is single inheritance. For a short time, a few databases followed the network model -- essentially identical to multiple inheritance (and viewed from this angle, multiple interfaces aren't different enough from multiple base classes to notice or care about).

Long ago, however, databases largely converged on the relational model (and even though they aren't SQL, at this level of abstraction the current "NoSQL" databases are relational too). The advantages of the relational model are sufficiently well known that I won't bother repeating them here. I'll just note that the closest analog of the relational model we have in programming is generic programming (and sorry, but despite the name, Java generics, for one example, don't really qualify, though they are a tiny step in the right direction).

OOP requires the ability to think abstractly -- a gift/curse that few people, even professional programmers, really have.

I think you can summarize the basic difficulty this way:

// The way most people think.
Operation - object - parameters
// Example:
Turn the car left.

// The way OOP works conceptually
Object - operation - parameters
// Example:
Car.Turn(270);

Sure, people can get used to the mapping of "left" as 270, and yeah, saying "Car.Turn" instead of "turn the car" isn't such a huge leap. BUT, to deal well with these objects and to create them, you have to invert the way you normally think.

Instead of manipulating an object, we're telling the object to actually do things on its own. It may not feel difficult any more once you're used to it, but telling a window to open itself sounds odd. People unused to this way of thinking have to struggle with that oddness over and over until it finally, somehow, becomes natural.

Any paradigm requires a certain push "over the edge" to grasp, for most people. By definition, it's a new mode of thought and so it requires a certain amount of letting go of old notions and a certain amount of fully grasping why the new notions are useful.

I think a lot of the problem is that the methods used to teach computer programming are pretty poor in general. OOP is so common now that it's not as noticeable, but you still see it often in functional programming:

  • important concepts are hidden behind odd names (FP: What's a monad? OOP: Why do they call them functions sometimes and methods other times?)

  • odd concepts are explained in metaphor instead of in terms of what they actually do, or why you'd use them, or why anyone ever thought to use them (FP: A monad is a spacesuit, it wraps up some code. OOP: An object is like a duck, it can make noise, walk and inherits from Animal)

  • the good stuff varies from person to person, so it's not quite clear what will be the tipping point for any student, and often the teacher can't even remember. (FP: Oh, monads let you hide something in the type itself and carry it on without having to explicitly write out what's happening each time. OOP: Oh, objects let you keep the functions for a kind of data with that data.)

The worst of it is that, as the question indicates, some people will immediately snap to understanding why the concept is good, and some won't. It really depends on what the tipping point is. For me, grasping that objects store data and the methods for that data was the key; after that, everything else just fit as a natural extension. Then I had later jumps, like realizing that a method call from an object is very similar to making a static call with that object as the first parameter.
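
That last jump fits in a few lines of Java (the Counter class is hypothetical): the instance call and the static call do the same work, and the receiver is just an implicit first parameter.

class Counter {
    int value;

    void add(int by) { value += by; }            // instance method

    static void add(Counter self, int by) {      // same logic, static form
        self.value += by;
    }
}

class CallDemo {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.add(5);                     // receiver passed implicitly
        Counter.add(c, 5);            // receiver passed explicitly
        System.out.println(c.value);  // prints 10
    }
}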

The little jumps later on help refine understanding, but it's the initial one that takes a person from "OOP doesn't make sense, why do people do this?" to "OOP is the best, why do people do anything else?"

Because the basic explanation of OOP has very, very little to do with how it's used in the field. Most attempts at teaching it use a physical model, such as "Think of a car as an object, and wheels as objects, and the doors, and the transmission ...", but outside of some obscure cases of simulation programming, objects are much more often used to represent non-physical concepts or to introduce indirection. The effect is that it makes people understand it intuitively in the wrong way.

Teaching from design patterns is a much better way to describe OOP, as it shows programmers how some actual modeling problems can be effectively attacked with objects, rather than describing it in the abstract.

I disagree with dsimcha's answer for the most part:

  1. Teaching OO from the beginning is not really a bad idea in itself, and neither is teaching procedural languages. What's important is that we teach people to write clear, concise, cohesive code, regardless of OO or procedural.

  2. Individual methods in good OO programs DO NOT tend to be procedural looking at all. This is becoming more and more true with the evolution of OO languages (read: C#, because other than C++ that's the only other OO language I know) and their syntax, which is getting more complex by the day (lambdas, LINQ to objects, etc.); a Java analogue is sketched after this list. The only similarity between OO methods and procedures in procedural languages is the linear nature of each, which I doubt will change anytime soon.

  3. You can't master a procedural language without understanding data structures either. The pointer concept is as important for procedural languages as for OO languages. Passing parameters by reference, for example, which is quite common in procedural languages, requires you to understand pointers just as much as any OO language does.

  4. I don't think design patterns should be taught early in OO programming at all, because they are not fundamental to OO programming. One can definitely be a good OO programmer without knowing anything about design patterns. In fact, a person can be using well-known design patterns without knowing that they are documented as such, with proper names, and that books are written about them. What should be taught as fundamental is design principles such as Single Responsibility, Open/Closed, and Interface Segregation. Unfortunately, many people who consider themselves OO programmers these days are either not familiar with these fundamental principles or just choose to ignore them, and that's why we have so much garbage OO code out there. Only after a thorough understanding of these and other principles should design patterns be introduced.
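
Point 2 mentions C# lambdas and LINQ; since the rest of this page mostly talks Java, here is a hedged Java analogue of the same point (requires Java 16+ for records and toList()): an OO method whose body is a declarative pipeline rather than a step-by-step procedure.

import java.util.List;

record User(String name, boolean active) {}

class Report {
    // Filter, transform, and sort in one expression; nothing here reads like
    // the linear "do this, then do that" shape of a classic procedure.
    static List<String> activeNames(List<User> users) {
        return users.stream()
                    .filter(User::active)
                    .map(User::name)
                    .sorted()
                    .toList();
    }
}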

To answer the original poster's question: yes, OO is a harder concept to understand than procedural programming. This is because we do not think in terms of properties and methods of real-life objects. For example, the human brain does not readily think of "TurnOn" as a method of a TV, but sees it as a function of a human turning on the TV. Similarly, polymorphism is a foreign concept to a human brain that generally sees each real-life object by only one "face". Inheritance, again, is not natural to our brains. Just because I am a developer does not mean that my son would be one. Generally speaking, the human brain needs to be trained to learn OO, while procedural languages are more natural to it.

I think many programmers have difficulty with upfront design and planning to begin with. Even if someone does all the design for you, it is still possible to break away from OOP principles. If I take a bunch of spaghetti code and dump it into a class, is that really OOP? Someone who doesn't understand OOP can still program in Java. Also, don't confuse difficulty in understanding with unwillingness to follow a certain methodology or disagreement with it.

You should read Objects Never? Well, Hardly Ever. (ACM membership required) by Mordechai Ben-Ari, who suggests that OOP is so difficult because it's not a paradigm that's actually natural for modeling anything. (Though I have reservations about the article, because it's not clear what criteria he feels a program needs to satisfy to count as written in the OOP paradigm, as opposed to a procedural paradigm using an OO language.)

Object Oriented Programming in itself is not hard.

The hard part comes in doing it well. Where do you put the cut between pieces of code so you can easily move things into a common base object and extend them later? How do you make your code usable by others (extending classes, wrapping in proxies, overriding methods) without making them jump through hoops to do so?

That is the hard part: done right it can be very elegant, and done badly it can be very clumsy. My personal experience is that it takes a lot of practice to have been in all the situations where you would WISH you had done it differently, in order to do it well enough this time.

I was just watching a video by Richard Feynman discussing how people may actually have completely different methodologies going on in their head when thinking--I mean completely different.

When I do high-level design I happen to visualize objects, I can see them, see their interfaces and see what pathways information needs to traverse.

I also have trouble remembering details and found OO to be a great organizational aid--much easier to find functionality than scanning through a loosely-organized list of subroutines.

For me OO was a great benefit, but if you don't visualize the same way or don't do high-level architecture, it's probably pointless and annoying.

I'd done GW-Basic and Turbo Pascal programming a fair bit before being introduced to OO, so initially it DID do my head in.

No idea if this is what happens to others, but to me it was like this: my thought process about programming was purely procedural. As in: "such and such happens, then such and such happens next", etc. I never considered the variables and data to be anything more than fleeting actors in the flow of the program. Programming was "the flow of actions".

I suppose what wasn't easy to grasp (as stupid as that looks to me now), was the idea that the data/variables actually truly matter, in a deeper sense than just being fleeting actors in program "flow". Or to put this another way: I kept trying to understand it via what happens, rather than via what is, which is the real key to grasping it.

I don't think it is difficult to understand, but it may be that a lot of the programmers asking about it are new to the concept, coming from procedural languages.

From what I have seen/read, lots of people (in forums at least) look for a 'result' from OOP. If you are a procedural programmer who doesn't go back to modify and extend your own code, it can probably be hard to understand the benefits.

Also, there is a lot of bad OOP out there, if people are reading/seeing that then it is easy to see why they might find it difficult.

IMO you need to wait until it 'clicks' or be taught by someone with real knowledge; I don't think you can rush it.

I think the reason OOP is difficult for many is because the tools don't really facilitate it.

Computer languages today are an abstraction of what is going on in the computer.

OOP is an abstracted way to represent abstractions.

So we are using an abstraction to build abstractions with an abstraction. Add to this that what we are abstracting are usually very complex physical/social interactions and, well, no wonder.

I actually have a blog called "Struggles in Object Oriented Programming," which was born out of some of my struggles with learning it. I think it was particularly difficult for me to understand because I spent so much time using procedural programming, and I had a tough time getting my head around the idea that an object could be represented by a collection of attributes and behaviors (I was used to simply a collection of variables and methods).

Also, there are a lot of concepts that make a language object oriented - inheritance, interfaces, polymorphism, composition, etc. There really is a lot to learn about the theory of it before you can actually write code effectively, and in an object-oriented way, whereas with procedural programming it's simply a matter of understanding things like memory allocation for variables and entry-point calls to other methods.

Motivation. It's harder to learn something when you don't see why, and also when you can't look at what you did and figure out whether you did it right or not.

What's needed is small projects that use OO to do useful things. I'd suggest looking through a book on design patterns and coming up with one that is obviously useful and works well with OO. (I used Strategy the one time I tried it; a sketch follows. Something like Flyweight or Singleton would be a bad choice, since they're ways of using objects in general, not ways of using objects to accomplish something.)
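
For instance, here is a minimal Strategy sketch in Java (all names hypothetical) of the kind of small, obviously useful exercise meant above:

interface DiscountStrategy {
    double apply(double price);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class HolidayDiscount implements DiscountStrategy {
    public double apply(double price) { return price * 0.75; }   // 25% off
}

class Checkout {
    private final DiscountStrategy discount;
    Checkout(DiscountStrategy discount) { this.discount = discount; }
    double total(double price) { return discount.apply(price); }
}

class StrategyDemo {
    public static void main(String[] args) {
        // Swapping behavior is one constructor argument; the visible "result"
        // is that Checkout never changes when a new discount rule appears.
        System.out.println(new Checkout(new NoDiscount()).total(100.0));       // 100.0
        System.out.println(new Checkout(new HolidayDiscount()).total(100.0));  // 75.0
    }
}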

I think it depends on age (age as a proxy for experience) and, more importantly, interest. If you're "young" (i.e., green, perhaps) and you've never thought any other way, it seems quite straightforward. Likewise, if you think it's the coolest thing you've ever seen -- it happened to me at age 28 or something -- it's easy to grok.

On the other hand, if you think, as many of my Java students did, "why are we learning this, it's just a fad," it's practically impossible to learn. This is true with most technologies.

Regardless of which paradigm (OOP, functional, etc.) you choose, in order to write a computer program you need to know what steps your program will take.

The natural way of defining a process is writing down its steps; for larger tasks, you break the task down into smaller steps. This is the procedural way; this is how the computer works; this is how you go through your checklist step by step.

OOP is a different way of thinking. Instead of thinking of a checklist of tasks that need to be done step by step, you think of objects, their abilities, and their relationships. So you write a lot of objects and small methods, and your program magically works. To achieve this, you need to twist your mind...

And this is why OOP is difficult. Since everything is an object, all they do is ask other objects to do something, and those other objects basically do the same. So the control flow in an OOP program can jump wildly back and forth between the objects.

As someone who is currently learning programming and having some issues in this area, I don't think it is so much that the concept is difficult to understand as that the specific implementations of said concept are. I say this because I get the idea of OOP, and I've used it in PHP for about a year, but as I move on to C# and look at other programmers' usage of objects, I find that many people do so in ways that I just don't understand. It is this specifically that has led me down the road to finding a better understanding of the principles of OOP.

Of course, I realize that the issue is most likely my lack of experience with a natively-OOP language, and that as time goes by I will find new ways to utilize objects that will be just as unclear to a new programmer as what I am currently experiencing. Jerry Coffin touches on this a few times, particularly in his comment:

This becomes especially problematic when on one hand you're told that a big advantage of OOP is modeling reality more closely, but you start off by taking what looks an awful lot like an LSD-inspired view of even the most basic and obvious parts of reality.

I find this to be very accurate, as it's the impression I often get when seeing someone create classes for things that aren't really things - a specific example escapes me, but the closest I can come up with on the fly is treating distance like an object (I will edit the next time I see something that causes this same confusion). At times, OOP seems to temporarily disregard its own rules and becomes less intuitive. This more often than not occurs when objects produce objects, inherit from a class that encapsulates them, et cetera.

I think for someone like me, it helps to think of the concept of objects as having multiple facets, one of which includes treating something like an object when it otherwise wouldn't be. Something like distance, with just a little paradigm shift, could come across as a theoretical object, but not one that could be held in your hand. I have to think of it as having a set of properties but a more abstract set of behaviors, such as accessing its properties. I'm not positive that this is the key to my understanding, but it seems to be where my current studies are leading.
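
One way to picture that paradigm shift is a sketch like the following (a hypothetical design, not anyone's canonical Distance class): a "theoretical object" with plain properties and a small, abstract set of behaviors.

class Distance {
    private final double meters;   // the property you can't hold in your hand

    Distance(double meters) { this.meters = meters; }

    double inMeters()     { return meters; }
    double inKilometers() { return meters / 1000.0; }

    // The "behaviors" are abstract too: combining and comparing values.
    Distance plus(Distance other)       { return new Distance(meters + other.meters); }
    boolean shorterThan(Distance other) { return meters < other.meters; }
}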

Terminologies were my bump in the road when learning the principles of object oriented programming (POOP). It's when you get a grasp of the fundamentals that the pieces start to fall into place. As with all things, learning new concepts is a little hard.

Agreed that design patterns should be taught at least in parallel with OOP.

The main jump for me was just understanding the abstract concept of OOP. Now, I'm very new to programming in general; I've been programming for a year to a year and a half, so my introduction to OOP was with ActionScript and Processing. When I first learned ActionScript coding, it wasn't in OOP. I learned to code directly in the Actions panel, and that is how I learned the basic fundamentals of programming (variables, functions, loops, etc.). So I learned it as doing something directly to the stage in Flash or Processing.

When OOP came into things, realizing that I could create methods and properties within an object, to use and reuse, was a little difficult for me to grasp. Everything was very abstract and difficult to process, but it made the programming languages themselves make a lot more sense; it just took a bit of a leap of faith to make those connections at first.

Recipe

Good OOP understanding = Good Mentor Or Good Books Or Both + Personal Interest + Practice.

Personal Interest

From my personal experience, personal interest goes a long way toward crossing the bridge from procedural programming to OOP, with the right inputs from mentors, good books, or both combined.

Practice, Practice and Practice

My best friend in getting a better understanding of OOP has been nothing but practice. This will definitely foster your OOP abilities.

As the saying goes, "There is no substitute for hard work and no shortcut to success."

Good luck!

License: CC-BY-SA with attribution
Not affiliated with softwareengineering.stackexchange