Problem

In a Software Engineering class, we had an assignment to read Parnas' seminal paper on modularization [0]. In this paper, two approaches to dividing software into modules are discussed:

  1. Traditional Approach: A flowchart is drawn to work out the individual processing steps and the program's high-level flow. Then every processing step is turned into a module. This approach doesn't yield very good results.
  2. New Approach: Every design decision is turned into a module by means of information hiding. This approach leads to much better results.

My personal interpretation of the term design decision is that the modules are identified as data structures rather than as processing steps of an algorithm. This makes sense, because data structures are much more suitable for information hiding than processing steps of an algorithm. (The information inside a data structure is hidden behind functions, whereas a function only hides more detailed processing steps and no information; the information is actually passed in as arguments.)
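
As a minimal sketch of what I mean (Java, with made-up names, loosely inspired by the line-storage module in Parnas' KWIC example): the first class hides a data structure behind functions, while the second only hides a processing step and has all its information passed in as arguments.

```java
import java.util.ArrayList;
import java.util.List;

// Information hiding: the module owns a data structure and hides its
// representation. Callers never see whether lines are stored in a List,
// a char array, or something else entirely.
final class LineStorage {
    private final List<String> lines = new ArrayList<>();

    void add(String line) { lines.add(line); }
    String get(int index) { return lines.get(index); }
    int size()            { return lines.size(); }
}

// A processing step: it hides *how* it formats, but the data it works on
// is passed in openly as arguments; no information is hidden.
final class Formatting {
    static String indent(String line, int spaces) {
        return " ".repeat(spaces) + line;
    }
}
```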

Why does the second approach work so much better than the first approach? Here comes my second interpretation: The individual processing steps of an algorithm are not replaceable (and thus not reusable), whereas it's possible to convert data structures into other data structures.
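
Here is a minimal sketch of that convertibility (Java again, hypothetical names): a `CsvRow` can be wrapped so that it behaves like a standard `List`, so any code written against `List` can reuse it unchanged; I don't see an equivalent move for a processing step that is welded into one place in a flowchart.

```java
import java.util.AbstractList;

// A data-centred module with its own access functions.
final class CsvRow {
    private final String raw;
    CsvRow(String raw)  { this.raw = raw; }
    String field(int i) { return raw.split(",")[i]; }
    int fieldCount()    { return raw.split(",").length; }
}

// Conversion: the same data structure presented as a read-only List<String>,
// so existing List-based code can be reused without changes.
final class CsvRowAsList extends AbstractList<String> {
    private final CsvRow row;
    CsvRowAsList(CsvRow row) { this.row = row; }
    @Override public String get(int i) { return row.field(i); }
    @Override public int size()        { return row.fieldCount(); }
}
```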

And here's my question: Could that be the reason why software development using workflow engines (based on BPMN, for example) never really took off?

My personal experience is that the activities created in such workflows are hardly ever reused, but big data structures are often passed around among all the involved activities, even though most of the activities use only one or two of them.

My question, exaggerated: Could we get rid of all those clumsy workflow engines by giving managers Parnas' paper to read?

[0]: D. L. Parnas, "On the Criteria to Be Used in Decomposing Systems into Modules", Communications of the ACM, 1972.


Solution

Why is the "new" (in 1971) approach better?

The second approach, the one Parnas recommends, ensures separation of concerns.

In other words, identifying design decisions that are difficult or likely to change, and encapsulating them in modules, results in a system architecture that keeps independent things independent. As a consequence, most changes remain local to a single module, which facilitates maintenance even as the code base grows.
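
For instance (a minimal sketch in Java with hypothetical names, not taken from Parnas' paper): the difficult, likely-to-change decision of how reports are persisted is hidden behind one small module, so revising that decision later stays local to a single class.

```java
import java.util.HashMap;
import java.util.Map;

// The design decision "how are reports stored?" is hidden behind this interface.
interface ReportStore {
    void save(String id, String content);
    String load(String id);
}

// Today's decision: keep reports in memory.
final class InMemoryReportStore implements ReportStore {
    private final Map<String, String> reports = new HashMap<>();
    @Override public void save(String id, String content) { reports.put(id, content); }
    @Override public String load(String id)               { return reports.get(id); }
}

// If the decision changes (files, a database, an object store), only a new
// implementation is added; every caller of ReportStore stays untouched.
```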

Nowadays, with object-oriented programming and microservices, we even apply this principle at several further levels.

Will it kill workflow engines?

Workflow engines address a very different need and are not to be seen as the implementation of the first approach.

The first approach is based on the algorithmic, sequential decomposition of tasks and business processes. This results in modules that communicate with each other in a rigid fashion, dictated by the analysed workflow.

Workflow engines, on the contrary, implement the complex interactions between different systems and actors in a modular way. By encapsulating these interactions, you can easily change the orchestration.
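
As a rough sketch of that idea (hypothetical Java, not a real BPMN engine API): the individual activities know nothing about each other, and the orchestration is a separate, easily changed description.

```java
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Each activity is an independent step that transforms a shared context.
final class Orchestrator {
    private final Map<String, UnaryOperator<Map<String, Object>>> activities;

    Orchestrator(Map<String, UnaryOperator<Map<String, Object>>> activities) {
        this.activities = activities;
    }

    // Changing the workflow means changing the list of activity names,
    // not the activities themselves.
    Map<String, Object> run(List<String> order, Map<String, Object> context) {
        for (String name : order) {
            context = activities.get(name).apply(context);
        }
        return context;
    }
}
```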

In addition, it must be said that workflows are not only a technical matter. For example, you will not replace humans with modules when automating an approval workflow in which several human actors cooperate through different systems and applications to make a business decision.
