Question

I keep hearing this term in several different contexts. What is it?


The solution

Declarative programming is when you write your code in such a way that it describes what you want to do, and not how you want to do it. It is left up to the compiler to figure out the how.

SQL and Prolog are examples of declarative programming languages.
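To make the what/how contrast concrete in a general-purpose language, here is a small Python sketch; the function names are invented purely for illustration:

```python
# Imperative: spell out *how* to sort, step by step (a selection sort).
def sort_imperative(items):
    items = list(items)
    for i in range(len(items)):
        smallest = i
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

# Declarative: state *what* you want (a sorted list) and let the
# runtime decide how to produce it.
def sort_declarative(items):
    return sorted(items)

print(sort_imperative([3, 1, 2]))   # [1, 2, 3]
print(sort_declarative([3, 1, 2]))  # [1, 2, 3]
```

Both produce the same result, but only the second leaves the algorithm choice to the implementation.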

Other tips

The other answers already do a fantastic job of explaining what declarative programming is, so I'm just going to give some examples of why it might be useful.

Context independence

Declarative programs are context-independent. Because they only declare what the ultimate goal is, but not the intermediary steps to reach that goal, the same program can be used in different contexts. This is hard to do with imperative programs, because they often depend on the context (e.g. hidden state).

Take yacc as an example. It's a parser generator, a.k.a. compiler compiler: an external declarative DSL for describing the grammar of a language, so that a parser for that language can automatically be generated from the description. Because of its context independence, you can do many different things with such a grammar:

  • Generate a C parser for that grammar (the original use case of yacc)
  • Generate a C++ parser for that grammar
  • Generate a Java parser for that grammar (using Jay)
  • Generate a C# parser for that grammar (using GPPG)
  • Generate a Ruby parser for that grammar (using Racc)
  • Generate a tree visualization for that grammar (using GraphViz)
  • Simply do some fancy pretty-printing and syntax highlighting of the yacc source file itself and include it in your reference manual as the syntactic specification of your language

And many more…

Optimization

Since you don't prescribe to the computer which steps to take and in which order, it can rearrange your program much more freely, maybe even execute some tasks in parallel. A good example is a query planner and query optimizer for a SQL database. Most SQL databases allow you to display the query they are actually executing versus the query you asked them to execute. Often, those queries look nothing alike. The query planner takes things into account that you wouldn't even have dreamed of: rotational latency of the disk platter, for example, or the fact that a completely different application for a completely different user just executed a similar query, and the table that you are joining with and that you worked so hard not to load is already in memory anyway.
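The gap between "the query you asked for" and "the query actually run" can be observed directly with SQLite from Python's standard library. This is only a minimal illustration (the table and index names are invented), and the exact plan text varies between SQLite versions:

```python
import sqlite3

# SQL is declarative: we state which rows we want, and the query planner
# chooses the access strategy. EXPLAIN QUERY PLAN shows the plan SQLite
# actually picked, which changes when an index becomes available.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")

query = "SELECT name FROM users WHERE id = 42"
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # full table scan

con.execute("CREATE INDEX idx_users_id ON users (id)")
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # index lookup
```

The declarative query text never changed; only the planner's chosen "how" did.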

There is an interesting trade-off here: the machine has to work harder to figure out the how than it would in an imperative language, but when it does figure it out, it has much more freedom and much more information for the optimization phase.

Loosely:

Declarative programming tends towards:

  • Sets of declarations, or declarative statements, each of which has a meaning (often in the problem domain) and may be understood independently and in isolation.

Imperative programming tends towards:

  • Sequences of commands, each of which performs some action; but which may or may not have any meaning in the problem domain.

As a result, an imperative style helps the reader to understand the mechanics of what the system is actually doing, but may give little insight into the problem that it is intended to solve. On the other hand, a declarative style helps the reader to understand the problem domain and the approach the system takes to solving the problem, but is less informative on the matter of mechanics.

Real programs (even those written in languages that favor the ends of the spectrum, such as Prolog or C) tend to exhibit both styles to various degrees, at various points, to satisfy the varying complexities and communication needs of the piece. One style is not superior to the other; they simply serve different purposes, and, as with many things in life, moderation is key.

I'm sorry, but I must disagree with many of the other answers. I would like to put a stop to this confused misunderstanding of the definition of declarative programming.

Definition

Referential transparency (RT) of the sub-expressions is the only required attribute of a declarative programming expression, because it is the only attribute not shared with imperative programming.

The other cited attributes of declarative programming derive from this RT. Click the hyperlink above for the detailed explanation.

Spreadsheet example

Two answers mentioned spreadsheet programming. Where the spreadsheet programming (a.k.a. formulas) does not access mutable global state, it is declarative programming. This is because the mutable cell values are the monolithic input and output of main() (the entire program). The new values are not written to the cells after each formula is executed, so they are not mutable for the life of the declarative program (the execution of all the formulas in the spreadsheet). Thus, relative to each other, the formulas view these mutable cells as immutable. An RT function is allowed to access immutable global state (and also mutable local state).

Thus the ability to mutate the values in the cells when the program terminates (as an output of main()) does not make them mutable stored values in the context of the rules. The key distinction is that the cell values are not updated after each spreadsheet formula is performed; therefore the order of performing the formulas does not matter. The cell values are updated after all the declarative formulas have been performed.
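These snapshot semantics can be sketched as a toy Python model (not any real spreadsheet engine; the cell names and the recalc helper are invented for illustration):

```python
# A toy "spreadsheet": every formula reads a frozen snapshot of the cells,
# and the new values are written back only after all formulas have run.
# Because of this, the order in which formulas execute cannot matter.
cells = {"A1": 2, "A2": 3, "B1": 0, "B2": 0}

formulas = {
    "B1": lambda s: s["A1"] + s["A2"],   # =A1+A2
    "B2": lambda s: s["A1"] * s["A2"],   # =A1*A2
}

def recalc(cells, formulas):
    snapshot = dict(cells)                               # immutable view for this pass
    updates = {cell: f(snapshot) for cell, f in formulas.items()}
    cells.update(updates)                                # write back only at the end
    return cells

print(recalc(cells, formulas))  # {'A1': 2, 'A2': 3, 'B1': 5, 'B2': 6}
```

Reordering the entries in `formulas` changes nothing, because every formula reads the same snapshot.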

Here's an example.

In CSS (used for styling HTML pages), if you want an image element to be 100 pixels high and 100 pixels wide, you simply "declare" that that's what you want, as follows:

#myImageId {
height: 100px;
width: 100px;
}

You can think of CSS as a declarative "style sheet" language.

The browser engine that reads and interprets this CSS is free to make the image appear that tall and that wide however it wants. Different browser engines (e.g., the engine for IE, the engine for Chrome) will implement this task differently.

Their unique implementations are, of course, NOT written in a declarative language but in a procedural one like Assembly, C, C++, Java, JavaScript, or Python. That code is a bunch of steps to be carried out step by step (and might include function calls). It might do things like interpolate pixel values and render to the screen.

Declarative programming is the picture, where imperative programming is the instructions for painting that picture.

You're writing in a declarative style if you're "telling it what it is", rather than describing the steps the computer should take to get to where you want it.

When you use XML to mark up data, you're using declarative programming because you're saying "This is a person, that is a birthday, and over there is a street address".

Some examples of where declarative and imperative programming get combined for greater effect:

  • Windows Presentation Foundation uses declarative XML syntax to describe what a user interface looks like, and what the relationships (bindings) are between controls and underlying data structures.

  • Structured configuration files use declarative syntax (as simple as "key=value" pairs) to identify what a string or value of data means.

  • HTML marks up text with tags that describe what role each piece of text plays in relation to the whole document.

Imagine an Excel sheet, with columns populated with formulas to calculate your tax return.

All the logic is declared in the cells; the order of calculation is determined by the formulas themselves rather than procedurally.

That's kind of what declarative programming is all about. You declare the problem space and the solution rather than the flow of the program.

Prolog is the only declarative language I've used. It requires a different kind of thinking, but it's good to learn, if only to expose you to something other than the typical procedural programming language.

Since I wrote my prior answer, I have formulated a new definition of the declarative property which is quoted below. I have also defined imperative programming as the dual property.

This definition is superior to the one I provided in my prior answer, because it is succinct and more general. But it may be more difficult to grok, because the implications of the incompleteness theorems applicable to programming, and to life in general, are difficult for humans to wrap their minds around.

The quoted explanation of the definition discusses the role pure functional programming plays in declarative programming.

Declarative vs. Imperative

The declarative property is weird, obtuse, and difficult to capture in a technically precise definition that remains general and not ambiguous, because it is a naive notion that we can declare the meaning (a.k.a semantics) of the program without incurring unintended side effects. There is an inherent tension between expression of meaning and avoidance of unintended effects, and this tension actually derives from the incompleteness theorems of programming and our universe.

It is an oversimplification, technically imprecise, and often ambiguous to define declarative as "what to do" and imperative as "how to do it". An ambiguous case is where the "what" is the "how", as in a program that outputs a program, i.e. a compiler.

Evidently the unbounded recursion that makes a language Turing complete is also analogously present in the semantics, not only in the syntactical structure of evaluation (a.k.a. operational semantics). This is logically an example analogous to Gödel's theorem: "any complete system of axioms is also inconsistent". Ponder the contradictory weirdness of that quote! It is also an example that demonstrates how the expression of semantics does not have a provable bound; thus we can't prove[2] that a program (and analogously its semantics) halts, a.k.a. the Halting theorem.

The incompleteness theorems derive from the fundamental nature of our universe, which as stated in the Second Law of Thermodynamics is “the entropy (a.k.a. the # of independent possibilities) is trending to maximum forever”. The coding and design of a program is never finished— it's alive!— because it attempts to address a real world need, and the semantics of the real world are always changing and trending to more possibilities. Humans never stop discovering new things (including errors in programs ;-).

To precisely and technically capture this aforementioned desired notion within this weird universe that has no edge (ponder that! there is no “outside” of our universe), requires a terse but deceptively-not-simple definition which will sound incorrect until it is explained deeply.

Definition:


The declarative property is where there can exist only one possible set of statements that can express each specific modular semantic.

The imperative property[3] is the dual, where semantics are inconsistent under composition and/or can be expressed with variations of sets of statements.


This definition of declarative is distinctively local in semantic scope, meaning that it requires that a modular semantic maintain its consistent meaning regardless where and how it's instantiated and employed in global scope. Thus each declarative modular semantic should be intrinsically orthogonal to all possible others— and not an impossible (due to incompleteness theorems) global algorithm or model for witnessing consistency, which is also the point of “More Is Not Always Better” by Robert Harper, Professor of Computer Science at Carnegie Mellon University, one of the designers of Standard ML.

Examples of these modular declarative semantics include category theory functors (e.g. the Applicative), nominal typing, namespaces, named fields, and, w.r.t. the operational level of semantics, pure functional programming.

Thus well designed declarative languages can more clearly express meaning, albeit with some loss of generality in what can be expressed, yet a gain in what can be expressed with intrinsic consistency.

An example of the aforementioned definition is the set of formulas in the cells of a spreadsheet program, which are not expected to give the same meaning when moved to different column and row cells, i.e. when the cell identifiers are changed. The cell identifiers are part of, and not superfluous to, the intended meaning. So each spreadsheet result is unique w.r.t. the cell identifiers in a set of formulas. The consistent modular semantic in this case is the use of cell identifiers as the input and output of pure functions for cell formulas (see below).

Hyper Text Markup Language, a.k.a. HTML, the language for static web pages, is an example of a highly (but not perfectly[3]) declarative language that (at least before HTML 5) had no capability to express dynamic behavior. HTML is perhaps the easiest language to learn. For dynamic behavior, an imperative scripting language such as JavaScript was usually combined with HTML. HTML without JavaScript fits the declarative definition, because each nominal type (i.e. the tags) maintains its consistent meaning under composition within the rules of the syntax.

A competing definition for declarative is the commutative and idempotent properties of the semantic statements, i.e. that statements can be reordered and duplicated without changing the meaning. For example, statements assigning values to named fields can be reordered and duplicated without changing the meaning of the program, if those names are modular w.r.t. any implied order. Names sometimes imply an order, e.g. cell identifiers include their column and row position; moving a total on a spreadsheet changes its meaning. Otherwise, these properties implicitly require global consistency of semantics. It is generally impossible to design the semantics of statements so they remain consistent if randomly ordered or duplicated, because order and duplication are intrinsic to semantics. For example, consider the statements "Foo exists" (or construction) and "Foo does not exist" (and destruction). If one considers random inconsistency endemic to the intended semantics, then one accepts this definition as general enough for the declarative property. In essence this definition is vacuous as a generalized definition, because it attempts to make consistency orthogonal to semantics, i.e. to defy the fact that the universe of semantics is dynamically unbounded and can't be captured in a global coherence paradigm.

Requiring the commutative and idempotent properties for the (structural evaluation order of the) lower-level operational semantics converts operational semantics to a declarative localized modular semantic, e.g. pure functional programming (including recursion instead of imperative loops). Then the operational order of the implementation details do not impact (i.e. spread globally into) the consistency of the higher-level semantics. For example, the order of evaluation of (and theoretically also the duplication of) the spreadsheet formulas doesn't matter because the outputs are not copied to the inputs until after all outputs have been computed, i.e. analogous to pure functions.

C, Java, C++, C#, PHP, and JavaScript aren't particularly declarative. Copute's syntax and Python's syntax are more declaratively coupled to intended results, i.e. consistent syntactical semantics that eliminate the extraneous so one can readily comprehend code after they've forgotten it. Copute and Haskell enforce determinism of the operational semantics and encourage “don't repeat yourself” (DRY), because they only allow the pure functional paradigm.


[2] Even where we can prove the semantics of a program, e.g. with the language Coq, this is limited to the semantics that are expressed in the typing, and typing can never capture all of the semantics of a program— not even for languages that are not Turing complete, e.g. with HTML+CSS it is possible to express inconsistent combinations which thus have undefined semantics.

[3] Many explanations incorrectly claim that only imperative programming has syntactically ordered statements. I clarified this confusion between imperative and functional programming. For example, the order of HTML statements does not reduce the consistency of their meaning.


Edit: I posted the following comment to Robert Harper's blog:

in functional programming ... the range of variation of a variable is a type

Depending on how one distinguishes functional from imperative programming, your ‘assignable’ in an imperative program also may have a type placing a bound on its variability.

The only non-muddled definition I currently appreciate for functional programming is a) functions as first-class objects and types, b) preference for recursion over loops, and/or c) pure functions— i.e. those functions which do not impact the desired semantics of the program when memoized (thus perfectly pure functional programming doesn't exist in a general purpose denotational semantics due to impacts of operational semantics, e.g. memory allocation).

The idempotent property of a pure function means the function call on its variables can be substituted by its value, which is not generally the case for the arguments of an imperative procedure. Pure functions seem to be declarative w.r.t. the uncomposed state transitions between the input and result types.
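This substitution property can be sketched in Python, using memoization as the test for purity (the function names are invented for illustration):

```python
from functools import lru_cache

# A pure function's call can be replaced by its value: memoizing it does
# not change the program's meaning, only (possibly) its speed.
@lru_cache(maxsize=None)
def square(x):
    return x * x

assert square(4) == 16   # first call computes the value
assert square(4) == 16   # second call is a cached lookup; same meaning

# An impure procedure breaks this substitution: memoizing it would
# change the observable behavior.
calls = []
def impure_square(x):
    calls.append(x)      # hidden side effect
    return x * x

impure_square(4)
impure_square(4)
print(len(calls))  # 2 -- each call has an effect, so calls can't be replaced by values
```

Memoizing `impure_square` would drop the second append, which is exactly the "impact on desired semantics when memoized" test described above.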

But the composition of pure functions does not maintain any such consistency, because it is possible to model a side-effect (global state) imperative process in a pure functional programming language, e.g. Haskell's IO monad; moreover, it is entirely impossible to prevent doing so in any Turing-complete pure functional programming language.

As I wrote in 2012, and as seems to be the consensus of comments on your recent blog, declarative programming is an attempt to capture the notion that the intended semantics are never opaque. Examples of opaque semantics are dependence on order, dependence on erasure of higher-level semantics at the operational semantics layer (e.g. casts are not conversions and reified generics limit higher-level semantics), and dependence on variable values which can not be checked (proved correct) by the programming language.

Thus I have concluded that only non-Turing complete languages can be declarative.

Thus one unambiguous and distinct attribute of a declarative language could be that its output can be proven to obey some enumerable set of generative rules. For example, for any specific HTML program (ignoring differences in the ways interpreters diverge) that is not scripted (i.e. is not Turing complete), its output variability is enumerable. Or, more succinctly, an HTML program is a pure function of its variability. Ditto, a spreadsheet program is a pure function of its input variables.

So it seems to me that declarative languages are the antithesis of unbounded recursion, i.e. per Gödel's second incompleteness theorem self-referential theorems can't be proven.

Leslie Lamport wrote a fairy tale about how Euclid might have worked around Gödel's incompleteness theorems, as applied to math proofs in the programming-language context, by appealing to the congruence between types and logic (the Curry-Howard correspondence, etc.).

It's a method of programming based around describing what something should do or be instead of describing how it should work.

In other words, you don't write algorithms made of expressions, you just layout how you want things to be. Two good examples are HTML and WPF.

This Wikipedia article is a good overview: http://en.wikipedia.org/wiki/Declarative_programming

Describing to a computer what you want, not how to do something.

I have refined my understanding of declarative programming, since Dec 2011 when I provided an answer to this question. Here follows my current understanding.

The long version of my understanding (research) is detailed at this link, which you should read to gain a deep understanding of the summary I will provide below.

Imperative programming is where mutable state is stored and read, thus the ordering and/or duplication of program instructions can alter the behavior (semantics) of the program (and even cause a bug, i.e. unintended behavior).

In the most naive and extreme sense (which I asserted in my prior answer), declarative programming (DP) is avoiding all stored mutable state, thus the ordering and/or duplication of program instructions can NOT alter the behavior (semantics) of the program.

However, such an extreme definition would not be very useful in the real world, since nearly every program involves stored mutable state. The spreadsheet example conforms to this extreme definition of DP, because the entire program code is run to completion with one static copy of the input state, before the new states are stored. Then if any state is changed, this is repeated. But most real world programs can't be limited to such a monolithic model of state changes.

A more useful definition of DP is that the ordering and/or duplication of programming instructions do not alter any opaque semantics. In other words, there are no hidden random changes in semantics occurring; any changes in program instruction order and/or duplication cause only intended and transparent changes to the program's behavior.
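A rough Python illustration of how mutable state makes ordering significant, while pure expressions do not (the variables are invented for the example):

```python
# Imperative: statements mutate shared state, so order and duplication matter.
balance = 100
balance -= 30          # pay rent
balance *= 2           # interest doubles the balance
print(balance)         # 140 -- swap the two mutations and you get 170 instead

# More declarative: each name is bound once to a pure expression of its
# inputs, so the two input definitions below can be swapped freely
# without altering what `total` denotes.
rent = 30
interest_rate = 2
total = (100 - rent) * interest_rate
print(total)           # 140
```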

The next step would be to talk about which programming models or paradigms aid in DP, but that is not the question here.

Declarative Programming is programming with declarations, i.e. declarative sentences. Declarative sentences have a number of properties that distinguish them from imperative sentences. In particular, declarations are:

  • commutative (can be reordered)
  • associative (can be regrouped)
  • idempotent (can repeat without change in meaning)
  • monotonic (declarations don't subtract information)

A relevant point is that these are all structural properties and are orthogonal to subject matter. Declarative is not about "What vs. How". We can declare (represent and constrain) a "how" just as easily as we declare a "what". Declarative is about structure, not content. Declarative programming has a significant impact on how we abstract and refactor our code, and how we modularize it into subprograms, but not so much on the domain model.

Often, we can convert from imperative to declarative by adding context. E.g. from "Turn left. (... wait for it ...) Turn Right." to "Bob will turn left at intersection of Foo and Bar at 11:01. Bob will turn right at the intersection of Bar and Baz at 11:06." Note that in the latter case the sentences are idempotent and commutative, whereas in the former case rearranging or repeating the sentences would severely change the meaning of the program.

Regarding monotonic, declarations can add constraints which subtract possibilities. But constraints still add information (more precisely, constraints are information). If we need time-varying declarations, it is typical to model this with explicit temporal semantics - e.g. from "the ball is flat" to "the ball is flat at time T". If we have two contradictory declarations, we have an inconsistent declarative system, though this might be resolved by introducing soft constraints (priorities, probabilities, etc.) or leveraging a paraconsistent logic.
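The commutative/idempotent sentences above can be sketched in Python by modeling declarations as a set of facts with explicit context (a toy model, not a real logic system; the fact tuples are invented):

```python
# Sentences with enough context become commutative and idempotent: we can
# model them as a *set* of facts, where order and repetition don't matter.
facts = set()
facts.add(("Bob", "turn left", "Foo & Bar", "11:01"))
facts.add(("Bob", "turn right", "Bar & Baz", "11:06"))
facts.add(("Bob", "turn left", "Foo & Bar", "11:01"))  # duplicate: no effect

print(len(facts))  # 2 -- the duplicate declaration was idempotent

# Time-varying state gets explicit temporal context instead of mutation:
ball = {("flat", "T1"), ("inflated", "T2")}  # both facts coexist consistently
print(("flat", "T1") in ball)                # True
```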

Declarative programming is "the act of programming in languages that conform to the mental model of the developer rather than the operational model of the machine".

The difference between declarative and imperative programming is well illustrated by the problem of parsing structured data.

An imperative program would use mutually recursive functions to consume input and generate data. A declarative program would express a grammar that defines the structure of the data so that it can then be parsed.

The difference between these two approaches is that the declarative program creates a new language that is more closely mapped to the mental model of the problem than is its host language.
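A minimal Python sketch of this contrast, where the grammar is declarative data interpreted by a generic engine (the grammar, rule names, and helpers are invented for illustration):

```python
# The grammar is *data*: rules mapped to alternatives of symbol sequences.
# A generic engine interprets it; contrast an imperative hand-written
# parser where the grammar is buried in the control flow.
grammar = {
    "expr": [["term", "+", "expr"], ["term"]],  # expr -> term '+' expr | term
    "term": [["1"], ["2"], ["3"]],              # term -> '1' | '2' | '3'
}

def match(rule, tokens, pos=0):
    """Return the end position if `rule` matches tokens[pos:], else None."""
    if rule not in grammar:  # terminal symbol: must match the next token
        return pos + 1 if pos < len(tokens) and tokens[pos] == rule else None
    for alternative in grammar[rule]:
        p = pos
        for symbol in alternative:
            p = match(symbol, tokens, p)
            if p is None:
                break
        else:
            return p
    return None

def parses(text):
    tokens = list(text)
    return match("expr", tokens) == len(tokens)

print(parses("1+2+3"))  # True
print(parses("1++2"))   # False
```

The declarative program (`grammar`) maps directly onto the mental model of the data's structure; the engine is reusable for any grammar in this form.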

It may sound odd, but I'd add Excel (or any spreadsheet really) to the list of declarative systems. A good example of this is given here.

I'd explain it as: DP is a way to express

  • A goal expression: the conditions for what we are searching for. Is there one, maybe, or many?
  • Some known facts
  • Rules that extend the known facts

...and where there is a deduction engine, usually working with a unification algorithm, to find the goals.
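A toy deduction engine along these lines can be sketched in Python (forward chaining over ground facts, with no real unification; the relation names are invented):

```python
# A miniature deduction engine: known facts, a rule that extends them,
# and a goal we query at the end.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def rule_grandparent(facts):
    """parent(A, B) and parent(B, C) => grandparent(A, C)."""
    new = set()
    for (p1, a, b) in facts:
        for (p2, c, d) in facts:
            if p1 == p2 == "parent" and b == c:
                new.add(("grandparent", a, d))
    return new

# Apply the rule until no new facts appear (a fixed point).
while True:
    derived = rule_grandparent(facts) - facts
    if not derived:
        break
    facts |= derived

print(("grandparent", "alice", "carol") in facts)  # True
```

We declared the relationships and the rule; the engine decided how and in what order to derive the goal.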

As far as I can tell, it started being used to describe programming systems like Prolog, because Prolog is (supposedly) about declaring things in an abstract way.

It increasingly means very little, as it has the definition given by the users above. It should be clear that there is a gulf between the declarative programming of Haskell, as against the declarative programming of HTML.

A couple other examples of declarative programming:

  • ASP.Net markup for databinding. It just says "fill this grid with this source", for example, and leaves it to the system for how that happens.
  • Linq expressions

Declarative programming is nice because it can help simplify your mental model* of code, and because it might eventually be more scalable.

For example, let's say you have a function that does something to each element in an array or list. Traditional code would look like this:

foreach (object item in MyList)
{
   DoSomething(item);
}

No big deal there. But what if you use the more-declarative syntax and instead define DoSomething() as an Action? Then you can say it this way:

MyList.ForEach(DoSomething);

This is, of course, more concise. But I'm sure you have more concerns than just saving two lines of code here and there. Performance, for example. The old way, processing had to be done in sequence. What if the .ForEach() method had a way for you to signal that it could handle the processing in parallel, automatically? Now all of a sudden you've made your code multi-threaded in a very safe way and only changed one line of code. And, in fact, there's an extension for .Net that lets you do just that.

  * If you follow that link, it takes you to a blog post by a friend of mine. The whole post is a little long, but you can scroll down to the heading titled "The Problem" and pick it up there, no problem.
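The same idea can be sketched in Python, where the declarative bulk operation makes switching to parallel execution a one-line change (the worker function and data are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def do_something(item):
    return item * item

items = [1, 2, 3, 4]

# Imperative loop: implicitly committed to sequential processing.
results = []
for item in items:
    results.append(do_something(item))

# Declarative bulk operation: "apply this function to every element".
# Because we never committed to an order of steps, swapping in a
# parallel executor changes only one line.
with ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(do_something, items))

print(results == parallel_results)  # True -- pool.map preserves result order
```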

It depends on how you approach the problem in the text. Overall you can look at the program from a certain point of view, but it depends on what angle you look at the problem from. I will get you started with the program: Dim Bus, Car, Time, Height As Integer

Again, it depends on what the problem is overall. You might have to shorten it depending on the program. Hope this helps; feedback is welcome if it does not. Thank you.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow