Aesthetics and information

Author Topic: Aesthetics and information  (Read 1082 times)

Aesthetics and information
« on: December 04, 2013, 10:38:23 AM »
In the 1970s, Abraham Moles and Frieder Nake were among the first to analyze links between aesthetics, information processing, and information theory. In the 1990s, Jürgen Schmidhuber described an algorithmic theory of beauty which takes the subjectivity of the observer into account and postulates that, among several observations classified as comparable by a given subjective observer, the aesthetically most pleasing one is the one with the shortest description, given the observer's previous knowledge and their particular method for encoding the data. This is closely related to the principles of algorithmic information theory and minimum description length. One of his examples: mathematicians enjoy simple proofs with a short description in their formal language. Another, very concrete, example describes an aesthetically pleasing human face whose proportions can be described by very few bits of information, drawing inspiration from less detailed 15th-century proportion studies by Leonardo da Vinci and Albrecht Dürer.

Schmidhuber's theory explicitly distinguishes between what is beautiful and what is interesting, stating that interestingness corresponds to the first derivative of subjectively perceived beauty. The premise is that any observer continually tries to improve the predictability and compressibility of its observations by discovering regularities such as repetitions, symmetries, and fractal self-similarity. Whenever the observer's learning process (which may be a predictive neural network; see also neuroesthetics) leads to improved data compression, so that the observation sequence can be described by fewer bits than before, the temporary interestingness of the data corresponds to the number of saved bits. This compression progress is proportional to the observer's internal reward, also called the curiosity reward.
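The "shortest description" and "saved bits" ideas can be illustrated with a minimal sketch, in which zlib stands in for the observer's encoding method (the actual theory is phrased in terms of Kolmogorov-style description length, which is not computable; a general-purpose compressor is only a crude proxy):

```python
import zlib

def description_length(data: bytes) -> int:
    """Proxy for the length of the shortest description: compressed size in bytes."""
    return len(zlib.compress(data, 9))

# Two "observations" of equal raw length (256 bytes each):
regular = b"ABAB" * 64        # repetitive, highly compressible
irregular = bytes(range(256)) # no repeats for zlib to exploit

# The more regular observation has the shorter description for this encoder:
print(description_length(regular) < description_length(irregular))  # True

# "Compression progress": level 0 stores the data uncompressed (a naive
# observer), level 9 exploits the discovered regularity (after learning).
before = len(zlib.compress(regular, 0))
after = len(zlib.compress(regular, 9))
saved = before - after  # saved bytes: the temporary interestingness of the data
print(saved > 0)  # True
```

Nothing here is specific to zlib: any improvement in the observer's model that shrinks the encoding of past observations yields the same kind of "saved bits" signal.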
A reinforcement learning algorithm is used to maximize expected future reward by learning to execute action sequences that bring in additional interesting input data with as-yet-unknown but learnable predictability or regularity. These principles can be implemented in artificial agents, which then exhibit a form of artificial curiosity.
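A toy sketch of why this reward signal favors learnable data, assuming a simple counts-based predictor and using the drop in prediction error as a crude stand-in for compression progress (the reinforcement-learning policy itself is omitted; the sketch only measures the intrinsic reward each observation stream would yield):

```python
import random
from collections import defaultdict

random.seed(0)

class MarkovPredictor:
    """Predicts the next symbol from counts conditioned on the previous symbol."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None
    def step(self, x):
        """Observe x; return prediction error 1 - P(x | prev), then update."""
        ctx = self.counts[self.prev]
        total = sum(ctx.values())
        p = ctx[x] / total if total else 0.0
        ctx[x] += 1
        self.prev = x
        return 1.0 - p

def stream(kind, t):
    if kind == "constant":
        return 0                     # trivially predictable
    if kind == "noise":
        return random.randint(0, 9)  # never predictable
    return t % 3                     # learnable cycle: 0, 1, 2, 0, 1, 2, ...

for kind in ["constant", "cycle", "noise"]:
    pred = MarkovPredictor()
    errors = [pred.step(stream(kind, t)) for t in range(300)]
    early, late = sum(errors[:50]) / 50, sum(errors[-50:]) / 50
    progress = early - late  # error reduction: proxy for curiosity reward
    print(f"{kind:9s} early error {early:.2f}  late error {late:.2f}  progress {progress:.2f}")
```

The constant and cyclic streams are briefly interesting (error drops to zero once the regularity is found, then no further progress is possible), while the noise stream stays unpredictable forever and yields no learning progress at all. An agent rewarded for progress therefore seeks out data that is neither already compressed nor incompressible, but learnable.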


Source: Internet
Abu Kalam Shamsuddin
Lecturer
MTCA