Gecekondu are informal settlements that have been at the heart of the rapid urbanisation of modern Turkey, especially in Istanbul and Izmir. Gecekondu squatting began in the 1940s as a need-oriented practice of poor rural migrants seeking cheap accommodation in Turkey's growing cities. Over time, gecekondu neighborhoods were legalized, and their dwellers became part of the urban middle class by erecting apartment buildings on their plots. In the 1980s, it became common to buy, and later to steal, privately owned agricultural land in order to construct apartment buildings on it. Appropriation was no longer geared towards need, but towards profit.
The appropriation of state-owned (agricultural) land was already a common practice in Ottoman times, as was rural labour migration to major cities such as Istanbul and Izmir. In the aftermath of the Armenian Genocide and the Population Exchange with Greece, the private property of Armenians and Greeks was appropriated and squatted by Muslims. This article argues that these historical precedents informed the way in which gecekondu dwellers legitimated their need-oriented appropriation of state land. With the arrival of neoliberalism, however, the appropriation of state property no longer served to alleviate poverty but became big business. Today, it is major real estate firms and the state-owned public housing authority (TOKI) that privatize state land in order to build gated communities for the upper middle class.
While some computational models of intelligence test problems were proposed throughout the second half of the twentieth century, the first years of the twenty-first century have seen an increasing number of computer systems able to score well on particular intelligence test tasks. Despite this trend, however, there has been no general account of all these works in terms of how they relate to each other and what their real achievements are. There is also a poor understanding of what intelligence tests measure in machines, whether they are useful for evaluating AI systems, whether they pose genuinely challenging problems, and whether they help us understand (human) intelligence. In this paper, we provide some insight into these issues, in the form of nine specific questions, by giving a comprehensive account of about thirty computer models, from the 1960s to the present, and their relationships, focusing on the range of intelligence test tasks they address, the purpose of the models, how general or specialised they are, the AI techniques they use in each case, their comparison with human performance, and their evaluation of item difficulty. In conclusion, these tests and the computer models attempting them show that AI still lacks general techniques able to deal with a variety of problems at the same time. Nonetheless, renewed attention to these problems and a more careful understanding of what intelligence tests offer for AI may help build new bridges between psychometrics, cognitive science, and AI, and may motivate new kinds of problem repositories.
We investigate the application of grammar inference to the analysis of facial expressions in order to discover underlying sequential regularities characteristic of a specific mental state. The input consists of sequences of action units (AUs), which represent basic facial signals. The typical classification task in facial expression analysis is to assign to a set of AUs its corresponding mental state, e.g., an emotion. To our knowledge, no research has investigated whether there is diagnostic information in the order in which the AUs occur within a given time interval. Our study is based on data on facial expressions of pain obtained in a psychological experiment, comprising 347 pain episodes from 86 subjects, each represented as a sequence of AUs. We applied the Alignment-Based Learning (ABL) approach to infer the underlying grammar, both for the set of all AUs occurring in the sequences and for a reduced alphabet containing only the relevant AUs. We used 10-fold cross-validation to estimate performance, and we extended ABL with a frequency-based heuristic that reduces the number of grammar rules by eliminating those which do not contribute significantly to performance. The resulting grammar for the reduced AU alphabet provides a first approximation of a “grammar of pain”.
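To make the pipeline concrete, the following is a minimal sketch of the core ABL intuition, aligning pairs of AU sequences and hypothesizing the unaligned spans as interchangeable constituents, combined with a frequency-based pruning step in the spirit of our heuristic. The toy sequences, the MIN_COUNT threshold, and the use of difflib.SequenceMatcher as the aligner are illustrative assumptions; the actual ABL system and our experimental data differ.

```python
from collections import Counter
from difflib import SequenceMatcher
from itertools import combinations

# Toy AU sequences standing in for pain episodes (hypothetical data);
# real input would be FACS-coded AU sequences from video.
sequences = [
    ["AU4", "AU6", "AU7", "AU43"],
    ["AU4", "AU9", "AU10", "AU43"],
    ["AU4", "AU6", "AU7", "AU25", "AU43"],
    ["AU4", "AU9", "AU10", "AU25", "AU43"],
]

def hypothesized_constituents(a, b):
    """Align two sequences; the spans that differ between them are
    hypothesized to be interchangeable constituents (the ABL intuition)."""
    sm = SequenceMatcher(a=a, b=b, autojunk=False)
    spans = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":            # unaligned material on either side
            if i2 > i1:
                spans.append(tuple(a[i1:i2]))
            if j2 > j1:
                spans.append(tuple(b[j1:j2]))
    return spans

# Collect constituent hypotheses over all pairs of episodes.
counts = Counter()
for a, b in combinations(sequences, 2):
    counts.update(hypothesized_constituents(a, b))

# Frequency-based pruning: keep only constituents with enough support
# (MIN_COUNT is an illustrative threshold, not the value used in the study).
MIN_COUNT = 2
rules = {c: n for c, n in counts.items() if n >= MIN_COUNT}
for constituent, n in sorted(rules.items(), key=lambda kv: -kv[1]):
    print(f"NT -> {' '.join(constituent)}   (support={n})")
```

On the toy data this retains, for example, the spans (AU6, AU7), (AU9, AU10), and (AU25), each supported by two alignments, while singleton hypotheses are pruned; in the study the pruning criterion was contribution to cross-validated performance rather than raw frequency alone.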