Abstracts (A-K)

Abstracts are listed alphabetically by the speaker's last name.

See also the general descriptions of symposia.

Aberdein, "Francis Bacon, Prerogative instances, and argumentation schemes"

Two thirds of the second part of Francis Bacon’s Novum Organum addresses ‘prerogative instances’. This is Bacon’s term for twenty-seven different circumstances he identifies in which empirical data can become manifest to the investigator. Notable examples include ‘shining instances’, evidence which provides overwhelming prima facie support for a specific theory, and ‘crucial instances’, or ‘instances of the fingerpost’, experiments whose outcome promises to settle disputes between competing theories.

Bacon’s debt to the dialectical tradition of informal reasoning has been observed by several commentators. However, little of this attention has been directed to the prerogative instances. This paper proposes that they may be understood as analogous to the topics, or loci, of traditional logic. Whereas topics comprise procedures for argument invention in debates about matters already known, Bacon’s instances are intended to assist the expansion of knowledge. As he states in the preface to the Novum Organum, ‘the end which this science of mine proposes is the invention not of arguments but of arts; not of things in accordance with principles, but of the principles themselves; not of probable reasons, but of designations and directions for works.’

In recent years, the topical tradition has been revived and reinvented within argumentation theory. In this context, the role of topics is played by ‘argumentation schemes’, stereotypical patterns of argument. However, these schemes are not restricted to the rhetorical task of argument invention to which topics were constrained by the traditional logic of Bacon’s era. As this paper will demonstrate, this greater flexibility permits the reconstruction of Bacon’s prerogative instances as argumentation schemes. This reconstruction yields not only a deeper understanding of Bacon’s account of scientific inference, but also a refinement of the argumentation scheme methodology.

Anstey, "The Bacon-Boyle-Hooke view of experiment"

This paper provides an exposition of the earliest and most distinctive philosophy of experiment in Britain, the view developed by Francis Bacon, Robert Boyle and Robert Hooke. The Bacon-Boyle-Hooke (BBH) view of experiment is then used to shed light on the way in which many early modern natural philosophers conceived of the relation between experiment and theory. This, in turn, provides the basis of a critique of its relevance to the ‘new experimentalism’ of the late twentieth century and some recent accounts of the rise of experiment in the early modern period.

The BBH account of experiment is set within the broader program of Experimental Philosophy which, while inspired by Bacon, was most carefully articulated in the 1660s and 1670s by Boyle and Hooke. It was also applied within a Baconian approach to the acquisition of knowledge of nature, that is, the Baconian method of natural history. After outlining the nature of the Experimental Philosophy and the Baconian method of natural history, the paper introduces the BBH view of experiment through an examination of a typology of experiments as developed by the protagonists: luciferous and fructiferous experiments; experiments solitary and in consort; crucial experiments; and thought experiments.

The most interesting feature of the BBH view of experiment, however, lies in its claims about the relation between observation and theory. It is argued that, in spite of Boyle and Hooke’s assertions that there is an important reciprocal relation between observation and theory, the BBH view tended to underplay the utility of hypotheses and speculative reasoning, and that this led, ultimately, to the downfall of this philosophy of experiment.

One moral of this is that, contrary to what some contemporary philosophers have claimed, early modern experimentation is not a good model of ‘experiment having a life of its own’. A second moral is that the BBH view does not sit well with the Kuhnian distinction between Baconian and mathematical sciences in the seventeenth century. Nor does the BBH view of experiment add support to the views of J. E. Tiles and Peter Dear that early modern experimentation emerged, in part, as a result of the breakdown of the distinction between the natural and non-natural, or that it failed because it focused on the particular rather than the universal.

Aray, "Knowledge Visualization through ISOTYPE as a Logical Empiricist Look on Science, Democracy and Education"

This presentation concerns the way Otto Neurath’s logical empiricist philosophy involved his creation of a new kind of graphism for scientific communication, and how his relationship with the politics of Red Vienna influenced his philosophy of science as well as his view of pedagogy.

The life and work of Otto Neurath have received growing attention over the past two decades. This new focus on inter-war continental logical positivism mainly considers the political aspects of the so-called Left Vienna Circle, especially its involvement in the social life of Red Vienna and its personal ties to the modernist movement in Europe, notably through the Bauhaus school network. More than anyone else in the Circle, Neurath took an active part in the new Viennese housing politics and supported modernist architecture and design.

In his writings, Neurath insists on the importance of statistics and of the knowledge of social and economic phenomena for the struggle of the working class. In a democratic society, knowledge visualization serves as an important tool for political communication (if not propaganda, as some critics pointed out), since democratic participation in the administration of a unified planned economy requires a public acquaintance with social facts. Neurath’s most significant undertaking in this respect was the building of Isotype, a typographic and graphic system intended to show rather abstract and complex facts in a simple way, for the purpose of reaching a wide public through various adult education projects (exhibitions in the Museum of Society and Economy, teaching material for evening courses, diagrams for publications and so on).

The Isotype system is based on the supposed universality of visual perception as a remedy for the lack of communication between people of different nationalities or social classes. It is designed to be read intuitively, so that it communicates facts in a quick, simple and attractive way, regardless of the educational and social backgrounds of the public. Neurath built this tool with the help of his teaching experience in adult education programs, and made use of the then still ongoing development of modern graphic communication techniques.

The main focus of this study will be the Isotype project as an exemplification of logical empiricist thought in practice. I will examine the emergence of the project, its roots in the Viennese social democracy of the inter-war period, as well as its ties to Marxism in general. Socialism assigns a major place to science as an ideological weapon, an essential tool for the formation of revolutionary consciousness, as well as a concrete power to guarantee the possibility of the socialist economic order through the growth of productive capacities, effective work management and a successful pursuit of the revolutionary struggle. In this sense, a massive introduction of the working class to social sciences made empirical by statistics must be a major part of a socialist party program. Considering this discourse on science and pedagogy, I intend to sketch an outline of the political philosophy of logical empiricism.

Audureau, "Remarks on the relationship between Descartes’metaphysics and the mathematical concept of gender of a curve"

The most widespread opinion about Descartes’ works is that he invented Analytic Geometry.

Yet, there is a long and uninterrupted tradition of interpreters (M. Cantor, G. Milhaud, L. Brunschvicg, P. Schrecker, G. Loria, Y. Belaval, J. Vuillemin, G. Granger) who, from the beginning of the 20th century to the late 1960s, have challenged the idea of Descartes as the founder and forerunner of Analytic Geometry.

The aim of this paper is to add new arguments to those already given by these scholars and to challenge more deeply the opinion that Descartes, in his Geometry (1637), is doing Analytic Geometry.

Nowadays, we know that Descartes’ mathematical works can, in the first instance, be divided into two parts: the official one and the non-official one. The official part is mainly expounded in his Geometry. The non-official part is in his Correspondance. In this Correspondance we can see that Descartes knew perfectly well some of the curves he explicitly rejected in The Geometry: the ones he calls “mechanical” (i.e. transcendental), such as the equiangular spiral.

To be mechanical, for a curve, is to be generated by two independent motions. This criterion, for Descartes, is equivalent to another one: a mechanical curve is not constructible point by point, because some of its points require the computation of an infinite number of proportions. Hence, finitism and “being produced by a deterministic motion” are specific features of the objects of Descartes’ official mathematics. This is what can be drawn from the works of the scholars mentioned above, and it suffices to understand that Descartes does not follow the way that leads to Analytic Geometry.

This reduction of the scope of mathematics is justified by metaphysics, where metaphysics means, and only means, foundations of physics, and where physics means mathematical physics.

To add evidence to the view that The Geometry is not Analytic Geometry, I will consider two issues. The first is the status of the concept of gender (genus) of a curve. I will briefly describe the genesis of this concept in Descartes’ works. Then I will show that gender is a two-fold concept. On one side, it provides a measure of the complexity of the finite constructions allowed in The Geometry. On the other side, it corresponds to the concept of physical dimension, invented by Descartes (and forgotten by Newton).

The second issue is Descartes’ standpoint on the continuum. It is well known that Descartes treats Fermat contemptuously because he disagrees with his method of adégalisation, namely with his conception of the continuum. But what is the continuum for Descartes?

In inspecting both issues we will find that Descartes’ mathematics is subordinated to the epistemological principles that can be drawn directly from his metaphysics: dualism, intuitionism and mechanicism.

Bellis, "Rethinking the Place of Experience in Cartesian Natural Philosophy: the Case of Regius"

Regius was the first Cartesian professor in the Low Countries and was mainly interested in natural philosophy. Nevertheless, after several epistolary exchanges, the collaboration between the Dutch and the French philosopher ended in 1646, when Regius decided to publish his Fundamenta physices, with which Descartes did not fully agree, especially on metaphysical grounds. In the Conversation with Burman, Descartes expresses the strong divergence between himself and his former disciple as follows: “Regium autem quod attinet, ejusdem demonstratio nulla est ; et quod mirum, in Physicis ille semper auctoris opiniones, etiam ubi eas nesciebat, sequi et conjicere studuit ; in Metaphysicis autem auctori quantum potuit et ejus opiniones novit, contradixit.” (AT V, 170)

According to this statement, Regius does not think it necessary to provide a demonstration of the way the organization of the cosmos can be deduced from the first principles of physics (that is, essentially extension and movement), contrary to what Descartes attempts to do in the third part of his Principia philosophiae. For Regius, it is enough to say that the principles of physics could allow someone to deduce the cosmological organization of the material world from an originary chaos; the deduction itself is not needed. Movement is the proximate cause of the world and of the situation of planets and stars. One need not present a cosmogony to justify the present state of the world, but only account for it as it is from extension and movement. Moreover, Descartes himself considers that this theoretical attitude is linked to Regius’ rejection of any metaphysical commitment.

We would like to suggest that this difference in the attitude of the two philosophers originates not only in the value given to metaphysics for the elaboration of natural philosophy, but also in a different understanding of experience. For Regius, nature is a set of facts which can be considered independently and accounted for from mechanical principles. In the Fundamenta physices, we therefore find a significant number of occurrences like “ut experientia docet”, “teste experientia”, etc. Experience is therefore not meant to be integrated into a holistic conception of nature. On the contrary, according to Descartes, experience has no value independently of the possibility of being linked to all the phenomenal aspects of the world through a demonstration or series of demonstrations. Whereas the French philosopher defends a holistic account of natural philosophy in which experience plays a circumscribed role, Regius considers experience as a source of factual information on nature. But Regius’ understanding of experience is also linked to a specific psychology which is incompatible with Descartes’ conception of mind and of innate ideas. Experience is thus an illuminating topic at the crossroads of natural philosophy, psychology and metaphysics, one which can enable us to understand the peculiarities and possibilities of evolution of Cartesian philosophy in the second half of the seventeenth century.

Biagioli, "Hermann Cohen and Alois Riehl on geometrical empiricism: Strict Kantianism or new perspectives throughout transcendentalism?"

In the second half of the 19th century Kantian intuition was challenged by the Riemann-Helmholtzian claim that the choice of a metric, as far as application is concerned, is a matter of empirical science. I make a comparison between two different philosophical attempts to take into account the difficulties surrounding the necessity of geometrical intuition raised in Helmholtz’s popular lectures on that subject. In early Neokantianism two different strategies were developed to vindicate the apriority of axioms: 1. to relate the apriority question to the role of certain principles in the constitution of the object of experience (Cohen); 2. to keep necessity and universality as intrinsic properties of certain knowledge, while restricting them to some fundamental concepts at the basis of any further thinkable variation (Riehl). A first divergence concerns the evaluation of the relation between Kant and Helmholtz: Cohen reconsiders the transcendental status of the notion of the rigid geometrical figure, whereas Riehl argues that Helmholtz in 1878 moves away from his earlier empiricist program, now recognizing that space can be transcendental without the axioms being so. I point out, firstly, that in neither case do we have a strictly Kantian thesis, the theory of pure sensibility being overcome in any event: Cohen bases his interpretation of the distinction between analytic and synthetic judgments on the Analytic of Principles, in explicit contrast with Kant’s literal definition, and Riehl’s claim presupposes, besides the Transcendental Aesthetic, the un-Kantian distinction between intuitive and geometrical space. Secondly, I argue that the Cohenian perspective, although sharply at odds with an empiricist worldview, follows some important consequences of the physical geometry supported by Helmholtz: 1. Cohen attributes a transcendental function to those abstractions which first make nature comprehensible; 2. he recognises the demand for an arithmetisation of the continuum in order to accomplish mechanical measurements, no homogeneity of space being independent of such a successive synthesis. Hence a form of intuition which contradicted any mechanical congruence statement would be no condition of experience at all. The argument may be more far-reaching than the compatibilist solution adopted by Riehl, according to which such a contradiction may still be solved in favour of certain a priori features of continua as such, the relativity of knowledge being grounded, after all, on empty schemas of space and time themselves. To the extent that such a distinction between pure and applied mathematics is committed to apriorism, a fundamental gap separates concepts retaining a geometrical sense from analytical fictions. Finally, a deeper struggle arises about the transcendental question. According to Cohen, to discriminate between mere definitions and objective sense is not a matter of critical philosophy, which considers the purity of concepts only with regard to their applicability and instead lets geometers freely formulate their axioms. This may lead not to confusing different branches of knowledge, but to clarifying the very difference of levels between knowledge and critique of knowledge.

Biener and Livengood, "The disciplinary status of experimental philosophy: then and now"

Newtonian “Experimental Philosophy” and contemporary “Experimental Philosophy” do not share a direct lineage; however, reactions to their success, then and now, have been remarkably similar. In both cases, opponents of empirical and mathematical methods in philosophy voiced a familiar complaint: “This isn’t Philosophy!” This paper compares the reaction of proponents of Newtonian and contemporary experimental philosophy to this complaint. We show the responses were interestingly divergent. In the case of Newtonian philosophy, although proponents did their best to engage philosophical critiques (e.g., in the Leibniz-Clarke correspondence or the variety of Newtonian tracts advertising the utility of the new philosophy), they were also quite content to bite the bullet and hold that their philosophy was not traditional philosophy. And so much the worse for traditional philosophy! Proponents of contemporary experimental philosophy, however, take a different approach. One of their main arguments (pace Stephen Stich) is that experimental philosophy is traditional philosophy and that it serves as a corrective to an almost century-long constriction of the scope of philosophy. Thus, while Newtonian experimental philosophers carved out space for their novel approach by fracturing philosophy into components, contemporary philosophers appeal to a broad, “inclusive” model of philosophical activity in order to justify their methodology. In this paper we outline these approaches and attempt to determine their similarities, their differences, and the lessons that can be learned from the two reactions. In particular, we argue that contemporary experimental philosophers have set their sights too low and have something to learn from Newtonian experimental philosophy. To wit, present-day experimental philosophers ought not limit themselves to the study of intuitions, especially in relation to problems in moral and linguistic psychology. Newtonian experimental philosophy makes clear that there are other, deep philosophical problems that were and are addressed empirically—what is the nature of space, of time, of matter, of motion, of reason, of emotion, of perception, etc. Given that so many philosophical problems have definite empirical content and given that so many “traditional” philosophers (both then and now) have objected that experimental philosophy is not really philosophy, we wonder what sort of constraints on empirical content are and were used to restrict the scope of philosophy. We attempt to list these constraints and detail why they were thought reasonable by the opponents of the experimental approach.

Bolduc, "The Figure of the Scientist in Bachelard’s Philosophy of Science"

Although interest in Gaston Bachelard is gaining momentum in Anglo-American literature, numerous aspects of his contributions to philosophy of science still remain to be fully investigated. In this paper, I undertake to highlight the main characteristics of the figure of the scientist as it can be inferred from Bachelard’s works. Since this implies a shift of focus, from the material on which the French philosopher built his philosophy – the physical sciences of his time – to the people involved in its constitution and manipulation, this exercise has to be carefully delineated. In this paper, I accept Bachelardian epistemology without lingering on its possible weaknesses. Within this narrow context, I then proceed to demonstrate the following thesis: scientific activity, and the temporality in which it is necessarily inscribed, allow for the characterization of a specific type of individual. This thesis does not imply that scientific practice itself generates individuals of a specific nature. Rather, I proceed to outline how, according to Bachelard’s philosophy of science, this activity allows for expressing the uniqueness, and also the relative solitude, of the individuals engaged in it.

Bosse, "The Possibility of an Apriori in Rudolf Carnap's »Logischer Aufbau der Welt«"

The philosophy of Rudolf Carnap (1891–1970) is undeniably one of the most important and most fruitful contributions to the philosophy of science of the last century. Regarding his work, we have to notice that Carnap did not hold one single position throughout his whole life. And even a simple division of his philosophy into a syntactic and a semantic phase does not capture every aspect of his philosophy either.

Especially in his first period – the syntactic phase – we have to acknowledge some differentiations from the very beginning of his work: we can put on record that he rejected the standpoint of his dissertation even before it was published. When »Der Raum« was printed in the Ergänzungshefte of the Kant-Studien in 1922, Carnap was already on his way to elaborating a new standpoint, which culminated in his habilitation treatise »Der logische Aufbau der Welt« (1928). Shortly after that – at the beginning of the 1930s – we observe a new modification of his position. Likewise, the work on this standpoint culminated in a book – the »Logische Syntax der Sprache«, published in 1934.

But the subordination of his earlier work under the syntactic phase only makes sense if the »Logische Syntax« is seen as the end of a continuum. The link between the writings of this continuum would be the use of a notion of the apriori which becomes more logically constructed the closer it moves to the »Syntax« end of the continuum. A possible apriori in the »Aufbau« would have to fulfill the role of a missing link between Carnap’s earlier writings and the »Syntax«.

My proposition is the following: it would be wrong to speak of changing standpoints when we talk about the syntactic period. Rather, we should talk of a standpoint continuum, which I would like to identify with the continuous use of an apriori throughout this period. While the dissertation works with a synthetic apriori, at the other end of the continuum – the »Logische Syntax« – we find only a completely logically constructed and fully relativised apriori.

In this presentation I would like to show that Carnap’s constitutional system and its empirical basis, which he develops in the »Aufbau«, do without the notion of a subject, i.e. without a concrete substrate for the apriori. Nevertheless, knowledge is supposed to be constructed from this initial point – with the help of a conception which I will mark as the constitutive apriori element in the process of cognition. Bearing in mind Carnap’s intention to eliminate this constitutive element – the Grundrelation – I will present how the project of a completely logically constructed apriori is indicated in the »Aufbau«.

Brenner, "Anastasios Axiomatizing Physical Theory: Kirchhoff, Mach, Poincaré and the Vienna Circle"

My aim is to study the origins and development of the received view of scientific theories. According to this view, theories are axiomatic systems linked with experience by various experimental techniques, including notably measurement procedures. F. Suppe studied the particulars of the received view and the criticism it met with. Among the historical figures involved, he mentions Mach, Hertz and Poincaré.

One may seek to go into greater detail. Indeed, as early as 1876, Kirchhoff sought to break away from earlier presentations. He defined the task of mechanics as that of providing a description of the phenomena rather than an explanation. He did not require the scientist to go into causes and tendencies; the latter thus had greater freedom. His description was to be guided by a series of requirements: completeness, simplicity and, furthermore, accuracy. Around the same time, Mach developed a similar conception; his notion of intellectual economy went a step further in the analysis of simplicity. Duhem, among others, noted Kirchhoff’s definition and was intent on bringing out its consequences: theories are abstract representations of experimental laws, and accuracy is to be understood in terms of a certain degree of approximation based on the progressive application of methods of verification.

The Vienna Circle, bringing together these different results, claimed to provide an adequate picture of the structure of theory. We have here a line of development that can be studied: it covers differences in culture, philosophical sensitivities and scientific research. The received view has been criticized, and attention has turned from the empirical testing of theories to the competition between paradigms. But this does not mean that all the insights of the former view have been discarded: our ideas of theory structure and measurement have only been marginally modified.

More recently, historical methods have been directed toward the emergence of styles of reasoning, bringing out their variety and their combinations. Axiomatics has come to be seen as one among different styles, and one can direct attention to its historical context of emergence as well as its relation with other forms of reasoning.

Camilleri, "Thought Experiments in Early Modern Science"

The last two decades have witnessed a growing interest in the role of thought experiments in science. This trend is most notable in the recent philosophical work of John Norton, James Brown, Roy Sorensen and Tamar Gendler. Yet there is still much we can learn about the role of thought experiments from the perspective of the wider historical and intellectual contexts in which they are employed. In this paper I draw on the recent work of both historians and philosophers of science in attempting to ascertain to what extent we can speak of a decisive shift in the use of thought experiments in the transition from medieval to early modern natural philosophy.

I begin by outlining some of the different positions that have been articulated. Peter King has argued that one can draw a sharp contrast between the routine use of imaginary scenarios in medieval natural philosophy and the emergence of the modern experimental method with its emphasis on empirical evidence. While there is undoubtedly a strong element of truth in this claim, the continued use of thought experiments in the writings of such thinkers as Stevin, Galileo and Newton suggests that certain aspects of the earlier medieval scholastic tradition continued well into the 17th century. To this extent King concedes that Galileo’s method might be regarded as more ‘medieval’ than is generally acknowledged. Amos Funkenstein, on the other hand, has argued that while the use of thought experiments is characteristic of both medieval and early modern natural philosophy, it was only in the 17th century that natural philosophers began to use counterfactual scenarios as constitutive of reality, as opposed to mere logical possibility, and in this sense thought experiments began to take on a new constructive role. More recently, James McAllister has argued that the ‘virtual laboratory’ characteristic of thought experiments in 17th century mechanics served as a continuation or an extension of the ‘real laboratory’ in which experiments were devised to isolate the ‘phenomena’ (e.g. free fall in a vacuum) from ‘accidents’ (e.g. friction and air resistance).

These historical reconstructions of the shifting role of thought experiments in medieval and early modern natural philosophy bring to light different assumptions concerning their epistemological role in science. Drawing on the recent historical and philosophical work in this area, this paper examines this largely neglected aspect of the transition from medieval to early modern scientific thought, and in doing so attempts to throw new light on the philosophical question of the relationship between real and imaginary experiments.

Castelão-Lawless, "Metaphysical and Conceptual Stories of the Scientific Mind"

The scientific revolutions which took place at the beginning of the twentieth century led French philosopher and historian of science Gaston Bachelard (1884-1962) to spend most of his life’s work explaining how these revolutions resulted from modes of thinking about the physical world that were at odds with those which had given birth to modern science. It is well known to Bachelardian scholars that the creation of philosophical and metaphysical terms by Bachelard directly matched the radical changes that he saw occurring in the physical sciences with the advent of theories such as relativity and quantum mechanics. He claimed, for instance, that the metaphysics of science mutate according to the developmental stage of each scientific discipline, and also that they are in tune with the epistemological and ontological levels of reality that scientists happen to be studying at any given time. But most Bachelardians have not paid due attention to other types of evolving relations that Bachelard described in his work. This is the case of the relations between scientific and philosophical concepts, the interactions that scientific concepts maintain with each other, and the way scientific concepts hang on to each other while sharing the same shifting networks of meaning. As can be seen in works such as Le Nouvel esprit scientifique and La philosophie du non, for Bachelard not only do philosophical and scientific concepts originate from scientific theories, but concepts themselves relate in special ways to other scientific and philosophical concepts. I demonstrate how some of this co-evolution and co-extended nature of concepts in the physical sciences takes place in Bachelardian epistemology, and also how for him they illuminate crucial but often neglected parts of the scientific mind at work.

Chimisso, "The Life of Concepts in Georges Canguilhem’s Historical Epistemology"

Georges Canguilhem’s historical epistemology is, explicitly and implicitly, the inheritor of a long tradition of the study of the (scientific) mind in history that developed in France. Despite his focus on medicine and the life sciences, rather than physics and chemistry, his epistemology has acknowledged and solid links with Bachelard’s. In particular I will show the continuity with the tradition of which both Bachelard and Canguilhem are inheritors. This tradition, whose early twentieth-century protagonists are Lucien Lévy-Bruhl and Léon Brunschvicg, focuses on the study of mentality and the historicization of Kant. These two focuses remain in the work of Canguilhem; this is especially apparent when he analyses ‘organizing’ concepts (as Ian Hacking calls them) like those of the normal and the pathological. However, I will show that Canguilhem partly departs from that tradition in assigning ‘empirical’ concepts more independence from theories, worldviews and ways of thinking. In his history of the concept of reflex movement, Canguilhem argues that concepts can originate in very different contexts from those in which they find a life later on; indeed for him scientific concepts can originate outside science, or at least outside theories and worldviews that scholars like Bachelard would recognize as sciences. The greater independence that concepts have in Canguilhem’s account can also be read as the conclusion of the anti-positivistic journey of much of the tradition to which he belonged, evident in Brunschvicg, Hélène Metzger and Bachelard, to name a few. Canguilhem emphasizes that Comte regarded progress as the development of order; this order, already strongly questioned by previous philosophers, appears to be wholly undermined by Canguilhem.

Claessens, "The Renaissance Reception of Proclus’ Geometrical Imagination"

The Renaissance reception of Proclus’ Euclid-commentary confronts us with a remarkable paradox. Despite the text’s crucial role in Early Modern epistemology, e.g. in the Quaestio de certitudine mathematicarum, one of its central philosophical features, i.e. the productive role of the phantasia in geometry, seems almost “unreceived”. Proclus’ projectionist theory of geometry consists in a frontal attack on the traditional, Aristotelian point of view, namely that geometrical objects are nothing but accidents abstracted from the sense world, without any reference to our essential nature. According to Proclus, in geometry discursive thinking makes use of imagined representations of innate concepts. Due to and within imagination, the unextended and indivisible ideas with which the understanding is essentially equipped are projected as extended and divisible. Since those innate ideas belong essentially to the soul, imagination becomes an intermediary receptacle that enables the soul to see its inner self. Nevertheless, from a Neoplatonist perspective this self-knowledge can at best be auxiliary, due to imagination’s close connection with the senses. Strangely enough, although Proclus’ text was widely read and referred to by Renaissance authors, the Proclean concept of phantasia was generally either transplanted into an abstractionist (!) account of mathematics or simply ignored.

In this paper I want to address two sides of the same coin. Firstly, I want to examine some of the epistemic a prioris – stemming from the Aristotelian tradition – that can be held responsible for this bizarre reception, e.g. the commonly accepted abstractionist theory of mathematics. Secondly, I would like to show what happens when Proclus’ text is read outside this metamathematical tradition formed by Early Modern Aristotelianism. In his Harmonices Mundi Kepler invokes Proclus’ authority (against Aristotle’s) when he claims that the terms for constructing the harmonies belong essentially to the soul. Nevertheless, in spite of this shift to an innate conception of mathematical objects, Kepler’s use of the imagination does not coincide with Proclus’. Instead, imagination turns into an intermediary faculty that provides knowledge of our own essence by means of the body. Where Proclus’ imagination allows the soul to see itself despite its bodily locus, the Keplerian imagination permits the soul to recognize its own essential creativity in the sensible world.

Collodel, "Wasn’t Feyerabend really a Popperian after all? Methodological issues in the history of philosophy of science"

Although it is widely acknowledged that Karl Popper and his critical rationalist philosophy played a significant role in Feyerabend’s formative years and in his philosophical development, the import of this influence has become a matter of debate. The evidential basis upon which the diverging interpretations are grounded features primarily Feyerabend’s early published papers, where he explicitly based his arguments on central elements of critical rationalism, and his later scholarly publications, autobiographical remarks and correspondences, where he distanced himself from Popper in increasingly strong terms and downplayed his early link with Popper.

The main thesis of this paper is that once the chronological order, partiality and ultimate reliability of these sources and their content are taken into due consideration, and the evidential basis is widened to include Feyerabend’s correspondence with Popper and related unpublished archival material, a more accurate account of the Feyerabend-Popper relationship can be achieved.

These documents, besides providing valuable new details, allow us to correct faulty or biased memories and deliberately misleading statements featuring in Feyerabend’s autobiographical remarks, while definitely confirming and thus establishing some other biographical facts. In particular, what emerges is that despite Feyerabend’s later claims to the contrary, he held Popper and his views in high esteem from their very first acquaintance in 1948, and he started considering and treating him as his mentor at least from 1952-53 and well into the 60s. Moreover, it is apparent that after his appointment to Berkeley in 1958, Feyerabend acted as a bridgehead of critical rationalism in America: he taught it in his classes for a decade and propagated it in lecture series given at various US universities; he promoted the circulation among American scholars of Popper’s The Logic of Scientific Discovery (1959) and stimulated the publication of his Conjectures and Refutations (1963) as an anthology to be used in university courses; and he was instrumental in having Popper invited both to Berkeley and to the MCPS in the early 60s. On the other hand, not only did Popper think of Feyerabend as (one of) his most promising student(s), but he also contributed substantially to launching Feyerabend’s academic career in Bristol in the mid-50s and strongly supported Feyerabend’s joint appointment to Berkeley and the UCL as late as the mid-60s. Hence, these data seem to confirm that the early Feyerabend was a Popperian after all – at least from a certain, sociologically qualified, point of view.

The enlarged source basis considered here also proves helpful in weighing the relevance of the personal component in shaping intellectual disputes, and thus in appreciating the value of autobiographies and correspondences as historical sources in the history of philosophy of science. It should be noticed in this connexion that Feyerabend’s later harsh tones against Popper and his school seem motivated more by the personal rift that definitely opened up between the two in the late 60s, due to the well-known peculiarities of Popper’s sensitivity in matters of intellectual priority and personal criticism, than by any theoretical divergence.

Corneanu, "The Generic Context of Francis Bacon’s “New Logic”"

Francis Bacon found logic to be one of the most worryingly deficient branches of knowledge. Most of his reformation program was devoted to devising a “new logic” that could serve rightful inquiry into nature and so could become a valid method of scientific discovery. As such, this newly fashioned discipline also needed to include, Bacon proposed, a new theory of error, much more comprehensive than the doctrine of sophisms in the old repudiated scholastic logic. This theory of error, which Bacon presented in the guise of his celebrated doctrine of the idols of the mind, is an intriguing “logical” development, in that it is part of a more comprehensive inquiry into the operations and failings of the whole set of faculties of the mind, which Bacon distributes along the several branches of his tree of knowledge. It is thus rooted in an anthropological investigation whose territory logic shares with rhetoric and practical moral philosophy, and which also includes a therapeutic component, signaled by Bacon’s framing of these disciplines as arts of mending the intellect. The question this paper asks concerns the generic developments of the time which could form a natural context for this rooting of a theory of error in an anthropological-cum-therapeutic exploration. I argue that the early seventeenth-century “facultative logic” is a less convincing candidate in this respect than the genre of the treatise of the passions of the soul, a multi-disciplinary product which could accommodate a quite Baconian approach to error as an integral part of its anatomy of the “perturbations” of the soul.

Creath, "Richard Problems and Changes In Quine’s Discussion of Simplicity"

Simplicity looms large in the philosophy of science of W.V. Quine. But over the course of his long life there were major changes in its character and status. In the early 1950s simplicity was a kind of evidence that we could use to choose between two alternative revisions of a theory that had run into trouble. Simplicity does not guarantee the truth of a theory, but other things being equal, the simpler theory is likelier to be true. Used in this way simplicity significantly mitigates underdetermination, particularly in ontological matters. Quine is a realist about physical objects because that picture “rounds out and smoothes over” the gerrymandered world of sensory glimpses. Similarly, the introduction of atoms and molecules into one’s ontology simplifies the laws of middle-sized physical objects. Realism here is not just one view among many. Quine plainly thinks that his reader should adopt it too. In 1960 Quine’s picture of simplicity dramatically changes. It is acknowledged to be hard to define and to be the “blind resultant” of a chain of stimulations in their various strengths. It is moreover not on a par with empirical evidence. Quine says that simplicity is relevant to the thinking up of theories but not to the testing of them afterward. This view of simplicity is flatly incompatible with the earlier one. It does not completely replace it, however. Quine uses both accounts for the rest of his career on an as-needed basis. Also in 1960, in “On Simple Theories of a Complex World”, and then again in The Web of Belief (with Joseph Ullian, 1970 and 1978), Quine gives extended discussions of simplicity. He recognizes that it is subjective and formulation dependent, and hence that the maxim of the simplicity of nature is implausible. He does not try to define the notion, much less to show that it can meet the demands that he had raised against analyticity. He does not argue that the simpler theory is more likely to be true, but instead introduces a number of considerations as to why people might think that the simpler theory, other things being equal, is more likely to be confirmed. Construed as arguments, these considerations are most unsatisfactory. But I doubt that Quine thought that they were arguments. In the end Quine is too good a philosopher to think that simplicity and his attempts to shore it up are unproblematic. He is also aware that his philosophy of science collapses without it.

Crocco, "Poincaré’s neo-Kantianism and his conception of the Continuum"

Poincaré's theory of knowledge is organized around three notions: intuition, construction and convention. To understand Poincaré's theory of knowledge is to understand how these three notions are linked together and what their respective functions are in the formation and in the foundation of scientific thought.

This foundation has a double aspect. On the one side, we have what we will call the intuition of repetition. In many contexts Poincaré calls it the pure intuition of number, which is nothing but the intuition of the potential infinite. On the other side, there is the capacity to invent symbolic structures and to use them to freely construct new ones from existing ones. Most of the conventions in Poincaré's theory are such symbolic structures, which are expressible in the language of mathematics and which can be used to determine our experience. Determining our experience means filling the gap between the data of sensation and what we need to organize them into coherent knowledge. Poincaré stresses that not all the symbolic structures invented by the human mind can be transformed into useful conventions: they have to conform to our intuition in order to be useful, coherent and fruitful.

The image of mathematics resulting from this foundation is that of an endless activity of symbolic construction, stimulated by experience and grounded on intuition.

The Kantian root of such a conception seems to us undeniable, at least concerning, first, the solution given to the problem of the objectivity of knowledge, second, the role of mathematics in knowledge and, third, the notion of limit of knowledge. Poincaré's conception of mathematics, through the interplay of intuition, construction and convention, allows him a liberalized neo-Kantian answer to the question of the possibility of knowledge.

I will divide my contribution into two parts.

In the first part I will briefly present the historical context of Poincaré's thought in late nineteenth-century France and sketch the nature of its conceptual connection with Kant. Through this analysis it will be possible to explain precisely in what sense Poincaré can be said to be a neo-Kantian.

In the second part I will test the importance of the interplay of the notions of construction, intuition and convention on Poincaré's conception of the mathematical continuum. In opposition to the modern set-theoretical conception, Poincaré's continuum is an open, incomplete and increasable construction of symbols, based on the intuition of intercalation (a particular form of the intuition of repetition which seems to be also at stake in the foundation of topology) and “invented” in order to fill the gap between the “paradoxical” data of sensation and imagination and the needs of the understanding. We will base our discussion of Poincaré's conception of the continuum on three texts: « L’œuvre mathématique de Weierstrass », Acta Mathematica, n. 22, 1898, pp. 1-18; the second chapter of La science et l’hypothèse (1902); and « Augustin Cournot et les principes du calcul infinitésimal », published in 1905 in the Revue de Métaphysique et de Morale and reprinted in Dernières pensées, pp. 300-324. The analysis of these texts will show how Poincaré considered the problem of the continuum as a philosophical, mathematical and physical problem, and how he proposed a clear and original anti-dogmatic solution for each of them.

Crull, "Grete Hermann and the Gamma-Ray Microscope Gedankenexperiment"

In 1935, Grete Hermann—a student of Emmy Noether's, Leonard Nelson's and (briefly) of Werner Heisenberg's—authored one of (if not the) first philosophical treatments of quantum mechanics. It is a sad fact of history (partly due to lack of an English translation) that Hermann's manuscript remains largely unexamined by many who study the foundations of quantum mechanics. What little is known and said of Hermann regarding philosophy of physics usually concerns her attempt to salvage Kantian causation in light of the new indeterministic theory (cf. Léna Soler's 1996, which contains a French translation of Hermann's 1935, as well as Soler's 2009). I hope to deepen further our engagement with this critical (and neglected) figure by exploring the part of her essay in which she discusses Heisenberg's infamous gamma-ray microscope thought experiment. More importantly, I wish to examine the interpretation of the uncertainty relations and the dual nature of light underlying her treatment of this thought experiment.

First introduced in Heisenberg's 1927 paper on the uncertainty principle, the microscope thought experiment—meant to illustrate the strange consequences of the uncertainty relations—gave rise to an extended argument with Bohr concerning the interpretation of the duality of light. A few years later, Carl Friedrich von Weizsäcker, a close friend and pupil of Heisenberg's, wrote a paper (1931) in which he further discussed the questions of interpreting the nature of light and the uncertainty relations via Heisenberg's thought experiment.

While the interchange amongst these three figures on crucial interpretive issues related to the microscope thought experiment has been carefully studied, a key player in these events has been largely overlooked—namely, Grete Hermann. Hermann visited Leipzig in the early 1930s for the express purpose of engaging in deep philosophical discussion on quantum mechanics with Heisenberg and von Weizsäcker. As a result of these interactions, one can see distinctive traces of Hermann's thinking not only in von Weizsäcker's philosophical views, but in Heisenberg's as well. Hermann, too, gained from the collaboration; as much is evidenced in her 1935 paper, which developed directly out of discussions and coursework in Leipzig.

Interestingly (but perhaps unsurprisingly) an entire section of Hermann's treatise is dedicated to a discussion of the gamma-ray microscope thought experiment. I have translated this section into English and present an analysis of the work, part of which includes situating and accounting for Hermann's influence on these matters within the canonical narrative of the history of quantum mechanics.

Dahms, "Kuhn, Feyerabend and the Structural View of Theories"

The proponents of the structural view of theories, such as Sneed and especially Stegmüller and his school, reacted firmly to the challenge posed by Kuhn and Feyerabend in the 1960s. This challenge focussed upon issues like scientific development, holism and the alleged incommensurability of scientific terms before and after a scientific revolution. I will only briefly touch upon this background in order to introduce my theme.

In the central part of my presentation I will concentrate on the reactions of Kuhn and Feyerabend to the structural view of theories, and this I will do on the basis of unpublished material (mostly from the Stegmüller papers). Surprisingly, Kuhn reacted with euphoria. Even the more sceptical Feyerabend seems to have preferred the structural view of theories to the alternatives to his own way of considering theories offered by traditional philosophy of science. That prompts the question, to which I will turn in my last section: why was the structural view of theories not a big success from the start, and why did it fail to reach a wide audience in the English-speaking world (and now seems to be on the decline even in Germany)?

Damböck, "Thomas Kuhn’s Concept of Incommensurability and the Stegmüller/Sneed Program as a Formal Approach to that Concept"

In the 1960s and 1970s the traditional (and essentially formal) research program of philosophy of science was carried forward by philosophers like Patrick Suppes, Bas van Fraassen, Joseph Sneed and Wolfgang Stegmüller, who developed a semantic (model-theoretic) alternative to the received syntactic view of scientific theories. At the same time, however, the historical approach to the sciences, as developed by Thomas Kuhn, Paul Feyerabend and others, appeared as a fundamental challenge to the whole formalistic research program of philosophy of science. If scientific “paradigms” are incommensurable, then there seems to be no way to draw a unified and comprehensive picture of the sciences solely by means of formal logic. This fundamental challenge led to reactions of three types: (1) a rejection of the formal approach to the sciences; (2) a rejection of the historical approach to the sciences; (3) a reformulation of incommensurability that enables us to formulate the concept of incommensurability inside the formal field and thus to stay in the realm of logic without rejecting the historical approach to the sciences. The most important (and possibly the only) conception in the sense of (3) is the Stegmüller/Sneed version of the “semantic approach”. In my paper I shall consider some historical, formal and systematic aspects of that rather exotic approach that seem to be highly significant for the “ambivalent” situation of philosophy of science in the 1970s.

de Calan, "Carnap and Hilbert on the axiomatization of the non-mathematical fields"

In his lecture delivered before the Swiss Mathematical Society on 11 September 1917 and entitled “Axiomatic Thought”, Hilbert, as is well known, refined his account of the axiomatic method and surveyed the role of axiomatization not only in the mathematical fields, but also in the physical sciences, giving numerous illustrations of the axiomatic method drawn from various branches of physics: statics, mechanics, thermodynamics, electrodynamics, modern quantum theory, etc. Carnap, whose first project for a dissertation was entitled “Axiomatic Foundation of Kinematics”, and who often referred in his works to Hilbert’s Grundlagen as well as to “Axiomatic Thought”, was surely well aware of the general applicability of axiomatics to physics and to all fields of science. His entire theoretical project of the 1920s, in the Aufbau as well as in the Abriss der Logistik or in his Untersuchungen zur allgemeinen Axiomatik, could indeed be described as an outline of a general axiomatics for unified science. In the second part of his Abriss der Logistik, for example, whose first part was written in a Russellian and Whiteheadian style, Carnap switched his approach and studied cases of axiomatizations of non-mathematical fields, which shows that he fully understood the power of axiomatic thinking for unifying science. While Carnap’s contribution to axiomatics in the mathematical fields and the problems related to the internal limits of formalized systems have been well documented in recent years, the general axiomatization programs of Hilbert and Carnap can now be compared more systematically, given the recent publication of Hilbert’s foundational lectures, especially those on the foundations of physics. We will try to carry out this comparison, challenging the interpretation of Awodey and Carus (2001), who established a sharp contrast between Hilbert’s “axiomatic” method and Carnap’s attempt to reconcile the “genetic” and “axiomatic” methods, or to combine their respective strengths.

Demeter (with Zemplén), "Lessons from the Debate on the Demonstrativity of the Experimentum Crucis – or how to be charitable to controversies"

Current philosophical reflections on science have departed from mainstream history of science with respect to both methodology and conclusions. The article investigates how different approaches to reconstructing commitments can explain these differences and facilitate mutual understanding and communication between these two perspectives, with special focus on the methodology of science. Translating the differences into problems pertaining to principles of charity, the paper offers a platform for clarifying and resolving the differences between the two perspectives. The outlined contextual approach occupies a middle ground between mainstream history and sociology of science, which bracket questions of rationality, and individual coherence-maximizing, rationality-centered approaches. It can satisfy those who believe that science is an epistemically privileged endeavor and that its epistemic content should not be neglected when reconstructing methodological positions. It can also satisfy those who hold that it is naive to believe that the immediate context, e.g. the challenges to a theory, the expectations of the author about his audience, etc., does not affect the methodological position a scientist takes.

The theoretical considerations are exemplified with a close study of the debate following the 1672 publication of Newton's theory of light and colours, also offering a novel reading of the development of his methodological views concerning the demonstrativity of the famous crucial experiment. Although we only show the capacity of the framework to analyze a direct controversy, given that it is hard to think about any methodological utterance as detached from an argumentative context, this approach has the potential to be a general guide for interpretation.

De Souza, "Herder’s Leibnizian Foundations"

Virtually every aspect of Herder’s thought is informed by his engagement with Leibniz early on in his career. Herder self-consciously saw himself as moving beyond the standard Schulphilosophie, in the form of Baumgarten’s Metaphysik, that he was taught by Kant at the University of Königsberg in the 1760s, and sought to develop a philosophical framework more amenable to his intuitions about human culture and history laid out with such passion in his Journal meiner Reise from 1769. Leibniz, with his rejection of mechanistic materialism, restoration of substantial forms, and, above all, modern introduction of the concept of force or Kraft, was by far the most important source for this framework. At the basis of Herder’s philosophy and his convictions about Bildung and the formation of cultures, however, lies his most important disagreement with Leibniz: his commitment to soul-body interaction. Herder lays this out in his short work, Ueber Leibnitzens Grundsätze der Natur und der Gnade (1769).

Through an analysis of this and other early short works, the paper demonstrates how Herder worked out the most important of his Leibnizian foundations. It will be shown how Herder combined aspects of two conceptions of the soul, i.e., the soul as thinking substance and the soul as substantial form, to develop an original ontology in which the telos of the soul qua thinking power propels it in the construction of a body through whose senses it will acquire knowledge of the outside world, with which it will now also be able to interact. Herder’s conception of force, which constitutes ontological bedrock for him, lies at the heart of this process. By positing a fundamental identity among forces of different kinds, Herder, it will be shown, claims that the soul is able to harness and direct the forces of attraction and repulsion, which he adopted from the pre-critical Kant, in order to realize the process by which the soul constructs an organic body for itself. Throughout, Leibniz, whom Herder called “der größte Mann den Deutschland in den neuern Zeiten gehabt [hat]” (“the greatest man that Germany has had in modern times”), is the inspiration, his notions of force, petites perceptions, and substantial form all being interpreted in a novel way by Herder to serve the overarching purpose of grounding a philosophy of life.

d'Hoine, "Proclus’ Argument from Imperfection"

One of the perennial questions in the philosophy of mathematics concerns the origin of mathematical objects. Are they mere abstractions from perceptible objects, or do they derive from innate conceptions? Do mathematical shapes and forms result from a process of idealization, or do we intuit their content? In his Commentary on the first book of Euclid’s Elements, the fifth-century Neoplatonist Proclus provides a fairly straightforward answer to these questions. Being a good Platonist, Proclus offers several arguments for the pre-existence of mathematical forms in the soul and, prior to the soul, in the intelligible world. One of these arguments could be called the argument ‘from imperfection’. Sensible objects fall short, in accuracy and perfection, of the mathematical shapes and qualities that we ascribe to them. Yet the fact that the soul can judge the shortcomings of sensible objects suggests that it must already possess the standards of perfection by which it measures them. These standards are the mathematical reason-principles. They pre-exist in the soul, which draws them from the intelligible Forms.

In this paper it will be my aim to provide a careful analysis of this argument and to reconstruct its philosophical antecedents. Parallels in Syrianus’ commentary on the Metaphysics prove that the argument is hardly original. But what is its philosophical source? The answer may come from the third book of Proclus’ Commentary on the Parmenides, where the Lycian philosopher develops six philosophical arguments in defence of the existence of Forms. One of these is the argument from imperfection, which Proclus now states in terms very much reminiscent of the famous argument in Plato’s Phaedo (74ac). The Neoplatonic commentators on the Phaedo (Damascius and Olympiodorus) indeed thought that the argument did not prove transcendent Forms, but rather universal reason-principles in the soul. Moreover, they interpreted Plato’s example in the argument, equality, in strictly mathematical terms. It can therefore hardly be doubted that Proclus’ argument ‘from imperfection’ in the Commentary on Euclid must be understood as a Neoplatonic application of the argument in Plato’s Phaedo.

Distelzweig, "17th Century Teleo-Mechanics in Anatomy: Muscle, Mathematics and Animal Locomotion"

In this paper I examine the presence of “teleo-mechanics” in three 17th-century works on muscle anatomy and animal locomotion: (i) De Musculi Utilitatibus by Hieronymus Fabricius ab Acquapendente; (ii) a collection of unfinished notes on muscles by William Harvey from 1627 (first edited, with translation, by Gweneth Whitteridge and published in 1959 as De Motu Locali Animalium); and (iii) De Motu Animalium by Giovanni Alfonso Borelli. By “teleo-mechanics” I mean the integration of mathematical mechanics into teleological explanations of anatomical features of (in this case) muscles.

Though some scholarly attention has been given to each of these works (e.g., Baldini 1997; Des Chene 2005; Jaynes 1970; Stannard 1970; Whitteridge 1959, 1979), none of it seems to appreciate properly the presence and precise historical significance of this teleo-mechanics. As a result, these studies mischaracterize important changes in explanatory goals across the three works. After briefly characterizing this weakness in the literature, I will attempt to remedy it by comparing the presence of this general mode of explanation across the three works, examining (1) the conceptualization and justification of such explanations, (2) the kinds of features thus to be explained, and (3) the precise character of the explanations offered.

As a necessary prerequisite to the discussion, I begin by providing a brief orientation to Galenic and Aristotelian teleological explanation in the medical-anatomical tradition, the preface to the Pseudo-Aristotelian Mechanical Questions, and the early modern notion of a subalternate science. These will be crucial for properly understanding this 17th-century teleo-mechanics and appreciating its variations. In fact, I will suggest, an under-appreciation of this background--especially of the role of teleology in anatomy--has led the small literature on these and related works to effectively miss the "teleo" in teleo-mechanics (with unhappy results).

As an indicator of the interest of 17th-century teleo-mechanics, I will conclude by briefly considering some interesting ways in which such modes of explanation differ from Descartes' well-known “corpusculo-mechanical” accounts in human physiology.

Dobre, "Jacques Rohault and the use of experiment in Cartesian physics"

Jacques Rohault (1618-1672) was one of the leading Cartesians in seventeenth-century France. Philosophically committed to the views of Descartes, and socially well rooted, his Cartesianism was respected and debated all over Europe. From the social point of view, Rohault became the son-in-law of Claude Clerselier and the host of some very famous “conférences publiques,” which gathered around him in Paris various people interested in the new philosophy. During these meetings, Rohault devoted particular attention to problems concerning physics and, unlike other fellow Cartesians, he did not hesitate to investigate these problems experimentally. Thus, he performed countless experiments, some with instruments he invented himself, such as his own version of the air-pump. The outcome of these conferences was his celebrated Traité de physique (1671), a book of Cartesian physics which was quickly translated into Latin and printed in Geneva, Amsterdam, London, and Louvain, among other places. A number of universities adopted this book as a textbook on natural philosophy, thereby contributing to the dissemination of Cartesian ideas.

Of particular interest for the history of science is the English translation of this book, which includes the annotations of the celebrated Newtonian Samuel Clarke. Already in the Latin edition of 1702 Clarke had commented on the text, and these comments were preserved in the subsequent editions, producing a “battleground between Newton and Descartes” (Des Chene). Besides these Newtonian additions to Rohault’s Cartesianism, the treatise has many other merits, including the attempt to clarify the status of physics as an independent discipline. To this end, Rohault begins by distinguishing natural philosophy from metaphysics, the latter being presented as the main impediment to the development of the former. His solution, instead, is to build physics upon empirically grounded theories. How his view of experiment fits into the general Cartesian framework of metaphysically grounded natural philosophy will be the focus of this paper. Rohault’s solution to some of the difficulties inherited from Descartes’s natural philosophy will thus be examined in close connection with his use of instruments to perform experiments and with the theoretical foundations of his physics.

Domski, "The Mathematics and Metaphysics of Descartes’ Mature Philosophy"

In Part Two of his Principles of Philosophy (1644), Descartes famously claims that the only principles which he requires in physics ‘are those of geometry and pure mathematics’, for ‘these principles explain all natural phenomena, and enable us to provide quite certain demonstrations regarding them’ (Part Two, Article 63). At first glance, there is nothing peculiar about these remarks given that, for Descartes, the material bodies studied in physics are essentially extended, and as such, are bodies that possess all those properties that ‘are comprised within the general subject matter of pure mathematics’ (AT VII, 80; CSM II, 55). Nonetheless, there remains something curious about the relationship Descartes establishes between his physics and pure mathematics when we consider the manner in which he presents the physics of the Principles: Descartes does not present mathematically formulated laws of motion or provide the sorts of mathematical derivations of physical principles that we might expect in light of the remark above. Instead, as emphasized in Dan Garber’s recent work, what Descartes offers is a physics – a system of the laws and rules that govern the motion of natural bodies – that is firmly grounded on the metaphysical first principles presented in Part One of the Principles.

My goal in this paper is to address why mathematical formalism and mathematical derivations are both conspicuously absent in the Principles, and I do so by taking a careful look at the account of geometry that Descartes embraces in his mature work. My interpretation of the geometry of the Principles takes seriously the progress in Descartes’ thinking about geometry over the course of his career and uses as its springboard Descartes’ claim to Mersenne in the late 1630s, immediately after the appearance of his groundbreaking Géométrie (1637), that he has decided to give up ‘the investigation of [geometrical] problems which function merely as mental exercise’ so that he might ‘have more time to devote to another sort of geometry where the problems have to do with the explanation of natural phenomena’ (Letter to Mersenne, 27 July 1638; AT II, 268; CSMK 118-119). By examining the differences between the ‘abstract geometry’ of Descartes’ early works and the geometry of the Principles, written at a time when he had turned his attention to the explanation of nature, I hope to better explain the relationship that Descartes forges between his geometry and his physics. Moreover, I aim to show that it was precisely by refashioning his account of geometry during the mature period of his work that Descartes could establish a meaningful connection between mathematics and nature as well as between his mathematics and his metaphysics.

Douglas, "Social Science, the Unity of Science, and Values of Science"

When Weber articulated his ideal of value-neutrality in his 1904 essay “’Objectivity’ in Social Science and Social Policy,” he did so with the idea that the social sciences had a special problem with respect to the issue of values in science. He argued that because the subject of social science was man himself and the specific cultural practices of man, the issue of how values function in science was both particularly acute for social science and of a special character there. In brief, it was Weber’s view that because of the complexity of social life, social science cannot proceed without some value judgments to direct what is to be taken as significant; nevertheless, the descriptive and the normative must be kept conceptually distinct in the practice of social science. By the 1930s, however, with the rise of the project to unify the sciences, such demarcations between natural and social science fell out of favor. Arguments about the role of values in social science shifted, centering instead on how the social sciences faced no special problem of values, but rather faced the very same problems as the natural sciences. That social science studied man was to be no particular obstacle to the objectivity of social science, according to logical empiricists such as Otto Neurath and George Lundberg. Whether or not social science had a special problem regarding objectivity became a particular concern in the post-World War II era, as debates about government funding of science gathered steam. Whether the social sciences should be funded along with the natural sciences was a key point of contention in the debates over the founding of the National Science Foundation, and philosophers continued to dispute whether social science bore a special burden in achieving value neutrality or objectivity. I will trace the history of this debate, from Weber to Nagel’s 1961 tome The Structure of Science, locating the arguments made by philosophers in the context of both the prevailing philosophical winds (such as the rise and decline of the unity of science project) and the debates over science policy (such as the founding of the National Science Foundation).

Drozdova, "Alexandre Koyré disciple of Emile Meyerson: immutability and historicity of human reason"

Alexandre Koyré is known primarily for his research on the history of scientific thought; his main interest was to grasp the moments of transformation of rational structures within history. He was far from believing, however, that human rationality is entirely dominated by historical development; on the contrary, in this paper I will argue that Alexandre Koyré accepts Emile Meyerson’s claim about the immutability of human reason. Yet Koyré modifies and elaborates Meyerson’s thesis: within rationality in general he distinguishes an immutable core, constituted by the logical laws of reasoning, from an outer level of “mentality” which is subject to historical changes and transformations.

It is well known that the two great French epistemologists Alexandre Koyré and Emile Meyerson were personal friends. Koyré, at that time a young scholar, belonged to the small circle of Jewish intellectuals that gathered around Meyerson in the twenties and early thirties. In a letter to the Société Philosophique Française in 1961, Koyré wrote about how important this experience had been for him; he emphasized that it was under Meyerson’s influence that he turned from the history of religious and philosophical thought to the history of science. Koyré devoted a number of texts to the analysis of Meyerson’s epistemology, including an important text published in Russian as early as 1926.

In all these papers Koyré points out the main idea of Meyerson’s epistemology: the essence of human reason never changes; it stays the same under any circumstances and at any moment in history. At the same time, Koyré’s own attention is directed towards historical changes in scientific rationality rather than towards the inalterability of reason; that is why G. Jorland claims that Koyré’s point of view is much closer to Brunschvicg’s position than to that of Meyerson, and that Koyré regards reason as entirely alterable in the course of history.

I cannot agree with Jorland’s claim that for Koyré the very structure of rationality is involved in the process of historical transformation. I intend to show that for Koyré, as for Meyerson, there is a certain stability in human reason. This stability can be perceived in at least two different respects. First, it can be seen in the immutability of logic, which is constantly stressed by Koyré. Moreover, according to Koyré the adventure of thought within history can be described as a certain quest for truth, an itinerarium mentis in veritatem. The faces of truth change in the course of history, but the moving force of reason, its desire for truth, remains always the same.

Ducheyne, "Facing the Limits of Deductions from Phenomena: Newton’s Quest for a Mathematical-Demonstrative Optics"

In this talk, I take up and develop the suggestion made by the late I. Bernard Cohen, Casper Hakfoort and Alan E. Shapiro that Newton’s methodological ideal of “deducing causes from phenomena”, on which we have elaborated in the preceding chapters, was not equally attainable in the study of optical phenomena. If this suggestion is correct, then in the apex of Newton’s optical researches, The Opticks, which in fact contained a set of interrelated theories on optical phenomena, Newton failed to deduce these theories rigidly in the sense in which he had done so in the Principia.

By contrasting Newton’s methodology in the Principia with the way in which theoretical conclusions are established in The Opticks, I shall be able to explain why Newton was less successful in accommodating optical phenomena to his own methodological desideratum of deducing causes from phenomena. After commenting briefly on Newton’s methodology in the Principia, I will review the kinds of trouble that Newton ran into when trying to methodize optics in a Principia-style fashion. My focal point will be Newton’s arguments for the thesis that white light consists of rays differently ‘refrangible’. Special attention will be paid to Newton’s presumed application of Rule II of the regulae philosophandi in establishing that part of his optical theory. It is shown that Rule II licenses the identification of instances of causes of the same kind which have been shown to be true and sufficient to explain their phenomena. Thus, on the basis of Rule II we identify two instances of causal parameters of the same kind which have separately been derived from phenomena. The disanalogy involved is thus that in the experimentum crucis (and its related sections in The Opticks) we use an argument for uniformity to establish a single causal claim, while Rule II licenses the identification of similar causal parameters which were independently established and deduced from phenomena by systematic dependencies.

In the Principia there are systematic dependencies, derived from the laws of motion, between causes and effects. Given the absence of such systematic dependencies in optics, Newton could offer only sufficient causes in The Opticks. Moreover, by means of what I call ‘macro-micro inference tickets’ (i.e. Props. LXX-LXXVI, Book I), Newton was able to license conclusions about the inverse-square centripetal forces of each of the individual micro-particles that constitute a macroscopic body from the overall inverse-square centripetal force exerted by that body – in this way, Newton was able to back up transductive inferences about the particles constituting a macroscopic body. In The Opticks none of the above was at hand. Newton clearly wanted to do more than simply establish the phenomenological laws regulating optical phenomena: he also wanted to provide a solid physical account of optical phenomena. However, given the empirical and methodological problems Newton later encountered when methodizing optics in a Principia-style fashion, establishing non-hypothetical physical interpretations of optical phenomena turned out to be quite a difficult matter.

It is the aim of this talk to pinpoint the dynamics between method and ‘phenomena’ in Newton’s optical research.

Dunlop, "“Pure Intuition” in the Critique of Pure Reason and in the Later Elaboration of Kant’s Critical Philosophy"

Kant’s argument for the syntheticity of mathematics in the Critique of Pure Reason (first edition 1781) seems to turn on the need for intuition in proving particular theorems, where intuitions are understood as individual representations that are singular and relate immediately to objects. In particular, the doctrine that mathematical concepts are “constructed in pure intuition” appears to function as an account of how sums are reached in arithmetic and theorems are proved in Euclidean geometry. As a result, much attention has been given to the question whether Kant thinks intuition is required to deduce results from first principles, or merely to cognize the principles.

I will argue that in Kant’s later presentation of his views, the case that mathematics is synthetic is virtually limited to the role of intuition in cognizing first principles. Beginning in 1789, the Leibnizian philosophers J. A. Eberhard and J. G. Maass attacked Kant’s view that judgments that extend knowledge require intuition (understood as that which gives objective reality to concepts) to connect subject and predicate. They claimed that ampliative judgments not requiring intuition are found in mathematics. In reply, Kant maintains that the definition of a mathematical concept already includes the construction that establishes its objective reality. (This doctrine already appeared in the Critique of Pure Reason and is elaborated in Kant’s lectures on logic.) The intuition on which mathematical cognition relies must now be understood as the faculty that makes immediate and singular representation possible, rather than the representations produced by it.

This foregrounding of intuition’s role in cognizing definitions raises several issues. (1) It becomes difficult to see the relevance of representation of particulars, such as geometrical diagrams or collections of enumerated objects, for mathematical cognition. This avoids the notorious problem of explaining their relevance. But it makes the first Critique’s argument for the syntheticity of mathematics appear idle, and makes it a problem to explain why mathematical cognition requires anything other than the faculty (namely, of concepts) that deals in generalities and rules. (2) Because Kant subscribes to the view that mathematical concepts are produced by “arbitrary” [willkürlich] syntheses, he was already vulnerable to Maass’s objection that any analytic judgment could be made synthetic by including the predicate in the concept of the subject. It is now imperative that he answer it. (3) Salomon Maimon’s objection to Kant’s view of definitions as constructive, namely that Kant passes off criteria for the concept’s application to extant objects as rules for making new objects, also becomes urgent.

Kant can resolve (1) and (2) by clarifying the constraints on the formation of mathematical concepts. I explain how the first Critique’s doctrine of the schematization of mathematical concepts can be used to this end.

Easton, "The Father of Cartesian Empiricism: Robert Desgabets on the physics and metaphysics of blood transfusion"

The early history of blood transfusion begins in 1628 with the publication of Harvey’s work on blood circulation, De Motu Cordis, and ends in 1668, the year of the first allegedly successful transfusion of blood into a human subject by the French physician Jean Denis, and of the official French order prohibiting the procedure. The subject of special interest in this brief and uncelebrated history is Robert Desgabets (1610-1678), an early defender and teacher of the Cartesian philosophy at St. Maur, in the region of Lorraine, France. Desgabets’ Discours de la communication ou transfusion du sang (Recueil B.N. Manuscrits Thoisy 326, c. 1658) contains a defense and description of the procedure. This three-page manuscript is a lecture delivered at one of the meetings held at the home of M. de Montmor in July 1658. Letters and documents describing the events of these conferences during Desgabets’ eight-month stay in Paris in 1658 indicate that he had numerous discussions with the Cartesian physicist Jacques Rohault and also with another leading Cartesian scientist, Gerauld Cordemoy. For Desgabets, the merit of Cartesianism was that it provided a complete explanation of the world—from the physics and mechanics of the communication of blood to the mechanics of transubstantiation. Remarkably, Desgabets entered into a debate between 1671 and 1672 with Thomas Le Géant, Archbishop of Paris, over Desgabets’ thoughts on the Eucharist, stated in an anonymous work, Considérations sur l’état présent de la controverse touchant le T. S. Sacrement de l’autel (1671). This latter topic was to prove especially contentious, and has been held responsible for the persecution of Cartesianism in France.

Desgabets’ empirical research (most notably his work on blood transfusion), his conception of the contingent nature of the eternal truths and of material bodies, and his insistence on the necessary role of sensible signs in the formation of all ideas all testify to the essential and necessary role he sees for expérience. His allegiance to Cartesianism, mechanistic explanation, and experience, and his role in the dissemination of Cartesianism in France, thus merit examination.

Ferrari, "William James and Philosophy of Science"

In his book Pragmatism (1907) William James offers a short but very illuminating account of contemporary philosophy of science. James argues that, according to Mach, Duhem and Poincaré, «no hypothesis is truer than any other in the sense of being a more literal copy of reality. They are all but ways of talking on our part, to be compared solely from the point of view of their use». Moreover, James gives a holistic account of what the acquisition and growth of truth within the historical process of knowledge means, an account which seems undoubtedly ‘up to date’ to a reader well acquainted with the later philosophy of science from Neurath to Quine. A new truth is always «a go-between, a smoother-over of transitions», so that the older stock of truths can be preserved with a minimum of modifications, while the new truth is adopted within the framework of the historically given scientific insights. James is moreover fully convinced that an anti-foundationalist view of knowledge is required if we want to take into account that our thinking develops in quite a different way from that described by traditional philosophy since Descartes. According to James, «our knowledge grows in spots», and no ultimate foundation of knowledge can be exhibited: our prejudices and beliefs are deeply embedded in the process of knowledge, which James suggests is always «stewed down in the sauce of the old».

To be sure, such statements have nothing to do with the usual “image” of James that was widely influential at the very beginning of the 20th century. Indeed, James was not at all the philosopher supporting the Yankee way of thinking that was deplored by his most prominent German colleagues at the time of the III International Congress of Philosophy in Heidelberg (September 1908). James was rather a late nineteenth-century philosopher who was perfectly aware of his commitment to recent philosophy of science. In his essay Humanism and Truth (1904), for instance, James pointed out that his pragmatism was connected with the far-reaching transformations in the exact and natural sciences during the preceding decades. The enormously rapid multiplication of theories, he maintained, had brought with it a new conception of truth: «Our mind has become tolerant of symbol instead of reproduction, of approximation instead of exactness, of plasticity instead of rigor».

In my talk, I would like to give an essential picture of James’ connection to contemporary philosophy of science, looking in particular at his relationship to Mach, Duhem and Poincaré. Moreover, it seems useful to offer a brief account of Neurath’s reception of James’ pragmatism, and to recall an outsider such as Wilhelm Jerusalem, who in 1908 translated Pragmatism into German, endorsing an interpretation of James’ philosophy which formed a very interesting background to the ‘first Vienna Circle’ and the origins of Logical Empiricism.

Fisher, "Philosophical Foundations of a Science of Architecture, 1886-1954"

One of the most compelling episodes of philosophy of science influencing other areas of philosophy occurs in the era of Helmholtz, Fechner, and Lotze, when aesthetics was shaped by empirical psychology, psycho-physiology, and methodological accounts of sciences of the mind. This story is well known in the cases of music and art.

In a less well-known chapter, elements of an architectural science were developed on similar foundations. In his earliest work, the art historian Heinrich Wölfflin outlines a program for grasping the nature and psychological function of architecture, architectural objects, and their constituent parts, as rooted in the physiology, empathic psychology, and formalism of his day. Similar views were mooted by other aestheticians in his milieu, and at the end of the nineteenth century the ‘Germanist’ scholar Victor Basch brought this tradition to France.

A Serbian disciple of Basch, Miloutine Borissavliévitch, spent the next few decades developing further aspects of a self-styled ‘science of architecture’. The aim of this science is to account for the nature of architectural composition—and in particular the conditions for creating beauty through the use of forms—by appealing to characteristic physiological and optical reception of and reactions to forms. Thus, physiological optics and geometry are the bases for a theory of perspective, and a formal aesthetics entails principles of architectural composition governing proportion and the harmony, symmetry, asymmetry, and rhythm of forms in architectural works.

While Borissavliévitch’s ‘scientific aesthetics of architecture’ is largely lost to time—and empirical spuriousness—his effort is valuable for addressing the matter of what such a science might be, as built on these related views on method:

(a) Empathic method—Following the German Einfühlung tradition and French adherents like Bergson, we can project our physiological structures and functions into architectural forms, and determine aesthetic ‘success’ by this standard;

(b) Formal analysis—Per Wölfflin, architectural works are constituted and assessed by their formal properties, which are immediate to the senses; as distinct from Wölfflin’s view, reason also plays a role in grasping those properties, and architectural content is rooted in the forms.

(c) Group taste (aesthetic judgment) concept—Objectivity in determining aesthetic preferences is attained by summing over individual subjective preferences.

Borissavliévitch frames his science of architecture as an exercise in statistical aesthetics, modeled after Fechner’s efforts (in Vorschule der Aesthetik, 1876)—as improved by sampling techniques and multivariate statistical analysis, and so yielding a more robust account of group taste. Fechner is also found wanting in his psycho-physics; Borissavliévitch proposes a proto-eliminativist challenge to Fechner’s ‘parallelist’ dualism, and rejects his proto-functionalism as well. An experimental aesthetics must, accordingly, be subsumed under a psycho-physiology—and a physiological optics, in particular—rather than a psycho-physics. While Borissavliévitch’s own star soon faded, his emphasis on experiment, causal explanation, ‘facts rather than concepts’, and principles or ‘laws’ would endure in the psychology of architecture that developed over the next years.

Folina, "Axioms, Evidence and Truth"

Up until the 19th century, the traditional view held that axioms are self-evident truths about a certain domain of objects. But what does "self-evidence" mean in this context? How, on the traditional view, do we recognize the obvious? And what do we do when axioms, such as the parallel postulate, fail to seem "self-evident"?

When an axiom fails to seem "self-evident" one response is to try to replace it with another axiom that is more self-evident. This was the main approach to Postulate V, the parallel postulate, for hundreds of years (producing in its wake many logically equivalent versions of Postulate V). A more radical response, however, is to give up the requirement of self-evidence – and re-conceive the nature of an axiom. The first response is mathematical, for it involves working within a mathematical framework to come up with a more appealing mathematical statement. The second response, however, is philosophical. For it involves a revolution in the conceptual framework of mathematics, rather than a derivation within that framework.
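
For reference, the postulate in question and one of its familiar equivalents can be paraphrased as follows (modern paraphrases supplied here for orientation only, not quoted from the sources under discussion):

    Postulate V: if a transversal makes interior angles \alpha, \beta on one side of two straight lines with \alpha + \beta < \pi (two right angles), then the two lines, if produced indefinitely, meet on that side.
    Playfair's axiom (logically equivalent): through a point P not on a line \ell there passes exactly one line parallel to \ell.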

I propose to examine two such philosophical revolutions. One concerns geometric axioms in the late 19th and early 20th centuries; here I will focus on Poincaré's conception of geometric axioms as conventional. The other emerges in the British algebraist school of the early to mid-19th century; here I will focus on De Morgan's conception of algebra as formalist. The rise of conventionalism and formalism is often associated with the development of non-Euclidean geometries, and the philosophical work is thought of as primarily subsequent to the mathematical development of non-Euclidean systems. That is, given the existence of mathematical alternatives, a new philosophy of geometric axioms was needed. If there are viable alternatives, then no one system can be the obvious set of truths. Axioms as truths that simply reflect prior meanings thus yielded, in this case, to the idea of axioms as determining meanings (of 'point' and 'straight line', for example) by stipulating truths.

It is interesting, however, that just as the mathematical work on non-Euclidean geometries was beginning, and before the philosophy of geometric conventionalism was articulated, we can already find similarly revolutionary ideas about axioms - in the context of algebra rather than geometry. I will argue that - though related - there were at least two distinct philosophical motives for the revolutions. The problem in algebra was an absence: there was no natural interpretation to guide the development of the mathematical principles. In contrast, the problem in geometry was an overabundance: there were too many viable possibilities for any one interpretation to seem a priori mathematically true. The result, in both cases - not enough interpretations in the one, too many in the other - was a new conception of both mathematical axioms and mathematical truth.

Formica, "Almost von Neumann, definitely Gödel. The 2nd Incompleteness Theorem’s Early Story"

Since the early 1980s – cf. Hao Wang (1981), John Dawson, Jr. (1984, 1984a) – it has been known that in the background of the discovery of the 2nd incompleteness theorem lies the meeting of two young mathematicians at the Königsberg Conference in 1930. The two were Kurt Gödel and John von Neumann. However, only after the publication of the last two volumes of Gödel’s Collected Works (2003), devoted to his correspondence, has it become clear, at least to those who did not have access to Gödel’s Nachlass, that this meeting is the matter of a story. After Gödel’s announcement of an early version of the 1st incompleteness theorem at the Conference, von Neumann had a private conversation with him and some weeks later (cf. To Gödel, 20 November 1930) wrote his colleague a letter in which he states that he has proven the 2nd incompleteness theorem. Unfortunately, Gödel had already submitted for publication his 1931 paper on undecidable propositions, in which the theorem was announced. Once he learned this, von Neumann loyally decided to leave to Gödel the paternity of the great discovery (cf. To Gödel, 29 November 1930), which was «a natural continuation and deepening» of the earlier result (the 1st incompleteness theorem). Nevertheless, the early correspondence between the two remains of great interest (cf. To Gödel, 10, 12 January 1931), because they gave different interpretations of the theorem’s impact on the foundational debate in mathematics. While Gödel remained firmly cautious at least until 1933, von Neumann did not hesitate to declare the main foundational question negatively solved: «there is no rigorous justification for classical mathematics» (cf. To Gödel, 29 November 1930). Analyzing Gödel’s correspondence with von Neumann and Jacques Herbrand, Wilfried Sieg (2003, 2003a, 2005, 2006) has answered many questions about the 2nd incompleteness theorem’s early story. For instance, he has explained the nature of the disagreement between the two mathematicians and has also shown what later motivated Gödel to change his mind on the theorem’s impact on the foundational debate. There is only one question that has not been, but needs to be, answered in order to complete the picture: how did von Neumann reach the proof of the theorem in a few weeks and with very little information? The lack of an answer to this question – which is both crucial and fascinating – is due to the fact that von Neumann’s proof has been lost. In this paper I formulate a conjecture for such an answer through the analysis of some basic documents (a) from the Königsberg Conference, (b) around the Königsberg Conference, and (c) about the Königsberg Conference.
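
For orientation, the theorem at issue can be stated in a standard modern form (a formulation supplied here for reference, not drawn from the Gödel-von Neumann correspondence itself): for any consistent, recursively axiomatizable theory T containing enough arithmetic (e.g. Peano Arithmetic),

    T \nvdash \mathrm{Con}(T),

that is, T cannot prove the arithmetized statement of its own consistency.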

French, "Reichenbach, von Kries, and Boltzmann on the Justification for Objective Probabilities"

In this talk, I briefly detail the adoption of an objective, or physical, notion of probability in the work of the physicist Ludwig Boltzmann before 1877, and then compare the details of that account with a similar conception of objective probability developed first by the neo-Kantian Johannes von Kries (1886), which was in turn influential for Hans Reichenbach in his 1915 dissertation. This examination has two tasks: (i) to describe what distinctions were available, especially from Boltzmann, at the turn of the century to characterize objective probability, and (ii) to locate and articulate Reichenbach’s 1915 views on probability. This project is important precisely because in 1915 Reichenbach did not have a straightforward limiting-frequentist notion of probability. Instead, under the influence of von Kries, Reichenbach relies on two key ideas. First, he provides a justification for physical probability as constitutive of the structure of Spielräume, or “event spaces.” He then offers an explicitly Kantian argument for a transcendental principle that guarantees the existence of the probability functions required for scientific knowledge (1915, 41; Glymour and Eberhardt 2008, 4-5).

Although not motivated by transcendental arguments, Boltzmann’s work in the kinetic theory of gases, in a series of papers from 1866 to 1872, exhibits physical, or rather mechanical, justifications for the symmetry arguments that allow him to derive an equilibrium distribution for a particle system. For example, in 1868 Boltzmann considers a bounded gas system of N particles with fixed total energy. He then argues that, because of the physical properties of such an idealized system (and the conservation of energy), we need only consider N-1 degrees of freedom, from which a unique probability distribution for equilibrium is derivable. In general, before 1877 Boltzmann is motivated to reconcile a commitment to a mechanical, or deterministic, interpretation of physics with the employment of statistical tools to analyze the equilibria of gases. By 1877, however, Boltzmann comes to adopt a statistical interpretation of physics in order to examine more complicated (e.g. irreversible) processes, which makes his commitment to an objective notion of probability unclear. It is this later work of Boltzmann, i.e. Boltzmann’s Gastheorie, that both Reichenbach and von Kries would later find influential, and not his earlier views.
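
For reference, the kind of equilibrium distribution at issue can be written in its standard modern (Maxwell-Boltzmann) form; the expression is supplied here only for orientation and is not a reconstruction of Boltzmann’s own 1868 derivation:

    f(\mathbf{v}) = \left(\frac{m}{2\pi k T}\right)^{3/2} \exp\!\left(-\frac{m\,\lVert\mathbf{v}\rVert^{2}}{2 k T}\right),

the probability density for the velocity of a single particle of mass m in a gas at equilibrium temperature T, with k Boltzmann’s constant.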

We can then contrast Boltzmann’s mechanical and deterministic motivations for adopting an objective notion of probability with the motivations of von Kries and especially of Reichenbach, rooted in Kantianism and a commitment to causality. Like Boltzmann, Reichenbach resists any subjective notion of probability, but he also diverges from von Kries on this point, as the latter takes advantage of the principle of indifference to construct probability functions. Instead, Reichenbach stresses that probabilities express what should be true, i.e. a rational expectation which requires a factual claim based on objective regularities (1915, 47). Just as von Kries bases probability functions on physical structure, Reichenbach offers a physically “ideal” example of a probability machine which, using techniques similar to work by Felix Hausdorff and Henri Poincaré, allows him to classify the Spielräume into “strike ratios” from which he can justify the introduction of equal probabilities. I will investigate the similarities and differences between such an argument and Boltzmann’s early work, including a comparison of Reichenbach’s Kantian and Boltzmann’s mechanical commitments to physics.

Gauthier, "The notion of analytical apparatus"

The notion of analytical apparatus was introduced in the 1926 paper by Hilbert, Nordheim and von Neumann, «Über die Grundlagen der Quantenmechanik»: the analytical apparatus («der analytische Apparat») is simply the mathematical formalism or the set of logico-mathematical structures of a physical theory. Von Neumann used the notion extensively in his 1932 seminal work on the mathematical foundations of Quantum Mechanics and he stressed particularly the auxiliary notion of conditions of reality («Realitätsbedingungen»). I want to show that this last notion corresponds grosso modo to our notion of model in contemporary philosophy of physics.

Hermann Weyl, who initiated the use of group theory in Quantum Mechanics, also exploited the idea in his conception of the parallelism between a mathematical formalism and its physical models. We know that Weyl defended a constructivist interpretation of physics and of the exact sciences in general. But one can go even further and argue that Minkowski, a close friend of Hilbert, shared the same idea of the prevalence of the analytical apparatus over its physical interpretation. It can be shown that in his papers on physics, particularly the 1908 paper «Raum und Zeit», Minkowski proposed a derivation of physical geometry from the geometry of numbers, in the Leibnizian spirit of the harmony between pure mathematics and the physical world.

I shall conclude that this point of view concurs with the empirico-transcendental turn in the history of philosophy of science, as exemplified in particular in the work of Friedman and Ryckman and, more recently, of van Fraassen.

Gerstorfer, "Uexküll between Kant and Quine"

The German biologist Jakob von Uexküll (1864-1944) is best known for introducing the concept of »Umwelt« into biology. Less well known is the fact that Uexküll tried to give science a biological foundation based on Kant's Critique of Pure Reason. He treats Kant's project of transcendental philosophy as a research programme which needs to be reformulated in biological terminology. The aim of this paper is to clarify the question Uexküll poses and to reconstruct his theory as presented in his »Theoretische Biologie« (1928). He claims that the results of Kant's analysis of the structure of subjectivity have to be extended in two directions: 1.) the subject has to be treated as a biological organism, and research should focus on the body, the brain and the workings of the perceptual organs; and 2.) account has to be taken of the relations between other subjects (animals) and objects.

Uexküll is convinced that the a priori forms of sensible intuition are relative to the subject (organism) and can be further analysed – and reconstructed – by employing biological methods. Those methods include anatomy, physiology, ethology and the study of the nervous system. He relies heavily on the results provided by Johannes Müller, Hermann Lotze and Hermann von Helmholtz, but provides an original interpretation: the sign processes which can be observed are not governed by physical laws but by autonomous systemic principles which he calls Planmäßigkeiten. Planmäßigkeiten can be identified with the Kantian understanding of natural purposes (cf. McLaughlin 1990); this means that the forms of sensible intuition are determined by the structure, the blueprint, of the subject. I will argue that for Uexküll Planmäßigkeit is the synthetic a priori principle constituting the objects of knowledge and the cognitive faculties of the subject. Thus, he claims, all epistemological questions can be answered by a functional analysis of the cognitive apparatus of the subject and its organisation.

The resulting theory shares many features with Quine's »Epistemology Naturalized« (1969). Uexküll would agree with Quine, who says that »Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science. It studies a natural phenomenon, viz., a physical human subject« (82). Nevertheless, there are significant differences between them: Uexküll would reject Quine's behaviouristic conception of psychology and claim that the structures within the black box can be known by identifying neural processes and mental properties. He does so by arguing that the physical extension of the brain and the qualitative features of the mind are tied together by Planmäßigkeit, which in turn acts as a constitutive a priori principle. In my paper I will show what Uexküll has in common with Kant and Quine and where they differ. The main question is why he tries to rescue some form of the a priori instead of abandoning transcendental reasoning in favour of straightforward naturalism.

Gelfert, "Observation, Experiment, and Imagination: Elements of Edgar Allan Poe’s Philosophy of Science"

The relationship between science and literature has, in recent years, provided fertile ground for analyses in the history of science. Whereas historians of science have turned to literature as a source of popular representations of science, literary theorists have tended to focus on the literary qualities of scientific texts. In the present paper, I provide a case study of how historians of philosophy of science, too, can profit from analysing prima facie literary representations of science. I do so by looking in-depth at the example of Edgar Allan Poe. It is, of course, a well-rehearsed point that Poe’s influence on modern fiction has been significant: Not only is Poe often credited with inventing the modern detective story (through his ‘tales of ratiocination’), he has also been regarded as something of a role model for a line of science fiction writers that starts with Jules Verne, one of Poe’s ardent admirers. Poe’s influence, however, went beyond the literary world: Thus, in several passages of his work, Charles Sanders Peirce acknowledges (albeit in passing) a debt to Poe, whose detective character C. Auguste Dupin is known for deploying elaborate chains of abductive reasoning in order to solve his cases. While the parallels between Dupin’s ‘ratiocination’ and scientific ‘inference to the best explanation’ have often been noted (e.g., U. Eco & T.A. Sebeok 1983), somewhat less attention has been paid to the fact that, in addition to his writing of fiction, Edgar Allan Poe also commented on questions of scientific method more generally. In his ambitious work Eureka (1848), which he dedicated ‘with very profound respect’ to Alexander von Humboldt, Poe sets out to provide a synthesis of the astronomical and cosmological knowledge of his time. It is now widely acknowledged that (in the words of the editors of the 2004 University of Illinois Press edition of Eureka) Poe’s summaries of the various scientific positions ‘are very competent’, his extrapolations ‘consistently intelligent’, and most of his presentation ‘no more rhetorically overblown than other comparable statements of the era’ (S. Levine & S.F. Levine 2004). This has given rise, among Poe enthusiasts, to somewhat misguided attempts to vindicate Poe’s scientific speculations by comparing them favourably to the results of contemporary (20th/21st century) physics. By contrast, the present paper focuses not on the alleged ‘successes’ of Poe’s scientific speculations – striking though some similarities may be – but instead on Poe’s characterization of science itself, as a process constituted jointly by observation, experimentation and theorising. As it turns out, Poe arrives at a bold and original characterization of the scientific process, by considering the role of observation, experimentation and inference in science as well as discussing issues of underdetermination, incommensurability, and trade-offs between different desiderata of scientific theories and explanations. Based on a wholesale rejection of a priori knowledge (as well as of axiomatizations ‘in any particular science other than Logic’), Poe develops his own broadly naturalistic view of science, which attempts to reconcile reliance on empirical evidence with an active role of imagination in the generation of scientific hypotheses.

Giglioni, "Francis Bacon and the Medical Context: Metaphors and Realities"

Throughout his work, Bacon refers to medicine as both one of the various disciplines in need of reformation and a meta-disciplinary language that can be applied to other fields of human learning, especially ethical and political philosophy, to improve our understanding of human nature. Unsurprisingly, medical metaphors in Bacon’s works are frequent and often enlightening. The boundary between the literal and metaphorical use of medical concepts, however, is not always clearly drawn. This is particularly evident when Bacon sets out to describe the discipline that he calls cultura animi. In this case, it is the very interaction of the mind with the body that demands a language that is medical as well as ethical, rhetorical and political, poetical and historical, astrological and religious. Here metaphors are required by the very elusive and opaque nature of the objects and concepts under examination. In The Advancement of Learning, Bacon argues that, ‘as in medicining of the body it is in order first to know the divers complexions and constitutions, secondly the diseases, and lastly the cures; so in medicining of the mind, after knowledge of the divers characters of man’s natures, it followeth in order to know the diseases and infirmities of the mind, which are no other than the perturbations and distempers of the affections’. The main aim of this paper is to show that, in outlining the framework of his medicine of the mind, Bacon attempted to overcome the impasse in which, in his opinion, Aristotle had left the study of logic, rhetoric and ethics, and to inaugurate a new, foundational ‘knowledge of ourselves’, preliminary to knowledge of nature.

Giovanelli, "Indiscernibility. On Leibniz's Influence on Logical Empiricist Interpretation of General Relativity"

The aim of this paper is to give an account of the role of Leibniz’s indiscernibility arguments in the logical empiricist interpretation of General Relativity. I will proceed as follows:

1. Developing an idea of Hermann Weyl’s, I will argue that Leibniz’s celebrated thought experiments on the impossibility of noticing a universal dilation of the whole universe, or its mirroring by changing east into west, and so on, can be considered the first attempt to define the modern concept of “symmetry”: symmetries are transformations that leave all relevant geometrical structure intact, so that the result is indistinguishable from the original without reference to something that does not participate in the transformation. The fiction of a change that involves the entire universe serves precisely to exclude in principle the possibility of such a comparison.

2. I will show that most of the 19th-century discussion on the foundations of geometry was dominated by Leibniz’s indiscernibility arguments. In particular, Helmholtz, Poincaré and Hausdorff/Mongré generalized Leibniz’s thought experiments, showing that two worlds will be indistinguishable not only if they are congruent or similar, but also if they are mapped onto each other by any continuous deformation whatever, requiring only that points that are close together before the transformation is applied also end up close together. Which worlds count as indistinguishable thus depends on which geometrical structure one wants to preserve, so that it is possible to reduce the relevant structure to mere topological relations of coincidence and neighborhood of points.

3. Early logical empiricists (Schlick, Reichenbach, Carnap) interpreted General Relativity in the light of this kind of thought experiment, considering Einstein’s celebrated “point-coincidence argument” as an expression of indiscernibility in the sense of Leibniz. Since worlds that agree only on point-coincidences would be indistinguishable, the logical positivists could reach the untenable conclusion that the space-time of General Relativity is “metrically amorphous”, failing to grasp the Riemannian nature of general relativistic space-time, which has a perfectly determinate, although variable, metrical structure: the symmetry group of an arbitrary general relativistic space-time is not the widest group of all smooth coordinate transformations, but the trivial group consisting of the identity alone.

4. I will suggest that the error of the logical empiricists consists in identifying Einstein’s point-coincidence argument with the Leibnizian sort of arguments dominating the 19th-century debate about geometry. Historical research, however, has shown that the point-coincidence argument, as a response to the “hole argument”, is not intended to show that the metrical properties of space-time are less fundamental than “topological” ones, but rather that different distributions of the metric field that agree on point-coincidences are indistinguishable, since space and time have no reality separate from the metric field.

5. I will conclude by showing that Einstein’s point-coincidence remarks profoundly transform the nature of indiscernibility arguments. In a semi-Riemannian space-time such arguments do not serve to determine the fixed geometrical background of space-time, but to eliminate every non-dynamical geometrical structure independent of the metric field.

Giovannini, "Remarks on the notion of 'spatial intuition' in Hilbert's early works on the foundation of geometry"

It is not until relatively recently that the view of David Hilbert (1862-1943) as a full-fledged formalist in geometry has been dispelled. This has occurred mainly through serious studies of his unpublished Nachlass, particularly his notes for lecture courses on geometry between 1891 and 1902. In these lecture courses Hilbert shows a different face, a much more speculative one, in which he advances several philosophical insights that might help us understand how he actually regarded his major methodological breakthroughs concerning the foundations of geometry. However, what one perceives at first glance in these rich insights is a rather curious, and sometimes inconsistent, gathering of theses. Among them, it is worth mentioning: a) a rather traditional conception of geometry, in the sense that geometry is conceived as a natural science which deals with the properties or shapes of things in physical space (Hilbert, 2004, p. 22); b) a strong empiricist commitment regarding the epistemological basis of our geometrical knowledge (Hilbert, 2004, p. 74); c) a bare representationalist conception of scientific theories, exhibited for instance in his reference to H. Hertz's Bildtheorie when talking about the nature of the axiomatic method (Hilbert, 2004, p. 74); d) an abstract stance in connection with the axiomatic presentation of geometrical theories (Hilbert, 2004, p. 224).

Yet these theses keep a close connection with the notion of 'spatial intuition', to which Hilbert refers constantly throughout the lecture courses. This difficult notion, however, still appears to lack an accurate elucidation. This can be explained by the difficulties associated with it, namely: 1) the complexity of the widespread and equivocal use of this notion in the philosophical and mathematical literature during the second half of the 19th century; 2) the unsystematic way in which Hilbert himself employs this notion. Regarding the latter, at least four uses can be distinguished in Hilbert's early works: a) spatial intuition as empirical intuition or perception; b) a pure geometrical intuition [reine geometrische Anschauung], the intuition which operates in geometry when it is developed according to the synthetic method, as opposed to the analytic method; c) an allusion to a kind of symbolic intuition, expressed in the claim that geometrical axioms are images [Bilder] or symbols in our mind [Geist]; and finally d) he often talks, with an implicit Kantian tone, about an 'Anschauungsvermoegen' as the ultimate object of study in his axiomatic investigations, which finds its main task in establishing the basic laws of intuition [Grundgesetze der Anschauung] (Hilbert, 2004, p. 230).

In this paper we will analyze Hilbert's conception of 'spatial intuition' in his first works on the foundations of geometry. We will ask whether it is possible to give a more systematic and accurate account of this notion, of its main traits, and of its role in geometry. This task is relevant not only for a better knowledge of Hilbert's early conception of geometry, but also because of the importance this notion acquires in the epistemological foundation of metamathematics in Hilbert's Program of the 1920s.

Giurgea, "The concept of time in Descartes’ system of thought"

The issue of time in Descartes’ thought has been a serious subject of debate among scholars of early modern thought. This is not only because of the obvious difficulties in interpreting Descartes’ concept of time, but also because such an interpretation bears upon a large number of vexing problems of Cartesian physics and metaphysics, such as the individuation of bodies, causality, persistence in time, and conservation or continuous creation, all of which are important problems for early modern philosophy.

In my paper I aim to discuss the two main interpretations of Descartes’ conception of time, namely temporal continuity and temporal atomism, and I will build a case for rejecting the second. I will follow primarily the conflicting interpretations as constructed by Ken Levy, Bernard Williams, Richard Arthur and Daniel Garber. Although philosophers such as Ken Levy and Bernard Williams hold that Descartes considered time to be made up of temporal atoms, I will argue for a temporal continuity approach. Moreover, I am inclined to accept Richard Arthur’s distinction between a strong and a weaker sense of discontinuity, as well as Daniel Garber’s view of the problem of time in Descartes’ works, and I maintain that it is more appropriate to Descartes’ texts to consider time as continuous.

The aim of this paper is to discuss the meaning of the concepts of time, duration and successiveness. Moreover, I will emphasize how these terms can be explained as parts of a coherent system of thought, one that is also in agreement with Descartes’ theory of substance. Since my main purpose is to explain the concepts of time and duration, I consider that a presentation of Augustine’s distinction between time and eternity will provide the clarifications necessary for understanding Descartes’ terminology.

In the first part of the paper, I will sum up Augustine’s view of the concepts of time and eternity as presented in his Confessions, more precisely in Book XI. I appeal to Augustine’s concept of time also because I take his conception to be representative of a temporal continuity approach. In the second part, with this interpretation of the nature of time in Augustine’s work as background, I will present a possible interpretation of Descartes’ treatment of ‘time’ as a mode of thought and of ‘duration’ as an attribute of all things in the world. For this presentation I will use some important paragraphs from the Principles, the Meditations and the Correspondence. In the third part, I will discuss the problem of persistence in time in Descartes’ view, which is closely related to the distinction between temporal continuity and temporal atomism. This discussion is based on the articles of Ken Levy, Sophie Roux, Bernard Williams and Richard Arthur. In conclusion, I will argue for the rejection of the temporal atomist view, showing that it is not consistent with Descartes’ approach.

Glas, "Eduard Between Pólya and Popper: Lakatos’ Heuristic"

In the acknowledgement preceding Lakatos’ 1963 article on Proofs and Refutations (chapter 1 of Lakatos 1976), Lakatos tells us that ‘the paper should be seen against the background of Pólya’s revival of mathematical heuristic, and of Popper’s critical philosophy’ (ibid. p.xii). Of course, Lakatos was not the first to confer an important role on heuristic reasoning, but he was the first to give it a more than subsidiary role, not restricting it – as was the standard view at the time – to the context of discovery as distinct from the context of justification of the finished product. Instead, he transformed the idea of heuristic into a critical methodological concept, that is to say: a set of criteria that indicate which paths should be followed and which should be avoided in order that our knowledge may grow. The basis of Lakatos’ heuristic was, of course, Popper’s logic of scientific discovery, but the idea of ‘heuristic power’ as a basis for methodological choices is absent from Popper’s work. As Lakatos wrote:

there is a fallibilist logic of discovery, which is the logic of scientific progress. But Popper, who has laid down the basis of this logic of discovery, was not interested in the metaquestion of what was the nature of his inquiry and he did not realise that this is neither psychology nor logic, it is an independent discipline, the logic of discovery, heuristic (ibid. p.144, footnote).

By the same token, Lakatos’ heuristic is something entirely different from Pólya’s heuristic, which is more or less a psychological subject, an ars inveniendi or mode of plausible reasoning in the context of discovery, whose products are subsequently to be proven in the context of justification. In Lakatos’ conception, discovery and justification (not of the products but of the choices made) are fused together.

It has often been suggested that Lakatos’ Methodology of Scientific Research Programmes was an attempt to reformulate Thomas Kuhn’s historical analyses of scientific developments in critical-rationalist terms, in order to counter what was (wrongly) considered the irrationalist tendency of Kuhn’s views. However, all the ingredients of the methodology can already be found in Proofs and Refutations, for instance the neglect of falsifying instances per se, the relative autonomy of theoretical development, and in particular the attempt to confer methodological respectability on the notion of ‘heuristic power’ (in the non-psychological, non-Kuhnian sense).

In my contribution I will further characterise the position of Lakatos between Pólya and Popper, and argue that, in spite of obvious similarities between Lakatos’ research programmes and Kuhn’s paradigms or disciplinary matrices, the former’s methodology was a direct outgrowth of the methodology of Proofs and Refutations and its retransfer to natural science.

Guillaumin, "The distance to the Sun: measurement as fundamental advance during the Scientific Revolution"

It is common to consider the mathematical sciences and experimental philosophy as two different, although interrelated, branches of the new Natural Philosophy. For instance, John Henry has written that “the precise nature of the role of the mathematical sciences in the formulation of the experimental method has not yet been established by historical research, although there are a number of highly suggestive studies. It seems fairly clear, nonetheless, that mathematical practitioners played an important part in the establishment of the experimental method.” (The Scientific Revolution and the Origin of Modern Science 2002:31). My claim in this paper is that measurement lay at the base of both the mathematical sciences and experimental philosophy. In other words, I want to show that measurement was the central cognitive advance of seventeenth-century natural philosophy and astronomy. Accordingly, during the seventeenth century experimental philosophy and the mathematical sciences were two different branches of the same cognitive development, namely measurement. I use the classical concept of measurement, the determination of ratios of quantitative attributes, which is sufficient to establish my historical and philosophical point. My aim is not to discuss the philosophical meaning of the classical concept of measurement, but to show its cognitive importance for developing empirical knowledge in general, and for establishing unobservable phenomena and “non-measurable” qualities in particular. In this sense, measurement is here considered as a cognitive resource developed through different methods, both conceptual and technological, to measure quantitative attributes of (traditionally considered) qualities, like temperature or speed, but also to measure traditional quantities, like time, through new artifacts and new conceptual devices. Thus, it is possible to say that the Scientific Revolution was cognitively possible mainly because it became methodologically and conceptually possible to measure diverse qualitative and quantitative aspects of Nature. Thomas Kuhn (1961) saw this same point when he maintained that sciences such as electricity, magnetism and caloric underwent a revolution just when it became possible to measure the “imponderable fluids”. Hasok Chang (2004) has recently analyzed the measurement of temperature and the technological development of thermometers; he has illustrated in detail how this technological and conceptual development was crucial in the revolution of the caloric sciences at the beginning of the nineteenth century. In order to illustrate the cognitive preeminence of measurement in the emergence of modern science, I analyze here the cognitive merits of the determination of the Earth-Sun distance by Marin Mersenne, Giovanni Cassini, and Christiaan Huygens.
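
As a schematic illustration of measurement in this classical sense of determining ratios, one may note how a parallax determination turns an observed angle into a distance expressed as a multiple of a known baseline; the relation below is a standard textbook reconstruction added here for illustration, not a detail drawn from the abstract itself.

% Schematic parallax relation (illustrative only): a baseline b of known
% length subtends a small angle p when viewed from a distant body, so the
% body's distance d is fixed as a ratio to that baseline.
\[
  \tan p \;=\; \frac{b}{d}
  \quad\Longrightarrow\quad
  d \;=\; \frac{b}{\tan p} \;\approx\; \frac{b}{p}
  \qquad (p\ \text{small, in radians})
\]

On such a reconstruction, a measured solar parallax referred to the Earth's radius expresses the Earth-Sun distance as so many Earth radii, that is, as a ratio of quantitative attributes in exactly the sense the abstract invokes.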

Hattab, "Gorlaeus’ Atomist Monism and Its Implications for Scientific Knowledge"

With the exception of Cartesianism, many of the new philosophies of the seventeenth century reduced all substance to one kind. Whether it took the form of Hobbes’ materialism, Leibniz’s spiritual atomism or Spinoza’s solitary substance, some version of metaphysical monism often went hand in hand with the new science. One of the earliest instances of seventeenth century monism can be found in the metaphysical foundations David Gorlaeus laid out in 1610 in connection with his atomist natural philosophy. In this paper, I examine Gorlaeus’ main argument for monism so as to reveal its logical connections to his atomism and his theory of knowledge, especially scientific knowledge. The example of Gorlaeus shows that seventeenth century monism was not merely a later outcome of replacing Aristotelianism with mechanistic natural philosophies and alternative epistemologies. Rather, for Gorlaeus, the case for metaphysical monism itself forms an integral part of his defense of atomism and his rejection of the Aristotelian view that scientific knowledge consists in knowledge of causes.

Gorlaeus’ argument for monism appears to reverse the argument Spinoza gives at the beginning of his Ethics. Spinoza presupposes the distinction between a substance and its modes, as found in Descartes’ metaphysics, and argues from these definitions to the conclusion that there is one, indivisible, eternally existing substance. By contrast, Gorlaeus’ argument for monism reasons from the unity of being to the impossibility of separable accidental being and then, from there, replaces the substance/accident distinction with a substance/mode distinction that resembles Descartes’. Gorlaeus claims that anything that is not essential to substance is not a being but a co-existence. Co-existences include affections such as cause and effect. This leads Gorlaeus to redefine scientific knowledge as knowledge based on necessary axioms rather than knowledge of causes.

Gorlaeus’ metaphysics includes some further distinctions in addition to substance and its modes or co-existences. He distinguishes between real being (atoms), beings of reason, accidental beings, and modes. With the exception of light, a special case which I show does not undermine his rejection of Aristotelian accidents, accidental beings are inseparable from real beings and consist in aggregates of atoms. Physics, according to Gorlaeus, is the branch of metaphysics that studies created accidents. We can surmise from this that, unlike Aristotelian physics, Gorlaeus’ physics does not concern anything that is essential to real being.

Gorlaeus is not a mechanist. Nor is he a skeptic who rejects sense perception as a reliable starting point for knowledge of physics. And yet, due to his commitment to monism, Gorlaeus is led to embrace an atomist natural philosophy and a view of scientific knowledge that anticipates later modern views.

Hallet, "Dani The Legacy of Goethe's Farbenlehre"

Even amongst contemporary philosophers of science, Goethe's Farbenlehre tends to be a polarizing topic. Goethe the scientist played a highly political role in articulations of scientific projects in the middle decades of the nineteenth century. Timothy Lenoir points out that the generation of scientists who grew up around Johannes Müller in the mid-nineteenth century were more than scientists; they were also shapers of their culture (Lenoir 1997, 132). Goethe's Farbenlehre was particularly important for these scientists in delineating their conception of legitimate science by contrasting it with what they took to be the methodologically misguided and conceptually confused science of the preceding generation.

In 1853, during a lecture delivered before the German Society of Königsberg, Hermann von Helmholtz said, "Goethe, on the other hand, as a genuine poet, conceives that he finds in the phenomenon the direct expression of the idea [. . .] This, too, is the secret of his affinity with the natural philosophy of Schelling and Hegel" (Helmholtz 1853, 46). Such characterizations of Goethe as, first and foremost, a poet, and as implicated in Naturphilosophie, were not uncommon amongst Helmholtz's set; as late as 1882 Emil du Bois-Reymond felt compelled to comment on Goethe's merit as a scientist. Yet, in fact, Goethe's intellectual relationships with the Naturphilosophen were problematic and, at times, antagonistic. Helmholtz and his peers rejected, in the first instance, Goethe's scientific methods, partially on the basis of a problematic conflation with those of Naturphilosophie. This paper will address the conditions that made it possible for Helmholtz, in 1853, to lump the two together in one denouncing sweep.

Scientists and philosophers in the nineteenth century aligned Goethe's scientific work with that of Naturphilosophie on the basis of a number of perceived commonalities: 1) the rejection or severe limiting of the role of mathematics in science; 2) a de-emphasis of experimentation and an emphasis on the role of ideas in scientific methodology; and 3) an inversion of the typical hierarchy of the sciences, in which the mechanical sciences are at the level of greatest simplicity and the life sciences at the apex. None of these points serves as an unproblematic axis upon which to align Goethe and the Naturphilosophen, and it will be my contention that these associations must be understood through the specific political, cultural and professional interests they served for particular thinkers in the mid-1800s.

Recent scholarship on the science of Goethe has sought to complicate such simplistic narratives, which polarize the debate so that Goethe is labeled anti-scientific and naturphilosophisch while the adherents of experimentalist science are lauded as the true voice of science (Cf. Zemplen 2003; Lauxtermann 1990; Sepper 1998; Burwick 1986). Yet even today scholarship on Goethe's Farbenlehre seeks to pronounce on the question of his status as a scientist. This paper will investigate the systematics of the nineteenth-century debate. It will look to explain the role of Goethe's Farbenlehre in the context of specific interminglings of social conditions, institutional factors, personal relationships and political interests in nineteenth-century German science. In doing so it will avoid pronouncing on Goethe's status as a scientist and will instead show how the very standards by which we make such judgments were importantly informed by the nineteenth-century discussions.

Harari, "Aristotelian Demonstrations and Neoplatonic Ontology:The Notion of Proof in Proclus' Commentary on Euclid's Elements"

In his comments on the first proof in Euclid's Elements, Proclus points out that Euclid's proofs do not always conform to the Aristotelian model. He bases this observation on a brief analysis of Euclid's proof that the sum of the interior angles of a triangle is equal to two right angles (I.32), arguing that this proof is not demonstrative because its middle term is not a definition or a cause. Given Proclus' adherence to Aristotle's conception of demonstrative proofs, it may seem that his metaphysical commitments do not bear on his methodological stance. In other words, Proclus may seem to be a (Neo-)Platonist inasmuch as ontology is concerned and an Aristotelian inasmuch as logic and methodology are concerned. However, this neat distinction between (Neo-)Platonic ontology and Aristotelian logic does not adequately capture Proclus' view of the relationship between ontology and methodology. In his second argument for the ontological priority of mathematical objects over sense objects, introduced in the first prologue to the Euclid commentary, Proclus argues that his ontological stance is indispensable for securing demonstrations in mathematics. His argument to this effect shows that the alternative ontological stance, which he tacitly ascribes to Aristotle, is inconsistent with the priority and explanatory role of universals in scientific demonstrations, because it identifies the objects of geometry with abstractions or later-born universals (13.27-14.23 Friedlein). This argument suggests that Proclus' notion of demonstration defies the simplistic dichotomy between Platonism and Aristotelianism, and it calls for a reexamination of Proclus' commitment to these philosophical stances. In this paper I address this issue through an examination of two questions: (1) why the assumption that geometrical objects are projections of pre-existent forms onto the imagination secures demonstrations in geometry, and (2) what the consequences of this assumption are for the logic of demonstration. I show that Proclus adheres to Aristotle's requirement that demonstrations should ground the conclusion in the being of the subject of demonstration, but endows this requirement with Platonic content: whereas for Aristotle 'in virtue of its being' means 'in virtue of its essence as expressed in its definition per genus et differentias', for Proclus it means 'in virtue of its productive cause'. Thus we see Proclus modifying Aristotle's notion of demonstration, which he generally adopts, in the light of certain Platonic conceptions regarding forms as productive causes.

Heidelberger, "Maxwell’s method of ‘physical analogy’ and structural realism"

In James Clerk Maxwell’s view, the concept of analogy plays a major role in physical methodology as well as in the way mathematical physics relates to reality. My paper has two aims: 1) I would like to connect Maxwell’s ideas on analogy with our present-day discussion of ‘structural realism’; 2) I claim that Maxwell’s ideas on analogy fell on fertile ground in Helmholtz and led to his theory of measurement. This opens up the possibility of confronting Helmholtz’s views on the role of mathematics in physical theories with structural realism.

Maxwell insisted that physics has to steer a middle way between theories that use only mathematical formulae without a physical conception and theories that rest on preconceived and unwarranted hypotheses about the nature of the underlying agent(s). “There is … one method which combines the advantages, while it gets rid of the disadvantages both of premature physical theories and technical mathematical formulae. I mean the method of Physical Analogy” (1855). According to this method, a new theory is developed by making use of an old theory whose (partial) structural similarity serves as a “scientific illustration” or a “metaphor” for the new one. The old theory can thereby suggest not only solutions to mathematical problems but also structural constraints on the underlying agents. “It becomes an important philosophical question to determine to what degree the applicability of the old ideas to the new subject may be taken as evidence that the new phenomena are physically similar to the old” (1870). Maxwell thus argues here as an epistemic structural realist: we have epistemic access only to the structural relations of an underlying agent, but can never fully expound it.

In exceptional cases, however, there arises the possibility of crossing the boundaries of structural realism in two respects: 1. The deeper an analogy between two theories becomes, the more we are entitled to believe that the physical agents the theories talk about are similar to each other. We can call this an inference to a conditional ontic scientific realism. It rests, however, on the presupposition that all “phenomena of nature” are “varieties of motion”. 2. The more theories are bound together by a common analogy in a fundamental law of motion, the less metaphorical the similarity of the theories becomes, or, as Maxwell says, the more the analogy is “real”. “In this case … the resemblances between the laws of different classes of phenomena should hardly be called analogies, as they are only transformed identities” (1856). The identity of light with an electromagnetic wave is a case in point.
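
A minimal numerical gloss may make the “case in point” concrete; this is an illustration added here, not a passage from Maxwell or from the abstract. The speed of propagation of electromagnetic waves is fixed by two constants obtained from purely electrical measurements,

\[
  c \;=\; \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \;\approx\; 3\times 10^{8}\ \mathrm{m\,s^{-1}},
\]

and its agreement with the independently measured speed of light is what licenses treating the resemblance between optics and electromagnetism as a “transformed identity” rather than a mere analogy.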

Maxwell also asks whether our ideas of space, time and mathematical quantities are real analogies of the external world or not. His thought that the application of the doctrine of number to natural phenomena depends on analogy was later developed by Helmholtz, who was thoroughly familiar with Maxwell’s (and W. Thomson’s) method of analogy. Helmholtz showed how measurement can be conceived as an embedding into a structural system. This makes it possible to reconcile mathematical realism with epistemic structural realism.

Heis, "Kant and Parallel Lines"

By the turn of the twentieth century, many philosophers claimed that Kant's philosophy of mathematics (and, indeed, his entire philosophy) was undermined by the real possibility of geometries in which Euclid's axiom of parallels is false. Indeed, it seems that Kant's discussion of geometry was faulty even by the standards of his own day, since many of his contemporaries had reflected at length on the problems posed by Euclid's axiom. These facts are well-known. But what is surprisingly less well-known is that Kant – though he never discusses parallel lines in his published work – did write a series of unpublished notes on the philosophical problems posed by parallel lines (Reflexionen 5-10 (1778-1789) and Reflexion 11 (1790); Akademie volume 14, p.23-52).

My talk will present the chief issues and the argument of these notes. Kant's main point is that neither Euclid's definition of parallel lines, nor the alternative definition proposed by Christian Wolff, fulfills the requirements on mathematical definitions that Kant lays out in the Critique of Pure Reason (A727-32/B755-60). In the critical period, Kant argued that only mathematics has (real) definitions – that is, only in mathematics are there definitions such that anyone who possessed the definition would know immediately that the defined object is possible. In order to understand a mathematical definition, one has to carry out a particular constructive procedure; and the possibility of carrying out the procedure itself guarantees the possible existence of the thing defined. In discussing this feature of mathematical definitions, Kant notices that there is nothing in the definition of two coplanar non-intersecting lines that would allow one to construct these lines: so Euclid's definition of parallel lines fails to be a real definition. Of course, one can, by the end of Book I of Euclid's Elements, learn to construct parallels. But that procedure is not immediate, since it relies on the axiom of parallels. On the other hand, Wolff's alternative definition simply assumes that it is possible to construct two straight lines that are everywhere equidistant, and so it smuggles in Euclid's parallel axiom unawares.

I draw two conclusions from these notes. First, Kant did recognize that his philosophy of mathematics has a particular difficulty accommodating parallel lines. Second, the challenge posed for Kant by parallel lines is not first and foremost a problem with Euclid's parallel axiom. It is, more fundamentally, an issue with Euclid's definition. Both of these conclusions have been overlooked – for example, in Michael Friedman's Kant and the Exact Sciences. Part of the reason that Friedman and others have mislocated Kant's difficulty with parallel lines is, I argue, that Kant commentators have failed to distinguish, as Kant does, between axioms and postulates.

Heit, "Positivist Post-Positivism: Nietzsche's Reception in the Vienna Circle"

“Nietzsche is typically regarded neither as a philosopher of science nor an epistemologist” (Babich, 1999: 1). There is wide agreement that Nietzsche did not contribute anything essential to these fields, and that, if he did, his “account falls almost entirely outside the pale of rationality” (Nola, 2003: 463). If scholars attribute some significance to Nietzsche in the field of philosophy of science, it is more likely that they agree with Zammito: “post-positivism picks up a line of criticism launched by an early anti-positivist, Friedrich Nietzsche” (2004: 10). Under these conditions it might come as a surprise that prominent representatives of the Vienna Circle like Schlick, Frank, Carnap and Neurath had a deep and generally positive attitude towards Nietzsche. In the early 20th century, during the founding period of modern philosophy of science in Vienna, the role of Nietzsche was more multifaceted than one might expect. In spite of the reception of Nietzsche among existentialist metaphysicians like Jaspers or Heidegger, irrationalist artists like the George Kreis, or fascists like Mussolini, several proponents of the Scientific Conception of the World took Nietzsche seriously.

A first section of my paper investigates this reception of Nietzsche. Nietzsche and Mach exchanged some of their publications with respectful letters, and thinkers like Frank (1917: 72) or Kleinpeter (1913: 193-257) were well aware of the relation between the two. Wittgenstein (1977: 25f) admitted that Nietzsche as a philosopher might have come closer to some problems he himself could not grasp, and Moritz Schlick (1927) took his views on the meaning of life very seriously. Neurath refers to Nietzsche on a number of occasions, generally positively. In a historical reconstruction of Logical Empiricism he notes: “Nietzsche and his criticism of the metaphysicians [...] was an immediate part of the rising of the Vienna school” (Neurath, 1936: 692). This leads to my second section, which reconstructs Nietzsche's actual philosophical impact on the Vienna Circle. I argue that Nietzsche’s philosophy of science bears significant similarities to Carnap's and Neurath’s views: Nietzsche took contemporary science seriously. He was very hostile towards contemporary academic philosophy and all kinds of metaphysics. He was not convinced by the Kantian reconstruction of metaphysics. And, as a trained philologist, he focused his epistemology on the analysis of language. Moreover, as Carnap points out in his famous paper on the Overcoming of Metaphysics, Nietzsche found the most appropriate medium in which to talk metaphysics: the art of Zarathustra (Carnap, 1931: 240f). A re-evaluation of their reasons for reading Nietzsche might help to enrich our understanding of the Vienna Circle and of the scope of their philosophy of science.

Hennig (with Stinson), "Aristotle’s Argument Against Functionalism"

Functionalism rests on the claim that mental states are constituted by their functional role and can thus be described independently of their physical (i.e. material) realization. If this claim were true, one could study mental states without studying their material basis, and psychology could fruitfully be pursued independently of neurobiology.

Aristotle has been characterized as a functionalist about mental states, and this characterization has been challenged, but this debate has been somewhat limited. We take a broader perspective and discuss Aristotle’s views on the relationship between functional and structural descriptions, theoretical and applied sciences, and the implications of these views for the autonomy of psychology.

If Aristotle were a functionalist, one would expect him to claim that one can study the form (= soul) of a living being independently from its material realization. He seems to encourage this kind of view when he calls the soul of a living being the form of its body (in De Anima) and adds that whereas the soul of a human being is the same as its essence and in this sense self-sufficient, a particular human being is not the same as its essence (Metaphysics H 3).

On the other hand, Aristotle also argues in a number of places that because all natural substances are subject to change, none of them can be understood without taking their matter into account. This implies that no physical system can be studied independently of its material realization. Aristotle draws this conclusion in Physics II 2 where he discusses the difference between the student of nature and the student of mathematics. It should follow, in the case of psychology, that one must consider both the matter and the form of a living being in order to properly understand it, since a mental state is a part of a system that changes.

We will show, on the basis of Aristotle’s theory of natural science and its implementation in the biological writings, that Aristotle does not think that functional description and physical realization can be separated. Accordingly, he would not think that psychology is independent from neurobiology. Aristotle, far from being a functionalist, offers arguments against this position that may be of interest to both contemporary philosophy of mind and philosophy of science.

Howard, "Boltzmann edits Maxwell"

James Clerk Maxwell and Ludwig Boltzmann are both well known for employing in their physics and their philosophies of science distinctive points of view about the role of models. Boltzmann’s article on models for the 10th edition of the Encyclopædia Britannica (1902) is quite well known. Not at all well known is that one important channel through which Maxwell’s ideas were taken up in the German-language physics literature was Boltzmann’s editing, with extensive commentary, of both Maxwell’s “On Faraday’s Lines of Force” (1855/56; 1864) and “On Physical Lines of Force” (1861/62) for publication in German translation in the very widely read series, Ostwalds Klassiker der exakten Wissenschaften (Leipzig: Engelmann), as Ueber Faradays Kraftlinien (no. 69; 1895) and Ueber physikalische Kraftlinien (no. 102; 1898). The present paper begins with a detailed analysis of Boltzmann’s commentary. But then that commentary serves as the basis for a new look at both Maxwell and Boltzmann on models in science. Prominent among the suggested conclusions is that, in understanding both Maxwell and Boltzmann, the realist moment in their thinking about models should be somewhat diminished and the heuristic moment more strongly emphasized. A more general conclusion would be that we should not hastily combine the discussion of models in electrodynamics with the energeticism debate and the Planck-Mach controversy, as if all were part of one fin-de-siècle realism-antirealism argument prefiguring later twentieth-century debates about the same. Both national context and specific physics context make a difference in understanding what was at issue in all of these cases.

Irzik (in absentia, with Spirtes), "A History of Causal Modeling"

This paper aims to give a historical overview of the origin and development of causal modeling, which we divide into four phases. Phase I is the birth of causal modeling, which we owe to the population geneticist Sewall Wright's work during the first quarter of the 20th century. Phase II is the work done by economists in the Cowles Commission in the 40s and 50s, who did not acknowledge Wright's work. Phase III is the discovery and extension of Wright's method of path coefficients by sociologists (notably, Hubert Blalock and Otis Duncan) in the 60s. Phase IV is the discovery of causal modeling by philosophers of science in the mid-80s, who had curiously neglected it until then. Despite this long neglect, causal modeling has now become a major area of collaborative research by philosophers of science, computer scientists and statisticians. We conclude by summarizing the current state of the art of causal modeling.

Jalobeanu, "Natural History, Natural Philosophy, and Medicine of the Mind: The novelty of Bacon’s project revisited"

Francis Bacon features prominently in various accounts of the scientific (or philosophical) revolution as the author of a “new vision of knowledge”: a societal project of mastering and manipulating nature, constructed upon a foundation of open-ended collections of empirical data called natural histories. What exactly these natural histories are has been a notoriously difficult question in the field of Baconian studies. Equally difficult has proved the attempt to reconstruct the relation between the natural historical “foundation” and the natural philosophical superstructure. A number of recent investigations have shown that we have to reevaluate the traditional view that equates natural history with a storehouse of raw data and empirical facts, and that a way to achieve this is through a contextual reading of Bacon’s project. This paper aims to provide such a contextual reading of Bacon’s late natural histories within the “problem situation” delimited by various writings dealing with medicina mentis. I will contrast Bacon’s views on natural philosophy with contemporary views that explicitly equate natural philosophy with a medicine of the mind (as formulated by Pierre de la Primaudaye in his French Academy). I will also compare Bacon’s natural histories with some contemporary natural histories which likewise claim that the activity of the natural historian has beneficial consequences for settling and medicining the mind. The claim of my paper is that only against such a background can one start to understand and evaluate the weight and depth of Bacon’s project and the “novelty” of his views.

Kilinc, "Kant’s Notion of Objective Probability"

One of the earliest distinctions between objective and subjective senses of probability can be found in Kant’s lectures on logic. More precisely, Kant reserved the word Wahrscheinlichkeit to denote objective probability, while using the word Scheinbarkeit to refer to a subjective variety of uncertainty. Comparing Kant's treatment of probability with the account found in Georg Friedrich Meier's Auszug aus der Vernunftlehre leaves no doubt that Kant was familiar with the quantitative notion of probability that was developed by mathematicians such as Jakob Bernoulli in the course of the seventeenth and eighteenth centuries. While the prevailing commitment to what can be called a theocentric view of knowledge led the latter circles to view probabilistic evaluations as a subjective facet of belief, due to the limitations of human cognition, Kant had no scruples about rendering the same probability assessments objective. Perhaps this shift can be understood in part as deriving from Kant’s anthropocentric view of knowledge. Yet this epistemological view does not explain how Kant can discern between objective and subjective probabilities. Furthermore, the fact that Kant could countenance objective probabilities is surprising in view of his deterministic idea of nature, which ruled out chance as an empty concept. I provide an interpretation of Kant’s objective probability, arguing that indeterminacy in representations of nature may arise not only through ignorance or chance. Since Kant defined probability as a ratio between sufficient and insufficient grounds, I pay close attention to his characterization of different kinds of grounds. I show that Kant’s appeal to grounds in his account of probability is best understood in terms of grounds of actuality (equivalently, grounds of becoming), rather than grounds of truth. That is because Kant would not allow judgments of probability to rest on any kind of evidential relation—probability for him did not grade the strength of an argument. Kant explicitly denied that a logic of probability (logica probabilium) is possible. The modality involved in Kant’s explication of the sufficient ground—as in “The sufficient ground is the sum of all cases that could happen at all”—is best understood as de re modality. Grounds of actuality, understood as partial causes, mesh with his account of probability scattered throughout the critical writings as well. Working out some of the examples Kant dwelled on, I suggest that for Kant there can be objectively insufficient grounds, when for instance a sufficient ground for a disjunction, E or F, divides equally into insufficient grounds for each of E and F. The associated probabilities do not arise out of ignorance, but rather from the causal structure that engenders a community of events. Thus, probabilities measure the possibility that a fair two-sided coin lands heads, but not the possibility that a coin which is known to be either two-headed or two-tailed lands heads. The modeling of the first situation involves causal underdetermination, while the second involves only cognitive underdetermination. I concur with Kant that causal underdeterminations can be objective without contradicting the thesis of determinism.
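
The ratio reading sketched above can be set out schematically; the rendering below is offered only as an illustrative gloss on the abstract’s own examples, not as Kant’s notation.

% Illustrative rendering of probability as a ratio of grounds, where the
% sufficient ground is "the sum of all cases that could happen at all".
\[
  \mathrm{Pr}(E) \;=\;
  \frac{\text{insufficient grounds holding for } E}
       {\text{sufficient ground (the sum of all cases that could happen at all)}}
\]

On this rendering, the sufficient ground for the disjunction “heads or tails” of a fair two-sided coin divides equally into the insufficient grounds for each disjunct, yielding a probability of 1/2 for heads; for a coin known only to be either two-headed or two-tailed, the outcome is causally fixed, so the remaining uncertainty is merely cognitive and, on this account, yields no objective probability.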

Klein, "In defense of Berkeley's empiricism? Galton's experiments concerning abstract general ideas"

The canonical empiricists are Locke, Berkeley, and Hume. But these figures did not use the word “empiricism,” and did not see themselves as united in one epistemological school or tradition. The concept of an empiricist tradition began to figure prominently in English-language philosophy only in the late 19th century, especially in connection with arguments over the then-fledgling science of psychology. As it was then conceived, “empiricism” was the view that philosophy should begin with a picture of the mind drawn from empirical research. Locke, Berkeley, and Hume were portrayed as pioneers of this empirical approach. In the 20th century though, the very idea of an empiricist tradition shifted in meaning as people like A. J. Ayer sought to incorporate an anti-psychologistic bent owing to Frege and others. For Ayer, empiricists—indeed, philosophers—make no empirical claims at all, and so a philosophical result can never be contradicted by a result in the sciences (especially in psychology), even in principle. Instead, philosophy is concerned with choosing the most convenient linguistic frameworks for talking about the mind and the world.

In the present paper I defend the older conception of empiricism (which I call “old-school empiricism”) against the newer. My strategy is to uncover an old-school criticism of Berkeley that new-school empiricists cannot even accept as philosophically relevant. The criticism, due to the 19th century polymath Francis Galton, focuses on Berkeley’s famous claim that we can frame no abstract general ideas (viz., no mental image of a generic triangle or generic human face or whatever). Galton devised a way to produce what he called “composite portraits”—single images formed by combining dozens of images of human faces onto one photographic plate. The result, Galton held, is a pictorial representation of an “average” face (see below). According to Galton, an idea we form when we look at such an image constitutes a counter-example to Berkeley’s claim that there can be no abstract general ideas. On my analysis, Galton’s argument is only partly successful against Berkeley’s position on abstract general ideas. But even this partial success is interesting because it demonstrates the implausibility of the new-school empiricist claim that empirical results are irrelevant to philosophical argument. In particular, careful consideration of Galton’s results suggests that Berkeley’s case against abstract general ideas—one of his most celebrated philosophical arguments—relied on claims about the mind that can be put (and have been put) to experimental test.

I conclude by pointing to an irony concerning contemporary experimental philosophy. One of the most visible strains of experimentalism today concerns research into the nature of intuitions. But the self-conscious reliance on intuitions in philosophy is very much a product of the sort of anti-naturalism ushered to prominence by people like Ayer. So experimentalists who study intuitions are being misleading when they portray themselves as part of an older tradition of experimentalism in philosophy. Instead, I suggest that experimentalists who draw on research in perceptual psychology (such as Jesse Prinz) bear a closer kinship at least with the sort of 19th-century experimentalism Galton embodies.

Kochiras, "Two Senses of Activity, and Gravity in Newton’s Treatise"

Newton’s Treatise is intriguing because it contains a lengthy, qualitative description of gravity, one that is unique among his writings. That description is preceded by the familiar caveat that he intends to speak of the force only mathematically, without attempting to determine its nature. Yet he departs from that intention once he begins to describe the force qualitatively; while it may be mathematically convenient to consider the attraction between two gravitating bodies as if it were two distinct forces, considered naturally rather than mathematically, there is only a single force between them. And in the course of describing the force as it is naturally, Newton begins to use causal language, even indicating that the “double cause of action” is “the disposition of both bodies”. Is Newton suggesting in this Treatise passage that the sun and planets causally affect one another across empty space, without any medium to convey the effect, and was Leibniz then correct when he later charged Newton with accepting action at a distance? Although one commentator (Schliesser, 2009) has recently answered affirmatively, I argue that when one examines the passage closely in its context, that interpretation cannot be sustained.

Once that interpretation is rejected, I argue, the Treatise passage becomes interesting in part for what it does not contain. While it does contain some echoes of the sense of activity appearing in the Principia, notably in Definition 3, it does not contain the robust sense of activity found in the much later text, Query 31 of the Opticks. While these senses of activity are sometimes conflated, I distinguish them, and trace the provenance of each—the thinner sense in Descartes, and the more robust sense in vitalism. I suggest that over a long period of time, Newton consistently associated the question about gravity’s cause with the robust sense of activity.

Koterski, "The Lviv-Warsaw School Contribution to Encyclopaedism"

Neurath’s encyclopaedism is a great discovery in the history of twentieth-century philosophy of science. It is also rather surprising: a highly discredited theory turned out to be a deep and quite modern approach to science, especially against the background of the so-called ‘received view’. His theory, being so different from the common Vienna Circle picture, may be seen not only as ‘overcoming logical positivism from within’ but even as destroying it.

This paper aims to show, however, that the way Neurath understood and described science was also independently developed by Edward Poznański (1901–1976) and Aleksander Wundheiler (1902–1957), logical positivists of the Lviv-Warsaw school. Similar tendencies are clearly detectable in Alfred Tarski’s views on empirical science, and these will be mentioned as well. Thus, Neurath was not a lonely sailor going against the current: the ‘encyclopaedic’ direction was taken by others as a natural (i.e. non-revolutionary) way of developing logical positivism instead of destroying it. The striking points of agreement between Neurath’s views and those of Poznański and Wundheiler concern the following themes: 1) the fallibility of scientific knowledge; 2) anti-foundationalism; 3) the limits of empirical control; 4) conventionalism; 5) the encyclopedic structure of science; 6) the notion of truth in science; 7) holism; and 8) physicalism.

The contribution of members of the Lviv-Warsaw school to encyclopaedism remains unknown because historians of philosophy have focused entirely on the influence the Polish philosophers exerted in the field of logic. Moreover, the paper presenting the Poznański-Wundheiler position, ‘The Notion of Truth in Physics’ (1934, in Polish), was never published in the West. Tarski’s views in this respect are likewise unknown, because he expressed them very rarely and almost solely in private talks and letters.

Kutrovátz, "Lakatos’ unchanging views on science"

Prior to his career in England, Imre Lakatos spent his post-university years in Hungary. Because of his well known involvement in the political turmoil of the Hungarian post-war era, his dubious political activity has been a focus of several studies and discussions. Much less has been said about the philosophical content of his Hungarian writings. Apart from his often cited commitment to Hegelian roots in his philosophy of mathematics, as well as the usually vague references to the Marxist framework of his intellectual background, only a few details are known of his early adventures in philosophy.

After his graduation from the University of Debrecen (1945), and before his studies in Moscow (1949) and the imprisonment that followed, Lakatos published a relatively large number of papers on science and science education (for a partial list and summaries, see Kutrovátz 2008). The main purpose of these works was to develop a Marxist interpretation of scientific progress and to criticize improper (idealist, bourgeois) forms of scientific thought. Perhaps most notable of these are the two papers representing his (lost) doctoral dissertation of 1947 (for an English translation, see Lakatos 2002).

During his final years in Hungary Lakatos turned his interest to mathematics, and this line of research continued through his Cambridge years. He returned to the philosophy of science in general only in the 1960s, developing his theory of the MSRP. The fundamental question of this paper is whether, despite the temporal discontinuity, there are continuous, or at least common, characteristic intellectual elements in the two periods focusing on philosophy of science. The research I conducted in the Archives at LSE proved, on the surface level, mostly negative with respect to continuity: neither in his notes and scratches nor in his correspondence could I find convincing traces of Lakatos’s interest in science in general in the period when his attention was focused almost exclusively on mathematical and logical issues. However, some remarks concerning his disappointment in Marxism give us important clues regarding how his views on science changed.

A common conviction of both the early and the later Lakatos was that science is a never-ending intellectual process of conceptualizing nature. He never saw this process as isolated from its social environment; however, while earlier he emphasized the social impact of class structures and historical conditions on science, later he argued for the autonomy of science while maintaining the importance of its impact on society. His study notes bear witness to his acceptance of the autonomy principle. On the other hand, even the Marxist Lakatos claimed that science is not an ideology or world view, and that it can never be identified with ideologies extracted from it. He was always explicit about his view that scientific education teaches humbleness towards facts and manifests an essentially democratic attitude.

Livengood -- see Biener
