Does this summer school exist?
27 Jul-7 Aug 2022

Planning

27th of July

2pm: Assembly to decide on the organization of the tasks and get to know each other.

4pm: Beginning of the introduction presentations (15 minutes each). Running order:

Oleynik Leonardo

Escobedo Jorge Arturo

Carette Titouan

Plutniak Sébastien

Afzal Abdullah

Vives Eva

Beyne Simon

Domino Sélène


6pm: Dinner preparation

7:30pm: Dinner

We'll also have regular coffee breaks!



28th of July

8:30am: Breakfast

9am: Introduction presentations (15 minutes each). Running order:

Aysever Berk

Carlos Romero

Muolo Riccardo

Verelst Karin

Schiavon Matteo

Fahim Tarek

 


After that, everyone ranks the subjects in order of preference, and the running order of the longer sessions will be determined to maximize overall satisfaction. We'll make sure that everyone is happy with the plan, and we'll stay open to suggestions otherwise.

29th and 30th of July

Two-hour sessions, on the same schedule.

31st of July: weekend day

Taking some rest halfway through the journey.

1st to 6th of August

The rest of the program will be decided collectively and divided into workshop sessions, social activities, and writing sessions to give concrete form to what we've learned.

7th of August

The school will officially be over after lunch. You can leave in the afternoon or on the morning of the 8th.

 

 

 

Abstracts

 

Abdullah Afzal

I am interested in a wide range of questions, including but not limited to: the ontological status of the wave function in non-relativistic QM, the ontological representation of QFT in the form of ontic structural realism (OSR), the many-worlds interpretation, ideas pertaining to scientific realism in the form of OSR and effective realism, the general project of scientific metaphysics as advocated by Ladyman and Ross, and how to fit all of these projects into one cohesive world view.

Berk Aysever

As far as the problem of carrying an ontological commitment to mathematical entities is concerned, what interest does a philosopher have in working on examples taken from scientific practice? Further, would it be profitable, or even possible, to show scientific evidence and thereby infer the existence of a mathematical concept? Since it is impossible to give empirical evidence for a mathematical object (e.g., numbers, functions), one can answer this question by referring to some types of scientific explanations called genuine, distinctive, or science-driven mathematical explanations of a physical phenomenon. These explanations are arguments, namely sets of propositions among which at least one is a mathematical theorem. If we remove this specific statement from our argument, what remains will no longer be explanatory for the physical phenomenon in question. Therefore we should rely on this theorem, and ultimately on the abstract objects that it contains. In order to discuss this idea, I would like to mention first Mark Steiner, for whom this character of a mathematical object originates from the concept of "ontic reality", which is defined to refer to the fact that these objects cannot be eliminated by paraphrase from the language of science. Moreover, these objects also belong to an "epistemic reality", which means that it is possible for a mathematical object to have different and independent descriptions. For Steiner, these two senses of reality have a strong connection. On the one hand, the independence criteria in the former definition are guaranteed in connection to a mathematical explanation. For him, if we don't have an explanatory proof of coreference to the same mathematical object, this object is real in the epistemic sense. On the other hand, according to his most criticized account, when removing physical assumptions, mathematical explanations of physical phenomena are merely mathematical explanations of mathematical facts. Secondly, Alan Baker opposes Steiner by arguing that it is only possible to give "evidence" to external mathematical explanations. For instance, the unusual life-cycle periods of cicadas, which are 13 or 17 years, cannot be explained without a theorem from number theory. He concludes that this explanation is irrelevant to internal mathematical explanations. Finally, I consider the account of Daniele Molinini. While defending the existence of genuine mathematical explanations in a pluralistic way, he speaks against the ontological import of the explanatory power of mathematical explanation. For this purpose, he underlines the issue of compatible mathematical explanations that posit the same physical ontology but appeal to different mathematical entities.
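For orientation, here is a standard reconstruction of the number-theoretic step behind the cicada example (a sketch, assuming the usual predator-avoidance reading): for a cicada with life-cycle period $p$ and a periodical predator with period $n < p$, joint emergences recur every $\operatorname{lcm}(p, n)$ years, and

\[
p \text{ prime},\ n < p \;\Rightarrow\; \operatorname{lcm}(p, n) = p\,n
\quad\text{(e.g. } \operatorname{lcm}(17, 4) = 68 \text{, whereas } \operatorname{lcm}(16, 4) = 16\text{)},
\]

so prime periods such as 13 and 17 maximize the time between overlaps with potential predators.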

Simon Beyne

Our cosmology is based on a model called the Lambda-CDM model, named after the two entities that would make up the cosmos: dark energy (Λ) and dark matter (CDM). The existence of dark energy would explain the acceleration of cosmic expansion. This statement, taken in isolation, leads to this question: is dark energy a new ether? We will nevertheless focus on the case of dark matter, this invisible matter which would be responsible for important gravitational phenomena that could not be caused by visible matter alone. It would be accessed through various observations. So, is dark matter an observable entity? The meaning of "dark matter" having evolved, this shakes up its ontological status. Is it a simple theoretical tool, an ad hoc hypothesis? Or do we have elements that lead us to believe that "dark matter" refers to a reality?

Titouan Carette

Probably because of its position at the crossroads of mathematics and engineering, little work has been done on the philosophical aspects of computer science. However, I think there are some interesting ontological problems there. I propose some examples. I would like to investigate whether those problems really are new or merely computational reformulations of existing ones, and, in this second case, to see if this reformulation provides new insights.

Problem 1: Do bits exist? A bit, the smallest unit of information, is a binary alternative, 0 or 1: a physical system that can be in only two well-defined, distinguishable states. However, in practice, we never rely on such pure concepts, but on logical bits which are encoded in several physical bits, including all the machinery ensuring their persistence. But are there any physical bits at all? Aren't they also, at some point, idealized logical ones? How can we say "it from bit" if there are no clear bits available?

Problem 2: Do computations exist? Let's consider a computer. We run it on an input and at some point it outputs an answer. The question is: what happened exactly? If we take some perspective, the computer is a physical system that was put in an initial configuration and then left to evolve freely until some point. Then where is the computation? Are all the physical systems evolving around us with no one caring also computing things? And what about reversible processes? Or, even worse, covariant formulations of physics: where would the computation be there?

Problem 3: Do simulations exist? Let's assume that I can be entirely simulated by a computer. Then all of my self can be encoded in a program that would unravel my whole existence when run. Where am I then? Am I in the program itself? Or in any execution of it? And what is the meaning of running such a simulation? Would you say that someone who analyzed my program closely enough has still brought me into existence, even without executing my program properly? Note that the problem remains if I am replaced here by any simulated physical system, as simple as it can be.

Problem 4: Does information exist? If I store a file on a USB key and its file system gets corrupted, then the file might be lost forever. However, everything is still there in the key, only impossible to access. Where is my file? Does it make any sense to say that it still exists? That it ever existed?

S. Domino

The traditional ontological units of biology - mainly the species, the gene, and the organism - are under renewed criticism, fuelled by discoveries in genomics, environmental microbiology, and single-cell and in-situ studies. Other types of biological objects are being proposed, which try to account for the overlapping functional integrations and non-hierarchical evolutionary histories that have blurred the usual frontiers between entities. They often tend towards an ecologisation of entities, and a multiplication of the representations that can be accepted, questioning whether unification under a single scheme, such as the Tree of Life, is really relevant in view of all that it needs to overlook. Is biology currently inventing a new, more appropriate ontology? Or is it abandoning any unity in its ontology? We hope that syntheses of practices as they emerge from the field and the lab, and attempts to delineate the strengths and limitations of new propositions, will give fruitful answers.

Jorge Arturo Escobedo

In the context of discussions about topics in the philosophy of science, several authors have argued for a pluralist picture of reality, pointing out that the practice and results of science show that the explanations, entities, properties, etc., of different scientific disciplines are independent from each other. This picture has the consequence that a reductive picture of reality is not possible. In the context of neuroscience, authors like Bechtel and Craver have proposed that the explanations in that context are a clear example of the non-reductive picture, even saying that a non-reductive picture is the standard picture in neuroscience. In this context, by "non-reductivism" these authors mean that the explanations depend upon different levels of organization and that it is not possible to privilege one level over another (for example, the lower level vs the higher level). The way these authors support their non-reductivist position consists in appealing to case studies in which they show evidence that the explanations in neuroscience are non-reductive in nature. Against the non-reductive position in neuroscience, Bickle has argued that the explanations in that context are, in fact, reductive in nature. For Bickle, explanation in neuroscience consists in intervening at the lower level and then tracking the changes that occur at the higher level. To have a good explanation, we must appeal to the lowest level possible (for example, in neuroscience those levels are the molecular/cellular ones), and the higher levels take a heuristic role, that is, a role in specifying the explanandum, and that's it. As with the non-reductive position, Bickle uses case studies to support his position; in those cases he shows that the explanations in neuroscience are reductive in nature. Given the context of the discussion between non-reductivists and reductivists in neuroscience, the questions I am interested in exploring are the following: which position better characterizes the explanations in neuroscience? What are the criteria that need to be followed to settle the debate between one position and the other? I contend that we cannot answer those questions from a purely epistemological point of view (for example, just appealing to case studies). I propose that we need to appeal to metaphysical and ontological aspects in order to settle the debate, specifically, to causal and metaphysical relations between different levels of organization. The strategy I want to follow is the following: a) I want to defend that the discussion between non-reductivists and reductivists cannot be settled by appealing to purely epistemological aspects like the evidence given by case studies; we need to appeal to metaphysical aspects, like causation and relations between levels of organization; b) I want to introduce physicalism as an important position to settle the debate between non-reductivists and reductivists.

Tarek Fahim

Insensitivity of aggregate relations to micro foundations: a hurdle for reductionism in economics?

In this presentation, I will discuss the insensitivity problem of aggregate relations to micro foundations. It is an argument put forward by the economist Anwar Shaikh, in a recent contribution, against the methodological criterion of micro foundations in macroeconomics. Shaikh argues that macro regularities are "strongly indifferent" to microeconomic details. To this end, he ran four ABM (agent-based model) simulations, which rely on different microeconomic hypotheses. Every simulation yields the same macro patterns: market demand functions with negative slopes; income elasticity lower than one for necessary goods; income elasticity greater than one for luxury goods; and a Keynesian aggregate consumption function that is linear in real income in the short run and includes wealth effects in the long run. According to Shaikh, these macro patterns heavily depend on what he calls "shaping structures", which limit the spectrum of agents' decisions. These economic "shaping structures" play the same role as physical "shaping structures" in the "emergence" of fundamental laws such as the ideal gas law. We will argue that, despite the undeniable value of this contribution to the debate about micro foundations, it remains highly questionable. First of all, Shaikh can be criticized for committing a cherry-picking fallacy. Is every aggregate relation really insensitive to micro details? There are grounds for doubt in some respects. Furthermore, the main flaw of this analytical framework is that it does not take into account downward causation, which is extremely common in economics. It is indisputable that many microeconomic properties are themselves partially caused by macroeconomic properties. And this change in microeconomic phenomena caused by macroeconomic phenomena induces a change in the course of macroeconomic phenomena. This is why microeconomic phenomena cannot be considered as "details" in macroeconomic analysis. So I will speak of "micro-macro links" rather than of "micro foundations", due to the reciprocal determinations and causalities between the two levels, in accordance with the interactionist dualism I defend in my PhD thesis.

Riccardo Muolo

Nature swarms with patterns. Just by looking around us, we can observe many systems that exhibit ordered structures in time, space, or both. Several theories try to account for such diversity emerging from self-organization. One of the most popular is due to Alan Turing, who proved that in reaction-diffusion systems a homogeneous equilibrium can lose its stability driven by diffusion. Such an instability, also called a Turing instability, is then responsible for the emergence of a new equilibrium, this time inhomogeneous, i.e., a pattern. The theory is remarkable, but the conditions yielding patterns are rather restrictive, in contrast with the abundance of patterns observed in nature. Hence, scholars have tried to fill this gap by adding more realistic hypotheses, and indeed the new settings prove to enhance the onset of Turing instability. However, such models are still very far from reality. So, what does it mean to fill the gap between theory and observations? Can an approach based on "classical" physical systems, i.e., starting from an ideal setting and then adding more variables, apply to complex systems?
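For reference, a minimal sketch of the classical two-species setting behind Turing's result (standard textbook form, not specific to any particular model mentioned above): for

\[
\partial_t u = f(u,v) + D_u \nabla^2 u, \qquad \partial_t v = g(u,v) + D_v \nabla^2 v,
\]

with a homogeneous steady state that is stable without diffusion (\(\operatorname{tr} J < 0\), \(\det J > 0\), where \(J\) is the Jacobian of the reaction terms), diffusion-driven instability requires

\[
D_v f_u + D_u g_v > 2\sqrt{D_u D_v \det J},
\]

which in practice demands an activator-inhibitor structure and sufficiently unequal diffusion coefficients; these are the kinds of restrictive conditions referred to above.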

Leonardo Oleynik

What are the distinctions and parallels between Newton's and Bell's nonlocality? Einstein's principle of local action prompts the debate. For two remote systems (A and B), it states: "externally influencing A has no immediate influence on B". In Newton's mechanics, the principle can be viewed as a restriction on B's dynamics: disturbances of A's position cannot instantaneously alter B's acceleration. Whereas in quantum mechanics, the requirement of locality implies that a measurement result on B is unaffected by operations on A. Although apparently distinct, there might be some similarities between these notions. By employing general covariance, one verifies that locality is a frame-dependent quantity: Newton's mechanics is explicitly nonlocal when extended to non-inertial frames, and different quantum reference frames describe entanglement distinctly. These characteristics are intrinsically related to the prescription of a physical system provided by each theory and, therefore, could be further scrutinized within an ontological approach. Since nonlocality is a perennial aspect of physical reality, it constitutes the perfect arena to investigate how quantum and classical ontology can dialogue.

Sébastien Plutniak

Composition relationships and ontology in archaeology: towards a formal domain ontology

Current approaches in archaeological theory address ontological issues from a culturalist standpoint, where "ontologies" refers to the categories that peoples from past societies might have shared. Although there is a long-standing relationship between archaeology and computer science (since the 1950s), only a few contemporary works address the properties of archaeological entities from the perspective of philosophical and formal and/or applied ontology (exceptions include work by F. Niccolucci, S. Hermon, S. Cardinal). Much still has to be done in this domain, raising challenging conceptual and methodological problems. Starting from a basic task performed by archaeologists (analyzing sets of refitting fragments scattered in the layers of a site), I aim at formalizing a domain ontology in archaeology, using the composition relationship as a primitive. This ongoing research, drawing on scholarship from philosophy, mereology, and applied ontology (e.g., work by A. Varzi, B. Smith), raises ontological issues about time and about material and non-material entities, and is intended to renew some aspects of archaeological theory.

Carlos Romero

Many different scientific theories and models appeal to merely possible phenomena in order to conceptualize, classify and explain actual, real, concrete phenomena. Consider the classifications made in dynamical systems theory, or explanations by optimization. These are just two examples, but there are more: the explanans of an occurrent event may be the topology of the possibility space, or the statistical properties of many possible outcomes. But how could that be? What is merely possible, by definition, is not actualized: the merely potential is not real. How could what is not real explain what is? In this talk, I want to (a) motivate this philosophical problem, (b) argue that the two currently most favoured theories of natural possibility cannot, indeed, solve it, and (c) suggest a new way to think of the matter, which, I will argue, does solve the problem: constraint realism.

Matteo Schiavon

The process of measurement is at the basis of quantum mechanics, since it provides the interface between the theoretical description of a system and the information that can be extracted from it. According to quantum mechanics, this process consists in the collapse of the wave function of the system due to its interaction with the measurement apparatus. However, the definition of the measurement apparatus is ambiguous, and sometimes it is necessary to split it into a "quantum" part and a "classical" part, where the collapse takes place. This is fundamental in the analysis of some experiments, such as Wheeler's delayed-choice experiment. In this presentation, I will describe an implementation of this experiment on a satellite link and illustrate some of the fundamental problems its description raises. In particular, I will try to analyse how far it is possible to go with the "quantum" part of the experimental apparatus, linking this aspect with the possibility of undoing a quantum measurement.


Karin Verelst

Creating ontologies as a basic aspect of a description of the world (a 'theory') is a basically metaphysical approach to worldview construction. This remains true in the case of the sciences. I shall raise the question of what underlying structure all these ontologies have in common, and whether more insight into this structure helps us forward in understanding which ontologies are appropriate, if any, under which conditions.

Eva Vives

Modeling human cognition: from behaviorism to connectionism
