
Workshop “The Experimental Side of Modeling” (2)

SFSU HUM 587 March 30-31, 2010

SCHEDULE WITH ABSTRACTS


Tuesday March 30, 2010
MORNING
             coffee at 9:30

10:00-11:20 Anthony Chemero (Franklin and Marshall College, Scientific and Philosophical Studies of Mind)
Title: Dynamics, Data and Noise in the Cognitive Sciences. 
ABSTRACT

The standard understanding of good experimental design in the cognitive sciences calls for the minimization of error variance, a.k.a., noise. Yet certain nonlinear dynamical modeling techniques, recently imported to the cognitive sciences from statistical mechanics, upend this conception by taking the structure of noise to be the primary data. These methods are allowing psychologists to address previously elusive questions concerning central topics such as the modularity of cognitive systems and the nature of insight in problem solving. As an example of the sort of nonlinear modeling which treats noise as the primary data, I will describe work in which we were able to demonstrate experimentally, for the first time, a central aspect of Heidegger's phenomenological philosophy: the transition from readiness-to-hand to unreadiness-to-hand. After discussing this use of noise as data, I return to the consequences of this kind of nonlinear modeling for the distinction between data and noise, and for experimental design in the cognitive sciences more generally.

11:30-12:50 Isabelle Peschard (San Francisco State University)

Title: Data Model, Reliability, and the Neglected Relevance of Relevance
ABSTRACT
That the representational function of models cannot be accounted for merely in terms of a 2-place relation, between the model and the phenomenon it is a model of, is now largely recognized. To be a representation is to be used as such, and being used calls for a user. It has also been suggested that a 3-place relation is still too simple and should be complemented by adding the function of the model.
       But in spite of the addition of these terms in the relation, these accounts still share a critical, undermining feature of the 2-place relation picture: models are still considered in abstraction from the procedure through which they are constructed. Part of the construction, however, is precisely to specify what has to be accounted for by a model in order to count as a model of a certain phenomenon P, that is, to be usable as a representation of P.
       Scientists will be able to refer to a phenomenon under investigation well before they agree on what characterizes this phenomenon (cf. Steinle). Rather, it is typically part of an experimental process of modeling not only to produce theoretical models able to make correct inferences but also to determine what would count as correct inferences, that is, what has to be accounted for. What has to be accounted for is a data-model; but not any data-model will do. What is needed is what I will call a ‘data-model of P’.
       The difficulty of getting the data-model ‘right’ is usually treated as a technical one, in terms of empirical reliability grounded on statistical and/or causal analysis. But first, what the relata of this causal relation are needs to be clarified. In particular, it is not clear that the relation between P and the data-model of P is a causal one if the data-model of P, as I will argue, should be seen as the outcome of a complex measurement of P. Rather, the data-model of P is typically meant to show the causal dependence of some variable of interest on the different factors whose effect is regarded as relevant, that is, as characteristic of P.
       Second, reliability, avoiding artifacts, is a technical affair only in so far as it is already clear what these factors are whose effect is characteristic of P and whose effects alone should be accounted for by a model of P. But judgments of relevance, even though they depend on empirical considerations, are clearly not just empirical judgments.

AFTERNOON                Lunch 1:00-1:50 will be available on location
2:00-3:20 Roberta Millstein (University of California at Davis, Science and Technology Studies)
Title: Obtaining Data on Natural Selection and Random Drift in the Wild: The Case of the Great Snail Debate
ABSTRACT
Various components of evolutionary processes such as natural selection and random drift are observable, but the processes themselves are difficult to observe in the wild, given the long periods of time over which they occur.  Thus, biologists must marshal a variety of different types of data, some of them indirect, in order to substantiate their claims.  The case of the Great Snail Debate in the 1950s will illustrate this.  (The debate was over the relative significance of natural selection and random drift in populations of the land snail, Cepaea nemoralis.)  In demonstrating natural selection, biologists considered correlations between habitat-types and phenotypic traits such as color and banding patterns.  Crucially, they also sought evidence for the causes of the correlations -- i.e., they tried to determine what the particular selection mechanism(s) at work were.  Here they used both laboratory and observational field work.  In demonstrating random drift, biologists focused on the variance in the correlations and considered differences among populations of different sizes.  However, it was difficult for them to determine the drift mechanism(s) at work.  But that was not the primary difficulty; rather, one of the main sources of dispute was a disagreement over how finely to construe the relevant habitats.  This was a disagreement with both methodological and conceptual aspects; the latter make it difficult to resolve.

3:30- 4:50 Alan Love (U Minnesota, Center for the Philosophy of Science)
Title: Modeling Experimental Evidence in Studies of Ontogeny: Idealization, Abstraction, and Whole Mount In Situ Hybridization
ABSTRACT
Philosophers have devoted substantial attention to understanding the nature of models and the activity of modeling in different areas of science but have not focused as much on the nature of evidence and its procurement in ongoing inquiry. In fact, most philosophical effort with respect to evidence has been spent on understanding how it confirms or tests hypotheses while assuming that the evidence is already at hand and treating it formally (observation O, or evidence E). We lack detailed accounts of the material aspects (or ‘nature’ and procurement) of evidence that are intertwined with diverse practices in different sciences, including determinations of relevance and the inferences licensed by disparate kinds of evidence, all of which are largely ignored in formal accounts of confirmation or hypothesis testing.
       In an effort to redress this lacuna, I take several themes from philosophical discussions of models (e.g., idealization and abstraction) and bring them to bear on a particular kind of experimental evidence found in diverse studies of ontogeny: gene expression patterns from whole mount in situ hybridization (WMISH). This task of ‘modeling experimental evidence’ bears on the topic of confirmation and hypothesis testing but also has import for other aspects of scientific reasoning: the empirical characterization of phenomena, judgments of evidential relevance, comprehension of the inferential potential of evidence, and the specification of experimental artifacts. A central message of my analysis is that models of evidence are always relative to a scientific problem – gene expression patterns from WMISH count as experimental evidence in different ways depending on the context of inquiry. Different variables, or values of those variables, are chosen for idealization and abstraction in different problem contexts where WMISH is in use. Thus, a single experimental approach does not generate a single type of evidence, and the confirmation relations or explanatory inferences afforded cannot be understood apart from the material context where these decisions about how to model the evidence are made. As a consequence, we are positioned to comprehend how different disciplinary approaches using the same experimental approach can be in disagreement over standards of evidence and have very different conceptions of the ‘same’ biological phenomenon.


Wednesday March 31
MORNING
               coffee at 9:30
10:00-11:20 Bas van Fraassen (San Francisco State University)
Title: Modeling and measurement: the criterion of empirical grounding
ABSTRACT
Undoubtedly theoretical models are tested by confrontation of the empirical implications or numerical simulations with data models based on measurement outcomes. But for this confrontation to occur, it must first be a settled matter what counts as relevant measurement procedures for physical quantities represented in those models.
       The first point to be argued here is that the classification of a physical procedure as measurement of a parameter in a model or simulation is provided by at least a core of the theory itself, so what counts as a measurement of a given parameter is not settled independently. This point is to be explored through examination of several examples in physics.
1. Galileo’s design of an apparatus to measure the force of the vacuum (Galileo, Two New Sciences fig. 4).
2. Atwood’s machine, designed by him to confirm Newton’s second law, but interpreted variously by later writers as (a) measuring mass ratios, (b) measuring the force of gravity.
3. Michelson and Morley’s (1887) apparatus. Their article distinguishes the basic ether/wave theory, call it T, from its augmentation T* by Fresnel’s hypotheses to overcome a difficulty with respect to light aberration. Given T, the assertion that this apparatus measures the relative velocity of earth and ether is correct, and the measurement outcome determines its value to be 0 (to within limit of accuracy), while that assertion-plus-outcome is inconsistent with T*.
4. ‘Time of flight’ and other classical measurement procedures purporting to determine simultaneous values for non-commuting quantities, but then argued not to count as measurement at all, in early controversies over quantum mechanics.
       The second point to be argued is that this inquiry into the nature of measurement does not threaten a skeptical point. It is part of the character of ‘empirical grounding’ of scientific theory to be further explored here, as part of the interplay of theory, modeling, and experiment during which both the identification of parameters and the physical operations suitable for measuring them are determined. The requirements of Concordance and Determinability as factors in empirical grounding are constitutive of what counts as measurement in a given theoretical context, and it is a contingent and empirical matter whether or not they are satisfied.

11:30-12:50 Elizabeth Lloyd (U. Indiana, History and Philosophy of Science)
Title: When the models are right and the data are wrong: Climate modeling of the troposphere
ABSTRACT
For over two decades, all of the models of global climate change indicated that the tropical troposphere (the region of atmosphere above the surface and below the stratosphere) warmed at least as much as the surface itself in the late twentieth century. Yet temperature values derived from satellite measurements indicated no such warming. Nor did the warming appear in weather balloon measurements. Different interested parties had different reactions to this state of affairs. Skeptics of global warming made hay of the discrepancy, using it to argue that global climate models were so faulty that they couldn't even represent present climate -- how could these models be trusted to represent the future? On this view, there was no such thing as global warming today, and the models weren't to be trusted in their representation of warming in the future. The modelers themselves, however, had an interesting reaction: they didn't trust the data, and thought both the satellite measures and weather balloon data were wrong. I will discuss how this situation got resolved (from 2004-2008) between the providers of data and the modelers (and one of the skeptics), and what role the models had in the process.
