Friday, May 28, 2004

Philosophy Colloquium: Paul Skokowski

Now for something more substantive -- I think. This was either a discussion at the deepest reaches of human comprehension or the most simplistic; I'm not quite sure how the model discussed would have been different from anything an 18th-century philosopher might have imagined. There was undoubtedly a lot of research between then and now propping it up, but the discussion hovered at the level of simplistic illustrations, hypotheticals, and presupposition. Someone brought up the "ordinary language" problem in apologizing for his question. If I understand the term correctly, that's what I took it for.

My notes:

Structural Content: A Naturalistic Approach to Implicit Belief

not Searle
beliefs and desires - internal states that cause behavior (Dretske model)
important content of behavior can be missing
boy and bike
need internal state w/ missing info -- how?

Dretske : indicators (hawk indicator for chick)
got promoted to beliefs : where do implicit states come from? (Ev.)
two examples
1. neural networks
transparent architecture
2. LTP - long-term potentiation

Neural Network
learning causes internal change
history installs weight structure
Gorman-Sejnowski Network
types not tokens
H -> W
Hebbian Learning - Mazzoni recurrent networks
-> more biologically plausible
NOT covariational account of regularity theory; rather, stable
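The Hebbian rule noted above ("cells that fire together wire together") is simple enough to sketch: a weight grows when its pre- and post-synaptic units are co-active, so a learning history (H) installs a weight structure (W). This is only my own minimal illustration of the rule, not the Mazzoni network from the talk; all names and values here are made up for the example.

```python
# Minimal sketch of one Hebbian weight update.
# weights[i][j] connects pre-synaptic unit j to post-synaptic unit i.

def hebbian_update(weights, pre, post, lr=0.1):
    """Return new weights after one Hebbian step: dw_ij = lr * post_i * pre_j."""
    return [
        [w + lr * post_i * pre_j for w, pre_j in zip(row, pre)]
        for row, post_i in zip(weights, post)
    ]

# A learning history H installs a weight structure W:
W = [[0.0, 0.0], [0.0, 0.0]]
history = [([1, 0], [1, 0]), ([1, 0], [1, 0]), ([0, 1], [0, 1])]
for pre, post in history:
    W = hebbian_update(W, pre, post)

print(W)  # -> [[0.2, 0.0], [0.0, 0.1]]: only co-active pairs have grown
```

The point of the illustration is Skokowski's "H -> W": the resulting weight matrix encodes the training history implicitly, in its structure, rather than as any declarative statement.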

Rumelhart : all knowledge in the networks -> implicit in structure
distributed, not declarative

Q) What is W for? (function)
functional state -> causal role
Empirical measures -> weight space
IF network computes function, THEN it has learned, i.e., has a history (H)
WETWARE - neurons in brain
LTP example of Hebbian learning
H -> W
Eyelid Experiment : tone-puff; humans, rabbits
