Tuesday, October 25, 2005

two things

particular thing:
On 161, in the process of troubling the notion of “internal representation” of situations, objects, whatever, Clark talks about a neural network with an “adaptive oscillator,” a feature that would allow the network to learn the timing of events that it perceives. The adaptive oscillator has some preexisting output analogous to the output it will produce upon perception of the event(s). The output originally has a certain frequency or period. When the event occurs and output is produced out of the normal period, the frequency of the oscillator is adjusted so that it outputs in time with the event. In this way, the oscillator learns the timing of the event the network perceives.
I like this because it suggests that, in order to know or transact in any way with what’s outside it, a system (e.g. the mind) must have some preexisting structure/feature that in some way is analogous to or resembles the thing outside of it. Avital Ronell says that this is what endorphins are: the addictive, pleasure-producing chemical inside your body that allows you to be affected by addictive, pleasure-producing chemicals that are foreign to your body. This sort of thing seems to support Clark’s claim that the boundary between body and world is not as clear as we might immediately think it is.
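The adjustment described above can be sketched in a few lines. This is only an illustration of the general idea, not Clark's actual model: the class name, the period variable, and the learning rate are all my own assumptions. The oscillator nudges its internal period toward the interval it actually observes between events, so its preexisting rhythm gradually falls into step with the world.

```python
# Minimal sketch of an adaptive oscillator (illustrative, not Clark's model).
# The oscillator starts with its own guess at the event's period and nudges
# that guess toward each observed interval until the two fall into step.

class AdaptiveOscillator:
    def __init__(self, period, learning_rate=0.5):
        self.period = period              # preexisting internal rhythm
        self.learning_rate = learning_rate

    def observe(self, interval):
        """Shift the internal period toward the interval just perceived."""
        error = interval - self.period
        self.period += self.learning_rate * error

osc = AdaptiveOscillator(period=1.0)
for _ in range(20):
    osc.observe(1.5)   # events keep arriving every 1.5 time units
# osc.period is now very close to 1.5: the rhythm has been entrained
```

The point the sketch makes concrete is the one the paragraph above draws out: the system can only lock onto the external timing because it already had a rhythm of its own to adjust.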

general thing:
In the end, I’m left wondering if we should feel any discomfort (?) at the fact that our failure to appropriately model/understand the brain and cognition in general is largely due to our affinity for language/linguaform concepts and their structure. Just as we’ve come to accept (I guess) that the brain does not operate according to the rules of symbolic logic, Clark wants us to accept that cognitive systems, the brain and the human body (separately and together) included, are not structured like a natural human language. But how, then, are we going to come to model/understand these systems, if at all? Is scientific knowledge (conscious knowledge?) absolutely dependent on natural human language? (What is the relationship between language and consciousness, anyway?) If basically everything in the world is (a) cognitive system(s) (ha! is that what any of these authors is actually saying?), and language is more or less one node that doesn't match the rest of the system(s), what is the status of knowledge that we have through/in language? And the most important question of all, I'm sure: How does this relate to a poststructuralist understanding of language as always slipping around and never landing on/grabbing the truth?

3 Comments:

Blogger Annie said...

Maybe one reason why scholars seem unable to understand or model the brain / body / mind system(s) in any satisfying way is that the brain--our vehicle for understanding and modeling--is a part of the system. Can a part of a system ever fully understand the whole system? I think we (industrialized societies) have given the brain too high a rank among the myriad elements with which we make meaning and define ourselves.

2:29 PM  
Blogger Jim said...

yo. I'm curious about this idea that language is a node that "doesn't match the rest of the system(s)."

As I read Clark, he wants to entertain the notion that language adapts to us. That language gets "better" as it "learns" to work better with our minds: "Suppose (just suppose) that language...is an artifact that has in part evolved so as to be easily acquired and used by beings like us" (212).

I think I actually agree that language "doesn't match" other systems. This is why we talk/argue about things - this is why we need rhetoric. If there was no distance between language and other systems, we wouldn't need language. But I think I missed the part in Clark where he says this? Let's talk about this in class.

As far as the status of the knowledge we have - it seems like the answer is not that "new": it's contingent. However, it's not contingent in a way that's floating all over the place. It's contingent on the "matching up" of external scaffolding and internal representation (141). We manipulate tools/scaffolding actively, but that activity is contingent upon multiple networks.

8:15 AM  
Blogger Alison said...

Anthony,
I think the problem is not language but how we view it. We are wired into a paradigm in which language is a logical system - why does Clark use "computational" contexts? It works for several of his arguments but ultimately confines the emergent thinking I thought he was directing us to (am I totally off here?)

2:22 PM  
