jducoeur: (Default)
[livejournal.com profile] siderea posted an interesting meditation on certain types of human thought. It's worth reading, but it led me to a response that is a bit too tangential to post there. I recommend starting with her post, to understand the context.

I largely agree with her post, though with a slightly different spin. Some thoughts follow -- all anecdotal, but based on musing about this for about 20 years. (I've mentioned this topic before, but this is an interesting lens through which to examine it, and I'll go into slightly more detail than usual.)

AFAICT, all thought -- human or otherwise, at all levels of cognition -- is fundamentally based on pattern-matching. From the simplest creatures up through humans, what neural networks seem to do is take multiple sensory stimuli, correlate them, and store those correlations as expectations. That's not a minor detail: I don't think it's at all coincidental that all of our limbs and other mechanisms for affecting the world contain input nerves (so that we can correlate action with response), nor that notionally "input-only" senses like sight have complex feedback mechanisms with higher-level nerves (so that, e.g., our expectations -- existing patterns -- can influence our vision).
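That correlation-storing idea is, roughly, what Hebbian learning formalizes in computational models: a connection strengthens whenever two signals co-occur, and the strengthened connection *is* the stored expectation. A minimal sketch, where every name and number is mine (purely illustrative, not from any real neuroscience model):

```python
# Toy Hebbian-style sketch: co-occurring stimuli strengthen a connection,
# so the system comes to "expect" one signal given the other.
# (Illustrative only; real neural learning is vastly more complicated.)

def train(pairs, lr=0.1, steps=100):
    """Strengthen weight w whenever stimulus a and response b co-occur."""
    w = 0.0
    for _ in range(steps):
        for a, b in pairs:
            w += lr * a * b   # Hebb's rule: "fire together, wire together"
    return w

# Stimulus and response almost always co-occur here...
correlated   = [(1, 1), (1, 1), (1, 1), (1, 0)]
# ...versus a pair with no consistent relationship.
uncorrelated = [(1, 1), (1, 0), (0, 1), (0, 0)]

w_corr = train(correlated)     # strong learned weight: a firm expectation
w_unc  = train(uncorrelated)   # weak learned weight: little expectation
```

After training, `w_corr` is three times `w_unc`: the weight simply records how often the two signals went together, which is all "expectation" means at this level.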

The implication I draw is to think about Patterns first, and the other phenomena in light of that. For example, I think of syllogistic reasoning as mainly an effect rather than a cause. Our simplest patterning causes us to draw correlations between simple stimuli. Higher-level networks then observe those correlations and make higher-level correlations from them. Eventually, these patterns become complex enough to call "categories", and from those categories the higher-level networks derive still-higher-level expectation patterns -- syllogisms -- out of the lower-level stimulus/response.

Similarly, I see value assessment as likely coming from the same source, in this case as a generalization of the pain/pleasure response. That is, the simplest and most primitive kind of value assessment is physical pain/pleasure, which is present in all animals. But as brains get more complex, you introduce the crucial notion of cognitive agreement/dissonance, and as far as I can tell, the brain interprets those as, essentially, pleasure and pain. That is, we (generally) interpret that which agrees with our existing mental patterns as "pleasurable", and we seem to interpret cognitive dissonance as, in some sense, "painful". (Hence, we resist accepting input stimuli that disagree with our patterns, because doing so causes mental discomfort.) Most people's value judgements seem to be fairly grounded in this: we value that which agrees with our patterns, whether those are low-level sensory-input patterns (this tastes like the food I am accustomed to) or higher-level derived patterns (this behavior matches what my culture has trained me to think of as appropriate).
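The agreement-as-pleasure / dissonance-as-pain idea can be caricatured as a distance between a stored pattern and an incoming stimulus. A sketch under my own toy assumptions (the features and numbers are invented for illustration):

```python
# Toy model: a stored pattern is a feature vector, and "dissonance" is
# just the squared distance between expectation and observation.

def dissonance(expectation, observation):
    """Mismatch between a stored pattern and an input stimulus."""
    return sum((e - o) ** 2 for e, o in zip(expectation, observation))

# A made-up taste pattern learned from familiar food...
familiar_pattern = [0.9, 0.2, 0.1]   # e.g. salty, sweet, bitter
# ...one input close to the pattern, and one far from it.
comfort_food = [0.8, 0.3, 0.1]
novel_food   = [0.1, 0.1, 0.9]

# Low dissonance reads as "pleasant" (agrees with expectations);
# high dissonance reads as "painful" (violates them).
pleasant = dissonance(familiar_pattern, comfort_food)
painful  = dissonance(familiar_pattern, novel_food)
```

In this caricature, `pleasant` comes out far smaller than `painful`, which is the whole point: nothing about the input itself is good or bad, only its distance from what the patterns already predict.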

Yes, there are lots of complexities here: for example, I have to some degree trained myself to *like* cognitive dissonance, mainly because it gives me pleasure to think of myself as open to new ideas. But I think there's an element of mental masochism there, and I try not to fool myself about how successful I really am. And pattern-matching is not one-dimensional: a given stimulus often touches on multiple, often contradictory, patterns, leaving it up to the higher-level systems to apply a higher-level pattern to choose which lower-level one to utilize.

I don't think any of that necessarily disagrees with what [livejournal.com profile] siderea says in her post; it's just a slightly different way of approaching the topic. I can't say that any of this is grounded in terribly careful study, but personally, I find that human behavior, at all sorts of levels, simply makes more *sense* when I think about it in terms of pattern-matching -- that we are not so much rational creatures as rationalizing ones, whose highest-level neural networks are trying to create the best patterns based on the lower-level ones. And by implication, human thought is always somewhat subjective: we cannot but view the world through our own patterns...
jducoeur: (Default)
The cognitive-science geeks here should have fun speculating about the implications of this study about the relationship between language and spatial reasoning. It's an interesting article, and it cites previous work -- which I hadn't known of, being merely an armchair speculator in this field -- indicating such a link: that people *think* differently about space depending on how their language *describes* it.

In this particular case, a group of researchers found an unusual opportunity for Real Science: a community of deaf people in Nicaragua who had created a new language from scratch, one still evolving at human speeds. The researchers examined how its practitioners reasoned about space, and found that people who learned the language at different points in its evolution do seem to think somewhat differently.

It's intriguing stuff. I can't say I'm totally astonished -- my long-standing observation is that cognition is more or less entirely about feedback loops, so it's not surprising that language on a topic would feed into how one thinks about it. It does raise some interesting questions: in particular, since animals clearly have some spatial-reasoning capability, it can't entirely depend on language. But I can believe that it is *affected* by language, and it's at least somewhat plausible that humans have experienced a sort of evolutionary atrophy of more instinctive mechanisms since language became available -- that we don't think about space quite the same way, since we don't need to.

I look forward to further studies here -- it's a neat, very practical illustration of the larger questions of human thought...