Eve Muses While Writing Essays


Here’s an idea: thought’s main purpose is to make sense of our environment in order to survive in it and make more humans, right? Which is why there’s no real thought inside the womb; it isn’t needed to avert death. You’ve got everything you need: umbilical sustenance, amniotic protection, big-screen TV. So this would logically mean that humans who remain in the womb throughout their lives wouldn’t develop thought because it wouldn’t be necessary. (They’d be moved into larger wombs according to growth, of course, like gardeners change plant pots.)

If this is the case, then what’s up with The Matrix? The machines wouldn’t have needed the illusion program for more than a generation before they could just harvest thoughtless babies and scrap it. Would the prophesied freedom fighters ever be found? Or would Morpheus become just as non-badass as his stimulus-free kin? Would there still be a “One” to save us all? And most importantly, would there still be sweaty underground dance orgies?

A world without these things is a frightening thought, but at least we’re still of the generation that has the capacity to consider them. Our children, on the other hand… *ominous music*

Side note: What is up with The Meatrix?


16 Comments

  1. Riz wrote:

    I don’t think you can say there is no thought inside the womb. Maybe very simple thought.


    A lot of the stuff at the bottom is a good read. For instance:
    Ultrasonographers have recorded fetal erections as early as 16 weeks gestational age, often in conjunction with finger sucking, suggesting that pleasurable self-stimulation is already possible. WOAH!

    Posted 14 Jul 2004 at 6:55 am
  2. Riz wrote:

    Oh and this is cool (same site, root directory):
    Twins can be seen developing certain gestures and habits at twenty weeks gestational age which persist into their postnatal years. In one case, a brother and sister were seen playing cheek-to-cheek on either side of the dividing membrane. At one year of age, their favorite game was to take positions on opposite sides of a curtain, and begin to laugh and giggle as they touched each other and played through the curtain.

    Posted 14 Jul 2004 at 6:57 am
  3. The Rev wrote:

    Thought has a bunch of other purposes than merely understanding our environment.

    Posted 14 Jul 2004 at 8:58 am
  4. Fraser wrote:

    Drive-by philosophy Rev?

    Posted 14 Jul 2004 at 11:01 am
  5. Eve wrote:

    What I’m saying is that without stimulation, capacity for complex thought doesn’t develop.

    Posted 14 Jul 2004 at 11:02 am
  6. Eve wrote:

    And Riz, remember that the first is a correlative study and the second involves interaction between twins rather than a solitary foetus.

    Posted 14 Jul 2004 at 11:05 am
  7. The Rev wrote:

    Eve> What about an AI, then, or the various pseudo-AIs developed by various research groups (up to and including Douglas Hofstadter’s group at Indiana University)? They necessarily assume that the structures and functions of consciousness do not depend on sensory stimulation for their existence.

    An analogy: imagine a machine built to, say, turn sawdust into blocks. The machine exists, and retains all of its functions, even when no sawdust is being fed into it and no blocks are coming out. It’s not doing anything, but it hasn’t lost any of its capacities. It may even be running (if it’s left on by a negligent worker, say) and performing all the same functions.

    Fraser> I give Eve enough credit to not require me to spoon-feed her everything.

    Posted 14 Jul 2004 at 11:32 am
  8. Eve wrote:

    Pieces of industrial equipment are a touch different from the human brain. If they’re designed to learn, well, I don’t think they’d learn much without input anyway.

    More time for argument tomorrow; right now it’s bedtime.

    Posted 15 Jul 2004 at 12:11 pm
  9. The Rev wrote:

    Yes, but to what extent are input and stimulation analogous, let alone identical? After all, input is purely representational information, whereas the nature of sensory information is not necessarily representational.

    Posted 15 Jul 2004 at 1:33 am
  10. Eve wrote:

    It is once it reaches the brain; at that point it’s nothing but action potentials.

    You’re arguing word definitions again. What do you mean by representational information?

    Posted 15 Jul 2004 at 8:33 am
  11. The Rev wrote:

    Nyet. It’s action potentials and structure. It’s not merely a neuron zipping and zapping, but its doing so within a particular structure, connected in certain ways with other neurons whose discharges also affect it, and not in any other way.

    I’ll skip the long philosophical definition for something shorter and a bit easier to comprehend. Representational information is information capable of being encapsulated in signs (representations) which exhaustively describe it. Once you know the correct list of signs characterising all of a thing’s properties, you know everything there is to know about that thing.

    Whether or not there’s non-representational information is an open question. The consensus at the moment is that qualia are non-representational (sort of). Chalmers, Jackson and Dennett, all of whom you’ll read when you take Philosophy of Mind, have a huge argument about this, because it’s one of the most important conceptual questions out there for AI.

    Posted 15 Jul 2004 at 1:07 am
  12. Fraser wrote:

    “I give Eve enough credit to not require me to spoon-feed her everything.”

    Look how much writing you’ve needed just to explain yourself. Your first comment was obviously insufficient, as is most of your communication.

    In regards to the AI/pseudo AI:

    These examples don’t prove your point (that thought has purposes beyond understanding our environment). Look at a program that’s designed to think creatively and also has no external senses, like ALICE, which attempts to carry on conversations. All of its worldly knowledge is borrowed from the people who programmed it. ALICE doesn’t hear speech, see lip movements, read the dictionary and then come up with a creative topic of conversation. Humans hear speech, see lip movements, read the dictionary and then impart that knowledge to ALICE so that it can carry on conversations.

    The pseudo AI is just a bank of collected human cognition, which involves stimulation of the senses.

    Tolkien is dead. Is “The Lord of the Rings” still a fantastic and creative piece of work? Of course, because Tolkien, while he was alive, put that creativity in there.

    Posted 15 Jul 2004 at 9:50 am
  13. Fraser wrote:

    ALICE can be found here:


    Click on “chat with A.L.I.C.E.” to …chat with ALICE.

    Last I heard, ALICE was the closest thing to passing the Turing Test. A program passes the Turing Test if a user can speak with the program for 5 minutes and not know whether they are talking to a computer or a real live person.

    Though we’re all geeks here, so I imagine we all already knew about this. I’m just being complete is all.
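
    To make the point concrete, here’s a toy sketch of that kind of chatbot. It’s purely hypothetical and nothing like ALICE’s real AIML engine; the patterns and responses are made up. The thing to notice is that every reply the bot can ever give was authored by a human in advance.

```python
import re

# Toy sketch of an ALICE-style pattern-matching chatbot.
# ALICE proper uses AIML; this hand-written rule table just
# illustrates the point: all of the bot's "worldly knowledge"
# was put here by a person, not learned from any senses.
RULES = [
    (r"\bhello\b|\bhi\b", "Hello there! What would you like to talk about?"),
    (r"\bwho are you\b",  "I am a very small pattern-matching program."),
    (r"\bmatrix\b",       "I hear the sequels were divisive."),
]
DEFAULT = "Interesting. Tell me more."

def reply(utterance: str) -> str:
    """Return the first canned response whose pattern matches the input."""
    for pattern, response in RULES:
        if re.search(pattern, utterance.lower()):
            return response
    return DEFAULT
```

    However long you chat with it, nothing new ever appears: the bot only ever returns strings from the table its programmer wrote.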

    Posted 15 Jul 2004 at 9:56 am
  14. The Rev wrote:

    There are, in fact, two theses under debate.

    1) Thought has purposes other than understanding the environment. This one is simple to establish – thought also has the purpose of reconfiguring our environment. Sure, that requires some level of understanding of our environment, but it is an entirely different process. Question settled. Don’t set overly restrictive boundary conditions without good reason. Eve’s intuition is reasonable enough for perceptual systems, but she inadvertently conflated perception with the totality of thought.

    2) Without stimulation, the capacity for complex thought does not develop. “Complex thought” is undefined, but let’s assume it’s something to do with the manipulation of representations. The existence of pseudo-AI indicates that certain behaviours functionally identical in their output to some range of thought processes are capable of being created without stimulation. I can see no clear boundary between this and complex thought that allows the one while forbidding the other. The complexity of the translation between representations we understand and those the machine understands increases, but that’s an engineering problem, not a conceptual one.

    Addendum: let’s not get ridiculously global with this argument. The computers we’re talking about are hypothetical logical structures, not bits of silicon. The qualities of the person who realises them in physical form are inconsequential for the purposes of this argument.

    Posted 16 Jul 2004 at 3:11 am
  15. Eve wrote:

    I’m pretty sure you and I are looking at this from incompatible perspectives.

    Posted 16 Jul 2004 at 2:33 am
  16. The Rev wrote:

    Possibly so.

    Posted 16 Jul 2004 at 3:14 am