Some thoughts on “tulpamancy” and the nature of consciousness, among other things

All right, here’s the post that’s the reason I created this blog in the first place. I expect it to spawn a series of posts, or to lead to something less formal but to the same effect.

Before I give you anything interesting, here’s a big, terrible warning:

The following post contains a mild information hazard.

I have received an anecdotal report that knowledge of a version of my core hypothesis made tulpa creation somewhat more difficult. If you are currently creating a tulpa or plan on doing so in the future, you may want to skip this. I suspect the risk is lessened if you have already done so in the past, but have little more than a hunch to support that view. I would not be shocked if this risk also pertains to other possibly related practices and phenomena, such as learning to think with a multi-agent model. You have been warned.

With that out of the way, here we go. This is your last chance to turn back.

All right. I was linked to /r/tulpas from somewhere or other and immediately found the concept interesting.

Over the summer I spent a week or two casually trying something with significant similarities to all of this, though at the time I was just trying to trick my drive for social interaction and companionship into shutting up, so I could focus on the things I care about and be happier without spending the time and effort to make friends I would just lose track of when I went back to college. My results were ambiguous and very minimal. Now I’ve found the subreddit by chance and would like to try again, properly this time.

Naturally, being a nerdy, rationalist type with interests in AI and cognitive science, I’ve given the underlying mechanisms some thought. I think I’ve come up with something interesting.

I bet the underlying mechanism behind tulpas is a more developed version of the same mechanism we all use to model the minds of other people, with varying degrees of accuracy. This hints that the same mechanism may be closely related to how consciousness works. That second bit isn’t an especially new or innovative thought, actually. At this point, I’m going to just paste in a few pages from Gödel, Escher, Bach: An Eternal Golden Braid (commonly GEB) by Douglas Hofstadter. It’s long, but you really should read it. I’ve added emphasis to particularly relevant bits. In my edition, this starts on page 386. I’d have shortened it quite a bit if I could have figured out how to do so without creating unnecessary opportunities for confusion.

There is no reason to expect that “I”, or “the self”, should not be represented by a symbol. In fact, the symbol for the self is probably the most complex of all the symbols in the brain. For this reason, I choose to put it on a new level of the hierarchy and call it a subsystem, rather than a symbol. To be precise, by “subsystem”, I mean a constellation of symbols, each of which can be separately activated under the control of the subsystem itself. The image I wish to convey of a subsystem is that it functions almost as an independent “subbrain”, equipped with its own repertoire of symbols which can trigger each other internally. Of course, there is also much communication between the subsystem and the “outside” world - that is, the rest of the brain. “Subsystem” is just another name for an overgrown symbol, one which has gotten so complicated that it has many subsymbols which interact among themselves. Thus, there is no strict level distinction between symbols and subsystems.

Because of the extensive links between a subsystem and the rest of the brain (some of which will be described shortly), it would be very difficult to draw a sharp boundary between the subsystem and the outside; but even if the border is fuzzy, the subsystem is quite a real thing. The interesting thing about a subsystem is that, once activated and left to its own devices, it can work on its own. Thus, two or more subsystems of the brain of an individual may operate simultaneously. I have noticed this happening on occasion in my own brain: sometimes I become aware that two different melodies are running through my mind, competing for “my” attention. Somehow, each melody is being manufactured, or “played”, in a separate compartment of my brain. Each of the systems responsible for drawing a melody out of my brain is presumably activating a number of symbols, one after another, completely oblivious to the other system doing the same thing. Then they both attempt to communicate with a third subsystem of my brain - my self-symbol - and it is at that point that the “I” inside my brain gets wind of what’s going on: in other words, it starts picking up a chunked description of the activities of those two subsystems.

Typical subsystems might be those that represent the people we know intimately. They are represented in such a complex way in our brains that their symbols enlarge to the rank of subsystem, becoming able to act autonomously, making use of some resources in our brains for support. By this, I mean that a subsystem symbolizing a friend can activate many of the symbols in my brain just as I can. For instance, I can fire up my subsystem for a good friend and virtually feel myself in his shoes, running through thoughts which he might have, activating symbols in sequences which reflect his thinking patterns more accurately than my own. It could be said that my model of this friend, as embodied in a subsystem of my brain, constitutes my own chunked description of his brain.

Does this subsystem include, then, a symbol for every symbol which I think is in his brain? That would be redundant. Probably the subsystem makes extensive use of symbols already present in my brain. For instance, the symbol for “mountain” in my brain can be borrowed by the subsystem, when it is activated. The way in which that symbol is then used by the subsystem will not necessarily be identical to the way it is used by my full brain. In particular, if I am talking with my friend about the Tien Shan mountain range in Central Asia (neither of us having been there), and I know that a number of years ago he had a wonderful hiking experience in the Alps, then my interpretation of his remarks will be colored in part by my imported images of his earlier Alpine experience, since I will be trying to imagine how he visualizes the area.

In the vocabulary we have been building up in this Chapter, we could say that the activation of the “mountain” symbol in me is under control of my subsystem representing him. The effect of this is to open up a different window onto my memories from the one which I normally use - namely, my “default option” switches from the full range of my memories to the set of my memories of his memories. Needless to say, my representations of his memories are only approximations to his actual memories, which are complex modes of activation of the symbols in his brain, inaccessible to me. My representations of his memories are also complex modes of activation of my own symbols - those for “primordial” concepts, such as grass, trees, snow, sky, clouds, and so on. These are concepts which I must assume are represented in him “identically” to the way they are in me. I must also assume a similar representation in him of even more primordial notions: the experiences of gravity, breathing, fatigue, color, and so forth. Less primordial, but perhaps a nearly universal human quality, is the enjoyment of reaching a summit and seeing a view. Therefore, the intricate processes in my brain which are responsible for this enjoyment can be taken over directly by the friend-subsystem without much loss of fidelity.

We could go on to attempt to describe how I understand an entire tale told by my friend, a tale filled with many complexities of human relationships and mental experiences. But our terminology would quickly become inadequate. There would be tricky recursions connected with representations in him of representations in me of representations in him of one thing and another. If mutual friends figured in the tale being told, I would unconsciously look for compromises between my image of his representations of them, and my own images of them. Pure recursion would simply be an inappropriate formalism for dealing with symbol amalgams of this type. And I have barely scratched the surface!

We plainly lack the vocabulary today for describing the complex interactions that are possible between symbols. So let us stop before we get bogged down.

We should note, however, that computer systems are beginning to run into some of the same kinds of complexity, and therefore some of these notions have been given names. For instance, my “mountain” symbol is analogous to what in computer jargon is called shared (or reentrant) code - code which can be used by two or more separate timesharing programs running on a single computer. The fact that activation of one symbol can have different results when it is part of different subsystems can be explained by saying that its code is being processed by different interpreters. Thus, the triggering patterns in the “mountain” symbol are not absolute; they are relative to the system within which the symbol is activated.

The reality of such “subbrains” may seem doubtful to some. Perhaps the following quote from M. C. Escher, as he discusses how he creates his periodic plane-filling drawings, will help to make clear what kind of phenomenon I am referring to: “While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures which I am conjuring up. It is as if they themselves decide on the shape in which they choose to appear. They take little account of my critical opinion during their birth and I cannot exert much influence on the measure of their development. They are usually very difficult and obstinate creatures.”

Here is a perfect example of the near-autonomy of certain subsystems of the brain, once they are activated. Escher’s subsystems seemed to him almost to be able to override his esthetic judgment. Of course, this opinion must be taken with a grain of salt, since those powerful subsystems came into being as a result of his many years of training and submission to precisely the forces that molded his esthetic sensitivities. In short, it is wrong to divorce the subsystems in Escher’s brain from Escher himself or from his esthetic judgment. They constitute a vital part of his esthetic sense, where “he” is the complete being of the artist.

A very important side effect of the self-subsystem is that it can play the role of “soul”, in the following sense: in communicating constantly with the rest of the subsystems and symbols in the brain, it keeps track of what symbols are active, and in what way. This means that it has to have symbols for mental activity - in other words, symbols for symbols, and symbols for the actions of symbols.

Of course, this does not elevate consciousness or awareness to any “magical”, nonphysical level. Awareness here is a direct effect of the complex hardware and software we have described. Still, despite its earthly origin, this way of describing awareness - as the monitoring of brain activity by a subsystem of the brain itself - seems to resemble the nearly indescribable sensation which we all know and call “consciousness”. Certainly one can see that the complexity here is enough that many unexpected effects could be created. For instance, it is quite plausible that a computer program with this kind of structure would make statements about itself which would have a great deal of resemblance to statements which people commonly make about themselves. This includes insisting that it has free will, that it is not explicable as a “sum of its parts”, and so on. (On this subject, see the article “Matter, Mind, and Models” by M. Minsky in his book Semantic Information Processing.)

What kind of guarantee is there that a subsystem, such as I have here postulated, which represents the self, actually exists in our brains? Could a whole complex network of symbols such as has been described above evolve without a self-symbol evolving? How could these symbols and their activities play out “isomorphic” mental events to real events in the surrounding universe, if there were no symbol for the host organism? All the stimuli coming into the system are centered on one small mass in space. It would be quite a glaring hole in a brain’s symbolic structure not to have a symbol for the physical object in which it is housed, and which plays a larger role in the events it mirrors than any other object. In fact, upon reflection, it seems that the only way one could make sense of the world surrounding a localized animate object is to understand the role of that object in relation to the other objects around it. This necessitates the existence of a self-symbol; and the step from symbol to subsystem is merely a reflection of the importance of the self-symbol, and is not a qualitative change.
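
Before moving on, it may help to make the computational analogy concrete. Here is a toy sketch in Python, entirely my own illustration rather than anything from GEB, with all class and variable names hypothetical: shared symbols whose activation is relative to the subsystem interpreting them, and a self-subsystem that picks up chunked descriptions of what the other subsystems are doing.

    class Symbol:
        """A shared ("reentrant") concept like 'mountain': one piece of code
        that two or more subsystems can borrow and interpret differently."""

        def __init__(self, name):
            self.name = name

        def activate(self, interpreter):
            # The triggering pattern is not absolute; it is relative to the
            # subsystem doing the activating.
            return f"'{self.name}' as interpreted by {interpreter.name}"


    class SelfSubsystem:
        """The monitor: holds symbols for the actions of symbols, i.e. a
        running chunked description of what the other subsystems are doing."""

        def __init__(self):
            self.log = []

        def notice(self, subsystem, symbol_name):
            self.log.append((subsystem.name, symbol_name))


    class Subsystem:
        """An 'overgrown symbol': a constellation of symbols it can activate
        on its own, borrowing shared symbols from the rest of the brain."""

        def __init__(self, name, brain):
            self.name = name
            self.brain = brain

        def think_about(self, symbol_name):
            thought = self.brain.symbols[symbol_name].activate(self)
            # Pass a chunked description of the activity to the self-subsystem.
            self.brain.self_subsystem.notice(self, symbol_name)
            return thought


    class Brain:
        """The whole system: a pool of shared symbols plus a self-subsystem."""

        def __init__(self):
            self.symbols = {"mountain": Symbol("mountain")}
            self.self_subsystem = SelfSubsystem()


    brain = Brain()
    me = Subsystem("me", brain)
    friend_model = Subsystem("my model of my friend", brain)

    print(me.think_about("mountain"))
    print(friend_model.think_about("mountain"))
    print(brain.self_subsystem.log)  # the "I" gets wind of both activations

Nothing here is meant as a model of real neural machinery; it just pins down the relationships in the quote: symbols are shared, interpretation is subsystem-relative, and the “I” is the subsystem that receives the summaries.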

This seems like a really, really powerful explanatory framework for both tulpas and a number of other phenomena and practices that share some features, such as thinking with a multi-agent model, Dissociative Identity Disorder, and Internal Family Systems Therapy. I think there could be a really big insight here. Now to explore the implications a bit and see where they leave us.

Given the example from GEB about a mental model of a friend, it is pretty obvious that a good model of this sort needs to at least superficially implement some of the features Hofstadter ascribes to the self-subsystem. Namely, to simulate the thoughts of your friend, your friend-subsystem needs to be aware of which symbols are active in your model of your friend’s brain, and in what ways they are active. In practice there can be some confusion between different subsystems here, since the friend-subsystem is basically just piggybacking off of symbols you already have. Perhaps that is why, to get a really good sense of what a friend would do or think in a given situation, what we do may feel more like putting ourselves in their shoes than like consulting a rudimentary copy of them in our heads that we can talk with, though we do also experience situations more like the second approach. Manipulating the strength of that separation one way or the other may very well be a learned skill.
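
To make that concrete, here is a continuation of the toy sketch above (again purely illustrative; the name “Alice” and the numeric separation knob are my own inventions): a friend-model that keeps its own limited record of which symbols it has activated, the way the full self-subsystem does, with the separation parameter standing in for the learned skill of keeping the friend-model’s activity distinct from your own.

    import random

    class FriendModel(Subsystem):
        """A friend-subsystem with its own limited monitoring, plus a
        'separation' knob for how distinct its activity stays from mine."""

        def __init__(self, name, brain, separation=0.5):
            super().__init__(name, brain)
            self.separation = separation  # 0 = fully blended with "me", 1 = fully distinct
            self.local_log = []           # the friend-model's own record of its activity

        def think_about(self, symbol_name):
            thought = super().think_about(symbol_name)
            # With probability `separation`, the activation registers as the
            # friend's thought; otherwise it blends into my own thinking.
            if random.random() < self.separation:
                self.local_log.append(symbol_name)
            return thought


    alice_model = FriendModel("my model of Alice", brain, separation=0.8)
    alice_model.think_about("mountain")
    print(alice_model.local_log)  # usually attributed to "Alice", not to me

The knob just expresses the hypothesis in the paragraph above: whether an activation feels like me imagining or like the friend-model thinking may be a graded, trainable variable rather than a binary.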

It seems reasonable to wonder, then, if consciousness may be more a quantitative than qualitative sort of thing when dealing with these sorts of subsystems. Perhaps a really well developed mental model of a friend can be said to have a rudimentary and limited sort of consciousness. If so, perhaps the other phenomena I mentioned are various, more developed sorts of something similar.

Now I guess the question becomes how to test it. One thing that immediately comes to mind would be an attempt to develop one’s mental model of someone one knows into something like a tulpa, by practicing keeping it active for longer periods of time and working on its self-awareness. This hypothesis would suggest that starting from such a model should be much easier than constructing a tulpa from scratch. I’m not currently advocating this particular experiment, but it’s one that comes to mind; the results could be kind of creepy, and it probably shouldn’t be attempted without careful consideration and the subject’s consent. Does anyone have any other ideas?

One other note for now: though having these ideas in your head may make it hard to get and stay in the right state of mind to create a tulpa the ordinary way, as MQQSE reports, I suspect that these insights should make a more effective, dare I say scientific, method possible once they are explored more fully. I have some vague, undeveloped ideas about what this might look like and intend to explore them in the next however many days.

3 comments on “Some thoughts on “tulpamancy” and the nature of consciousness, among other things”

  1. David says:

    Very interesting stuff. Keep on with the research into the scientific causes of this; it’ll be very interesting to hear of anything new you come across.

  2. SmoothPorcupine Pirate says:

    “This seems like a really, really powerful explanatory framework”

    Your very words betray you. It is exactly and precisely as you say. It /seems/ like a really, really etc. Seems. Not is. It merely seems to be from your eyes because you are looking at what you already know and speculating therefrom. Having been one who has literally performed the experiment you describe, let me say this:

    “Subsystem” is not a useful symbol.

    • qrv42 says:

      Care to elaborate? Keep in mind that Hofstadter also talks about a “self-subsystem” as the part of the brain that can say things like “I think, therefore I am”. If your disagreement is based on a belief that the term “subsystem” implies something lesser, that is unfounded: in this context, the thing the subsystems are “sub” to is the whole brain.
