III 7. Target Properties

 

Posted: November 15th, 2003

 

Question:  How do target properties affect performance? 

Proposal: One possible approach to this question could involve using a single coordinate to designate not one but two different targets. Does the signal line contain equal amounts of information about both? Does one target come through more accurately or reliably? Could we use this technique to isolate particular target features which tend to be more salient to a particular viewer? 

One clear advantage of using such a method is that it contains an intrinsic calibration: since the viewer's mental and physical status does not vary between the two targets, it can be reasonably assumed that all variations are due to intrinsic target characteristics, rather than subject performance.  


Replies / References

 

Pilot Study in RV Target Attractors - Proposal for Discussion

From: Lian Sidorov
Date: 1/19/2004
Time: 8:24:53 PM
Remote Name: 204.116.134.169

One of the basic problems we encounter in RV seems to be the gradual "slide", during the session, toward peripheral or contextual aspects which, for one reason or another, represent more powerful attractors than the primary target. If this goes on for long enough, especially in the absence of a monitor who can redirect the viewer, the session summary will contain primarily "off-target" information and be considered a failure. 

What we need to develop, as viewers, is an ability to detect such psychological undercurrents and correct our course. A novice viewer follows the rigid structure given to him by his instructor in order to master the basic skills for staying afloat in a new, unfamiliar environment. Once these basic survival skills have been internalized, the rest of one's life-long training should address the problems of navigation, not swimming form: just as a sailor needs to read the wind, the tides, the shore and the stars, we need to understand the psychological influences of the various "formations" we encounter in our sessions. Everything we know is loaded with connotations; everything has its own gravity and psychological turbulence, its own dynamics. If we are not wary of these factors, we allow ourselves to sail blindly and are doomed to repeat the same mistakes. While no two sessions and no two viewers are ever alike, this should not discourage us from trying to derive some empirical conclusions based on our collective experience: it is a modest and awkward beginning, but that is how all true learning accumulates.

What do attractors typically consist of and when do they form? The available RV literature suggests that emotionally charged events may displace a viewer time-wise, while large or unusually shaped structures can cause a similar geographical slide. What we propose, as a preliminary experiment, is to create a pool of double targets, each consisting of two unrelated pictures that are as similar to each other as possible except for a single, isolated aspect - the experimental variable. For example, these could be two fields - one empty, one filled with soldiers engaged in battle; or two crowds - one marching peacefully, one engaged in riots; or two ships - one sailing uneventfully, the other under construction; or two planets - one much larger than the other; etc. A list of variables could include, but is not restricted to, features like size, temperature/energy generation, speed of motion, number of people involved, the strength and type of their emotions, the complexity of a target's structure, the relative survival value of an event, the stability or transient nature of a phenomenon, single versus repetitive patterns, representative versus abstract designs, etc. For each double target, picture #1 should be designated as the one in which the specific variable is represented by a greater absolute value (e.g. greater size, temperature, emotional impact) or as the one in which the experimental variable is consistently isolated (e.g. repetitive patterns): this is to ensure uniformity in the statistical analysis of these effects.

 

To avoid additional biasing influences, these double targets should not be loaded with any specific tasking questions, but assembled ahead of time under neutral conditions, enclosed in sealed, numbered envelopes and assigned to one global pool, from which daily targets can be chosen at random. While a separate record should be kept of the particular feature isolated for each target, the analysis of the results should not be carried out until all the targets in the pool have been exhausted and all the sessions collected. Unless this precaution is taken, there is a possibility that, by knowing the class a given target belongs to, the person posting the target might insert his own expectations into the outcome of the group's sessions. There is also a possibility that the viewers themselves might identify a pattern in the type of targets they receive and develop undesirable expectations about the nature of future targets. It would therefore be preferable that target pools consist of an assortment of several experimental samples (that is, several groups of variables), from which daily targets are drawn at random.
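To make this concrete, here is a minimal sketch in Python of how such a pool could be administered (the pool size and file name are only placeholders, and nothing beyond the standard library is assumed):

    import csv
    import random

    # Hypothetical pool: the sealed "envelopes" are just numbered IDs; the
    # record mapping each envelope to its experimental variable is kept
    # elsewhere and is not consulted until all sessions are collected.
    pool = list(range(1, 31))      # envelope numbers 1..30
    random.shuffle(pool)           # randomize the order of presentation

    # One target per day, drawn without replacement.
    with open("assignments.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["day", "envelope"])
        for day, envelope in enumerate(pool, start=1):
            writer.writerow([day, envelope])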

 Once all the trials are complete, the sessions can be separated according to their experimental variables, and each data point can be assigned to one of 4 categories: 

A. relevant only to picture #1
B. relevant only to picture #2
C. relevant to both
D. not relevant according to available information

A score difference M can thus be calculated for each session as M = A - B, and the mean of M over the entire sample of sessions can be designated [M]. Using a Student t table, we can then test a number of hypotheses with varying degrees of confidence.

1. [M] = 0 (there is no statistically significant difference between the number of correct perceptions relevant to targets of type 1 versus targets of type 2: the experimental variable does not represent an attractor)

 2. [M]>0 (targets of type 1 are more likely to be correctly perceived by the viewer) 

 3. [M]<0 (targets of type 1 are less likely to be correctly perceived by the viewer) 

 A sample size of 30 or more sessions would be preferable for this test, but if we can assume that the distribution of M scores is approximately normal across the population, smaller samples should suffice. 
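As an illustration of the test, here is a minimal sketch in Python (the per-session A and B counts are invented for the example, and the scipy library is assumed to be available):

    from scipy import stats

    # Hypothetical per-session counts of data points relevant only to
    # picture #1 (A) and only to picture #2 (B); C and D do not enter M.
    sessions = [(7, 3), (5, 5), (9, 2), (4, 6), (8, 3), (6, 4)]
    m_scores = [a - b for a, b in sessions]

    # One-sample Student t-test of hypothesis 1: [M] = 0 (no attractor).
    t_stat, p_value = stats.ttest_1samp(m_scores, popmean=0)
    print(f"mean M = {sum(m_scores) / len(m_scores):.2f}, "
          f"t = {t_stat:.2f}, p = {p_value:.3f}")

A significantly positive mean M would support hypothesis 2; a significantly negative one, hypothesis 3.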

 Other considerations: 

1. Can we use more than one session per experiment from any individual viewer? While it may be practically easier to collect the requisite number of sessions from a few committed volunteers, one needs to take into account how this might affect the sample distribution: since each viewer processes the data in a highly idiosyncratic way, the distribution of M scores might be severely skewed by using only a few viewers. Since we are trying to identify whether the experimental variable represents a universal attractor, it would not be wise to base such a test on only a few participants, for whom the hypothesis may or may not hold true. By using a one-session-per-viewer approach, we are probably safe in assuming that the M distribution is normal and therefore that a smaller sample size is sufficient. However, the sample should include at least several target pairs, in order to ensure that any detected patterns are due to the chosen experimental variable and not to some other factors particular to the individual target.

2. Target pool preparation: how can we ensure that there are no significant hidden variables in addition to the one we are testing for? And how can we achieve uniformity between the two targets in a pair while preserving sufficient difference to distinguish between perceptions relevant to each of them? For example, if we use two mountains as a target pair, only one of which is in the middle of a volcanic eruption - how many perceptions are we likely to identify as relevant only to the "quiescent" target? Since every additional visual element that might help identify a target is also a potential hidden variable, it may be desirable to collect a certain amount of "neutral" collateral data about each target, such as location, season, etc., and use that information to filter the raw data yielded by each session. It may also be advisable to convert all target pictures to black-and-white and crop them to the same size for feedback purposes.
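For the black-and-white conversion and cropping, something like the following sketch using the Pillow imaging library would do (the file names and output size are placeholders):

    from PIL import Image, ImageOps

    def standardize(path, out_path, size=(400, 300)):
        """Convert a target picture to grayscale and center-crop it to a
        uniform size, so that feedback images differ only in content."""
        img = Image.open(path).convert("L")   # "L" = grayscale
        img = ImageOps.fit(img, size)         # center-crop and resize
        img.save(out_path)

    standardize("pair01_picture1.jpg", "feedback_pair01_1.png")
    standardize("pair01_picture2.jpg", "feedback_pair01_2.png")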

3. Viewer protection: since each target will consist of two unrelated images, it is very important that participating viewers be warned against any attempt to integrate the session information. While they should be entirely blind to the nature of the experimental variable involved in the trial, they should understand that only raw, low-level data is to be provided, and that site/meaning integration is to be strongly discouraged in order to avoid the psychological frustration associated with an impossible task.

 4. Finally, viewers should submit session summaries as described above and keep the original session notes, since feedback won't be available until the completion of a particular experiment. 

 I have started to collect potential targets for these different categories, but if anyone else is interested in helping out with the target pool selection, please let me know. I look forward to all your questions and comments and hope that we can begin the active part of our experiment within a month or so. 

 Lian


Re: Pilot Study in RV Target Attractors - Proposal for Discussion

From: Magical Nexus
Date: 1/25/2004
Time: 12:23:16 AM
Remote Name: 204.116.134.110

With respect to item #1 in your email - I am writing to suggest that we create a form that can be completed online and attached to your web page. You could put it in a members-only section to allow access only to RVers and staff... it is fairly easy to design such a form and to enable it to be printed. If it is a matter of manpower, I could design such a form if you can decide what the content should be.

 Magical Nexus


Re: Pilot Study in RV Target Attractors - Proposal for Discus...

From:
Date: 1/25/2004
Time: 12:30:58 AM
Remote Name: 204.116.134.110

Thanks, MN - and let's think about this... Lyn Buchanan has something called an "RV Data Worksheet", published in his book "The Seventh Sense", which they use to evaluate a session's score. I believe it would indeed be a great idea to use something like that for the analysis portion - first, because it would help us capture every possible type of perception in one's session (not just tangibles, but things like alignment and ambience); secondly, because it would let us look for possible patterns and correlations between particular types of perceptions and the experimental variable (e.g. the emotional content of a target might correlate strongly with the number of correct conceptuals and with the ambience, but poorly with things like texture and colors).

I would not, on the other hand, ask people to check off such a form during their session, or even afterwards... I would let them produce whatever data they normally produce, because the literature is clear that, whenever researchers attempted to pre-fit the data into such a grid, people became too focused on "scoring along these categories" and their psi performance dropped to almost insignificant levels (that's exactly the point of the Dunne and Jahn paper on RV scoring in last year's JSE).

Your suggestion would however be an excellent way to score the data... Evaluating these sessions will have to be standardized as much as possible according to a grid of "valid perceptions" designed for EACH set of targets (I'm thinking of the double target protocol here): the problem will be in ensuring that we are as unbiased as possible in our analysis of these sessions, and since I can't figure out a way that this analysis can be done blindly, we have to at least guarantee some consistency in the way we score these results. If we use Lyn Buchanan's method (see pages 266-269), then we can make a list, as soon as we have chosen a particular target pair, of all the major descriptors that would characterize this set - and then the analysts only have to check off the submitted sessions according to this list (viewers would not have access to it). Any session data that appears to correlate with one of the two pictures but is not on the list would have to be judged on its own merits by the entire team of analysts. This would ensure that all sessions are scored as consistently as possible and that people are not excessively creative in according merit points to the targets isolating the experimental variable... after all, we are human, and our expectations may play subtle games with our judgment ;-)
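As a rough sketch of how such a per-target checklist could be applied (the descriptors and picture codes below are invented purely for illustration):

    # Hypothetical descriptor list for one target pair, drawn up by the
    # analysts at target-selection time; the value says which picture(s)
    # each valid descriptor belongs to.
    checklist = {
        "flat open terrain": "both",
        "many people": "1",
        "loud noise": "1",
        "green grass": "2",
    }

    def score(session_descriptors):
        """Tally a session's data points into categories A, B, C and D."""
        counts = {"A": 0, "B": 0, "C": 0, "D": 0}
        for d in session_descriptors:
            key = checklist.get(d)
            if key == "1":
                counts["A"] += 1
            elif key == "2":
                counts["B"] += 1
            elif key == "both":
                counts["C"] += 1
            else:
                counts["D"] += 1   # off-list data: flag for team review
        return counts

    print(score(["flat open terrain", "many people", "blue water"]))
    # {'A': 1, 'B': 0, 'C': 1, 'D': 1}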

 This is a very important point that you have brought up and I think it deserves a bit more discussion by the group... 

 Lian


Re: Pilot Study in RV Target Attractors - Proposal for Discus...

From: Magical Nexus
Date: 1/25/2004
Time: 12:34:40 AM
Remote Name: 204.116.134.110

Hi Lian, 

I appreciate your points regarding the inhibition of psi function by imposing a questionnaire protocol that is too intensive for the RVer or inconsistent with the RVer's own sensitivities. I will look over the attachment for the scoring format and offer any ideas that may arise. I also agree that consistent scoring is critical to examining results. This is tricky business. Thank you for the reference to Lyn Buchanan's book; I will take a look at it.

 Happy to work out as many kinks as possible ahead of time.


Re: Pilot Study in RV Target Attractors - Proposal for Discussion

From: Jim Karlsson
Date: 2/2/2004
Time: 9:23:27 PM
Remote Name: 207.144.211.40

Thank you for getting all of this organized and off the ground. It is about time that we all got moving; it doesn't matter where it will lead us - the key is to get something started that generates data.

>>> One of the basic problems we encounter in RV seems to be the gradual "slide", during the session, toward peripheral or contextual aspects which, for one reason or another, represent more powerful attractors than the primary target. If this goes on for long enough, especially in the absence of a monitor who can redirect the viewer, the session summary will contain primarily "off-target" information and be considered a failure.

I don't know where I heard it first, but somewhere in the RV literature someone claimed that during the Cold War, when the fear of psychic espionage was at its height, rooms with classified content were decorated with Disneyland-like paraphernalia - balloons, dolls, etc. - so that any remote viewer targeting the room would immediately associate the target with Disneyland rather than a secret facility.

It should be fairly easy to test this type of "deceptive" targeting, and it may lend some hints as to what abstraction level we first access remote viewing data at, and from that we could construct a model of the hierarchy of meaning-levels our subconscious mind traverses to get to the intended target. It has to navigate through symbolism, imagination, metaphors and analytical overlay to get to the target. Also, what specific items attract our subconscious attention the most? Is it familiar themes, or emotional aspects that evoke some interest or memory in us and divert our attention? Is our remote viewing mind primarily drawn to objects it has a frame of reference for? Viewers consistently describe different aspects of the same target. We should test targets that are related to some of the viewers' occupations, to see if these viewers describe the target more accurately than other viewers who know less about the target subject.

>>> What do attractors typically consist of and when do they form? The available RV literature suggests that emotionally charged events may displace a viewer time-wise, while large or unusually shaped structures can cause a similar geographical slide. What we propose, as a preliminary experiment, is to create a pool of double targets, each consisting of two unrelated pictures that are as similar to each other as possible, except for a single, isolated aspect - the experimental variable.

I would also suggest that the target cue and the perspective-to-target aspect be tested. For example, take the same target, D-Day at Omaha Beach. One target could be cued specifically to a single soldier on this beach and that person's actions at a particular moment. Another target could be designed so that the viewers describe the larger picture of the battle. We need to discover some sort of scale-to-target measure and how it relates to the creation of the target cue. We can't leave it to the viewer to go to "some important place and describe the most important aspect"; the target cues have to be very exact, and we should expect the viewers to deliver data that fits a very narrow window of permitted descriptors.

 >>> 1. Can we use more than one session per experiment from any individual viewer? 

I guess it depends on what we are testing. I can see scenarios in which we could accept more than one session from a viewer. The whole concept of sub-cueing is built on giving a viewer the same target but intending finer and finer granularity in the viewer data. If we are testing Warcollier-like effects and the like, we should probably keep it to one session per viewer.

>>> it may be desirable to collect a certain amount of "neutral" collateral data about each target, such as location, season, etc., and use that information to filter the raw data yielded by each session.

I completely agree; while this may seem useless when we are doing the statistical work, I think we may see some significant results. But then, some viewers don't like to have a limited set of descriptors and prefer to "free-form" it. Standardization is key, so reining in the prima donnas to conform to some standard may be quite the task ;-)

>>> It may also be advisable to convert all target pictures to black-and-white and crop them to the same size for feedback purposes.

 Agree. Anything that can be standardized is much easier to analyze statistically. 

>>> 3. Viewer protection: since each target will consist of two unrelated images, it is very important that participating viewers be warned against any attempt to integrate the session information. While they should be entirely blind to the nature of the experimental variable involved in the trial, they should understand that only raw, low-level data is to be provided, and that site/meaning integration is to be strongly discouraged in order to avoid the psychological frustration associated with an impossible task.

Yes, but we don't want to discourage the "Wow, I think this is Stonehenge..." type of response that we can all have sometimes. I had one once, didn't write it down, and have kicked myself ever since. You are right that we want to discourage the naming of the target, but we should provide the viewer a field to record off-topic and AOL-like data.

Speaking from a viewer's standpoint, if there are experimental design considerations to take into account, I would like to suggest that we also look at this as an opportunity to train viewers and for more experienced viewers to hone their skills. I think we should start by testing simple gestalt imagery and various aspects of it, and gradually increase complexity and abstraction. It would be good to establish a common vocabulary for all viewers early on. Perhaps some experiment similar to Warcollier's could be used to test viewers' vocabulary and perceptual limitations and simultaneously develop a common framework on which we can standardize descriptors and conceptual categories.

 I am looking forward to getting started on this.


Copyright © 2000-2006 EmergentMind.org