Music

On the GCP website homepage there is a link to "Background Music" and on that page is a link to a segment of the "Ultimate OM" by Jonathan Goldman. If you click it you will hear a long, harmonious chord. For many years I have wanted to make music using the EGG data, but though there have been some interesting efforts, which you can listen to on the music page, none are like the "vision" I have had. What I would like is a continuous, rich and complex "un-chord" that shifts toward harmony when the Eggs are more correlated, occasionally becoming a soaring and magical chorus that I think of as "music of the noospheres" :-) As you can imagine, this is something of a tall order, so it hasn't happened -- yet.

We will work to develop a potentially useful quasi-analytic method for identifying structure by listening to sounds (music) that are transforms of the data from the Global Consciousness Network. Though there have been some efforts of this general sort, it is largely uncharted territory. We believe it is worth exploring, and we expect that the result will be interesting to listen to, whether or not there is a powerful analytical result.

Art and Music: Esthetic displays -- the Sound of one EGG hatching, in three movements, 9, 28, and 100 machines. Harmony of the Noospheres: resonance, coherence, engagement.
Following are some early discussions (in 1997-98) between Roger Nelson and Jiri Wackermann. Much of what we talk about is still viable, but it hasn't been examined in any depth, and none of the ideas have been implemented.

RDN: Let us play all the data sequences from the EGG
project through music filters to make a complex chord. At first we will have 6 or 8 sites,
each sending data at one or a few trials per second to the central servers. The trials
will be bitsums, with a mean value of something like 50 or 100, with normally distributed
variation around the expectation, translatable into Z-scores, which are floating point and
on a consistent scale.
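
As a concrete illustration of this standardization (a sketch only; the 200-bit trial size and the function names here are assumptions, not part of the discussion), a bitsum from one Egg could be converted to a Z-score as follows, e.g. in Python:

    import math

    def bitsum_to_z(bitsum, n_bits=200, p=0.5):
        """Standardize a binomial trial bitsum into a Z-score.

        n_bits = 200 would give the mean of 100 mentioned above; the
        actual trial size per Egg is an assumption in this sketch.
        """
        mean = n_bits * p
        sd = math.sqrt(n_bits * p * (1.0 - p))
        return (bitsum - mean) / sd

    print(bitsum_to_z(100))  # 0.0
    print(bitsum_to_z(107))  # about +0.99
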
JW: The idea has precedent; acoustical presentation of data was attempted in the EEG domain, and Mayer-Kress, who did a lot of work on non-linear dynamics of the brain at Santa Fe, made a lot of hype around this. Quite independently, a Czech neurologist tried the same general approach a few years ago. Concerning the transformation of Z-scores to sound, the following seems to work quite well:

k = gamma * log(2)
f = f0 * exp(k * z)

where f0 is the frequency assigned to Z = 0, and gamma is the steepness parameter; with gamma = 1, a difference in Z of +-1 is translated into a doubled or halved base frequency (1 SD = 1 octave).
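
In code, JW's pitch rule might look like the following sketch (Python here for illustration only; the 640 Hz base tone is taken from JW's later remark and is otherwise an arbitrary choice):

    import math

    def z_to_frequency(z, f0=640.0, gamma=1.0):
        """Map a Z-score to a tone frequency with JW's exponential rule.

        With gamma = 1, a shift of one standard deviation doubles or
        halves the frequency (1 SD = 1 octave). f0 is the tone for Z = 0.
        """
        k = gamma * math.log(2)
        return f0 * math.exp(k * z)

    print(z_to_frequency(0.0))   # 640.0
    print(z_to_frequency(1.0))   # 1280.0 -- one octave up
    print(z_to_frequency(-1.0))  # 320.0  -- one octave down
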
RDN: A chord whose notes are determined by an algorithm applied to the current datastreams and various conversions of them. Background hum that is very soft when the composite across all sites is very little different from no structure, and increasing in volume with indications of departures from expectation. The dominant notes are the clusters of Eggs that are most correlated, and the complexity of the chord in time is a function of the level of global resonance. Rhythms are determined by the occurrence of warm and cool localized areas or global patterns, where there is much more or much less coherence and resonance than usual.

The heart rhythms act as a carrier wave for life and emotion and intelligence, at least
in an important metaphoric sense. For Gaia, we may expect an equivalent rhythm and carrier, with the expectation that it will be vibrations in a different frequency domain and in a different medium. The heartbeats of Gaia have an interval of about 5 or 10 minutes, based on abstract calculations with results between 4.8 and 8.4 minutes, e.g., from the ratio of temporal perception units apparently operating in human and global consciousness, and from the dimensions of human and planet expressed as a volumetric ratio. The random music will need a carrier wave, and the modulated excitement in the heart of Gaia can serve.

JW: I was playing a bit with the acoustic feedback I mentioned a few weeks ago. There is
no sophisticated synthesis; the current deviation of the random walk trajectory is simply converted to sound, using my old formula and the PC speaker as the output device. (I hate the idea of programming the SoundBlaster chip myself.) Anyway, it works and seems to provide usable feedback to the operator. I decided to play W[n] = 2S[n] - n (where S[n] is the bitsum at the n-th step of the walk) rather than Z[n] = W[n] / sqrt(n), for one obvious reason: if W is constant for a while, the corresponding Z decreases, and this is somewhat counter-intuitive for a statistics-naive operator, given the instruction "push the curve as high/low as possible." However, the conversion factor is adjusted so that at the end Z = 1 corresponds to one octave, as I proposed.
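
A small sketch of this playback scheme (assuming one bit per step and reusing the exponential pitch rule above; the details of JW's device are not specified here): the walk deviation W is rescaled so that, at the final step, Z = 1 spans one octave.

    import math
    import random

    def walk_feedback(bits, f0=640.0, gamma=1.0):
        """Convert a bit sequence into a list of feedback frequencies.

        Plays W[n] = 2*S[n] - n (the random-walk deviation) rather than
        Z[n] = W[n]/sqrt(n), so a flat stretch of the walk holds a steady
        tone. The scale factor makes Z = 1 at the last step equal one
        octave above f0, as proposed.
        """
        n_total = len(bits)
        k = gamma * math.log(2)
        scale = 1.0 / math.sqrt(n_total)   # W * scale equals Z at the final step
        freqs, s = [], 0
        for n, bit in enumerate(bits, start=1):
            s += bit                        # S[n]: cumulative bitsum
            w = 2 * s - n                   # W[n]: deviation from expectation
            freqs.append(f0 * math.exp(k * w * scale))
        return freqs

    bits = [random.randint(0, 1) for _ in range(100)]
    print(walk_feedback(bits)[-5:])
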
These preliminary experiments got me thinking about some details: (i) Frequency coding -- I realized that it's not always easy to get the direction of the change immediately, at least with gamma = 1; with gamma = 2, the feedback is much more convincing, but then the range of "audible" values is limited, since the frequency changes by a factor of 4 per standard deviation.

RDN: How about a non-linear mapping, where differences close to the expectation yield
large changes, but those closer to the end of the range produce progressively less change. I call this the Zeno scale in the case of our newest ArtREG version, where the best the operator can achieve at any given point is progress half-way to the goal -- so it is not possible ever actually to get there.
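
One way to realize such a mapping (an illustration only, not the actual ArtREG code) is to let each additional standard deviation cover half of the remaining distance to a fixed bound, so the pitch responds strongly near expectation and saturates toward the extremes:

    import math

    def zeno_map(z, z_max=4.0):
        """Compress a Z-score so each extra standard deviation covers half
        the remaining distance to +-z_max; the bound is never reached."""
        return math.copysign(z_max * (1.0 - 2.0 ** -abs(z)), z)

    def z_to_frequency(z, f0=640.0, gamma=1.0):
        """Exponential pitch rule from the earlier sketch."""
        return f0 * math.exp(gamma * math.log(2) * z)

    # Large pitch changes near expectation, diminishing changes far out.
    for z in (0.0, 0.5, 1.0, 2.0, 4.0, 8.0):
        print(z, round(z_to_frequency(zeno_map(z)), 1))
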
JW: Human hearing is non-linear too, so it may pay to think of perceptual harmonics.
(ii) Base tone and tempo seem to play a major role in the aesthetic perception of the resulting "music". I currently use 640 Hz for zero deviation, and the tempo is fixed by my CCD device (12.5 bits/second); however, experiments with bitstring playback showed that the impression may change a lot with these parameters. The main purpose of all this was to develop a feedback strategy not involving eye movements, and thus facilitating EEG recordings. Anyway, I thought it might be of some relevance to your "music of noospheres" subproject.

RDN: Most relevant. I have not found time to dig into my old Amiga files, where the
source of the taped music probably resides in APL functions. It may not be very important to do so, because the material for the MON (music of noospheres :) is going to be very rich and deserves fresh and creative thinking.

JW: BTW, I took a book by John Barrow, "The Artful Universe", for my weekend
bed reading, and there was a section devoted to music synthesized by random sources with varying autocorrelation functions (Chapter 5, "Natural History of Sound"? I read the book in German translation, so I'm not sure).

RDN: Did you have the impression it was beautiful music, and something we might well
use? I think we will/do have a varying autocorrelation function, but usually within an expected range. However, these variations can be "amplified" and themselves used as the source of parameter control for the synthetic music.
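
As a toy illustration of that idea (not a reconstruction of Barrow's examples; the process and parameter names are our own), the pitch rule above can be driven by a first-order autoregressive source whose correlation parameter phi is varied -- small phi sounds like uncorrelated jumps, phi near 1 like a smooth melodic drift:

    import math
    import random

    def ar1_melody(n, phi, f0=640.0, gamma=1.0):
        """Pitch sequence driven by an AR(1) source x[t] = phi*x[t-1] + noise.

        The noise is scaled so x keeps unit variance, making phi the only
        knob that changes the autocorrelation of the resulting 'melody'.
        """
        k = gamma * math.log(2)
        sigma = math.sqrt(1.0 - phi * phi)  # keeps x at unit variance
        x, freqs = 0.0, []
        for _ in range(n):
            x = phi * x + sigma * random.gauss(0.0, 1.0)
            freqs.append(f0 * math.exp(k * x))
        return freqs

    print(ar1_melody(8, phi=0.1))    # jumpy, nearly uncorrelated
    print(ar1_melody(8, phi=0.95))   # smooth, highly correlated
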