Concordia: A musical XR instrument for playing the solar system

Kelly Snook, Tarik Barri, Monica Bolles, Petter Ericson, Carl Fravel, Joachim Goßmann, Susan E. Green-Mateu, Andrew Luck, Margaret Schedel & Robert Thomas


Kepler Concordia, a new scientific and musical instrument enabling players to explore the solar system and other data within immersive extended-reality (XR) platforms, is being designed by a diverse team of musicians, artists, scientists and engineers using audio-first principles. The core instrument modules will be launched in 2019 for the 400th anniversary of Johannes Kepler's Harmonies of the World, in which he laid out a framework for the harmony of geometric form and stated his third law of planetary motion. Kepler's own experimental process can be understood as audio-first because he employed his understanding of Western classical music theory to investigate and discover the heliocentric, elliptical behaviour of planetary orbits. Indeed, principles of harmonic motion govern much of our physical world and appear at all scales in mathematics and physics. Few physical systems, however, offer such rich harmonic complexity and beauty as our own solar system. Concordia is a modular, extensible musical instrument designed to allow players to generate and explore transparent sonifications of planetary movements rooted in the musical and mathematical concepts of Johannes Kepler and of researchers who have extended Kepler's work, such as Hartmut Warm. Its primary function is to emphasise the auditory experience by encouraging musical exploration through sonification of geometric and relational information derived from scientifically accurate planetary ephemerides and astrodynamics. Concordia highlights the harmonic relationships of the solar system through interactive sonic immersion. This article explains how we prioritise data sonification and then add visualisation and gamification to create a new type of experience and a creative, distributed-ledger-powered ecosystem.
Kepler Concordia facilitates the perception of music while presenting the celestial harmonies through multiple senses, with an emphasis on hearing, so that, as Kepler wrote, ‘the mind can seize upon the patterns’.

1. Introduction

Kepler Concordia is a new scientific and musical instrument enabling players to explore the solar system and other data within immersive extended-reality (XR) platforms. Concordia is aimed at artists, musicians, educators, computer programmers and scientists that are interested in beginning to explore data through concepts of play, improvisation, multi-sensory experience and exploration. The intention is that, through their own agency inside an instrument defined by the constraints of scientific data, players may discover and explore patterns within the data that are not readily seen at human spatial or temporal scales. The instrument will create a deeper appreciation and understanding of the underlying patterns that define nature and can inspire further and deeper research into these subjects.

This paper describes the early development of the Kepler Concordia instrument, which moves beyond current standards, technology and definitions or expectations of Virtual Reality, Augmented Reality, Mixed Reality and Extended Reality (sometimes grouped under the heading XR) into territory that Kevin Bolen coined as 'The New Realities' (Bolen, 2019). Concordia's contribution to 'The New Realities' also includes the unique infrastructure of the instrument.

Concordia will eventually use a distributed peer-to-peer computing and community model, which opens up a novel economy and culture of design and evaluation practices. Intended Concordia participants fall into three broad categories: (1) programmers and instrument module hardware/software creators; (2) players and performers; (3) observers and audiences. The three categories of Concordia participants together form a community that functions as a creative economy and a collaborative ecosystem based on built-in, blockchain-based, community-defined smart contracts for tracking the value of contributions and use. This allows Concordia to serve as a socioeconomic experiment as it grows and evolves (Snook & Potts, 2018b).

Both the hardware and software for Concordia will be open source, developed in modules and uploaded to the Concordia blockchain and other distributed computational infrastructures. The gradual launch and series of events will enable live-streamed augmented-reality performances by musicians that can be observed from anywhere in the world. Active exploration of the broader socioeconomic and educational implications of this new type of instrument, particularly in its distributed-ledger aspects, is underway and might best be followed online (e.g. Snook & Potts, 2018a). Concordia also illustrates how the creative arts and creative economy can be a vanguard for other parts of the economy or society by early adoption of new technologies and experimental development of the institutions to support them.

Building such an experience requires the application of computational architectures and music technology to modern astrophysical data sets, to create a comprehensive, scientifically accurate solar system model. In Concordia's version of Virtual/Extended Reality, the reality being extended is the literal solar system with its planets, sun, moons and orbital movements. The extension of that reality includes all of the abstract sonic and visual representations of harmonic celestial relationships, such as the Spirograph-like link-line figures formed from vectors calculated between two planets as they move around the sun: for example, the 5-petalled figure that appears when drawing a line between Venus and Earth every 3 days (see Figure 1). These representations are then superimposed upon the sonic and visual representation of reality as the player moves through virtual space.
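The link-line construction described above is straightforward to sketch in code. The following is an illustrative simplification, not Concordia's implementation: it assumes circular, coplanar orbits with approximate mean radii and periods, whereas the instrument itself uses scientifically accurate ephemerides.

```python
import math

# Simplified model: Venus and Earth on circular, coplanar orbits.
# Radii in AU, periods in days (approximate mean values).
R_VENUS, R_EARTH = 0.723, 1.0
T_VENUS, T_EARTH = 224.7, 365.25

def position(radius, period, t_days):
    """Heliocentric (x, y) of a planet on a circular orbit at time t."""
    theta = 2 * math.pi * t_days / period
    return radius * math.cos(theta), radius * math.sin(theta)

def link_lines(step_days, n_lines):
    """Endpoints of one Venus-Earth link line every `step_days` days."""
    return [(position(R_VENUS, T_VENUS, i * step_days),
             position(R_EARTH, T_EARTH, i * step_days))
            for i in range(n_lines)]

# A line every 3 days over 8 Earth years; plotted, these lines trace an
# approximation of the 5-petalled Venus-Earth figure.
lines = link_lines(step_days=3, n_lines=int(8 * 365.25 / 3))
```

Plotting each `(start, end)` pair as a segment reproduces the Spirograph-like figure; varying `step_days` changes which symmetry the figure exhibits.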

2. Historical context and narrative arcs

2019 marks the 400th anniversary of Johannes Kepler's publication, Harmonices Mundi (Kepler, 1619), a treatise on geometry, astronomy and music. The book, commissioned by the City of Linz and published in 1619, details Kepler's third law of planetary motion (the first two having appeared in his Astronomia Nova of 1609), as well as his processes of discovering the three laws. The three laws, now well known, were not initially accepted, even by his contemporary, the astronomer Galileo Galilei, because they conflicted with accepted theological world views based upon the geocentric model of Ptolemy. These three laws, simply stated, are as follows:

  1. The orbit of every planet is an ellipse (not a circle), with the sun at one focus of the ellipse.

  2. A line joining a planet and the sun sweeps out equal areas in equal times as the planet moves about the sun, with slower angular velocity when farther away (near aphelion) and faster when closer (near perihelion).

  3. There is a concise and elegant mathematical relationship between a planet's orbital period and its distance from the sun: the square of the orbital period is directly proportional to the cube of the planet's mean distance (the semi-major axis of its orbit) from the sun.
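The third law is easy to verify numerically: with periods in Earth years and mean distances in astronomical units, T²/a³ comes out essentially equal to 1 for every planet. A minimal check, using approximate modern orbital values (not data from this paper):

```python
# Kepler's third law: T^2 / a^3 is (nearly) the same constant for every
# planet when T is in Earth years and a (semi-major axis) is in AU.
# Values below are approximate modern orbital elements.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

for name, (a, T) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.4f}")
```

Every ratio lands within a fraction of a percent of 1, which is the relationship Kepler extracted from Brahe's observations.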

Four specific aspects of Kepler’s approach were new or unique and bear mentioning in order to better understand the motivations for building the Concordia instrument. First, his writings show that he was primarily driven by the desire to expose and make accessible the harmonic order that he considered to be Nature’s organising principle (Kepler & Baumgardt, 1951).

I feel carried away and possessed by an unutterable rapture over the divine spectacle of the heavenly harmony.

This, in itself, was not unique in the early seventeenth century. But his relentless insistence on understanding and publicly exploring truth, rather than supporting any of the rival Christian dogmas of his day, led to near excommunication and rejection from the churches; he was left to seek spiritual satisfaction in astronomy, as he described in his personal letters (Kepler & Baumgardt, 1951):

You see how near I come to truth. As often as this happens, how can you doubt that I amply shed tears! For God knows, this a priori method [the scientific method developed by him and Galileo] serves to improve the study of the movements of the celestial bodies; we can put all our hopes into it if others cooperate who have observations at their disposal… I am eager to publish soon, not in my interest, dear teacher . . . I strive to publish in God’s honor who wishes to be recognised from the book of nature. But the more others continue in these endeavors, the more I shall rejoice; I am not envious of anybody. This I pledged to God, this is my decision. I had the intention of becoming a theologian. For a long time I was restless: but now see how God is, by my endeavors, also glorified in astronomy.

Rejection from his native Lutheran Church caused him deep personal grief and sorrow, and he found solace in science and his exploration of astronomy and mathematics.

Second, in addition to deep mathematical, astronomical and theological knowledge, Kepler also possessed a profound understanding of the most recent music theory of his day. At a critical juncture in his development, he was intrigued by the work of Galileo's father, Vincenzo Galilei (2003), who examined music theory from a broad historical and newly experimental perspective, with special attention paid to early Greek theory and tuning systems. While the Pythagorean idea of the 'Music of the Spheres' was widespread and even long-embedded in Christian dogma, Kepler approached music as a tool for physical investigation of reality with unprecedented rigor and enthusiasm.

Third, during his tenure as Imperial Mathematician to the Holy Roman Emperor Rudolph, Kepler had sole access to the most comprehensive and accurate data ever collected in the meticulously compiled astronomical observations of his predecessor, Tycho Brahe. His refusal to accept previously ignored or unknown discrepancies between observed data and prevailing models, whether Ptolemaic or Copernican, eventually led to his breakthroughs in discovering the correct mathematics reflected in his three laws. No one before him had so rigorously pursued precise agreement between theory and observation, one of his principal contributions to the scientific method.

Finally, the fourth and perhaps most breathtaking and admirable aspect of Kepler's approach was his ardent prayer, granted several times over, to be relieved of any attachment to ideas that stood in the way of the truth he so tirelessly pursued. Most notably, when the unexpected mathematical truths of his three laws finally became clear, he was forced to let go of his cherished, physically and philosophically beautiful, early model of the solar system, consisting of nested Platonic solids shown in Figure 2 (Kepler, 1596), which had launched his career and sustained his passionate inquiry for decades. His own discoveries undermined not only his contemporaries' world views, but his own favourite ideas (Wilczek, 2016). Coupled with his capacity to rethink his own theories, his ability and willingness to extrapolate truths to the broader human condition made him one of history's greatest scientists (Kepler, 1619):

While I struggle to bring forth this process into the light of human intellect by means of the elementary form customary with geometers . . . may He prevent the darkness of our mind from bringing forth in this work anything unworthy of His Majesty . . .

In light of this brief examination of Kepler’s approach, it is evident that modern science, by distilling his contributions to their most basic mathematics, does a disservice to his multidimensional, trans-disciplinary and uniquely musical pursuit of truth, which called upon every possible human faculty and mode of perception.

Kepler, himself, issued a call to future musicians to investigate truth in this way (Kepler, 1619):

. . . There is need for louder sound while I climb along the harmonic scale of the celestial movements to higher things where the true archetype of the fabric of the world is kept hidden. Follow after, ye modern musicians, and judge the thing according to your arts, which were unknown to antiquity.

Kepler Concordia goes beyond merely exploring mathematics through art; it is itself a protest against false dichotomies between art and science, music and mathematics, demonstrating the deep unity of their natures. Rooted in sound, generated by data and expanding into extended reality, yet far beyond a simple academic exercise in data sonification or XR experience design, it is a scientific and musical instrument designed explicitly to answer Kepler's call. It celebrates Kepler's ideas and makes them tangible as an immersive, visceral, educational experience. The creation of a modular, customisable musical instrument, as opposed to a musical performance or a recorded piece of music or art, gives each player the agency to deeply explore their own personal relationship to scientific and mathematical history and concepts.

3. Tools, techniques and approach

3.1. Astronomical sonifications

There have been numerous sonifications of planetary movement and other astronomical data (Alexander et al., 2011; Ballora, 2014; Candey, Schertenleib, & Merced, 2006; Leonard, 2010; Tomlinson et al., 2017), as well as sonifications directly referencing Kepler's work, notably Laurie Spiegel's Kepler's Harmony of the Worlds, which was included on Voyager's Golden Record (Feisst, 2015), and Rodgers and Ruff's Realisation for the Ear of Johannes Kepler's Astronomical Data from Harmonices Mundi (Ruff, Rodgers, & Kepler, 1979). These sonifications owe a conceptual debt to Pythagoras's Music of the Spheres, a philosophical conceit that sought to describe the movement of the sun, moon and planets using harmonic proportions. In Medieval Christian doctrine, there were three types of music: musica mundana, the music made by the rotations and revolutions of the heavenly bodies; musica humana, the harmonies holding together man's body and spirit; and finally, musica instrumentalis, the music made by man with instruments and voice (Knott, 2016). The Music of the Spheres was not thought to be literally audible, but rather to be a harmonic, mathematical or religious concept. In Kepler's time, instrumental music was thought to exist solely to make the inaudible music of the Cosmos and the human condition audible.

As discussed earlier, Kepler was highly influenced by the idea of the Music of the Spheres, and he sought a rational explanation of the movements of the planets, of which only six were known in his time. After exhausting his initial rudimentary calculations of the harmonic relationships of planetary orbital distances and other static parameters, he progressed to more abstract and complex relationships between angular velocities of the planets during particular times in their orbits, such as aphelion (the furthest point from the sun) and perihelion (closest point to the sun). Many recent solar system sonifications translate direct planetary measurements such as position, size, density, composition and so on into controls for the sound (Quinton, McGregor, & Benyon, 2016).
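Those aphelion/perihelion relationships follow directly from conservation of angular momentum (r²ω is constant along an orbit), so the ratio of a planet's fastest to slowest angular velocity is ((1+e)/(1-e))² for orbital eccentricity e. A small sketch, using approximate modern eccentricities, shows how close some of these ratios come to the musical intervals Kepler associated with them:

```python
# Ratio of a planet's angular velocity at perihelion vs. aphelion, from
# conservation of angular momentum (r^2 * omega = const):
#   omega_peri / omega_aph = ((1 + e) / (1 - e))**2
# Kepler compared such ratios to musical intervals. Eccentricities are
# approximate modern values; interval assignments follow Kepler's.
def extreme_angular_velocity_ratio(e):
    return ((1 + e) / (1 - e)) ** 2

for name, e, interval, target in [
    ("Saturn", 0.0565, "major third",   5 / 4),
    ("Earth",  0.0167, "semitone",     16 / 15),
    ("Mars",   0.0934, "perfect fifth", 3 / 2),
]:
    r = extreme_angular_velocity_ratio(e)
    print(f"{name}: {r:.3f}  (~ {interval}, {target:.3f})")
```

Saturn's ratio lands within half a percent of a 5:4 major third, which is the kind of agreement that convinced Kepler he had found celestial harmony in the angular velocities rather than in static distances.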

Through sonification, abstract concepts such as planetary movements can be made more tangible and comprehensible to the general public. Because many sonifications have been created as artistic pieces, however, scientists remain somewhat skeptical and wary of using sonification as a scientific tool. Barrass refers to the conflict between the more traditional scientific view of data analysis and the new view, which embraces the advantages of the human auditory system and its cultural significance (Barrass, 2012). There is a fine line between sonification as a means of scientific exploration and sonification perceived as a popular marketing tool or mere entertainment.

Considering the large amounts of data involved in astrophysics and the limitations of over-saturating human visual perception with information, the use of sonification in this field begs to be explored in more detail. The difficulty arises when trying to find the right balance between artistic input and sonifications that are usable in scientific data analysis (Quinton et al., 2016). Kepler was interested in the relationships between the heavenly entities, as opposed to any individual planet considered alone, so it is these relationships and their possible mappings to sound that are the focus of the first Concordia modules.

Figure 3. The 'Coltrane Circle' given by John Coltrane to Yusef Lateef in 1967 (Lateef, 1981).

3.2. Tonal relationships

Although Kepler's tuning systems predated the well-tempered clavier by at least a century, it simplifies sonification strategies to begin explorations with modern 12-tone harmonic musical systems. Concordia module sonification strategies are not necessarily limited to these constraints, as demonstrated in Section 4.2.2. Initially, however, the geometry of the interrelationship of planetary dynamics easily lent itself to the geometry of tones. Recent examples and research in this area include jazz legend John Coltrane's Circle of Tones (see Figure 3), in which Coltrane lays out his conceptualisation of a variation on movements through the circle of fifths in his tune, Giant Steps.

In Dmitri Tymoczko's The Geometry of Musical Chords (Tymoczko, 2006), the author writes

A musical chord can be represented as a point in a geometrical space called an orbifold. Line segments represent mappings from the notes of one chord to those of another. Composers in a wide range of styles have exploited the non-Euclidean geometry of these spaces, typically by utilising short line segments between structurally similar chords. Such line segments exist only when chords are nearly symmetrical under translation, reflection, or permutation. Paradigmatically consonant and dissonant chords possess different near-symmetries, and suggest different musical uses.

Coltrane's circle, orbifolds and other emerging systems of geometrical music theory (Hall, 2008) provide ample ideas to begin the exploration of mappings from planetary geometries to sound. As Hall points out,

Geometrical music theory suggests new directions for research in traditional music theoretical topics such as voice leading and set class similarity, new tools for teaching and conceptualising music, and will perhaps inspire composers. Moreover, one can envision practical applications of geometrical music theory, such as in the design of music visualisation tools, interactive musical toys, or even new musical instruments.

Concordia deliberately encourages players to move beyond pitch, using a combination of chords, timbres and envelopes to elucidate geometrical musical understanding.
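As a toy illustration of how such tone geometry becomes computational, pitch classes can be modelled as integers mod 12: stepping by a perfect fifth traverses all twelve classes (the circle of fifths), while stepping by a major third, as in the Giant Steps cycle, closes after only three. This sketch is our own illustrative example, not taken from the sources above:

```python
# Pitch classes as integers mod 12. Repeatedly adding a fixed interval
# (in semitones) traces a cycle through the 12-tone space.
def cycle(step, start=0):
    """Pitch classes visited before the cycle first repeats."""
    seen, pc = [], start
    while pc not in seen:
        seen.append(pc)
        pc = (pc + step) % 12
    return seen

fifths = cycle(7)   # perfect fifths: visits all 12 pitch classes
thirds = cycle(4)   # major thirds: closes after 3 (an augmented triad)
```

The same modular arithmetic underlies the symmetries that orbifold models of chord space make geometric.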

3.3. Dynamically and physically modelled sound

Virtual worlds and VR implementations to date have typically used client-side visual rendering to produce player-centric points of view, but have generally used sound clips (such as .mp3 files) or streamed sound (such as Soundcast) to present audio in-world. Concordia needs dynamically generated, 3-D-modelled spatial audio. Initially we have prototyped this as server-side synthesis, controlled by 'in-world' user controls and streamed into the experience. But the desired result of player-centric, location-unique sound that remains immersively correct as the player moves and turns their head will require sound to be 'rendered' (synthesised) in the client. Years of work in physically based visual rendering using parallel-processing GPUs has put visuals far ahead of audio for immersively correct experience. Concordia will be a platform and driving force for the development of 'point of hearing', physically accurate audio. For WebVR this may be achieved by means of JavaScript synthesis libraries. Other VR environments that could serve as prototyping laboratories, such as Sansar or Unity, could make use of audio generation in C#, ChucK, SuperCollider or similar languages running as plug-ins in the engine, or of expansions of existing synthesis modules such as Unity's Helm or MySynthesizer. More advanced audio rendering can be anticipated using some of the GPU's processing power. Early prototyping in augmented reality using Unity has revealed the shortcomings of the current state of the art of real-time audio synthesis as compared to visual rendering computation.

3.4. Accessibility

Accessibility, inclusion and adaptable design are important from the beginning of the design process. Generally, the audio-first and gestural control approaches help to make the Concordia player experience especially accessible to the visually impaired, while the hearing-impaired can still have a meaningful experience from the visual representations and interactivity. However, early module development has not yet directly considered accessibility at the level of prototype user interface or software navigation design. This will be included in the second iteration of prototype designs.

3.5. Perception, participation, interaction

Participants in Concordia fall into three general categories, listed here in decreasing degrees of agency:

(1) Contributors – Those who design, program, create and upload new sonification or visualisation modules, new open source hardware (, 2019) control interface designs for others to download and build, or new compositions within existing modules. Contributors may also be people who facilitate new use cases for Concordia, such as presets or ways of playing Concordia that can be used in teaching curricula.

(2) Players – Those who actively use the instrument controls to choose which Concordia modules to play, which data within given modules to play, and then to navigate, manipulate and explore the data and their mappings into the 4D immersive world of sound, visuals and haptics. Players have the option of inviting others to observe their live explorations and/or of recording their compositions back into the Concordia ecosystem for others to play and remix.

(3) Observers – Those who ‘ride along’ with someone else who is playing live or with a recording inside Concordia. This category of participant can act as an audience or beneficiary of experiences others are creating.

This diversity of participatory modes gives Concordia flexibility in terms of recruiting developers, composers and performers while still allowing for traditional audiences.

3.5.1. Dynamic relationships

By sonifying the interactions of the planets and the dynamic relationships between them, the participants of Kepler Concordia create what movement theorist Rudolf Laban called a 'dynamosphere'. This is a virtual mood-scape that adds depth to the virtual-sonic terrain and 'is achieved through a transformation of parameters controlling the manipulation and development of sonic events through timbral, spectral or temporal means' (Green-Mateu, 2018). Furthermore, players may explore 'identity as a pattern of organically linked relationships, and search for essential forms beneath the shifting surface of reality' (Smith, 2012). The sculpting of the architecture of the virtual experience is brought forth by carefully composed interaction, whether through manipulating physical controller surfaces, gestures with wearables such as the Mi.Mu gloves ( n.d.), or movement using common motion capture systems. Concordia becomes an accessible instrument where anyone can navigate, build and musically score virtual space through a 'dynamic reciprocity of movement and sonic interaction' (Green-Mateu, 2018). Concordia, as an open-source and personally customisable musical instrument, provides access to those who would otherwise face limitations in creating, playing with or experiencing traditional musical instruments. It is designed to lower or remove barriers traditionally encountered in playing or experiencing music, such as inequalities of virtuosity, education level, economic freedom, or social or cultural experiences (Karns, Dow, & Neville, 2012).

3.5.2. Inter-relationships

Concordia reaches beyond any particular musical temperament, theory, mapping strategy, composition style and control surface. It also magnifies the scope of individuals' feedback and interpretation of their experience. Executive Producer of Experiences at Oculus, Yelena Rachitsky, explains that immersive experiences are 'about cultivating an experience . . . and about the story that you tell yourself after . . . ' (Pape, 2018). Astronomical data sets are in themselves open source material representing the quantum reality of possibilities, in that everything is present but yet to be experienced. Alice-Ann Darrow explored the observations and stories individuals take away from exposure to music through a group of hearing and a group of hearing-impaired participants (Darrow, 2006). Both groups were exposed to the same playlist and were afterward asked to describe their emotional responses. The feedback from individuals in the hearing group was nearly identical, while the hearing-impaired participants returned more diverse and individuated responses. The research team found that the cultural conditioning surrounding the meaning of sound prevalent in the hearing group was not present in the non-hearing group (Darrow, 2006). Diverse interaction with the audio through open accessibility not only facilitates a new reality for the individual; it mirrors and enriches our collective understanding of the complex harmony of humanity as modelled by the solar system.

The approach of Concordia, mirroring Kepler's, is to minimise the imposition of pre-formed ideas or musical constructs in the sonification algorithms as much as possible, with the goal of allowing people to find mappings that naturally expose the symmetry, beauty, harmony and dissonance (the musicality) of the physics. For example, a Concordia contributor programming a new module might favour, when choosing sonic mappings for the lines in Figure 1, mapping directly to frequency space rather than mapping to 'pitch' via pre-selected notes in a pre-selected scale. In short, minimising subjective aesthetic impositions in the mappings will enable maximum freedom to create subjective, aesthetically rich results in the exploratory playing of the instrument. Sophisticated, model-based sonification algorithms will be favoured over direct audification, allowing for ever richer layering of information into the sounds being created. The simple, direct mappings of planetary information may be used as anchors for the subtler details that characterise the data, embracing rather than neglecting those particularly interesting imperfections or deviations from averages. For example, sound designers know that to make a truly interesting sound, parameters such as low-frequency oscillation (LFO), filter cut-off frequency changes, Attack/Decay/Sustain/Release (ADSR) envelope control, and thousands of other synthesis variables are critical considerations. Eventually, any given combination of Concordia modules could employ thousands, if not hundreds of thousands, of algorithmic, model-based sonification and visualisation mappings in a form that is designed to grow and evolve through open-source contributions to the code base. Contributors and players will choose which of those relationships please their own aesthetic judgement, and how to trigger or manipulate those relationships in time.
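The contrast drawn above, between mapping into continuous frequency space and quantising to pre-selected scale pitches, can be sketched as follows. The frequency range and scale here are arbitrary illustrative choices of ours, not Concordia parameters:

```python
import math

# A normalised data value x in [0, 1] (e.g. a link-line length) mapped
# into sound two ways: directly into frequency space, or snapped to the
# nearest note of a pre-selected scale. Range: four octaves above A2.
F_LO, F_HI = 110.0, 1760.0

def direct_frequency(x):
    """Continuous, exponential map: preserves every nuance of the data."""
    return F_LO * (F_HI / F_LO) ** x

def quantised_frequency(x, scale=(0, 2, 4, 5, 7, 9, 11)):
    """The same value snapped to the nearest note of a major scale."""
    semis = 12 * math.log2(direct_frequency(x) / F_LO)
    octave, within = divmod(round(semis), 12)
    nearest = min(scale, key=lambda s: abs(s - within))
    return F_LO * 2 ** (octave + nearest / 12)
```

The direct map keeps small deviations in the data audible as detuning; the quantised map sounds conventionally 'musical' but discards exactly the imperfections the text argues are worth hearing.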

A primary consideration is modularity and the flexibility of mapping algorithms: finding the optimum balance between the limitations of scientific accuracy and integrity and the freedom of sonic exploration and expression. Another consideration in module design centres around deciding which parameters to map to the visual domain and which to map to the sonic or haptic domains. Concordia's hardware control interfaces and modules will be similarly flexible, with the ability to manipulate the data through a wide range of controllers, from simple switches on a tablet to complex gestural control using motion capture or wearables such as the Mi.Mu gloves ( n.d.). As Scaletti (2018) noted, sonification does not equal music. The complexity and number of possible mappings and controls will ensure that Concordia is an expressive musical instrument.

4. Prototypes and testbeds

As a proof of concept, while the underlying computational, social and philosophical frameworks outlined above are in development, work on a prototype of the first Concordia module has begun. Where possible, existing tools (e.g. the data visualisation and sonification environments) are being used for early experiments.

4.1. Signature of the Celestial Spheres and Versum

Hartmut Warm has created a body of work ( n.d.; Warm, 2010) that extrapolates and extends Kepler's ideas by representing them visually and examining the geometric archetypes intrinsic to dynamic planetary relationships.

Warm has created a computer application, called 'Signature of the Celestial Spheres', based on open hybrid observational and theoretical ephemerides models (e.g. n.d.), that visualises a wide variety of geometric, Spirograph-like figures, such as Figure 1, as discussed in his text. Using his app, one can calculate and draw astronomical objects and events, such as relative positions, velocities, conjunctions and oppositions.

As a starting point for prototyping, two data sets of planetary ephemerides (as shown in Figure 1) have been generated using standard examples provided in Warm's app, Signature of the Celestial Spheres ( n.d.), and are freely available on the Concordia website (Snook, 2018):

(1) Lines linking Venus and Earth drawn every 4 days for 100 Earth years;

  (2) Lines linking Jupiter and Uranus every 72 days for 86 Earth years.

The first Concordia module prototype used the strings of numbers output from Warm's code to generate 3-D visual and sonic objects in a custom 3-D audiovisual composition environment developed by Tarik Barri, called Versum (Barri, 2009). Prototype module creators used these existing data creation and mapping tools to experiment with visuals and sounds, including waveforms, envelope shapes, filters, colours, visual shapes, and time-based rhythms and effects.

A Max/MSP patch was created to read the Venus–Earth and Jupiter–Uranus link-line data. A slider controlled the 'sampling rate' of the data, corresponding to the duration of time between each line in a series of lines drawn between two planets; the patch then sent OSC commands to Versum. These OSC commands carry information about the objects drawn in Versum, including Cartesian coordinates of lines and spheres, visual parameters such as brightness, size, colour and transparency, and sonic parameters such as pitch, frequency, rhythm/pulse and volume. There are hundreds of sonic and visual parameters that can be mapped from the data into Versum via custom OSC messages that have been made visible to the authors in a special hackable version of Versum. Thus data-encoded audiovisual OSC commands can be read by Versum to draw lines and spheres in 3-D, apply mappings to audio and visual synthesis, and enable navigation through the data. The objects created in Versum comprise lines and spheres with a large number of time-variable visual and sonic parameters that can be mapped using other data-generated OSC commands, with an emphasis on visual simplicity and on sonification as the main carrier of geometric and relational information. As the lines between planets are drawn, their mathematical properties are sonified in a way that allows sonic patterns to be revealed gradually, just as the visual patterns are.
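A hypothetical sketch of this data flow: one link line becomes a small batch of OSC-style (address, arguments) messages. The address names and argument layout below are our own illustrative assumptions, not Versum's actual OSC schema; in practice a library such as python-osc could deliver such messages over UDP.

```python
# Sketch: turn one Venus-Earth link line into OSC-style messages for an
# audiovisual renderer. Addresses and argument layouts are hypothetical,
# chosen only to illustrate the kind of traffic the Max/MSP patch sends.
def line_to_osc(line_id, start_xyz, end_xyz, pitch_hz, brightness):
    """Return (address, arguments) pairs describing one drawn line."""
    return [
        ("/line/position", [line_id, *start_xyz, *end_xyz]),
        ("/line/visual",   [line_id, brightness]),
        ("/line/audio",    [line_id, pitch_hz]),
    ]

messages = line_to_osc(0, (0.72, 0.0, 0.0), (0.16, 0.98, 0.0),
                       pitch_hz=440.0, brightness=0.8)
```

Each pair could then be handed to an OSC client (e.g. python-osc's `SimpleUDPClient.send_message(address, args)`) aimed at the renderer's listening port.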

The single dimension of data input control chosen was the sampling frequency of the lines, because this allowed for visual and sonic explorations of geometric symmetry. As players adjust the time interval between lines drawn inside Versum, they begin to see and hear changing symmetrical figures appearing in the data, from the beautiful, dense 5-petal structure of the original set to sparser 3-fold, 4-fold, 5-fold, 6-fold, 9-fold, 17-fold, etc. symmetrical figures. This basic geometric property of varying symmetries may be obvious to mathematicians, but it causes great delight in a first-time explorer and, when mapped well sonically, can offer hours of intrigue. This shifting symmetry pattern also occurs, in a similar way, with the Jupiter–Uranus link-lines when the sampling frequency is changed; this parameter therefore emerges as an example of a possibility for control by a single knob in a Concordia hardware interface module.
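The 5-fold symmetry of the original Venus–Earth figure can itself be derived from the orbital periods: Venus and Earth lie near an 8:13 resonance, so roughly five Venus–Earth conjunctions occur per eight Earth years, and the link-line figure inherits that 5-fold symmetry. A quick check of the arithmetic:

```python
# Venus-Earth near-resonance arithmetic. The synodic period (time between
# successive conjunctions) is 1 / (1/T_venus - 1/T_earth); over one
# 8-Earth-year cycle this yields about five conjunctions, matching the
# 5-fold symmetry of the link-line figure. Periods in days.
T_VENUS, T_EARTH = 224.70, 365.25

synodic = 1 / (1 / T_VENUS - 1 / T_EARTH)        # ~584 days
conjunctions_per_cycle = 8 * T_EARTH / synodic   # ~5

print(f"synodic period: {synodic:.1f} days")
print(f"conjunctions per 8-year cycle: {conjunctions_per_cycle:.3f}")
```

Sampling the link lines at other intervals effectively strobes this underlying period at different rates, which is why other n-fold near-symmetries emerge as the knob is turned.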

The player’s experience is a dynamic exploration of unfolding information as objects are made visible and audible in Versum’s displays. Perspective (location, camera-pointing) and zoom level within the data are under the control of the player, similar to an in-world avatar game such as Second Life (Kaplan & Haenlein, 2009). In the full module, before starting navigation or even during it, the player will set up their special universe, via an interface such as a mobile device or hardware cockpit control, by choosing from a whole host of variable parameters, such as which planets to include, which moons to include, over what time periods and at what intervals.

Versum enables fluid experimentation with mappings to hundreds of sonic and visual parameters at any moment. Objects can be created, destroyed and altered in real time, either by changing the mappings from the data or by changing navigational perspective.

Currently, navigation in the Versum environment is manual and interactive using a 3D space navigator. Versum uses SuperCollider for audio synthesis. The mappings from the data to audio, such as line length, line angle, regularity of line intersections, distance of intersections from the centre of the figure, line start and end points, and points where there are multiple intersections, can themselves be changed in real time to explore the most compelling mappings for hearing geometric regularity.

Prototype development thus far has been limited in scope. However, the broader research questions in play point to exciting futures:

  • How can players interface with the order of the solar system musically with mixed-reality technologies?

  • What are the musical properties of the solar system that can/should be sonified, explored and experimented with musically?

  • In what ways can players interact with, sonify and explore simulations of the solar system to reveal its underlying order?

  • How might other data structures be introduced?

  • How is describing the cosmos musically and sonically a meaningful way to explore its nature?

4.2. Residencies, performances, talks and Hackathons as testbeds

Concordia is an instrument emerging from a seed planted deep underground in 2001 as a thought experiment, before the technologies existed to create it. Unlike many projects of this scope that are centred at one institution or company, it has taken shape primarily through individual interactions around the world, informal collaborations, invited talks and lectures at conferences, and opportunities to hack on various elements of the idea. Largely self-funded and unstructured, but tenaciously nourished, the roots of Concordia have been growing underground while, above ground, creative technologies have bloomed. Advances in Virtual Reality, audio and visual synthesis, digital signal processing, social media, internet and web-based languages, information accessibility and search, broadband communications technologies, distributed ledger systems, AI and machine learning, wearable tech, sensor networks, digital fabrication, open-source/DIY hardware and software, data storage, commercial space exploration, brain imaging and neuroscience, music and art technology, digital humanities, museum science, physics, astronomy, robotics, and an endless list of other fields now provide rich and fertile ground for the realisation of Concordia.

Since the first invited talk on solar system sonification at the CTM Festival in Berlin in 2014, opportunities to develop the idea have increased in frequency and tangibility, focusing primarily on data sonification and music technologies in prototype development.

To underscore the audio-first nature of the design of the immersive Concordia experience, the following is a brief list of important milestones in the recent evolution of the project as it has transitioned from idea to reality during the period marking the 400th anniversary of Kepler’s Harmonices Mundi.

• Connected Futures, University of Brighton School of Media, Brighton, Concordia Launch Event (Snook, Ericson, Goßmann, Schedel, Thomas, Warm, and others), March, 2018

• International Conference on Auditory Display, selected oral paper (Snook et al., 2018) (Snook, Goßmann, Schedel), June, 2018

• Music Tech Fest, Stockholm, Concordia-themed hackathon (Snook, Ericson), September, 2018

• European Jazz Conference, Lisbon, keynote address (Snook), September, 2018

• Most Wanted: Music, Berlin, invited talk and demonstration (Snook), November, 2018

• Ableton Loop, full-day Concordia workshop (Schedel, Snook, Bolles), November, 2018

• Reality Virtually hackathon, MIT Media Lab, Cambridge (Snook, Bolles, Schedel), January, 2019 [more detail below]

• Denver University (Joseph and Loretta Law Institute of Arts and Technology), invited talk and spatial audio demonstration (Bolles and Snook), April, 2019

• Proto 2.0, Tromsø, invited talk and workshop (Snook), April, 2019

• Virginia Tech Cube residency (Schedel, Bolles, Green and others), May, 2019

• Music Tech Fest Labs, Pula (Snook), May, 2019

• Trossingen University of Music, Trossingen, invited talk and fulldome audio workshop (Snook, Goßmann, Bolles), May, 2019

• Glasgow Jazz Promotion Network Conference, Glasgow, keynote talk and demonstration (Snook), June, 2019

• Nashville MiniDome Residency for spatial audio and dome visual experimentation (Bolles, Green, Snook and others), July, 2019

• Making Media Matter, Denver University (Bolles), July, 2019

• Cubefest, Virginia Tech, Blacksburg (Schedel, Snook, Bolles, Green, and others), August, 2019

• Reeperbahn Festival Next Conference, invited keynote, September, 2019

• L’Ecole Polytechnique Fédérale de Lausanne (EPFL), invited lecture and demonstration (Snook), September, 2019

4.2.1. Reality Virtually hackathon, MIT Media Lab, January, 2019
Hackathons and workshops provide fertile ground for engaging wider communities in prototyping and concept testing, creating minimum viable demonstrations of the various technologies used in Concordia. They offer a unique opportunity for fast, iterative design and proofs of concept. One of the main components of Concordia is building an open-source community; engaging people through hackathons brings together minds from different backgrounds to work together on creating versions of sonifications/visualisations of the scientific data. This strongly reflects the process of collaborative world-building and creating that Concordia intends to inspire.

For example, an early prototype for the first Concordia instrument module was mocked up by a small team assembled during a 2-day mixed-reality hackathon in early 2019. As has already been discussed, the Solar System presents itself as a highly structured system with predictable, but very interesting, chaotic elements, and geometries constructed by drawing straight lines between any two planets over time provide rich data sets of high musical potential. For this exercise, the team chose to work with the five-pointed star of Venus–Earth in Figure 1. The third spatial Z dimension was neglected to simplify calculations. Astronomical data generated by planetary ephemeris models (Warm, 2010) were visualised and sonified using Unity and ChucK, displayed in a Magic Leap One headset environment, and played interactively as a musical interface using the Mi.Mu Gloves. The resulting AR experience was shared with the public in a demonstration on the following day.

Objectives at this hackathon were as follows:

  (1) to represent at least one planetary link-line star figure (Venus–Earth),

  (2) to spatialise visual and auditory display in the Magic Leap One mixed-reality headset,

  (3) to demonstrate gestural control and interactivity using the Mi.Mu Gloves.

Stretch goals were sketched as a storyboard to include a progression of gamified steps through the experience that would include multiple planetary data sets and additional modes of visualisation, sonification and interactivity.

Several design problems were considered during the process of story-boarding and narrowing project scope:

  • How can programmers present data structures using sound with the least amount of subjective musical or aesthetic bias?

  • What are the intrinsic musical properties of this specific type of data set?

  • Are interactions merely vantage points on the data chosen by the player, similar to a different viewpoint, angle or format, or are there other interactions to consider?

Studying the visualisation of the data facilitated efforts to define areas and properties of interest in the data that could reveal sonic equivalents to the Spirographic and symmetric visual geometries. Some ideas we explored included:

  • 2D Collisions of Link-lines – returning frequencies where trajectories collide in a focused area;

  • Distance between planets – scaling and sonifying the distance between planets to frequencies;

  • Proximity – sonifying relationships between viewer and data objects (in this case, lines);

  • Conjunctions – sonifying eclipses and other planetary events of significance to trigger meaningful sound;

  • Line Angles – sonifying vector information of relative planetary positions, velocities, accelerations.

Due to technical challenges in compiling software and sending and receiving Open Sound Control (OSC) messages to and from the Magic Leap One headset, three separate laptop computers were used for the visual (Unity), audio (ChucK) and glove control (Mi.Mu Glover) elements of the system, with all machines sharing data via OSC over Wi-Fi.

(1) Step 1. Sonification – The distance between the planets (the length of each link-line in Warm’s output) was chosen as a starting point for sonification. The data set included 731 measurements of distance, which were mapped to frequency through a simple ratio shifted into the audible range, triggering 100 ms sine-wave oscillations (notes of 100 ms duration) for each line.

(2) Step 2. Visualisation and interaction design – A bright green, movable collider sphere object was introduced into the space that would trigger the oscillations of any lines within it. The player was able to move the sphere collider up or down through the visual field while viewing the Spirograph pattern. Using the Mi.Mu Gloves, OSC control messages were sent by the Mi.Mu mapping software, Glover, on one machine to Unity on another to spin the link-line pattern, like a wheel, and/or to move the collider sphere. This interaction was akin to a stylus on a spinning record, allowing the player to play back correlated frequencies of moments in time. When the collider landed on a point of interest, the link-lines would trigger their distance-relative frequency as a sine wave in ChucK.

(3) Step 3. Full immersion – Once this is explored sufficiently, a player is then able to enter the next mode, which allows them to fly down into and through the figure, where various properties of the data are spatially sonified and visualised around them.
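The distance-to-frequency mapping of step 1 can be sketched as follows; the band edges and placeholder distance data are illustrative, since the team's exact scaling ratio is not specified.

```python
import math

SR = 44100            # audio sample rate, Hz
NOTE_SEC = 0.1        # 100 ms per note, as in the hack

def distance_to_freq(d, d_min, d_max, f_lo=110.0, f_hi=1760.0):
    """Map a planetary distance linearly into an audible band.
    The band edges are illustrative, not the team's exact ratio."""
    x = (d - d_min) / (d_max - d_min)
    return f_lo + x * (f_hi - f_lo)

def sine_note(freq, sr=SR, dur=NOTE_SEC):
    """A 100 ms sine-wave 'note' for one link-line."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# 731 distances, mirroring the size of Warm's output (placeholder values)
distances = [0.28 + 0.01 * (i % 45) for i in range(731)]
d_min, d_max = min(distances), max(distances)
notes = [sine_note(distance_to_freq(d, d_min, d_max)) for d in distances[:5]]
```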

The AR experience was successfully created with the simple interactions, sonifications and visualisations designed in steps 1 and 2, but time was not available to implement step 3. Results were shared with roughly 50 public visitors in a walk-up demonstration setting where each was guided through the experience.
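The collider interaction of step 2 reduces to a sphere–segment intersection test: a link-line sounds when its shortest distance to the sphere centre falls within the radius. A geometry-only sketch (function names hypothetical):

```python
import math

def point_segment_dist(p, a, b):
    """Shortest distance from point p to the segment ab (3D tuples)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def triggered_lines(sphere_centre, radius, lines):
    """Indices of link-lines inside the collider sphere (to be sonified)."""
    return [i for i, (a, b) in enumerate(lines)
            if point_segment_dist(sphere_centre, a, b) <= radius]
```

In the actual prototype this test ran inside Unity's physics engine; the sketch only shows the underlying geometry.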

This hackathon demonstrated that mixed reality is an interesting and powerful tool for experiencing musical and mathematical abstractions at a variety of scales. Even the simple mappings for sonification and control from this experiment show great promise for exploring these types of data. Spatial computing and holographic interfaces are strong in the area of visualising what we cannot normally see; ‘playing God’ was a term often used during the hack sessions to describe scaling views for different perspectives. The obvious missing element was a large projection dome that could provide the ‘reality’ – a dynamic scale model of the solar system – upon which the data sonifications and visualisations would be superimposed.

4.2.2. Virginia Tech Cubefest, June and August, 2019

A subset of the Concordia team was invited to compose and perform a four-movement immersive piece designed for Virginia Tech’s Cubefest, a festival featuring the Cube venue, a four-story black-box venue with a 140-channel loudspeaker array (Lyon et al., 2016). The physical system comprised a ring of 64 speakers at ear height on the first level and smaller rings of 12–24 speakers on each of the three upper levels. Each of the four movements used similar link-line data sets in different ways by different artists to explore a diversity of approaches to module development. Four laser projections were developed in parallel by a laser artist to supplement the sonification designs. No other traditional projections or visualisations were used.

(1) The first movement, entitled Somnium (Green, 2019), used 400 years of Warm’s planetary data, with Cartesian coordinates converted to spherical geometry for the most accurate positioning in space, to determine the location from which the sound of each planet would emanate in the 360-degree array. The piece was inspired by Kepler’s short story Somnium (Chen-Morris, 2005), which tells of space travel to the moon; the composers expanded the space travel to the first five planets in the solar system. Each planet had a melody and colour assigned to it, specifically composed using its orbital resonance to drive its relative key and time signature. Venus’s melody, for example, was written in the key of A: one day on Venus is 55000 seconds = 55.00 Hz = A1 in 12-tone equal temperament. The concept of a ‘boss planet’ was used to signify the primary melody and key chosen by the performers as the planet being visited at any given time. An iPhone running Mi.Mu’s Gliss app was used to choose the boss planet.
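Folding a planetary period into the audible range is typically done by repeated octave doubling or halving; a generic sketch follows. This illustrates the standard technique only, not necessarily the authors' exact derivation of Venus's 55 Hz.

```python
# Octave-reduce a rotation period to a pitch: double or halve the base
# frequency (1/period) until it lands inside a chosen audible band.
def period_to_audible_freq(period_seconds, f_lo=27.5, f_hi=55.0):
    """Fold 1/period into [f_lo, f_hi) by octave shifts, preserving
    pitch class: every doubling is one octave up."""
    f = 1.0 / period_seconds
    while f < f_lo:
        f *= 2.0
    while f >= f_hi:
        f /= 2.0
    return f
```

Because octave shifts preserve pitch class, the resulting frequency can then be assigned a key and time signature as in Somnium.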

In addition to the melody, each planet’s atmospheric make-up (Zuckerwar, 2002) was used to shape the sound design and relative characteristics, as well as the sound environment through which all other planets’ melodies were heard. For example, due to Mars’s thin, carbon dioxide-laden atmosphere, sound does not propagate in the same way that we experience it on Earth (Suchenek & Borowski, 2018): higher frequencies have a quick decay, producing a more mid-range percussive quality. The melodies, key/time signatures and sound environments were programmed through Ableton Live and processed through Max/MSP. The harmonies were created by calculating the angles each planet made to one another at any given moment, and performers chose the chromaticism of each planet’s harmonies in real time – from octaves, fourths and fifths, through major and minor thirds and sixths, to all twelve tones – using orbital data in Max/MSP. Using the Mi.Mu Gloves, performers were able to control the volume of each planet as well as the level of harmonic complexity generated by the data.

The visual component created an ever-changing starfield through the use of lasers. The number of stars in space and the density of the starfield were produced by the sum of the volumes and the sum of the chromaticisms, respectively. The colour of the stars as well as the stage lights matched the ‘boss planet’. All controllers were mapped via MIDI CC and OSC to Ableton Live, Max/MSP harmony and spatialisation patches, and the lasers. Given the many layers of programs running for this piece, the required CPU power was distributed between two computers: one computer decoded the gestures, calculated the harmonies and created the sounds, while a second machine received 6 audio channels and spatialised the incoming sound throughout the Cube. For the VR version of Somnium, the physical restrictions of the Cube are removed, enabling the placement of the sound of each planet in its proper elliptical orbit by spawning speakers where and when needed. Locations of the next sound are predicted, and sound sources are created that spawn, speak and perish, releasing CPU power after the sound is made. Instead of using a sound spread over multiple speakers as an expressive parameter, the performer is given control over the resolution of the point sources. This is distinct from the musical sampling parameter that chooses which note to play; in VR a single note can be heard moving through multiple locations. The performer retains control over the volume of each planet; however, we added attenuation based on the avatar’s distance from the sound source. The VR experience can be for one or two players. Due to the wired tethering of most VR headsets, an inexpensive Leap Motion controller is used instead of the Mi.Mu Gloves. The Leap Motion is already well integrated into Unity, so performers can see their hands modelled in VR.

The visual component is a very abstract landscape inspired by the paintings of Hilma af Klint. When a performer chooses a planet, the colour, texture and horizon point of the visuals change, giving an abstract sense of the planet while retaining sound as the primary sensory channel. In two-player mode the performance is cooperative: in order to change planets, the first person displays a hand posture that initiates the request to change the primary planet. Once the second person gives permission through the permission posture, the first person can change the boss planet at their discretion, ideally at a musically satisfying time.

(2) The second movement, entitled Voyager, was a departure from the other three in that it did not sonify link-line data directly, but instead offered an ambient, spatialised tour through actual space sound samples using trajectories defined by relational planetary data. There are no plans at this time to translate this movement into VR.

(3) The third movement, entitled Linklines, enabled exploration of the angular relationships between elliptical planetary orbits in an interactive auditory environment for a performer or small groups of participants in an installation or concert setting. Geometric properties of link-lines were mapped onto circular structures inherent to music and listening, which were in turn used to dynamically control processes of audio synthesis and multi-channel spatial sound projection. This offered the performer and the audience shared opportunities to experiment with the expression of planetary movements through musically mediated auditory scenes.

A central property of spatial planetary movement is its near-circularity and harmonic symmetry; the repeating elliptical paths around the sun form the basis of Kepler’s laws. In the human perception of vibrations and frequency, circular structures can also be found. While frequency itself is a linear dimension, musical pitch is inherently cyclical, repeating once in every octave (Herremans & Chuan, 2017). With a strategy like the Shepard tone to eliminate the linear progression of frequency, pitch classes can be transformed into a continuous circle (Shepard, 1964). A well-known cyclical structure is the system of musical keys in western musical harmony: under the condition of tempered tuning, musical keys form a closed circle of fifths, shown in Figure 4. While the systems of pitch classes and keys emerge from analysis, Coltrane’s Tone Circle represents a structural vision of pitch relationships that is at the same time a condensation of deep musical experience. It can serve as a compass for musicians seeking to build chords and melodies they can use for orientation during composition and improvisation, and thus it represents a platform for creation and a tool for analysis in equal measure.

Numerical structures can also be used to subvert musical intuition and provide a source of surprise. If we look at the numerical values of the Fibonacci series represented in the decimal system, we find that the last digit of each number develops a cyclical pattern that repeats every 60 numbers and has many interesting properties. When plotted around a circle, for example, it reveals other patterns, such as numbers opposite each other summing to 10 (Jain, 2019).
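This 60-step cycle (the Pisano period modulo 10) and the opposite-digits property are easy to verify computationally:

```python
def fib_last_digits(n):
    """Last decimal digits of F(1)..F(n), computed entirely mod 10."""
    digits, a, b = [], 1, 1
    for _ in range(n):
        digits.append(a)
        a, b = b, (a + b) % 10
    return digits

d = fib_last_digits(180)
cycle = d[:60]  # the pattern repeats every 60 numbers
# Digits diametrically opposite on the 60-point circle sum to 10
# (or to 0 when both are 0), i.e. their sum is 0 mod 10.
opposite_sums = [(cycle[i] + cycle[(i + 30) % 60]) % 10 for i in range(60)]
```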

In a performance setting, this initial set of mappings of the circle was implemented as processing modules in Max/MSP to be articulated by the geometric properties of the progressing planetary orbits. The resulting control values could then be dynamically applied to synthesis processes, similar to the patching of an analog synthesiser. The sound generating process ranged from simple mappings of link-line properties to sine tone events wrapped in amplitude envelopes, to complex additive accumulations of these tones into focused events and streams, to spatially distributed cross-modulations (FM) among these layers for the creation of dynamically de-correlated enveloping drones filling the physical space. The implemented audio rendering modules employed concepts of auditory scene analysis (Bregman, 1990), reversing them to achieve an emergent synthesis of auditory scenes.

Figure 4. Circular, cyclical patterns in music (circle of fifths, left) and mathematics (Fibonacci sequence, last-digit repeating pattern, right).

The features of the Jupiter–Uranus link-line geometries (Figure 1) that were used included link-line angle, link-line intersecting node angle, Sun–Jupiter angle and Sun–Uranus angle.

These angles were then matched to one or more of each of the musical geometries, including:

• the circle of fifths, by mapping angle to the nearest note in the circle, with C at the top of the circle;

• the Coltrane Tone Circle, by mapping angle to the nearest note in the Coltrane Circle, with C at the top of the circle;

• the Fibonacci circular pattern, by mapping the numeric digit to major scale degree, where zero is silence and 9 is equal in scale degree to 2.
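The first of these mappings amounts to quantising an angle to the nearest of twelve points spaced 30° apart around the circle of fifths. A sketch, with enharmonic spellings chosen arbitrarily:

```python
# Twelve notes of the circle of fifths, clockwise from C at the top (0 deg).
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B",
                    "F#", "Db", "Ab", "Eb", "Bb", "F"]

def angle_to_fifths_note(angle_degrees):
    """Quantise an angle (degrees clockwise from the top) to the nearest
    note on the circle of fifths; each note occupies a 30-degree sector."""
    idx = round((angle_degrees % 360) / 30) % 12
    return CIRCLE_OF_FIFTHS[idx]
```

Substituting a differently ordered note list gives the analogous Coltrane Tone Circle mapping.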
For the VR version of this movement, a setup similar to (Gossmann, 2005, figure 5) is planned, which creates a space focused on a central table to which players can come from every angle to participate in shared exploration while being immersed in a spatial sound field. Ideally, this space will feature a dome layout of loudspeakers evenly surrounding the participants, or the binaural equivalent rendered in headphones for remote participants, who will gather around an instrument setup in the centre of the room. The controls will consist of simple interaction devices and GUI strategies. The spatial world emerging from the mapping of astrodynamic geometries will be rendered as augmented-reality audiovisual data overlaid on a scientifically accurate solar system ‘reality’, as traditionally represented in a planetarium setting.

(4) The fourth and final movement, entitled Orbits, was built on a sonification strategy explored in an earlier work for a 16-channel speaker array in the Hypercube at Denver University. It employed a series of sub-modules in Max/MSP that interpret the rotational patterns of Venus and Earth around the Sun to create a variety of tones that are layered into a rich soundscape through live performance and improvisation. The Max/MSP modules were controlled in real time by an iPad via the Mira app and the Mi.Mu Gloves, and the outputs were mapped to the Cube’s 140-channel speaker array. The main data set used was, again, the link-lines between Venus and Earth as provided by H. Warm’s app. The Max/MSP patch read in a text file of Warm’s ephemeris data and sent link-line endpoint coordinates to other sub-modules.

An additional HITRAN (Gordon et al., 2017) molecular spectroscopy data set was used for the sound design of each planet’s identity, driven by its atmospheric composition. The HITRAN data set provides spectroscopic parameters for a variety of molecules that can be used to simulate the transmission and emission of light in the atmosphere. By scaling and normalising the data points so that the frequency spectrum fell within the audible frequency range, a ‘tone’ representative of the HITRAN data was created using sine waves and basic additive synthesis methods. Waveforms were created for each individual molecule and then combined to create a signature ‘tone’ for Venus and Earth based on the percentage presence of each molecule in the composition of each atmosphere.
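The scale-and-sum approach can be sketched as additive synthesis over a list of (frequency, intensity) spectral lines. The line list below is invented for illustration, not real HITRAN output, and the audible band edges are assumptions.

```python
import math

SR = 44100  # audio sample rate, Hz

def spectral_tone(lines, dur=0.5, sr=SR, f_lo=100.0, f_hi=8000.0):
    """Additive 'signature tone' from (frequency, intensity) spectral lines:
    scale the line frequencies linearly into the audible band, then sum
    sine partials weighted by normalised intensity."""
    freqs = [f for f, _ in lines]
    lo, hi = min(freqs), max(freqs)
    scaled = [(f_lo + (f - lo) / (hi - lo) * (f_hi - f_lo), w)
              for f, w in lines]
    total_w = sum(w for _, w in lines)
    n = int(sr * dur)
    return [sum(w / total_w * math.sin(2 * math.pi * f * i / sr)
                for f, w in scaled) for i in range(n)]

# Placeholder spectral lines (Hz-scale positions and relative intensities)
co2_lines = [(2.0e13, 0.9), (2.1e13, 0.5), (6.9e13, 0.3)]
tone = spectral_tone(co2_lines, dur=0.01)
```

Per-planet tones would then be mixed in proportion to each molecule's share of the atmosphere, as described above.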

Five sub-modules were created for this movement.
(a) Triangle Tone: This module generated a series of triangle waves whose fundamental frequencies were determined by the distance between Venus and Earth, i.e. the length of the link-line. Every new data point introduced to the system generated a triangle tone at the frequency corresponding to that data point, and each of these tones was mapped to a sequential list of outputs; in the Cube this translated to playing 134 data points at the same time. The spatial build-up allows a participant to walk around the space and hear how the progression of moving closer and farther apart changes through their own physical orbit in the space. The distance data were also used to modulate the duty cycle of the triangle wave, adding another level of tonal flavour.
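Duty-cycle modulation of a triangle wave can be sketched as a phase-domain oscillator; the function names and defaults are hypothetical, not the Max/MSP patch itself.

```python
def triangle_sample(phase, duty=0.5):
    """Triangle wave value at phase in [0, 1): rises -1 to +1 over `duty`
    of the cycle, falls back over the remainder. duty = 0.5 gives the
    symmetric triangle; other values skew the ramp and change the timbre."""
    phase %= 1.0
    if phase < duty:
        return -1.0 + 2.0 * phase / duty
    return 1.0 - 2.0 * (phase - duty) / (1.0 - duty)

def triangle_wave(freq, duty=0.5, sr=44100, dur=0.01):
    """Render `dur` seconds of the oscillator at `freq` Hz."""
    return [triangle_sample(freq * i / sr, duty) for i in range(int(sr * dur))]
```

Feeding the distance data into `duty` per note reproduces the kind of timbral variation described above.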

(b) Melody Modules: The melody modules cycle through the provided data set, playing one note at a time and outputting through a cyclical progression of chosen speakers. The pitch of the note is determined by the data set. The timbre is constructed by using a sine wave to cycle through a wavetable constructed from a sampled tone designed from the HITRAN data. There were three melody modules; the data values used were the link-line length, Earth–Sun distance and Venus–Sun distance. The variables available for control are:

  • the ADSR curve of the synthesised tone;

  • a frequency multiplier that uses an integer multiple to increase the frequency values of the data stream to generate a higher pitch;

  • the note length (i.e. half note, quarter note, eighth note).

(c) Molecule Dance: The Molecule Dance module uses the HITRAN data tones designed for Earth and Venus to generate the initial buffer of sixteen 2-dimensional wavetables. The latitude values of the planets are used to modulate the deviation from a centre frequency, controlled by each planet’s length value, of 16 sawtooth waves that cycle through 16 different instances of the x and y axes of the wavetable. For Earth, the Earth-generated HITRAN tone is used, with the Earth values controlling the x axis and the Venus values controlling the y axis; the same is done for Venus, with the Venus HITRAN tone utilised and the Venus values controlling the x axis while the Earth values control the y axis. This module has no variable controls.

(d) Triangle of Death: This module generates a series of triangle waves whose frequency values are again defined by the distance data set. What makes this module unique is that it also uses the distance data set to define the frequency component of a sine wave, where the signal is multiplied and then summed by two variable values and used to modulate the duty cycle of the triangle waves.

(e) Ambisonics: This module used ambisonics to decode the spatial positioning of the planets in physical space. In this module, two different sound clusters are generated, comprising 16 triangle-wave sound particles with frequencies dependent on the last 16 data points of Earth–Sun distance and Venus–Sun distance.
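First-order ambisonic encoding of a sound particle at a given direction follows the standard B-format W/X/Y/Z equations. A per-sample sketch, assuming FuMa weighting (W scaled by 1/√2), since the exact encoder/decoder used in the Cube is not specified:

```python
import math

def encode_bformat(sample, azimuth, elevation=0.0):
    """First-order ambisonic (B-format, FuMa) encoding of one audio sample
    arriving from (azimuth, elevation) in radians. Decoding for a given
    speaker layout is a separate, layout-dependent step."""
    w = sample * (1 / math.sqrt(2))                         # omni component
    x = sample * math.cos(azimuth) * math.cos(elevation)    # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)    # left-right
    z = sample * math.sin(elevation)                        # up-down
    return (w, x, y, z)
```

Each of the 16 triangle-wave particles would be encoded this way at the azimuth/elevation derived from its planet's position, then decoded to the speaker rings.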

Concordia Launch Pad, 19 August – 3 November, 2019

Concurrent with the publication of this paper is an intensive research and development effort called Concordia Launch Pad. This period is focused on developing and testing ideas around module design and implementation, developing ideas for community development and platform scope, and designing a culminating immersive experience to publicly showcase the first prototypes. Decentralised teams of artists, storytellers, sound designers, technicians and other experts are exploring the underlying themes, systems and structures of Concordia, while smaller subsets of the team gather physically to produce and share physical prototypes. The team is focusing on ways to use immersive, information-rich audio and augmented reality to connect the human to the technological and inspire a connection between people and patterns in nature. After Concordia Launch Pad, the instrument will be developed under the auspices of a new Institute for Investigative Arts, seeking to create worlds that empower people to investigate reality through music and art.

The objectives of the Concordia Launch Pad project are fourfold:

(1) To create a new musical instrument by developing and testing at least two prototype Concordia software and hardware modules, incorporating mixed reality, gestural control and spatial audiovisual displays

(2) To co-develop a public installation experience using these prototypes to be shared live in a series of public events during the fourth quarter of 2019

(3) To develop and scope the broader Concordia technical and conceptual frameworks using insights gained from developing the prototypes

(4) To serve as a learning experience for Investigative Music that will inform other projects that could be developed within the newly established Harmony Institute for Investigative Arts.

A wide variety of sonification strategies, control interfaces, headsets, displays and dome projection systems are being explored, as well as other immersive audio languages such as Faust. Tools for development at Concordia Launch Pad include a large physical space in which to work, a 12-channel spatial Meyer Sound loudspeaker array, a full recording studio, and a mixed-reality headset and computing platform. Customised versions of code from Versum, Signature of the Celestial Spheres – the Program, the modules created for Cubefest and other hackathons, and world-building in a new decentralised virtual platform called Decentraland (Ordano, Meilich, Jardi, & Araoz, 2017) are being integrated, expanded and explored by artists. Particular attention is being paid to developing meaningful and machine-learning-assisted ways of creating multiplayer gamified experiences across the physical versus virtual divides.

5. Conclusion and future work

It is exciting to continue discovering new perspectives from which to see and hear the data of the cosmos by designing new interactions that will drive the exploration and discovery of Nature. Extended reality is an excellent tool for exploring our universe beyond our solar system through visually and sonically compelling experiences. Bolstered by early experiences, a dedicated team is actively engaged in developing prototypes of the first Concordia modules and installations, which will be shared with artists and the wider public to begin the open-sourcing process this year.

Work on the next phase of development of the Concordia instrument and its underlying computational, interactive, economic, social and organisational frameworks will begin at the end of Concordia Launch Pad, with the establishment of the project within the new, decentralised Institute for the Investigative Arts.

As Concordia matures, and as additional scientific discoveries are made and new data becomes available, the original data sets upon which the instrument is based may change, requiring version tracking that links each performance, composition and module with the correct versions of each other and of the data itself. With careful version control and matching, no composition or performance becomes obsolete, and a full history of the evolution of the instrument, and of any compositions made with it over time, remains accessible, allowing the instrument to be sustainable and all compositions to be easily re-performed (Schedel & DeMartelly, 2008). It should be noted that this is a departure from many electronic music frameworks, such as digital audio workstations (DAWs) or plug-ins, in which compositions or sessions must be mixed down to stereo or multi-channel audio files to preserve the details of the performances, so that the sessions can still be heard when operating systems change or compatibility breaks.
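One way to picture the version-matching scheme described above is as a manifest that pins each composition to the exact module and data-set versions it was made with. The following is a minimal sketch only; the names used here (PinnedDependency, CompositionManifest, the module and data-set identifiers) are hypothetical illustrations, not part of any actual Concordia codebase:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PinnedDependency:
    """An instrument module or data set pinned to one exact version."""
    name: str
    version: str

@dataclass
class CompositionManifest:
    """Links a composition to the versions it needs for re-performance."""
    title: str
    modules: list      # PinnedDependency per instrument module used
    data_sets: list    # PinnedDependency per ephemeris/data source used

    def is_replayable(self, archive: dict) -> bool:
        # A composition can be re-performed only if every pinned
        # version is still retrievable from the version archive,
        # which maps a name to the set of versions it holds.
        return all(
            dep.version in archive.get(dep.name, set())
            for dep in self.modules + self.data_sets
        )

# Example: a composition pinned to a specific module and ephemeris release
# (all identifiers below are invented for illustration).
manifest = CompositionManifest(
    title="Harmonices Mundi Revisited",
    modules=[PinnedDependency("orbit-sonifier", "2.1.0")],
    data_sets=[PinnedDependency("planetary-ephemeris", "2013-08")],
)
archive = {
    "orbit-sonifier": {"2.1.0", "2.2.0"},
    "planetary-ephemeris": {"2013-08"},
}
print(manifest.is_replayable(archive))  # True
```

Because the manifest references versions rather than rendered audio, an updated data set can be published alongside the old one without invalidating earlier compositions, in contrast to the mix-down approach of conventional DAW sessions.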

From an economic perspective, Concordia appears to be the world’s first distributed-ledger-based musical instrument and, because of that, it will also be one of the first musical instruments with its own built-in economy. This makes Concordia of interest not only to musicians, musicologists and ethnomusicologists, but also to cultural economists and innovation economists. Concordia reconceptualises an instrument from a private good or unit of capital to a distributed infrastructure (a protocol), and therefore enables a different economic model, namely the instrument as a common-pool resource. This applies to the protocol and design of the instrument, to its settings and other metadata, and also to its data inputs and outputs. It then changes the economic incentives at work, the business models that are effective, and the policy settings available. This new economic model also implies new policy models, based on the institutional design of protocols and governance. Integrating all of this with VR will be a large focus of future work.

Concordia is being built with the intention of enabling universal accessibility and agency in experiencing the Music of the Spheres, with hearing being a central and dominant sense in the experience. It will require significant advances on the state of the art within the current suite of tools available for use with XR headsets and controllers, especially in the audio domain. The approach of Concordia, mirroring Kepler’s, is to minimise, as far as possible, the imposition of preformed ideas or musical constructs on the sonification and visualisation algorithms, with the goal of allowing people to find mappings that naturally expose the symmetry, beauty, harmony and dissonance – the ‘musicality’ – of the physics. Concordia offers an infinite variety of understandings of various realities on the virtual plane. As the instrument and its body of compositions grow, the experiences it affords will expand in artistic potential, scientific depth, and appreciation of nature.

Support Concordia

Concordia development is funded by collaborators and benefactors, and by your support.