    Connecting Processing and Max/MSP with Spout

    lexmoakler · Nov 05, 2018 · in Changelog

    One goal for this project is to develop an entire software ecosystem, meaning our run-time environment is spread across multiple applications built with more than one coding paradigm. The result may be more resource-intensive, but this strategy lets us prototype rapidly with access to a wide variety of pre-built development tools.


    The two applications we're connecting here are Processing and Max/MSP. Since we're planning on using a Windows setup, our open source tool of choice is Spout:


    http://spout.zeal.co/


    First we install the Spout framework on the Windows machine that will be used for the demonstration. We also select the optional Spout Controls content, since it ships with a Processing example. After installation the operating system is ready, but we still need the application-specific libraries.


    In Processing, we open up either our own sketch or the example code and go to Sketch > Import Library > Add Library. Search for "Spout" by Lynn Jarvis and Martin Froehlich.
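
    Once the library is installed, sending a sketch's frames takes only a few lines. Here is a minimal sender modeled on the example that ships with the library; the sender name is our own placeholder:

        import spout.*;

        Spout spout;

        void setup() {
          size(640, 360, P3D);
          spout = new Spout(this);                     // bind Spout to this sketch
          spout.createSender("Sharing Silhouettes");   // publish under a chosen name
        }

        void draw() {
          background(0);
          lights();
          // ... render the scene here: a placeholder rotating box ...
          translate(width/2, height/2);
          rotateY(frameCount * 0.01);
          box(120);
          spout.sendTexture();                         // share the frame with any receiver
        }

    Run the sketch and it shows up as a named sender in any Spout-aware application on the same machine.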


    The setup in Max is similar: go to File > Show Package Manager, and in that window search for "Spout" by Lynn Jarvis and Bob Jarvis. That's everything!
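
    On the Max side, the package installs Jitter externals; as we understand it, jit.gl.spoutsender and jit.gl.spoutreceiver are the two objects that matter here. A minimal receive patch, sketched as an object chain (the context name "silhouettes" and the sender name are our own placeholders, so double-check against the package's help patchers):

        [jit.world silhouettes]                                   render context + window;
            |                                                     bangs once per frame
        [jit.gl.spoutreceiver silhouettes @sendername "Sharing Silhouettes"]
            |                                                     outputs the shared frame
        [jit.gl.videoplane silhouettes @transform_reset 2]        draws it across the window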


    The screenshot below shows a working example of Spout being used for a system flow like ours. The Processing sketch renders a virtual scene, shown on the left. Simultaneously, a Max patch receives and re-renders that same scene on the right. The rotating cube is an example scene made by Dave Bollinger.



    The same multi-application setup can also be built in a Mac environment. Instead of Spout, we would use an open source tool called Syphon:


    http://syphon.v002.info/
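
    For completeness, the Processing side of a Syphon setup looks almost identical to the Spout sender above. The sketch below follows the sender example we remember from the Syphon library for Processing by Andres Colubri; treat the import path and class names as assumptions and check them against the library's bundled examples.

        import codeanywhere.*;   // assumption: the Syphon library's package name

        SyphonServer server;

        void setup() {
          size(640, 360, P3D);
          // Publish this sketch as a named Syphon server (name is our placeholder)
          server = new SyphonServer(this, "Sharing Silhouettes");
        }

        void draw() {
          background(0);
          // ... render the scene here ...
          server.sendScreen();   // share the frame with any Syphon client
        }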


    With this system workflow, we will be able to process visual elements twice: once in Processing and once in Max/MSP. The first step has direct access to the Kinect data and will focus on representing the participant. The second step will place the results of the first in an interactive virtual environment. The 3D-rendered composite will be combined with a thematically appropriate soundscape and output back to the participant through a projector and surround sound.
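
    To make the first step concrete: the post doesn't commit to a Kinect library, but assuming KinectPV2 (Thomas Lengeling's Processing library for the Kinect v2), a first-step sketch might pull the participant's silhouette from the body-track image and publish it over Spout. Everything outside the Spout calls here is our assumption.

        import spout.*;
        import KinectPV2.*;

        KinectPV2 kinect;
        Spout spout;

        void setup() {
          size(512, 424, P3D);
          kinect = new KinectPV2(this);
          kinect.enableBodyTrackImg(true);   // per-pixel body mask: a ready-made silhouette
          kinect.init();
          spout = new Spout(this);
          spout.createSender("Silhouette Stage");
        }

        void draw() {
          background(0);
          // Draw the participant's silhouette from the Kinect body-track image...
          image(kinect.getBodyTrackImage(), 0, 0);
          // ...then hand the frame to Max over Spout
          spout.sendTexture();
        }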


    © 2018 LE/EECS4700 Digital Media Project (Full Year 2018-2019), Sharing Silhouettes group.