<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://thirzadado.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://thirzadado.com/" rel="alternate" type="text/html" /><updated>2026-02-17T10:55:53+00:00</updated><id>https://thirzadado.com/feed.xml</id><title type="html">Thirza Dado</title><subtitle>An amazing website.</subtitle><author><name>Thirza Dado</name></author><entry><title type="html">PhD dissertation</title><link href="https://thirzadado.com/thesis/" rel="alternate" type="text/html" title="PhD dissertation" /><published>2025-10-26T00:00:00+00:00</published><updated>2025-10-26T00:00:00+00:00</updated><id>https://thirzadado.com/thesis</id><content type="html" xml:base="https://thirzadado.com/thesis/"><![CDATA[<p>2025, 26 October</p>

<div class="thesis-wrapper">
  <a href="https://drive.google.com/file/d/13YLfnJ3g3XK1VP7wFys7CoNMhjtQC4un/view?usp=drive_link" target="_blank" class="thesis-link">
    <div class="thesis-caption">Full thesis (PDF)</div>
    <img src="/assets/images/misc/cover_.jpg" alt="PhD Thesis Cover: Neural Coding with Synthesized Reality" class="thesis-cover" />
  </a>
</div>

<style>
  .thesis-wrapper {
    text-align: center;
    margin: 40px 0;
  }

  .thesis-link {
    display: inline-block;
    text-decoration: none;
    color: #DD4124;
    cursor: pointer;
  }

  .thesis-caption {
    font-size: 1em;
    font-weight: 500;
    margin-bottom: 10px;
  }

  .thesis-cover {
    display: block;
    margin: 0 auto;
    width: 340px;
    max-width: 85%;
  }


  @media (max-width: 600px) {
    .thesis-cover {
      width: 280px;
    }
  }
</style>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2025, 26 October]]></summary></entry><entry><title type="html">Procedural game demo</title><link href="https://thirzadado.com/procgame/" rel="alternate" type="text/html" title="Procedural game demo" /><published>2025-07-17T00:00:00+00:00</published><updated>2025-07-17T00:00:00+00:00</updated><id>https://thirzadado.com/procgame</id><content type="html" xml:base="https://thirzadado.com/procgame/"><![CDATA[<p>2025, 17 July</p>

<p>The world only exists where you go &lt;3</p>

<video autoplay="" loop="" muted="" playsinline="" width="600">
  <source src="/assets/procgame.mp4" type="video/mp4" />
  Your browser does not support the video tag.
</video>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2025, 17 July]]></summary></entry><entry><title type="html">Tooneye</title><link href="https://thirzadado.com/tooneye/" rel="alternate" type="text/html" title="Tooneye" /><published>2024-01-05T00:00:00+00:00</published><updated>2024-01-05T00:00:00+00:00</updated><id>https://thirzadado.com/tooneye</id><content type="html" xml:base="https://thirzadado.com/tooneye/"><![CDATA[<p>2024, 05 January</p>

<p>In the master’s course <a href="https://www.ru.nl/courseguides/socsci/courses-osiris/ai/sow-mki95-computer-graphics-computer-vision/">Computer Graphics &amp; Computer Vision</a>, students implement vertex and pixel shaders. Here, I used them to color Coraline’s eyes that are following the jumping mouse, which brings her through the tunnel to the Other World.</p>

<div id="unity-container" class="unity-desktop">
    <canvas id="unity-canvas" width="960" height="600"></canvas>
    <div id="unity-loading-bar">
        <div id="unity-logo"></div>
        <div id="unity-progress-bar-empty">
            <div id="unity-progress-bar-full"></div>
        </div>
    </div>
    <div id="unity-mobile-warning">
        WebGL builds are not supported on mobile devices.
    </div>
    <div id="unity-footer">
        <div id="unity-webgl-logo"></div>
        <div id="unity-fullscreen-button"></div>
    </div>
</div>

<script>
    var buildUrl = "../../assets/unity/tooneye/Build";
    var loaderUrl = buildUrl + "/tooneye.loader.js";
    var config = {
        dataUrl: buildUrl + "/tooneye.data",
        frameworkUrl: buildUrl + "/tooneye.framework.js",
        codeUrl: buildUrl + "/tooneye.wasm",
        streamingAssetsUrl: "StreamingAssets",
        companyName: "DefaultCompany",
        productName: "Tooneye",
        productVersion: "1",
    };

    var container = document.querySelector("#unity-container");
    var canvas = document.querySelector("#unity-canvas");
    var loadingBar = document.querySelector("#unity-loading-bar");
    var progressBarFull = document.querySelector("#unity-progress-bar-full");
    var fullscreenButton = document.querySelector("#unity-fullscreen-button");
    var mobileWarning = document.querySelector("#unity-mobile-warning");

    // By default Unity keeps WebGL canvas render target size matched with
    // the DOM size of the canvas element (scaled by window.devicePixelRatio)
    // Set this to false if you want to decouple this synchronization from
    // happening inside the engine, and you would instead like to size up
    // the canvas DOM size and WebGL render target sizes yourself.
    // config.matchWebGLToCanvasSize = false;

    if (/iPhone|iPad|iPod|Android/i.test(navigator.userAgent)) {
        container.className = "unity-mobile";
        // Avoid draining fillrate performance on mobile devices,
        // and default/override low DPI mode on mobile browsers.
        config.devicePixelRatio = 1;
        mobileWarning.style.display = "block";
        setTimeout(() => {
            mobileWarning.style.display = "none";
        }, 5000);
    } else {
        canvas.style.width = "960px";
        canvas.style.height = "600px";
    }
    loadingBar.style.display = "block";

    var script = document.createElement("script");
    script.src = loaderUrl;
    script.onload = () => {
        createUnityInstance(canvas, config, (progress) => {
            progressBarFull.style.width = 100 * progress + "%";
        }).then((unityInstance) => {
            loadingBar.style.display = "none";
            fullscreenButton.onclick = () => {
                unityInstance.SetFullscreen(1);
            };
        }).catch((message) => {
            alert(message);
        });
    };
    document.body.appendChild(script);
</script>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2024, 05 January]]></summary></entry><entry><title type="html">Inktober</title><link href="https://thirzadado.com/inktober/" rel="alternate" type="text/html" title="Inktober" /><published>2023-11-01T00:00:00+00:00</published><updated>2023-11-01T00:00:00+00:00</updated><id>https://thirzadado.com/inktober</id><content type="html" xml:base="https://thirzadado.com/inktober/"><![CDATA[<p>2023, 01 November</p>

<style>
  /* Carousel container styling */
  .carousel-container {
    position: relative;
    width: 100%;
    max-width: 800px;
    margin: 0 auto;
    overflow: hidden;
    padding-top: 20px;
  }

  /* Carousel styling */
  .carousel {
    display: flex;
    transition: transform 0.5s ease;
  }

  .carousel-item {
    min-width: 100%;
    box-sizing: border-box;
    text-align: center;
  }

  .carousel-item img {
    width: 250px;
    height: 250px;
    object-fit: contain;
    border: 2px solid #e0e0e0;
    image-rendering: pixelated; /* keeps pixel art sharp */
  }

  .carousel-title {
    font-size: 1.2em;
    margin-top: 10px;
    color: #333;
  }

  /* --- Pixel Art Style Navigation Buttons --- */
  .carousel-button {
    position: absolute;
    top: 50%;
    transform: translateY(-50%);
    width: 48px;
    height: 48px;
    background-color: #fff8f6;
    color: #DD4124;
    border: 3px solid #DD4124;
    font-size: 1.8em;
    line-height: 1;
    font-family: monospace;
    cursor: pointer;
    box-shadow: 3px 3px 0 #a53b24;
    image-rendering: pixelated;
    z-index: 5; /* ensures always on top */
    transition: all 0.15s ease;
  }

  .carousel-button:hover {
    background-color: #DD4124;
    color: #fff;
    box-shadow: 1px 1px 0 #742616;
    transform: translateY(-50%) scale(1.05);
  }

  .carousel-button.left {
    left: 15px;
    border-radius: 0 4px 4px 0;
  }

  .carousel-button.right {
    right: 15px;
    border-radius: 4px 0 0 4px;
  }

  /* Responsive: buttons slightly smaller on mobile */
  @media (max-width: 600px) {
    .carousel-button {
      width: 40px;
      height: 40px;
      font-size: 1.4em;
      box-shadow: 2px 2px 0 #a53b24;
    }
  }
</style>

<p><a href="https://inktober.com/">Inktober</a> is an annual art challenge where participants create and share one drawing every day. Use the arrow buttons to browse through the drawings.</p>

<p><br /><br /></p>

<!-- Carousel container -->
<div class="carousel-container">
    <button class="carousel-button left" onclick="moveSlide(-1)">&#10094;</button>
    <button class="carousel-button right" onclick="moveSlide(1)">&#10095;</button>

    <!-- Carousel items -->
    <div class="carousel" id="carousel">
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/31-fir.png" alt="31-fire" />
            <div class="carousel-title">31 - fire</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/30-rush.png" alt="30-rush" />
            <div class="carousel-title">30 - rush</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/29-massiv.png" alt="29-massive" />
            <div class="carousel-title">29 - massive</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/28-sparkl.png" alt="28-sparkle" />
            <div class="carousel-title">28 - sparkle</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/27-beast.png" alt="27-beast" />
            <div class="carousel-title">27 - beast</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/26-rm.png" alt="26-remove" />
            <div class="carousel-title">26 - remove</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/25-dangerous.png" alt="25-dangerous" />
            <div class="carousel-title">25 - dangerous</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/24-shallow.png" alt="24-shallow" />
            <div class="carousel-title">24 - shallow</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/23-celestial.png" alt="23-celestial" />
            <div class="carousel-title">23 - celestial</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/22-scratchy.png" alt="22-scratchy" />
            <div class="carousel-title">22 - scratchy</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/21-chains.png" alt="21-chains" />
            <div class="carousel-title">21 - chains</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/20-frost.png" alt="20-frost" />
            <div class="carousel-title">20 - frost</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/19-plump.png" alt="19-plump" />
            <div class="carousel-title">19 - plump</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/18-saddle.png" alt="18-saddle" />
            <div class="carousel-title">18 - saddle</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/17-devil.png" alt="17-devil" />
            <div class="carousel-title">17 - devil</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/16-angel.png" alt="16-angel" />
            <div class="carousel-title">16 - angel</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/15-dagger.png" alt="15-dagger" />
            <div class="carousel-title">15 - dagger</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/14-castle.png" alt="14-castle" />
            <div class="carousel-title">14 - castle</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/13-rise.png" alt="13-rise" />
            <div class="carousel-title">13 - rise</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/12-spicy.png" alt="12-spicy" />
            <div class="carousel-title">12 - spicy</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/11-wander.png" alt="11-wander" />
            <div class="carousel-title">11 - wander</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/10-fortune.png" alt="10-fortune" />
            <div class="carousel-title">10 - fortune</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/9-bounce.png" alt="09-bounce" />
            <div class="carousel-title">09 - bounce</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/8-toad.png" alt="08-toad" />
            <div class="carousel-title">08 - toad</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/7-drip.png" alt="07-drip" />
            <div class="carousel-title">07 - drip</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/6-golden.png" alt="06-golden" />
            <div class="carousel-title">06 - golden</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/5-map.png" alt="05-map" />
            <div class="carousel-title">05 - map</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/4-dodge.jpeg" alt="04-dodge" />
            <div class="carousel-title">04 - dodge</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/3-path.jpeg" alt="03-path" />
            <div class="carousel-title">03 - path</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/2-spiders.jpeg" alt="02-spiders" />
            <div class="carousel-title">02 - spiders</div>
        </div>
        <div class="carousel-item">
            <img src="/assets/images/misc/inktober/1-dream.jpeg" alt="01-dream" />
            <div class="carousel-title">01 - dream</div>
        </div>
    </div>
</div>

<script>
    let slideIndex = 0;
    const carousel = document.getElementById("carousel");
    const carouselItems = carousel.children;
    const totalSlides = carouselItems.length;

    // Clone the first and last slides
    const firstSlideClone = carouselItems[0].cloneNode(true);
    const lastSlideClone = carouselItems[totalSlides - 1].cloneNode(true);
    carousel.appendChild(firstSlideClone); // Add clone of the first slide to the end
    carousel.insertBefore(lastSlideClone, carouselItems[0]); // Add clone of the last slide to the beginning

    // Adjust transition and position to start at the first actual slide
    carousel.style.transform = `translateX(-100%)`;

    function moveSlide(direction) {
        slideIndex += direction;

        // Apply smooth transition for regular movement
        carousel.style.transition = 'transform 0.5s ease';
        carousel.style.transform = `translateX(-${(slideIndex + 1) * 100}%)`;

        // Handle wrapping after the transition ends
        carousel.addEventListener('transitionend', function handleTransitionEnd() {
            if (slideIndex < 0) {
                // If at the beginning (loop to last slide)
                slideIndex = totalSlides - 1;
                carousel.style.transition = 'none'; // Disable transition for the jump
                carousel.style.transform = `translateX(-${(slideIndex + 1) * 100}%)`;
            } else if (slideIndex >= totalSlides) {
                // If at the end (loop to first slide)
                slideIndex = 0;
                carousel.style.transition = 'none'; // Disable transition for the jump
                carousel.style.transform = `translateX(-${(slideIndex + 1) * 100}%)`;
            }
            carousel.removeEventListener('transitionend', handleTransitionEnd);
        });
    }
</script>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2023, 01 November]]></summary></entry><entry><title type="html">Limits of representation</title><link href="https://thirzadado.com/limitations/" rel="alternate" type="text/html" title="Limits of representation" /><published>2023-07-12T00:00:00+00:00</published><updated>2023-07-12T00:00:00+00:00</updated><id>https://thirzadado.com/limitations</id><content type="html" xml:base="https://thirzadado.com/limitations/"><![CDATA[<p>2023, 12 July</p>

<p><img src="/assets/images/blog/the-treachery-of-images.jpeg" alt="The treachery of images" />
<em><a href="https://www.renemagritte.org/the-treachery-of-images.jsp#">The Treachery of Images, 1929, by René Magritte</a> challenges us to recognize the limitations of representations. The painting urges us to understand the distinction between representation and reality, a key theme in the field of neural decoding.</em></p>

<p>Neural decoding involves the translation of neuronal activity into meaningful representations, such as images, language or sounds, so that we can interpret the information encoded in the brain. While we may be excited by the potential of this pioneering field, it’s also important to remain grounded: we must consider the inherent limitations and potential biases that can arise within the decoding process. As Alfred Korzybski famously said, “<em>the map is not the territory</em>”; our models are only maps of the territory we seek to explore.</p>

<h5 id="models-are-abstractions-of-reality">Models are abstractions of reality</h5>

<p>Light, upon entering the eyes, triggers a domino effect of electrical impulses in the visual stream of the brain.  This process effectively encodes the visual stimuli from our environment into neural signals. The objective of <em>decoding</em> is then to model the reverse transformation from the neural responses back to the real-world concepts they represent. <mark style="background-color: lightblue">Let's acknowledge a fundamental truth: no matter how sophisticated or refined, models are simplifications of reality.</mark> These models, while offering significant insights, are essentially approximations that attempt to capture the complexities of a vast system using a set of assumptions and measured parameters.</p>

<blockquote>
  <p>Forgetting that the map is not the territory can lead us into the traps of reductionism and oversimplification.</p>
</blockquote>

<p>Let’s consider an illustrative example:</p>

<p>Imagine researchers have developed a neural decoding model trained to reconstruct visual images from brain activity. This model is designed to “see” what a person sees by interpreting their neural signals. To ensure it works well across a variety of settings, it has been trained with a diverse array of images.</p>

<p>However, visual perception is not just about seeing — it’s deeply personal and influenced by emotions and memories. For instance, our emotional state can affect our attention, causing us to focus more on certain aspects of a visual scene or interpret colors and shapes in a way that aligns with our feelings. As a result, the model would likely reconstruct the general outlines of the scene as registered by the eyes, but miss the subjective depth of the actual experience that colors human perception. Thus, it does not fully represent the territory of the individual’s perceptual landscape.</p>

<h5 id="abstraction-can-be-powerful">Abstraction can be powerful</h5>

<p>But there is another side to the coin. As Joan Robinson noted, “<em>a model which took account of all the variegation of reality would be of no more use than a map at the scale of one to one</em>”. Recognizing the limitations of models doesn’t imply we should aim for a perfect depiction of reality. Capturing every detail and variation within a model is not only impractical but would result in a tool too complex and cumbersome. It would lead to confusion rather than clarity. As such, generalization remains a powerful tool.</p>

<blockquote>
  <p>Models serve as a foundation for further exploration and discovery.</p>
</blockquote>

<p>By simplifying the complexity of reality and identifying overarching principles, models provide a structured approach to grasp the world around us. They enable us to generate insights and make predictions about broader situations. Much like a map helps us navigate unfamiliar terrain, neural decoding models help us investigate the neural landscape, capturing its essential elements and illuminating critical relationships. While these models don’t encompass the full complexity of reality, they offer invaluable guidance.</p>

<p>It’s crucial to strike a balance between the representation provided by models and the actual reality they aim to simulate. Acknowledging the limitations and generalizations inherent in models is important, but we should also appreciate how they enhance our understanding and drive scientific inquiry forward. In essence, models are not just tools for representation; they serve as a foundation for further exploration and discovery. Through a continual cycle of hypothesis testing, experimentation, and revision, they enable us to ask precise questions and expand our knowledge.</p>

<h5 id="our-mental-model-is-also-an-abstraction-of-reality">Our mental model is also an abstraction of reality</h5>

<p>The notion that representations are simplifications of reality goes even further. It follows the core of our own mental model and the way we perceive the world. <mark style="background-color: lightblue">Our neural representation of the world is, in fact, an abstraction of the real world. We can never have direct access to the full richness of reality but can only catch bits and pieces through our senses and cognitive processes.</mark> Our brains constantly construct and interpret an internal representation of the world based on this limited information colored by our subjective beliefs, expectations and biases. That said, it is not only important to be critical of the limitations of decoding models themselves but also of our own mental model that influences the interpretation of the results. This recognition invites us to humility and continuous reflection on the complex nature of our own thinking.</p>

<h5 id="conclusion">Conclusion</h5>

<p>The promise of neural decoding is enticing. The prospect of unlocking thoughts and intentions purely by analyzing neural activity raises hope for the treatment of neurological disorders, the restoration of sensory and motor skills, and even the possibility of mind-reading. Yet, it’s essential to remember that this field is still in its infancy. Premature conclusions could lead to distorted results and misinterpretations. While we have made progress in decoding sensory inputs, fully understanding abstract thoughts, subjective experiences, and higher cognitive functions remains largely beyond our reach. The human mind is an elusive entity, and our current technologies still have a long way to go before they can truly grasp the full spectrum of our inner lives.</p>

<p>So let us, amidst the excitement about the possibilities of neural decoding, maintain humility. We must be mindful of the limitations and biases inherent in this field. In a world craving quick answers and immediate gratification, it is more important than ever to approach our research with patience and methodical diligence.</p>

<p>That’s all!</p>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2023, 12 July]]></summary></entry><entry><title type="html">Upfall</title><link href="https://thirzadado.com/upfall/" rel="alternate" type="text/html" title="Upfall" /><published>2022-10-02T00:00:00+00:00</published><updated>2022-10-02T00:00:00+00:00</updated><id>https://thirzadado.com/upfall</id><content type="html" xml:base="https://thirzadado.com/upfall/"><![CDATA[<p>2022, 02 October</p>

<p><em>Upfall</em> symbolizes the paradoxical essence of progress, capturing the simultaneous presence of advancement and challenges, successes and setbacks. Acclaimed singer-songwriter <a href="https://www.instagram.com/mayashanti_/?hl=en">Maya Shanti</a> teamed up with generative AI: she wrote the lyrics of the first verse and let GPT-3 complete the second. The vocals were incorporated into <a href="https://www.aisongcontest.com/participants-2022/wavy-weights-and-bassy-biases">the final song that made it to the finals of the AI Song Contest 2022</a>.</p>

<audio src="/assets/upfall.mp3" controls="" preload=""></audio>

<p><strong>Maya Shanti</strong>:</p>

<p><em>I’m hanging upside down</em><br />
<em>And see the world is changing</em><br />
<em>I don’t know what to think of now</em><br />
<em>But it feels like I’m fading</em><br />
<em>So many questions but I know it’s always been there</em><br />
<em>I know, does she know, does he know</em><br />
<em>I know that I’m upfalling</em><br />
<em>I know that I’m upfalling</em><br />
<em>I know that I’m upfalling into this new world</em></p>

<p><strong>GPT-3</strong>:</p>

<p><em>I close my eyes and dream</em> <br />
<em>Of a world that’s brand new</em> <br />
<em>I hope that one day you’ll see</em> <br />
<em>That changes can be good</em> <br />
<em>I can be good</em> <br />
<em>I can be good</em></p>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2022, 02 October]]></summary></entry><entry><title type="html">Uncanny valley</title><link href="https://thirzadado.com/uncanny/" rel="alternate" type="text/html" title="Uncanny valley" /><published>2022-09-29T00:00:00+00:00</published><updated>2022-09-29T00:00:00+00:00</updated><id>https://thirzadado.com/uncanny</id><content type="html" xml:base="https://thirzadado.com/uncanny/"><![CDATA[<p>2022, 29 September</p>

<h3 id="decoding-syntheticity-from-the-brain">Decoding syntheticity from the brain</h3>

<p><em>Written by Thirza Dado &amp; Umut Güçlü.</em></p>

<p><img src="/assets/images/blog/synth3.png" alt="Top" />
<em>The uncanny cliff hypothesizes that artificial figures remain either on the cliff (i.e., perceived as human-like) or they fall from the cliff (i.e., perceived as fake with disturbing feelings of eeriness and unease).</em></p>

<p>Generative Adversarial Networks (GANs) are powerful generative models trained to synthesize (“<em>fake</em>”) data that seem indistinguishable from “<em>real</em>” data. <a href="https://www.nature.com/articles/s41598-021-03938-w">A recent study</a> demonstrated experimentally how GANs can be used to reconstruct pictures of what volunteers in a brain scanner were seeing, by neurally decoding their brain recordings. Concretely, these volunteers were looking at <strong>pictures of faces of people</strong>; faces that <a href="https://thispersondoesnotexist.com/">do not really exist</a> but are instead synthesized by a 🤖<a href="https://github.com/tkarras/progressive_growing_of_gans">Progressively Grown GAN (PGGAN)</a> for faces.</p>

<p>The goal of neural decoding is to discover what information (about a stimulus) is present in the brain. In principle, any property of a stimulus could be decoded from the brain.</p>

<p><img src="/assets/images/blog/grad2.png" alt="Gradient" />
<em>A selection of faces from the used dataset illustrates that some faces look more natural or real and some faces look more fake. We call this property “syntheticity”.</em></p>

<p>Although the faces presented in the experiment generally look hyperreal to human observers (even though they are all fake and do not exist), close inspection of the face stimuli revealed that some faces look less real than others, eliciting disturbing feelings of eeriness and unease. That is, these data have a property of “syntheticity” that denotes how real or fake a face looks.</p>

<p>The <a href="https://web.ics.purdue.edu/~drkelly/MoriTheUncannyValley1970.pdf">uncanny valley theory</a> hypothesizes that the affinity response of a human observer towards an artificial figure becomes more and more positive when it looks more and more human-like but up to a certain point where it abruptly switches from empathy to revulsion. Alternatively, it has been proposed that the valley is more like an <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4415111&amp;casa_token=VDO-WB9Ov5EAAAAA:JQY0Zg4MJUUOpjTcuTpoXxaGc51VdSXqjboQhzgD-3yijNYFhg2F69jr198-GiK_r8XRZFKKknSo&amp;tag=1">uncanny cliff</a>:</p>

<blockquote>
  <p><em>“We are not certain if this section [of the uncanny valley between the deepest dip and the human level] actually exists thereby prompting us to suggest that the uncanny valley should be considered more of a cliff than a valley, where robots strongly resembling humans could either fall from the cliff or they could be perceived as being human.”</em></p>

  <p><cite>Bartneck, C., Kanda, T., Ishiguro, H., &amp; Hagita, N. (2007)</cite></p>
</blockquote>

<p>Here, we will decode syntheticity from neural data and see whether the results indicate a gradient or a cliff!</p>

<p><img src="/assets/images/blog/exp.png" alt="Experiment" />
<em>Behavioral experiment to sort faces on syntheticity.</em></p>

<h5 id="behavioral-data">Behavioral data</h5>

<p>To quantify the behavioral phenomenon of perceived syntheticity, we can do a behavioral experiment where a volunteer attributes <em>syntheticity scores</em> to all the faces. Concretely, the volunteer (i.e., <em>me</em>, I volunteered) sees two face images at a time and is asked “which face looks more real” and, if this question was too difficult to answer, “which face do you like better”. As such, the faces get scored on syntheticity and we can sort them from real- to fake-looking.</p>

<p>Rather than presenting all combinations of face pairs, which would require O(n²) comparisons with n=1086, we can implement the divide-and-conquer <a href="https://www.geeksforgeeks.org/merge-sort/">merge sort</a> algorithm, which iteratively merges smaller sorted lists of images until one big sorted list of fake-to-real-looking faces remains. This algorithm compares each face against some but <em>not all</em> of the other faces, because knowledge can be inferred from earlier decisions, resulting in a worst-case bound of O(n log n) comparisons (it still took me three days to make 10136 comparisons😬). The experiment was implemented in <a href="https://unity.com/">Unity</a> and C#.</p>
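<p>The sorting procedure can be sketched in a few lines of Python (the actual experiment was implemented in Unity and C#); here <em>prefer</em> is a hypothetical stand-in for the volunteer’s judgment of which of two faces looks more fake:</p>

```python
def merge_sort(items, prefer):
    """Sort items from fake- to real-looking, where prefer(a, b)
    returns True when face a should come before face b."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid], prefer)
    right = merge_sort(items[mid:], prefer)
    merged, i, j = [], 0, 0
    # merge the two already-sorted halves, one judgment per step
    while i < len(left) and j < len(right):
        if prefer(left[i], right[j]):
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

<p>With n=1086, merge sort needs on the order of n·log₂(n) ≈ 11,000 pairwise judgments in the worst case, in line with the number of comparisons mentioned above.</p>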

<p>You can find the result files here: <em>ridx</em> contains the initial (unsorted) index order that determined which pair of faces would be presented on screen, and <em>sidx</em> is the index list sorted on syntheticity by the volunteer. That is, we sort the original <em>ridx</em> by the "sorted" <em>sidx</em>. In the last for-loop in the code snippet below, we then recover each face’s rank from 0 to 1085 (fake- to real-looking).</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

with open(path + "sidx_T_7_10136.txt") as f:
    sidx = np.array([int(i) for i in f])

with open(path + "ridx_T_7_10136.txt") as f:
    ridx = np.array([int(i) for i in f])[:len(sidx)]

# face indices ordered from fake- to real-looking
sorted_faces = ridx[sidx]

# syntheticity score of face i = its rank in the sorted order
scores = np.zeros(1086)
for i in range(1086):
    scores[i] = np.where(sorted_faces == i)[0][0]
</code></pre></div></div>

<h5 id="neural-data">Neural data</h5>
<p>We use the hyper dataset of fMRI responses to PGGAN-generated face stimuli (<a href="https://openneuro.org/datasets/ds004280/versions/1.0.0">whole dataset</a>). In this blog post, we use the same 4096-voxel selection as the original hyper study, which can be found <a href="https://drive.google.com/drive/u/1/folders/1OW0cfnoP8_tZBGWLbpiPPX81QH9pusjv">here</a>, but it would also be interesting to look at the whole brain or at different brain areas. Let’s hyperalign and average the brain data of the two participants (just like we did in a <a href="https://medium.com/neural-coding-lab/neural-decoding-w-synthesized-reality-5eeb476f399">previous blog post</a>). Alternatively, you can also just pick the brain responses of either subject 1 or subject 2.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">!</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">install</span> <span class="n">swig</span>
<span class="err">!</span> <span class="n">pip</span> <span class="n">install</span> <span class="o">-</span><span class="n">U</span> <span class="n">pymvpa2</span>

<span class="kn">import</span> <span class="nn">mvpa2.datasets</span>
<span class="kn">import</span> <span class="nn">mvpa2.algorithms.hyperalignment</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="kn">import</span> <span class="nn">pickle</span>


<span class="k">def</span> <span class="nf">hyperalignment</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">):</span>
    <span class="n">x</span> <span class="o">=</span> <span class="p">[</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">]</span>
    <span class="n">dataset</span> <span class="o">=</span> <span class="p">[</span><span class="n">mvpa2</span><span class="p">.</span><span class="n">datasets</span><span class="p">.</span><span class="n">Dataset</span><span class="p">(</span><span class="n">x_</span><span class="p">)</span> <span class="k">for</span> <span class="n">x_</span> <span class="ow">in</span> <span class="n">x</span><span class="p">]</span>
    <span class="n">hyperalignment</span> <span class="o">=</span> <span class="n">mvpa2</span><span class="p">.</span><span class="n">algorithms</span><span class="p">.</span><span class="n">hyperalignment</span><span class="p">.</span><span class="n">Hyperalignment</span><span class="p">()(</span><span class="n">dataset</span><span class="p">)</span>
    <span class="n">y</span> <span class="o">=</span> <span class="p">[</span><span class="n">hyperalignment</span><span class="p">[</span><span class="n">j</span><span class="p">].</span><span class="n">forward</span><span class="p">(</span><span class="n">dataset</span><span class="p">[</span><span class="n">j</span><span class="p">]).</span><span class="n">samples</span> <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">dataset</span><span class="p">))]</span>
    <span class="k">return</span> <span class="p">(</span><span class="n">y</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="n">y</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span> <span class="o">/</span> <span class="mi">2</span>


<span class="n">path</span> <span class="o">=</span> <span class="s">"/yourpath/"</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">path</span> <span class="o">+</span> <span class="s">"data_1.dat"</span><span class="p">,</span> <span class="s">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
    <span class="n">X_tr1</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">X_te1</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">pickle</span><span class="p">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>

<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">path</span> <span class="o">+</span> <span class="s">"data_2.dat"</span><span class="p">,</span> <span class="s">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
    <span class="n">X_tr2</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">X_te2</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">pickle</span><span class="p">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>

<span class="n">X_1</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_te1</span> <span class="o">+</span> <span class="n">X_tr1</span><span class="p">)</span>
<span class="n">X_2</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_te2</span> <span class="o">+</span> <span class="n">X_tr2</span><span class="p">)</span>
<span class="n">X</span> <span class="o">=</span> <span class="n">hyperalignment</span><span class="p">(</span><span class="n">X_1</span><span class="p">,</span> <span class="n">X_2</span><span class="p">)</span>
</code></pre></div></div>

<h5 id="neural-decoding">Neural decoding</h5>
<p>We fit a linear mapping from brain responses to syntheticity scores. The original dataset order (test + train) is permuted so that the held-out set spans various degrees of syntheticity; the quality of the original test set (36 faces) was quite good, so its syntheticity scores would otherwise all be more or less similar (i.e., very real-looking). We use a 90:10 split where 90% of the data is used as the training set and the remaining 10% is used as the held-out test set to evaluate model performance.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">scipy.stats</span> <span class="kn">import</span> <span class="n">zscore</span>
<span class="kn">from</span> <span class="nn">sklearn.linear_model</span> <span class="kn">import</span> <span class="n">LinearRegression</span>

<span class="n">np</span><span class="p">.</span><span class="n">random</span><span class="p">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span>
<span class="n">permutation</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">random</span><span class="p">.</span><span class="n">permutation</span><span class="p">(</span><span class="mi">1086</span><span class="p">)</span>
<span class="n">_X</span> <span class="o">=</span> <span class="n">X</span><span class="p">[</span><span class="n">permutation</span><span class="p">]</span>
<span class="n">_T</span> <span class="o">=</span> <span class="n">scores</span><span class="p">[</span><span class="n">permutation</span><span class="p">]</span>

<span class="n">n</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="mi">1086</span> <span class="o">/</span> <span class="mi">100</span> <span class="o">*</span> <span class="mi">10</span><span class="p">)</span>
<span class="n">x_te</span> <span class="o">=</span> <span class="n">zscore</span><span class="p">(</span><span class="n">_X</span><span class="p">[:</span><span class="n">n</span><span class="p">])</span>
<span class="n">x_tr</span> <span class="o">=</span> <span class="n">zscore</span><span class="p">(</span><span class="n">_X</span><span class="p">[</span><span class="n">n</span><span class="p">:])</span>
<span class="n">t_te</span> <span class="o">=</span> <span class="n">zscore</span><span class="p">(</span><span class="n">_T</span><span class="p">[:</span><span class="n">n</span><span class="p">])</span>
<span class="n">t_tr</span> <span class="o">=</span> <span class="n">zscore</span><span class="p">(</span><span class="n">_T</span><span class="p">[</span><span class="n">n</span><span class="p">:])</span>

<span class="n">reg</span> <span class="o">=</span> <span class="n">LinearRegression</span><span class="p">().</span><span class="n">fit</span><span class="p">(</span><span class="n">x_tr</span><span class="p">,</span> <span class="n">t_tr</span><span class="p">)</span>
<span class="n">y_te</span> <span class="o">=</span> <span class="n">reg</span><span class="p">.</span><span class="n">predict</span><span class="p">(</span><span class="n">x_te</span><span class="p">)</span>
</code></pre></div></div>

<h5 id="evaluation">Evaluation</h5>
<p>To evaluate the performance of our linear decoder, we can look at the correlation between the scores predicted from brain data and the syntheticity scores from the behavioral experiment.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">scipy</span> <span class="kn">import</span> <span class="n">stats</span>
<span class="kn">from</span> <span class="nn">scipy.stats</span> <span class="kn">import</span> <span class="n">t</span>

<span class="k">def</span> <span class="nf">pearson_correlation_coefficient</span><span class="p">(</span><span class="n">x</span><span class="p">:</span> <span class="n">np</span><span class="p">.</span><span class="n">ndarray</span><span class="p">,</span> <span class="n">y</span><span class="p">:</span> <span class="n">np</span><span class="p">.</span><span class="n">ndarray</span><span class="p">,</span> <span class="n">axis</span><span class="p">:</span> <span class="nb">int</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">np</span><span class="p">.</span><span class="n">ndarray</span><span class="p">:</span>
    <span class="n">r</span> <span class="o">=</span> <span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="n">nan_to_num</span><span class="p">(</span><span class="n">stats</span><span class="p">.</span><span class="n">zscore</span><span class="p">(</span><span class="n">x</span><span class="p">))</span> <span class="o">*</span> <span class="n">np</span><span class="p">.</span><span class="n">nan_to_num</span><span class="p">(</span><span class="n">stats</span><span class="p">.</span><span class="n">zscore</span><span class="p">(</span><span class="n">y</span><span class="p">))).</span><span class="n">mean</span><span class="p">(</span><span class="n">axis</span><span class="p">)</span>
    <span class="n">p</span> <span class="o">=</span> <span class="mi">2</span> <span class="o">*</span> <span class="n">t</span><span class="p">.</span><span class="n">sf</span><span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="nb">abs</span><span class="p">(</span><span class="n">r</span> <span class="o">/</span> <span class="n">np</span><span class="p">.</span><span class="n">sqrt</span><span class="p">((</span><span class="mi">1</span> <span class="o">-</span> <span class="n">r</span> <span class="o">**</span> <span class="mi">2</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="n">x</span><span class="p">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">-</span> <span class="mi">2</span><span class="p">))),</span> <span class="n">x</span><span class="p">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">-</span> <span class="mi">2</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">r</span><span class="p">,</span> <span class="n">p</span>


<span class="n">r</span><span class="p">,</span> <span class="n">p</span> <span class="o">=</span> <span class="n">pearson_correlation_coefficient</span><span class="p">(</span><span class="n">y_te</span><span class="p">,</span> <span class="n">t_te</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="k">print</span><span class="p">(</span><span class="n">r</span><span class="p">.</span><span class="n">mean</span><span class="p">(),</span> <span class="n">p</span><span class="p">.</span><span class="n">mean</span><span class="p">())</span>
</code></pre></div></div>
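<p>As a quick sanity check (not part of the original analysis), the hand-rolled function above can be compared against <code>scipy.stats.pearsonr</code> on synthetic one-dimensional data; the two should agree, because the mean product of z-scores is exactly the Pearson correlation, and the two-sided t-test on n−2 degrees of freedom is its standard significance test.</p>

```python
import numpy as np
from scipy import stats

# Same formula as in the snippet above: mean of the product of z-scores,
# with a two-sided t-test on n - 2 degrees of freedom.
def pearson_correlation_coefficient(x, y, axis):
    r = (np.nan_to_num(stats.zscore(x)) * np.nan_to_num(stats.zscore(y))).mean(axis)
    p = 2 * stats.t.sf(np.abs(r / np.sqrt((1 - r ** 2) / (x.shape[0] - 2))), x.shape[0] - 2)
    return r, p

rng = np.random.default_rng(0)
x = rng.standard_normal(108)            # same size as the held-out test set
y = 0.4 * x + rng.standard_normal(108)  # correlated by construction
r, p = pearson_correlation_coefficient(x, y, 0)
r_ref, p_ref = stats.pearsonr(x, y)
print(np.isclose(r, r_ref), np.isclose(p, p_ref))
```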

<p>This results in r=0.4298, p=3.46e-06, meaning that we can indeed predict continuous syntheticity scores from the fMRI recordings of the hyper experiment. The neural representations of the perceived faces therefore carry continuous rather than binary information on syntheticity: had the encoded information been binary (i.e., either real- or fake-looking), it would not have been possible to decode these continuous values from the brain. <strong>In conclusion, our result supports the uncanny valley theory rather than the uncanny cliff.</strong></p>

<p>It would be cool to compare with other metrics such as feature maps and/or (classification) scores of discriminator networks. Further, we could use a searchlight analysis to identify where, and with what magnitude, syntheticity is encoded in the brain.</p>

<p>That’s all.</p>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2022, 29 September]]></summary></entry><entry><title type="html">Hyper</title><link href="https://thirzadado.com/hyper/" rel="alternate" type="text/html" title="Hyper" /><published>2022-01-22T00:00:00+00:00</published><updated>2022-01-22T00:00:00+00:00</updated><id>https://thirzadado.com/hyper</id><content type="html" xml:base="https://thirzadado.com/hyper/"><![CDATA[<p>2022, 22 January</p>

<h3 id="hyperrealistic-reconstruction-of-perceived-faces-from-fmri-data">HYperrealistic reconstruction of PERceived faces from fMRI data</h3>

<p><em>Written by Thirza Dado &amp; Umut Güçlü.</em></p>

<p><img src="/assets/images/blog/top.png" alt="Top" />
<em>Stimuli (top row) and their reconstructions from brain data (bottom row).</em></p>

<p>Neural decoding seeks to find what information about a perceived external stimulus is present in the corresponding brain response. In particular, the original stimulus can be reconstructed based on brain data alone. This study resulted in the most accurate reconstructions of face perception to date by decoding the brain recordings of two individual participants separately. To get even closer, we repeated this approach with the averaged brain responses.</p>

<p>Here, we show you how we did it.</p>

<h5 id="hyper">HYPER</h5>
<p>In the original paper, two participants in the brain scanner were presented with face stimuli that elicited specific functional responses in their brains. This experiment resulted in a (faces, responses) dataset that taught a decoder to map brain responses to the corresponding faces. This trained decoder could now transform unseen (held-out) brain data back into the perceived stimuli. The model was called HYPER (HYperrealistic reconstruction of PERception).</p>

<p><img src="/assets/images/blog//m1.png" alt="Experiment" />
<em>The face stimulus was presented to the participant in the MRI scanner that recorded the corresponding neural responses. Neural decoding of these responses then reconstructed what the participant was originally seeing.</em></p>

<p>The secret ingredient was the following: the face stimuli were artificially synthesized from randomly sampled latent vectors by the generator network of a progressively grown GAN for faces; the people in the presented images did not really exist. As such, the latents underlying these faces were known (they were used for generation in the first place), whereas those of real face images can never be accessed directly, only approximated, which entails information loss. Note that these results are legitimate reconstructions of visual perception regardless of the nature of the stimuli themselves.</p>

<p><img src="/assets/images/blog/m2.png" alt="Pipeline" />
<em>Schematic workflow of HYPER. A latent is fed to the GAN to generate a face image that is presented to a participant in the MRI scanner. From the recorded brain response to this stimulus, we predict a latent that is also fed to the GAN for (re-)generation.</em></p>

<p>The high resemblance indicates a linear relationship between latents and brain recordings. Simply put, latents and brains effectively captured the same defining stimulus features (e.g., age, gender, hair color, pose) so that latents could be predicted as a linear combination of the brain data and fed to the generator for (re-)generation of what was perceived.</p>
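<p>In other words, decoding boils down to ordinary least squares from voxels to latent dimensions. A minimal sketch on purely synthetic data (dimensions shrunk relative to the study's 4096 voxels and 512 latents just to keep it fast; this is illustrative, not the study's actual pipeline):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_voxels, n_latent = 400, 1024, 128

# Simulate the generative direction: known latents produce "brain responses"
# through some unknown linear mapping W, plus measurement noise.
Z = rng.standard_normal((n_train, n_latent))
W = rng.standard_normal((n_latent, n_voxels)) / np.sqrt(n_latent)
X = Z @ W + 0.1 * rng.standard_normal((n_train, n_voxels))

# Decoder: least-squares fit of latents as a linear combination of voxels.
B, *_ = np.linalg.lstsq(X, Z, rcond=None)

# On a held-out trial, the predicted latent closely matches the true one.
z_new = rng.standard_normal(n_latent)
x_new = z_new @ W
z_hat = x_new @ B
r = np.corrcoef(z_new, z_hat)[0, 1]
print(r > 0.9)
```

The recovered latent could then be fed back to the generator to re-synthesize the perceived face.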

<h5 id="hyperalignment-brainremix">Hyperalignment (brain remix)</h5>

<p>The original study trained a separate decoder for each individual participant. To get even closer to the external stimulus, we can capture the shared neural information across participants by applying an additional preprocessing step to the brain data. This step aligns and reslices the functional brain responses with hyperalignment: a remixing process that iteratively maps the brain data of multiple participants to a common functional space. Note that we are working in the functional domain, which concerns brain function rather than anatomical topography. The responses of different brains then become comparable in function, and the average brain response per stimulus can be taken to train one general decoder.</p>
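<p>Conceptually, the core alignment step is an orthogonal Procrustes problem: find the rotation that best maps one participant's trial-by-voxel response matrix onto another's. Below is a from-scratch sketch of that single step on toy data (PyMVPA's actual implementation iterates this over participants and a running reference space):</p>

```python
import numpy as np

def procrustes_rotation(A, B):
    # Orthogonal matrix R minimizing ||A @ R - B||_F,
    # via the SVD of the cross-covariance (Schönemann's solution).
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 50))                    # "participant 2": trials x voxels
R_true, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = B @ R_true.T                                      # "participant 1": rotated version of B

R = procrustes_rotation(A, B)
aligned = A @ R                                       # participant 1 in participant 2's space
print(np.allclose(aligned, B))
```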

<p>All the stimulus-reconstruction pairs in this post result from HYPER with hyperaligned and averaged data.</p>

<p><img src="/assets/images/blog/1.png" alt="Recon1" />
<em>Stimuli (top row) and their reconstructions from brain data (bottom row).</em></p>

<p><img src="/assets/images/blog/2.png" alt="Recon2" />
<em>Stimuli (top row) and their reconstructions from brain data (bottom row).</em></p>

<h3 id="tutorial">Tutorial</h3>
<p>Hyperalignment can be implemented using <a href="http://www.pymvpa.org/">PyMVPA</a>:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">!</span><span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">install</span> <span class="n">swig</span>
<span class="err">!</span><span class="n">pip</span> <span class="n">install</span> <span class="o">-</span><span class="n">U</span> <span class="n">pymvpa2</span>
<span class="kn">import</span> <span class="nn">mvpa2.datasets</span>
<span class="kn">import</span> <span class="nn">mvpa2.algorithms.hyperalignment</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="kn">import</span> <span class="nn">pickle</span>

<span class="k">def</span> <span class="nf">hyperalignment</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">):</span>    
    <span class="n">x</span> <span class="o">=</span> <span class="p">[</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">]</span>    
    <span class="n">dataset</span> <span class="o">=</span> <span class="p">[</span><span class="n">mvpa2</span><span class="p">.</span><span class="n">datasets</span><span class="p">.</span><span class="n">Dataset</span><span class="p">(</span><span class="n">x_</span><span class="p">)</span> <span class="k">for</span> <span class="n">x_</span> <span class="ow">in</span> <span class="n">x</span><span class="p">]</span>    
    <span class="n">hyperalignment</span> <span class="o">=</span> <span class="n">mvpa2</span><span class="p">.</span><span class="n">algorithms</span><span class="p">.</span><span class="n">hyperalignment</span><span class="p">.</span><span class="n">Hyperalignment</span><span class="p">()(</span><span class="n">dataset</span><span class="p">)</span>    
    <span class="n">y</span> <span class="o">=</span> <span class="p">[</span><span class="n">hyperalignment</span><span class="p">[</span><span class="n">j</span><span class="p">].</span><span class="n">forward</span><span class="p">(</span><span class="n">dataset</span><span class="p">[</span><span class="n">j</span><span class="p">]).</span><span class="n">samples</span> <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">dataset</span><span class="p">))]</span>    
    <span class="k">return</span> <span class="p">(</span><span class="n">y</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="n">y</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span> <span class="o">/</span> <span class="mi">2</span>
</code></pre></div></div>

<p>Load in the data (publicly accessible in <a href="https://drive.google.com/drive/u/1/folders/1NEblHtlRFvUyD5CA2sqSVfcGlfJBqw_T">Google Drive</a>). The test and training set consist of 36 and 1050 trials of 4096 (flattened) voxel responses, respectively. Concatenate the test and training data before hyperalignment.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s">"yourpath/data_1.dat"</span><span class="p">,</span> <span class="s">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
    <span class="n">X_tr1</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">X_te1</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">pickle</span><span class="p">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s">"yourpath/data_2.dat"</span><span class="p">,</span> <span class="s">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
    <span class="n">X_tr2</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">X_te2</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">pickle</span><span class="p">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
<span class="n">X_1</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_te1</span> <span class="o">+</span> <span class="n">X_tr1</span><span class="p">)</span>
<span class="n">X_2</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_te2</span> <span class="o">+</span> <span class="n">X_tr2</span><span class="p">)</span>
<span class="n">X_hyperaligned</span> <span class="o">=</span> <span class="n">hyperalignment</span><span class="p">(</span><span class="n">X_1</span><span class="p">,</span> <span class="n">X_2</span><span class="p">)</span>
</code></pre></div></div>

<p>Train a neural decoder to predict latents from brain data. This decoder is implemented in MXNet. Let’s import the required libraries.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">!</span><span class="n">pip</span> <span class="n">install</span> <span class="n">mxnet</span><span class="o">-</span><span class="n">cu101</span>
<span class="kn">from</span> <span class="nn">__future__</span> <span class="kn">import</span> <span class="n">annotations</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="kn">from</span> <span class="nn">typing</span> <span class="kn">import</span> <span class="n">Tuple</span><span class="p">,</span> <span class="n">Union</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="n">mx</span>
<span class="kn">from</span> <span class="nn">mxnet</span> <span class="kn">import</span> <span class="n">autograd</span><span class="p">,</span> <span class="n">gluon</span><span class="p">,</span> <span class="n">nd</span><span class="p">,</span> <span class="n">symbol</span>
<span class="kn">from</span> <span class="nn">mxnet.gluon.nn</span> <span class="kn">import</span> <span class="n">Conv2D</span><span class="p">,</span> <span class="n">Dense</span><span class="p">,</span> <span class="n">HybridBlock</span><span class="p">,</span> <span class="n">HybridSequential</span><span class="p">,</span> <span class="n">LeakyReLU</span>
<span class="kn">from</span> <span class="nn">mxnet.gluon.parameter</span> <span class="kn">import</span> <span class="n">Parameter</span>
<span class="kn">from</span> <span class="nn">mxnet.initializer</span> <span class="kn">import</span> <span class="n">Zero</span>
<span class="kn">from</span> <span class="nn">mxnet.io</span> <span class="kn">import</span> <span class="n">NDArrayIter</span>
<span class="kn">from</span> <span class="nn">PIL</span> <span class="kn">import</span> <span class="n">Image</span>
<span class="kn">from</span> <span class="nn">scipy.stats</span> <span class="kn">import</span> <span class="n">zscore</span>
</code></pre></div></div>

<p>Below you can find an MXNet implementation of the PGGAN generator. It takes a 512-dimensional latent and transforms it into a 1024×1024 RGB image.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">Pixelnorm</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">epsilon</span><span class="p">:</span> <span class="nb">float</span> <span class="o">=</span> <span class="mf">1e-8</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="bp">None</span><span class="p">:</span>
        <span class="nb">super</span><span class="p">(</span><span class="n">Pixelnorm</span><span class="p">,</span> <span class="bp">self</span><span class="p">).</span><span class="n">__init__</span><span class="p">()</span>
        <span class="bp">self</span><span class="p">.</span><span class="n">eps</span> <span class="o">=</span> <span class="n">epsilon</span>
    <span class="k">def</span> <span class="nf">hybrid_forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">F</span><span class="p">,</span> <span class="n">x</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">nd</span><span class="p">:</span>
        <span class="k">return</span> <span class="n">x</span> <span class="o">*</span> <span class="n">F</span><span class="p">.</span><span class="n">rsqrt</span><span class="p">(</span><span class="n">F</span><span class="p">.</span><span class="n">mean</span><span class="p">(</span><span class="n">F</span><span class="p">.</span><span class="n">square</span><span class="p">(</span><span class="n">x</span><span class="p">),</span> <span class="mi">1</span><span class="p">,</span> <span class="bp">True</span><span class="p">)</span> <span class="o">+</span> <span class="bp">self</span><span class="p">.</span><span class="n">eps</span><span class="p">)</span>

<span class="k">class</span> <span class="nc">Bias</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">shape</span><span class="p">:</span> <span class="n">Tuple</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="bp">None</span><span class="p">:</span>
        <span class="nb">super</span><span class="p">(</span><span class="n">Bias</span><span class="p">,</span> <span class="bp">self</span><span class="p">).</span><span class="n">__init__</span><span class="p">()</span>
        <span class="bp">self</span><span class="p">.</span><span class="n">shape</span> <span class="o">=</span> <span class="n">shape</span>
        <span class="k">with</span> <span class="bp">self</span><span class="p">.</span><span class="n">name_scope</span><span class="p">():</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">b</span> <span class="o">=</span> <span class="bp">self</span><span class="p">.</span><span class="n">params</span><span class="p">.</span><span class="n">get</span><span class="p">(</span><span class="s">"b"</span><span class="p">,</span> <span class="n">init</span><span class="o">=</span><span class="n">Zero</span><span class="p">(),</span> <span class="n">shape</span><span class="o">=</span><span class="n">shape</span><span class="p">)</span>
    <span class="k">def</span> <span class="nf">hybrid_forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">F</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">nd</span><span class="p">:</span>
        <span class="k">return</span> <span class="n">F</span><span class="p">.</span><span class="n">broadcast_add</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">b</span><span class="p">[</span><span class="bp">None</span><span class="p">,</span> <span class="p">:,</span> <span class="bp">None</span><span class="p">,</span> <span class="bp">None</span><span class="p">])</span>

<span class="k">class</span> <span class="nc">Block</span><span class="p">(</span><span class="n">HybridSequential</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">:</span> <span class="nb">int</span><span class="p">,</span> <span class="n">in_channels</span><span class="p">:</span> <span class="nb">int</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="bp">None</span><span class="p">:</span>
        <span class="nb">super</span><span class="p">(</span><span class="n">Block</span><span class="p">,</span> <span class="bp">self</span><span class="p">).</span><span class="n">__init__</span><span class="p">()</span>
        <span class="bp">self</span><span class="p">.</span><span class="n">channels</span> <span class="o">=</span> <span class="n">channels</span>
        <span class="bp">self</span><span class="p">.</span><span class="n">in_channels</span> <span class="o">=</span> <span class="n">in_channels</span>
        <span class="k">with</span> <span class="bp">self</span><span class="p">.</span><span class="n">name_scope</span><span class="p">():</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">channels</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="n">in_channels</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">LeakyReLU</span><span class="p">(</span><span class="mf">0.2</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Pixelnorm</span><span class="p">())</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">channels</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="n">channels</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">LeakyReLU</span><span class="p">(</span><span class="mf">0.2</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Pixelnorm</span><span class="p">())</span>
    <span class="k">def</span> <span class="nf">hybrid_forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">F</span><span class="p">,</span> <span class="n">x</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">nd</span><span class="p">:</span>
        <span class="n">x</span> <span class="o">=</span> <span class="n">F</span><span class="p">.</span><span class="n">repeat</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
        <span class="n">x</span> <span class="o">=</span> <span class="n">F</span><span class="p">.</span><span class="n">repeat</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span>
        <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="p">)):</span>
            <span class="n">x</span> <span class="o">=</span> <span class="bp">self</span><span class="p">[</span><span class="n">i</span><span class="p">](</span><span class="n">x</span><span class="p">)</span>
        <span class="k">return</span> <span class="n">x</span>

<span class="k">class</span> <span class="nc">Generator</span><span class="p">(</span><span class="n">HybridSequential</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="bp">None</span><span class="p">:</span>
        <span class="nb">super</span><span class="p">(</span><span class="n">Generator</span><span class="p">,</span> <span class="bp">self</span><span class="p">).</span><span class="n">__init__</span><span class="p">()</span>
        <span class="k">with</span> <span class="bp">self</span><span class="p">.</span><span class="n">name_scope</span><span class="p">():</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Pixelnorm</span><span class="p">())</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Dense</span><span class="p">(</span><span class="mi">8192</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="bp">False</span><span class="p">,</span> <span class="n">in_units</span><span class="o">=</span><span class="mi">512</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Bias</span><span class="p">((</span><span class="mi">512</span><span class="p">,)))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">LeakyReLU</span><span class="p">(</span><span class="mf">0.2</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Pixelnorm</span><span class="p">())</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Conv2D</span><span class="p">(</span><span class="mi">512</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">512</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">LeakyReLU</span><span class="p">(</span><span class="mf">0.2</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Pixelnorm</span><span class="p">())</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">512</span><span class="p">,</span> <span class="mi">512</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">512</span><span class="p">,</span> <span class="mi">512</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">512</span><span class="p">,</span> <span class="mi">512</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">256</span><span class="p">,</span> <span class="mi">512</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">128</span><span class="p">,</span> <span class="mi">256</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">64</span><span class="p">,</span> <span class="mi">128</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">64</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Block</span><span class="p">(</span><span class="mi">16</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Conv2D</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">16</span><span class="p">))</span>
    <span class="k">def</span> <span class="nf">hybrid_forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">F</span><span class="p">:</span> <span class="n">Union</span><span class="p">[</span><span class="n">nd</span><span class="p">,</span> <span class="n">symbol</span><span class="p">],</span> <span class="n">x</span><span class="p">:</span> <span class="n">nd</span><span class="p">,</span> <span class="n">layer</span><span class="p">:</span> <span class="nb">int</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">nd</span><span class="p">:</span>
        <span class="n">x</span> <span class="o">=</span> <span class="n">F</span><span class="p">.</span><span class="n">Reshape</span><span class="p">(</span><span class="bp">self</span><span class="p">[</span><span class="mi">1</span><span class="p">](</span><span class="bp">self</span><span class="p">[</span><span class="mi">0</span><span class="p">](</span><span class="n">x</span><span class="p">)),</span> <span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">512</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">))</span>
        <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="p">)):</span>
            <span class="n">x</span> <span class="o">=</span> <span class="bp">self</span><span class="p">[</span><span class="n">i</span><span class="p">](</span><span class="n">x</span><span class="p">)</span>
            <span class="k">if</span> <span class="n">i</span> <span class="o">==</span> <span class="n">layer</span> <span class="o">+</span> <span class="mi">7</span><span class="p">:</span>
                <span class="k">return</span> <span class="n">x</span>
        <span class="k">return</span> <span class="n">x</span>
</code></pre></div></div>

<p>A dense (decoding) layer then transforms the 4096-dimensional functional responses into 512-dimensional latents. Only the weights of this layer are trained; the generator weights are kept fixed.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">Linear</span><span class="p">(</span><span class="n">HybridSequential</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">n_in</span><span class="p">,</span> <span class="n">n_out</span><span class="p">):</span>
        <span class="nb">super</span><span class="p">(</span><span class="n">Linear</span><span class="p">,</span> <span class="bp">self</span><span class="p">).</span><span class="n">__init__</span><span class="p">()</span>
        <span class="k">with</span> <span class="bp">self</span><span class="p">.</span><span class="n">name_scope</span><span class="p">():</span>
            <span class="bp">self</span><span class="p">.</span><span class="n">add</span><span class="p">(</span><span class="n">Dense</span><span class="p">(</span><span class="n">n_out</span><span class="p">,</span> <span class="n">in_units</span><span class="o">=</span><span class="n">n_in</span><span class="p">))</span>
</code></pre></div></div>

<p>Before training, all data must be converted to <a href="https://mxnet.apache.org/versions/1.6/api/python/docs/api/ndarray/index.html">NDArray</a> (and stored on the GPU, if you have access to one). The weight parameters of the generator (MXNet) can be found on Drive. Note that we fit the weights of the dense layer with gradient descent, whereas ordinary least squares would yield a similar solution. The current setup, however, lets you experiment with more sophisticated models (e.g., predict intermediate layer activations of the PGGAN and include these in your loss function).</p>
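<p>For comparison, the ordinary least squares solution (here with a small ridge penalty, mirroring the weight decay used in the gradient-descent setup) can be computed in closed form. This is a minimal NumPy sketch with random stand-in data at reduced sizes; the real responses and latents would take the place of <code class="language-plaintext highlighter-rouge">X_tr</code> and <code class="language-plaintext highlighter-rouge">T_tr</code>, and the regularization strength is a hypothetical value:</p>

```python
import numpy as np

# Illustrative stand-ins for the real data; shapes are reduced
# (256 voxels, 64 latents) so the example runs instantly.
rng = np.random.default_rng(0)
X_tr = rng.standard_normal((108, 256))   # brain responses
T_tr = rng.standard_normal((108, 64))    # target latents

# Closed-form ridge solution: W = (X^T X + alpha*I)^-1 X^T T.
alpha = 1.0  # hypothetical regularization strength
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X_tr.shape[1]),
                    X_tr.T @ T_tr)

T_pred = X_tr @ W  # predicted latents, shape (108, 64)
```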

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Set parameters.
</span><span class="n">batch_size</span> <span class="o">=</span> <span class="mi">30</span>
<span class="n">max_epoch</span> <span class="o">=</span> <span class="mi">1500</span>
<span class="n">n_lat</span> <span class="o">=</span> <span class="mi">512</span>
<span class="n">n_vox</span> <span class="o">=</span> <span class="mi">4096</span>

<span class="c1"># Make dataset to take batches from during training.
</span><span class="k">def</span> <span class="nf">load_dataset</span><span class="p">(</span><span class="n">t</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">batch_size</span><span class="p">):</span>
    <span class="k">return</span> <span class="n">NDArrayIter</span><span class="p">({</span> <span class="s">"x"</span><span class="p">:</span> <span class="n">nd</span><span class="p">.</span><span class="n">stack</span><span class="p">(</span><span class="o">*</span><span class="n">x</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="p">},</span> <span class="p">{</span> <span class="s">"t"</span><span class="p">:</span> <span class="n">nd</span><span class="p">.</span><span class="n">stack</span><span class="p">(</span><span class="o">*</span><span class="n">t</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="p">},</span> <span class="n">batch_size</span><span class="p">,</span> <span class="bp">True</span><span class="p">)</span>

<span class="c1"># Latents.
</span><span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s">"yourpath/data_1.dat"</span><span class="p">,</span> <span class="s">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
    <span class="n">_</span><span class="p">,</span> <span class="n">T_tr</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">T_te</span> <span class="o">=</span> <span class="n">pickle</span><span class="p">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>

<span class="c1"># Z-score the brain data.
</span><span class="n">X_te</span> <span class="o">=</span> <span class="n">zscore</span><span class="p">(</span><span class="n">X_hyperaligned</span><span class="p">[:</span><span class="mi">36</span><span class="p">])</span>
<span class="n">X_tr</span> <span class="o">=</span> <span class="n">zscore</span><span class="p">(</span><span class="n">X_hyperaligned</span><span class="p">[</span><span class="mi">36</span><span class="p">:])</span>
<span class="n">train</span> <span class="o">=</span> <span class="n">load_dataset</span><span class="p">(</span><span class="n">nd</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">T_tr</span><span class="p">),</span> <span class="n">nd</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_tr</span><span class="p">),</span> <span class="n">batch_size</span><span class="p">)</span>
<span class="n">test</span> <span class="o">=</span>  <span class="n">load_dataset</span><span class="p">(</span><span class="n">nd</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">T_te</span><span class="p">),</span> <span class="n">nd</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_te</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">36</span><span class="p">)</span>

<span class="c1"># Initialize generator.
</span><span class="n">generator</span> <span class="o">=</span> <span class="n">Generator</span><span class="p">()</span>
<span class="n">generator</span><span class="p">.</span><span class="n">load_parameters</span><span class="p">(</span><span class="s">"yourpath/generator.params"</span><span class="p">)</span>
<span class="n">mean_squared_error</span> <span class="o">=</span> <span class="n">gluon</span><span class="p">.</span><span class="n">loss</span><span class="p">.</span><span class="n">L2Loss</span><span class="p">()</span>

<span class="c1"># Initialize linear model.
</span><span class="n">vox_to_lat</span> <span class="o">=</span> <span class="n">Linear</span><span class="p">(</span><span class="n">n_vox</span><span class="p">,</span> <span class="n">n_lat</span><span class="p">)</span>
<span class="n">vox_to_lat</span><span class="p">.</span><span class="n">initialize</span><span class="p">()</span>
<span class="n">trainer</span> <span class="o">=</span> <span class="n">gluon</span><span class="p">.</span><span class="n">Trainer</span><span class="p">(</span><span class="n">vox_to_lat</span><span class="p">.</span><span class="n">collect_params</span><span class="p">(),</span> <span class="s">"Adam"</span><span class="p">,</span> <span class="p">{</span><span class="s">"learning_rate"</span><span class="p">:</span> <span class="mf">0.00001</span><span class="p">,</span> <span class="s">"wd"</span><span class="p">:</span> <span class="mf">0.01</span><span class="p">})</span>

<span class="c1"># Training.
</span><span class="n">epoch</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">results_tr</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">results_te</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">while</span> <span class="n">epoch</span> <span class="o">&lt;</span> <span class="n">max_epoch</span><span class="p">:</span>
    <span class="n">train</span><span class="p">.</span><span class="n">reset</span><span class="p">()</span>
    <span class="n">test</span><span class="p">.</span><span class="n">reset</span><span class="p">()</span>
    <span class="n">loss_tr</span> <span class="o">=</span> <span class="mi">0</span>
    <span class="n">loss_te</span> <span class="o">=</span> <span class="mi">0</span>
    <span class="n">count</span> <span class="o">=</span> <span class="mi">0</span>
    <span class="k">for</span> <span class="n">batch_tr</span> <span class="ow">in</span> <span class="n">train</span><span class="p">:</span>
        <span class="k">with</span> <span class="n">autograd</span><span class="p">.</span><span class="n">record</span><span class="p">():</span>
            <span class="n">lat_Y</span> <span class="o">=</span> <span class="n">vox_to_lat</span><span class="p">(</span><span class="n">batch_tr</span><span class="p">.</span><span class="n">data</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
            <span class="n">loss</span> <span class="o">=</span> <span class="n">mean_squared_error</span><span class="p">(</span><span class="n">lat_Y</span><span class="p">,</span> <span class="n">batch_tr</span><span class="p">.</span><span class="n">label</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
        <span class="n">loss</span><span class="p">.</span><span class="n">backward</span><span class="p">()</span>
        <span class="n">trainer</span><span class="p">.</span><span class="n">step</span><span class="p">(</span><span class="n">batch_size</span><span class="p">)</span>
        <span class="n">loss_tr</span> <span class="o">+=</span> <span class="n">loss</span><span class="p">.</span><span class="n">mean</span><span class="p">().</span><span class="n">asnumpy</span><span class="p">()</span>
        <span class="n">count</span> <span class="o">+=</span> <span class="mi">1</span>
    <span class="k">for</span> <span class="n">batch_te</span> <span class="ow">in</span> <span class="n">test</span><span class="p">:</span>
        <span class="n">lat_Y</span> <span class="o">=</span> <span class="n">vox_to_lat</span><span class="p">(</span><span class="n">batch_te</span><span class="p">.</span><span class="n">data</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
        <span class="n">loss</span> <span class="o">=</span> <span class="n">mean_squared_error</span><span class="p">(</span><span class="n">lat_Y</span><span class="p">,</span> <span class="n">batch_te</span><span class="p">.</span><span class="n">label</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
        <span class="n">loss_te</span> <span class="o">+=</span> <span class="n">loss</span><span class="p">.</span><span class="n">mean</span><span class="p">().</span><span class="n">asnumpy</span><span class="p">()</span>
    <span class="n">loss_tr_normalized</span> <span class="o">=</span> <span class="n">loss_tr</span> <span class="o">/</span> <span class="n">count</span>
    <span class="n">results_tr</span><span class="p">.</span><span class="n">append</span><span class="p">(</span><span class="n">loss_tr_normalized</span><span class="p">)</span>
    <span class="n">results_te</span><span class="p">.</span><span class="n">append</span><span class="p">(</span><span class="n">loss_te</span><span class="p">)</span>
    <span class="n">epoch</span> <span class="o">+=</span> <span class="mi">1</span>
    <span class="k">print</span><span class="p">(</span><span class="s">"Epoch %i: %.4f / %.4f"</span> <span class="o">%</span> <span class="p">(</span><span class="n">epoch</span><span class="p">,</span> <span class="n">loss_tr_normalized</span><span class="p">,</span> <span class="n">loss_te</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">figure</span><span class="p">()</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">epoch</span><span class="p">,</span> <span class="n">epoch</span><span class="p">),</span> <span class="n">results_tr</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">epoch</span><span class="p">,</span> <span class="n">epoch</span><span class="p">),</span> <span class="n">results_te</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span>
</code></pre></div></div>

<p>After training, reconstruct faces from the test-set responses. Note that the test data was not used for training (the test loss was only computed per epoch for plotting purposes), so the model has never encountered these brain responses before.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Testing and reconstructing
</span><span class="n">lat_Y</span> <span class="o">=</span> <span class="n">vox_to_lat</span><span class="p">(</span><span class="n">nd</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_te</span><span class="p">))</span>
<span class="nb">dir</span> <span class="o">=</span> <span class="s">"yourpath/reconstructions"</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">os</span><span class="p">.</span><span class="n">path</span><span class="p">.</span><span class="n">exists</span><span class="p">(</span><span class="nb">dir</span><span class="p">):</span>
    <span class="n">os</span><span class="p">.</span><span class="n">mkdir</span><span class="p">(</span><span class="nb">dir</span><span class="p">)</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">latent</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">lat_Y</span><span class="p">):</span>
    <span class="n">face</span> <span class="o">=</span> <span class="n">generator</span><span class="p">(</span><span class="n">latent</span><span class="p">[</span><span class="bp">None</span><span class="p">],</span> <span class="mi">9</span><span class="p">).</span><span class="n">asnumpy</span><span class="p">()</span>
    <span class="n">face</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">clip</span><span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="n">rint</span><span class="p">(</span><span class="mf">127.5</span> <span class="o">*</span> <span class="n">face</span> <span class="o">+</span> <span class="mf">127.5</span><span class="p">),</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">255.0</span><span class="p">)</span>
    <span class="n">face</span> <span class="o">=</span> <span class="n">face</span><span class="p">.</span><span class="n">astype</span><span class="p">(</span><span class="s">"uint8"</span><span class="p">).</span><span class="n">transpose</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
    <span class="n">Image</span><span class="p">.</span><span class="n">fromarray</span><span class="p">(</span><span class="n">face</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="s">'RGB'</span><span class="p">).</span><span class="n">save</span><span class="p">(</span><span class="nb">dir</span> <span class="o">+</span> <span class="s">"/%d.png"</span> <span class="o">%</span> <span class="n">i</span><span class="p">)</span>
</code></pre></div></div>

<p>In the end, one decoding model was trained on averaged functional neural responses, which resulted in face reconstructions strikingly similar to the originally perceived faces. This raises the question of how close neural decoding can get to objective reality if we average the brain data of an even larger pool of eyewitnesses.</p>
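<p>Averaging responses across eyewitnesses is, once the data are in a shared space, just an element-wise mean over subjects. A tiny illustrative sketch (the function name and toy arrays are made up for this example):</p>

```python
import numpy as np

def average_responses(subject_data):
    """Average (hyper)aligned responses across subjects.

    subject_data: list of arrays, each (n_trials, n_voxels), already
    brought into a shared space (e.g., via hyperalignment).
    """
    stacked = np.stack(subject_data, axis=0)  # (n_subjects, n_trials, n_voxels)
    return stacked.mean(axis=0)               # (n_trials, n_voxels)

# Toy example with two "subjects":
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[3.0, 0.0], [1.0, 4.0]])
avg = average_responses([a, b])
```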

<p>That’s all folks!</p>

<p><img src="/assets/images/blog/3.png" alt="Recon3" /></p>

<p><img src="/assets/images/blog/4.png" alt="Recon4" />
<em>Stimuli (top row) and their reconstructions from brain data (bottom row).</em></p>

<p>Dado, T., Güçlütürk, Y., Ambrogioni, L. et al. Hyperrealistic neural decoding for reconstructing faces from fMRI activations via the GAN latent space. Sci Rep 12, 141 (2022). https://doi.org/10.1038/s41598-021-03938-w</p>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2022, 22 January]]></summary></entry><entry><title type="html">Wave function collapse</title><link href="https://thirzadado.com/wfc/" rel="alternate" type="text/html" title="Wave function collapse" /><published>2021-04-07T00:00:00+00:00</published><updated>2021-04-07T00:00:00+00:00</updated><id>https://thirzadado.com/wfc</id><content type="html" xml:base="https://thirzadado.com/wfc/"><![CDATA[<p>2021, 07 April</p>

<p>The synthesis algorithm <a href="https://github.com/mxgmn/WaveFunctionCollapse">wave function collapse</a> is loosely inspired by the concept from quantum mechanics: it uses a set of probabilities to describe all possible states of a texture in superposition, which are then collapsed one by one to generate the resulting output pattern.</p>
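<p>The collapse-and-propagate loop can be sketched in a heavily simplified 1-D form. This is not the algorithm from the linked repository (which extracts adjacency constraints from a bitmap); the tile names and adjacency rules below are invented purely for illustration:</p>

```python
import random

# Hypothetical tiles and adjacency rules, for illustration only.
TILES = {"land", "coast", "sea"}
RIGHT_OK = {            # which tiles may sit directly right of a tile
    "land":  {"land", "coast"},
    "coast": {"land", "coast", "sea"},
    "sea":   {"coast", "sea"},
}
LEFT_OK = {t: {s for s in TILES if t in RIGHT_OK[s]} for t in TILES}

def collapse(n, seed=0):
    """Collapse a 1-D row of n cells: always observe the lowest-entropy
    (fewest remaining options) cell, then propagate constraints."""
    rng = random.Random(seed)
    cells = [set(TILES) for _ in range(n)]  # every cell starts in superposition

    def propagate():
        changed = True
        while changed:                      # prune options until a fixed point
            changed = False
            for i in range(n):
                allowed = set(TILES)
                if i > 0:
                    allowed &= set().union(*(RIGHT_OK[t] for t in cells[i - 1]))
                if i < n - 1:
                    allowed &= set().union(*(LEFT_OK[t] for t in cells[i + 1]))
                pruned = cells[i] & allowed
                if pruned != cells[i]:
                    cells[i] = pruned
                    changed = True

    while any(len(c) > 1 for c in cells):
        i = min((j for j in range(n) if len(cells[j]) > 1),
                key=lambda j: len(cells[j]))
        cells[i] = {rng.choice(sorted(cells[i]))}  # observe one state
        propagate()
    return [next(iter(c)) for c in cells]
```

<p>Because this toy constraint graph is a path, propagating after every observation is enough to avoid contradictions; the full 2-D algorithm additionally has to handle contradictions via restarts or backtracking.</p>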

<table>
  <tr>
    <td><img src="/assets/images/misc/wfc/Skyline_.png" /></td>
    <td><img src="/assets/images/misc/wfc/Skyline_1.gif" /></td>
    <td><img src="/assets/images/misc/wfc/Skyline_3.gif" /></td>
  </tr>
    <tr>
    <td><img src="/assets/images/misc/wfc/Skyline2_.png" /></td>
    <td><img src="/assets/images/misc/wfc/Skyline2_1.gif" /></td>
    <td><img src="/assets/images/misc/wfc/Skyline2_2.gif" /></td>
  </tr>
</table>

<p><i>Starting in superposition, each state collapse brings the image closer to a fully-collapsed pattern that looks locally similar to the input sample image (shown as the smaller image on the left).</i></p>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2021, 07 April]]></summary></entry><entry><title type="html">Landscape generation</title><link href="https://thirzadado.com/landscape/" rel="alternate" type="text/html" title="Landscape generation" /><published>2021-03-07T00:00:00+00:00</published><updated>2021-03-07T00:00:00+00:00</updated><id>https://thirzadado.com/landscape</id><content type="html" xml:base="https://thirzadado.com/landscape/"><![CDATA[<p>2021, 07 March</p>

<p>In the master’s course <a href="https://www.ru.nl/courseguides/socsci/courses-osiris/ai/sow-mki95-computer-graphics-computer-vision/">Computer Graphics &amp; Computer Vision</a>, students procedurally generate their own mountain-like terrain in Unity using Perlin noise. This was my landscape.</p>
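<p>The core idea of noise-based terrain can be sketched in a few lines of Python: sample smooth noise at several doubling frequencies (octaves) and sum them with halving amplitudes. This sketch uses value noise rather than true Perlin gradient noise to stay dependency-free, and is not the course's Unity implementation; all names and constants are illustrative:</p>

```python
import math
import random

def value_noise_2d(x, y, seed=0):
    """Smooth 2-D value noise: random lattice values, blended with a fade."""
    def lattice(ix, iy):
        # Deterministic pseudo-random value in [0, 1) per lattice point.
        return random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed).random()
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = x - x0, y - y0
    # Smoothstep fade so the interpolation has continuous derivatives.
    fx, fy = tx * tx * (3 - 2 * tx), ty * ty * (3 - 2 * ty)
    top = (1 - fx) * lattice(x0, y0) + fx * lattice(x0 + 1, y0)
    bot = (1 - fx) * lattice(x0, y0 + 1) + fx * lattice(x0 + 1, y0 + 1)
    return (1 - fy) * top + fy * bot

def heightmap(size, octaves=4, seed=0):
    """Fractal terrain: sum octaves at doubling frequency, halving amplitude."""
    amp_sum = sum(0.5 ** o for o in range(octaves))
    grid = [[0.0] * size for _ in range(size)]
    for j in range(size):
        for i in range(size):
            h = sum(0.5 ** o * value_noise_2d(i * 2 ** o / size * 4,
                                              j * 2 ** o / size * 4,
                                              seed + o)
                    for o in range(octaves))
            grid[j][i] = h / amp_sum  # normalize heights into [0, 1]
    return grid
```

<p>In Unity the same fBm sum would typically be built from <code class="language-plaintext highlighter-rouge">Mathf.PerlinNoise</code> and fed into a terrain mesh's vertex heights.</p>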

<p>Click on any button in the window below to start playing.</p>

<p><i>It ends at infinity, so it never ends.</i></p>

<div id="unity-container" class="unity-desktop">
    <canvas id="unity-canvas" width="960" height="600"></canvas>
    <div id="unity-loading-bar">
    <div id="unity-logo"></div>
    <div id="unity-progress-bar-empty">
        <div id="unity-progress-bar-full"></div>
    </div>
    </div>
    <div id="unity-mobile-warning">
    WebGL builds are not supported on mobile devices.
    </div>
    <div id="unity-footer">
    <div id="unity-webgl-logo"></div>
    <div id="unity-fullscreen-button"></div>
    </div>
</div>
<script>
    var buildUrl = "../../assets/unity/landscape7/Build";
    var loaderUrl = buildUrl + "/landscape7.loader.js";
    var config = {
    dataUrl: buildUrl + "/landscape7.data",
    frameworkUrl: buildUrl + "/landscape7.framework.js",
    codeUrl: buildUrl + "/landscape7.wasm",
    streamingAssetsUrl: "StreamingAssets",
    companyName: "DefaultCompany",
    productName: "LandscapeGeneration",
    productVersion: "1",
    };

    var container = document.querySelector("#unity-container");
    var canvas = document.querySelector("#unity-canvas");
    var loadingBar = document.querySelector("#unity-loading-bar");
    var progressBarFull = document.querySelector("#unity-progress-bar-full");
    var fullscreenButton = document.querySelector("#unity-fullscreen-button");
    var mobileWarning = document.querySelector("#unity-mobile-warning");

    // By default Unity keeps WebGL canvas render target size matched with
    // the DOM size of the canvas element (scaled by window.devicePixelRatio)
    // Set this to false if you want to decouple this synchronization from
    // happening inside the engine, and you would instead like to size up
    // the canvas DOM size and WebGL render target sizes yourself.
    // config.matchWebGLToCanvasSize = false;

    if (/iPhone|iPad|iPod|Android/i.test(navigator.userAgent)) {
    container.className = "unity-mobile";
    // Avoid draining fillrate performance on mobile devices,
    // and default/override low DPI mode on mobile browsers.
    config.devicePixelRatio = 1;
    mobileWarning.style.display = "block";
    setTimeout(() => {
        mobileWarning.style.display = "none";
    }, 5000);
    } else {
    canvas.style.width = "960px";
    canvas.style.height = "600px";
    }
    loadingBar.style.display = "block";

    var script = document.createElement("script");
    script.src = loaderUrl;
    script.onload = () => {
    createUnityInstance(canvas, config, (progress) => {
        progressBarFull.style.width = 100 * progress + "%";
    }).then((unityInstance) => {
        loadingBar.style.display = "none";
        fullscreenButton.onclick = () => {
        unityInstance.SetFullscreen(1);
        };
    }).catch((message) => {
        alert(message);
    });
    };
    document.body.appendChild(script);
</script>]]></content><author><name>Thirza Dado</name></author><summary type="html"><![CDATA[2021, 07 March]]></summary></entry></feed>