Tal Halperin | Web Developer

Stoca

An immersive web experience that combines AI-driven philosophical conversation with real-time adaptive visuals and music, creating a personalized journey of self-reflection.


Project Overview

Stoca is a web app that pairs you with an AI stoic philosopher. It analyzes your conversation in real time, morphing both visuals and audio to match your mood.

Key Features:

  • AI-powered philosophical dialogue using GPT-3.5 Turbo

  • Sentiment-reactive color palette generation

  • Dynamic background gradient using custom shaders

  • Adaptive musical composition that shifts with conversation tone

  • Interactive audio elements responding to user input

Tech Stack:

  • React

  • OpenAI API (GPT-3.5 Turbo)

  • React Three Fiber (three.js)

  • Blender

  • GSAP

  • Custom WebGL Shaders

  • Web Audio API


→ AI Interaction and Sentiment Analysis

ChatGPT Integration

The heart of Stoca's interactivity lies in its integration with OpenAI's API. We use a carefully crafted prompt to set the context for our AI stoic philosopher:

const [finalPrompt, setFinalPrompt] = useState(
  "The following is a conversation with an AI Stoic Philosopher. The philosopher is helpful, creative, clever, friendly, gives brief stoic advice, and asks deep questions.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How are you feeling? "
);

"The following is a conversation with an AI Stoic Philosopher. The philosopher is helpful, creative, clever, friendly, gives brief stoic advice, and asks deep questions.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How are you feeling? "

This prompt is then used in our API call to generate responses:

// "inputt" holds the user's latest message; "openai" is the configured API client
const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: finalPrompt },
    { role: "user", content: inputt },
  ],
  temperature: 0.9,
  max_tokens: 300,
  top_p: 1,
  frequency_penalty: 0.76,
  presence_penalty: 0.75,
});
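
For context, openai above is an instance of the official OpenAI JavaScript SDK. A minimal setup might look like the following (the environment variable name and browser flag are assumptions about this client-side app, not confirmed details):

import OpenAI from "openai";

// Assumed client setup; in production the key is better kept behind a server
const openai = new OpenAI({
  apiKey: process.env.REACT_APP_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true,
});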

Dynamic Color Palette Generation

We use the user's input to generate a mood-appropriate color palette:

const colorCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content:
        "You are a color palette generator. Provide five different hex value colors that create a palette matching the input's sentiment. Then describe that sentiment as either optimistic or pessimistic.",
    },
    { role: "user", content: inputt },
  ],
});

"You are a color palette generator. Provide five different hex value colors that create a palette matching the input's sentiment. Then describe that sentiment as either optimistic or pessimistic."

We then parse the reply into an array of five hex values ([#000000, #000000, #000000, #000000, #000000] in form) and animate the background gradient, smoothly transitioning between the colors to reflect the user's sentiment. Depending on whether the sentiment label is optimistic or pessimistic, we also adjust the background music to match the user's emotional state.
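
A rough sketch of that parsing step (the regex and assignments are illustrative, not the production code), producing the newPalette array and the word sentiment label referenced elsewhere in the code:

// Hypothetical parsing of the palette completion's reply text
const reply = colorCompletion.choices[0].message.content;

// Extract the five hex codes, e.g. "#1A2B3C"
const newPalette = (reply.match(/#[0-9A-Fa-f]{6}/g) ?? []).slice(0, 5);

// Extract the sentiment label that steers the music
const word = /pessimistic/i.test(reply) ? "pessimistic" : "optimistic";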


→ Dynamic Visual Elements with React Three Fiber

The visual experience in Stoca is powered by React Three Fiber, allowing for seamless integration of 3D graphics with React. This section showcases the advanced techniques used to create a responsive and immersive environment.

Gradient Background

The dynamic gradient background is a key visual element that responds to the AI-generated color palette. It's implemented using a custom GLSL shader:

Full GLSL shader code
const GradientMaterial = shaderMaterial(
  {
    time: 0,
    uColor: newPalette,
    resolution: new THREE.Vector4(),
    opacity: 0,
    modifier: 1,
  },
  /* glsl */ `
    uniform float modifier;
    uniform float time;
    uniform float opacity;
    uniform vec3 uColor[5];
    varying vec3 vNormal;
    varying vec3 vColor;
    float PI = 3.141592653589793238;

    // Simplex 3D Noise by Ian McEwan, Ashima Arts
    vec4 permute(vec4 x){return mod(((x*34.0)+1.0)*x, 289.0);}
    vec4 taylorInvSqrt(vec4 r){return 1.79284291400159 - 0.85373472095314 * r;}
float snoise(vec3 v){ 
  const vec2  C = vec2(1.0/6.0, 1.0/3.0) ;
  const vec4  D = vec4(0.0, 0.5, 1.0, 2.0);

// First corner
  vec3 i  = floor(v + dot(v, C.yyy) );
  vec3 x0 =   v - i + dot(i, C.xxx) ;

// Other corners
  vec3 g = step(x0.yzx, x0.xyz);
  vec3 l = 1.0 - g;
  vec3 i1 = min( g.xyz, l.zxy );
  vec3 i2 = max( g.xyz, l.zxy );

  //  x0 = x0 - 0. + 0.0 * C 
  vec3 x1 = x0 - i1 + 1.0 * C.xxx;
  vec3 x2 = x0 - i2 + 2.0 * C.xxx;
  vec3 x3 = x0 - 1. + 3.0 * C.xxx;

// Permutations
  i = mod(i, 289.0 ); 
  vec4 p = permute( permute( permute( 
             i.z + vec4(0.0, i1.z, i2.z, 1.0 ))
           + i.y + vec4(0.0, i1.y, i2.y, 1.0 )) 
           + i.x + vec4(0.0, i1.x, i2.x, 1.0 ));

// Gradients
// ( N*N points uniformly over a square, mapped onto an octahedron.)
  float n_ = 1.0/7.0; // N=7
  vec3  ns = n_ * D.wyz - D.xzx;

  vec4 j = p - 49.0 * floor(p * ns.z *ns.z);  //  mod(p,N*N)

  vec4 x_ = floor(j * ns.z);
  vec4 y_ = floor(j - 7.0 * x_ );    // mod(j,N)

  vec4 x = x_ *ns.x + ns.yyyy ;
  vec4 y = y_ *ns.x + ns.yyyy ;
  vec4 h = 1.0 - abs(x) - abs(y);

  vec4 b0 = vec4( x.xy, y.xy );
  vec4 b1 = vec4( x.zw, y.zw );

  vec4 s0 = floor(b0)*2.0 + 1.0 ;
  vec4 s1 = floor(b1)*2.0 + 1.0;
  vec4 sh = -step(h, vec4(0.0));

  vec4 a0 = b0.xzyw + s0.xzyw*sh.xxyy ;
  vec4 a1 = b1.xzyw + s1.xzyw*sh.zzww ;

  vec3 p0 = vec3(a0.xy,h.x);
  vec3 p1 = vec3(a0.zw,h.y);
  vec3 p2 = vec3(a1.xy,h.z);
  vec3 p3 = vec3(a1.zw,h.w);

//Normalise gradients
  vec4 norm = taylorInvSqrt(vec4(dot(p0,p0), dot(p1,p1), dot(p2, p2), dot(p3,p3)));
  p0 *= norm.x ;
  p1 *= norm.y;
  p2 *= norm.z;
  p3 *= norm.w;

// Mix final noise value
  vec4 m = max(0.6 - vec4(dot(x0,x0), dot(x1,x1), dot(x2,x2), dot(x3,x3)), 0.0);
  m = m * m ;
  return 42.0 * dot( m*m, vec4( dot(p0,x0), dot(p1,x1), 
                                dot(p2,x2), dot(p3,x3) ) );
}

void main() {
  vec3 noiseCoord = normal;

  // Start from the fifth palette color, scaled by the mood modifier
  vColor = uColor[4] * modifier;

  // Layer the remaining four colors, each masked by its own animated noise field
  for(int i = 0; i < 4; i++) {
    float noiseFlow  = 5. + float(i)*0.3;
    float noiseSpeed = 10. + float(i)*0.3;
    float noiseSeed  = 1. + float(i)*10.;
    vec2  noiseFreq  = vec2(1., 1.4) * .4;
    float noiseFloor = 0.1;
    float noiseCeil  = 0.6 + float(i)*0.07;

    float noise = smoothstep(noiseFloor, noiseCeil,
      snoise(vec3(
        noiseCoord.x*noiseFreq.x + time*noiseFlow,
        noiseCoord.y*noiseFreq.y,
        time / 2.0 * noiseSpeed + noiseSeed
      ))
    );

    vColor = mix(vColor, uColor[i], noise);
  }

  vNormal = normal;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

  `,
   /* glsl */ `
    uniform float opacity;
    varying vec3 vColor;

    void main() {
      // The palette blending happens per-vertex; just output the blended color
      gl_FragColor = vec4(vColor, opacity);
    }
  `
);

This shader creates a fluid, animated gradient by:

  1. Using a 3D Simplex noise function to generate organic patterns

  2. Mixing between the five colors provided by the AI based on the noise value

  3. Animating the noise over time to create a flowing effect

The colors are passed to the shader as uniforms and updated in real-time as the AI generates new color palettes based on the conversation's sentiment.
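
As a rough sketch of that update path (the function and tween duration are assumptions, not the production code), each hex string can be converted to a THREE.Color and tweened into the uColor uniform with GSAP:

import * as THREE from "three";
import gsap from "gsap";

// Hypothetical updater: tween the shader's five uColor entries toward a new palette
function applyPalette(material, hexColors) {
  hexColors.forEach((hex, i) => {
    const target = new THREE.Color(hex);
    // Animating r/g/b individually lets the gradient drift rather than snap
    gsap.to(material.uniforms.uColor.value[i], {
      r: target.r,
      g: target.g,
      b: target.b,
      duration: 3,
    });
  });
}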

Camera Transitions

We implement smooth transitions of the 3D camera that respond to user interactions. These transitions are managed in the Env.js file:

Full camera transition code
import * as THREE from "three";
import { useFrame } from "@react-three/fiber";
import { Environment } from "@react-three/drei";

// Camera position and look-at target for each step of the conversation
const cameraSteps = {
  2: { position: [0, 3, 15], target: [0, 0, 0] },
  3: { position: [15, 0, 12.5], target: [0, 0, 0] },
  4: { position: [-15, 0, 12.5], target: [0, 0, 0] },
  5: { position: [-6.0358791643389145, 3.028888268496038, 6.405432772282838], target: [0, 1, 0] },
  6: { position: [5.248097238306234, 2.5015889415213106, 5.4666839498488295], target: [0, 1, 0] },
  7: { position: [0, 4.332061055971331, 6.700236003219422], target: [0, 1, 0] },
  8: { position: [0, -0.902270925328769, 7.929117645891684], target: [0, 1, 0] },
  9: { position: [0, 0, 41.19680788578111], target: [0, 0, 0] },
  10: { position: [10.830953118825398, 0.6206651180632762, -0.40251601096885026], target: [0, 1, 0] },
  11: { position: [0, -0.902270925328769, 7.929117645891684], target: [0, 0, 0] },
  12: { position: [-10.830953118825398, 0.6206651180632762, -0.40251601096885026], target: [0, 1, 0] },
};

export default function Env(props) {
  let vec = new THREE.Vector3();

  useFrame((state) => {
    const step = cameraSteps[props.enterIncrement % 13];
    if (step) {
      // Move 1% of the remaining distance each frame: a smooth exponential ease
      state.camera.position.lerp(vec.set(...step.position), 0.01);
      state.camera.lookAt(...step.target);
    }
  });

  return <Environment preset="night" blur={0.65} background={false} />;
}

This code:

  1. Lerps the camera 1% of the remaining distance toward its target each frame, easing it smoothly into each preset viewpoint

  2. Creates a dynamic, cinematic feel as users progress through the conversation

Stoic Philosopher Statue

A central visual element in Stoca is the stoic philosopher statue, which I created using Blender. This 3D model serves as a focal point in the scene, embodying the AI's presence in a visually striking way.

The process of creating the statue involved:

  1. Conceptualizing a pose and style that represents stoic philosophy

  2. Modeling the basic form in Blender

  3. Sculpting details to add character and depth

  4. UV unwrapping and texturing for realistic materials

  5. Exporting as GLB

The statue is then imported into the React Three Fiber scene:

import { useGLTF } from '@react-three/drei'

function Statue() {
  const { nodes, materials } = useGLTF('/path/to/statue.glb')
  return (
    <group>
      <mesh
        geometry={nodes.Statue.geometry}
        material={materials.StatueMaterial}
      />
    </group>
  )
}
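
Because useGLTF suspends while the asset loads, drei's useGLTF.preload can be called at module scope to start fetching the model before the component first renders:

useGLTF.preload('/path/to/statue.glb')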

→ Adaptive Audio Experience

The audio in Stoca is meticulously designed to be both interactive and mood-responsive, creating a rich sonic landscape that enhances the user's emotional journey.

Musical Composition and Key Modulation

The audio experience is built around two complementary key signatures:

  • A minor (pessimistic mood)

  • C major (optimistic mood, the relative major of A minor)

This musical design allows for smooth transitions between moods while maintaining harmonic consistency. Here's how it works:

  1. Composition: I composed arpeggios and synth pad loops in both A minor and C major.

  2. Sentiment-based Playback: Depending on the AI's sentiment analysis of the user's input:

    • Optimistic sentiment: C major stems play prominently

    • Pessimistic sentiment: A minor stems are introduced while C major fades

Here's a snippet from AudioController.js that manages this transition:

// "word" is the sentiment label ("optimistic" or "pessimistic") from the palette response
useEffect(() => {
  const optimisticSettings = { padC: 0.5, padF: 0, arpC: 0.5 };
  const pessimisticSettings = { padC: 0, padF: 0.5, arpC: 0 };
  const settings = word === "optimistic" ? optimisticSettings : pessimisticSettings;

  Object.entries(settings).forEach(([trackName, targetVolume]) => {
    gsap.to(audioStates[trackName], {
      volume: targetVolume,
      duration: 5,
      onUpdate: () => updateVolume(trackName, audioStates[trackName].volume),
    });
  });
}, [word]);

This code smoothly transitions between optimistic and pessimistic audio settings over 5 seconds, creating a gradual shift in mood.
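
The updateVolume helper isn't shown above; a plausible shape, assuming each track is a Howl instance kept in a howls lookup, is simply:

// Hypothetical helper: apply the tweened value to the underlying sound
const updateVolume = (trackName, volume) => {
  howls[trackName].volume(volume); // howler.js per-sound volume setter
};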

Interactive Typing Sounds

To make the typing experience more engaging and musically integrated, I implemented a unique arpeggio playback system:

  1. Arpeggio Design: I composed a 21-note arpeggio sequence that works harmonically in both A minor and C major.

  2. Note Isolation: Each note of the arpeggio was exported as an individual audio file.

  3. Playback Mechanism: As the user types, the system plays through this arpeggio sequence:

    • Each new word (detected by a space key press) triggers the next note in the sequence.

    • The sequence loops after all 21 notes have been played.

Here's the relevant code from InputBar.js:

const handleKeyPress = (event) => {
  setFirstInteraction((prev) => prev + 1);
  if (!isMobile) {
    if (event.key === " ") {
      keySound.play();
      setCurrentNote((prev) => (prev + 1) % 21); // wrap after the 21st note
    }
    if (event.key === "Enter") {
      enterSound.play();
      setEntered(true);
    }
  }
};

// Audio setup: Howl comes from howler.js; keySound is rebuilt from
// currentNote so each space press plays the next note in the sequence
const keySound = new Howl({
  src: [`/audio/${currentNote}.mp3`],
  volume: 0.4,
});
const enterSound = new Howl({
  src: ["/audio/enter.mp3"],
  volume: 0.5,
});

This implementation turns the act of typing into a musical experience, as if the user is "playing an instrument of their feelings."

Finalization Sound

To provide a sense of completion when the user submits their input:

  • An audio sample of a singing bowl is played when the Enter key is pressed.

  • This sound is designed to harmonize with both the A minor and C major key signatures.

Technical Implementation

The audio system is built using the following technologies:

  • Web Audio API: For low-level audio control and mixing.

  • Howler.js: A JavaScript audio library for simplified audio management.

  • GSAP (GreenSock Animation Platform): For smooth volume transitions.

Here's an overview of the audio tracks setup from AudioController.js:

const audioTracks = [
  { name: "padC", src: "/audio/padC.mp3", initialVolume: 0 },
  { name: "padF", src: "/audio/padFAndArp.mp3", initialVolume: 0 },
  { name: "casette", src: "/audio/casette.mp3", initialVolume: 0 },
  { name: "arpC", src: "/audio/arpC.mp3", initialVolume: 0.5 },
];

These tracks are then managed and mixed dynamically based on user interaction and AI sentiment analysis.
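
As a minimal sketch of how such a track list could be instantiated (the looping flag is an assumption; the actual setup may differ):

import { Howl } from "howler";

// One looping Howl per stem, starting at its initial mix level
const howls = Object.fromEntries(
  audioTracks.map((t) => [
    t.name,
    new Howl({ src: [t.src], loop: true, volume: t.initialVolume }),
  ])
);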

By combining musical theory with interactive audio programming, Stoca creates a unique, responsive soundscape that enhances the user's emotional journey through their conversation with the AI.

Visit site ↗
View full code on Github ↗