Stoca
An immersive web experience that combines AI-driven philosophical conversation with real-time adaptive visuals and music, creating a personalized journey of self-reflection.




Project Overview
Stoca is a web app that pairs you with an AI stoic philosopher. It analyzes your conversation in real-time, morphing both visuals and audio to match your mood.
Key Features:
AI-powered philosophical dialogue using GPT-3.5 Turbo
Sentiment-reactive color palette generation
Dynamic background gradient using custom shaders
Adaptive musical composition that shifts with conversation tone
Interactive audio elements responding to user input
Tech Stack:
React
OpenAI API (GPT-3.5 Turbo)
React Three Fiber (three.js)
Blender
GSAP
Custom WebGL Shaders
Web Audio API
→ AI Interaction and Sentiment Analysis
ChatGPT Integration
The heart of Stoca's interactivity lies in its integration with OpenAI's API. We use a carefully crafted prompt to set the context for our AI stoic philosopher:
const [finalPrompt, setFinalPrompt] = useState(
  "The following is a conversation with an AI Stoic Philosopher. The philosopher is helpful, creative, clever, friendly, gives brief stoic advice, and asks deep questions.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How are you feeling? "
);
"The following is a conversation with an AI Stoic Philosopher. The philosopher is helpful, creative, clever, friendly, gives brief stoic advice, and asks deep questions.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How are you feeling? "
This prompt is then used in our API call to generate responses:
const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: finalPrompt },
    { role: "user", content: inputt },
  ],
  temperature: 0.9,
  max_tokens: 300,
  top_p: 1,
  frequency_penalty: 0.76,
  presence_penalty: 0.75,
});
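The philosopher's reply is then read off the completion object. As a minimal sketch, assuming the exchange is appended back onto the running finalPrompt to preserve context (the exact bookkeeping in Stoca may differ):

const reply = completion.choices[0].message.content;

// Fold the new exchange into the running prompt so the philosopher keeps context.
setFinalPrompt((prev) => `${prev}\nHuman: ${inputt}\nAI: ${reply}`);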
Dynamic Color Palette Generation
We use the user's input to generate a mood-appropriate color palette:
const colorCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content:
        "You are a color palette generator. Provide five different hex value colors that create a palette matching the input's sentiment. Then describe that sentiment as either optimistic or pessimistic.",
    },
    { role: "user", content: inputt },
  ],
});
"You are a color palette generator. Provide five different hex value colors that create a palette matching the input's sentiment. Then describe that sentiment as either optimistic or pessimistic."
We then take the five hex values returned by the model (e.g. [#000000, #000000, #000000, #000000, #000000]) and animate the background gradient, smoothly transitioning between the colors to reflect the user's sentiment. Based on whether the sentiment reads as optimistic or pessimistic, we also adjust the background music to align with the user's emotional state.
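As a minimal sketch of that step, assuming the palette and sentiment are parsed out of the model's plain-text reply (the regex, gradientMaterial, the uColor uniform names, and setWord are illustrative, not Stoca's exact code):

import gsap from "gsap";
import * as THREE from "three";

const text = colorCompletion.choices[0].message.content;
const palette = (text.match(/#[0-9a-fA-F]{6}/g) || []).slice(0, 5); // the five hex colors
const sentiment = /optimistic/i.test(text) ? "optimistic" : "pessimistic";

// Tween each shader color uniform toward its new palette entry.
palette.forEach((hex, i) => {
  const target = new THREE.Color(hex);
  gsap.to(gradientMaterial.uniforms[`uColor${i}`].value, {
    r: target.r,
    g: target.g,
    b: target.b,
    duration: 3,
  });
});

setWord(sentiment); // drives the audio crossfade described later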
→ Dynamic Visual Elements with React Three Fiber
The visual experience in Stoca is powered by React Three Fiber, allowing for seamless integration of 3D graphics with React. This section showcases the advanced techniques used to create a responsive and immersive environment.
Gradient Background
The dynamic gradient background is a key visual element that responds to the AI-generated color palette. It's implemented with a custom GLSL shader; a simplified sketch follows the list below.
This shader creates a fluid, animated gradient by:
Using a 3D Simplex noise function to generate organic patterns
Mixing between the five colors provided by the AI based on the noise value
Animating the noise over time to create a flowing effect
The colors are passed to the shader as uniforms and updated in real-time as the AI generates new color palettes based on the conversation's sentiment.
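The full shader isn't reproduced here; the fragment below is a simplified sketch of the approach described above (uniform names are illustrative, and the simplex noise implementation is elided):

const fragmentShader = /* glsl */ `
  uniform float uTime;
  uniform vec3 uColor0;
  uniform vec3 uColor1;
  uniform vec3 uColor2;
  uniform vec3 uColor3;
  uniform vec3 uColor4;
  varying vec2 vUv;

  // float snoise(vec3 v) { ... }  // standard 3D simplex noise, omitted for brevity

  void main() {
    // Animate the noise field slowly over time for a flowing effect.
    float n = snoise(vec3(vUv * 2.0, uTime * 0.1)) * 0.5 + 0.5;

    // Blend across the five AI-generated palette colors based on the noise value.
    vec3 color = mix(uColor0, uColor1, smoothstep(0.0, 0.25, n));
    color = mix(color, uColor2, smoothstep(0.25, 0.5, n));
    color = mix(color, uColor3, smoothstep(0.5, 0.75, n));
    color = mix(color, uColor4, smoothstep(0.75, 1.0, n));

    gl_FragColor = vec4(color, 1.0);
  }
`;

uTime is advanced every frame (e.g. in useFrame), while the uColor uniforms are the values the GSAP tween shown earlier writes into.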

Camera Transitions
We implement smooth 3D camera transitions that respond to user interactions. These transitions are managed in the Env.js file; a simplified sketch follows the list below.
The camera logic:
Smoothly interpolates (lerps) the camera to new positions based on user interactions
Creates a dynamic, cinematic feel as users progress through the conversation
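A simplified sketch of that easing, with illustrative target positions and a hypothetical stage prop standing in for Stoca's actual interaction state:

import { useFrame } from "@react-three/fiber";
import * as THREE from "three";

// Illustrative camera targets for different moments in the conversation.
const targets = {
  intro: new THREE.Vector3(0, 2, 8),
  conversation: new THREE.Vector3(0, 1, 4),
};

function CameraRig({ stage }) {
  useFrame(({ camera }) => {
    // Ease the camera a small step toward the current target every frame.
    camera.position.lerp(targets[stage], 0.02);
    camera.lookAt(0, 1, 0);
  });
  return null;
}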
Stoic Philosopher Statue
A central visual element in Stoca is the stoic philosopher statue, which I created using Blender. This 3D model serves as a focal point in the scene, embodying the AI's presence in a visually striking way.
The process of creating the statue involved:
Conceptualizing a pose and style that represents stoic philosophy
Modeling the basic form in Blender
Sculpting details to add character and depth
UV unwrapping and texturing for realistic materials
Exporting as GLB
The statue is then imported into the React Three Fiber scene:
import { useGLTF } from '@react-three/drei'

function Statue() {
  const { nodes, materials } = useGLTF('/path/to/statue.glb')
  return (
    <group>
      <mesh
        geometry={nodes.Statue.geometry}
        material={materials.StatueMaterial}
      />
    </group>
  )
}

→ Adaptive Audio Experience
The audio in Stoca is meticulously designed to be both interactive and mood-responsive, creating a rich sonic landscape that enhances the user's emotional journey.
Musical Composition and Key Modulation
The audio experience is built around two complementary key signatures:
A minor (pessimistic mood)
C major (optimistic mood, the relative major of A minor)
This musical design allows for smooth transitions between moods while maintaining harmonic consistency. Here's how it works:
Composition: I composed arpeggios and synth pad loops in both A minor and C major.
Sentiment-based Playback: Depending on the AI's sentiment analysis of the user's input:
Optimistic sentiment: C major stems play prominently
Pessimistic sentiment: A minor stems are introduced while C major fades
Here's a snippet from AudioController.js that manages this transition:
useEffect(() => {
  const optimisticSettings = { padC: 0.5, padF: 0, arpC: 0.5 };
  const pessimisticSettings = { padC: 0, padF: 0.5, arpC: 0 };
  const settings = word === "optimistic" ? optimisticSettings : pessimisticSettings;

  Object.entries(settings).forEach(([trackName, targetVolume]) => {
    gsap.to(audioStates[trackName], {
      volume: targetVolume,
      duration: 5,
      onUpdate: () => updateVolume(trackName, audioStates[trackName].volume),
    });
  });
}, [word]);
This code smoothly transitions between optimistic and pessimistic audio settings over 5 seconds, creating a gradual shift in mood.
Interactive Typing Sounds
To make the typing experience more engaging and musically integrated, I implemented a unique arpeggio playback system:
Arpeggio Design: I composed a 21-note arpeggio sequence that works harmonically in both A minor and C major.
Note Isolation: Each note of the arpeggio was exported as an individual audio file.
Playback Mechanism: As the user types, the system plays through this arpeggio sequence:
Each new word (detected by a space key press) triggers the next note in the sequence.
The sequence loops after all 21 notes have been played.
Here's the relevant code from InputBar.js:
const handleKeyPress = (event) => {
  setFirstInteraction((prev) => prev + 1);
  if (!isMobile) {
    if (event.key === " ") {
      // A new word: play the current arpeggio note and advance the sequence.
      keySound.play();
      setCurrentNote((prev) => (prev + 1) % 22);
    }
    if (event.key === "Enter") {
      // Submitting the input: play the finalization sound.
      enterSound.play();
      setEntered(true);
    }
  }
};

// Audio setup
const keySound = new Howl({
  src: [`/audio/${currentNote}.mp3`],
  volume: 0.4,
});

const enterSound = new Howl({
  src: ["/audio/enter.mp3"],
  volume: 0.5,
});
This implementation turns the act of typing into a musical experience, as if the user is "playing an instrument of their feelings."
Finalization Sound
To provide a sense of completion when the user submits their input:
An audio sample of a singing bowl is played when the Enter key is pressed.
This sound is designed to harmonize with both the A minor and C major key signatures.
Technical Implementation
The audio system is built using the following technologies:
Web Audio API: For low-level audio control and mixing.
Howler.js: A JavaScript audio library for simplified audio management.
GSAP (GreenSock Animation Platform): For smooth volume transitions.
Here's an overview of the audio tracks setup from AudioController.js:
const audioTracks = [
  { name: "padC", src: "/audio/padC.mp3", initialVolume: 0 },
  { name: "padF", src: "/audio/padFAndArp.mp3", initialVolume: 0 },
  { name: "casette", src: "/audio/casette.mp3", initialVolume: 0 },
  { name: "arpC", src: "/audio/arpC.mp3", initialVolume: 0.5 },
];
These tracks are then managed and mixed dynamically based on user interaction and AI sentiment analysis.
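The snippets above refer to audioStates and updateVolume; here is a minimal sketch of how those could be wired up with Howler.js (the names follow the snippets above, but the exact implementation in Stoca may differ):

import { Howl } from "howler";

const howls = {};
const audioStates = {};

// One looping Howl per track, created from the audioTracks config above.
audioTracks.forEach(({ name, src, initialVolume }) => {
  howls[name] = new Howl({ src: [src], loop: true, volume: initialVolume });
  audioStates[name] = { volume: initialVolume }; // tween target used by GSAP
});

// Called from the GSAP onUpdate callback to push the tweened value to Howler.
const updateVolume = (trackName, volume) => {
  howls[trackName].volume(volume);
};

// Start every loop together (after a user gesture) so the stems stay in sync.
const startAllTracks = () => {
  Object.values(howls).forEach((howl) => howl.play());
};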
By combining musical theory with interactive audio programming, Stoca creates a unique, responsive soundscape that enhances the user's emotional journey through their conversation with the AI.