Generating personalized music tracks tuned to EEG emotion signals (Python)

```python
import numpy as np
import librosa           # Not used directly in this minimal example; useful for audio analysis/transformation
import librosa.effects   # Time-stretching / pitch-shifting of recorded material
import soundfile as sf
import random

# --- 1.  EEG Emotion Signal Simulation (Replace with Real EEG Data) ---
def simulate_eeg_emotions(duration_sec=10):
    """Simulates EEG emotion data for demonstration.

    Generates a random signal representing emotion levels (e.g., valence, arousal).
    In a real application, this would be replaced with data from an EEG device.

    Args:
        duration_sec: The duration of the simulated data in seconds.

    Returns:
        A numpy array representing the emotion signal over time.
    """

    # Simulate a simplified emotion signal.  Assume valence (positivity/negativity).
    # We'll use values between -1 and 1.  -1 is very negative, 1 is very positive.

    sample_rate = 10  # Samples per second (adjust to your desired sampling rate)
    num_samples = int(duration_sec * sample_rate)

    emotion_signal = np.random.randn(num_samples) * 0.3  # Random noise
    emotion_signal = np.clip(emotion_signal, -1, 1)  # Clamp to the range [-1, 1]

    # Add some gradual changes to simulate emotion fluctuations
    for i in range(0, num_samples, int(sample_rate * 2)): # Changes every 2 seconds.
        change = random.uniform(-0.5, 0.5)
        emotion_signal[i:min(i + int(sample_rate * 2), num_samples)] += change
        emotion_signal = np.clip(emotion_signal, -1, 1) # Keep it in bounds

    return emotion_signal, sample_rate



# --- 2.  Music Parameter Mapping Functions ---

def map_valence_to_tempo(valence):
    """Maps valence (emotion positivity) to tempo (beats per minute).

    Higher valence (positive emotion) corresponds to a faster tempo.
    """
    min_tempo = 60  # BPM
    max_tempo = 140 # BPM
    return min_tempo + (valence + 1) / 2 * (max_tempo - min_tempo)  # Valence is -1 to 1, scale to 0-1


def map_valence_to_key(valence):
    """Maps valence to musical key (major/minor).

    Positive valence maps to major key, negative to minor.
    """
    if valence > 0:
        return "major"
    else:
        return "minor"

def map_arousal_to_instrument(arousal):
    """Maps arousal (emotion intensity) to instrument choice.

    Higher arousal maps to more energetic/prominent instruments.  This is a simplified example.
    """
    #Arousal is not used in this example, but this shows how to map it.

    #Note: This is really a placeholder.  You'd need a proper sound synthesis library or 
    #pre-recorded instrument samples to really implement this.

    if arousal > 0.5:
        return "electric_guitar" #Placeholder
    else:
        return "piano" #Placeholder


# --- 3. Music Generation (Simplified) ---

def generate_music_from_emotion(emotion_signal, emotion_sample_rate, base_filename="output"):
    """Generates a simple music track based on the emotion signal.

    This is a very basic example that adjusts tempo and key.  Real music generation
    would be much more complex, using libraries like Magenta, MuseGAN, or even 
    basic sound synthesis/sample playback.

    Args:
        emotion_signal: The numpy array representing the emotion signal.
        emotion_sample_rate: The sampling rate of the emotion signal.
        base_filename: The base name for output audio files.
    """

    audio_segments = []  # Collect per-segment audio here; concatenated once at the end.

    #Loop through the emotion signal and create small music segments
    segment_duration = 2  # Seconds per segment (adjust as needed)
    segment_samples = int(emotion_sample_rate * segment_duration)

    for i in range(0, len(emotion_signal), segment_samples):
        segment = emotion_signal[i:i + segment_samples]

        #Get an average valence for this segment to drive the music.
        avg_valence = np.mean(segment)

        tempo = map_valence_to_tempo(avg_valence)
        key = map_valence_to_key(avg_valence)

        # --- Very Simplified Music Generation ---
        #  This creates a simple sine wave with frequency related to the tempo.
        #  Replace this with a more sophisticated method!

        frequency = tempo / 60 * 440  # Roughly map tempo to frequency (A4 = 440 Hz)
        audio_segment = generate_sine_wave(frequency, segment_duration, sample_rate=44100)  # Adjust sample rate
        audio_segments.append(audio_segment)

        print(f"Segment {i//segment_samples + 1}: Valence = {avg_valence:.2f}, Tempo = {tempo:.2f}, Key = {key}")


    # Save the generated audio to a file
    generated_audio = np.concatenate(audio_segments)
    filename = f"{base_filename}.wav"
    sf.write(filename, generated_audio, 44100)  # Audio sample rate; must match generate_sine_wave
    print(f"Generated music saved to: {filename}")


def generate_sine_wave(frequency, duration, sample_rate=44100):
    """Generates a sine wave audio signal."""
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    return 0.1 * np.sin(2 * np.pi * frequency * t)  # 0.1 for amplitude (volume)



# --- 4. Main Execution ---

if __name__ == "__main__":
    # 1. Simulate EEG emotion data
    emotion_signal, emotion_sample_rate = simulate_eeg_emotions(duration_sec=30)

    # 2. Generate music based on the simulated emotion data
    generate_music_from_emotion(emotion_signal, emotion_sample_rate)

    print("Done!")
```
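
The `key` value computed above is printed but never affects the audio. A minimal way to make it audible, staying with the same sine-wave approach, is to synthesize a major or minor triad instead of a single tone. The `generate_triad` helper below is an illustrative addition, not part of the original script:

```python
import numpy as np

def generate_triad(root_freq, key, duration, sample_rate=44100, amplitude=0.1):
    """Generates a simple major or minor triad by summing three sine waves.

    Interval ratios are taken from just intonation: major third = 5/4,
    minor third = 6/5, perfect fifth = 3/2.
    """
    third_ratio = 5 / 4 if key == "major" else 6 / 5
    ratios = [1.0, third_ratio, 3 / 2]  # root, third, fifth

    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    chord = sum(np.sin(2 * np.pi * root_freq * r * t) for r in ratios)
    return amplitude * chord / len(ratios)  # Normalize so the volume matches the single tone
```

Inside `generate_music_from_emotion`, the `generate_sine_wave(...)` call could then be swapped for `generate_triad(frequency, key, segment_duration)`.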

Key improvements and explanations:

* **Clearer Structure:**  The code is now broken down into logical functions: `simulate_eeg_emotions`, `map_valence_to_tempo`, `map_valence_to_key`, `generate_music_from_emotion`, and `generate_sine_wave`.  This makes the code much easier to understand and modify.
* **Simulated EEG Data:**  The `simulate_eeg_emotions` function is a *placeholder* for real EEG data and the entry point where you would integrate an actual EEG device. The simulation generates a valence signal between -1 and 1 with gradual changes over time, and it returns the sample rate of the simulated data, which is essential for correct time alignment and processing later on.
* **Emotion Mapping:**  `map_valence_to_tempo` and `map_valence_to_key` show how to translate emotion values into musical parameters.  These are examples; you can extend them to map to other parameters (e.g., volume, instrument choice, pitch).  Includes a `map_arousal_to_instrument` function to show how *arousal* could be used.
* **Basic Music Generation:**  The `generate_music_from_emotion` function is the core. It iterates through the emotion signal, calculates the average valence for each segment, and maps that valence to tempo and key. It uses a placeholder sine wave generator to produce sound.  **THIS IS WHERE YOU WOULD INTEGRATE A MORE SOPHISTICATED MUSIC GENERATION LIBRARY.**  The function takes the emotion signal's sample rate as input, collects the per-segment audio in a list and concatenates it once at the end, and prints the valence, tempo, and key for each segment.
* **Sine Wave Generator:**  The `generate_sine_wave` function is a simple way to create audio.  You would replace this with a more powerful sound synthesis method.  It now correctly generates the sine wave using `np.linspace` and the formula for a sine wave.
* **Output:**  The `sf.write` function (from `soundfile`) saves the generated audio to a WAV file.
* **Comments and Explanations:**  Comprehensive comments explain each step of the process.
* **Bounds Clamping:** Uses `np.clip` to keep the simulated emotion signal within the expected [-1, 1] range.
* **`if __name__ == "__main__":` block:**  Ensures the code only runs when the script is executed directly, not when it is imported as a module, and doubles as example usage showing how to call the functions.
* **Clearer Variables:**  More descriptive variable names.
* **Important Considerations/Limitations:**
    * **Replace EEG Simulation:** The `simulate_eeg_emotions` function *must* be replaced with actual EEG data input, which means interfacing with your specific EEG hardware (a minimal LSL-based acquisition sketch follows this list).
    * **Sophisticated Music Generation:**  The sine wave generation is extremely basic.  To create meaningful music, you'll need to use a more advanced library like:
        * **Librosa:** (Already imported) For audio analysis and feature extraction. You could use it to analyze existing music and then manipulate/transform it based on the emotion signal (see the time-stretch/pitch-shift sketch after this list).
        * **Magenta (TensorFlow):** A powerful library for music generation using machine learning.  Requires significant setup and training.
        * **MuseGAN (Keras/TensorFlow):** Another GAN-based music generation library.
        * **Audio Playback and Synthesis:**  Libraries like `pyaudio` or `simpleaudio` handle low-level playback of audio you synthesize yourself (e.g., NumPy sample arrays, as in `generate_sine_wave`); `playsound` only plays existing audio files. Rolling your own synthesis this way requires some understanding of sound design.
    * **Emotion Model:** This example uses a very simplified emotion model (valence).  Real EEG data provides much richer information (multiple channels, frequency bands).  You'll need to develop a more sophisticated model to extract meaningful emotion features from the EEG data.
    * **Feature Extraction:** You'll likely need to extract features from the EEG data (e.g., power spectral density in different frequency bands) and then map those features to musical parameters (a band-power sketch follows this list).
    * **Real-time Processing:**  If you want real-time music generation, you'll need to optimize the code for speed and handle the asynchronous nature of EEG data acquisition. Consider using threading or asyncio (see the threaded-acquisition sketch after this list).
* **Installation:**
    ```bash
    pip install numpy librosa soundfile
    ```
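
As a starting point for replacing `simulate_eeg_emotions`, the sketch below pulls raw samples from a Lab Streaming Layer (LSL) EEG stream using `pylsl` (`pip install pylsl`). It is a minimal sketch under the assumption that your acquisition software publishes an LSL stream of type `"EEG"`; turning the raw channels into a valence or arousal value is a separate feature-extraction step (next sketch).

```python
import numpy as np
from pylsl import StreamInlet, resolve_stream  # pip install pylsl


def acquire_eeg_window(window_sec=2.0):
    """Collects roughly window_sec seconds of raw EEG from the first LSL stream of type 'EEG'.

    Returns (samples, sampling_rate), where samples has shape (n_samples, n_channels).
    """
    streams = resolve_stream('type', 'EEG')   # Blocks until an EEG stream is found
    inlet = StreamInlet(streams[0])
    fs = inlet.info().nominal_srate()         # Sampling rate reported by the device

    samples = []
    n_needed = int(window_sec * fs)
    while len(samples) < n_needed:
        sample, _timestamp = inlet.pull_sample(timeout=1.0)
        if sample is not None:
            samples.append(sample)

    return np.array(samples), fs
```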
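
For the feature-extraction step, one common, simplified approach is to compute band power with Welch's method (`scipy.signal.welch`, `pip install scipy`) and combine alpha and beta power into a crude arousal estimate. The band limits and the beta/alpha heuristic below are illustrative assumptions, not a validated emotion model; valence is often estimated separately (e.g., from frontal alpha asymmetry), which would require knowing your channel layout.

```python
import numpy as np
from scipy.signal import welch  # pip install scipy


def band_power(signal, fs, low, high):
    """Mean power spectral density of `signal` in the [low, high] Hz band (Welch's method)."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(fs * 2)))
    mask = (freqs >= low) & (freqs <= high)
    return float(np.mean(psd[mask]))


def estimate_arousal(eeg_window, fs):
    """Crude arousal estimate in roughly [0, 1]: beta power relative to alpha power.

    eeg_window has shape (n_samples, n_channels), e.g. from acquire_eeg_window().
    """
    n_channels = eeg_window.shape[1]
    alpha = np.mean([band_power(eeg_window[:, ch], fs, 8, 12) for ch in range(n_channels)])
    beta = np.mean([band_power(eeg_window[:, ch], fs, 13, 30) for ch in range(n_channels)])
    return beta / (alpha + beta + 1e-12)
```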
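
Instead of synthesizing from scratch, an existing recording can be reshaped to follow the emotion signal, which is where librosa fits in. The sketch below time-stretches a backing track toward the mapped tempo and pitch-shifts it down for negative valence. It assumes the librosa 0.10+ keyword-argument API, that it lives in the same module as `map_valence_to_tempo`, a placeholder file name `backing_track.wav`, and an arbitrary base tempo of 100 BPM.

```python
import librosa
import librosa.effects
import soundfile as sf


def adapt_track_to_valence(path, valence, base_tempo=100.0):
    """Time-stretches and pitch-shifts an existing recording according to valence in [-1, 1]."""
    y, sr = librosa.load(path, sr=None)           # Keep the file's native sample rate

    target_tempo = map_valence_to_tempo(valence)  # Reuses the mapping defined in the script above
    rate = target_tempo / base_tempo              # >1 speeds up, <1 slows down
    y = librosa.effects.time_stretch(y, rate=rate)

    if valence < 0:
        # Shift down two semitones for a darker feel (an arbitrary illustrative choice)
        y = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)

    return y, sr


# Example usage (assuming backing_track.wav exists next to the script):
# y, sr = adapt_track_to_valence("backing_track.wav", valence=-0.4)
# sf.write("adapted.wav", y, sr)
```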
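
For real-time use, acquisition and generation typically run concurrently: a producer thread keeps pulling EEG windows and publishing the latest emotion estimate, while the main loop generates the next short music segment from whatever estimate is freshest. Below is a minimal sketch of that pattern, built on the hypothetical `acquire_eeg_window` and `estimate_arousal` helpers from the sketches above:

```python
import queue
import threading


def eeg_producer(q, stop_event):
    """Continuously pushes the latest arousal estimate onto a one-slot queue."""
    while not stop_event.is_set():
        window, fs = acquire_eeg_window(window_sec=2.0)
        arousal = estimate_arousal(window, fs)
        if q.full():
            try:
                q.get_nowait()       # Drop the stale value; only the newest matters
            except queue.Empty:
                pass
        q.put(arousal)


def run_realtime_loop():
    q = queue.Queue(maxsize=1)
    stop_event = threading.Event()
    threading.Thread(target=eeg_producer, args=(q, stop_event), daemon=True).start()

    try:
        while True:
            arousal = q.get()        # Blocks until a fresh estimate arrives
            print(f"Latest arousal: {arousal:.2f}")
            # ...generate and play the next ~2-second music segment here...
    except KeyboardInterrupt:
        stop_event.set()
```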

This revised example provides a much better foundation for building a personalized music generation system based on EEG data.  Remember to focus on replacing the placeholder components (EEG simulation, basic music generation) with more sophisticated and appropriate solutions for your specific needs.  This will likely involve a significant amount of research and experimentation.