Side-by-side comparison of multiple LLM outputs (JavaScript)

Okay, here's a JavaScript program demonstrating a side-by-side comparison of outputs from hypothetical LLMs. This is a simulation, since we can't query real, live LLMs from within this confined code environment; instead, the program generates pre-defined "outputs" to compare.

```javascript
// Simulate different LLM outputs for the same prompt.
function generateLLMOutput(llmName, prompt) {
  switch (llmName) {
    case "LLM-A":
      return `LLM-A: The prompt was: "${prompt}".  This LLM focuses on providing concise, factual responses.  The answer is 42.`;
    case "LLM-B":
      return `LLM-B: In response to your stimulating prompt: "${prompt}", I offer a more creative and verbose interpretation. It's like... the answer is, like, maybe, 42, you know? But in a deeper, more existential way.`;
    case "LLM-C":
      return `LLM-C: The given prompt was "${prompt}". This is an AI that provides structured responses. JSON: { "answer": 42, "confidence": 0.95 }`;
    default:
      return "Error: Unknown LLM name.";
  }
}

function compareLLMOutputs(prompt) {
  const llmAOutput = generateLLMOutput("LLM-A", prompt);
  const llmBOutput = generateLLMOutput("LLM-B", prompt);
  const llmCOutput = generateLLMOutput("LLM-C", prompt);

  // Display the outputs side-by-side (simulated)
  console.log("----------------------------------");
  console.log("Prompt:", prompt);
  console.log("----------------------------------");
  console.log("LLM-A Output:\n", llmAOutput);
  console.log("----------------------------------");
  console.log("LLM-B Output:\n", llmBOutput);
  console.log("----------------------------------");
  console.log("LLM-C Output:\n", llmCOutput);
  console.log("----------------------------------");

  //  A more structured representation might involve building HTML/DOM elements to actually display them side-by-side in a web browser
  //  (or similar for a GUI application). This is a purely console-based example.
}

// Example Usage:
const myPrompt = "What is the answer to life, the universe, and everything?";
compareLLMOutputs(myPrompt);

const anotherPrompt = "Translate 'Hello, world!' to French.";
compareLLMOutputs(anotherPrompt); // Showing another example usage
```
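Since the simulated LLM-C embeds a JSON payload in its reply, a comparison tool may also want to extract and parse that payload so structured fields can be inspected programmatically. Here's a small helper sketch (the function name and the `"JSON:"` marker convention are assumptions based on the simulated output above, not part of any real LLM API):

```javascript
// Extract and parse the JSON payload from an "LLM-C"-style response.
// Assumes the JSON object follows a literal "JSON:" marker, as in the
// simulated output above; returns null if no parseable payload is found.
function parseStructuredOutput(llmOutput) {
  const marker = "JSON:";
  const at = llmOutput.indexOf(marker);
  if (at === -1) return null;
  try {
    return JSON.parse(llmOutput.slice(at + marker.length).trim());
  } catch {
    return null; // Marker present but the payload is not valid JSON.
  }
}

// Example usage with a simulated LLM-C response:
const parsed = parseStructuredOutput(
  'LLM-C: structured response. JSON: { "answer": 42, "confidence": 0.95 }'
);
console.log(parsed); // → { answer: 42, confidence: 0.95 }
```

Returning `null` instead of throwing keeps the comparison loop running even when one model's output is malformed, which is common with real LLMs.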

Key improvements and explanations:

* **Simulation:**  This code *simulates* LLM output because we can't directly access real LLMs in this environment.  The `generateLLMOutput` function acts as a stand-in.
* **Clearer `generateLLMOutput`:** The `generateLLMOutput` function now takes the LLM name and the prompt as input.  This makes it more realistic (passing the prompt along). It also includes the prompt in the output of each "LLM," showing what they responded to.
* **Side-by-Side Display (Console):** The `compareLLMOutputs` function now prints the outputs with clear separators.  While it's in the console, the formatting simulates the side-by-side comparison. *Important:* Real side-by-side display would involve building HTML (or equivalent for other UI frameworks), which is beyond the scope of a simple console example.
* **Different LLM "Personalities":** The simulated LLMs have different response styles:
    * **LLM-A:** Concise and factual.
    * **LLM-B:** Creative and verbose.
    * **LLM-C:** Structured (JSON). This highlights the potential for LLMs to return data in different formats.
* **Error Handling:** Includes a basic `default` case in `generateLLMOutput` to handle unknown LLM names.
* **Example Usage with Multiple Prompts:** The example is extended to demonstrate the comparison with a second prompt, showing how the program can be reused.
* **Comments:** Added comments to explain the purpose of each section and function.
* **Explanation of limitations:** Explicitly states that a real GUI side-by-side display is outside the scope of the example, and that the console output is a simulation.
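As noted above, a true side-by-side display would involve building HTML. A minimal sketch of that idea (function name and styling are hypothetical) builds the markup as a string, which works both in Node and in a browser, where you would assign it to a container's `innerHTML`:

```javascript
// Sketch: render LLM outputs as side-by-side flex columns in an HTML string.
function renderSideBySide(prompt, outputs) {
  // Escape user/LLM text so it can't inject markup into the page.
  const escapeHTML = (s) =>
    s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  const columns = Object.entries(outputs)
    .map(
      ([name, text]) =>
        `<div style="flex:1; border:1px solid #ccc; padding:8px;">` +
        `<h3>${escapeHTML(name)}</h3><p>${escapeHTML(text)}</p></div>`
    )
    .join("");
  return (
    `<h2>Prompt: ${escapeHTML(prompt)}</h2>` +
    `<div style="display:flex; gap:8px;">${columns}</div>`
  );
}

// Example usage (in a browser: document.body.innerHTML = html;)
const html = renderSideBySide("What is 6 x 7?", {
  "LLM-A": "42.",
  "LLM-B": "Like, maybe 42, you know?",
});
console.log(html);
```

Each output becomes one equal-width flex column, so adding a fourth LLM requires no layout changes.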

How to Run:

1.  **Save:** Save the code as a `.js` file (e.g., `llm_comparison.js`).
2.  **Run:** Open your terminal or command prompt, navigate to the directory where you saved the file, and run the code using Node.js:

    ```bash
    node llm_comparison.js
    ```

The output will be printed to your console, showing the simulated side-by-side comparison. Remember that this *simulates* LLM interaction; to compare real models you would call an actual LLM API (such as OpenAI's or Google's Vertex AI) and modify `generateLLMOutput` to return the real responses. The core logic of comparing the outputs would remain the same.
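Because real API calls are asynchronous, that modification mostly means making the driver `async`. A sketch of what that could look like, with an injected `callLLM` function so the same driver works with any client (the stub below stands in for a real API call; all names here are hypothetical):

```javascript
// Sketch: asynchronous comparison driver. `callLLM(name, prompt)` is
// injected, so it can be a real API client or, as here, a stub.
async function compareLLMOutputsAsync(prompt, llmNames, callLLM) {
  // Query all models concurrently rather than one after another.
  const outputs = await Promise.all(
    llmNames.map((name) => callLLM(name, prompt))
  );
  return llmNames.map((name, i) => ({ llm: name, output: outputs[i] }));
}

// Stub standing in for a real API client (e.g., an SDK call or fetch()).
async function stubCallLLM(llmName, prompt) {
  return `${llmName} says: response to "${prompt}"`;
}

// Example usage:
compareLLMOutputsAsync("Ping?", ["LLM-A", "LLM-B"], stubCallLLM).then(
  (results) =>
    results.forEach((r) => console.log(r.llm, "->", r.output))
);
```

`Promise.all` keeps total latency close to the slowest single model instead of the sum of all of them, which matters once real network round-trips are involved.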