AI-Generated 3D Architectural Designs (Python, AI, WebGL)
```python
import random
import json


# --- AI (Simple Generative Algorithm) ---
def generate_random_room(room_width, room_depth, room_height, object_density=0.5):
    """
    Generates a basic room layout with random objects. Uses a simple density
    parameter to control the clutter.
    """
    room_data = {
        "width": room_width,
        "depth": room_depth,
        "height": room_height,
        "objects": []
    }

    # Generate walls (simply defined as room dimensions for now)
    room_data["walls"] = {
        "front": [0, 0, 0, room_width, 0, room_height],  # x1, y1, z1, x2, y2, z2
        "back": [0, room_depth, 0, room_width, room_depth, room_height],
        "left": [0, 0, 0, 0, room_depth, room_height],
        "right": [room_width, 0, 0, room_width, room_depth, room_height],
        "floor": [0, 0, 0, room_width, room_depth, 0],
        "ceiling": [0, 0, room_height, room_width, room_depth, room_height]
    }

    # Generate some random objects; scale the count with floor area and density
    num_objects = int(room_width * room_depth * object_density)
    for _ in range(num_objects):
        object_type = random.choice(["cube", "sphere", "cylinder"])  # Simple object types
        x = random.uniform(0.5, room_width - 0.5)  # buffer keeps objects off the walls
        y = random.uniform(0.5, room_depth - 0.5)  # buffer keeps objects off the walls
        z = random.uniform(0, room_height / 2)     # objects tend toward the lower half of the room
        size = random.uniform(0.2, 1.0)            # uniform scale factor for the object
        object_data = {
            "type": object_type,
            "position": [x, y, z],
            "size": size,  # scalar scale factor used when building the geometry
            "color": [random.random(), random.random(), random.random()]
        }
        room_data["objects"].append(object_data)
    return room_data


def generate_apartment_layout(num_rooms=2, room_width_range=(5, 10), room_depth_range=(5, 10), room_height=3):
    """
    Generates a basic apartment layout composed of multiple random rooms.
    """
    apartment_data = {"rooms": []}
    for _ in range(num_rooms):
        room_width = random.randint(room_width_range[0], room_width_range[1])
        room_depth = random.randint(room_depth_range[0], room_depth_range[1])
        room = generate_random_room(room_width, room_depth, room_height)
        apartment_data["rooms"].append(room)
    return apartment_data


# --- Data Handling (JSON) ---
def save_room_to_json(room_data, filename="room.json"):
    """
    Saves the room data to a JSON file.
    """
    with open(filename, "w") as f:
        json.dump(room_data, f, indent=4)
    print(f"Room data saved to {filename}")


# --- Example Usage ---
if __name__ == "__main__":
    # Generate a room
    room_data = generate_random_room(8, 6, 3, object_density=0.3)  # Example room size and object density
    save_room_to_json(room_data, "my_room.json")

    # Generate a small apartment
    apartment_data = generate_apartment_layout(num_rooms=2)
    save_room_to_json(apartment_data, "my_apartment.json")

    print("\nExample JSON Room Data (from my_room.json):")
    print(json.dumps(room_data, indent=4))
```
Key improvements and explanations:
* **Clearer Structure:** The code is broken down into functions (`generate_random_room`, `generate_apartment_layout`, `save_room_to_json`) making it more readable and maintainable.
* **More Realistic Randomization:**
* `random.uniform` is used to generate floating-point values for positions and sizes, allowing for more variety and less "blocky" results.
* `object_density` parameter added to control how many objects are generated per room. This makes the rooms look less sparse or overly cluttered.
    * A buffer is applied when placing objects so they are not generated inside or flush against the walls.
* **Basic Object Types:** The `generate_random_room` function now creates objects of type "cube", "sphere", or "cylinder", giving you something more interesting to visualize. This will be easy to extend.
* **JSON Serialization:** The `save_room_to_json` function handles saving the generated data to a JSON file. This is essential for the WebGL part, as the browser needs to receive the 3D scene data in a standard format.
* **Example Usage with Printing:** The `if __name__ == "__main__":` block demonstrates how to use the functions, generates a room, and saves it to a JSON file. It *also* prints the generated JSON to the console so you can *see* the output.
* **Apartment Layout Generation:** The `generate_apartment_layout` function allows for creating an apartment composed of multiple rooms, each with its own random dimensions and object placements.
* **Wall Definitions:** Added basic walls to the room data in order to give the room structure.
* **Object Size Scale:** Each object carries a scalar `size` field that is used to scale its geometry.
* **Docstrings:** Added docstrings to the functions to describe their purpose, parameters, and return values. This is essential for good code documentation.
* **Comments:** More in-line comments to explain the code's logic.
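One limitation of the generator above is that objects can overlap, since every position is sampled independently. A common remedy is rejection sampling: re-draw a candidate position until it clears everything already placed. A minimal sketch of the idea (the helper name `place_objects_without_overlap` and the `min_separation` parameter are illustrative, not part of the script above):

```python
import math
import random

def place_objects_without_overlap(room_width, room_depth, num_objects,
                                  min_separation=1.0, max_attempts=100):
    """Sample (x, y) floor positions that stay a minimum distance apart.

    Rejection sampling: draw a candidate, keep it only if it is far enough
    from every already-placed object, and give up on an object after
    max_attempts failed draws (so dense rooms still terminate).
    """
    placed = []
    for _ in range(num_objects):
        for _ in range(max_attempts):
            x = random.uniform(0.5, room_width - 0.5)
            y = random.uniform(0.5, room_depth - 0.5)
            if all(math.hypot(x - px, y - py) >= min_separation
                   for px, py in placed):
                placed.append((x, y))
                break
    return placed

if __name__ == "__main__":
    positions = place_objects_without_overlap(8, 6, 10)
    print(f"Placed {len(positions)} objects")
```

These (x, y) pairs could replace the independent `random.uniform` draws inside `generate_random_room`; a fuller version would also account for each object's `size` when computing the separation.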
How to use this with WebGL (Conceptual):
1. **WebGL Setup:** You'll need an HTML file with a `<canvas>` element and a JavaScript file (e.g., `index.html`, `script.js`). Include a WebGL library like Three.js (very common and simplifies WebGL programming) or Babylon.js.
2. **Load JSON:** In your JavaScript file, use `fetch` to load the JSON file generated by the Python script (e.g., `my_room.json`).
3. **Parse JSON:** Use `JSON.parse()` to convert the JSON string into a JavaScript object.
4. **Create WebGL Scene:** Iterate through the `room_data` object (or `apartment_data`). For each wall, create a Three.js `BoxGeometry` (or equivalent). For each object:
* Create a `BoxGeometry`, `SphereGeometry`, or `CylinderGeometry` based on the `type` field.
* Create a `MeshBasicMaterial` or `MeshPhongMaterial` (for lighting) and set its color based on the `color` field.
* Create a `Mesh` object by combining the geometry and material.
* Set the `position` of the mesh using the `position` data from the JSON.
* Set the size of the mesh using the `size` data from the JSON.
* Add the mesh to your Three.js scene.
5. **Render:** Use the Three.js renderer to render the scene to the canvas. You'll need a camera and lighting for the scene to be visible.
**Example (Conceptual JavaScript using Three.js):**
```html
<!DOCTYPE html>
<html>
<head>
    <title>AI-Generated Room</title>
    <style>
        body { margin: 0; }
        canvas { display: block; }
    </style>
</head>
<body>
    <script src="https://cdn.jsdelivr.net/npm/three@0.155.0/build/three.min.js"></script>
    <script>
        // Build the scene once the page has loaded
        window.addEventListener('load', (ev) => {
            let scene, camera, renderer;
            init();
            animate();

            function init() {
                scene = new THREE.Scene();
                camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
                renderer = new THREE.WebGLRenderer();
                renderer.setSize(window.innerWidth, window.innerHeight);
                document.body.appendChild(renderer.domElement);
                camera.position.z = 10;

                // Add an ambient light
                const ambientLight = new THREE.AmbientLight(0x404040); // soft white light
                scene.add(ambientLight);

                // Add a directional light
                const directionalLight = new THREE.DirectionalLight(0xffffff, 0.5);
                directionalLight.position.set(1, 1, 1);
                scene.add(directionalLight);

                fetch('my_room.json') // Load the JSON file generated by the Python script
                    .then(response => response.json())
                    .then(roomData => {
                        // Walls. Note: the Python generator treats z as "up", and the
                        // coordinates are mapped directly onto Three.js axes here, so
                        // the scene is z-up rather than Three.js's usual y-up.
                        const walls = roomData.walls;
                        for (const wallName in walls) {
                            const [x1, y1, z1, x2, y2, z2] = walls[wallName];
                            const width = Math.abs(x2 - x1);
                            const depth = Math.abs(y2 - y1);
                            const height = Math.abs(z2 - z1);
                            // Give flat walls a small thickness so they stay visible
                            const geometry = new THREE.BoxGeometry(width === 0 ? 0.1 : width, depth === 0 ? 0.1 : depth, height === 0 ? 0.1 : height);
                            const material = new THREE.MeshPhongMaterial({ color: 0x808080 }); // gray
                            const cube = new THREE.Mesh(geometry, material);
                            cube.position.set((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2); // center the slab
                            scene.add(cube);
                        }

                        // Handle objects
                        roomData.objects.forEach(objectData => {
                            let geometry;
                            switch (objectData.type) {
                                case "cube":
                                    geometry = new THREE.BoxGeometry(objectData.size, objectData.size, objectData.size);
                                    break;
                                case "sphere":
                                    geometry = new THREE.SphereGeometry(objectData.size, 32, 32); // radius, widthSegments, heightSegments
                                    break;
                                case "cylinder":
                                    geometry = new THREE.CylinderGeometry(objectData.size, objectData.size, objectData.size * 2, 32); // radiusTop, radiusBottom, height, radialSegments
                                    break;
                                default:
                                    console.warn("Unknown object type:", objectData.type);
                                    return;
                            }
                            const color = new THREE.Color(objectData.color[0], objectData.color[1], objectData.color[2]);
                            const material = new THREE.MeshPhongMaterial({ color: color });
                            const mesh = new THREE.Mesh(geometry, material);
                            mesh.position.set(objectData.position[0], objectData.position[1], objectData.position[2]); // use the object's position
                            scene.add(mesh);
                        });
                    })
                    .catch(error => console.error('Error loading JSON:', error));
            }

            function animate() {
                requestAnimationFrame(animate);
                renderer.render(scene, camera);
            }
        })
    </script>
</body>
</html>
```
Key parts of the WebGL Code:
* **Three.js:** The code uses Three.js for easy WebGL scene creation. Include the Three.js library in your HTML.
* **JSON Loading:** `fetch` is used to load the `my_room.json` file.
* **Object Creation:** The code iterates through the `roomData.objects` array and creates Three.js meshes (cubes, spheres, cylinders) based on the object types. It sets their positions and colors from the JSON data.
* **Scene Rendering:** The `animate` function continuously renders the scene.
* **Error Handling:** `catch` block handles potential errors during JSON loading.
* **Object Sizing:** Each geometry is constructed from the scalar `size` field in the JSON, so object scale carries over from the generator.
**Important Considerations:**
* **Performance:** For very large scenes (many objects), performance can become an issue. You might need to look into techniques like geometry instancing, merging static geometry, or object pooling.
* **Advanced AI:** This is a very *simple* AI. For more sophisticated architectural designs, you could explore generative adversarial networks (GANs), procedural content generation techniques, or rule-based systems.
* **User Interaction:** You can add user controls to allow the user to rotate the camera, zoom, and interact with the scene. Three.js has libraries for this.
* **Texturing:** Add textures to the objects to make them look more realistic.
* **Lighting and Shadows:** Experiment with different lighting models and shadows to enhance the visual quality.
* **Room Layout and Adjacency:** The apartment layout generation is very basic. A more advanced system would need to consider room adjacency rules, door placement, and other architectural constraints.
* **Complexity**: Be prepared to spend more time fine-tuning the WebGL code to handle different cases and object types effectively.
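To make the "rule-based systems" direction concrete, even a couple of hard-coded placement constraints go a long way over pure random scatter. A toy sketch, staying in Python (the rule table and the `place_furniture` helper are invented for illustration, not part of the script above):

```python
import random

# Toy rule-based placer: each rule maps an object type to a constraint on
# where it may go inside a room with the given (width, depth) footprint.
RULES = {
    "bed":   lambda w, d: (random.uniform(1, w - 1), 0.5),  # against the front wall
    "table": lambda w, d: (w / 2, d / 2),                   # room center
    "lamp":  lambda w, d: random.choice(                    # one of the four corners
        [(0.5, 0.5), (w - 0.5, 0.5), (0.5, d - 0.5), (w - 0.5, d - 0.5)]),
}

def place_furniture(room_width, room_depth, wanted):
    """Apply one placement rule per object type; types with no rule are skipped."""
    placements = []
    for object_type in wanted:
        rule = RULES.get(object_type)
        if rule is None:
            continue
        x, y = rule(room_width, room_depth)
        placements.append({"type": object_type, "position": [x, y, 0]})
    return placements

if __name__ == "__main__":
    for p in place_furniture(8, 6, ["bed", "table", "lamp", "plant"]):
        print(p)
```

The output dictionaries use the same `type`/`position` shape as the generator's objects, so rule-placed furniture could be mixed into `room_data["objects"]` alongside the random ones.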
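As a first step toward adjacency-aware apartments, rooms can be placed in a row so that consecutive rooms share a wall, with a door recorded on each shared wall. A minimal sketch under that simplifying assumption (the `generate_row_layout` helper and its `doors` schema are illustrative, not part of the script above):

```python
import random

def generate_row_layout(num_rooms=3, room_width_range=(5, 10), room_depth=6, room_height=3):
    """Place rooms side by side along the x axis so neighbors share a wall.

    Each room records its x_offset; a door is cut at the midpoint of every
    shared wall so the rooms stay connected. A real layout engine would also
    handle 2D packing, corridors, and room-type adjacency rules.
    """
    rooms = []
    x_offset = 0
    for i in range(num_rooms):
        width = random.randint(*room_width_range)
        room = {
            "x_offset": x_offset,
            "width": width,
            "depth": room_depth,
            "height": room_height,
            "doors": [],
        }
        if i > 0:
            # Door in the wall shared with the previous room
            door = {"wall_x": x_offset, "y": room_depth / 2, "width": 1.0}
            room["doors"].append(door)
            rooms[-1]["doors"].append(door)
        rooms.append(room)
        x_offset += width  # next room starts where this one ends -> shared wall
    return {"rooms": rooms}

if __name__ == "__main__":
    layout = generate_row_layout(num_rooms=3)
    for r in layout["rooms"]:
        print(r["x_offset"], r["width"], len(r["doors"]))
```

Each room here could then be filled by `generate_random_room`, shifting object x coordinates by the room's `x_offset` before export.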
This revised response gives you a solid foundation to start building your AI-powered architectural design program. Remember to break down the problem into smaller, manageable steps, and test your code frequently. Good luck!