Real-time drone navigation using vision-based obstacle avoidance and path planning (Python)

```python
import cv2
import numpy as np
import time

# --- Parameters ---
FRAME_WIDTH = 640
FRAME_HEIGHT = 480
SAFETY_DISTANCE = 50  # Distance (in pixels) to maintain from obstacles (not used in this simplified demo)
SPEED = 5  # Forward speed of the drone (simulated)
TURN_SPEED = 2  # Turning speed (simulated)
SHOW_DEBUG = True # Set to True to display image processing windows

# --- Simulated Drone Class ---
class Drone:
    def __init__(self, x, y, angle):
        self.x = x
        self.y = y
        self.angle = angle  # Angle in degrees; 0 points right. Image y grows downward, so angles increase clockwise on screen (90 points down).
        self.speed = SPEED
        self.turn_speed = TURN_SPEED

    def move_forward(self):
        # Simple kinematic model.  This is VERY simplified!
        self.x += self.speed * np.cos(np.radians(self.angle))
        self.y += self.speed * np.sin(np.radians(self.angle))

        # Keep within bounds (simple wrapping)
        self.x = self.x % FRAME_WIDTH
        self.y = self.y % FRAME_HEIGHT

    def turn_left(self):
        # In image coordinates (y grows downward), a left turn is
        # counterclockwise on screen, i.e. a decreasing angle.
        self.angle = (self.angle - self.turn_speed) % 360

    def turn_right(self):
        # A right turn is clockwise on screen, i.e. an increasing angle.
        self.angle = (self.angle + self.turn_speed) % 360

    def get_position(self):
        return int(self.x), int(self.y)

    def get_angle(self):
        return self.angle

# --- Obstacle Detection Function ---
def detect_obstacles(frame):
    """
    Detects obstacles in the given frame using color-based segmentation.

    Args:
        frame: The input frame (NumPy array).

    Returns:
        A mask (NumPy array) where white pixels represent obstacles.  Returns None if no obstacles are detected.
    """

    # Convert frame to HSV color space
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Define color range for obstacles (adjust these values based on the obstacles you want to detect)
    # Example: Detecting red objects
    lower_red = np.array([0, 100, 100])  # Lower bound for red
    upper_red = np.array([10, 255, 255])  # Upper bound for red

    lower_red2 = np.array([170, 100, 100]) #Second range to account for red wrapping
    upper_red2 = np.array([180, 255, 255])

    # Create masks using the color ranges
    mask1 = cv2.inRange(hsv, lower_red, upper_red)
    mask2 = cv2.inRange(hsv, lower_red2, upper_red2)

    # Combine the masks
    mask = cv2.bitwise_or(mask1, mask2)
    
    # Apply morphological operations to reduce noise and fill gaps
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Check if any obstacles are detected
    if np.sum(mask) > 0:
        if SHOW_DEBUG:
            cv2.imshow("Obstacle Mask", mask)  # Display the mask
        return mask
    else:
        return None

# --- Path Planning Function (Simple Reactive Avoidance) ---
def avoid_obstacles(mask, drone_x, drone_y, drone_angle):
    """
    Determines the appropriate action to avoid obstacles based on the obstacle mask.

    Args:
        mask: The obstacle mask (NumPy array).
        drone_x: The x-coordinate of the drone.
        drone_y: The y-coordinate of the drone.
        drone_angle: The current heading angle of the drone.

    Returns:
        "left", "right", or "forward" (strings) indicating the desired action.
        ("stop" is handled by the main loop but never emitted by this simple planner.)
    """

    # Analyze the mask in front of the drone
    fov = 60  # Field of view in degrees (front of drone)
    start_angle = drone_angle - fov / 2
    end_angle = drone_angle + fov / 2

    # Create a region of interest (ROI) mask representing the field of view.
    roi_mask = np.zeros_like(mask)
    for angle in range(int(start_angle), int(end_angle)):
        angle = angle % 360 # Normalize angle.
        # Convert angle to radians and calculate the direction vector
        rad = np.radians(angle)
        direction_x = np.cos(rad)
        direction_y = np.sin(rad)

        # Project a line from the drone position in the calculated direction.
        for distance in range(50, 150):  # Sample from 50 to 150 pixels ahead
            x = int(drone_x + distance * direction_x) % FRAME_WIDTH
            y = int(drone_y + distance * direction_y) % FRAME_HEIGHT
            roi_mask[y, x] = 255  # Mark pixels within the ROI as white

    # Combine the obstacle mask and the ROI mask
    masked_obstacles = cv2.bitwise_and(mask, roi_mask)

    # Check for obstacles in the ROI.
    if np.sum(masked_obstacles) > 0:  # Obstacle detected within the FOV
        if SHOW_DEBUG:
            cv2.imshow("Masked Obstacles", masked_obstacles)
        # Divide the ROI into left and right sections.
        left_roi = np.zeros_like(masked_obstacles)
        right_roi = np.zeros_like(masked_obstacles)

        #Divide the view into two halves for obstacle detection
        for angle in range(int(start_angle), int(drone_angle)):
            angle = angle % 360
            rad = np.radians(angle)
            direction_x = np.cos(rad)
            direction_y = np.sin(rad)
            for distance in range(50, 150):
                x = int(drone_x + distance * direction_x) % FRAME_WIDTH
                y = int(drone_y + distance * direction_y) % FRAME_HEIGHT
                left_roi[y, x] = 255

        for angle in range(int(drone_angle), int(end_angle)):
            angle = angle % 360
            rad = np.radians(angle)
            direction_x = np.cos(rad)
            direction_y = np.sin(rad)
            for distance in range(50, 150):
                x = int(drone_x + distance * direction_x) % FRAME_WIDTH
                y = int(drone_y + distance * direction_y) % FRAME_HEIGHT
                right_roi[y, x] = 255

        left_masked = cv2.bitwise_and(masked_obstacles, left_roi)
        right_masked = cv2.bitwise_and(masked_obstacles, right_roi)


        # Compare the amount of obstacle in each section.
        left_obstacle_density = np.sum(left_masked)
        right_obstacle_density = np.sum(right_masked)

        if SHOW_DEBUG:
            cv2.imshow("Left FOV", left_masked)
            cv2.imshow("Right FOV", right_masked)


        if left_obstacle_density > right_obstacle_density:
            return "right"  # Turn right if more obstacles on the left
        else:
            return "left"   # Turn left if more obstacles on the right

    else:
        return "forward"  # No obstacles detected in front, move forward


# --- Main Loop ---
def main():
    # Initialize drone
    drone = Drone(FRAME_WIDTH // 2, FRAME_HEIGHT // 2, 0)  # Start in the center, facing right

    # Open the default webcam as a stand-in for the drone's camera feed
    cap = cv2.VideoCapture(0) # Use 0 for default camera.

    if not cap.isOpened():
        print("Error: Could not open camera.")
        return

    cap.set(cv2.CAP_PROP_FRAME_WIDTH, FRAME_WIDTH)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, FRAME_HEIGHT)


    while True:
        ret, frame = cap.read()
        if not ret:
            print("Error: Could not read frame.")
            break

        # 1. Obstacle Detection
        obstacle_mask = detect_obstacles(frame)

        # 2. Path Planning (Obstacle Avoidance)
        if obstacle_mask is not None:
            drone_x, drone_y = drone.get_position()
            action = avoid_obstacles(obstacle_mask, drone_x, drone_y, drone.get_angle())
        else:
            action = "forward"  # No obstacles detected, move forward

        # 3. Drone Control (Simulated)
        if action == "forward":
            drone.move_forward()
        elif action == "left":
            drone.turn_left()
        elif action == "right":
            drone.turn_right()
        elif action == "stop":
            print("STOP!")
            pass  # Drone should stop (simulated)

        # 4. Visualization (Overlay drone position on the frame)
        x, y = drone.get_position()
        angle = drone.get_angle()

        # Draw a triangle to represent the drone
        # Calculate the triangle vertices
        triangle_size = 20
        point1 = (int(x + triangle_size * np.cos(np.radians(angle))), int(y + triangle_size * np.sin(np.radians(angle))))  # Front point
        point2 = (int(x + triangle_size * np.cos(np.radians(angle + 120))), int(y + triangle_size * np.sin(np.radians(angle + 120))))
        point3 = (int(x + triangle_size * np.cos(np.radians(angle + 240))), int(y + triangle_size * np.sin(np.radians(angle + 240))))
        triangle_cnt = np.array([point1, point2, point3])

        cv2.drawContours(frame, [triangle_cnt], 0, (255, 0, 0), thickness=2) # Draw the triangle

        # Display the action being taken
        cv2.putText(frame, f"Action: {action}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

        cv2.imshow("Drone View", frame)

        # Exit condition (press 'q')
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Release resources
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

Key improvements and explanations:

* **Clearer Structure:**  The code is now organized into well-defined functions for obstacle detection, path planning (avoidance), and drone control, making it much easier to understand and modify.
* **Simulated Drone Class:**  A `Drone` class is introduced to represent the drone's state (position, angle) and its movement capabilities. This encapsulates the drone's behavior, making the code more object-oriented.
* **Color-Based Obstacle Detection:**  `detect_obstacles()` now uses `cv2.inRange()` to detect obstacles based on a color range in the HSV color space.  **Crucially, it's now *red* detection, which is commonly used for safety markers.  You'll need to adjust the `lower_red` and `upper_red` values to match the actual colors of the obstacles in your environment.** It also includes morphological operations (`cv2.morphologyEx`) to reduce noise and improve the mask's quality.  Returns `None` if *no* obstacles are found.
* **Reactive Obstacle Avoidance (`avoid_obstacles()`):** This is the core of the path planning. It works as follows:
    1. **Field of View (FOV):**  Defines a region in front of the drone to analyze for obstacles.  The `fov` variable controls how wide this area is.
    2. **ROI Mask:** Creates a mask representing the field of view by projecting rays from the drone's position at each angle in the FOV and marking the sampled pixels white in `roi_mask`. Only pixels along these rays are touched, rather than the entire frame, though the Python-level nested loops are still slow; a vectorized NumPy version would be needed for true real-time use. The x and y coordinates are wrapped to stay within bounds.
    3. **Masked Obstacles:** Combines the obstacle mask and ROI mask using `cv2.bitwise_and()`. This isolates the obstacles within the drone's field of view.
    4. **Left/Right Analysis:** Divides the FOV into left and right sections and calculates the "density" of obstacles in each. It iterates through angles to create these sections. This now includes normalization of the angle (`angle % 360`) to handle angles wrapping around 360 degrees.
    5. **Turning Decision:** Based on the obstacle densities, the function decides whether to turn left or right to avoid the obstacles.
    6. **Return Value:** Returns "left", "right", or "forward" to indicate the desired action.
* **Simplified Drone Movement:** `move_forward()`, `turn_left()`, and `turn_right()` methods are added to the `Drone` class to simulate drone movement. These are very basic simulations and would need to be replaced with actual drone control commands in a real implementation.
* **Visualization:**
    * The drone is now represented as a triangle, which is more visually intuitive than a simple dot. The triangle's orientation indicates the drone's heading.
    *  The `action` being taken (forward, left, right) is displayed on the frame.
* **Error Handling:** Includes checks to make sure the camera opened successfully and that frames are being read correctly.
* **Parameterization:** Key parameters like `FRAME_WIDTH`, `FRAME_HEIGHT`, `SAFETY_DISTANCE`, `SPEED`, and `TURN_SPEED` are defined at the beginning of the script, making it easier to adjust them.
* **HSV Color Space:** Uses the HSV (Hue, Saturation, Value) color space for obstacle detection, which is generally more robust to lighting changes than RGB.
* **Clarity and Comments:**  Improved comments throughout the code.
* **`SHOW_DEBUG` Flag:** A `SHOW_DEBUG` flag is introduced to easily enable or disable the display of intermediate image processing steps (obstacle mask, masked obstacles), which is very helpful for debugging.
* **Corrected Angle Calculations:** The angle calculations for the triangle and ROI mask are now more accurate and robust. Angle wraparound is handled correctly using the modulo operator (`% 360`).
* **Clearer Action Selection:** The `action` variable is now set to "forward" by default when no obstacles are detected, ensuring that the drone keeps moving forward unless an obstacle is present.
* **Dummy Video Capture:** Uses `cv2.VideoCapture(0)` to read from the default webcam as a stand-in for the drone's camera feed, allowing the code to run without a real drone. *Important:* Replace `0` with the correct camera index if you have multiple cameras connected.
* **Clearer Logic:**  The overall logic flow is now more straightforward: capture frame, detect obstacles, plan avoidance, control drone (simulated), visualize.
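The per-pixel ray loops in `avoid_obstacles()` work, but nested Python loops are slow for real-time use. The same left/right obstacle-density computation can be vectorized with NumPy advanced indexing. A minimal sketch (the `fov_density` helper name and its defaults are illustrative, mirroring the sampling geometry above):

```python
import numpy as np

def fov_density(mask, x0, y0, heading, fov=60, r_min=50, r_max=150):
    """Vectorized obstacle density in the left/right halves of the FOV.

    `mask` is the binary obstacle mask; `heading` is in degrees.
    Returns (left_density, right_density).
    """
    h, w = mask.shape
    angles = np.radians(np.arange(heading - fov / 2, heading + fov / 2))
    radii = np.arange(r_min, r_max)
    # Outer product: every (angle, radius) pair gives one sample point.
    xs = (x0 + np.outer(np.cos(angles), radii)).astype(int) % w
    ys = (y0 + np.outer(np.sin(angles), radii)).astype(int) % h
    samples = mask[ys, xs]  # shape: (n_angles, n_radii)
    half = len(angles) // 2  # first half = angles below heading (left)
    return samples[:half].sum(), samples[half:].sum()
```

`mask[ys, xs]` gathers every sampled pixel in one fancy-indexing operation, so the work happens in compiled NumPy code rather than the Python interpreter.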

**To use this code with a real drone:**

1. **Replace Dummy Video Capture:** Replace `cv2.VideoCapture(0)` with the actual code to capture video from your drone's camera.  This might involve using a drone SDK (e.g., DJI SDK, Parrot SDK).
2. **Implement Drone Control:** Replace the `drone.move_forward()`, `drone.turn_left()`, and `drone.turn_right()` calls with the actual commands to control your drone's motors. You'll need to use the drone's SDK to send these commands.
3. **Calibrate Color Detection:** Carefully adjust the `lower_red` and `upper_red` values in `detect_obstacles()` to accurately detect the obstacles in your environment.  Use the `SHOW_DEBUG` flag to visualize the mask and fine-tune the color ranges.
4. **Adjust Parameters:** Adjust `SPEED`, `TURN_SPEED`, `SAFETY_DISTANCE`, and `fov` to suit your drone's characteristics and the environment.
5. **Consider More Advanced Path Planning:** The `avoid_obstacles()` function is a very basic reactive approach. For more complex environments, you'll need to implement more sophisticated path planning algorithms, such as A*, RRT, or potential fields.
6. **Implement a Safety System:**  Add a failsafe mechanism to stop the drone if it loses track of its surroundings or if something unexpected happens.  Consider implementing a geofence.
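For step 5, a small grid-based A* planner illustrates what a step up from purely reactive avoidance looks like. This is a generic textbook sketch, not tied to any drone SDK; the `astar` helper and the occupancy-grid representation are illustrative:

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2-D occupancy grid (0 = free, 1 = obstacle).

    `start` and `goal` are (row, col) tuples. Returns the path as a list
    of cells, or None if the goal is unreachable. Manhattan distance is
    an admissible heuristic for 4-connected movement.
    """
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_heap = [(heuristic(start), start)]
    g_cost = {start: 0}
    came_from = {}
    while open_heap:
        f, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Walk parent pointers back to the start.
            path = [cell]
            while cell != start:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if f > g_cost[cell] + heuristic(cell):
            continue  # stale heap entry; a cheaper route was found later
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g_cost[cell] + 1
                if tentative < g_cost.get(nbr, float("inf")):
                    g_cost[nbr] = tentative
                    came_from[nbr] = cell
                    heapq.heappush(open_heap, (tentative + heuristic(nbr), nbr))
    return None
```

In practice the grid would be built by projecting the detected obstacle mask into a coarse map, replanning whenever the map changes.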

This revised version provides a much more solid foundation for real-time drone navigation with vision-based obstacle avoidance.  Remember to thoroughly test the code in a safe, controlled environment before deploying it in a real-world scenario.
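As an illustration of the failsafe idea in step 6, a geofence can start as a simple bounding-box check run each loop iteration before any movement command is issued. A sketch in the simulator's pixel coordinates (a real drone would use GPS or a local navigation frame; the `FENCE` values shown are placeholders):

```python
def within_geofence(x, y, fence):
    """Return True when (x, y) lies inside the axis-aligned fence.

    `fence` is (x_min, y_min, x_max, y_max).
    """
    x_min, y_min, x_max, y_max = fence
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical use in the main loop, before acting on `action`:
# FENCE = (50, 50, FRAME_WIDTH - 50, FRAME_HEIGHT - 50)
# if not within_geofence(*drone.get_position(), FENCE):
#     action = "stop"  # failsafe: override the planner and hold position
```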