Multithreading with Python's `threading` Module

What is a Thread?
A thread is a lightweight unit of execution within a process. Multiple threads can exist within the same process and share its resources, such as memory space, open files, and other process-level data. Threads run concurrently, allowing a program to perform multiple operations at the same time or to handle different parts of a task in parallel. This can improve responsiveness, especially in applications with I/O-bound operations (such as network requests or file access), because the program does not have to wait for one operation to complete before starting another.

Python's `threading` Module
Python provides the `threading` module in its standard library for implementing multithreading. This module offers a high-level API for creating and managing threads. It builds on a lower-level module called `_thread` (formerly `thread`), but `threading` is generally preferred for its more object-oriented approach and richer set of synchronization primitives.

Key Concepts and Components:

1. `threading.Thread` class: This is the primary way to create a new thread. You can subclass `Thread` and override its `run()` method, or, more commonly, pass a callable object (a function) to its constructor via the `target` argument.
   - `Thread(target=function, args=(arg1, arg2, ...), kwargs={'key': 'value'})`
2. `start()`: Once a `Thread` object is created, calling its `start()` method initiates the thread's execution. This method runs the `target` function (or the `run()` method, if subclassed) in a separate thread of control.
3. `join()`: The `join()` method blocks the calling thread (usually the main thread) until the thread whose `join()` method is called terminates. This is crucial for ensuring that the main program waits for all child threads to complete their tasks before exiting or proceeding with operations that depend on thread results.
4. Synchronization primitives: When multiple threads share resources (like global variables or files), there is a risk of 'race conditions' – situations where the final outcome depends on the unpredictable order in which threads execute. The `threading` module provides several mechanisms to prevent these issues:
   - `threading.Lock`: The most basic synchronization primitive. It protects critical sections of code, ensuring that only one thread can execute that section at a time. A thread acquires the lock before entering the critical section and releases it upon exiting. `with lock:` is a common and recommended way to use locks, ensuring the lock is automatically acquired and released.
   - `threading.RLock` (reentrant lock): Similar to `Lock`, but a thread can acquire an `RLock` multiple times without blocking itself, and must release it the same number of times.
   - `threading.Semaphore`: A counter-based lock. It allows a specified number of threads to access a resource concurrently.
   - `threading.Condition`: Allows threads to wait for certain conditions to be met before proceeding.
   - `threading.Event`: A simple signaling mechanism. A thread can set an internal flag, and other threads can wait for this flag to be set.

Global Interpreter Lock (GIL)
It is important to understand Python's Global Interpreter Lock (GIL). In CPython (the most common Python interpreter), the GIL is a mutex that protects access to Python objects, preventing multiple native threads from executing Python bytecode at once. This means that even with multiple threads, only one thread can actively execute Python code at any given time. Consequently, `threading` in CPython does not offer true parallel execution for CPU-bound tasks (tasks that spend most of their time performing calculations).
However, the GIL is released during I/O operations (such as reading from a network or file), making `threading` highly effective for I/O-bound tasks: other threads can run while one thread waits for I/O to complete.

Use Cases
- Responsiveness: keeping a GUI application responsive while performing a long-running background task.
- I/O concurrency: performing multiple network requests, file operations, or database queries concurrently.
- Parallelism simulation: for tasks that involve waiting for external resources, threading can simulate parallelism and improve overall throughput.
Example Code

```python
import threading
import time
import random

# A shared global variable and a Lock for synchronization
shared_counter = 0
counter_lock = threading.Lock()

def increment_counter(thread_id):
    global shared_counter
    print(f"Thread {thread_id}: Starting to work...")
    time.sleep(random.uniform(0.5, 2))  # Simulate some work

    # Use the lock to protect the shared_counter
    with counter_lock:
        old_value = shared_counter
        shared_counter += 1
        print(f"Thread {thread_id}: Counter updated from {old_value} to {shared_counter}")

    print(f"Thread {thread_id}: Finished work.")

if __name__ == "__main__":
    num_threads = 5
    threads = []

    print(f"Initial shared_counter value: {shared_counter}")
    print("\n--- Creating and starting threads ---")

    # Create and start multiple threads
    for i in range(num_threads):
        # Create a Thread object, specifying the target function and its arguments
        thread = threading.Thread(target=increment_counter, args=(i + 1,))
        threads.append(thread)
        thread.start()  # Start the thread's execution

    print("\n--- All threads started. Waiting for them to complete ---")

    # Wait for all threads to complete using join()
    for thread in threads:
        thread.join()  # Block until the thread terminates

    print("\n--- All threads have finished ---")
    print(f"Final shared_counter value: {shared_counter}")
    # The final shared_counter value should equal num_threads (here, 5).
    # The Lock ensures that each increment is atomic, so no updates are
    # lost to race conditions.
```
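The `threading.Event` primitive mentioned in the overview can be sketched in a few lines. In this illustrative scenario (the names and the scenario are my own, not part of the example above), a worker thread blocks on `wait()` until the main thread raises the flag with `set()`:

```python
import threading

ready = threading.Event()
results = []

def waiter():
    # wait() blocks until some other thread calls ready.set()
    ready.wait()
    results.append("worker ran after the signal")

if __name__ == "__main__":
    t = threading.Thread(target=waiter)
    t.start()
    # ... the main thread could do setup work here ...
    ready.set()  # raise the flag; the waiting thread wakes up
    t.join()
    print(results[0])
```

Unlike a `Lock`, an `Event` is a one-to-many signal: every thread blocked in `wait()` is released when the flag is set, and the flag stays set until `clear()` is called.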