5.1 Overview

 

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
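As a minimal illustration (a sketch, not code from the text; compile with -pthread), the Pthreads program below creates two threads in one process. Both run the same code section and update the same global variable in the shared data section, while each has its own stack and register set.

/* Two threads of one process sharing the same code and data sections. */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                               /* shared data section */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *runner(void *arg)                               /* shared code section */
{
    pthread_mutex_lock(&lock);
    shared_counter++;                                 /* both threads update the same variable */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, runner, NULL);
    pthread_create(&t2, NULL, runner, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);  /* prints 2 */
    return 0;
}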

 

The benefits of multithreaded programming can be broken down into four major categories:

 

1. Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user.

2. Resource sharing: By default, threads share the memory and the resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.

3. Economy: Empirically gauging the difference in overhead can be difficult, but in general it is much more time-consuming to create and manage processes than threads. In Solaris, for example, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.

4. Utilization of multiprocessor architectures: On a multiprocessor, threads may run in parallel on different processors, increasing concurrency.

 

User and Kernel Threads

 

User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system.

 

A thread library provides the programmer an API for creating and managing threads; there are two primary ways of implementing one. The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the library exist in user space. This means that invoking a function in the library results in a local function call in user space and not a system call.

The second approach is to implement a kernel-level library supported directly by the operating system. In this case, code and data structures for the library exist in kernel space. Invoking a function in the API for the library typically results in a system call to the kernel.

 

 

5.2 Multithreading Models

Many-to-One Model

The many-to-one model (Figure 5.2) maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient; but the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.

One-to-One Model

The one-to-one model (Figure 5.3) maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.
Many-to-Many Model

The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads. Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.
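Which model a thread runs under is sometimes visible through the Pthreads contention-scope attribute: PTHREAD_SCOPE_SYSTEM requests one-to-one style scheduling against all threads in the system, while PTHREAD_SCOPE_PROCESS requests many-to-many style scheduling onto available kernel threads. A minimal sketch (not from the text; note that not every system supports both scopes, Linux for instance allows only system scope):

#include <pthread.h>
#include <stdio.h>

void *runner(void *arg)
{
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);

    /* Report the default contention scope of this implementation. */
    if (pthread_attr_getscope(&attr, &scope) == 0)
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "SYSTEM (one-to-one)"
                                             : "PROCESS (many-to-many)");

    /* Request system contention scope; may fail where unsupported. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "system scope not supported here\n");

    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}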

 

 

5.3 Threading Issues

 

Thread cancellation is the task of terminating a thread before it has completed. Often, a web page is loaded using several threads (each image is loaded in a separate thread). When a user presses the stop button, all threads loading the page are cancelled.

 

A thread that is to be cancelled is often referred to as the target thread.

Cancellation of a target thread may occur in two different scenarios:
1. Asynchronous cancellation: One thread immediately terminates the target thread.
2. Deferred cancellation: The target thread can periodically check whether it should terminate, allowing the target thread an opportunity to terminate itself in an orderly fashion.
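As a sketch of the deferred approach (assuming the Pthreads cancellation API; not code from the text), the target thread below periodically reaches a cancellation point with pthread_testcancel() and so terminates in an orderly way after pthread_cancel() is issued:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg)
{
    /* Deferred cancellation is the Pthreads default; set it explicitly. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);

    for (;;) {
        /* ... perform one unit of work ... */
        pthread_testcancel();   /* cancellation point: exit here if a request is pending */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    sleep(1);                   /* let the worker run for a moment */
    pthread_cancel(tid);        /* request cancellation of the target thread */
    pthread_join(tid, NULL);    /* wait for it to terminate cleanly */
    printf("worker cancelled\n");
    return 0;
}

Changing the cancel type to PTHREAD_CANCEL_ASYNCHRONOUS would instead allow the thread to be terminated immediately, without waiting for a cancellation point.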

 

Lightweight process

An LWP sits between the user threads and the kernel threads: to the user-thread library it appears as a virtual processor on which the application can schedule a user thread to run, and each LWP is attached to a kernel thread. An application may require any number of LWPs to run efficiently. Consider a CPU-bound application running on a uniprocessor. In this scenario, only one thread may be running at once, so one LWP is sufficient. An application that is I/O-intensive may require multiple LWPs to execute, however. Typically, an LWP is required for each concurrent blocking system call.

 

5.5 Windows XP Threads

 

Windows XP uses the one-to-one mapping described in Section 5.2.2 where each user-level thread maps to an associated kernel thread. However, Windows XP also provides support for a fiber library, which provides the functionality of the many-to-many model.
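The sketch below (not from the text) uses the documented Win32 fiber calls ConvertThreadToFiber, CreateFiber, and SwitchToFiber to show the flavor of this user-level scheduling: several fibers can be multiplexed cooperatively on one kernel thread, with switches decided entirely in user mode. Error handling is omitted.

#include <windows.h>
#include <stdio.h>

static LPVOID main_fiber;            /* fiber context of the converted main thread */

VOID CALLBACK fiber_proc(PVOID param)
{
    printf("running fiber: %s\n", (const char *) param);
    SwitchToFiber(main_fiber);       /* fibers yield explicitly; nothing preempts them */
}

int main(void)
{
    /* The current thread must itself become a fiber before it can switch. */
    main_fiber = ConvertThreadToFiber(NULL);

    LPVOID worker = CreateFiber(0, fiber_proc, "worker");
    SwitchToFiber(worker);           /* run the worker fiber on this same kernel thread */

    DeleteFiber(worker);
    return 0;
}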

 

The general components of a thread include:

• A thread ID uniquely identifying the thread.

• A register set representing the status of the processor.

• A user stack used when the thread is running in user mode. Similarly, each thread has a kernel stack used when the thread is running in kernel mode.

• A private storage area used by various run-time libraries and dynamic link libraries (DLLs).

 

The register set, stacks, and private storage area are known as the context of the thread. The primary data structures of a thread include:

• ETHREAD (executive thread block).

• KTHREAD (kernel thread block).

• TEB (thread environment block).

 

The key components of the ETHREAD include a pointer to the process to which the thread belongs and the address of the routine in which the thread starts control. The ETHREAD also contains a pointer to the corresponding KTHREAD.

The KTHREAD includes scheduling and synchronization information for the thread. In addition, the KTHREAD includes the kernel stack (used when the thread is running in kernel mode) and a pointer to the TEB.

The ETHREAD and the KTHREAD exist entirely in kernel space; this means only the kernel can access them. The TEB is a user-space data structure that is accessed when the thread is running in user mode. Among other fields, the TEB contains the user-mode stack and an array for thread-specific data (which Windows terms thread-local storage).
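That thread-specific array is what the Win32 thread-local storage calls operate on. A minimal sketch (not from the text; error handling omitted): each thread stores its own value in the same TLS slot, and the per-thread value lives in that thread's TEB-managed TLS array.

#include <windows.h>
#include <stdio.h>

static DWORD tls_index;

DWORD WINAPI worker(LPVOID param)
{
    TlsSetValue(tls_index, param);                    /* stored per thread, via the TEB */
    printf("thread %lu sees %p\n",
           GetCurrentThreadId(), TlsGetValue(tls_index));
    return 0;
}

int main(void)
{
    tls_index = TlsAlloc();                           /* reserve one TLS slot for the process */

    HANDLE t1 = CreateThread(NULL, 0, worker, (LPVOID) 1, 0, NULL);
    HANDLE t2 = CreateThread(NULL, 0, worker, (LPVOID) 2, 0, NULL);

    WaitForSingleObject(t1, INFINITE);
    WaitForSingleObject(t2, INFINITE);

    TlsFree(tls_index);
    return 0;
}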

 

5.6 Linux Threads

 

Linux provides the traditional fork() system call for duplicating a process, as well as the clone() system call for creating a thread-like task. The sharing of the address space is allowed because of the way a process is represented in the Linux kernel. A unique kernel data structure exists for each process in the system. However, rather than storing the data for the process itself, this data structure contains pointers to other data structures where the data is stored: for example, data structures that represent the list of open files, signal-handling information, and virtual memory.

When fork() is invoked, a new process is created along with a copy of all the associated data structures of the parent process. A new process is also created when the clone() system call is made. However, rather than copying all data structures, the new process points to the data structures of the parent process, thereby allowing the child process to share the memory and other resources of the parent.

A set of flags is passed as a parameter to the clone() system call; it indicates how much of the parent process is to be shared with the child. If none of the flags is set, no sharing occurs, and clone() acts much like fork(). If all the flags are set, the child process shares everything with the parent. Other combinations of flags allow various levels of sharing between these two extremes. The Linux kernel also creates several kernel threads that are designated for specific tasks, such as memory management.
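A sketch of the clone() call (not from the text; Linux-specific, requires _GNU_SOURCE, error checks trimmed). With CLONE_VM, CLONE_FS, CLONE_FILES, and CLONE_SIGHAND set, the child shares the parent's address space, filesystem information, open files, and signal handlers, so a write made by the child is visible to the parent:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static int shared = 0;

static int child_fn(void *arg)
{
    shared = 42;                         /* visible to the parent because of CLONE_VM */
    return 0;
}

int main(void)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);

    /* The child needs its own stack; on common architectures the stack grows
       downward, so pass the top of the allocated region. */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t pid = clone(child_fn, stack + stack_size, flags, NULL);

    waitpid(pid, NULL, 0);
    printf("shared = %d\n", shared);     /* prints 42: the memory was shared */
    free(stack);
    return 0;
}

With the sharing flags left out (just SIGCHLD), the same call behaves much like fork(): the child gets copies of the parent's data structures instead of pointers to them.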

Interestingly, Linux does not distinguish between processes and threads.

 

5.7 Java Threads

Java threads are managed by the JVM rather than by a user-level thread library or directly by the kernel, so they do not fall neatly into the category of either user- or kernel-level threads.
