Robert Love Linux Kernel Development Summary

Last Updated: 06 Jul 2020

Threads and processes are modeled the same! The general notion of a thread being a lightweight process, much more efficient than a full process, is defied by the Linux community. We know that threads share code and data, whereas it is not that straightforward to share data among processes. Threads are computationally less expensive than processes, since their creation and reaping are cheaper. According to the author, process creation times in Linux compare favorably with process or even thread creation times on other operating systems.

The author does not shed much light on the benchmark results. He concludes that the only thing to be concerned about is sharing among different processes in Linux. This is achieved by the clone() system call, which makes sharing of resources like the VM, open files, filesystem information, signal handlers, etc. possible.

3. Completely Fair Scheduler. Linux handles scheduling of processes by assigning them nice values (inversely proportional to priority) and real-time priorities. In earlier versions of Linux, a constant-time scheduler was introduced, which calculated the timeslice (based on nice values) in constant time and introduced per-processor run queues.

But this design gave poor performance for interactive processes. The author demonstrates this with the example of a text editor (I/O-bound and interactive) and a video encoder (CPU-bound), where the goal is to have the interactive text editor preempt the video encoder on an I/O event and use the CPU. The limitations of directly mapping nice values to timeslices are: variable context-switching behavior; a unit difference in nice values resulting in far different relative timeslices depending on the starting nice value; the inability to assign an absolute timeslice; and timeslices being coupled to timer ticks.


In the new Completely Fair Scheduler (CFS), processes are assigned a proportion of the CPU to use, and a process is context-switched out if any other process of equal priority has consumed less CPU. CFS calculates the proportion of the CPU assigned to a process by weighing its priority against the CPU allocated to other processes, so nice values have a geometric effect on CPU allocation. CFS thus yields constant fairness but a variable switching rate. CFS does not schedule fairly if the system has a very large number of processes, which would drive the context-switching rate toward infinity, though Linux imposes a restriction on the minimum time quantum allocated to a process.

In Linux, there is no process context switch after we acquire a lock in the kernel. This is unfair to processes (possibly high-priority ones) that do not want that lock. Note that this approach does not disable interrupts.

4. Interrupt handling. The interrupt-handling approach in the Linux kernel is very interesting. It breaks an ISR into two pieces depending on the work involved. It designates the top half of the ISR as time-critical, performing important functions like acknowledging the interrupt. The bottom half is for long processing jobs.

The author gives the example of a network card driver, where packets can be dropped if the network buffer is not copied into memory and becomes full, whereas the job of processing the packets can be delayed until later. An eye-opening thing to learn was that interrupt handlers cannot sleep, because they are not associated with a process context: sleeping blocks the current process and needs a backing process to resume. Most of the Linux design decisions are based on this fact!

5. Bottom halves and deferring work.

Softirqs, tasklets, and work queues were the important concepts dealt with; each performs the bottom half of an ISR, and each is used for a different purpose. Softirqs, by definition, should be used as bottom halves when we are looking for more concurrency and performance, because softirqs can run simultaneously on any processor; even two of the same type can run concurrently. Softirqs are limited in number, as registered softirqs are statically determined at compile time and cannot be changed later.

Tasklets are derived from softirqs but are created dynamically. Two different tasklets can run concurrently on different processors, but two tasklets of the same type cannot run simultaneously. Thus they provide a good trade-off between performance and ease of use. If there is a need to sleep in the bottom half, kernel threads can be made to do the deferred work, organized into work queues. Kernel threads are schedulable and run in process context; hence, they can sleep.
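A hedged kernel-side sketch of the choice between the two; this is kernel code using the classic 2.6-era APIs the book describes, and it is not buildable outside a kernel module:

```c
/* Kernel-code sketch (assumed 2.6-era APIs; not user-space runnable). */
#include <linux/interrupt.h>
#include <linux/workqueue.h>

static void my_tasklet_fn(unsigned long data)
{
    /* runs in softirq context: must not sleep */
}
DECLARE_TASKLET(my_tasklet, my_tasklet_fn, 0);

static void my_work_fn(struct work_struct *work)
{
    /* runs in process context via a kernel thread: may sleep */
}
static DECLARE_WORK(my_work, my_work_fn);

static irqreturn_t my_isr(int irq, void *dev)
{
    tasklet_schedule(&my_tasklet);   /* defer non-sleeping work */
    schedule_work(&my_work);         /* defer work that may block */
    return IRQ_HANDLED;
}
```

The rule of thumb from the chapter: tasklet_schedule() for deferred work that must not sleep, schedule_work() when the deferred work may block.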

Context-switching overhead is involved in them, of course. An interesting thing to learn was that softirq processing can also be done by kernel threads, the reason being that otherwise user space would run more or less depending on softirq load.

6. Kernel synchronization. A significant point to notice is that spin locks can be used in interrupt handlers, whereas semaphores cannot be used, because they sleep. Sequential locks (seqlocks) were a new thing I got to know about; they are used for atomic read and write operations on shared data.

A seqlock uses a sequence count on the object accessed and is similar in concept to load-linked/store-conditional instructions. Seqlocks are used to manage the 64-bit jiffies count in Linux, which holds the timer tick count. The Big Kernel Lock is not well explained. The book says that "it was created to ease the transition from Linux's original SMP implementation to fine-grained locking," but it does not explain how it did that and leaves the reader confused.

Cite this Page

Robert Love Linux Kernel Development Summary. (2018, Aug 02). Retrieved from https://phdessay.com/robert-love-linux-kernel-development-summary/
