
Most of the time, the processor is busy executing user tasks. But when it is not, what does it do?
Let's take a glimpse at all the 'plays' running around in kernel code when the processor is not occupied with user tasks…

  • System call – When a *user* task (or a specific kernel thread) requests some service from the heart of the OS aka the kernel, it traps into privileged mode and some kernel code performs the requested service. System call handlers usually return an integer; by convention, a small negative return value (-1 .. -515) signifies that the system call is returning an error (its absolute value should be one of the constants from ‘errno.h’). Any other value means the system call completed successfully. When a system call fails, the C library stores the positive error code in errno and returns -1 to the application.
  • Exception handling – When some instruction in user code or in the kernel raises an exception, the kernel has to handle it as well. Unlike a system call, this action is not requested by the user directly. The most typical member of this category is the page fault. Most of these exception handlers reside in platform-dependent files; some of them then call generic handling code (like handle_mm_fault in our case).
  • Kernel thread – There are special processes which execute only in kernel space and use the standard trap method of invoking system calls when necessary. When booting, the kernel spawns several kernel threads; some of them then execute a system call by which they lose the kernel-thread state and become normal processes, which is the case of e.g. init, the root of the process tree.
  • Interrupt – When some hardware requests some action, it sends an interrupt to the kernel, which in turn calls the corresponding interrupt handler, if someone has registered one. An interrupt handler should be fast so that it does not lock up the system for too long – interrupt handlers usually execute with interrupts disabled on the local CPU. Interrupts normally execute in the context of whatever task happened to be current on the CPU servicing the IRQ when the interrupt came, so there is no dedicated interrupt thread or anything like that. Each thread has kernel space mapped as part of its address space, so unless you access user space (you should never try that from an interrupt handler, unless you are that kind of nerd), it does not matter in which task context you are executing. Every interrupt has an assigned interrupt number, which you use e.g. in calls to enable_irq() or disable_irq() (to disable that particular interrupt). The exact interpretation of this number is platform-dependent; a device driver writer should not assume anything about its value and should treat it as an opaque 32-bit integer. Never use this value to index into static arrays – it might work on one platform but break on another. Some architectures have different interrupt numbers just for the different interrupt levels; some encode board and slot numbers into it. Consequently, on some platforms a disable_irq() can disable just one interrupt level from a certain card on a certain bus, while on other platforms, where IRQ handling is not that advanced yet, it simply disables a certain interrupt level on all CPUs.
  • Bottom half handler – So that you do not block interrupts on the local CPU for too long, you can do part of your interrupt handling in a bottom half handler, i.e. queue some functions for later processing. Normally, in your short, fast interrupt handler you call mark_bh() if you need some longer processing. Then, when your interrupt handler is done, the system checks for pending bottom half handlers; if it finds any, it enables interrupts and executes them. To use a bottom half handler from within your interrupt handler, you have to allocate a new bottom half type, register your handler with that type in your initialization code (init_bh()), and then actually trigger it from your interrupt handler (mark_bh()). This is probably not a good solution, though: there are only 32 bottom half slots available for registering, half of them already in use by other parts of the kernel, so make a wise decision lest you regret it afterwards. Another, much nicer way to run your deferred work without wasting the precious global bottom half types is task queues. This has nothing to do with tasks as execution entities; here a 'task' means a function with some arbitrary argument which you can schedule for later execution – in more precise words, a sub-category of interrupt handling.
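The registration described in the interrupt bullet looked roughly like this in a driver of that era. This is an in-kernel fragment against the old 2.2/2.4-style API, not a standalone program, and the device name, IRQ number, and function names are all hypothetical:

```c
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/errno.h>

#define MYDEV_IRQ 9  /* hypothetical: obtained from bus/platform code,
                      * an opaque platform-dependent cookie */

/* Runs with interrupts disabled on the local CPU, in whatever task
 * context happened to be current - so keep it short and fast. */
static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        /* acknowledge the hardware, grab the data, defer the rest */
}

static int mydev_init(void)
{
        if (request_irq(MYDEV_IRQ, mydev_interrupt, 0, "mydev", NULL))
                return -EBUSY;  /* someone else owns this line */
        return 0;
}

static void mydev_exit(void)
{
        free_irq(MYDEV_IRQ, NULL);
}
```

Note that nothing here indexes an array by MYDEV_IRQ; the number is only ever passed back to the kernel's IRQ calls.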
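And the task-queue variant from the last bullet might be sketched like this (again a 2.2-era in-kernel fragment with hypothetical names; tq_immediate and IMMEDIATE_BH are the existing kernel-provided queue and bottom half slot, so no new global BH type is consumed):

```c
#include <linux/tqueue.h>
#include <linux/interrupt.h>

/* Deferred work: runs later, with interrupts enabled again. */
static void mydev_deferred(void *data)
{
        /* the longer processing goes here */
}

static struct tq_struct mydev_task;

static int mydev_init(void)
{
        mydev_task.routine = mydev_deferred;
        mydev_task.data    = NULL;
        return 0;
}

/* Fast part: acknowledge the hardware, then defer the rest. */
static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        queue_task(&mydev_task, &tq_immediate);
        mark_bh(IMMEDIATE_BH);  /* existing slot - no init_bh() needed */
}
```

The 'task' here is just the function-plus-argument pair in mydev_task, scheduled for later execution once the interrupt handler returns.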

//Abhiraj
