What are the basic functions
of an operating system?
Explain briefly about processor, assembler, compiler, loader, linker and the functions executed by them.
What are the different phases of software development? Explain briefly.
Differentiate between RAM
and ROM?
What is DRAM? In which form
does it store data?
What is cache memory?
What is hard disk and what
is its purpose?
Differentiate between
Compiler and Interpreter?
What are the different tasks
of Lexical analysis?
What are the different
functions of Syntax phase, Scheduler?
What are the main differences between Micro-Controller and Micro-Processor?
Describe different job
scheduling in operating systems.
What is a Real-Time System?
What is the difference between Hard and Soft real-time systems?
What is a mission-critical system?
What is the important aspect of a real-time system?
If two processes share the same system memory and system clock in a distributed system, what is it called?
What is the state of the
processor, when a process is waiting for some event to occur?
What do you mean by
deadlock?
Explain the difference
between microkernel and macro kernel.
Give an example of
microkernel.
When would you choose bottom
up methodology?
When would you choose top
down methodology?
Write a small dc shell
script to find number of FF in the design.
Why is paging used?
Which is the best page replacement algorithm and why? How much time is usually spent in each phase and why?
Difference between Primary
storage and secondary storage?
What is multitasking, multiprogramming, multithreading?
Difference between multithreading and multitasking?
What is software life cycle?
Demand paging, page faults,
replacement algorithms, thrashing, etc.
Explain about paged
segmentation and segment paging
While running DOS on a PC,
which command would be used to duplicate the entire diskette?
Following are a few basic questions that cover the
essentials of OS:
SECTION - III
MEMORY MANAGEMENT
1. What is the difference between Swapping and Paging?
Swapping:
The whole process is moved from the swap device to main memory for execution. The process size must be less than or equal to the available main memory. Swapping is easier to implement but adds overhead to the system. Swapping systems do not handle memory as flexibly as paging systems.
Paging:
Only the required memory pages are moved from the swap device to main memory for execution, so process size does not matter. Paging gives rise to the concept of virtual memory.
It provides greater flexibility in mapping the virtual address space onto the physical memory of the machine, allows more processes to fit in main memory simultaneously, and allows a process to be larger than the available physical memory. Demand-paging systems handle memory more flexibly.
2. What is the major difference between Historic Unix and the new BSD release of Unix System V in terms of Memory Management?
Historic Unix uses Swapping - the entire process is transferred to main memory from the swap device, whereas Unix System V uses Demand Paging - only part of the process is moved to main memory. Historic Unix uses one swap device, whereas Unix System V allows multiple swap devices.
3. What is the main goal of the Memory Management?
·
It decides which processes should reside in main memory,
·
Manages the parts of the virtual address space of a process that are non-core resident,
·
Monitors the available main memory and periodically writes processes to the swap device so that more processes can fit in main memory simultaneously.
4. What is a Map?
A Map is an array that contains the addresses of the free space in the swap device (the allocatable resource) and the number of resource units available there.
This allows First-Fit allocation of contiguous blocks of a resource. Initially the Map contains one entry - the address (block offset from the start of the swap area) and the total number of resources.
The kernel treats each unit of the Map as a group of disk blocks. On allocation and freeing of resources, the kernel updates the Map so that it holds accurate information.
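To make the first-fit idea concrete, here is a minimal C sketch of allocating from such a map; the structure and function names (map_entry, map_alloc) are illustrative assumptions, not the actual kernel code:

    #include <stddef.h>

    struct map_entry {
        unsigned long addr;   /* block offset of the free area in the swap device */
        unsigned long units;  /* number of free resource units at that address    */
    };

    /* Return the address of 'want' contiguous units, or 0 on failure (first fit). */
    unsigned long map_alloc(struct map_entry *map, size_t n, unsigned long want)
    {
        for (size_t i = 0; i < n; i++) {
            if (map[i].units >= want) {
                unsigned long addr = map[i].addr;
                map[i].addr  += want;     /* shrink the entry from the front */
                map[i].units -= want;
                return addr;
            }
        }
        return 0;                         /* no contiguous block large enough */
    }

Freeing would do the reverse: merge the returned units back into an adjacent map entry, or create a new entry if none is adjacent.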
5. What scheme does the Kernel in Unix System V follow while choosing
a swap device among the multiple swap devices?
The kernel follows a Round-Robin scheme when choosing a swap device among the multiple swap devices in Unix System V.
6. What is a Region?
A Region is a contiguous area of a process's address space (such as text, data and stack). The kernel maintains regions in a 'Region Table' that is local to the process. Regions are sharable among processes.
7. What are the events performed by the Kernel after a process is swapped out of the main memory?
When the kernel swaps a process out of primary memory, it performs the following:
Ø The kernel decrements the Reference Count of each region of the process; if the reference count becomes zero, the region is swapped out of main memory,
Ø The kernel allocates space for the swapped process in the swap device,
Ø The kernel locks the other swapping processes while the current swapping operation is going on,
Ø The kernel saves the swap address of the region in the region table.
8. Is the process the same before and after the swap? Give reasons.
Before swapping, the process resides in primary memory in its original form. The regions (text, data and stack) may not be fully occupied by the process - there may be a few empty slots in any of the regions - and while swapping the process out, the kernel does not bother about the empty slots.
After swapping, the process resides in the swap device (secondary memory). The swapped-out regions contain only the occupied region slots, not the empty slots that were present before swapping.
While swapping the process back into main memory, the kernel refers to the Process Memory Map and assigns main memory accordingly, taking care of the empty slots in the regions.
9. What do you mean by u-area (user area) or u-block?
This contains the private
data that is manipulated only by the Kernel. This is local to the Process, i.e.
each process is allocated a u-area.
10. What are the entities that are swapped out of the main memory while
swapping the process out of the main memory?
All memory space occupied by
the process, process’s u-area, and Kernel stack are swapped out, theoretically.
Practically, if the
process’s u-area contains the Address Translation Tables for the process then
Kernel implementations do not swap the u-area.
11. What is Fork swap?
fork() is a system call to create a child process. When the parent process calls fork(), the child process is created; if there is a shortage of memory, the child process is sent to the ready-to-run state in the swap device, and the kernel returns to user mode without swapping the parent process. When memory becomes available, the child process is swapped into main memory.
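A short user-level illustration of fork() itself (the swapping behaviour described above happens inside the kernel and is not visible in this sketch):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* create a child process */
        if (pid < 0) {
            perror("fork");            /* e.g. not enough memory or process slots */
            return 1;
        }
        if (pid == 0) {
            printf("child: pid %d\n", (int)getpid());
            _exit(0);
        }
        wait(NULL);                    /* parent waits for the child */
        printf("parent: child %d finished\n", (int)pid);
        return 0;
    }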
12. What is Expansion swap?
When a process requires more memory than is currently allocated to it, the kernel performs an Expansion swap. To do this the kernel reserves enough space in the swap device. Then the address translation mapping is adjusted for the new virtual address space, but physical memory is not allocated. Finally the kernel swaps the process out into the assigned space in the swap device. Later, when the kernel swaps the process back into main memory, it assigns memory according to the new address translation mapping.
13. How does the Swapper work?
The swapper is the only process that swaps processes. The swapper operates only in kernel mode and does not use system calls; instead it uses internal kernel functions for swapping. It is the archetype of all kernel processes.
14. What are the processes that are not bothered by the swapper? Give
Reason.
Ø Zombie processes: they do not take up any physical memory.
Ø Processes locked in memory that are updating a region of the process.
Ø The kernel swaps out only sleeping processes rather than 'ready-to-run' processes, as ready-to-run processes have a higher probability of being scheduled than sleeping processes.
15. What are the requirements for a swapper to work?
The swapper works at the highest scheduling priority. First it looks for a sleeping process to swap out; if none is found, it looks for a ready-to-run process. The major requirement for the swapper to work is that a ready-to-run process must have been core-resident for at least 2 seconds before being swapped out, and a process to be swapped in must have resided in the swap device for at least 2 seconds. If the requirement is not satisfied, the swapper goes into the wait state on that event and is awakened once a second by the kernel.
16. What are the criteria for choosing a process for swapping into
memory from the swap device?
The
resident time of the processes in the swap device, the priority of the
processes and the amount of time the processes had been swapped out.
17. What are the criteria for choosing a process for swapping out of the
memory to the swap device?
Ø The process’s memory
resident time,
Ø Priority of the process and
Ø The nice value.
18. What do you mean by nice value?
The nice value is a value that controls (increments or decrements) the priority of a process. This value is returned by the nice() system call. The equation using the nice value is:
Priority = ("recent CPU usage" / constant) + (base priority) + (nice value)
Only the superuser (administrator) can supply a nice value that increases a process's priority; ordinary users can only lower it. The nice() system call works for the calling (running) process only; the nice value of one process cannot affect the nice value of another process.
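As a small user-level illustration, a process can lower its own priority with nice(); note that -1 is a legal return value, so errno must be checked to detect an error:

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        errno = 0;
        int nv = nice(5);              /* lower this process's priority by 5     */
        if (nv == -1 && errno != 0)    /* -1 can be a valid new nice value, so    */
            perror("nice");            /* errno distinguishes it from an error    */
        else
            printf("new nice value: %d\n", nv);
        return 0;
    }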
19. What are the conditions under which deadlock can occur while swapping processes?
·
All processes in the main memory are asleep.
·
All ‘ready-to-run’ processes are swapped out.
·
There is no space in the swap device for the new processes being swapped out of main memory.
·
There is no space in the main memory for the new incoming process.
20. What are the conditions for a machine to support Demand Paging?
·
The memory architecture must be based on pages,
·
The machine must support the ‘restartable’ instructions.
21. What is ‘the principle of locality’?
It is the nature of processes that they refer only to a small subset of their total data space; i.e., a process frequently calls the same subroutines or executes loop instructions.
22. What is the working set of a process?
The set of pages that are referred to by the process in the last 'n' references, where 'n' is called the window of the working set of the process.
23. What is the window of the working set of a process?
The window of the working set of a process is the total number of recent references over which the set of pages referred to by the process (its working set) is determined.
24. What is called a page fault?
A page fault refers to the situation when a process addresses a page in its working set but fails to locate that page in main memory. On a page fault the kernel updates the working set by reading the page in from the secondary device.
25. What are data structures that are used for Demand Paging?
The kernel contains 4 data structures for Demand Paging. They are:
Ø Page table entries,
Ø Disk block descriptors,
Ø Page frame data table
(pfdata),
Ø Swap-use table.
What are the bits that support demand paging?
Valid, Reference, Modify, Copy on write, Age. These bits are part of the page table entry, which also includes the physical address of the page and protection bits. The layout of a page table entry is:
Page address | Age | Copy on write | Modify | Reference | Valid | Protection
How does the kernel handle the fork() system call in traditional Unix and in System V Unix, while swapping?
The kernel in traditional Unix makes a duplicate copy of the parent's address space and attaches it to the child process while swapping. The kernel in System V Unix manipulates the region tables, page table, and pfdata table entries by incrementing the reference count of the region table entries of shared regions.
Difference between the fork() and vfork() system call?
During the fork() system call the kernel makes a copy of the parent process's address space and attaches it to the child process.
The vfork() system call does not make any copy of the parent's address space, so it is faster than the fork() system call. The child process resulting from vfork() is expected to execute an exec() system call. The child process from vfork() executes in the parent's address space (which can overwrite the parent's data and stack), and the parent process is suspended until the child process exits or calls exec().
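A minimal sketch of the intended vfork()-then-exec pattern; anything other than exec or _exit in the child is unsafe because the child is running in the parent's address space:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = vfork();           /* child borrows the parent's address space */
        if (pid == 0) {
            /* Only exec or _exit is safe here; anything else may corrupt
               the parent's data and stack. */
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);                /* reached only if exec fails */
        }
        wait(NULL);                    /* parent resumes after the child execs or exits */
        return 0;
    }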
What is BSS (Block Started by Symbol)?
BSS is a data representation at the machine level for data that has no explicit initial values when a program starts; it records how much space the kernel must allocate for the uninitialized data. The kernel initializes it to zero at run time.
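A small C illustration of the difference between initialized data (stored in the executable's data segment) and BSS (only its size is recorded; it is zero-filled at startup):

    #include <stdio.h>

    int initialized   = 42;        /* data segment: value stored in the file        */
    int uninitialized[1000];       /* BSS: only the size is recorded; the kernel    */
                                   /* zero-fills it when the program starts         */

    int main(void)
    {
        printf("%d %d\n", initialized, uninitialized[0]);   /* prints "42 0" */
        return 0;
    }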
What is Page-Stealer process?
This is the kernel process that makes room for incoming pages by swapping out memory pages that are not part of the working set of a process. The Page-Stealer is created by the kernel at system initialization and is invoked throughout the lifetime of the system. The kernel locks a region when a process faults on a page in that region, so that the Page-Stealer cannot steal the page that is being faulted in.
Name two paging states for a page in memory?
The two paging states are:
·
The page is aging and is not yet eligible for swapping,
·
The page is eligible for swapping but not yet eligible for reassignment
to other virtual address space.
What are the phases of swapping a page from the memory?
·
Page stealer finds the page eligible for swapping and places the page
number in the list of pages to be swapped.
·
Kernel copies the page to a swap device when necessary and clears the valid bit in the page table entry, decrements the pfdata reference
count, and places the pfdata table entry at the end of the free list if its
reference count is 0.
What is a page fault? What are its types?
A page fault refers to the situation of not having a page in main memory when a process references it.
There are two types of page fault:
·
Validity fault,
·
Protection fault.
In what way are fault handlers and interrupt handlers different?
A fault handler is also an interrupt handler, with the exception that interrupt handlers cannot sleep whereas fault handlers can. A fault handler sleeps in the context of the process that caused the memory fault; the fault refers to the running process, and no arbitrary processes are put to sleep.
What is a validity fault?
If a process refers to a page whose valid bit is not set, it results in a validity fault.
The valid bit is not set for those pages:
·
that are outside the virtual address space of the process,
·
that are part of the virtual address space of the process but have no physical address assigned yet.
What does the swapping system do if it identifies an illegal page for swapping?
If the disk block descriptor does not contain any record of the faulted page, the attempted memory reference is invalid and the kernel sends a "segmentation violation" signal to the offending process. This happens when the swapping system identifies an invalid memory reference.
What are the states that a page can be in after causing a page fault?
·
On a swap device and not in memory,
·
On the free page list in the main memory,
·
In an executable file,
·
Marked “demand zero”,
·
Marked “demand fill”.
In what way does the validity fault handler conclude?
·
It sets the valid bit of the page and clears the modify bit.
·
It recalculates the process priority.
In what mode does the fault handler execute?
In Kernel Mode.
What do you mean by the
protection fault?
A protection fault refers to a process accessing a page for which it does not have access permission. A process also incurs a protection fault when it attempts to write to a page whose copy-on-write bit was set during the fork() system call.
How does the kernel handle the copy-on-write bit of a page when the bit is set?
In situations where the copy-on-write bit of a page is set and the page is shared by more than one process, the kernel allocates a new page and copies the content to it; the other processes retain their references to the old page. After copying, the kernel updates the page table entry with the new page number and then decrements the reference count of the old pfdata table entry.
In cases where the copy-on-write bit is set and no other processes are sharing the page, the kernel allows the physical page to be reused by the process. It clears the copy-on-write bit and disassociates the page from its disk copy (if one exists), because another process may share the disk copy. Then it removes the pfdata table entry from the page queue, as the new copy of the virtual page is not on the swap device. It decrements the swap-use count for the page and, if the count drops to 0, frees the swap space.
For which kind of fault is the page checked first?
The page is first checked for a validity fault. If the page is found to be invalid (valid bit clear), the validity fault handler is invoked first and the process incurs a validity page fault. The kernel handles the validity fault, and the process will then incur the protection fault if one is present.
In what way does the protection fault handler conclude?
After finishing execution, the protection fault handler sets the modify and protection bits and clears the copy-on-write bit. It recalculates the process priority and checks for signals.
How does the kernel handle both the page stealer and the fault handler?
The page stealer and the fault handler thrash because of a shortage of memory. If the sum of the working sets of all processes is greater than the physical memory, the fault handler will usually sleep because it cannot allocate pages for a process. This reduces system throughput because the kernel spends too much time in overhead, rearranging memory at a frantic pace.
Explain the concept of
Reentrancy.
It is a useful, memory-saving technique for multiprogrammed timesharing systems. A Reentrant Procedure is one in which multiple users can share a single copy of a program during the same period. Reentrancy has 2 key aspects: the program code cannot modify itself, and the local data for each user process must be stored separately. Thus, the permanent part is the code, and the temporary part is the pointer back to the calling program plus the local variables used by that program. Each execution instance is called an activation. It executes the code in the permanent part, but has its own copy of local variables/parameters. The temporary part associated with each activation is the activation record. Generally, the activation record is kept on the stack.
Note:
A reentrant procedure can be
interrupted and called by an interrupting program, and still execute correctly
on returning to the procedure.
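As a sketch, the following C fragment contrasts a reentrant function (which uses only its parameters and stack locals) with a non-reentrant one that keeps shared state in a static variable:

    /* Reentrant: uses only its parameter and local (stack) variables,
       so many activations can safely share one copy of the code. */
    int square(int x)
    {
        int result = x * x;     /* private to each activation record */
        return result;
    }

    /* Not reentrant: the static variable is shared by all callers, so
       concurrent or interrupting activations can corrupt each other. */
    int counted_square(int x)
    {
        static int calls = 0;   /* one copy for the whole program */
        calls++;
        return x * x;
    }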
Explain Belady's Anomaly.
Also called FIFO anomaly.
Usually, on increasing the number of frames allocated to a process' virtual
memory, the process execution is faster, because fewer page faults occur.
Sometimes, the reverse happens, i.e., the execution time increases even when
more frames are allocated to the process. This is Belady's Anomaly. This is
true for certain page reference patterns.
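The classical illustration uses the reference string 1,2,3,4,1,2,5,1,2,3,4,5: FIFO produces 9 faults with 3 frames but 10 faults with 4 frames. The following self-contained C simulation reproduces those counts:

    #include <stdio.h>

    /* Count page faults for a reference string under FIFO replacement. */
    static int fifo_faults(const int *refs, int n, int frames) {
        int mem[16];                 /* resident pages (assumes frames <= 16) */
        int next = 0, used = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (mem[j] == refs[i]) { hit = 1; break; }
            if (!hit) {
                faults++;
                if (used < frames)
                    mem[used++] = refs[i];      /* free frame available */
                else {
                    mem[next] = refs[i];        /* evict the oldest page */
                    next = (next + 1) % frames;
                }
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
        return 0;
    }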
What is a binary semaphore?
What is its use?
A binary semaphore is one which takes only 0 and 1 as values. It is used to implement mutual exclusion and to synchronize concurrent processes.
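A minimal POSIX sketch (compile with -pthread): the semaphore is initialized to 1 and only ever takes the values 0 and 1, giving mutual exclusion around the counter update:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;                   /* binary semaphore: value is always 0 or 1 */
    long counter = 0;

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);      /* enter critical section (1 -> 0) */
            counter++;
            sem_post(&mutex);      /* leave critical section (0 -> 1) */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);            /* initial value 1 = unlocked */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 200000 with mutual exclusion */
        sem_destroy(&mutex);
        return 0;
    }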
What is thrashing?
It is a phenomenon in
virtual memory schemes when the processor spends most of its time swapping
pages, rather than executing instructions. This is due to an inordinate number
of page faults.
List the Coffman's
conditions that lead to a deadlock.
·
Mutual Exclusion: Only one process may use a critical resource at a
time.
·
Hold & Wait: A process may be allocated some resources while
waiting for others.
·
No Pre-emption: No resource can be forcibly removed from a process holding it.
·
Circular Wait: A closed chain of processes exists such that each process holds at least one resource needed by another process in the chain.
What are short-, long- and
medium-term scheduling?
Long term scheduler
determines which programs are admitted to the system for processing. It
controls the degree of multiprogramming.
Once admitted, a job becomes a process.
Medium term scheduling is
part of the swapping function. This relates to processes that are in a blocked
or suspended state. They are swapped out of real-memory until they are ready to
execute. The swapping-in decision is based on memory-management criteria.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. This scheduler is invoked whenever an event occurs. It may lead to interruption of one process by preemption.
What are turnaround time and
response time?
Turnaround time is the
interval between the submission of a job and its completion. Response time is
the interval between submission of a request, and the first response to that
request.
What are the typical elements
of a process image?
·
User data: Modifiable part of user space. May include program data,
user stack area, and programs that may be modified.
·
User program: The instructions to be executed.
·
System Stack: Each process has one or more LIFO stacks associated with
it. Used to store parameters and calling addresses for procedure and system
calls.
·
Process control Block (PCB): Info needed by the OS to control
processes.
What is the Translation
Lookaside Buffer (TLB)?
In a cached system, the base addresses of the last few referenced pages are maintained in registers called the TLB, which aids faster lookup. The TLB contains those page-table entries that have been most recently used. Normally, each virtual memory reference causes 2 physical memory accesses: one to fetch the appropriate page-table entry, and one to fetch the desired data. Using a TLB in between, this is reduced to just one physical memory access in the case of a TLB hit.
What is the resident set and
working set of a process?
Resident set is that portion
of the process image that is actually in real-memory at a particular instant.
Working set is that subset of resident set that is actually needed for
execution. (Relate this to the variable-window size method for swapping
techniques.)
When is a system in safe state?
The set of dispatchable
processes is in a safe state if there exists at least one temporal order in
which all processes can be run to completion without resulting in a deadlock.
What is cycle stealing?
We encounter cycle stealing
in the context of Direct Memory Access (DMA). Either the DMA controller can use
the data bus when the CPU does not need it, or it may force the CPU to
temporarily suspend operation. The latter technique is called cycle stealing.
Note that cycle stealing can be done only at specific break points in an
instruction cycle.
What is meant by
arm-stickiness?
If one or a few processes
have a high access rate to data on one track of a storage disk, then they may
monopolize the device by repeated requests to that track. This generally happens
with most common device scheduling algorithms (LIFO, SSTF, C-SCAN, etc).
High-density multisurface disks are more likely to be affected by this than low
density ones.
What are the stipulations of
C2 level security?
C2 level security provides
for:
·
Discretionary Access Control
·
Identification and Authentication
·
Auditing
·
Resource reuse
What is busy waiting?
The repeated execution of a
loop of code while waiting for an event to occur is called busy-waiting. The
CPU is not engaged in any real productive activity during this period, and the
process does not progress toward completion.
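A minimal C11 sketch of busy waiting on a flag (the flag name 'ready' and the setter thread are assumptions for illustration); the loop consumes CPU cycles doing no useful work until another context sets the flag:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    atomic_int ready = 0;              /* flag set by another thread */

    void *setter(void *arg) {
        (void)arg;
        sleep(1);                      /* simulate the awaited event */
        atomic_store(&ready, 1);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, setter, NULL);

        /* Busy waiting: the CPU spins here until the other thread sets 'ready'. */
        while (atomic_load(&ready) == 0)
            ;                          /* spin */

        puts("event observed");
        pthread_join(t, NULL);
        return 0;
    }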
Explain the popular
multiprocessor thread-scheduling strategies.
·
Load Sharing: Processes are not assigned
to a particular processor. A global queue of threads is maintained. Each
processor, when idle, selects a thread from this queue. Note that load balancing refers to a scheme where work
is allocated to processors on a more permanent basis.
·
Gang Scheduling: A set of related threads is
scheduled to run on a set of processors at the same time, on a 1-to-1 basis.
Closely related threads / processes may be scheduled this way to reduce
synchronization blocking, and minimize process switching. Group scheduling
predated this strategy.
·
Dedicated processor
assignment:
Provides implicit scheduling defined by assignment of threads to processors.
For the duration of program execution, each program is allocated a set of
processors equal in number to the number of threads in the program. Processors
are chosen from the available pool.
·
Dynamic scheduling: The number of threads in a program can be altered during the course of execution.
When does the condition
'rendezvous' arise?
In message passing, it is the condition in which both the sender and the receiver are blocked until the message is delivered.
What is a trap and trapdoor?
Trapdoor is a secret
undocumented entry point into a program used to grant access without normal
methods of access authentication. A trap is a software interrupt, usually the
result of an error condition.
What are local and global
page replacements?
Local replacement means that
an incoming page is brought in only to the relevant process' address space.
Global replacement policy allows any page frame from any process to be
replaced. The latter is applicable to variable partitions model only.
Define latency, transfer and
seek time with respect to disk I/O.
Seek time is the time
required to move the disk arm to the required track. Rotational delay or
latency is the time it takes for the beginning of the required sector to reach
the head. Sum of seek time (if any) and latency is the access time. Time taken
to actually transfer a span of data is transfer time.
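As an illustrative calculation (the figures are hypothetical): with an average seek time of 8 ms and a 7,200 rpm disk, one revolution takes about 8.3 ms, so the average rotational latency is roughly 4.2 ms; the average access time is therefore about 12.2 ms, and the transfer time for the requested data is added on top of that.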
Describe the Buddy system of
memory allocation.
Free memory is maintained in
linked lists, each of equal sized blocks. Any such block is of size 2^k. When
some memory is required by a process, the block size of next higher order is
chosen, and broken into two. Note that the two such pieces differ in address
only in their kth bit. Such pieces are called buddies. When any used block is
freed, the OS checks to see if its buddy is also free. If so, it is rejoined,
and put into the original free-block linked-list.
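Because buddies of size 2^k differ only in bit k of their offset, finding a block's buddy is a single XOR. A minimal C sketch:

    #include <stdio.h>

    /* Given a block's offset from the start of the managed area and its
       order k (block size = 2^k bytes), the buddy's offset differs only
       in bit k. */
    static unsigned long buddy_of(unsigned long offset, unsigned k) {
        return offset ^ (1UL << k);
    }

    int main(void) {
        /* A 64-byte block (k = 6) at offset 128 has its buddy at offset 192. */
        printf("%lu\n", buddy_of(128, 6));   /* prints 192 */
        printf("%lu\n", buddy_of(192, 6));   /* prints 128 */
        return 0;
    }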
What is time-stamping?
It is a technique proposed
by Lamport, used to order events in a distributed system without the use of
clocks. This scheme is intended to order events consisting of the transmission
of messages. Each system 'i' in the network maintains a counter Ci. Every time
a system transmits a message, it increments its counter by 1 and attaches the time-stamp
Ti to the message. When a message is received, the receiving system 'j' sets
its counter Cj to 1 more than the maximum of its current value and the incoming
time-stamp Ti. At each site, the ordering of messages is determined by the
following rules: for messages x from site i and y from site j, x precedes y if one of the following conditions holds: (a) Ti < Tj, or (b) Ti = Tj and i < j.
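A minimal C sketch of the two counter rules described above (the struct and function names are assumptions for illustration):

    /* One site in the network: its counter Ci and its site id i. */
    typedef struct { int counter; int id; } site_t;

    /* On transmission: increment the counter and attach it as the time-stamp. */
    int send_message(site_t *s) {
        s->counter += 1;
        return s->counter;
    }

    /* On reception: Cj = max(Cj, Ti) + 1. */
    void receive_message(site_t *s, int ts) {
        if (ts > s->counter)
            s->counter = ts;
        s->counter += 1;
    }

    /* Ordering rule: message x from site i precedes message y from site j
       if Ti < Tj, or Ti == Tj and i < j (tie-break on site id). */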
How are the wait/signal
operations for monitor different from those for semaphores?
If a process in a monitor signals and no task is waiting on the condition variable, the signal is lost; this allows easier program design. Whereas with semaphores, every operation affects the value of the semaphore, so the wait and signal operations must be perfectly balanced in the program.
In the context of memory
management, what are placement and replacement algorithms?
Placement algorithms
determine where in available real-memory to load a program. Common methods are
first-fit, next-fit, best-fit. Replacement algorithms are used when memory is
full, and one process (or part of a process) needs to be swapped out to
accommodate a new program. The replacement algorithm determines which are the
partitions to be swapped out.
In loading programs into
memory, what is the difference between load-time dynamic linking and run-time
dynamic linking?
For load-time dynamic
linking: Load module to be loaded is read into memory. Any reference to a
target external module causes that module to be loaded and the references are
updated to a relative address from the start base address of the application
module.
With run-time dynamic linking: some of the linking is postponed until a reference is actually made during execution; then the correct module is loaded and linked.
What are demand- and
pre-paging?
With demand paging, a page
is brought into memory only when a location on that page is actually referenced
during execution. With pre-paging, pages other than the one demanded by a page
fault are brought in. The selection of such pages is done based on common
access patterns, especially for secondary memory devices.
Paging is a memory management function, while multiprogramming is a processor management function; are the two interdependent?
Yes.
What is page cannibalizing?
Page swapping or page
replacements are called page cannibalizing.
What has triggered the need
for multitasking in PCs?
·
Increased speed and memory capacity of microprocessors, together with support for virtual memory, and
·
Growth of client server
computing
What are the four layers that Windows NT has in order to achieve independence?
·
Hardware abstraction layer
·
Kernel
·
Subsystems
·
System Services.
What is SMP?
To achieve maximum
efficiency and reliability a mode of operation known as symmetric
multiprocessing is used. In essence, with SMP any process or thread can be assigned to any processor.
What are the key object
oriented concepts used by Windows NT?
·
Encapsulation
·
Object class and instance
Is Windows NT a full-blown object-oriented operating system? Give reasons.
No, Windows NT is not, because it is not implemented in an object-oriented language, its data structures reside within one executive component and are not represented as objects, and it does not support object-oriented capabilities.
What is a drawback of MVT?
It does not have features like:
·
ability to support multiple processors
·
virtual storage
·
source level debugging
What is process spawning?
When the OS creates a process at the explicit request of another process, this action is called process spawning.
How many jobs can be run
concurrently on MVT?
15 jobs
List out some reasons for
process termination.
·
Normal completion
·
Time limit exceeded
·
Memory unavailable
·
Bounds violation
·
Protection error
·
Arithmetic error
·
Time overrun
·
I/O failure
·
Invalid instruction
·
Privileged instruction
·
Data misuse
·
Operator or OS intervention
·
Parent termination.
What are the reasons for
process suspension?
·
swapping
·
interactive user request
·
timing
·
parent process request
What is process migration?
It is the transfer of a sufficient amount of the state of a process from one machine to the target machine.
What is mutant?
In Windows NT a mutant
provides kernel mode or user mode mutual exclusion with the notion of
ownership.
What is an idle thread?
The special thread a
dispatcher will execute when no ready thread is found.
What is FtDisk?
It is a fault tolerance disk
driver for Windows NT.
What are the possible states a thread can have?
·
Ready
·
Standby
·
Running
·
Waiting
·
Transition
·
Terminated.
What are rings in Windows
NT?
Windows NT uses a protection mechanism called rings, provided by the processor, to implement separation between user mode and kernel mode.
What is Executive in Windows
NT?
In Windows NT, executive
refers to the operating system code that runs in kernel mode.
What are the sub-components
of I/O manager in Windows NT?
·
Network redirector/ Server
·
Cache manager.
·
File systems
·
Network driver
·
Device driver
What are DDKs? Name an operating system that includes this feature.
DDKs are device driver kits, which are equivalent to SDKs for writing device drivers. Windows NT includes DDKs.
What level of security does Windows NT meet?
C2
level security.