Matt Bilodeau and Brandon Shaw's Brain Project
Brain98 Source Files:

  brain98.cpp   - Brain Class file, used to hold the processor's main routines
  Brain98OS.cpp - Brain's Shell Interpreter
  mem.cpp       - Brain's Memory Class, used for elements of the memory list
  memmang.cpp   - Brain's Memory Management Class
  messages.cpp  - Brain's Message Class
  pcb.cpp       - Brain's PCB Class, used to hold information about a process
  sched.cpp     - Brain's Scheduler Class, used to schedule process execution
  sema.cpp      - Brain's Semaphore Class
  pagetable.h   - Brain's Page Table Class
  pageindex.h   - Brain's Page Index Class (per-process index into the page table)
  page.h        - Brain's Page Class
  SD.b98        - Software Delay Program
  RM.b98        - Modulus Division via Recursive Subtraction (Initialization)
  DI.b98        - Modulus Division via Recursive Subtraction (Implementation)
  EI.b98        - Modulus Division via Recursive Subtraction (Output)
Are some programs more resilient to changes in m and n than others? Why?
Yes, some programs are more resilient to changes in m and n than others. An example would be our sequential programs, whose memory references run strictly in order.
Therefore, no matter what size m and n are, we can predict the maximum number of page faults (each reference that asks for a page of memory not currently resident) these sequential programs will generate under on-demand paging.
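That bound can be written as follows, using S for the total number of memory words the program touches and P for the page size in words (both symbols are chosen here only for illustration; they are not the project's m and n):

    maximum page faults = ceil(S / P)

In other words, a strictly sequential program faults exactly once per page, the first time that page is touched, regardless of how many page frames happen to be resident.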
In programs with many memory references that are not sequential, it would be impossible to predict how many page faults there would be without knowing more about the code and n.
Show that when the available memory is doubled, the mean interval between page faults is constant.
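One way to see this for the sequential programs above, under the same assumptions as the bound just given: a demand-paged sequential program faults exactly once per page, the first time the page is touched, so a fault occurs once every P memory references no matter how many frames are free. Doubling the available memory changes neither P nor the access pattern, so the mean interval between page faults stays at P references, i.e. constant.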
Does choosing m = 1 and n = 1 cause any problems with your system? Why? Why not?
Our system doesn't have any problems when m = 1 and n = 1. However, it slows our system down considerably, because for every instruction we want to execute we generate at least one page fault when we increment the instruction count. These page faults take CPU time away from the running process and thus slow the entire system down.
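To make both answers concrete, here is a minimal sketch of a demand-paging fault counter. It is not part of Brain98: the function countFaults, the parameters pageSize and numFrames, and the FIFO replacement policy are all choices made here purely for illustration. The sketch shows that a strictly sequential reference string faults once per page no matter how many frames are available, and that a page size of 1 produces a fault on every new address touched.

    // Minimal demand-paging fault counter (an illustration, not Brain98 code).
    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <unordered_set>
    #include <vector>

    // Count page faults for a reference string under FIFO replacement.
    std::size_t countFaults(const std::vector<std::size_t>& addresses,
                            std::size_t pageSize, std::size_t numFrames) {
        std::unordered_set<std::size_t> resident;  // pages currently in memory
        std::deque<std::size_t> fifo;              // eviction order
        std::size_t faults = 0;
        for (std::size_t addr : addresses) {
            std::size_t page = addr / pageSize;
            if (resident.count(page)) continue;    // hit: page already resident
            ++faults;                              // miss: demand the page in
            if (fifo.size() == numFrames) {        // memory full: evict oldest
                resident.erase(fifo.front());
                fifo.pop_front();
            }
            fifo.push_back(page);
            resident.insert(page);
        }
        return faults;
    }

    int main() {
        // A strictly sequential program touching addresses 0..63 once each.
        std::vector<std::size_t> seq;
        for (std::size_t a = 0; a < 64; ++a) seq.push_back(a);

        // One fault per page, regardless of how many frames are available.
        std::cout << countFaults(seq, 8, 2)  << '\n';  // 64/8 = 8 faults
        std::cout << countFaults(seq, 8, 16) << '\n';  // still 8 faults
        std::cout << countFaults(seq, 1, 1)  << '\n';  // page size 1: 64 faults
    }

Swapping FIFO for any other replacement policy would not change the sequential counts, since a sequential program never references a page again once it has moved past it.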