
Showing posts from July, 2024

CST334 Week 6 Journal Entry

This week we learned about semaphores and their role in synchronizing processes. Semaphores are synchronization tools that manage concurrent processes by using a counter to control access to shared resources. They fall into two categories: binary semaphores, which take only the values 0 and 1 and act like a lock, and counting semaphores, which take non-negative integer values to allow multiple processes to share a finite pool of resources. Understanding semaphores is essential for ensuring that concurrent processes do not conflict or corrupt data. We also learned how to use semaphores in code: we wrote simple programs that initialize a semaphore and then perform wait and signal operations on it. By controlling the semaphore, we ensured that resources were accessed in a controlled manner. We also learned a little about other synchronization variables, such as monitors, and were able to see the tradeoffs between the different approaches.

CST334 Week 5 Journal Entry

This week's topic dealt with concurrency and covered the benefits of concurrent programming, an important concept in modern computing. Concurrent programming allows multiple tasks to be executed simultaneously, significantly improving the efficiency and performance of applications. We learned how to create multiple threads for program execution and how to manage those threads properly to prevent race conditions. Understanding concurrency is crucial to ensuring the correctness of concurrent programs. A race condition can occur when the behavior of software depends on the relative timing of events, which often leads to unpredictable results. One method of preventing a race condition is to use locks to achieve mutual exclusion, ensuring that only one thread has access to a section of code at a time. We also learned about condition variables, which are used for thread synchronization and provided a practical example of how to manage concurrency in programs.

CST334 Week 4 Journal Entry

During this week of CST334 we continued to learn about memory virtualization, focusing on the technique of paging. Paging divides virtual memory into fixed-size pages that are mapped to physical memory frames, reducing fragmentation. We practiced using page tables and multi-level paging to learn how the Memory Management Unit (MMU) translates virtual addresses to physical addresses. We also learned about temporal and spatial locality and calculated average memory access times under paging. Finally, we explored the mechanics of swapping in operating systems, simulating various page replacement policies and analyzing their effects. This week helped me continue to build on my understanding of efficient memory management and virtualization techniques in operating systems.

CST334 Journal Entry Week 3

This week was about learning how the Operating System (OS) manages memory. It was somewhat challenging to understand, but I believe I have a basic grasp of how it works. I think one of the important concepts was how the OS virtualizes memory for processes. Each process is given access to memory in such a way that it seems like that process is the only one running; as far as the process is concerned, all of the memory is available for it to use. However, the OS does work on its side to manage each process's memory access, allowing the limited physical memory to be shared among a multitude of processes. This management of memory by the OS is abstracted away from each process and allows each to run without any real concern as to where its data is physically stored in memory.

CST334 Journal Entry Week 2

In the second week of CST334 Operating Systems, we learned about how the CPU executes multiple processes at a time through context switching. I learned that there are several scheduling policies that determine how the CPU handles multiple processes. With first in, first out (FIFO), the first process to arrive is the first to finish; with last in, first out (LIFO), the most recently arrived process is completed first. Shortest job first (SJF) looks at all the current processes and picks the one that is shortest, which is closely related to shortest time-to-completion, which picks the process with the least amount of time remaining. One additional method was round robin, where the CPU continually switches between processes based on a given time interval. I found it interesting to learn about the various ways an operating system can be programmed to handle the many processes it will be expected to run, and I felt that I learned a lot about the tradeoffs of each method.