In this course we place a strong emphasis on using locks and condition variables to implement concurrent programs: we discuss them extensively in lecture, you use them to solve concurrency problems in the homework and the project, and you even implement them yourself in the project (plus rumor has it that you are likely to encounter the material on the exams).
We focus on locks and condition variables because they are the mechanisms you are most likely to use later in your career when writing your own concurrent programs in a modern programming language. To illustrate the point, this page shows locks and condition variables being used to solve the same concurrency problem in a variety of modern languages:
Each of the following programs implements the "alternating threads" pattern that we discussed in the Monitors lecture (and is one of the example test programs for the project). The programs create two child threads which coordinate with each other to explicitly alternate their execution in the face of arbitrary context switches. More specifically, the synchronization implements the property that after one thread executes an iteration of the loop, it cannot execute another iteration until the other thread has executed an iteration of the loop.
Similar to the original design goals of synchronization in the Mesa language and Pilot operating system, the baseline implementation of locks and condition variables in Java is part of the language specification. Every object implicitly has a lock associated with it, and all objects support condition variable semantics. (Indeed, more than half of the methods in Java's base Object class are for synchronization!) For Java methods labeled synchronized, the compiler generates the code for locking and unlocking the implicit lock associated with the object. Similarly, the Object methods for condition variable operations use the implicit object lock. The original motivation for having the compiler generate this code was to prevent programmer mistakes when using locks.
The program creates two child threads, which alternate with each other. Since context switches can happen at any time, as ultimately determined by the scheduler, the program adds explicit calls to yield to emphasize the point. The program uses the lock associated with the PingPong1 class object (hence the use of synchronized static). Note that the program does not contain any code for using locks; the compiler generates that code for us for each synchronized method.
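The course's actual listing is not reproduced here, but a minimal sketch of such a program might look like the following. The class name PingPong1 comes from the text; the helper names, iteration count, and logging are assumptions made for illustration:

```java
// Sketch of the alternating-threads program using Java's implicit locks.
import java.util.ArrayList;
import java.util.List;

public class PingPong1 {
    private static boolean pingTurn = true;  // whose turn is it?

    // synchronized static: the compiler generates lock/unlock code for the
    // lock on the PingPong1 class object; wait/notify use that same lock.
    private static synchronized void waitTurn(boolean isPing)
            throws InterruptedException {
        while (pingTurn != isPing) {    // Mesa semantics: re-check after waking
            PingPong1.class.wait();
        }
    }

    private static synchronized void passTurn() {
        pingTurn = !pingTurn;
        PingPong1.class.notifyAll();
    }

    private static void play(boolean isPing, String name, int iters,
                             List<String> log) {
        try {
            for (int i = 0; i < iters; i++) {
                waitTurn(isPing);
                // Only the thread whose turn it is reaches this point, so the
                // turn protocol (not extra locking) orders the appends.
                log.add(name + " " + i);
                Thread.yield();  // a switch here still cannot break alternation
                passTurn();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    static List<String> runDemo(int iters) {
        pingTurn = true;
        List<String> log = new ArrayList<>();
        Thread ping = new Thread(() -> play(true, "ping", iters, log));
        Thread pong = new Thread(() -> play(false, "pong", iters, log));
        ping.start();
        pong.start();
        try {
            ping.join();
            pong.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log;
    }

    public static void main(String[] args) {
        for (String line : runDemo(5)) {
            System.out.println(line);
        }
    }
}
```

Even with the yield inserted between taking a turn and passing it, the output is a strict alternation, because the waiting thread cannot proceed until pingTurn flips.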
In 2004, Java version 1.5 (Java SE 5) introduced explicit Lock and Condition classes for synchronization. If the language already has built-in synchronization primitives, why add new classes for them? These classes are more flexible than the built-in primitives and handle a wider range of behaviors, such as programmer-defined semantics for scheduling the threads waiting on a condition variable (e.g., waking up after a timeout). A tradeoff, though, is that programmers have to explicitly manage locks (such methods are no longer synchronized). In doing so, Java synchronization becomes comparable to other modern languages, which abandoned the approach of having the compiler generate the code to manage locks.
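As a sketch of the explicit style (again, not the course's actual listing; the class name PingPong2 and helpers are assumptions), the same program might look like this with ReentrantLock and Condition. Note how the programmer, not the compiler, must pair every lock() with an unlock():

```java
// Sketch of the alternating pattern using explicit Lock/Condition objects.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class PingPong2 {
    private static final Lock lock = new ReentrantLock();
    private static final Condition turnChanged = lock.newCondition();
    private static boolean pingTurn = true;

    private static void play(boolean isPing, String name, int iters,
                             List<String> log) {
        for (int i = 0; i < iters; i++) {
            lock.lock();               // explicit: nothing is marked synchronized
            try {
                while (pingTurn != isPing) {
                    turnChanged.await();   // Mesa semantics: always re-check
                }
                log.add(name + " " + i);
                pingTurn = !isPing;        // hand the turn to the other thread
                turnChanged.signalAll();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            } finally {
                lock.unlock();         // the programmer must pair this with lock()
            }
        }
    }

    static List<String> runDemo(int iters) {
        pingTurn = true;
        List<String> log = new ArrayList<>();   // guarded by lock
        Thread ping = new Thread(() -> play(true, "ping", iters, log));
        Thread pong = new Thread(() -> play(false, "pong", iters, log));
        ping.start();
        pong.start();
        try {
            ping.join();
            pong.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log;
    }

    public static void main(String[] args) {
        for (String line : runDemo(5)) {
            System.out.println(line);
        }
    }
}
```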
The C# version is very similar to the Java program that uses the Lock and Condition classes. As in Java, every object in C# implicitly has a lock associated with it. In C#, however, the lock and condition methods are encapsulated in a single Monitor class. Note that a Monitor does not include its own lock; instead, its methods use the implicit lock associated with the object passed as a parameter.
C++ originally did not have threading, concurrency, and synchronization as part of the language. The C++11 standard added these features (among many others) to the language.
Rust's syntax may be rather notorious, but if you set it aside, the synchronization pattern and mechanisms are very similar to those in the other languages.
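A sketch of the same pattern in Rust, using std::sync::Mutex and Condvar with the state shared through an Arc; the structure and names are assumptions, not the course's actual listing:

```rust
// Sketch of the alternating-threads pattern in Rust.
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn run_demo(iters: usize) -> Vec<String> {
    // Shared state: (whose turn it is, the log), paired with a Condvar.
    let shared = Arc::new((Mutex::new((true, Vec::new())), Condvar::new()));

    let mut handles = Vec::new();
    for (is_ping, name) in [(true, "ping"), (false, "pong")] {
        let shared = Arc::clone(&shared);
        handles.push(thread::spawn(move || {
            let (lock, cv) = &*shared;
            for i in 0..iters {
                let mut guard = lock.lock().unwrap();
                while guard.0 != is_ping {
                    // Mesa semantics: wait returns a new guard; re-check in a loop.
                    guard = cv.wait(guard).unwrap();
                }
                guard.1.push(format!("{} {}", name, i));
                guard.0 = !is_ping;          // hand the turn over
                cv.notify_all();
            }                                // guard dropped: lock released
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // All clones are gone, so we can take the log back out of the Arc/Mutex.
    Arc::try_unwrap(shared).ok().unwrap().0.into_inner().unwrap().1
}

fn main() {
    for line in run_demo(5) {
        println!("{}", line);
    }
}
```

The ownership machinery (Arc, the MutexGuard returned by lock and wait) is what makes the syntax heavier, but the wait-in-a-loop, flip-the-turn, notify-all structure is exactly the one used in the other versions.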