Problems with the Java 1.4 synchronization model

Java’s synchronized keyword has some benefits. It’s reasonably easy to understand and use, and it generally lets you write thread-safe code without getting too bogged down in the details of what the JVM does “under the hood” to coordinate data. Of course, this is also one of its potential downfalls: a failure to understand exactly what synchronized does has led programmers to use incorrect idioms such as “double-checked locking” (sketched after the list below), or to the notion that “you only need to synchronize on the write, not the read”. But even in an ideal world where every programmer fully understood the synchronized keyword, it has some shortcomings as a synchronization mechanism:

• For many purposes, it’s a fairly heavy-handed means of synchronization. For each object synchronized on, the JVM has to keep track of ‘housekeeping’ information, such as which thread owns the lock and how many times it has acquired it. And recall that every time a lock is acquired or released, at the beginning and end of a synchronized block, cached variables must be synchronized with main memory. There’s no way to tell the JVM “you only need to synchronize this one variable”, nor is the JVM permitted to make such a decision on its own.

• Related to the previous point, there is no way to give performance hints to the JVM about how we expect the code in our synchronized block to behave. We’ll explain this point in more detail below.
• synchronized is an all-or-nothing affair. Once your thread attempts to enter a synchronized block, it will wait indefinitely until the lock becomes available. In the real world, we often want to do things like “wait for up to 2 seconds for the lock on the cache, else don’t bother caching”, for which we need a more complex workaround (see the lock sketch after this list). There’s also the more serious risk of deadlock: two threads each holding on to the lock that the other thread needs in order to continue. In a complex server application that synchronizes on many different objects at many points in the code, guaranteeing that deadlock cannot occur can be a very tricky problem.

• Once a thread does enter a synchronized block, we have no good way of asking the JVM whether our thread had to wait for the lock. (We could time it, but the two calls to System.currentTimeMillis() that this would require can themselves burn up to a millisecond of CPU time, and in any case the result probably isn’t accurate enough.) So from within our normally running program, we can’t profile lock contention and spot bottlenecks; the lock sketch after this list shows the kind of query that would make this possible.
• A limitation of pre-Java 5 as a platform is that the standard class library doesn’t provide implementations of some common synchronization idioms. For example, a common use of synchronization in server applications is to manage a shared resource pool (for example, of database connections). Rather than a simple lock, what we really need in this case is a “permission” system that says “allow up to N threads to hold a lock at any one time” (because there are N resources available in the pool). Idioms such as this can of course be constructed before Java 5 (else nobody would have been using Java to run servers!), but not necessarily very efficiently. And without a standard library implementation, different programmers have been forced to re-invent the wheel, possibly in buggy ways. (A sketch using the Java 5 Semaphore class follows this list.)
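
To make the “incorrect idioms” mentioned in the introduction concrete, here is a minimal sketch of the broken “double-checked locking” pattern. The class name ConnectionManager is invented for illustration; the point is that under the pre-Java 5 memory model the unsynchronized first check can observe a partially constructed object.

    // Broken "double-checked locking" -- unsafe under the pre-Java 5 memory model.
    public class ConnectionManager {
        private static ConnectionManager instance;    // note: pre-Java 5, even 'volatile' would not rescue this idiom

        public static ConnectionManager getInstance() {
            if (instance == null) {                    // first check, outside any lock
                synchronized (ConnectionManager.class) {
                    if (instance == null) {            // second check, inside the lock
                        instance = new ConnectionManager();
                    }
                }
            }
            return instance;                           // another thread may see a half-initialised object here
        }
    }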
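
The “wait up to 2 seconds for the lock on the cache, else don’t bother caching” policy from the list above is exactly the sort of thing the Java 5 java.util.concurrent.locks package makes expressible. The following is only a rough sketch, not production code; the class and method names (TimedCache, putIfLockAvailable and so on) are invented for illustration.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class TimedCache {
        private final ReentrantLock lock = new ReentrantLock();
        private final Map<String, Object> map = new HashMap<String, Object>();

        public void putIfLockAvailable(String key, Object value) throws InterruptedException {
            // Wait at most 2 seconds for the lock; if we can't get it, skip caching.
            if (lock.tryLock(2, TimeUnit.SECONDS)) {
                try {
                    map.put(key, value);
                } finally {
                    lock.unlock();                 // always release in a finally block
                }
            }
        }

        public Object get(String key) {
            // tryLock() with no arguments returns immediately, so a false result also
            // tells us the lock was contended -- information synchronized never exposes.
            if (lock.tryLock()) {
                try {
                    return map.get(key);
                } finally {
                    lock.unlock();
                }
            }
            return null;                           // treat a contended lock as a cache miss
        }

        public int waitingThreads() {
            // ReentrantLock also exposes monitoring hooks, addressing the profiling
            // complaint above; the value is only an estimate.
            return lock.getQueueLength();
        }
    }

Whether treating a contended read as a cache miss is acceptable is an application-level decision; the point is simply that, unlike with synchronized, the decision becomes possible to express at all.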
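
Similarly, the “allow up to N threads to hold a lock at any one time” permission scheme for a resource pool maps directly onto Java 5’s java.util.concurrent.Semaphore. Again this is only a sketch under the assumptions stated in the comments; the pool bookkeeping itself (checkOut/checkIn below) is hypothetical and left as a placeholder.

    import java.util.concurrent.Semaphore;

    public class ConnectionPool {
        private static final int POOL_SIZE = 10;                           // N resources available in the pool
        private final Semaphore permits = new Semaphore(POOL_SIZE, true);  // 'true' = fair (FIFO) ordering

        public Object acquireConnection() throws InterruptedException {
            permits.acquire();             // blocks until one of the N permits is free
            return checkOut();             // hypothetical helper: hand out an idle connection
        }

        public void releaseConnection(Object connection) {
            checkIn(connection);           // hypothetical helper: mark the connection idle again
            permits.release();             // give the permit back so another thread may proceed
        }

        private Object checkOut() { return new Object(); }   // placeholder only
        private void checkIn(Object connection) { }          // placeholder only
    }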


Resource article: https://blog.expertsmind.com/
