31 May 2008
How expensive is thread context switching?
In the Linux 2.6 NPTL library, which is where Tyma and Manson ran their tests, context switching was not expensive at all. Even with a thousand threads competing for Core Duo CPU cycles, the context switching wasn't even noticeable.
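You can get a feel for this with a crude micro-benchmark: start a thousand threads that do nothing but yield, so the scheduler is forced to keep switching between them, and time the whole run. This is only a sketch with numbers I've picked myself - it is not Tyma and Manson's test - but it's enough to see that the per-switch cost is tiny.

// Rough sketch: many threads doing nothing but yielding, so most of the elapsed
// time is scheduling and context switching rather than useful work.
public class ContextSwitchSketch {
    public static void main(String[] args) throws InterruptedException {
        final int threadCount = 1000;       // assumption: same order of magnitude as above
        final int yieldsPerThread = 10000;  // assumption: arbitrary amount of switching per thread
        Thread[] workers = new Thread[threadCount];
        long start = System.nanoTime();
        for (int i = 0; i < threadCount; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < yieldsPerThread; j++) {
                        Thread.yield();     // invite the scheduler to switch us out
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println(threadCount + " threads finished in " + elapsedMs + " ms");
    }
}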
Frameworks and the danger of a grand design
This isn't really something that can be taught, at least not in a way that drums the lesson home. You have to experience the pain of creating an over-designed "framework for the sake of it", believing in all good faith that your grand design will help, not hinder, maintainability.
The grand design mindset isn't just the application of an anti-pattern, or even just the inappropriate use of a normally well-behaved design pattern. The mindset is the overuse of patterns, carefully cementing nano-thin layers of indirection atop each other like a process in a chip fabrication plant that can't be shut down. It's the naïve belief that if X is good then 100X must be better every time.
Luckily this mindset is easy to spot. Your team members will be busy creating a beautiful but over-designed system with enums, annotations, closures and all the latest language features, loosely coupled classes and several hundred pluggable frameworks when a well-placed isThisTheRightValue() method would probably have sufficed.
Picture a pluggable framework that only ever has one plug. You'll see a comment in the code like, "Later this could be applied to other parts of the system."
If there's neither time nor a compelling reason to apply the pluggable framework to other parts of the system in this iteration, then it's likely the programmer is going off on a design pattern hike - exploring their talents - and the framework should be struck out of the code. It's just more complexity to maintain.
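To make that concrete, here's an invented illustration - none of these names come from a real codebase - of a pluggable framework with exactly one plug, next to the method that would actually have done the job.

// Over-designed: a "pluggable" validation framework that only ever has one plug.
interface ValueValidationStrategy {
    boolean isValid(String value);
}

class ValidationStrategyRegistry {
    // "Later this could be applied to other parts of the system."
    private final java.util.Map<String, ValueValidationStrategy> strategies =
            new java.util.HashMap<String, ValueValidationStrategy>();

    void register(String key, ValueValidationStrategy strategy) {
        strategies.put(key, strategy);
    }

    boolean validate(String key, String value) {
        return strategies.get(key).isValid(value);
    }
}

// What the problem actually needed: one well-placed method.
class OrderForm {
    boolean isThisTheRightValue(String value) {
        return value != null && value.trim().length() > 0;
    }
}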
Of course, some programmers will never learn: they like writing code too much. Lots of it, as if they're paid in lines of code, or reviewed that way. Or maybe it's a macho thing. Pursuing your own grand design will do that for you, but it's better to solve problems with little code. Less is more.
Reducing Coupling Through Unit Tests
Interesting article:
Low coupling refers to a relationship in which one module interacts with another module through a stable interface and does not need to be concerned with the other module's internal implementation.
Unit testing under any name is a good test of the ability to call your code in isolation.
If you have written code with low coupling, it should be easy to unit test. If you have written code with high degrees of coupling, you're likely to be in for a world of pain as you try to shoehorn the code into your test harness.
Is the code highly cohesive? That is, does each module carry a single, reasonably simple responsibility, and is all the code with the same responsibility combined in a single module? If code implementing a single feature of your application is littered all over the place, or if your methods and classes try to do many different things, you almost invariably end up with a lot of coupling between them, so code with low cohesion is a big red flag alerting you to the likelihood of high coupling as well.
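As a rough sketch of the happy case (the names here are invented), code written against a small, stable interface barely needs a test harness at all:

// Low coupling in practice: the report depends on a small, stable interface
// and knows nothing about where the orders actually come from.
interface OrderSource {
    int countOrdersToday();
}

class OrderReport {
    private final OrderSource orders;

    OrderReport(OrderSource orders) {
        this.orders = orders;
    }

    String summarise() {
        return "Orders today: " + orders.countOrdersToday();
    }
}

class OrderReportTest {
    public static void main(String[] args) {
        // The "test harness" is just a canned OrderSource - no database, no container.
        OrderReport report = new OrderReport(new OrderSource() {
            public int countOrdersToday() {
                return 3;
            }
        });
        System.out.println("Passed: " + "Orders today: 3".equals(report.summarise()));
    }
}

The highly coupled version would construct its own database connection inside summarise(), and the only way to test it would be to stand up that database.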
10 May 2008
Possibly a useful static analysis tool that doesn't spam you with issues
There are a lot of static analysis tools out there, but FindBugs is unique. Where Checkstyle will raise 500 issues, and PMD 100, FindBugs will only raise 10 - but you damn well better look at them carefully!
That is a slight over-simplification, but it does reflect the philosophy of FindBugs. FindBugs uses more sophisticated analysis techniques than tools like PMD and Checkstyle, working at the bytecode level rather than with the source code, and it is more focused on finding the highest-priority and potentially dangerous issues.
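To give a flavour of what that means, here are a few bugs of the kind this style of bytecode-level analysis is designed to catch. This is my own illustration, not something lifted from the FindBugs documentation.

// My own illustration of the sort of defect bytecode-level analysis is good at spotting.
class SuspiciousCode {

    // Reference comparison of strings: happens to work for interned literals,
    // fails for strings built at runtime.
    boolean isAdmin(String role) {
        return role == "admin";          // should be "admin".equals(role)
    }

    // The return value of trim() is silently discarded, so nothing is cleaned.
    String clean(String input) {
        input.trim();                    // should be input = input.trim()
        return input;
    }

    // The null check implies get() can return null, but the code dereferences it anyway.
    int length(java.util.Map<String, String> map, String key) {
        String value = map.get(key);
        if (value == null) {
            System.out.println("missing " + key);
        }
        return value.length();           // NullPointerException when value is null
    }
}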
Can Red Hat's realtime kernel reduce latency?
An institution may be running the fastest and most efficient feed handlers, but its servers may be running a standard Linux distribution. We’ve seen an improvement in a number of cases when a switch is made to, for example, the real-time version of Red Hat. It’s not as though an institution needs to have its private team of kernel hackers; today one gets pretty standard distributions, which can make a significant difference.
Java to get better runtime optimisations
Two features were given the spotlight to show how “Java is moving closer to the bare metal as it evolves”. By that, AMD means the JVM will be using an increasing number of instruction sets and features for tuning purposes going forward, and we could ultimately see the JVM being run directly on a hypervisor, sans operating system. The features are:
* Light-weight profiling (LWP): Designed to improve software parallelism through new hardware features in future versions of AMD processors. It will allow technologies like Java to more easily benefit from the multi-core processors that are now being designed and deployed.
* Advanced Synchronization Facility (ASF): Created to increase concurrency performance; it introduces hardware read barriers to help with Garbage Collection.
Tool for tracing application behaviour for latency analysis - another tool
04 May 2008
Some software to look at for event processing
Aleri
Apama
Coral8
Esper
Portware
Streambase
Messaging
29West
Apache ActiveMQ
EAI
Apache ServiceMix
Apache Camel
Streaming over web
Caplin
LightStreamer
03 May 2008
ssh tunnels
userA@machineY> ssh -N -f -L 8888:machineZ:8888 userA@machineX
Now, on machineY you can connect to localhost:8888 and automatically be tunnelled to machineZ via machineX.
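Breaking the command down (same hosts and port as the example above):

# -N  don't run a remote command, just forward the port
# -f  drop into the background once the connection is up
# -L 8888:machineZ:8888
#     listen on port 8888 on machineY and forward every connection,
#     via machineX, to port 8888 on machineZ
userA@machineY> ssh -N -f -L 8888:machineZ:8888 userA@machineX

# still on machineY: anything sent to localhost:8888 now reaches machineZ:8888
userA@machineY> telnet localhost 8888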