16 January 2010

Deliver Polished Presentations Steve Jobs Style - Presentation Tips - Lifehacker

Maven Reactor Plugin Examples

Turns out you can be a lot more selective about which modules are built when you run a Maven reactor build.

From Maven Reactor Plugin - Examples
Consider an ordinary multi-module reactor build:

|-- pom.xml
|-- fooUI
|   `-- pom.xml
|-- barBusinessLogic
|   `-- pom.xml
`-- bazDataAccess
    `-- pom.xml

Suppose project "fooUI" depends on project "barBusinessLogic", which depends on project "bazDataAccess".
fooUI --> barBusinessLogic --> bazDataAccess
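
For reference, the root pom.xml for a layout like this just aggregates the modules. A minimal sketch (the groupId is made up; "my-root-project" matches the name used below):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>            <!-- hypothetical coordinates -->
  <artifactId>my-root-project</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>                <!-- aggregator packaging -->
  <modules>
    <module>fooUI</module>
    <module>barBusinessLogic</module>
    <module>bazDataAccess</module>
  </modules>
</project>
```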

mvn reactor:resume -Dfrom=barBusinessLogic

Suppose you're working on your code and you attempt to build it with mvn install from my-root-project, and you get a test failure in barBusinessLogic. You make additional changes to barBusinessLogic without touching bazDataAccess; since you know bazDataAccess is fine, there's no need to rebuild/test it. reactor:resume will skip over bazDataAccess and pick up the build where you left off in barBusinessLogic. If barBusinessLogic succeeds, it will go on to build fooUI.

mvn reactor:make -Dmake.folders=barBusinessLogic

reactor:make will examine barBusinessLogic and walk down its dependency tree, finding all of the projects that it needs to build. In this case, it will automatically build bazDataAccess and then barBusinessLogic, without building fooUI.

mvn reactor:make-dependents -Dmake.folders=barBusinessLogic

reactor:make-dependents will examine all of the projects in your reactor to find those that depend on barBusinessLogic, and automatically build them and nothing else. In this case, it will automatically build barBusinessLogic and then fooUI.

Suppose you've made a change to barBusinessLogic; you want to make sure you didn't break any of the projects that depend on it. (In this case, you want to make sure you didn't break fooUI, but in a more complex reactor that might not be so obvious.) You also want to avoid rebuilding/testing projects that you know you haven't changed; in this case, you want to avoid building bazDataAccess. reactor:make-dependents does exactly that.

mvn reactor:make-scm-changes

reactor:make-scm-changes determines which files have changed using your SCM (Source Code Management) tool, e.g. Subversion, Perforce, Git, etc. To use it, you'll need to configure an SCM connection in your root project POM file:
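
A sketch of such an <scm> section in the root POM, using the standard connection element (the Subversion URL is a placeholder for your own repository):

```xml
<scm>
  <!-- placeholder URL; point this at your own repository -->
  <connection>scm:svn:http://www.example.com/svn/trunk</connection>
</scm>
```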

All of the reactor plugin goals accept a -Dmake.printOnly argument that you can use to see what the goal would have done without actually doing it. For example:

mvn reactor:make -Dmake.folders=barBusinessLogic -Dmake.printOnly

Running a different goal/lifecycle ("test", "package", "eclipse:eclipse", "clean", etc.)

By default, all of the reactor plugin goals will run mvn install on the appropriate projects. That's a pretty reasonable default, but sometimes you want to run a different command on a bunch of projects. All of the reactor plugin goals will accept a -Dmake.goals argument that will let you run other goals instead. You can separate multiple goals with commas:

mvn reactor:make -Dmake.folders=barBusinessLogic -Dmake.goals=eclipse:eclipse
mvn reactor:make-dependents -Dmake.folders=barBusinessLogic -Dmake.goals=package,clean
mvn reactor:resume -Dfrom=barBusinessLogic -Dmake.goals=test
mvn reactor:resume -Dfrom=barBusinessLogic -Dmake.goals=install,-DskipTests

In other words, the "goals" are just extra command-line parameters passed to the spawned Maven; they don't necessarily have to be "goals."

mvn reactor:make -Dmake.folders=fooUI -Dfrom=barBusinessLogic

When you use reactor:make, you run a subset of projects, but that doesn't mean stuff won't fail halfway through the build. You can resume a reactor:make build from the project that stopped the build by passing -Dfrom to the reactor:make goal.  The -Dfrom argument also works with reactor:make-dependents and reactor:make-scm-changes.

Nested directories

Let's consider a more complex project:

|-- pom.xml
|-- fooUI
|   `-- pom.xml
|-- barBusinessLogic
|   `-- pom.xml
|-- quz
|   |-- pom.xml
|   |-- quzAdditionalLogic
|   |   `-- pom.xml
|   `-- quzUI
|       `-- pom.xml
`-- bazDataAccess
    `-- pom.xml

Again suppose project "fooUI" depends on project "barBusinessLogic", which depends on project "bazDataAccess".

fooUI --> barBusinessLogic --> bazDataAccess

But furthermore, suppose "quzUI" depends on "quzAdditionalLogic", which depends on "barBusinessLogic."

quzUI --> quzAdditionalLogic --> barBusinessLogic --> bazDataAccess

If you try to run mvn reactor:make -Dmake.folders=quzUI, you'll get an error:

mvn reactor:make -Dmake.folders=quzUI
[INFO] Folder doesn't exist: /home/person/svn/trunk/quzUI

Naturally, you'll have to specify the complete relative path to quzUI, like this:

mvn reactor:make -Dmake.folders=quz/quzUI

15 January 2010

Maven AppAssembler

Appassembler generates artifacts that expose your Java app through the Java Service Wrapper.
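
A sketch of the plugin configuration for its assemble goal (the plugin coordinates are the Codehaus Mojo ones; the main class is made up). The JSW-based daemon wrappers come from the plugin's separate generate-daemons goal:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <configuration>
    <programs>
      <program>
        <mainClass>com.example.app.Main</mainClass>  <!-- hypothetical -->
        <name>app</name>                              <!-- generated script name -->
      </program>
    </programs>
  </configuration>
</plugin>
```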


Installing PHProxy under a secure website (https) is a simple way to visit sites that may not be reachable through a corporate firewall.

Install PHProxy in Your Web Space to Access Blocked Sites - Proxy - Lifehacker

HornetQ - another JMS implementation from JBoss

HornetQ - putting the buzz in messaging - JBoss Community
HornetQ is an open source project to build a multi-protocol, embeddable, very high performance, clustered, asynchronous messaging system.

Why should I use HornetQ?

JMS and above - HornetQ supports the JMS 1.1 API and also defines its own messaging API for maximum performance and flexibility. Other protocols are planned for upcoming releases.

Superb performance - HornetQ's class-beating high-performance journal provides persistent messaging at rates normally seen only for non-persistent messaging. Non-persistent messaging performance rocks the boat too.

POJO-based design - HornetQ has been designed using POJOs and minimal third-party dependencies. You choose how you want to use HornetQ: run it stand-alone, integrate it with JBoss Application Server or another Java server/container, or embed it directly inside your own product.

Solid high availability - HornetQ offers server replication and automatic client failover to eliminate lost or duplicated messages in case of server failure.

Flexible clustering - Create clusters of HornetQ servers that know how to load-balance messages. Link geographically distributed clusters over unreliable connections to form a global network. Configure routing of messages in a highly flexible way. Adapt HornetQ to your network topology, not the other way round.

Management - HornetQ provides a comprehensive management API to manage and monitor servers. It is integrated seamlessly into the servers to work in an HA environment.

Notes on Oracle Coherence | Architects Zone

Google Wave

Frequently Asked Questions About Google Wave - Google Wave - Lifehacker
Q: How do you describe what Google Wave is in the fewest words possible?

A: Two words: Google Wave is multimedia wikichat.

Ok, I cheated a little. Wikichat is my made-up word for the combination of document collaboration (wikis) and messaging (chat). Imagine a Wikipedia page that only your workgroup can access and that multiple people can change simultaneously, with live, inline chat embedded in it and the ability to add online multimedia like an image slideshow, videos, maps, polls, a Sudoku game, video conference call, and other interactive widgets. See it? That's Wave.

Q: Why would I use Wave instead of email?

A: You'd use Wave instead of email because you can have real-time, IM-like conversations inside it, and cut out the lag time of asynchronous email communication—you know, when you send an email and have to wait for your recipients to read, reply, and send one back. In Wave, if your recipient is online, you don't have to wait. In fact, your recipient can start typing before you stop. It's wacky.

Q: Then why would I use Wave instead of IM?

A: You'd use Wave instead of instant messenger because you can edit the same text, images, and captions at the same time as someone else. During an instant messenger conversation you pass back and forth a series of single-author, uneditable messages. In Wave, anyone can edit any message (or blip, in Wave-speak). Imagine correcting someone else's typos during a chat, without pointing out to them that they mistyped.

Wave also supports conversation threads, which means that instead of one linear discussion where new messages appear on top or below old ones, you can branch off sub-chats on different topics in one wave.

But mostly you use Wave to collaborate on a single copy of a document with multiple people at the same time.

Q: Then why would I use Wave instead of Google Docs?

A: GDocs is more like collaborative/web-based Microsoft Word, where the object is to create a flat file that gets printed or emailed to someone. Wave is more like a real-time wiki, which creates pages meant to be linked and constantly revised, pages that contain web-based multimedia and interactive gadgets.

In Wave you can drop multimedia like image slide shows, YouTube videos, Google Maps, and countless other gadgets that you can't in Google Docs. Like a wiki (and unlike Google Docs), you can link waves to each other very easily.

Wave is more like a real-time, workgroup Wikipedia than Google Docs or email.

Q: So, what would I actually use Wave for?

A: Wave works when two or more people need to co-write a document. A few common use cases include:

* collaborative meeting, conference, or class notes—whether or not everyone's in the same physical room, several people taking notes in one place is much more efficient than everyone taking their own individual notes
* interviews—each question and answer series can be one thread within the parent interview thread, where the interviewer and interviewee can revise and expand questions and answers inline
* group event planning, like a party, trip, wedding
* co-writing and editing—whether it's books, blogs, brochures, policies
* surveys
* translations
* project management

Biased Locking - Cliff Click

Some interesting comments on biased locking at Cliff Click Jr.’s Blog
Recently I re-did HotSpot's internal locking mechanism for Azul's JVM. The old locking mechanism is approaching 15 years old and features a number of design decisions that are now out-dated:

1. Recursion counts are kept as a NULL word on the stack for every recursion depth (i.e., counting in Base 1 math) in order to save a few instructions and a few bits of memory. Both are now in vast plentiful supply. On the 1st lock of an object, its header is moved into the stack word instead of a NULL, and this means that GC or other locking threads (or threads installing a hash code) all need to find and update the header word - which can now be "displaced". This mechanism is complex, racy and error prone.
2. The existing mechanism requires a strong memory fence after a Compare-And-Swap (CAS) op, but on most machines the CAS also includes a memory fence. I.e., HotSpot ends up fencing *twice* for each lock acquire, once to CAS the header and again moving the displaced header to the stack. Each memory fence costs about a cache-miss on most X86 CPUs.
3. The existing mechanism uses "Thin Locks" to optimize for the very common case of a locked object never being contended. New in Java7, +UseBiasedLocking is on by default. This optimizes the common case even more by not using any fencing for locks which have never (yet) changed threads. (See this nice IBM paper on how to do it). The downside in the OpenJDK implementation is that when an object DOES have to change thread-ownership, the cost is so high that Sun has chosen to disable biased locking for whole classes of locks to avoid future thread-ownership-change costs.
4. When a lock does see contention it "inflates" and then the "inflated" lock is much more expensive than a fast-path "thin lock". So even the smallest bit of contention will cause a lock to be much more expensive than the good case.
5. JVM internal locks and locked Java objects use 2 utterly different code bases. This adds a lot of complexity to an already complex system. The two classes of locks are used in slightly different ways and do have different requirements, BUT they both fundamentally implement a fast-path locking protocol over the OS provided locking abstraction. At Azul Systems, we found that these two locking systems have a lot more in common than they do in difference.
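
Point 1 is about how HotSpot records re-entrant acquisition of the same monitor. A trivial sketch of lock recursion in Java (nothing Azul-specific, just the language feature whose recursion depth the JVM has to track; class and method names are made up):

```java
public class RecursiveLock {
    private static final Object LOCK = new Object();

    // Each recursive call re-acquires LOCK. The JVM must record the
    // recursion depth so the monitor is only released when the
    // outermost synchronized block exits.
    static int depth(int n) {
        synchronized (LOCK) {
            return n == 0 ? 0 : 1 + depth(n - 1);
        }
    }

    public static void main(String[] args) {
        System.out.println(depth(5));
    }
}
```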

10 January 2010

Middleware Integration Testing With JUnit, Maven and VMware: Part 3 | Javalobby

Middleware Integration Testing With JUnit, Maven and VMware: Part 3 | Javalobby makes interesting point about being able to use a VMware snapshot of a server to start up a test fixture for each test in the exact state that you want quickly and easily.

Google Collections Library 1.0 released

Playing default system sounds in Java

Playing default system sounds in Java | Java.net
final Runnable runnable = (Runnable) java.awt.Toolkit.getDefaultToolkit().getDesktopProperty("win.sound.exclamation");
if (runnable != null)
    runnable.run();

Going into full screen mode in Java

Fullscreen mode is cool | Java.net

An interesting observation on Java hashCodes

01 January 2010

Wrong Correctness | Bruce Eckel

Wrong Correctness | Bruce Eckel

Some interesting points:


DeMarco and Lister first point out something very important. When someone asks you how long a particular subproject will take, it's usually implicit, and sometimes explicit, that they want to know the shortest, most optimistic time for this task. DeMarco and Lister note that the actual time for finishing a task is a probability curve, and if you only ever give the shortest time, you are giving the leading edge of the curve, where it touches the axis. Thus, each subtask prediction has a 0% probability of being correct. This means your project completion time estimation starts out, from day one, with a 0% probability of being correct. They suggest a relatively simple change in behavior: give, instead, the middle of your probability curve for each subtask, so you begin with a palpable completion time. It doesn't make the completion time predictable, but it does make it significantly less wrong.

People not resources

Steve Blank tells a story that's been repeated in many forms: the seemingly small, one-logical-step-at-a-time event that makes the key players look up and notice that the company has just gone from sweet to sour. In this case it is the slightly comical decision by a new CFO to stop providing the human resources with free soda, which was costing the company some 10K/year. An easy and rational call, which made the CFO look like a go-getter. The key engineers, once sought avidly by the company, quietly announced their availability and began disappearing. The company didn't panic, because it had already gone through its change of life and become more important than its pieces; it was no longer an idealistic youth who valued things like people and quality of life. It had grown up and matured and was now in the adult business of making money. Workers had become fungible resources, easily replaceable.

I remember the first time I saw this happen, in the second company where I had a "real job" after college. I'm not sure what the inciting incident was -- perhaps the 3rd or 4th business reorganization within a couple of years, perhaps a sudden withdrawal of bonuses and raises. Whatever the case, a number of the engineers that I considered to be extra-smart began quietly disappearing, with the company making no-big-deal noises as this happened. My own direct manager left, which should have been cold water in my face (but I typically have to learn things in the hardest possible way, and this lesson was -- eventually -- not lost on me).

When did we decide that we were no longer "personnel" (which at least sounds personal) but instead the resources that are human?

Problem with standard interviews for selecting people

Gladwell tells the story of outstanding college football quarterbacks, the majority of whom are abject failures in professional football -- because the game is played entirely differently in the two domains. Thus, you cannot predict the success of a quarterback based on their success in college.

Later in the book, he looks at the way we interview prospects for jobs. It turns out the most critical point of the interview is the initial handshake (or other initial impression). If you like the way someone shakes hands, you take whatever answers they give you and adapt them to that first impression. It's basically a romantic process, except with a real romance you decide the outcome after many months, whereas with a job interview you decide after only hours -- or actually in a moment, with the initial handshake. Even our lame attempts to simulate "real" work (by asking programming puzzles, for example) tell us nothing about the truly critical things, like how someone responds to project pressure. We suffer from Fundamental Attribution Error -- we "fixate on supposedly stable character traits and overlook the influence of context," and we combine this with mostly-unconscious, mostly-inappropriate snap judgments to produce astoundingly bad results. Basically, we think that someone who interviews well (one context) will work well on a task or in a team (a completely orthogonal context).

The answer is something called structured interviewing, which changes the questions from what HR is used to -- questions where the answer is obvious, where the interviewee can generate the desired result (not unlike what we've been trained to do in school) -- to those that extract the true nature of the person. For example, when asked "What is your greatest weakness?" you are supposed to tell a story where something that is ostensibly a weakness is actually a strength. Structured interviewing, in contrast, posits a situation and asks how you would respond.

There's no obvious right or wrong answer, but your answer tells something important about you, because it tells how you behave in context. Here's an example: "What if your manager begins criticizing you during a meeting? How do you respond?" If you go talk to the manager, you're more confrontational, but if you put up with it, you're more stoic. Neither answer is right, but the question reveals far more than the typical interview questions that have "correct" answers.

[JavaSpecialists 179] - Escape Analysis

[JavaSpecialists 179] - Escape Analysis. Wow, this article shows some huge improvements with escape analysis. A reasonably complex use-case I tested showed no significant difference when it was first released. Maybe I should try -XX:+DoEscapeAnalysis again, along with -XX:+UseCompressedOops and -XX:+AggressiveOpts. Some notes from 6u14:

Optimization Using Escape Analysis
The -XX:+DoEscapeAnalysis option directs HotSpot to look for objects that are created and referenced by a single thread within the scope of a method compilation. Allocation is omitted for such non-escaping objects, and their fields are treated as local variables, often residing in machine registers. Synchronization on non-escaping objects is also elided.
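
A sketch of the kind of allocation this targets: the Point below is created, used, and dropped inside doSum() and never escapes it, so with -XX:+DoEscapeAnalysis HotSpot can elide the heap allocation and scalar-replace the fields (class and method names are made up):

```java
public class EscapeDemo {
    private static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        int sum() { return x + y; }
    }

    // The Point never escapes this method, so it is a candidate
    // for scalar replacement under escape analysis.
    static int doSum(int a, int b) {
        return new Point(a, b).sum();
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1000000; i++) {
            total += doSum(i, i + 1);  // hot loop: many short-lived Points
        }
        System.out.println(total);
    }
}
```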

Compressed Object Pointers
The -XX:+UseCompressedOops option can improve performance of the 64-bit JRE when the Java object heap is less than 32 gigabytes in size. In this case, HotSpot compresses object references to 32 bits, reducing the amount of data that it must process.

Improved TreeMap Iteration
6u14 includes an experimental implementation of java.util.TreeMap that can improve the performance of applications that iterate over TreeMaps very frequently. This implementation is used when running with the -XX:+AggressiveOpts option.

Creating Intellij Live Templates Quickly

Creating live templates from… templates | JetBrains IntelliJ IDEA Blog. New Year's resolution to use more live templates?

Takeaways on Responsive Design | Javalobby

Takeaways on Responsive Design | Javalobby
In my opinion, knowing what you know well and what you don't know is important. Good designers usually have a good instinct for sensing the boundary between the "known" and "unknown" and adjust the flexibility of their design along the way as more information is gathered.

As more information is gathered, the dynamics of "change anticipation" also evolve. Certain parts of your system will have fewer anticipated changes because there are fewer unknowns, so you can trade off some flexibility for efficiency or simplicity. On the other hand, you may discover that certain parts of the system have more anticipated changes, and so even more flexibility is needed.

One important aspect of designing a system is to look not just at what the end result should be, but also at what the evolution path of the system should look like. The key idea is that a time dimension is introduced, and the overall cost and risk should be summed along that time dimension.

In other words, it is not about whether you have designed a solution that finally meets the business requirement. What is important is how your solution brings value to the business as it evolves over time. A good design is a live animal that can breathe and evolve together with your business.

JGroups 2.8.0GA

JGroups 2.8.0.GA released. Nice to see continued development activity on it. It's been a while since I used it. It was quite useful for managing a group of processes maintaining a replicated cache. Might be worth another look to see if it offers other useful use-cases.

Website notes
  • Group creation and deletion. Group members can be spread across LANs or WANs
  • Joining and leaving of groups
  • Membership detection and notification about joined/left/crashed members
  • Detection and removal of crashed members
  • Sending and receiving of member-to-group messages (point-to-multipoint)
  • Sending and receiving of member-to-member messages (point-to-point)
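
The join/send/receive cycle from that list maps onto a small JChannel sketch. This is my recollection of the 2.x-era API, not verified against 2.8.0.GA; the cluster name and message are made up, and it needs the jgroups jar on the classpath:

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

public class ChatNode {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();           // default protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {       // member-to-group messages
                System.out.println("<< " + msg.getObject());
            }
            @Override
            public void viewAccepted(View view) {    // membership change notifications
                System.out.println("members: " + view.getMembers());
            }
        });
        channel.connect("demo-cluster");             // join (or create) the group
        channel.send(new Message(null, null, "hello")); // null dest = multicast to group
        Thread.sleep(1000);
        channel.close();                             // leave the group
    }
}
```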