21 December 2010

Lazy programming « Otaku, Cedric's blog

Maybe it’s because static typing is so ingrained into my brain, but when I write something like:

def raise_salary(employee)

I really, REALLY, REALLY want to type:

def raise_salary(Employee employee)

My fingers are just screaming to add this type information. Same for local variables or return types. It’s in my head, it’s in my code, why can’t I just give this information to the compiler, the tools and to future readers and reviewers of this code? It’s really not that much to type and it buys me so much. Think about it: refactorings that are guaranteed 100% correct.

I know, amazing, right?

Java Code Geeks: Things Every Programmer Should Know

Java Code Geeks: Things Every Programmer Should Know. Some interesting points amongst others:
1. Act with Prudence
Technical debt is like a loan: You benefit from it in the short-term, but you have to pay interest on it until it is fully paid off. Shortcuts in the code make it harder to add features or refactor your code. They are breeding grounds for defects and brittle test cases. The longer you leave it, the worse it gets. By the time you get around to undertaking the original fix there may be a whole stack of not-quite-right design choices layered on top of the original problem making the code much harder to refactor and correct. In fact, it is often only when things have got so bad that you must fix it, that you actually do go back to fix it. And by then it is often so hard to fix that you really can't afford the time or the risk.

2. Apply Functional Programming Principles
Mastery of the functional programming paradigm can greatly improve the quality of the code you write in other contexts. If you deeply understand and apply the functional paradigm, your designs will exhibit a much higher degree of referential transparency.
Referential transparency is a very desirable property: It implies that functions consistently yield the same results given the same input, irrespective of where and when they are invoked. That is, function evaluation depends less — ideally, not at all — on the side effects of mutable state.
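A minimal Java sketch of the distinction (class and method names are my own illustration, not from the article):

```java
public class Salary {
    // Referentially transparent: same input always yields the same output,
    // so any call can be replaced by its result.
    public static int raisedSalary(int salary, int percent) {
        return salary + salary * percent / 100;
    }

    // Not referentially transparent: the result depends on mutable state,
    // so two identical calls can return different values.
    private static int basis = 100;
    public static int raisedFromMutableBasis(int percent) {
        basis += basis * percent / 100;
        return basis;
    }

    public static void main(String[] args) {
        System.out.println(raisedSalary(100, 10));       // always 110
        System.out.println(raisedFromMutableBasis(10));  // 110 the first time...
        System.out.println(raisedFromMutableBasis(10));  // ...121 the second
    }
}
```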

5. Beauty Is in Simplicity by Jørn Ølmheim
There is one quote that I think is particularly good for all software developers to know and keep close to their hearts:
Beauty of style and harmony and grace and good rhythm depends on simplicity. — Plato
In one sentence I think this sums up the values that we as software developers should aspire to.

7. Beware the Share by Udi Dahan
It was my first project at the company. I'd just finished my degree and was anxious to prove myself, staying late every day going through the existing code. As I worked through my first feature I took extra care to put in place everything I had learned — commenting, logging, pulling out shared code into libraries where possible, the works. The code review that I had felt so ready for came as a rude awakening — reuse was frowned upon!
How could this be? All through college reuse was held up as the epitome of quality software engineering. All the articles I had read, the textbooks, the seasoned software professionals who taught me. Was it all wrong?
It turns out that I was missing something critical. Context.
The fact that two wildly different parts of the system performed some logic in the same way meant less than I thought. Up until I had pulled out those libraries of shared code, these parts were not dependent on each other. Each could evolve independently. Each could change its logic to suit the needs of the system's changing business environment. Those four lines of similar code were accidental — a temporal anomaly, a coincidence. That is, until I came along.
The libraries of shared code I created tied the shoelaces of each foot to each other. Steps by one business domain could not be made without first synchronizing with the other. Maintenance costs in those independent functions used to be negligible, but the common library required an order of magnitude more testing.
While I'd decreased the absolute number of lines of code in the system, I had increased the number of dependencies. The context of these dependencies is critical — had they been localized, it may have been justified and had some positive value. When these dependencies aren't held in check, their tendrils entangle the larger concerns of the system even though the code itself looks just fine.
These mistakes are insidious in that, at their core, they sound like a good idea. When applied in the right context, these techniques are valuable. In the wrong context, they increase cost rather than value. When coming into an existing code base with no knowledge of the context where the various parts will be used, I'm much more careful these days about what is shared.
Beware the share. Check your context. Only then, proceed.

14. Code Reviews
You should do code reviews. Why? Because they increase code quality and reduce defect rate. But not necessarily for the reasons you might think.
Instead of simply correcting mistakes in code, the purpose of code reviews should be to share knowledge and establish common coding guidelines. Sharing your code with other programmers enables collective code ownership.

15. Coding with Reason
More generally, each unit of code, from a block to a library, should have a narrow interface. Less communication reduces the reasoning required. This means that getters that return internal state are a liability — don't ask an object for information to work with. Instead, ask the object to do the work with the information it already has. In other words, encapsulation is all — and only — about narrow interfaces.
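"Ask the object to do the work" in miniature (the `Account` class and its names are illustrative, not from the book):

```java
public class Account {
    private int balanceCents;

    public Account(int balanceCents) { this.balanceCents = balanceCents; }

    // Wide interface: a getter that returns internal state forces every
    // caller to re-implement the withdrawal rule.
    //   if (acct.getBalanceCents() >= amount) { ... }

    // Narrow interface: the object does the work with the data it already has,
    // and callers only learn whether the operation succeeded.
    public boolean withdraw(int amountCents) {
        if (amountCents > balanceCents) return false;
        balanceCents -= amountCents;
        return true;
    }

    public static void main(String[] args) {
        Account acct = new Account(500);
        System.out.println(acct.withdraw(200)); // true
        System.out.println(acct.withdraw(400)); // false, only 300 left
    }
}
```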

18. Continuous Learning by Clint Shank
We live in interesting times. As development gets distributed across the globe, you learn there are lots of people capable of doing your job. You need to keep learning to stay marketable. Otherwise, you'll become a dinosaur, stuck in the same job until, one day, you'll no longer be needed or your job gets outsourced to some cheaper resource.
# Read books, magazines, blogs, twitter feeds, and web sites. If you want to go deeper into a subject, consider joining a mailing list or newsgroup.
# If you really want to get immersed in a technology, get hands on — write some code.
# Always try to work with a mentor, as being the top guy can hinder your education. Although you can learn something from anybody, you can learn a whole lot more from someone smarter or more experienced than you. If you can't find a mentor, consider moving on.
# Use virtual mentors. Find authors and developers on the web who you really like and read everything they write. Subscribe to their blogs.
# Get to know the frameworks and libraries you use. Knowing how something works makes you know how to use it better. If they're open source, you're really in luck. Use the debugger to step through the code to see what's going on under the hood. You'll get to see code written and reviewed by some really smart people.
# Whenever you make a mistake, fix a bug, or run into a problem, try to really understand what happened. It's likely that somebody else ran into the same problem and posted it somewhere on the web. Google is really useful here.
# A really good way to learn something is to teach or speak about it. When people are going to listen to you and ask you questions, you'll be highly motivated to learn. Try a lunch-n-learn at work, a user group, or a local conference.
# Join or start a study group (à la patterns community) or a local user group for a language, technology, or discipline you are interested in.
# Go to conferences. And if you can't go, many conferences put their talks online for free.
# Long commute? Listen to podcasts.
# Ever run a static analysis tool over the code base or look at the warnings in your IDE? Understand what they're reporting and why.
# Follow the advice of The Pragmatic Programmers and learn a new language every year. At least learn a new technology or tool. Branching out gives you new ideas you can use in your current technology stack.
# Not everything you learn has to be about technology. Learn the domain you're working in so you can better understand the requirements and help solve the business problem. Learning how to be more productive — how to work better — is another good option.

19. Think carefully about the design of your API

20. Deploy Early and Often (Continuous Deployment)
Debugging the deployment and installation processes is often put off until close to the end of a project. In some projects, writing installation tools is delegated to a release engineer who takes on the task as a "necessary evil." Reviews and demonstrations are done from a hand-crafted environment to ensure that everything works. The result is that the team gets no experience with the deployment process or the deployed environment until it may be too late to make changes.

21. Distinguish Business Exceptions from Technical
An unresolvable technical problem can occur when there is a programming error.
A variant of this situation is when you are in the "library situation" and a caller has broken the contract of your method, e.g., passing a totally bizarre argument or not having a dependent object set up properly. This is on a par with accessing the 83rd element from a 17-element list.
A different, but still technical, situation is when the program cannot proceed because of a problem in the execution environment, such as an unresponsive database.
In contrast to these, we have the situation where you cannot complete the call for a domain-logical reason.
Mixing technical exceptions and business exceptions in the same hierarchy blurs the distinction and confuses the caller about what the method contract is, what conditions it is required to ensure before calling, and what situations it is supposed to handle.
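One way to keep the distinction visible is to give the two kinds separate hierarchies, so the compiler pushes callers toward handling only the business cases. A sketch, with names of my own invention:

```java
// Technical problems are unchecked: a caller broke the contract or the
// environment failed, and local handling rarely makes sense.
class TechnicalException extends RuntimeException {
    TechnicalException(String msg) { super(msg); }
}

// Business exceptions are checked: they are part of the method contract
// and the caller is expected to deal with them.
class BusinessException extends Exception {
    BusinessException(String msg) { super(msg); }
}
class InsufficientFundsException extends BusinessException {
    InsufficientFundsException(String msg) { super(msg); }
}

public class Transfers {
    public static int withdraw(int balance, int amount) throws InsufficientFundsException {
        if (amount < 0) throw new TechnicalException("negative amount: broken contract");
        if (amount > balance) throw new InsufficientFundsException("balance too low");
        return balance - amount;
    }
}
```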

26. Don't Ignore that Error!

19 December 2010

Google Caliper

Caliper is Google's open-source framework for writing, running and viewing the results of Java microbenchmarks.

It is quite rough around the edges (June 2010), but we have already found it quite useful, and the API should be pretty stable.

The simplest complete Caliper benchmark looks like this:

public class MyBenchmark extends SimpleBenchmark {
    public void timeMyOperation(int reps) {
        for (int i = 0; i < reps; i++) {
            MyClass.myOperation();  // the operation being measured
        }
    }
}
Joshua Bloch: Performance Anxiety – on Performance Unpredictability, Its Measurement and Benchmarking | Javalobby

06 December 2010

DealHub Announces New eFX Price Distribution Customer

DealHub Announces New eFX Price Distribution Customer « A-Team Group
DealHub today announced that Société Générale Corporate & Investment Banking (SGCIB) has selected DealHub as its eFX price distribution system.

The DealHub Connectivity Manager solution supplied by Option Computers Ltd (OCL) provides a low latency price distribution backbone servicing SGCIB customers across the Bank’s newly launched Alpha FX platform, FIX API and multi-bank ECNs.

Broken Windows and collective code ownership

Evolutionary architecture and emergent design: Environmental considerations for design, Part 2
In The Pragmatic Programmer (see Resources), Dave Thomas and Andy Hunt borrow the concept of broken windows from studies about abandoned buildings. Deserted buildings generally aren't damaged until a window is broken. That first broken window indicates that no one cares about the property, and the general disrepair and abuse of the building accelerate thereafter.

Broken windows occur in software development too. When you see some code that's not technically a bug but isn't quite right from a design standpoint, you've found a broken window. Collective code ownership says that you must fix that code. Part of the reason that software projects tend to become more fragile and brittle over time is the presence of hundreds (or thousands) of broken windows. If you fix them routinely, your code can get stronger with age, not weaker.

My projects always use pair programming, and we're always on the lookout for broken windows. But we don't automatically drop what we're doing to attack those problems as soon as we find them. When my pair and I discover an error, we assess how long it will take to fix it. If it will consume less than 15 minutes, we'll go ahead and fix it inline with whatever other story we're working on. If the change is more involved, we add it to a technical-debt backlog. All my projects have a technical-debt backlog, maintained by the technical lead. When we get slack time on the project, the tech lead assigns stories from this backlog to eat away at accrued technical debt gradually.

Collaborative design also suggests that developers are responsible for the correctness and quality of the parts of the overall application they create. Correctness has two facets: adherence to the business requirements and technical suitability. Business-requirements correctness is determined by whatever verification mechanism your company ecosystem has in place to judge the software's suitability to the problem it was written to solve. Technical correctness is left to the development team.

Different teams put their own procedures and mechanisms in place to ensure technical quality, such as code reviews and automated metrics tools run via continuous integration. One practice that many agile teams employ that's a key enabler of emergent design is collective code ownership (http://www.martinfowler.com/bliki/CodeOwnership.html), which suggests that every person on the project has ownership responsibilities for all the code, not just the code he or she has written. (Another benefit of ... collective code ownership is bringing up everyone's skill level to that of the most skilled team member.)

More specifically, it requires:
* Frequent code reviews (or real-time code reviews such as pair programming) to make sure everyone is leveraging common idiomatic patterns and other useful design discoveries by the collective group.
* Awareness by everyone on the project of at least some of the details of all parts of the project.
* Willingness for anyone on the project to jump in and fix broken windows regardless of the original author(s) of the code.

27 November 2010

Vertica column-orientated database

Tungsten Finite State Machine Library

HPPC: High Performance Primitive Collections for Java

HPPC provides template-generated implementations of typical collections, such as lists, sets and maps, for all Java primitive types. The primary driving force behind HPPC is optimization for highest performance and memory efficiency.

There are a few projects implementing collections over primitive types, including fastutil, PCJ, GNU Trove, Apache Mahout (ported COLT collections), Apache Primitive Collections. Some of them are released under the LGPL license, which many commercial companies tend to avoid at all costs; others are no longer maintained or complete. Most of the projects tend to write tightly encapsulated code with no access to private internals, implement the API of standard Java packages and strive for fast error-recovery. While these are all good programming practices, they are not always practical. In many computationally-intensive applications, access to the collection class' internals is crucial for writing highest-performance application code.
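The cost these libraries avoid is boxing: `java.util` collections wrap every `int` in an `Integer` object, while a primitive-backed structure stores the values flat. A JDK-only sketch of the difference (this is not HPPC's API, just the motivation for it):

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    // java.util path: each add allocates or looks up an Integer object
    // (object header + reference per element), each read unboxes.
    public static long sumBoxed(int n) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(i);
        long sum = 0;
        for (int v : list) sum += v;
        return sum;
    }

    // Primitive path: one flat allocation, no per-element objects.
    public static long sumPrimitive(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        long sum = 0;
        for (int v : a) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        // Same result, very different memory profile and cache behaviour.
        System.out.println(sumBoxed(1_000_000) == sumPrimitive(1_000_000));
    }
}
```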

Efficient Java Matrix Library (EJML)

efficient-java-matrix-library - Project Hosting on Google Code
Efficient Java Matrix Library (EJML) is a linear algebra library for manipulating dense matrices. Its design goals are: 1) to be as computationally efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime and by designing a clean API. EJML is free, written in 100% Java and has been released under an LGPL license.

OpenXava 4.0: Rapid Java Web Development | Javalobby

Quartz Scheduler GUI

More on dangers of the caches | Architects Zone

Real-Time Charts on the Java Desktop

17 November 2010

DiffKit - diff RDBMS/CSV tables

DiffKit is an application, and a framework, for comparing two tables of data, field-by-field. The tables can come from any of a number of sources, such as an RDBMS or CSV file, and DiffKit is able to mix different kinds of sources in the same diff operation. DiffKit is like the Unix diff utility, but for tables instead of lines of text.
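The core idea of a table diff, in miniature: align rows by key, then compare field-by-field. This sketch is purely illustrative (my own names, not DiffKit's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class TableDiff {
    // Each table is a map from row key to field values; 'columns' names the fields.
    public static List<String> diff(Map<String, String[]> left,
                                    Map<String, String[]> right,
                                    String[] columns) {
        List<String> diffs = new ArrayList<>();
        for (String key : left.keySet()) {
            String[] l = left.get(key), r = right.get(key);
            if (r == null) {                       // row only exists on one side
                diffs.add(key + ": missing on right");
                continue;
            }
            for (int c = 0; c < columns.length; c++)   // field-by-field comparison
                if (!l[c].equals(r[c]))
                    diffs.add(key + "." + columns[c] + ": " + l[c] + " != " + r[c]);
        }
        return diffs;
    }
}
```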

15 November 2010

Spring Social

Spring Social | SpringSource.org
Spring Social is an extension of the Spring Framework to enable the development of social-ready applications. With Spring Social you can create applications that interact with various social networking sites such as Twitter, Facebook, LinkedIn, and TripIt, giving the users of your application a more personal experience.

The main features of Spring Social include:

* A set of social network templates for interacting with Twitter, Facebook, LinkedIn, TripIt, and Greenhouse.
* An OAuth-aware request factory for signing RestTemplate requests with OAuth authorization details.
* A web argument resolver for extracting Facebook user ID and access token information in a Spring MVC controller.

Spring Social is used by Greenhouse for all of its social network integration. Have a look at the Greenhouse source code for examples of Spring Social in action.

21 October 2010

Efficiently sorting an array that is already mostly sorted

I've been investigating how to improve the performance of sorting an array that is already mostly sorted. It turns out Java is replacing mergesort with Timsort in Java 7's Arrays.sort. You can find the code here. I'm seeing a factor of 3-4 improvement!
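Timsort wins on this input because it detects and merges pre-sorted runs. Note it applies to object arrays (`Arrays.sort(Object[])` and sorted collections); primitive arrays use dual-pivot quicksort in Java 7. A small harness to reproduce the experiment (names are mine):

```java
import java.util.Arrays;
import java.util.Random;

public class MostlySorted {
    // Build a sorted array, then perturb a handful of positions.
    public static Integer[] mostlySorted(int n, int swaps, long seed) {
        Integer[] a = new Integer[n];
        for (int i = 0; i < n; i++) a[i] = i;
        Random rnd = new Random(seed);
        for (int s = 0; s < swaps; s++) {
            int i = rnd.nextInt(n), j = rnd.nextInt(n);
            Integer tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
        return a;
    }

    public static boolean isSorted(Integer[] a) {
        for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
        return true;
    }

    public static void main(String[] args) {
        Integer[] a = mostlySorted(1_000_000, 100, 42L);
        long t0 = System.nanoTime();
        Arrays.sort(a);  // Timsort for object arrays since Java 7
        System.out.printf("sorted in %.1f ms%n", (System.nanoTime() - t0) / 1e6);
        System.out.println(isSorted(a));
    }
}
```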

17 October 2010

26 September 2010

Drools Planner - automated constraint solver for planning

Drools Planner looks interesting. From their website: Drools Planner does automated planning: it solves a planning problem while respecting the constraints as much as possible. Drools Planner can be used on all kinds of planning problems. Let's take a look at some use cases.

24 September 2010

Connamara Releases FX Market Data Adapter Suite

Connamara Releases FX Market Data Adapter Suite « A-Team Group. C++ adapters released under an interesting licence: you buy the source code for a one-off fee, which is very neat as you can adapt and optimise the code as required and have no fears of lock-in.

Java IDE review

16 September 2010

Opensource JMS performance report from makers of HornetQ

HornetQ - the Performance Leader in Enterprise Messaging - JBoss Community

NASDAQ ITCH feed handling and outbound OUCH order entry with under two microseconds of latency

in-FPGA™ Trading Systems & Impulse reduce trade latency | Automated Trader
in-FPGA™ Trading Systems (www.infpga.com) have announced a hardware-accelerated automated trading reference design that performs NASDAQ ITCH feed handling and outbound OUCH order entry running on 10Gb Ethernet, with under two microseconds of latency.

26 August 2010

Lifehacker Pack for Android: Our List of the Best Android Apps

We're using the great app search and sync service AppBrain to create a Lifehacker Pack for Android. If you install the AppBrain App Market on your Android phone and sign in through the AppBrain site (using a Google OAuth, no-password-revealed log-in), you can check off and install multiple apps from the list.

Waste #2: Extra Features | Agile Zone

Our first best weapon against extra features is a short feedback cycle. Frequent product demos will expose features that we're working on that our customers no longer believe will give them a competitive advantage. Even better than frequent demos are frequent production deployments. Getting the software in the wild on a regular basis and then tracking feature usage can easily expose features that are not needed. Removing features from the system will reduce the complexity, maintenance load, and likelihood that things will go wrong going forward.

Our second weapon against extra features is a healthy dose of "YAGNI." YAGNI stands for "You Ain't Gonna Need It." This phrase represents one of the original principles of eXtreme Programming, that of only adding functionality when it is necessary to meet a clear and present need of the customer.

Wikipedia's article on YAGNI [3] provides the following useful summary of the disadvantages of extra features:

* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.

20 August 2010

Fun with the Anthropic Principle « Otaku, Cedric's blog

One day, someone called Steve sends you an email in which he predicts that tomorrow, team A will win against team B. You don’t think much of that email and you delete it. The next day, you learn that indeed, team A won. A few days later, you receive another email from Steve which, again, makes a prediction for the result of an upcoming game. And again, the prediction turns out to be correct.

After a while, you have received ten emails from Steve, each of which accurately predicted a game outcome. You start being quite shocked and excited. What are the odds that this person would randomly guess correctly ten matches? 1 over 2^10 (1024), about 0.1%. That’s quite remarkable.

In his next email, Steve says “I hope that by now, I convinced you that I can guess the future. Here is the deal: send me $10,000, I’ll bet them on the next match and we’ll split the profits”.

Do you send the money? Read on.

19 August 2010

Belas Blog: Daisychaining in the clouds

Belas Blog: Daisychaining in the clouds. Interesting idea, although I would generally go for a more traditional fan-out model that looks like a tree rather than a daisy chain.
The idea is that, instead of sending a message to N-1 members, we only send it to our neighbor, which forwards it to its neighbor, and so on. For example, in {A,B,C,D,E}, D would broadcast a message by forwarding it to E, E forwards it to A, A to B, B to C and C to D. We use a time-to-live field, which gets decremented on every forward, and a message gets discarded when the time-to-live is 0.

The advantage is that, instead of taxing the link between a member and the switch to send N-1 messages, we distribute the traffic more evenly across the links between the nodes and the switch. Take an example where A broadcasts messages m1 and m2 in cluster {A,B,C,D}.
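The forwarding rule quoted above can be sketched in a few lines of Java. This models the ring as member indices ({A,B,C,D,E} = 0..4) and is my own illustration of the described scheme, not JGroups code:

```java
import java.util.ArrayList;
import java.util.List;

public class DaisyChain {
    // Returns the order in which the other members receive a broadcast
    // from 'origin' in a ring of 'n' members, using a TTL of n - 1 hops.
    public static List<Integer> broadcastOrder(int n, int origin) {
        List<Integer> delivered = new ArrayList<>();
        int ttl = n - 1;                  // enough hops to reach every other member
        int node = (origin + 1) % n;      // send only to our neighbour...
        while (ttl > 0) {
            delivered.add(node);          // neighbour delivers, then forwards on
            node = (node + 1) % n;
            ttl--;                        // message is discarded when TTL hits 0
        }
        return delivered;
    }

    public static void main(String[] args) {
        // D (index 3) broadcasting in {A,B,C,D,E}: reaches E, A, B, C in turn
        System.out.println(broadcastOrder(5, 3)); // prints [4, 0, 1, 2]
    }
}
```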

Schmidt: Erase your identity to escape Google shame • The Register

Schmidt: Erase your identity to escape Google shame • The Register, wow, that's a scary vision.
Increasingly bonkers Google governor Eric Schmidt has seen the future, and you might have to change your name to be a part of it.  According to the man in charge of the company de facto in charge of the web, young people's tendency to post embarrassing personal information and photographs to Googleable social networks means that in the future they will all be entitled to change their name on reaching adulthood.

02 August 2010

UtilityVsStrategicDichotomy: Martin Fowler

One of the most important ways in which these efforts differ is where the risks lie. For utility projects the biggest risk is some kind of catastrophic error - you don't want the sewage pipe to break, or to miss payroll. So you need enough attention to make sure that doesn't happen, but other than that you want costs to be as low as possible. However with strategic projects, the biggest risk is not doing something before your competitors do. So you need to be able to react quickly. Cost is much less of an issue because the opportunity cost of not doing something is far greater than costs of software development itself.

This is not a static dichotomy. Business activities that are strategic can become a utility as time passes. Less often, a utility can become strategic if a company figures out how to make that activity a differentiator. (Apple did something like this with the design of personal computers.)

One way this dichotomy helps is in deciding between building custom software and installing a package. Since the definition of utility is that there's no differentiator, the obvious thing is to go with the package. For a strategic function you don't want the same software as your competitors because that would cripple your ability to differentiate.

Ross goes so far as to argue that there shouldn't be a single IT department that's responsible for both utility and strategic work. The mindset and management attitudes that are needed for the two are just too different. It's like expecting the same people who design warehouses to design an arts museum.

30 July 2010

Maven Versions Plugin

Versions Maven Plugin - Usage
The plugin offers goals for updating the versions of artifacts referenced in a Maven pom.xml file.

UBS Hires Foreign-Exchange Algorithmic-Trading Team From Barclays Capital

UBS Hires Foreign-Exchange Algorithmic-Trading Team From Barclays Capital - Bloomberg
UBS AG said it hired a foreign- exchange algorithmic-trading team from Barclays Capital.

UBS hired Chris Purves as global head of foreign-exchange e-trading, Mark Meredith as head of foreign-exchange e-trading quantitative analytics and Parwinder Sekhon as head of foreign- exchange e-trading infrastructure, the Swiss bank said in an e- mailed statement.

The team will begin work in London in October and report to Chris Vogelgesang and Arie Adler, co-heads of global foreign- exchange trading, UBS said.

A Barclays spokeswoman declined to comment when contacted at the company’s London office.

25 July 2010

Manipulating Collections With Lambdaj's Fluent Interface

Manipulating Collections With a Fluent Interface | Javalobby
Lambdaj is a Java library that allows you to manipulate collections in a declarative way. In particular its API is designed to be easily combined in more complex single statements...

16 July 2010

Portware Unveils FX 5.0 Trading Platform « A-Team Group

Portware FXLM. Portware’s new FX TCA and post trade analytics package is a powerful reporting toolset that gives traders increased visibility into their trading performance. Portware FX users can benchmark their strategy’s performance, and the performance of their liquidity providers, against any number of absolute or calculated data points. FXLM allows users to gauge slippage by comparing executions to Weighted Average Price (WAP) arrival time calculations; compare trading strategy results to single broker RFQ prices; view a breakdown of liquidity found at each trading destination for each order; and analyze the performance of liquidity providers, either on a real-time basis or via comprehensive month end reporting.

05 July 2010

Are You A Starter, A Finisher Or An Implementer?

Are You A Starter, A Finisher Or An Implementer? | Javalobby... sounds like it's best to be a starter, an implementer and a finisher; I'm not so sure it's good to break these roles up.
For example, lots of people have ideas. Ideas are easy because they require very little risk. But, what happens after the idea? You are supposed to start the project. However, most people stop with the idea because they “don’t have time” or even “I wouldn’t know where to begin”. Kat French explains how she does her best creative work: the super-secret, hush-hush, “I could tell you, but then I’d have to kill you” secret of how I do my best creative work. Ready? It’s called “starting.”

Using Systems Thinking to Improve Service Performance | Javalobby

02 July 2010

Exegy Tickerplant hits 3,245,070 messages per second for MarketDataPeaks website

MarketDataPeaks Makes New High in Busy Market: "Exegy, Inc., the market data appliance company announced today that the Exegy Ticker Plant driving the MarketDataPeaks web site hit 3,245,070 messages per second. This is a new record and surpasses the previous high water mark of 2,808,532 mps reached on 6 May 2010 during the flash crash."

Converting avi to mp4 on linux

Recently I needed to convert some AVI files to MP4 on Linux so that I could push them to my iPad. Here is the command I used:

/usr/local/bin/handbrake/HandBrakeCLI -i ${file} -o ${file}.mp4 --preset="Normal" --cpu 3

Note you first need to install the HandBrake CLI. I took a 64-bit CentOS build from here.

23 June 2010

StreamBase and Solace Systems Partner for Low Latency Trading Solutions

StreamBase and Solace Systems Partner for Low Latency Trading Solutions « A-Team Group
StreamBase today announced integration of its CEP platform with Solace Systems’ hardware-based middleware routing products.

Progress Software Introduces A New Generation Apama Capital Markets Foundation

Progress Software Introduces A New Generation Apama Capital Markets Foundation « A-Team Group
The new generation Apama Capital Markets Foundation is built in the form of easily configurable building blocks and includes the following capabilities:

1. A new market data architecture, which provides an increase in performance and flexibility in the processing of cross-asset market data. The architecture makes optimal use of the patented Apama parallel event processing engine that can scale to hundreds of thousands of market data updates per second. The new market data architecture can channel that data into an application as useable information with sustained sub-millisecond latency end-to-end (from the adapter’s receipt of data through to information being available to the application);

2. Native support for new order types and features to enrich application functionality and enhance usability for the developer of Quant trading applications. The developer, using the Apama Capital Markets Foundation, can now handle natively disparate pricing information and data feeds that Exchanges present (including BBO, Market-By-Price, Market-By-Order, RFQ and Trades) and transform them into a single, customizable format based on what the Quant Trader uses;

3. Expanded analytics including integration with additional third party analytics solutions and plug-ins (i.e., Quantlib and Matlab);

4. A risk firewall - a fundamental component of the Apama Capital Markets Foundation - that provides trading and risk management applications with the ability to deliver pre-trade risk management in real-time. The risk firewall’s capabilities include pre-empting trades that exceed their firm’s market risk tolerance and intercepting so-called "fat finger" trades. Such capabilities are critical to trading effectively in today’s volatile markets and are particularly useful as a complement to high frequency trading and to provide a pre-trade risk management capability for sponsored access;

5. An improved exchange simulator, including a full matching engine implementation, which provides the ability to:
* Perform more advanced back-testing of strategies prior to live implementation; and
* Create internal liquidity pools;

6. New latency measurement functionality. This unique capability allows developers to measure the latency between each step within the application code of their strategies. This enables them to better identify optimization opportunities in the strategies they are creating and help traders improve the performance of their applications; and

7. New sample code and trading strategies to get started.

The Apama Capital Markets Foundation, as its name implies, provides the foundation for a growing portfolio of solution accelerators - pre-built customizable solutions - that include algorithmic trading, market surveillance and monitoring, and FX trading and eCommerce.

21 June 2010

BBVA Banks on Progress Software for FX Aggregation

BBVA Banks on Progress Software for FX Aggregation « A-Team Group
Progress Software Corporation, a leading software provider that enables enterprises to be operationally responsive, today announced that financial services group, Banco Bilbao Vizcaya Argentaria (BBVA), is live on the Progress Apama FX Aggregation accelerator for its foreign exchange operations. BBVA FX traders are now using the Progress Apama platform along with its customised dashboards to view and trade across aggregated liquidity from a number of Banks and FX ECNs. Alongside increased trader effectiveness, the FX aggregator is also enabling BBVA to use the Progress Apama platform to power advanced FX algorithms and optimized real-time FX prices for their internal and external customers.

Voltaire Introduces Software and 10GbE Switching Solutions for Lowest Latency High Frequency Trading

Voltaire Introduces Software and 10GbE Switching Solutions for Lowest Latency High Frequency Trading « A-Team Group
Voltaire also announced the industry’s first turnkey 10 GbE switching and software infrastructure solution for low latency trading. The solution is comprised of the Voltaire Vantage 6024 switch, Voltaire Messaging Accelerator (VMA) software, network interface cards and cables. It is ideal for co-located low latency trading and smaller high-frequency trading environments where one rack handles the full trading operations. Voltaire’s unique VMA software dramatically improves performance of high frequency trading and other multi-cast applications, further reducing networking latency and increasing application throughput per server.

Activ Breaks Boundaries with Second Generation Market Data Solution

Activ Breaks Boundaries with Second Generation Market Data Solution « A-Team Group
Activ Financial will demonstrate its second generation market data processing unit (MPU) technology at Sifma this week. The vendor claims that the hardware-based low latency market data solution will double messaging volumes per second and reduce latency by 50% to 80% to deliver single digit microsecond latency for feed handlers.

Google has introduced a command line utility for accessing various Chocolate Factory services

Google hits coder G-spot with Linux command line tool • The Register
Google has introduced a command line utility for accessing various Chocolate Factory services, including YouTube, Blogger, Google Docs, Calendar, and Contacts.

29West IPC Transport Pushes Latency to Sub-Microsecond

Solace Systems Technology Release 5.0

14 June 2010

Javva The Hutt on Performance Regression Testing

Javva The Hutt May 2010
And the moral of the story is ... funny code is going to get into your system sooner or later, so you really need a performance regression test or one day you'll get embarrassed by your release.

05 June 2010

Distributed data processing with Hadoop, developerWorks articles

Data News | Impulse and Luxoft create modular computational finance solutions to minimize latency | Automated Trader

Data News | Impulse and Luxoft create modular computational finance solutions to minimize latency | Automated Trader
Financial market data-processing solutions minimize latency via modular software and C-to-FPGA acceleration

Intellij IDEA compare directories plugin

Intellij IDEA compare directories plugin
Allows the fast comparison of two directories or archive files (jar, zip, war... and also tar/gz) in IntelliJ IDEA, based on file contents. Single file differences can be viewed with the usual IDEA diff window. Diffs in compiled Java classes can also be viewed using the usual IDEA diff window and a built-in Java disassembler. Detects blank-only differences in text files. Can also detect user-defined differences in text files and differences in source file comments, called «non-significant differences». Provides some basic mass-merging facilities on compared files and directories (copy/delete on files or directories).

Spring Expression Language (SpEL) useful for executing expressions

6. Spring Expression Language (SpEL)
The Spring Expression Language (SpEL for short) is a powerful expression language that supports querying and manipulating an object graph at runtime. The language syntax is similar to Unified EL but offers additional features, most notably method invocation and basic string templating functionality.

While there are several other Java expression languages available, OGNL, MVEL, and JBoss EL, to name a few, the Spring Expression Language was created to provide the Spring community with a single, well-supported expression language that can be used across all the products in the Spring portfolio. Its language features are driven by the requirements of the projects in the Spring portfolio, including tooling requirements for code completion support within the Eclipse-based SpringSource Tool Suite. That said, SpEL is based on a technology-agnostic API, allowing other expression language implementations to be integrated should the need arise.
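To make the "querying and manipulating an object graph" idea concrete, here is a minimal sketch based on the Spring reference documentation; it assumes spring-expression (and spring-core) is on the classpath, and the `eval` helper is mine, not part of the API:

```java
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class SpelDemo {
    // Evaluate a SpEL expression with no root object (helper for this sketch).
    static Object eval(String expression) {
        ExpressionParser parser = new SpelExpressionParser();
        return parser.parseExpression(expression).getValue();
    }

    public static void main(String[] args) {
        // Method invocation on a string literal:
        System.out.println(eval("'Hello World'.concat('!')")); // Hello World!
        // Property navigation on the result of a method call:
        System.out.println(eval("'Hello World'.bytes.length")); // 11
    }
}
```

In real code you would typically evaluate expressions against a root object via an `EvaluationContext` rather than against literals.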

20 May 2010

A Tour through the Visualization Zoo and Prefuse

A Tour through the Visualization Zoo - ACM Queue

Prefuse supports a rich set of features for data modeling, visualization, and interaction. It provides optimized data structures for tables, graphs, and trees, a host of layout and visual encoding techniques, and support for animation, dynamic queries, integrated search, and database connectivity. Prefuse is written in Java, using the Java 2D graphics library, and is easily integrated into Java Swing applications or web applets. Prefuse is licensed under the terms of a BSD license, and can be freely used for both commercial and non-commercial purposes.

Meraki WiFi Stumbler scans wifi channels

Meraki WiFi Stumbler lets you scan for wifi networks and see who's using which channels. Very useful.

14 May 2010

C-to-FPGA from DRC Computer and Impulse Accelerated Technologies

Algorithmic Trading News | Algorithmic Trading Software developers get new high performance computing C-to-FPGA tools | Automated Trader
DRC Computer and Impulse Accelerated Technologies have announced that the Impulse C™-to-FPGA tools have been integrated with the DRC Accelium™ coprocessor card, enabling software engineers to fully access hardware acceleration using familiar C programming methods. This integration provides C-language control of I/O, memory, streams and signals at the hardware level, allowing applications to leverage the high parallelism possible in FPGAs for higher performance.

Concurrent JUnit Tests With RunnerScheduler

02 May 2010

Finding Out Where Your Class Files Are | Javalobby

Finding Out Where Your Class Files Are | Javalobby
// Locate the directory or JAR file that a class was loaded from
// (uses java.security.ProtectionDomain and java.io.File):
ProtectionDomain protectionDomain = HyenaDesk.class.getProtectionDomain();
File codeLoc = new File(protectionDomain.getCodeSource().getLocation().getFile());

24 April 2010

Scribe log centralisation

SourceForge.net: scribeserver
Scribe is a server for aggregating streaming log data. It is designed to scale to a very large number of nodes and be robust to network and node failures. There is a scribe server running on every node in the system, configured to aggregate messages and send them to a central scribe server (or servers) in larger groups. If the central scribe server isn't available the local scribe server writes the messages to a file on local disk and sends them when the central server recovers. The central scribe server(s) can write the messages to the files that are their final destination, typically on an nfs filer or a distributed filesystem, or send them to another layer of scribe servers.

Scribe is unique in that clients log entries consisting of two strings, a category and a message. The category is a high level description of the intended destination of the message and can have a specific configuration in the scribe server, which allows data stores to be moved by changing the scribe configuration instead of client code. The server also allows for configurations based on category prefix, and a default configuration that can insert the category name in the file path. Flexibility and extensibility is provided through the "store" abstraction. Stores are loaded dynamically based on a configuration file, and can be changed at runtime without stopping the server. Stores are implemented as a class hierarchy, and stores can contain other stores. This allows a user to chain features together in different orders and combinations by changing only the configuration.

Scribe is implemented as a thrift service using the non-blocking C++ server. The installation at facebook runs on thousands of machines and reliably delivers tens of billions of messages a day.

12 April 2010

10 April 2010


OpenFAST - About
OpenFAST is a 100% Java implementation of the FAST Protocol (FIX Adapted for STreaming). The FAST protocol is used to optimize communications in the electronic exchange of financial data; it uses a data compression algorithm to decrease the size of the data for high-volume, low-latency transmission. OpenFAST is flexible and extensible.

Java Tester - What Version of Java Are You Running?

Maven repositories

List of maven repositories currently proxied through Nexus

Maven 2
Maven Central: http://repo1.maven.org/maven2/
Apache Snapshots: http://repository.apache.org/snapshots
Codehaus Snapshots: http://snapshots.repository.codehaus.org/
Java.net: http://download.java.net/maven/2

Maven 1
Java.net.1: https://maven-repository.dev.java.net/repository
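When proxying these repositories through Nexus, the usual approach is to point Maven at the Nexus public group as a mirror of everything. A hypothetical sketch (the host name is invented):

```xml
<!-- ~/.m2/settings.xml : route all repository traffic through Nexus.
     The host name "nexus.example.com" is a placeholder. -->
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.com/content/groups/public/</url>
    </mirror>
  </mirrors>
</settings>
```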

Kx Kdb Wiki

Kx Kdb Downloads

The W3C Markup (HTML, XHTML) Validation Service

The W3C Markup Validation Service

ReplacementDocs - online manuals for games

replacementdocs: The original web archive of game manuals
- Have you ever rented a game that came with no instructions?
- Have you ever bought a used game and found out later that the package you received didn't come with an essential map or answers to copy protection questions required to play the game?

If so, replacementdocs is here to help! We're here to provide you with those manuals for situations when you really should've had them to begin with.

Cheat Sheets

Our Favorite Cheat Sheets

Java.net project list

Sonatype Maven Online Books

Maven: The Definitive Guide | Sonatype

Kx Kdb c.java

Enterprise Integration Patterns - Table of Contents

Enterprise Integration Patterns - Table of Contents

07 April 2010

Always code as... | PHP Zone

Always code as... | PHP Zone
Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live

Always code as if you were paying your lines' weight in gold.  The less code you write to solve a problem, the less code you'll have to maintain: code is widely considered more a liability than an asset.  You should favor verbosity only where it improves readability and encapsulation: the trade-off is difficult to find here.

Always code as if you had to deploy and use your application at the end of the day.  Which may be the case if it's a web application.  Portability is not a feature you can add as a single user story: the best way to make an application portable, configurable, deployable and, most of all, working is to build it as simply as possible with these characteristics (a walking skeleton), and keep them while you expand the codebase with new features.

Evolutionary architecture and emergent design: Leveraging reusable code, Part 1

Evolutionary architecture and emergent design: Leveraging reusable code, Part 1
Ease of manufacturing explains why we don't have much mathematical rigor in software development. Traditional engineers developed mathematical models and other sophisticated techniques for predictability so that they weren't forced to build things to determine their characteristics. Software developers don't need that level of analysis. It's easier to build our designs and test them than to build formal proofs of how they will behave. Testing is the engineering rigor of software development. Which leads to the most interesting conclusion from Reeves' essay:

Given that software designs are relatively easy to turn out, and essentially free to build, an unsurprising revelation is that software designs tend to be incredibly large and complex.

Another conclusion from Reeves' essay is that design in software (that is, writing the entire source code) is by far the most expensive activity. That means that time wasted when designing is a waste of the most expensive resource. Which brings me back around to emergent design. If you spend a great deal of time trying to anticipate all the things you'll need before you've started writing code, you will always waste some time because you don't yet know what you don't know. In other words, you always run into unexpected time sinks when writing software because some requirements are more complex than you thought, or you didn't fully understand the problem at the beginning. The longer you can defer decisions, the greater your ability to make better decisions — because the context and knowledge you acquire increase with time.

Yet another conclusion from Reeves' essay revolves around the importance of readable design, which translates to more readable code. Finding idiomatic patterns in code is hard enough, but if your language adds extra cruft, it becomes even harder. Finding an idiomatic pattern in an assembly language code base, for example, is very difficult because the language imposes so many opaque elements that you must be able to see around to "see" the design.

I think that the complete source code is the design artifact in software. Once you understand that, it explains a lot about past failures (such as model-driven architecture, which tries to go directly from UML artifacts to code and fails because the diagramming language isn't expressive enough to capture the required nuances). This understanding has several side effects, including the realization that design (which is coding) is the most expensive activity you can perform. This doesn't mean that you shouldn't use preliminary tools (such as UML or something similar) to help you understand the design before you start coding, but the code becomes the real design once you move to that phase.

Readable design matters. The more expressive your design, the easier it is to modify it and eventually harvest idiomatic patterns from it via emergent design.

02 April 2010


zeromq: Fastest. Messaging. Ever.
What is ØMQ?

Imagine pipes that connect your app to many other apps, and let you talk using a simple socket API, from any language and on any OS. Really fast, and it gets out of your way. It's like TCP on steroids!

* ØMQ is a lightweight messaging implementation with a socket-style API.
* Sends and receives messages asynchronously (a.k.a. "message queueing").
* Supports different messaging patterns such as point-to-point, publish-subscribe, request-reply, parallelized pipeline and more.
* Is fast. 13.4 usec end-to-end latencies and over 8M messages a second today (Infiniband).
* Is thin. The core requires just a couple of pages in resident memory.
* Is open source, LGPL-licensed software written in C++.
* Has bindings for many different languages (see the "Languages" section on left).
* Supports different transport protocols: TCP, PGM, IPC, and more.
* Runs on HP-UX, Linux, Mac OS X, NetBSD, OpenVMS, Solaris, Windows, and more.
* Supports microarchitectures such as x86, AMD64, SPARC, IA-64, ARM and more.
* Is fully distributed: no central servers to crash, millions of WAN and LAN nodes.

ØMQ aims to make messaging patterns first-class citizens of the Internet.

Compare to:

* TCP: message based, messaging patterns rather than stream of bytes.
* Jabber: do not confuse instant messaging with real messaging.
* AMQP: 100x faster to do the same work and with no brokers (and 278 pages less spec).
* IPC: we abstract across boxes, not just a single machine.
* CORBA: we do not enforce horrible complex message formats on you.
* RPC: 0MQ is totally asynchronous, and lets you add/remove participants at any time.
* RFC 1149: a lot faster!
* 29west LBM: we're free software!
* IBM Low-latency: we're free software!
* Tibco: we're still free software!

30 March 2010

10 Questions to Ask Your New Manager | Javalobby

10 Questions to Ask Your New Manager | Javalobby
# What do you know about management? What models do you use?
# What books and blogs do you read? Which managers are your source of inspiration?
# Are your teams self-organizing? How? And how do you add value?
# Can you give examples of your teams being happy about what you've done for them?
# How have you motivated your team members?
# What kind of direction, rules and constraints do you impose on teams?
# What kinds of impediments have you removed lately?
# How do you develop competence and craftsmanship in the teams?

The Obix Framework: Software Configuration Made Easy

The Obix Framework: Software Configuration Made Easy
"The Obix Framework simplifies software configuration by providing a standard means of encoding configuration data in XML, and of loading such data into an application so that it can be accessed using basic Java™ objects. It provides a host of powerful, yet simple, features that simplify the representation and use of configuration information. These features, to name but a few, include: the ability to represent complex configuration data (file) trees, by providing links between configuration documents; modularization of configuration data; automatic change detection and auto-reload of configuration data; simple integration into Java™ applications using little or no custom code; support for enterprise scale (J2EE™) applications; configuration event listeners; a flat learning curve; and extensibility."

22 March 2010

Codehaus: GreenMail

Codehaus: GreenMail
GreenMail is an embeddable, lightweight and sandboxed email server for testing and development purposes.

* Features:
  – supports SMTP, POP3, IMAP with SSL
  – provides a JBoss GreenMail Service extension for running a sandboxed mail server
  – contains examples for mail integration tests

21 March 2010

HtmlUnit 2.4, a headless java browser, released - TheServerSide.com

HtmlUnit 2.4, a headless java browser, released - TheServerSide.com
A new release of the pure GUI-less browser is available, which allows high-level manipulation of web pages, such as filling forms, clicking links, and accessing attributes and values of specific elements within the pages. You do not have to create lower-level TCP/IP or HTTP requests: just getPage(url), find a hyperlink, click(), and the HTML, JavaScript, and Ajax are all processed automatically.

The most common use of HtmlUnit is test automation of web pages (even with complex JavaScript libraries, like jQuery and Google Web Toolkit), but sometimes it can be used for web scraping, or downloading website content.

20 March 2010

How to Use Symlinks in Windows - Symlinks - Lifehacker

How to Use Symlinks in Windows - Symlinks - Lifehacker
mklink /j "c:\users\Will\Music\iTunes\iTunes Music" d:\Music\ - This line makes a symlink that redirects from the folder c:\users\Will\Music\iTunes\iTunes Music to the Music folder on my second hard drive. This type of use is especially handy if you have a small main hard drive and a larger secondary drive.

How do I use all my cores? | Architects Zone

I didn't really agree with this article's general approach to concurrency via Unix pipes, but there was a "bang-on" comment included below:

This is a classic hammer-and-nail kind of thing, where pipes and filters are a particularly old hammer and map-reduce is about the same age but recently re-discovered. Neither will solve all your problems.

Yes, concurrency is hard, especially if you have no background in computer science, i.e. if you lack basic understanding of the abstractions that can make your life easier. If you do have such a background, the next step is understanding the different patterns that exist in this space. Producer consumer, semaphores, message queues, callbacks, functions without side effects, threads, blocking/non blocking IO, etc.

Still with me? Now the good news. Your needs are probably quite modest and well covered by some existing framework. Using the java.util.concurrent API is not exactly easy, but if used properly it will allow you to dodge synchronization issues.

A few useful tricks that you should practice regardless of whether you are going to run with multiple threads:

- Don't use global variables.

- Don't share object state with mutable objects.

- Use dependency injection (i.e. don't instantiate dependencies yourself). Keep the number of dependencies per class low.

- Separation of concerns. Make methods do only one thing and keep your classes cohesive (i.e. don't dump random methods in one class).

If you do all this properly, your design will make a shared nothing approach a lot easier. Shared nothing is what you need to parallelize. Shared something means context switches and synchronization. These are the two things that make concurrent programming hard. If you can do shared nothing, concurrency is easy.

If you can't, work on how you share between processes/threads. Asynchronous is great here. Use call back mechanisms or some kind of queueing solution. Avoid manipulating semaphores and locks yourself, leave that to some off the shelf solution.

Unix pipelines are great if all processes in the pipeline are independent, don't contend for the same resources, need to do about the same amount of work, and can work on partial results as they are streamed from the predecessor. If not, you've got a clogged pipe and a bunch of processes waiting for it to become unclogged.
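The "shared nothing" advice above maps directly onto java.util.concurrent: give each task its own slice of the work and only combine the results at the end. A minimal sketch (the summing problem is just an illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedNothingDemo {

    // Sum 1..upTo by giving each task its own independent slice.
    // No shared mutable state, so no locks or synchronized blocks are needed.
    static long parallelSum(long upTo, int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        try {
            long slice = upTo / tasks;   // assumes tasks divides upTo evenly
            List<Future<Long>> partials = new ArrayList<>();
            for (int i = 0; i < tasks; i++) {
                final long start = i * slice + 1, end = (i + 1) * slice;
                partials.add(pool.submit(() -> {
                    long sum = 0;        // task-local state only
                    for (long n = start; n <= end; n++) sum += n;
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // combine results
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(1_000_000, 4)); // 500000500000
    }
}
```

Because each task touches only its own local `sum`, there are no context switches forced by contention and no synchronization to get wrong, which is exactly the commenter's point.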

Java plugin for Firefox on 64 bit linux

cd ~/.mozilla/plugins
ln -s ${JAVA_HOME}/jre/lib/amd64/libnpjp2.so

15 March 2010

Great Lies: "Design" vs. "Construction" | Architects Zone

Great Lies: "Design" vs. "Construction" | Architects Zone
In the building trades there are architects, engineers and construction crews. In manufacturing, there are engineers and factory labor. In these other areas, there's a clear distinction between design and construction. Software must be the same. Right?

The analogy is fatally flawed because there is no "construction" in the creation of software. Software only has design. Writing code is -- essentially -- design work.

13 March 2010

Design Patterns Uncovered: The Visitor Pattern | Javalobby

Design Patterns Uncovered: The Visitor Pattern | Javalobby
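As a reminder of what the Visitor pattern looks like in practice, here is a minimal, hypothetical shape example (not taken from the article): the operation (area) lives in the visitor, not in the element classes.

```java
// Elements accept a visitor; double dispatch picks the right visit() overload.
interface Shape { <R> R accept(ShapeVisitor<R> v); }

class Circle implements Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    public <R> R accept(ShapeVisitor<R> v) { return v.visit(this); }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public <R> R accept(ShapeVisitor<R> v) { return v.visit(this); }
}

interface ShapeVisitor<R> {
    R visit(Circle c);
    R visit(Square s);
}

// A concrete visitor: computes area without touching the shape classes.
class AreaVisitor implements ShapeVisitor<Double> {
    public Double visit(Circle c) { return Math.PI * c.radius * c.radius; }
    public Double visit(Square s) { return s.side * s.side; }
}

public class VisitorDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Square(2) };
        double total = 0;
        for (Shape s : shapes) total += s.accept(new AreaVisitor());
        System.out.println(total); // Math.PI + 4.0
    }
}
```

New operations (perimeter, serialization, ...) become new visitors; the trade-off is that adding a new Shape forces a change to every visitor interface.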

Jira Studio available for Google Apps

JIRA Studio has "Gone Google" - Atlassian News
The analog to Google Apps at Atlassian is JIRA Studio, our hosted software development suite. JIRA Studio (Studio for short) is our fastest growing product, which shouldn't be a surprise. In a single, hosted, just-turn-it-on-and-it-works product, Studio combines source control (Subversion), issue tracking (JIRA), agile planning (GreenHopper), enterprise wiki (Confluence), code browsing (FishEye), code reviews (Crucible) and continuous integration (Bamboo). All of that, beautifully integrated, and hosted as a single service. As a customer, you don't worry about managing or upgrading it - we take care of all that. Studio helps teams build great software, by giving them the tools they need to manage code and development projects, without the hassle of managing those tools.

VcsSurvey - MartinFowler.com

Dilbert CMMI


06 March 2010

BlueGreenDeployment - MartinFowler.com

MF Bliki: BlueGreenDeployment
One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production. You usually need to do this quickly in order to minimize downtime. The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.

Blue-green deployment also gives you a rapid way to rollback - if anything goes wrong you switch the router back to your blue environment.
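The "switch the router" step can be as simple as repointing a reverse proxy. A hypothetical nginx sketch (host names and ports invented, not from Fowler's article):

```nginx
# Blue-green cut-over via nginx: only one upstream server is active.
upstream production {
    server blue.internal:8080;        # live today
    # server green.internal:8080;    # swap the comments and reload to cut over
}
server {
    listen 80;
    location / { proxy_pass http://production; }
}
```

Rolling back is the same edit in reverse, which is the rapid-rollback property described above.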

What do you try to leave in your commit messages? | Java.net

What do you try to leave in your commit messages? | Java.net
# Bug ID. In fact, bug/commit association is so useful that you often use (or write) programs that analyze these relationships, so it's preferable for this information to be machine-readable.
# URL to the e-mail in the archive that prompted me to produce a change. In Hudson, a conversation with users often reveals an issue or an enhancement that results in a commit. This URL lets me retrieve the context of that change, and I find it tremendously useful.
# If the discussion of a change was in IM, not e-mail, I just paste the whole conversation log, as it doesn't have a URL. Ditto if the e-mail was sent to me privately.
# The input value and/or the environment that caused a misbehavior. In Hudson, I have this one method that needs to do some special casing for various application servers. When I later generalized it a bit more, commit messages that recorded the weird inputs from WebSphere, OC4J, etc. turned out to be very useful.
# For a fix, a stack trace that indicates the failure. Sometimes I misdiagnose the problem, and later when I suspect I did, being able to see the original output really helps me.
# If I tried or considered other approaches to the problem and abandoned them, I record those and why. I sometimes look back at an old change and go "why did I fix it this way — doing it that way would be a whole lot better!", only to discover later that "ah, because this won't work if I'd done that!", and that I'd gone through the same trail of thought before. If I'm in the middle of a big change and decide to abandon a considerable portion of it, I sometimes even commit it and roll it back, just so that I can revisit later why I abandoned it.
# If a change should logically have been part of a previous change, just say so. If I happen to know the commit ID of the previous change, I definitely leave that, but if I don't remember it, I still try to point to where that previous change was (roughly when it was made, which file it touched, who did it, etc.), so that my future self can narrow down the search space if I need to find it.
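Put together, a commit message following this advice might look something like the sketch below; every ID, URL, and detail here is an invented placeholder, shown only to illustrate the structure:

```text
Fix container detection on WebSphere            <- one-line summary

Bug: HUDSON-1234                                <- machine-readable bug ID
Context: http://example.org/list/thread-url     <- the thread that prompted this

WebSphere returns an unexpected value from the server-info call,
so the special-casing now keys off a different property.
Considered matching on the vendor string instead, but abandoned
that because it misfires on OC4J.               <- abandoned approach and why
```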

What's new in iBATIS 3

18 February 2010

JNA tutorial for JNI

29West Announces Next-Generation JMS Solution

HornetQ - fast with nice features?

HornetQ Stings the Competition in Peer-Reviewed Benchmarks | Javalobby

So not only is it supposed to be fast, but if you look at its feature list, it has some nice enterprise features:

  • Very high performance journal for message persistence
  • Topic hierarchies.  Also known as "topic wild-cards".  The idea here is that you can create a topic subscriber using a wild-card: e.g. if you create a consumer on newsfeeds.uk.* it will receive all messages sent to newsfeeds.uk.sport and also newsfeeds.uk.culture.
  • Dead letter addresses.  A dead letter address is where a message gets sent when it can't be delivered after X number of retries.  Dead letter addresses are highly configurable in HornetQ - they can be configured either globally or individually on subsets of addresses.  They also need not represent only a queue: dead letter addresses can represent addresses at which there may be multiple subscribers, or diverts.
  • Producer flow control.  HornetQ provides a credit based producer flow control mechanism to prevent clients overwhelming a server with messages.
  • Consumer flow control.  HornetQ provides a credit based consumer flow control mechanism to prevent client consumers from being overwhelmed with messages sent from a server.
  • Send acknowledgements.  Register a listener to get acknowledgements that messages sent asynchronously have arrived and been processed on the server.  This gives the same arrival guarantees as sending a JMS message in blocking mode but at much higher performance since it is asynchronous!
  • Last value queue.  This is a special queue which only keeps the most recent copy of a message with a certain header.  E.g. useful for stock prices where you're only interested in the latest price.
  • High performance core bridges.  Core bridges can be used to connect any two HornetQ instances.  Bridges can be optionally configured to only forward messages which match a particular selector query (like SQL 92 syntax).
  • Bridges can forward preserving destination or to a different destination.  Bridges have a transformation hook point where you can plug in message transformation (e.g. smooks).  Bridges are resilient and cope with connection failure, automatically retrying etc as appropriate.  100% once and only once delivery is guaranteed with a bridge without having to resort to more heavyweight solutions such as JTA (XA).
  • Server clusters.  You can configure groups of HornetQ servers into clusters.  Messages sent to a node in a cluster are automatically load balanced across all the matching consumers in the cluster, even if the target node is several hops away, and the number of consumers on remote nodes is taken into account.
  • Automatic, fast, client failover.
  • Replication of data store.
  • Clustered request-response.  Send message with reply-to set, message is processed on any node of the cluster and reply is sent back to the original node.
  • Message grouping.  Message grouping allows you to guarantee that all messages of a certain type are processed serially by a single consumer.  This is done by adding a special property to the message. All messages with the same value of the property will be processed by the same consumer.  This also works across the cluster!

14 February 2010

Hibernate History

:// Micromata Labs // Home
The Hibernate History API provides Java interfaces to log modifications of persistent Java Objects in the Hibernate Framework.

07 February 2010

Modifying photo exif data from command line on linux

Install exiv2
yum install exiv2

To subtract 8 hours from the photo datetime
exiv2 ad -a -08:00:00 *.JPG

06 February 2010

Rapid Hiring & Firing to Build the Best Teams by Bruce Eckel

Rapid Hiring & Firing to Build the Best Teams
I talked about building teams, and how difficult it is to do it well within the business structures we currently have. One of the people there mentioned Kayak.com, and how the founder had hired and fired hundreds of people in order to get the 30 or so that he currently employs.

One way to look at the problem of company-building is that it should focus on creating great teams. I've heard numerous people say that the company could be in various kinds of bad shape but they could be happy in it as long as they were on a great team. If the team is indeed the fundamental component of the company, then it would be interesting to make an environment whose primary focus is to be a culture medium for great teams.

One of the biggest problems in team building can be thought of as a version of the "sunk cost" issue. Although it's better to think of lost investment as just that -- lost -- when making decisions, our brains tend to think in terms of what we've already invested. So if you've invested a lot of time and money in hiring someone, you'll tend to hang on to that person until the pain of doing so exceeds your imagination of what it took to get them in that position. This means that a poisonous person can easily be in a team long enough to destroy it before finally being ejected, at which point it's too late. You've thrown a bass into your team of goldfish, and while the bass is eating up your team, you're thinking "maybe he'll get full."

The other big factor is legal. Some people sued companies for wrongful termination, so every company adapted practices to prevent it. You can't just fire someone anymore, you have to give them a couple of 6-month evaluations to show due diligence.

I suspect the Kayak.com founder has everyone sign a contract that allows them to be fired easily. Producing company loyalty in a situation like that might be tough ... but if you get the right people then maybe that goes away. People might become vicariously loyal to their company because they really want to be in their team.

The real problem with hiring someone for a team is that you can't actually know how well they will work out until they're participating on a day-to-day basis. Nothing in an interview really tells you this. So apparently Kayak.com decided that most of the "interview" would go on by putting the person in the real situation and seeing how they worked out. Perhaps a little unsettling and brutal -- although looking for a job is never a great experience, and maybe having one, even if just for a short period, might be better than a long period out of work and looking -- but from the standpoint of the company it could produce some excellent results.

Spring Reflection Utils
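The link above is just a pointer; as a rough plain-JDK sketch of the kind of lookup Spring's ReflectionUtils wraps (findField plus makeAccessible-style access), with a made-up Person class:

```java
import java.lang.reflect.Field;

public class ReflectionSketch {
    // Hypothetical target class, not from the linked page.
    static class Person {
        private String name = "Ada";
    }

    // Walk the class hierarchy for a declared field,
    // analogous to ReflectionUtils.findField(Class, String).
    static Field findField(Class<?> clazz, String name) {
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if (f.getName().equals(name)) {
                    return f;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Field f = findField(Person.class, "name");
        f.setAccessible(true); // what ReflectionUtils.makeAccessible does, with extra checks
        System.out.println(f.get(new Person())); // prints "Ada"
    }
}
```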

04 February 2010

dvd::rip - A full featured DVD Ripper GUI for Linux,

GDataCopier command line for Google Docs

GDataCopier provides a complete set of command-line utilities for managing Google Docs (the syntax of which largely resembles scp) that allow users to

* gls - list documents on the Google document system
* gcp - export and import documents on the Google document system
* gmkdir - make directories on the Google docs system
* grm - remove documents and folders on the Google docs system
* gmv - move documents in and out of folders on the Google docs system

16 January 2010

Deliver Polished Presentations Steve Jobs Style - Presentation Tips - Lifehacker

Maven Reactor Plugin Examples

Turns out you can be a lot more selective about which modules are built when you run a Maven reactor build.

From Maven Reactor Plugin - Examples
Consider an ordinary multi-module reactor build:

|-- pom.xml
|-- fooUI
|   `-- pom.xml
|-- barBusinessLogic
|   `-- pom.xml
`-- bazDataAccess
    `-- pom.xml

Suppose project "fooUI" depends on project "barBusinessLogic", which depends on project "bazDataAccess".
fooUI --> barBusinessLogic --> bazDataAccess

mvn reactor:resume -Dfrom=barBusinessLogic

Suppose you're working on your code and you attempt to build it with mvn install from my-root-project, and you get a test failure in barBusinessLogic. You make additional changes to barBusinessLogic without changing bazDataAccess; you know bazDataAccess is fine, so there's no need to rebuild/test it. Running reactor:resume as shown skips over bazDataAccess and picks the build up where you left off, in barBusinessLogic. If barBusinessLogic succeeds, the build goes on to fooUI.

mvn reactor:make -Dmake.folders=barBusinessLogic

reactor:make will examine barBusinessLogic and walk down its dependency tree, finding all of the projects that it needs to build. In this case, it will automatically build bazDataAccess and then barBusinessLogic, without building fooUI.

mvn reactor:make-dependents -Dmake.folders=barBusinessLogic

Suppose you've made a change to barBusinessLogic; you want to make sure you didn't break any of the projects that depend on it. (In this case, that means fooUI, but in a more complex reactor it might not be so obvious.) You also want to avoid rebuilding/testing projects you know you haven't changed, which here means avoiding bazDataAccess. reactor:make-dependents examines all of the projects in your reactor to find those that depend on barBusinessLogic, and automatically builds them and nothing else. In this case it will build barBusinessLogic and then fooUI.

mvn reactor:make-scm-changes

reactor:make-scm-changes determines which files have changed using your SCM (Source Configuration Management) tool, e.g. Subversion, Perforce, Git, etc. To use it, you'll need to configure an SCM connection in your root project POM file:
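The POM snippet itself is missing from the excerpt; a minimal SCM connection element would look something like this (the repository URL is a placeholder):

```xml
<project>
  ...
  <scm>
    <connection>scm:svn:http://svn.example.com/repos/trunk/my-root-project</connection>
  </scm>
</project>
```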

All of the reactor plugin goals accept a -Dmake.printOnly argument that you can use to see what the goal would have done without actually doing it. For example:

mvn reactor:make -Dmake.folders=barBusinessLogic -Dmake.printOnly

Running a different goal/lifecycle ("test", "package", "eclipse:eclipse", "clean", etc.)

By default, all of the reactor plugin goals will run mvn install on the appropriate projects. That's a pretty reasonable default, but sometimes you want to run a different command on a bunch of projects. All of the reactor plugin goals will accept a -Dmake.goals argument that will let you run other goals instead. You can separate multiple goals with commas:

mvn reactor:make -Dmake.folders=barBusinessLogic -Dmake.goals=eclipse:eclipse
mvn reactor:make-dependents -Dmake.folders=barBusinessLogic -Dmake.goals=package,clean
mvn reactor:resume -Dfrom=barBusinessLogic -Dmake.goals=test
mvn reactor:resume -Dfrom=barBusinessLogic -Dmake.goals=install,-DskipTests

In other words, the "goals" are just extra command-line parameters passed to the spawned Maven; they don't necessarily have to be "goals."

When you use reactor:make, you run a subset of projects, but that doesn't mean nothing can fail halfway through the build. You can resume a reactor:make build from the project that stopped it by passing -Dfrom to the reactor:make goal:

mvn reactor:make -Dmake.folders=fooUI -Dfrom=barBusinessLogic

The -Dfrom argument also works with reactor:make-dependents and reactor:make-scm-changes.

Nested directories

Let's consider a more complex project:

|-- pom.xml
|-- fooUI
|   `-- pom.xml
|-- barBusinessLogic
|   `-- pom.xml
|-- quz
|   |-- pom.xml
|   |-- quzAdditionalLogic
|   |   `-- pom.xml
|   `-- quzUI
|       `-- pom.xml
`-- bazDataAccess
    `-- pom.xml

Again suppose project "fooUI" depends on project "barBusinessLogic", which depends on project "bazDataAccess".

fooUI --> barBusinessLogic --> bazDataAccess

But furthermore, suppose "quzUI" depends on "quzAdditionalLogic", which depends on "barBusinessLogic."

quzUI --> quzAdditionalLogic --> barBusinessLogic --> bazDataAccess

If you try to run mvn reactor:make -Dmake.folders=quzUI, you'll get an error:

mvn reactor:make -Dmake.folders=quzUI
[INFO] Folder doesn't exist: /home/person/svn/trunk/quzUI

Naturally, you'll have to specify the complete relative path to quzUI, like this:

mvn reactor:make -Dmake.folders=quz/quzUI

15 January 2010

Maven AppAssembler

Appassembler generates artifacts that expose your Java app through the Java Service Wrapper


Install PHProxy in Your Web Space to Access Blocked Sites - Proxy - Lifehacker

Installing PHProxy under a secure website (https) is a simple way to visit sites that may not be reachable through a corporate firewall.

HornetQ - another JMS implementation from JBoss

HornetQ - putting the buzz in messaging - JBoss Community
HornetQ is an open source project to build a multi-protocol, embeddable, very high performance, clustered, asynchronous messaging system.

Why should I use HornetQ?

JMS and above - HornetQ supports the JMS 1.1 API and also defines its own messaging API for maximum performance and flexibility. Other protocols are planned for upcoming releases.

Superb performance - HornetQ's class-beating high-performance journal provides persistent messaging at rates normally seen only for non-persistent messaging. Non-persistent messaging performance is excellent too.

POJO-based design - HornetQ has been designed using POJOs and minimal third-party dependencies. You choose how you want to use HornetQ: run it stand-alone, integrate it with JBoss Application Server or another Java server/container, or embed it directly inside your own product.

Solid high availability - HornetQ offers server replication and automatic client failover to eliminate lost or duplicated messages in case of server failure.

Flexible clustering - Create clusters of HornetQ servers that know how to load balance messages. Link geographically distributed clusters over unreliable connections to form a global network. Configure the routing of messages in a highly flexible way. Adapt HornetQ to your network topology, not the other way round.

Management - HornetQ provides a comprehensive management API to manage and monitor servers. It integrates seamlessly with the servers to work in an HA environment.