11 November 2012

kryo - Fast, efficient Java serialization and cloning

kryo - Fast, efficient Java serialization and cloning - Google Project Hosting
Kryo is a fast and efficient object graph serialization framework for Java. The goals of the project are speed, efficiency, and an easy to use API. The project is useful any time objects need to be persisted, whether to a file, database, or over the network. Kryo can also perform automatic deep and shallow copying/cloning. This is direct copying from object to object, not object->bytes->object.
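
A minimal sketch of the write/read/copy API, assuming Kryo 2.x (the Point class and the file name are illustrative, not taken from the project page):

 import com.esotericsoftware.kryo.Kryo;
 import com.esotericsoftware.kryo.io.Input;
 import com.esotericsoftware.kryo.io.Output;
 import java.io.FileInputStream;
 import java.io.FileOutputStream;

 public class KryoExample {
   // Kryo recreates instances via a no-arg constructor by default
   public static class Point {
     int x, y;
     Point() {}
     Point(int x, int y) { this.x = x; this.y = y; }
   }

   public static void main(String[] args) throws Exception {
     Kryo kryo = new Kryo();
     Point original = new Point(1, 2);

     // Serialize the object graph to a file
     Output output = new Output(new FileOutputStream("point.bin"));
     kryo.writeObject(output, original);
     output.close();

     // Deserialize it back
     Input input = new Input(new FileInputStream("point.bin"));
     Point restored = kryo.readObject(input, Point.class);
     input.close();

     // Direct object-to-object deep copy, no intermediate bytes (a shallow variant also exists)
     Point copy = kryo.copy(original);
   }
 }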

Finagle

Finagle, from Twitter
Finagle is a network stack for the JVM that you can use to build asynchronous Remote Procedure Call (RPC) clients and servers in Java, Scala, or any JVM-hosted language. Finagle provides a rich set of protocol-independent tools.

Snappy - A fast compressor/decompressor

snappy - A fast compressor/decompressor - Google Project Hosting
Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.
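
The bookmarked project is the C++ library itself; purely as an illustration, here is a small round trip using the snappy-java binding (the binding and the sample text are assumptions on my part):

 import org.xerial.snappy.Snappy;

 public class SnappyRoundTrip {
   public static void main(String[] args) throws Exception {
     byte[] input = "aaaa bbbb aaaa bbbb aaaa bbbb".getBytes("UTF-8");
     byte[] compressed = Snappy.compress(input);      // fast, modest compression ratio
     byte[] restored = Snappy.uncompress(compressed);
     System.out.println(input.length + " -> " + compressed.length + " bytes, round trip ok: "
         + new String(restored, "UTF-8").equals(new String(input, "UTF-8")));
   }
 }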

30 June 2012

5' on IT-Architecture: Four Laws of Robust Software Systems | Javalobby

Interesting point here. Given there is nothing permanent except change, it's important to build systems that are easy to change later. Unit and integration tests facilitate that change.

5' on IT-Architecture: Four Laws of Robust Software Systems | Javalobby
There is nothing permanent except change. If a system isn't designed in accordance with this supremely important reality, the probability of failure may increase above average. A widely used technique to facilitate change is the development of a sufficient set of unit tests. Unit testing enables you to uncover regressions in existing functionality after changes have been made to a system. It also encourages you to really think about the desired functionality and required design of the component under development.

28 June 2012

FolderSync • Reg Hardware

FolderSync • Reg Hardware
lets you synchronise your Android folders with SkyDrive and just about every other cloud storage supplier.

FolderSync supports most of the popular commercial cloud wallahs, including Dropbox, SkyDrive, SugarSync, Ubuntu One, Box, Amazon S3 and Google Drive. It also works with FTP, Samba/Windows share and WebDAV remote storage for the DIY enthusiast. It also supports multiple accounts of the same type.

Envers - Hibernate audit tables

Envers - JBoss Community
The Envers project aims to enable easy auditing/versioning of persistent classes. All you have to do is annotate the persistent class, or the properties you want to audit, with @Audited. For each audited entity a table will be created which holds the history of changes made to the entity. You can then retrieve and query historical data without much effort.
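
A minimal sketch of what that looks like, assuming JPA plus Envers on the classpath (the entity, its fields and the revision-lookup helper are illustrative):

 import javax.persistence.Entity;
 import javax.persistence.EntityManager;
 import javax.persistence.Id;
 import org.hibernate.envers.AuditReader;
 import org.hibernate.envers.AuditReaderFactory;
 import org.hibernate.envers.Audited;

 @Entity
 @Audited   // every change to this entity is recorded in a generated audit (history) table
 public class Customer {
   @Id private Long id;
   private String name;

   // Illustrative query: the state this customer had at a given revision number
   public static Customer atRevision(EntityManager em, Long customerId, Number revision) {
     AuditReader reader = AuditReaderFactory.get(em);
     return reader.find(Customer.class, customerId, revision);
   }
 }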

The Netflix Tech Blog: The Netflix Simian Army

The Netflix Tech Blog: The Netflix Simian Army
Imagine getting a flat tire. Even if you have a spare tire in your trunk, do you know if it is inflated? Do you have the tools to change it? And, most importantly, do you remember how to do it right? One way to make sure you can deal with a flat tire on the freeway, in the rain, in the middle of the night is to poke a hole in your tire once a week in your driveway on a Sunday afternoon and go through the drill of replacing it. This is expensive and time-consuming in the real world, but can be (almost) free and automated in the cloud.

This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables -- all the while we continue serving our customers without interruption. By running Chaos Monkey in the middle of a business day, in a carefully monitored environment with engineers standing by to address any problems, we can still learn the lessons about the weaknesses of our system, and build automatic recovery mechanisms to deal with them. So next time an instance fails at 3 am on a Sunday, we won't even notice.

Inspired by the success of the Chaos Monkey, we’ve started creating new simians that induce various kinds of failures, or detect abnormal conditions, and test our ability to survive them; a virtual Simian Army to keep our cloud safe, secure, and highly available.

Latency Monkey induces artificial delays in our RESTful client-server communication layer to simulate service degradation and measures if upstream services respond appropriately. In addition, by making very large delays, we can simulate a node or even an entire service downtime (and test our ability to survive it) without physically bringing these instances down. This can be particularly useful when testing the fault-tolerance of a new service by simulating the failure of its dependencies, without making these dependencies unavailable to the rest of the system.

Conformity Monkey finds instances that don’t adhere to best-practices and shuts them down. For example, we know that if we find instances that don’t belong to an auto-scaling group, that’s trouble waiting to happen. We shut them down to give the service owner the opportunity to re-launch them properly.

Doctor Monkey taps into health checks that run on each instance as well as monitors other external signs of health (e.g. CPU load) to detect unhealthy instances. Once unhealthy instances are detected, they are removed from service and after giving the service owners time to root-cause the problem, are eventually terminated.

Janitor Monkey ensures that our cloud environment is running free of clutter and waste. It searches for unused resources and disposes of them.

Security Monkey is an extension of Conformity Monkey. It finds security violations or vulnerabilities, such as improperly configured AWS security groups, and terminates the offending instances. It also ensures that all our SSL and DRM certificates are valid and are not coming up for renewal.

10-18 Monkey (short for Localization-Internationalization, or l10n-i18n) detects configuration and run time problems in instances serving customers in multiple geographic regions, using different languages and character sets.

Chaos Gorilla is similar to Chaos Monkey, but simulates an outage of an entire Amazon availability zone. We want to verify that our services automatically re-balance to the functional availability zones without user-visible impact or manual intervention.

With the ever-growing Netflix Simian Army by our side, constantly testing our resilience to all sorts of failures, we feel much more confident about our ability to deal with the inevitable failures that we'll encounter in production and to minimize or eliminate their impact to our subscribers. The cloud model is quite new for us (and the rest of the industry); fault-tolerance is a work in progress and we have ways to go to fully realize its benefits. Parts of the Simian Army have already been built, but much remains an aspiration -- waiting for talented engineers to join the effort and make it a reality.

24 June 2012

Spring 3.1 Constructor Namespace

Spring 3.1's constructor (c:) namespace makes constructor injection much neater to use, but you do need debug-compiled classes so that the constructor parameter names are available.

23 June 2012

Tracking Excessive Garbage Collection in Hotspot JVM

Tracking Excessive Garbage Collection in Hotspot JVM

Interesting. It looks like setting "-XX:GCHeapFreeLimit=20" and "-XX:GCTimeLimit=90" will help ensure that OutOfMemoryError is thrown earlier, so that an application doesn't remain in a degraded state with overly frequent garbage collections.
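
A sketch of how those flags might be passed on the command line (the collector choice, heap size and jar name are placeholders; the limits apply when the GC overhead limit check is enabled, as it is by default with the parallel collector):

java -XX:+UseParallelGC -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=20 -Xmx512m -jar my-app.jar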

08 June 2012

GuavaExplained - guava-libraries - Every Java developer should read this wiki in detail

GuavaExplained - guava-libraries

Every Java developer should read this wiki in detail.
The Guava project contains several of Google's core libraries that we rely on in our Java-based projects: collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and so forth. Each of these tools really do get used every day by Googlers.

But trawling through Javadoc isn't always the most effective way to learn how to make best use of a library. Here, we provide readable and pleasant explanations of some of the most popular and most powerful features of Guava.

This wiki is a work in progress, and parts of it may still be under construction.

Basic utilities: Make using the Java language more pleasant.
Preconditions: Test preconditions for your methods more easily.
Common object methods: Simplify implementing Object methods, like hashCode() and toString().
Ordering: Guava's powerful "fluent Comparator" class.
Throwables: Simplify propagating and examining exceptions and errors.
Collections: Guava's extensions to the JDK collections ecosystem. These are some of the most mature and popular parts of Guava.
Immutable collections, for defensive programming, constant collections, and improved efficiency.
New collection types, for use cases that the JDK collections don't address as well as they could: multisets, multimaps, tables, bidirectional maps, and more.
Powerful collection utilities, for common operations not provided in java.util.Collections.
Extension utilities: writing a Collection decorator? Implementing Iterator? We can make that easier.
Caches: Local caching, done right, and supporting a wide variety of expiration behaviors.
Functional idioms: Used sparingly, Guava's functional idioms can significantly simplify code.
Concurrency: Powerful, simple abstractions to make it easier to write correct concurrent code.
ListenableFuture: Futures, with callbacks when they are finished.
Service: Things that start up and shut down, taking care of the difficult state logic for you.
Strings: A few extremely useful string utilities: splitting, joining, padding, and more.
Primitives: operations on primitive types, like int and char, not provided by the JDK, including unsigned variants for some types.
Ranges: Guava's powerful API for dealing with ranges on Comparable types, both continuous and discrete.
I/O: Simplified I/O operations, especially on whole I/O streams and files, for Java 5 and 6.
Hashing: Tools for more sophisticated hashes than what's provided by Object.hashCode(), including Bloom filters.
EventBus: Publish-subscribe-style communication between components without requiring the components to explicitly register with one another.
Math: Optimized, thoroughly tested math utilities not provided by the JDK.
Reflection: Guava utilities for Java's reflective capabilities.
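
As a small taster of a few of these utilities together (the method and values are made up, but the Preconditions, Ordering, Joiner and ImmutableList calls are standard Guava):

 import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.Ordering;
 import java.util.List;

 public class GuavaTaster {
   static String highestFirst(List<Integer> scores) {
     Preconditions.checkArgument(!scores.isEmpty(), "scores must not be empty");
     List<Integer> sorted = Ordering.<Integer>natural().reverse().sortedCopy(scores);
     return Joiner.on(", ").join(sorted);
   }

   public static void main(String[] args) {
     System.out.println(highestFirst(ImmutableList.of(3, 9, 1)));   // prints: 9, 3, 1
   }
 }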

03 June 2012

27 May 2012

Continuous delivery/deployment

Have been reading the http://continuousdelivery.com/ book.  Really like the ideas:


Infrastructure as code

The desired state of your infrastructure should be specified through version-controlled configuration.

You should always know the desired state of your infrastructure through your instrumentation and monitoring.

Automated deployment of infrastructure includes
  • operating system configuration
  • middleware stack and its configuration: app servers, messaging systems, databases
  • network infrastructure: routers, firewalls, switches, DNS, DHCP, DMZs
Acceptance tests

Keeping the complex acceptance tests running will take time from your development team. However, this cost is in the form of an investment which, in our experience, is repaid many times over in reduced maintenance costs, the protection that allows you to make wide-ranging changes to your application, and significantly higher quality. This follows our general principle of bringing the pain forward in the process. We know from experience that without excellent automated acceptance test coverage, one of three things happens:
  • Either a lot of time is spent trying to find and fix bugs at the end of the process when you thought you were done, or 
  • You spend a great deal of time and money on manual acceptance and regression testing, or 
  • You end up releasing poor-quality software.
It is difficult to enumerate the reasons that make a project warrant such an investment in automated acceptance testing. For the type of projects that we usually get involved in, our default is that automated acceptance testing and an implementation of the deployment pipeline are usually a sensible starting point. For projects of extremely short duration with a small team, maybe four or fewer developers, it may be overkill—you might instead run a few end-to-end tests as part of a single-stage CI process. But for anything larger than that, the focus on business value that automated acceptance testing gives to developers is so valuable that it is worth the costs. It bears repeating that large projects start out as small projects, and by the time a project gets large, it is invariably too late to retro-fit a comprehensive set of automated acceptance tests without a Herculean level of effort. We recommend that the use of automated acceptance tests created, owned, and maintained by the delivery team should be the default position for all of your projects.


Capacity Testing System

The capacity test system is usually the closest analog to your expected production system. As such, it is a very valuable resource. Further, if you follow our advice and design your capacity tests as a series of composable, scenario-based tests, what you really have is a sophisticated simulation of your production system. This is an invaluable resource for a whole variety of reasons. We have discussed already why scenario-based capacity testing is of importance, but given the much more common approach of benchmarking specific, technically focused interactions, it is worth reiterating. Scenario-based testing provides a simulation of real interactions with the system. By organizing collections of these scenarios into complex composites, you can effectively carry out experiments with as much diagnostic instrumentation as you wish in a production-like system. We have used this facility to help us perform a wide variety of activities:
  • Reproducing complex production defects
  • Detecting and debugging memory leaks
  • Longevity testing
  • Evaluating the impact of garbage collection
  • Tuning garbage collection
  • Tuning application configuration parameters
  • Tuning third-party application configuration, such as operating system, application server, and database configuration
  • Simulating pathological, worst-day scenarios
  • Evaluating different solutions to complex problems
  • Simulating integration failures
  • Measuring the scalability of the application over a series of runs with different hardware configurations
  • Load-testing communications with external systems, even though our capacity tests were originally intended to run against stubbed interfaces
  • Rehearsing rollback from complex deployments
  • Selectively failing parts of the application to evaluate graceful degradation of service
  • Performing real-world capacity benchmarks in temporarily available production hardware so that we could calculate more accurate scaling factors for a longer-term, lower-specification capacity test environment
This is not a complete list, but each of these scenarios comes from a real project.

Release Strategy

The most important part of creating a release strategy is for the application’s stakeholders to meet up during the project planning process. The point of their discussions should be working out a common understanding concerning the deployment and maintenance of the application throughout its lifecycle. This shared understanding is then captured as the release strategy. This document will be updated and maintained by the stakeholders throughout the application’s life. When creating the first version of your release strategy at the beginning of the project, you should consider including the following:
  • Parties in charge of deployments to each environment, as well as in charge of the release.
  • An asset and configuration management strategy.
  • A description of the technology used for deployment. This should be agreed upon by both the operations and development teams.
  • A plan for implementing the deployment pipeline.
  • An enumeration of the environments available for acceptance, capacity, integration, and user acceptance testing, and the process by which builds will be moved through these environments.
  • A description of the processes to be followed for deployment into testing and production environments, such as change requests to be opened and approvals that need to be granted.
  • Requirements for monitoring the application, including any APIs or services the application should use to notify the operations team of its state.
  • A discussion of the method by which the application’s deploy-time and runtime configuration will be managed, and how this relates to the automated deployment process.
  • Description of the integration with any external systems. At what stage and how are they tested as part of a release? How do the operations personnel communicate with the provider in the event of a problem?
  • Details of logging so that operations personnel can determine the application’s state and identify any error conditions.
  • A disaster recovery plan so that the application’s state can be recovered following a disaster.
  • The service-level agreements for the software, which will determine whether the application will require techniques like failover and other high-availability strategies.
  • Production sizing and capacity planning: How much data will your live application create? How many log files or databases will you need? How much bandwidth and disk space will you need? What latency are clients expecting?
  • An archiving strategy so that production data that is no longer needed can be kept for auditing or support purposes.
  • How the initial deployment to production works.
  • How fixing defects and applying patches to the production environment will be handled.
  • How upgrades to the production environment will be handled, including data migration.
  • How application support will be managed.
Release Plan
  • The steps required to deploy the application for the first time
  • How to smoke-test the application and any services it uses as part of the deployment process
  • The steps required to back out the deployment should it go wrong
  • The steps required to back up and restore the application’s state
  • The steps required to upgrade the application without destroying the application’s state
  • The steps to restart or redeploy the application should it fail
  • The location of the logs and a description of the information they contain
  • The methods of monitoring the application
  • The steps to perform any data migrations that are necessary as part of the release
  • An issue log of problems from previous deployments, and their solutions
...

There are several methods of performing a rollback that we will discuss here.

The more advanced techniques—blue-green deployments and canary releasing—can also be used to perform zero-downtime releases and rollbacks. Before we start, there are two important constraints. The first is your data. If your release process makes changes to your data, it can be hard to roll back. Another constraint is the other systems you integrate with. With releases involving more than one system (known as orchestrated releases), the rollback process becomes more complex too.

There are two general principles you should follow when creating a plan for rolling back a release. The first is to ensure that the state of your production system, including databases and state held on the filesystem, is backed up before doing a release. The second is to practice your rollback plan, including restoring from the backup or migrating the database back before every release to make sure it works.

...

One way to approach this problem is to put the application into read-only mode shortly before switchover. You can then take a copy of the green database, restore it into the blue database, perform the migration, and then switch over to the blue system. If everything checks out, you can put the application back into read-write mode. If something goes wrong, you can simply switch back to the green database. If this happens before the application goes back into read-write mode, nothing more needs to be done. If your application has written data you want to keep to the new database, you will need to find a way to take the new records and migrate them back to the green database before you try the release again. Alternatively, you could find a way to feed transactions to both the new and old databases from the new version of the application.

Continuous Deployment

Continuous deployment isn’t for everyone. Sometimes, you don’t want to release new features into production immediately. In companies with constraints on compliance, approvals are required for deployments to production. Product companies usually have to support every release they put out. However, it certainly has the potential to work in a great many places.

The intuitive objection to continuous deployment is that it is too risky. But, as we have said before, more frequent releases lead to lower risk in putting out any particular release. This is true because the amount of change between releases goes down. So, if you release every change, the amount of risk is limited just to the risk inherent in that one change. Continuous deployment is a great way to reduce the risk of any particular release.

Perhaps most importantly, continuous deployment forces you to do the right thing (as Fitz points out in his blog post). You can’t do it without automating your entire build, deploy, test, and release process. You can’t do it without a comprehensive, reliable set of automated tests. You can’t do it without writing system tests that run against a production-like environment. That’s why, even if you can’t actually release every set of changes that passes all your tests, you should aim to create a process that would let you do so if you choose to.

Your authors were really delighted to see the continuous deployment article cause such a stir in the software development community. It reinforces what we’ve been saying about the release process for years. Deployment pipelines are all about creating a repeatable, reliable, automated system for getting changes into production as fast as possible. It is about creating the highest quality software using the highest quality process, massively reducing the risks of the release process along the way. Continuous deployment takes this approach to its logical conclusion. It should be taken seriously, because it represents a paradigm shift in the way software is delivered. Even if you have good reasons for not releasing every change you make—and there are less such reasons than you might think—you should behave as if you were going to do so.

17 May 2012

Growing a Team By Investing in Tools

Growing a Team By Investing in Tools
Where many teams struggle however is in finding the right balance of building out the product versus “sharpening the saw” by improving their tools and automating processes. With the same people attempting to do both jobs that contention is unavoidable.

The solution then is to build a second team whose job is entirely focussed on building, maintaining, configuring and improving the tools that the product team uses. Since the majority of tools used by a project are fairly orthogonal to the project itself – and often shared with other product teams – the tool team can be much more independent, reducing the communication overhead. Not only do the product team now have more time to devote to improving the product, they are continuously made more productive because of the work done by the tools team.

10 May 2012

Engineering Management - Process

Engineering Management - Process
A process created and implemented by the individual contributors themselves is much more tuned to optimize the real work being done. A process designed by managers at best approximates the actual flow of work that it needs to govern, optimize, or codify. This is the source of many clueless and inefficient processes.

People feel greater ownership over process they create themselves, and are thus more empowered to adjust it in the future as circumstances inevitably evolve rather than allow it to calcify. Process imposed externally ("from above") is more difficult to break down and tends to be unnecessarily enshrined, thus producing extra organizational inertia.

22 April 2012

VMWare: getting sound working with a Linux host and Windows guest

Installed VMWare Player to have a Windows guest running on my Linux host. I found the sound wasn't working well, sounding distorted and clipped.

Fixed this by:

a) Run 'aplay -L' and find the entry for my sound card beginning with "front". In my case the entry was "front:CARD=Intel,DEV=0".

b) Edit my VM config file "Windows Vista.vmx" and change the following entries to:
sound.fileName = "front:CARD=Intel,DEV=0"
sound.autodetect = "FALSE"

Thanks to this post for the solution.

21 April 2012

Idempotence Is Not a Medical Condition - ACM Queue

Idempotence Is Not a Medical Condition - ACM Queue
In a world full of retried messages, idempotence is an essential property for reliable systems. Idempotence is a mathematical term meaning that performing an operation multiple times will have the same effect as performing it exactly one time. The challenges occur when messages are related to each other and may have ordering constraints. How are messages associated? What can go wrong? How can an application developer build a correctly functioning app without losing his or her mojo?
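
A minimal sketch of one common way to get idempotence on the consumer side: remember which message IDs have already been processed so a retried delivery becomes a no-op (the in-memory set and types are illustrative; a real system would persist the IDs):

 import java.util.Collections;
 import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;

 public class IdempotentHandler {
   // In-memory for the sketch; a real system would persist the processed IDs
   private final Set<String> processedIds =
       Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

   /** Applies the action only the first time a given messageId is seen. */
   public boolean handle(String messageId, Runnable action) {
     if (!processedIds.add(messageId)) {
       return false;   // a retried/duplicate message has the same net effect as one delivery
     }
     action.run();
     return true;
   }
 }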

14 April 2012

An Introduction to NoSQL Patterns | Architects Zone

Regaining root on Google Nexus S following 4.0.4 OTA update

My phone has
1) the bootloader unlocked
2) usb debug mode enabled

My PC has
1) the Android SDK installed to use adb and fastboot
 
To regain root following an OTA update

1) Download "ClockworkMod Touch" recovery from http://www.clockworkmod.com/rommanager to C:\sekhonp\installation\

2) Download su from http://download.clockworkmod.com/test/su.zip to C:\sekhonp\installation\

3) Plug the phone into a USB port and ensure USB debug mode is on

4) cd "C:\Program Files (x86)\Android\android-sdk\platform-tools"

5) adb push C:\sekhonp\installation\su.zip /sdcard/

6) Reboot the phone into the bootloader menu, but don't switch to recovery mode

7) ..\tools\fastboot flash recovery C:\sekhonp\installation\recovery-clockwork-touch-5.8.0.2-crespo.img (replace path with downloaded image)

8) Switch into recovery mode and install /sdcard/su.zip

9) Reboot

10) Open Root Explorer, change to /etc, mount r/w, delete /etc/install-recovery.sh, mount r/o

11) Open Rom Manager and flash "ClockworkMod Touch" recovery (as it would have been deleted by /etc/install-recovery.sh in previous reboot)

24 March 2012

Hyperdecanting: Better Wine in a Minute, You Impatient Philistine

Hyperdecanting: Better Wine in a Minute, You Impatient Philistine
Former Microsoft CTO and master chef Nathan Myhrvold suggests a method he calls "hyperdecanting". Sounds fancy and high-tech, right? It's basically shorthand for "put your wine in a blender for a minute and it'll taste better".

When Disruptor is not a good fit

As with most new and useful technologies and techniques, it is often easy to overuse them in scenarios where they are not appropriate.  If you have a shiny new hammer, the screws start to look like nails :).  In this post I want to discuss the requirement for high-frequency, low-latency event dispatch between threads in a process, and the use of the Disruptor for it.

When

The Disruptor framework is certainly a good fit where consumers of events need to receive all events that are published.  However I think it is a poor fit if:

1) Subscribers of events do not need to receive all events that are published.  For example, if they are receiving prices for a particular stock, they might only need the latest value.  Interestingly, in my domain, the sort of events that occur at high frequency and need to be dispatched with low latency are often events that can be conflated in some form.  Of course, that's not true for everybody, and if all subscribers need to receive each and every event, then using the Disruptor makes sense.

2) Subscribers can become slow and performance should remain good when some consumers are fast and others are slow.

Why

Let's take the scenario where a single publisher is generating stock prices for a single stock at a rate of 1000 prices per second.  There are three subscribers in the process.  If one of the subscribers blocks for 1 second, then 1000 events will back up in the ring buffer.  Subscribers that are not keeping up with the publisher will cause the Java VM to work harder on minor garbage collections, as newly created events live longer in the ring buffer.  In a more terminal scenario, say you size your ring buffer at 1,000,000 events: if one subscriber blocks for more than 1000 seconds, then the publisher will have to block as well as the ring buffer becomes full, affecting the two fast subscribers that were keeping up.  The impact on garbage collection will be even heavier, causing latency jitter.

In this scenario, where events can be conflated, I would rather not use the Disruptor.  Instead I would have the publisher dispatch to each subscriber via some sort of conflating queue.  Here is a snippet that makes it clearer:

 public class SingleValueConflatingDispatcher<E> {

   /** Minimal subscriber contract, assumed here so that dispatch() compiles. */
   public interface Subscriber<T> {
     void dispatch(T event);
   }

   private final Subscriber<E> subscriber;
   private volatile E lastEvent;   // volatile: written by the publisher thread, read by the subscriber thread

   public SingleValueConflatingDispatcher(Subscriber<E> subscriber) {
     this.subscriber = subscriber;
   }

   // Called on the publisher thread; a newer event simply overwrites the older one (conflation).
   public void add(E event) {
     this.lastEvent = event;
   }

   // Called on the subscriber thread; delivers only the most recent event.
   public void dispatch() {
     subscriber.dispatch(lastEvent);
   }
 }

The publisher thread invokes add() and the subscriber thread invokes dispatch().  For more complex conflation strategies you'll likely need to use locks rather than memory barriers to prevent race conditions in such a ComplicatedConflatingDispatcher; for me, though, that's a small price to pay.
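
For illustration, a single-threaded sketch of using the dispatcher above (in a real setup add() would run on the publisher thread and dispatch() on the subscriber thread; the prices are made up):

 public class ConflationExample {
   public static void main(String[] args) {
     SingleValueConflatingDispatcher<Double> prices =
         new SingleValueConflatingDispatcher<Double>(
             new SingleValueConflatingDispatcher.Subscriber<Double>() {
               public void dispatch(Double price) {
                 System.out.println("latest price: " + price);
               }
             });

     prices.add(100.01);
     prices.add(100.02);
     prices.add(100.03);   // the two earlier prices are simply overwritten (conflated)
     prices.dispatch();    // the subscriber sees only the latest value, 100.03
   }
 }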

The net effect is that a slow consumer has no impact on the other consumers.  The conflation happens on the publisher's thread, rather than a large number of events queuing up.

Mutability

One suggestion to reduce the impact of garbage collection when using the Disruptor with slower consumers is to use mutable events for the entries in the ring buffer.  It's true that mutable events are probably the only way to get to zero GC, which helps give you the lowest levels of latency jitter possible.  In the scenario where conflation is possible, I would much rather stick with immutable events.  Where you have lots of complex business logic operating on events downstream, with different developers/teams writing code operating on those events, it's just too easy to introduce concurrency bugs with mutable events.  I sleep much easier at night knowing all events are immutable.  Also, while mutable events reduce the effects of garbage collection, they do not prevent the publisher from blocking if one subscriber blocks for too long.

However, if your use case's complexity is constrained and/or you have a small, skilled team building it all, then you do have the option of using mutable events.

SpinLocks

If the Disruptor is right for your use case, do watch out for overuse of the BusySpinWaitStrategy.  Stating the obvious, only use it when you can afford to burn a CPU core even while the application is doing nothing.  Otherwise you have a range of other WaitStrategy implementations available.

20 March 2012

Inspirational Quotes About System Design | Javalobby

Some nice quotes here. I particularly like:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. Antoine de Saint-Exupery, French writer (1900 - 1944)

No more primitives: that would be good, if done efficiently

Java won't curl up and die like Cobol, insists Oracle • The Register
For the Java Development Kit (JDK) 10 or after, a fundamental change is being discussed: making the Java language Object Oriented. This might see the introduction of a unified type system that turns everything into objects and means no more primitives.

Seven Wastes

Lean Tools: Seeing Waste | Web Builder Zone
Partially Done Work

Also called WIP (Work In Progress), this waste is identified with all partially developed features or stories. Examples of WIP are features which are:

Fully specified in a big document, but not developed yet.
Checked in, but not tested yet.
Unit tested, but not yet integrated with the rest of the system.
Integrated, but not yet released or deployed where a customer can use it.

These features are in a limbo, and won't provide any value to the customer until they reach him. Meanwhile, we are paying the burden of a bigger codebase, and accepting the risk that in the time that it takes to complete the feature the goals of the user will have changed, making part of our efforts useless. You won't deliver a Black Friday shopping application in December; the utility curve of other features is not usually so sharp, but the more is shifted into the future, the less its present value.

13 March 2012

JDK 8 backported ConcurrentHashMaps in Infinispan | Javalobby

Feature Toggles

If You Aren’t Using Feature Toggles, Start… Now | Javalobby
Feature toggles create a distinction between deploying your feature & making that feature available for use. They also remove the requirement that to disable a feature, or to go back to ‘old behavior’ you have to rollback your deployment to an older version of code. There are lots of other benefits too, as well as some challenges.

Bottom line though, if you aren’t using these you need to really seriously consider whether they would be a benefit. If you control your software release & you operate a multi-tenant system, and you want to increase the amount of control you have around the features you release, you need to be using these.
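
A rough sketch of the idea (the property names and the properties-file source are assumptions, not from the article): the toggle separates deploying the code from releasing the feature.

 import java.util.Properties;

 public class FeatureToggles {
   private final Properties toggles;

   public FeatureToggles(Properties toggles) {
     this.toggles = toggles;
   }

   // A feature is off unless the toggle explicitly says otherwise
   public boolean isEnabled(String feature) {
     return Boolean.parseBoolean(toggles.getProperty(feature, "false"));
   }

   public static void main(String[] args) {
     Properties props = new Properties();
     props.setProperty("new.checkout.flow", "true");   // flipped at deploy/run time, not rebuilt
     FeatureToggles toggles = new FeatureToggles(props);

     if (toggles.isEnabled("new.checkout.flow")) {
       System.out.println("new behaviour");            // feature released to users
     } else {
       System.out.println("old behaviour");            // feature deployed but still dark
     }
   }
 }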

06 January 2012

Guava Release 11's IntMath | Javalobby

Guava Release 11's IntMath | Javalobby
Guava Release 11's IntMath class provides three more "checked" methods that throw an ArithmeticException when the given mathematical operation results in an overflow condition. These methods are for addition, subtraction, and multiplication and are respectively called IntMath.checkedAdd(int,int), IntMath.checkedSubtract(int,int), and IntMath.checkedMultiply(int,int). As discussed in conjunction with IntMath.checkedPow(int,int), the advantage of this occurs in situations where it is better to have an exception and know overflow occurred than to blindly operate on an erroneous value due to an overflow condition.
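
A tiny sketch of the difference between plain int arithmetic and the checked variants (the values are chosen just to show the overflow case):

 import com.google.common.math.IntMath;

 public class CheckedMathExample {
   public static void main(String[] args) {
     // Plain int arithmetic silently wraps around on overflow:
     System.out.println(Integer.MAX_VALUE + 1);       // prints -2147483648
     // The checked variant fails fast instead:
     System.out.println(IntMath.checkedAdd(1, 2));    // prints 3
     IntMath.checkedAdd(Integer.MAX_VALUE, 1);        // throws ArithmeticException
   }
 }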

04 January 2012

Process Related Classic Mistakes | Javalobby

Process Related Classic Mistakes | Javalobby
Today’s blog takes a quick look at the second of Steve’s categories of mistakes: Process Related Mistakes, which include:

* Overly Optimistic Schedules
* Insufficient Risk Management
* Contractor Failure
* Insufficient Planning
* Abandonment of Planning Under Pressure
* Wasted Time During Fuzzy Front End
* Short-changed Upstream Activities
* Inadequate Design
* Shortchanged QA
* Insufficient Management Controls
* Premature or Overly Frequent Convergence
* Omitting Necessary Tasks from Estimates
* Planning to Catch Up Later
* Code Like Hell Programming

03 January 2012

Interpolation Search
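
A small sketch of the idea behind the bookmark: interpolation search probes where the key is expected to lie in a sorted, roughly uniformly distributed array, rather than always probing the middle as binary search does.

 public class InterpolationSearch {
   // Returns the index of key in the sorted array a, or -1 if it is absent.
   static int search(int[] a, int key) {
     int low = 0, high = a.length - 1;
     while (low <= high && key >= a[low] && key <= a[high]) {
       if (a[high] == a[low]) {
         return a[low] == key ? low : -1;   // avoid division by zero below
       }
       // Estimate the position from the key's value relative to the current range
       int pos = low + (int) ((long) (key - a[low]) * (high - low) / (a[high] - a[low]));
       if (a[pos] == key) {
         return pos;
       } else if (a[pos] < key) {
         low = pos + 1;
       } else {
         high = pos - 1;
       }
     }
     return -1;
   }

   public static void main(String[] args) {
     int[] sorted = {10, 20, 30, 40, 50, 60, 70};
     System.out.println(search(sorted, 50));   // 4
     System.out.println(search(sorted, 35));   // -1
   }
 }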

Dilbert on the conversation that never happened


Phaser (Java Platform SE 7)
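
Phaser is java.util.concurrent's reusable, dynamically sized barrier, new in Java 7. A small sketch of workers advancing through phases in lock-step (the phase count and worker task are made up):

 import java.util.concurrent.Phaser;

 public class PhaserExample {
   public static void main(String[] args) {
     final Phaser phaser = new Phaser(1);   // "1" registers the main thread as a party

     for (int i = 0; i < 3; i++) {
       phaser.register();                   // one extra party per worker
       final int worker = i;
       new Thread(new Runnable() {
         public void run() {
           for (int phase = 0; phase < 2; phase++) {
             System.out.println("worker " + worker + " finished phase " + phase);
             phaser.arriveAndAwaitAdvance();   // wait for all parties before the next phase
           }
           phaser.arriveAndDeregister();
         }
       }).start();
     }

     // Main thread takes part in both phases, then bows out
     phaser.arriveAndAwaitAdvance();
     phaser.arriveAndAwaitAdvance();
     phaser.arriveAndDeregister();
   }
 }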

If I had more time I would have written less code | Javalobby

If I had more time I would have written less code | Javalobby
When you cut corners, you pay for it later. Time not spent discussing the requirements in detail leads to misunderstandings; if you don’t spot it later on, you might have to wait until the show and tell for the customer to spot it – now you’ve wasted loads of time. Time not spent thinking about the design can lead to you going down an architectural blind alley that causes loads of rework. Finally time not spent refactoring leaves you with a pile of crap that will be harder to change when you start work on the next story, or even have to make changes for this one. Of course, you couldn’t have seen these problems coming – you did your best, these things crop up in software don’t they?

But what if you hadn’t rushed? What if rather than diving in, you’d spent another 15 minutes discussing the requirements in detail with the customer? You might have realised earlier that you had it all wrong and completely misunderstood what she wanted. What if you spent 30 minutes round a whiteboard with a colleague discussing the design? Maybe he would have pointed out the flaws in your ideas before you started coding. Finally, what if you’d spent a bit more time refactoring? That spaghetti mess you’re going to swear about next week and spend days trying to unravel will still be fresh in your mind and easier to untangle. For the sake of an hour’s work you could save yourself days.