Validating Map values using Grails constraints

In a recent Grails project, we had a requirement to collect arbitrary input data from a form and validate it in the Grails controller.

There are several validation techniques, and initially I was thinking of creating various standard validation closures (like isEmpty, isNull, hasLength etc.) keyed on the map key name and executing them at runtime. But my coworker proposed using the Grails constraints directly instead of standard validation closures. I liked the idea, because a) it allows reuse of existing constraints and validates things the “Grails way”, and b) it is trivial to create custom constraints.

Command object

import grails.validation.Validateable

@Validateable
class DynamicFields {
  // holds arbitrary form input, keyed by field name
  Map responses = [:]
}

Controller

import org.codehaus.groovy.grails.validation.ConstrainedProperty

class DynamicFieldsController {
  def submit() {
    DynamicFields fields = new DynamicFields()
    fields.responses.putAll(params)

    // for illustration, imagine fields.responses has a key called "firstName"
    // whose value has to be validated
    ConstrainedProperty constrainedProperty = new ConstrainedProperty(DynamicFields, "firstName", String)
    constrainedProperty.applyConstraint("blank", false)
    constrainedProperty.validate(fields, params.firstName, fields.getErrors()) // getErrors() is available because of @Validateable
    fields.errors.each { println it }
  }
}

Problem

The (pseudo) execution stack now is validate() -> processValidate() -> AbstractConstraint.rejectValue() -> BeanWrapperImpl.getPropertyValue() -> java.beans.PropertyDescriptor.getMethod(), which throws a NotReadablePropertyException.

Obviously the JavaBeans machinery cannot find the firstName property. I tried several variations on the DynamicFields class: propertyMissing(), overriding getProperty() and setProperty(), metaClass.getProperty(), even AbstractConstraint.metaClass.rejectValue() – none of them was respected. Well, obviously: the Grails magic does all this via GroovyObjectSupport, but the underlying JavaBeans framework knows nothing about it.

Solution

Remember that Errors and BindingResult are interfaces, and the concrete implementation is provided by AbstractBindingResult and its subclasses. By default Spring uses BeanPropertyBindingResult to tie a validation error back to a bean property. Looking through the hierarchy of these classes, I spotted MapBindingResult, which binds the validation result to a target map. Just what I wanted.

So I changed the controller to

import org.springframework.validation.MapBindingResult

DynamicFields fields = new DynamicFields()
fields.responses.putAll(params)
// back the errors object with the responses map instead of bean properties
fields.setErrors(new MapBindingResult(fields.responses, DynamicFields.class.getName()))

This cleanly tied the errors to the individual map keys.
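Putting it all together, the submit action now looks roughly like this (a sketch – the error-handling branch and the view name at the end are illustrative, not part of the original code):

import org.codehaus.groovy.grails.validation.ConstrainedProperty
import org.springframework.validation.MapBindingResult

class DynamicFieldsController {
  def submit() {
    DynamicFields fields = new DynamicFields()
    fields.responses.putAll(params)
    // bind errors to the map keys instead of (non-existent) bean properties
    fields.setErrors(new MapBindingResult(fields.responses, DynamicFields.name))

    ConstrainedProperty constrainedProperty = new ConstrainedProperty(DynamicFields, "firstName", String)
    constrainedProperty.applyConstraint("blank", false)
    constrainedProperty.validate(fields, fields.responses.firstName, fields.errors)

    if (fields.errors.hasErrors()) {
      render(view: "form", model: [fields: fields]) // illustrative view name
      return
    }
    // proceed with the validated data...
  }
}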

Displaying the errors is also trivial:

<g:hasErrors bean="${fields}">
  <g:eachError bean="${fields}"><p>${it}</p></g:eachError>
</g:hasErrors>

A cleaner solution would probably be an AST transformation that ensures setErrors(new MapBindingResult(...)) is always applied, or some better way of injecting a MapBindingResult as the Errors implementation.
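Short of an AST transformation, even a constructor can wire it up – a sketch, with the caveat (my assumption, not verified) that @Validateable's generated errors property is happy being set this early:

import grails.validation.Validateable
import org.springframework.validation.MapBindingResult

@Validateable
class DynamicFields {
  Map responses = [:]

  DynamicFields() {
    // setErrors() is generated by @Validateable; back it with the map up front
    setErrors(new MapBindingResult(responses, DynamicFields.name))
  }
}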

In fact, even without the Map responses object, one can use Groovy’s Expando to store values directly on a dynamic object and validate them, as sketched below. The power of dynamic programming – ensuring valid data is collected even for “non-existent” attributes!
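A minimal sketch of that idea – an Expando in place of the Map, the same ConstrainedProperty technique, and an arbitrary "dynamicFields" object name:

import org.codehaus.groovy.grails.validation.ConstrainedProperty
import org.springframework.validation.MapBindingResult

def fields = new Expando()
fields.firstName = params.firstName // a "non-existent" attribute, created on the fly

// Expando exposes its dynamic properties as a Map - exactly what MapBindingResult needs
def errors = new MapBindingResult(fields.getProperties(), "dynamicFields")

def constrainedProperty = new ConstrainedProperty(Object, "firstName", String)
constrainedProperty.applyConstraint("blank", false)
constrainedProperty.validate(fields, fields.firstName, errors)
errors.allErrors.each { println it }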

Unassumable Abstractions

At the recently concluded annual technology conference of my organization, the keynote speaker Kirby Ferguson presented his talk “Everything is a Remix”. It is also available on TED. Watch it, it’s pretty good. Sometime last year, I posted a blog on a similar theme: that we do not really invent anything. All we do is look into the recycle bin and pull out existing ideas, “innovating” just enough to present them in a different form.

I remember one of my first implementations of a piece of functionality in Visual Basic, back in 1995 – drag and drop a combobox from the toolbar panel, open the properties page and set the database connectivity properties, select a few columns, click on some validation checks and voila, the client retrieved the data from the db! How simple can it be? Apparently not simple enough.

In writing web applications, there are two main styles – component-based frameworks (CBF) and action-based frameworks (ABF/MVC). Both have pros and cons depending on needs and perspectives, and the balance of victory in the war between the two seems to shift from time to time. Microsoft started with a CBF ASP.NET solution and eventually dumped it to move towards MVC. Java started predominantly with ABFs and later gravitated towards a few CBFs (JSF, Tapestry, Wicket etc.).

I was recently working on a custom application that required building a component-based solution on top of an ABF. Before even designing the solution, I gave a lot of thought to the philosophy of these styles. CBFs completely abstract the underlying mechanisms (models, views and data access) to provide a uniform API for creating solutions, while ABFs abstract each layer separately and let you wire them up as required. Both obviously have their utility, but one problem I see with building component solutions is that if the underlying protocols change, almost the whole framework has to change (e.g. Tapestry 4 -> 5, and Tapestry 5.4 having to replace its existing JavaScript layer).

The reality is that the underlying mechanisms have been evolving, and evolving fast. For example, if a component solution had abstracted RDBMS access, JavaScript and web requests, the former two have changed significantly over the last two years. NoSQL databases have already disrupted the RDBMS space, and JavaScript frameworks like AngularJS are exploiting HTML directly, quite ingeniously. A CBF abstracting with an assumption of only HTML5 and CSS3 is already behind the curve. There is another way, though – create a very high level of abstraction where all these layers can be switched at will. But even if there were such an abstraction, it would need intermediate abstractions to bridge it to low-level code. Not too maintainable in the long run. This is one of the reasons that ASP.NET, which is tightly coupled to all layers, just could not scale. Other component solutions will meet a roadblock somewhere down the road, depending on the level of abstraction they provide. The reason is the same – the assumption of stable assets no longer holds.

In the ABFs, each layer is abstracted on its own – abstraction of views by GSP, jQuery, Angular, Express, Ember etc.; abstraction of the server side by Grails, Rails, Spring MVC, ASP.NET MVC etc.; and abstraction of data access by GORM, JPA, Hibernate, Spring Batch, Spring Data etc. If any of these frameworks goes out of fashion, it is only necessary to refactor that abstraction and its relations, instead of rewriting the whole solution.

The challenge with web technologies now is that newer, simpler ways of achieving the same results are emerging too fast. Granted, the protocol (HTTP) itself is not changing, at least for now (WebSockets, anyone?), but what changes is how data is mined, how it is groomed and how it is presented. Let us take a short look at the three layers – mining, grooming and presenting.

Data-mining: The advantage with document dbs is that you get to store your model as you want it to be, not as the datastore dictates. If you evaporate all the data objects, the residue basically comes down to two structures: Map and List. Neither is “natively” supported in an RDBMS; you have to decompose them into ids and, more importantly, relations.
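A small sketch of that decomposition, with illustrative names – the same order is one nested structure as a document, but three tables (and two relations) in an RDBMS:

// the "order" as a document: nested Maps and Lists, storable as-is in a document db
def order = [
  id      : 1,
  customer: [name: "Alfreds", city: "Berlin"],   // a Map
  items   : [                                    // a List
    [product: "Chai", qty: 2],
    [product: "Chang", qty: 1]
  ]
]
// relationally, the same data decomposes into ids and relations:
//   CUSTOMERS(id, name, city)
//   ORDERS(id, customer_id)               -- customer_id is the relation
//   ORDER_ITEMS(order_id, product, qty)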

Data-grooming: Much of data-grooming happens around XML and JSON. Unfortunately for XML, from being a mere human-readable representation in strings, it took a very impractical turn – complicated schemas, an explosion of verbosity, confusing binding libraries, generic representation of data types, configuration mixed with code logic. It falls by its own weight of abstractions. In a traditional web application, think about the data conversions that you do – they take about 60% of the effort – converting from request input to command objects, to domain or service layer objects, to stored procedure models, to actual data! And all the way back to the presentation layer! (N+1)-tier, FWIW!
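To make that chain concrete, here is a sketch of the hops in a traditional layered app – every name below (RegistrationCommand, Person, insert_person) is hypothetical, and a groovy.sql.Sql instance named sql is assumed:

def cmd    = new RegistrationCommand(params)           // request input -> command object
def person = new Person(name: cmd.name, age: cmd.age)  // command -> domain object
def row    = [person.name, person.age]                 // domain -> stored-procedure parameters
sql.call("{call insert_person(?, ?)}", row)            // parameters -> actual data
// ...and every hop has to be reversed on the way back to the presentation layer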

Data-presenting: Of the three, this layer has proven to change the most wildly of late. With JavaScript back as a first-class language, several frameworks have challenged the “traditional” notions. CBF solutions, which try to neatly wrap the intricate mechanisms of MVC, work only if what they wrap is stable. If what they wrap starts to evolve frenetically, or completely stops evolving, they just won’t scale any more. Instead of providing real new features, CBF solutions end up reworking newer abstractions.

I was recently working on a project with Grails + MongoDB + AngularJS, where the conversion layer is reduced to taking the request from HTML and saving it as JSON, and reading it back as JSON to pass to JavaScript. From thin client to thin server, moved we have, O Dooku!
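In that setup, the controller shrinks to little more than JSON-in/JSON-out. A sketch of what I mean, with a hypothetical Order domain class:

import grails.converters.JSON

class OrderController {
  def save() {
    // bind the incoming JSON body straight onto the domain object
    def order = new Order(request.JSON)
    order.save()
    // ...and hand the saved document straight back to the client
    render order as JSON
  }
}

And while I was looking for something even simpler than this, I came across the Breeze.js framework and found this code: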

breeze.EntityQuery.from("Orders")
  .where("Customer.CompanyName", "startsWith", "Alfreds")
  .expand("Customer, OrderDetails.Product")
  .using(manager)
  .execute().then(querySucceeded).fail(queryFailed);

So, querying the database domain models directly from the presentation layer? Heard of it somewhere? Yeah, I heard that first in Visual Basic. Everything is a remix indeed!