The Art of Getting Interviewed

I was recently involved in interviewing many candidates – a mix of freshers, juniors, and seniors – for some positions at Virtual Instruments. Based on my limited view, I see some worrying trends at the fresher level. This blog post is the result of those observations, and I hope it helps freshers (and candidates at other levels) prepare for the industry.

Most of the freshers I interviewed had a Masters in CS/Electronics – some already had a year of experience, mostly as interns. It left me wondering: what do they really teach at the universities? College education in the USA is not cheap; it's a loan that haunts you for most of your life. Yet are the freshers who put in all that money and effort ready for the real world? It appears that university CS courses still teach a lot of "outdated" material that hardly matters in Silicon Valley jobs. Are the professors aware of the changes and challenges in the industry? Has the industry attempted to influence universities to prepare students for future jobs? I honestly don't know. Most candidates are familiar with a few algorithms and a few technologies, but that alone is not enough to get hired.

Some of the common issues I see are:

lack of clarity: Agreed, as a fresher you just want to get a break, but that should not come across as "I will work on anything you give me". The software industry is extremely varied – engineers eventually specialize in some area based on their passion. Web design, web development, server-side, networks, cloud, etc. are too varied to master them all. Ask yourself what technology motivates you most and bring out that passion. Interviewers do not expect you to know everything, or to know what they already know; they look for how quickly you can adapt, learn, and work positively in a team.

too much technology on the resume: There are resumes that span from TCP/IP socket programming to AWS certification, with myriad acronyms in between. It takes a considerable amount of time to master any technology these days. Differentiate between what you know and what you have merely heard of. And it's okay not to know – interviewers will eventually figure that out anyway.

bad spelling and grammar: Believe it or not, this is common. In my opinion, it just shows carelessness (not bothering to get it reviewed). Be thorough and get your resume reviewed by your friends. Inconsistent capitalization, acronyms, etc. are an eyesore on a resume.

inconsistent style: A resume should be easy on the eyes and highlight what matters most. Too many bolds, italics, and underlines are jarring. Be consistent in content styling too. For example, if you use italics for the organizations you worked for, use italics everywhere. Many people assume styling is not as important as content. In reality, they operate at different levels: styling works on our psychology at a subconscious level.

resume too long: For some juniors/seniors, I've seen resumes that run 6-10 pages. Even if an interviewer manages to go through it all, he/she won't remember your history. For all practical purposes, your last job is the most important thing, and the one before it a useful footnote.

lack of presentation skills: This is one of the biggest issues I see. Too much "I did this" (taking credit for everything) or "We did this" (implying you did not contribute much) paints the wrong picture. Highlight how you worked in a team, with the right mix of I's and We's. All I had to ask was one question – "Can you describe yourself briefly?" – and many candidates launched into a lecture on what they did technically in some project. Some don't even pause to ask if I understood what they said (especially on phone screens). Come up with innovative ways of rephrasing banal answers like "I want to solve customers' problems" or "I like to learn new technology". An interviewer typically goes through tens of candidates; think about how you can stand out.

make your presence felt: Contribute to open source at least a bit, even if it's just documentation. A presence in open source communities scores higher than just knowing how to code.

be humble but convincing: You have to be positive and attempt to convince the interviewer that you are up for the challenge, without being arrogant. For example, changing jobs frequently is usually a red flag, but you can turn it into a positive. It may not be your fault that a company got sold or a project was nixed due to external factors. Highlight the impact you made on teams or groups, or how you smoothed the transition, without being acerbic about it. Self-confidence without arrogance gets noticed.

There is a beautiful anecdote about kAlidAsa. For those who are unfamiliar, kAlidAsa was one of the greatest Sanskrit poets. In the same court there was another great poet, bhavabhUti (there were nine such poets). Once pArvatI (shivA's wife) had a doubt about who was the greater of the two, and shivA promptly told her to test it out. So she descended to the king's court as an ordinary woman with a (pretend) dead son and told the king, "My son is dead. A sage predicted that a poet in the king's court can revive him by solving a samasyA." (A samasyA is a type of word-puzzle in Sanskrit, where only half of a verse is given – usually nonsense – and the other half must be completed to make it meaningful.) bhavabhUti came, and the king asked him to complete the samasyA. bhavabhUti tried, but the son was not revived. Then bhavabhUti said, "I am sorry, this is the best I could do. Perhaps ask kAlidAsa." After some time, kAlidAsa entered the court and was presented with the puzzle. kAlidAsa completed it with exactly the same words that bhavabhUti had used. The son was not revived either. Then kAlidAsa said to the woman, "Your son is not dead, because this is the only way the samasyA could be solved." That's the difference between the two great poets. Same content, but kAlidAsa had that self-confidence.

Finally, if you do not get selected, follow up with the interviewer about the areas you could improve to do better next time. Most interviewers will oblige and point out positives and negatives.

The Manager Guru

In the IT industry (or any, for that matter), it's typical that an engineer, after gaining a bit of experience (say five years), is expected to take on a bigger role. Usually it starts with becoming a Team Lead, Module Lead, Architect, etc., depending on the needs of the team or the org. Notwithstanding the title, it usually signifies that the responsibility is no longer an individual's productivity, but ensuring that a group of people can sustain the same quality of productivity.

It is a somewhat gradual slope to slip into, and all fine for a while. But a few years into the lead role, the industry expects something drastically different: promote the individual engineer to a People Manager. There is a general tendency to question the capabilities of an individual with a wealth of experience who has not ended up in management. Why aren't you a Manager, Director, or VP yet?

It's a cliché that employees leave managers, not jobs. A lot of best sellers have been written around management, lectures delivered, courses created, and money made. But the fundamental problem of why employees don't get along with their managers remains, and will remain. Why? Because, humans. Understanding human psychology is a fundamental part of people management. Mere technical proficiency is not a ticket to that skill; a manager needs to be able to debug people.

The greatest fear of any manager is that, on any given day, his/her best engineer will leave. So the biggest tap dance any manager has to do is to balance the happiness-quotient against the egoistic-importance of each employee. Give an employee too little importance and he/she will leave; give one too much importance and the others will leave. While every resource is (or must be) replaceable, a manager cannot simply go around saying that to his/her employees. At the same time, the manager should not give too much importance to anyone at the cost of displeasing others. True, in any company everyone's productivity is different and should be rewarded in proportion, but explicit favoritism isn't a good management trait.

One of the first things I did after becoming a manager was to stop calling anyone a "subordinate" and refer to them as "reports", for lack of a better term. But now I think even that is wrong. So I reflected a lot on: what should a manager be?

The biggest asset of any company (project or team) is not the people, but the institutional knowledge. If the higher-ups tell you that the people are most important, it's a lie; you are missing the flashing marquee of an old-style web page that says "you are replaceable". So when a person leaves, he/she does not just take the knowledge – they take the crucial mental map of that knowledge and leave a gap. That map includes the nodes, links, qualities, relations, relevances and, most importantly, the irrelevances. That's almost impossible for someone else to reconstruct to the same degree. How do we protect that institutional knowledge? Documents? LOL. Knowing what won't work is much more expensive than knowing what works.

For a different perspective, let's look at how some of the world's greatest institutional knowledge was retained. In India, the guru-parampara (loosely translated as teacher-student lineage) is the basis of its traditional knowledge system. Ancient knowledge has been transferred over hundreds of generations, pretty much intact. How was this possible? The Guru was once a student who gained technical proficiency, at some point graduated to a senior, and eventually established his own lineage, continuing to transfer that knowledge by becoming a guru himself and taking care of his students. Now, why was this necessary? Because knowledge gained is considered a debt, not a business. The only way of getting rid of the debt of learnt knowledge is by teaching it to others. In this way the guru is not a manager but a servant to his students. The guru teaches his students, commands respect, course-corrects the errant, serves their future – and all this while getting rid of his debt. And that makes the difference.

Most companies have an org chart that goes from top to bottom – CxO, xVP, xMgr, etc. – and trickles down to the other staff employees. This gives a sense of people "managing" other people top-down. In the guru-parampara system, the students would be at the top, and those they learn from (are being served by) would be at the bottom. (Note that this is not to be confused with the top-down or bottom-up management styles of functioning; it is just a perspective on how one employee is seen in relation to another.) Calling an employee a subordinate or a report presupposes "superiority", while calling a manager one who serves his employees is a mark of humility, respect and, above all, humanness – which is the original responsibility of "people" managers. It gives such a role the opportunity to understand human psychology and to enable others as a way of repaying debt, rather than a sense of scaling up the hierarchy.

The roles and responsibilities are still the same, but the frame of reference is now quite different. The top becomes the "root", yielding a tree chart where each layer exists for the fruition of the layer above. Who knows – the tree chart could even let us optimize the organization using tree-based algorithms! A binary tree org, anyone?

The Mutable State-of-Art

Every so often, social media lights up with opinions about the JavaScript language, its frameworks, and its ecosystem being terrible. Some time ago there was one around the React framework, and the question was whether putting all HTML/CSS/JavaScript in one file is really a good thing, given that it goes against the age-old practice of separating these technologies.

Whenever people question the new ways of doing things, I am reminded of one of the greatest quotes by the Sanskrit poet Kalidasa (4th century CE).

“Just because it's old, something cannot be assumed right. Just because it's new, it cannot be brushed away. The wise analyze and adapt; the fools are swayed by others' opinions.”

Many retweets, faves, dislikes, epics, savages later, the question still remains: What is the right language or framework?

The separation of technologies back then – say 15 years ago – was not intentional; it was a necessity of the times. CSS, HTML, and JS were advancing independently, browser performance was poor, and finding an engineer who could do all three was almost impossible. We still have specialist designers and developers, but in general they now have a good idea about the other technologies. I still remember, back in my IBM days, pointing out JSP structures to our designer to show where to make style changes. But today these frameworks and their idiosyncrasies are not strange to designers.

Here are my thoughts on why it is pointless to criticize the JavaScript world for its organic growth, and equally pointless to criticize the criticism.

Whether “language purists” like it or not, JavaScript is ubiquitous. Yes, that's very hard to digest. Despite its “severe shortcomings”, it has conquered the net. Why? As IT engineers, we keep holding ourselves to an arbitrary sense of “perfection” and judging everything against it. Typed languages, OO patterns, functional paradigms, modularity, testability – we seem to know what should be “right” and we are inching towards it, but we just aren't there yet, we say. But we will know when we reach it, we say. Really? Every language invented since JavaScript seems to be solving the same problems of types, patterns, modularity, dependency, etc., only in different syntax combinations.

Here is a fundamental question: have we got anything right in the programming industry at the first stab? Or is 50 years too short a time to judge? The real limit is physiological: the human brain can only process a limited amount of information, so it has to break the complex into simple pieces. Eventually the simple pieces evolve to become complex enough (or perhaps our brains shrink, so we perceive simple things as complex), and they have to be broken again. But the broken simple pieces have enough in common with previously broken pieces that they have to be re-integrated. Thus we swing on an eternal pendulum between centralization and distribution.

Here is an illustration with the evolution of Java, but you can substitute any starter technology:

  1. Java and the JVM were originally built for small devices.
  2. But once Java entered the application-server space, EJBs were added to tackle distributed environments.
  3. EJBs became so complex that they were abandoned and broken into smaller pieces.
  4. The smaller pieces were dependency-injected to form bigger solutions, but eventually regrouped into microservices.
  5. Microservices combined with distributed systems gave rise to container architectures, like Docker.
  6. Container architectures are becoming complex and are yielding to orchestration solutions like Swarm and Kubernetes.
  7. Predictably, by induction, orchestration solutions will become complex and will yield to Local Clouds.
  8. Predictably, commodity local clouds will be broken into Fogs, Cirrus, Altostratus, Cumulus… and delivered by drones.
  9. Predictably, drone functionality will super-evolve, so drones will eventually need a new programming language specifically written for small devices… well…

Much like a fractal, as we dissolve complex problems into simple solutions, they take on a new shape of complexity due to external influences (mobile and bandwidth, for example). The point is: this cycle will never end.

Working as a Web + Network application engineer (on the SDN controller OpenDaylight), I see two ends of a spectrum (philosophically speaking, the ends and the means, respectively). Both UI and Network are always evolving in interesting ways, yet each can be distilled into a fundamental problem statement:

Network: Deliver content (a packet) from one place to another place, efficiently.
UI: Deliver content (html) from one place to another place, for a user friendly read.

(Of course I am intentionally oversimplifying, but as an end user, when your bandwidth drops below 1 Mbps, you call up customer service to negotiate a discount, not to enquire about the complexity of the network protocols.)

But just to solve the packet-delivery problem, think of the myriad protocols and schemes created: TCP/IP, VLAN, MPLS, BGP, VXLAN, OpenFlow, NETCONF, and many combinations of these. Similarly, in the UI world, all we have to do is deliver readable content, yet there are so many aspects and choose-your-noun libraries – CSS, SASS, WebPack, D3, Material, visualization, accessibility, modularity, layouts, and the most subjective of all: colors!

So why do front-end developers keep seeking newer frameworks? There are many reasons, and not all are technical. The fundamental one is boredom. The human mind always seeks something new – not because it's useful, but because it has to be different. And perhaps have a personality.

For any framework, in general, to become popular and sustainable, it should satisfy ALL of these conditions:

  1. A nice, familiar-sounding, easy-to-pronounce name
  2. Not driven by a single person
  3. Code that is easy to change and test
  4. Good documentation
  5. Not too much magic
  6. Easy to teach to others

The golden rule of framework success is this: It takes a few smart developers to create an idea. It takes a few hundred average developers to sustain it, because the smart developers have moved on to a new idea.

ReactJS Gotchas #001

Tags: Mocha, Chai, Sinon, React 0.13.3

Problem Context: Run a unit test using Mocha against a Form.

Error: Invariant Violation: addComponentAsRefTo(…): Only a ReactOwner can have refs. This usually means that you’re trying to add a ref to a component that doesn’t have an owner (that is, was not created inside of another component’s `render` method). Try rendering this component inside of a new top-level component which will hold the ref.

Example:

beforeEach(function () {
  rendered = TestUtils.renderIntoDocument(
    <MyComponent ref="myComponentRef" param="10" />
  );
});

Solution

Don't use the ref attribute on the component when running a unit test (i.e., remove ref="myComponentRef"). Alternatively, as the error message itself suggests, render the component inside a throwaway top-level component and put the ref there.

React Impressions

Let me start with the mandatory statement that many first-use-React blogs start with: "I have played a bit with React and it's great. And…"

Having recently completed a decent-sized project in ReactJS, I have to say: once in a while there comes a library or framework that truly makes you go back to the basics and think. Despite much gained knowledge, it is fascinating to see the human brain collectively miss the fundamentals as "progress" happens. Much like an experienced cricketer who suddenly loses 'form' and is advised to go back to basics – it is not that he forgot how to hold his bat, but that the intuitive coordination, that 'elegant touch', has disappeared. Perhaps we humans are not wired to over-learn! From simple to complicated to simple, we keep oscillating, just as we keep regurgitating centralized and distributed system concepts.

It also emphasizes two underestimated facts in software engineering:

1. Creating good software is hard
2. Keeping it simple is even harder

I still remember Rod Johnson's impactful book 'Java without EJB'. (I was advocating the same in one of my early projects – removing all EJBs because their complexity did not make sense, and the book gave me a lot of ammunition.) The Spring framework made a great impact on many web developers then, bringing IoC principles into a Java container. (Spring wasn't the first, though; PicoContainer was.)

Facebook's ReactJS is one such framework that goes back to basics. Apart from the usually quoted strong features – the Virtual DOM, a simple lifecycle model, and one-way data flow – what is it that really made React tick? After all, component-based frameworks are not new: AWT/Swing, Tapestry, Vaadin, GWT, JSF/Seam, Wicket (all Java-based) and several JavaScript-based ones were successful. But they all fell into the trap of hiding the complexity of JavaScript (and to some extent HTML/CSS) from the developer, and that turned out to be a costly decision. Every time JavaScript and HTML standards evolve, or browsers support something new, those frameworks have to undergo quite a bit of refactoring. Remember, both designers and front-end developers ultimately like to nano-control the HTML output.

Among the existing component frameworks, the gap between the 'framework' and JavaScript/HTML is shortest in React. Compared to the many frameworks that advocate a separate .html, .js, and .css per component, React neatly rolls them all into one testable unit, surrounded by a simple lifecycle. And with the virtual-DOM concept, you can apply the same ideas to mobile or even desktop apps.

Some complain that JSX is ugly. I first thought the same (I am not a big fan of XML-based structures), but it took me all of 10 minutes to get over it. In fact, I think it's one of the smarter things to have come out of React: making JavaScript expressions look like HTML. It is quicker to learn JSX than yet another templating language, and passing properties and reading state just feels syntactically familiar.
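Part of why JSX feels familiar rather than magical is that it is only sugar: every tag compiles down to a plain function call. A minimal sketch of that idea (the `createElement` stub and `Greeting` function below are stand-ins for illustration, not React's actual implementation):

```javascript
// JSX such as <Greeting name="Ada"/> compiles down to a call like
// React.createElement(Greeting, { name: "Ada" }).
// This stub mimics just enough of createElement for the sketch to run.
function createElement(type, props) {
  var children = Array.prototype.slice.call(arguments, 2);
  return { type: type, props: props || {}, children: children };
}

// A hypothetical component, written as a plain function for illustration.
function Greeting(props) {
  return 'Hello, ' + props.name;
}

// What the compiler emits for <Greeting name="Ada"/>:
var el = createElement(Greeting, { name: 'Ada' });
console.log(el.type(el.props)); // prints "Hello, Ada"
```

Once you see the element as a plain object, the "HTML" in your JavaScript stops feeling like string templating and starts feeling like ordinary code.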

Another common complaint about JSX is that it makes if-conditions and for-each loops syntactically harder, compared to, say, AngularJS's ng-if/ng-repeat or Handlebars-style syntaxes.

AngularJS:
<span ng-if="showElement">I am here!</span>

JSX:

return (
  showElement ? <span>I am here!</span> : undefined
);

While the JSX version does not look concise, the 'if' and 'for/map' are more natural and readable in programmatic syntax than in declarative syntax. Not only that, the conditional can be elevated to a function to make the component reusable. That in turn provides an incentive to make the function stateless – the component slowly turns into a dumb component. AngularJS, though concise, feels more XML-y than React, and you cannot reuse the <span> element as-is in another place or context (unless you elevate it to a directive). The JSX downside appears to be that you keep creating component wrappers, but isn't that exactly the problem we are solving – making composability easier?
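The "elevate it to a function" step can be sketched in plain JavaScript. Here `h` stands in for React.createElement so the sketch runs on its own, and `presenceBadge` is a hypothetical name:

```javascript
// Stand-in for React.createElement; NOT a React API.
function h(tag, props, child) {
  return { tag: tag, props: props || {}, children: [child] };
}

// The inline ternary from render(), elevated to a stateless helper.
// It depends only on its argument, so it is trivially reusable and
// testable -- the beginnings of a "dumb" component.
function presenceBadge(showElement) {
  return showElement ? h('span', null, 'I am here!') : undefined;
}

// Inside a render() you would now write something like:
//   { presenceBadge(this.props.show) }
console.log(presenceBadge(true));  // a plain object describing the span
console.log(presenceBadge(false)); // undefined
```

Because the helper has no state of its own, testing it needs no DOM at all – you just call it with both flag values and inspect the result.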

The biggest advantage of such componentization is that it makes unit testing much more delightful. In our project we use Mocha+Chai+Sinon, which work together pretty well. In React, you start to think in terms of composability rather than just a page. Many a time we wrote a complex component, then quickly dissolved/abstracted it into dumb components by breaking out its props and state.

That's another thing about React – the learning curve and refactoring are comparatively less steep than with others, with only a few concepts to learn.

I am also often asked: how long will this framework last? Will Polymer and Web Components take over the browsers? Unfortunately, the DeLorean only went up to 2015, and the Almanac probably did not mention React.

His Last Framework

It's been a while since I visited my psychiatrist and now chiropractor, Dr. Pear – Perspective Readjustment Specialist. Last time, he helped me cope with an aggressive experience with pair programming. The front desk handed me a big stack of forms to fill in my software history.

Years of experiencing web frameworks: About 18.
First framework of pain: JSP Model 2, perhaps Applet, can’t remember.
Latest framework of pain: AngularJS
History of major pain points: EJB, WebSphere, WebLogic, Spring, Fusion Middleware, Wicket, ASP.Net Web/MVC, SharePoint, Grails, SpringBoot, Backbone, Ember, AngularJS and innumerable minor ones.
What type of pain is this? (Select one: Sharp, Shooting, Numb): All of them
Approximate location of pain: Initially Back-end. Then moved to Front-end.
Exact location of pain: Depends on Mood.
Frequency of occurrence of major pain: About every 18 months.

I physically submitted the forms and started checking my tweet feed. New tweets from DZone on the Top 20 JavaScript frameworks; AngularJS Jobs; Best 35 CSS Frameworks; Top 15 build tools for JavaScript; AngularJS vs… no wonder I needed therapy.

The nurse called me in and after the initial handshake of pulses and vital stats, she said 200. I was idly looking at the magazines – PlayWeb, Maximum Frameworks… Dr. Pear entered after a knock.

Dr. Pear (browsing through my history rather than looking at me): Oh, hello! It's been a while since your last visit.
Me: Yes, Doctor.
Dr. Pear: And what brings you here?
Me: Well, I've been spinning my head over the next web application framework and I haven't quite been able to decide…
Dr. Pear: Well, the short answer is "As a consultant you should learn all of them!", ha ha ha… (after a pause) Yeah, yeah, I know. It's the viral season of frameworks. The Javascriptarix flu shot we gave last month was really a placebo. So what kind of framework are you really looking for?
Me: (a bit hesitantly) Not sure. I've been drinking the JavaScript MVC Kool-Aid for the last couple of years, but it looks like the pain isn't gone…
Dr. Pear looked at me a bit condescendingly and said “MVC eh? in Javascript? Didn’t you get high enough on it in the Java/.Net world?”.
Me: Well, the Java web frameworks are kinda relegated to providing data services these days…

I had not finished, but it must have touched a raw nerve in him, and he went on a mini-lecture of sorts.

“You see, web frameworks come in two general flavors – MVC and Component. Java flirted unsuccessfully with the Applet component model initially, then took a stab at MVC via JSP Model 2, while ASP.Net took the component route after its success with Visual Basic. Interestingly, Java increasingly adopted the component framework model, but Microsoft abandoned ASP.Net Web Forms in favor of ASP.Net MVC. Java/JVM has probably tasted all kinds of frameworks at many levels – MVC (Struts, Stripes, Spring, Grails, Play) and component-oriented (Seam, Tapestry, JSF & Co, Wicket, GWT). The fundamental problem with MVC is that it sprays your logic and model everywhere. Many developers struggle to organize it between “what-they-are” vs “what-they-do”. As the complexity increases, the code gets reorganized by features, each of which in turn has its own m, v, and c. Add on top of that, the domain models and view-models are usually different, and the whole thing about models starts looking like a beauty pageant.”

He paused a second, probably aiming for me to get the pun.

“Component frameworks couple model and view tightly, because a view makes no sense without a model. But their biggest problem came from somewhere else: JavaScript. Many pure Java web frameworks are relatively stable because of a stable, backward-compatible Servlet API. But component frameworks relied too much on encapsulating JavaScript and HTML, and in the JS world, Friday's release is outdated by the next day's matinee show. Ajax, jQuery and related libraries innovated way too fast, and library encapsulation really made no sense. When the JS libraries went unsupported, they dragged the Java frameworks down with them. With Node.js, NPM modules and unit-testing frameworks offering a vastly superior build ecosystem, life became miserable for frontend developers bogged down by the verbosity and slowness of JVM build tools.”

Me: Well, that's exactly what I am here for. Which JS framework would you really recommend?
Dr. Pear: Of course Backbone. I love backbones, especially if they aren’t straight. Get it? Ha ha !

He realized I didn’t look at him very admiringly. He continued.

Dr. Pear: Ok. Well, I hate to say "it depends", because you didn't pay me to hear that. So let me tell you something else. With JS frameworks like Backbone, Knockout and Ember becoming first-class citizens, the server side slowly mellowed down to doing CRUDGE-work (Create, Read, Update, Delete, Gather and throw Exceptions), now gloriously termed Microservices Architecture. With AngularJS offering a complete MVC stack, it doesn't make sense to write views on the server side anymore.

Me: So should I still stick with AngularJS MVW? Besides its barrage of vocabulary redefinitions – service, factory, provider, scope, directive – the next version, 2.0, promises to be completely different and backward-incompatible…
Dr. Pear: Yeah, I heard that. It appears that you really need to work on your persuasion skills for convincing your team to rewrite in a new framework every year.
Me: (I looked at him awkwardly, wondering what to say)
Dr. Pear: (Ignoring my awkward pause) Well, the web has to evolve. Why do you think Javascript is going bonkers with frameworks – they have to compete with rapidly advancing browser features, handheld and wearable devices. Who knows, Javascript could be running inside your brain tomorrow. Ok, so you are now tired of MVC in Javascript world also. What about Facebook ReactJS?
Me: ReactJS. Yes I’ve heard of it. Isn’t that a component oriented JS framework?
Dr. Pear: It sure is. You write HTML directly within JavaScript (or CoffeeScript or TypeScript) in an XML-like syntax, and it compiles to JavaScript.
Me: I'm not sure I like the sound of that. That's not even a best practice.
Dr. Pear: Well, it's time to refactor the best practices too. IMO, ReactJS may indeed work out well because, unlike the Java component frameworks, it stays closer to JavaScript, its Virtual DOM provides fantastic rendering speeds, and it helps write isomorphic apps. If you get used to JSX, you will love it.
Me: What about Services?
Dr. Pear: Use one of the many flavors of Flux – Reflux, Fluxxor, Alt – the action-oriented, one-way-data-flow stores.
Me: Hmm, that eerily reminds me of Struts. So there really is no best framework?
Dr. Pear: There is one – come up with your own. You will surely get a few fans.
Me: What if I get bored and jump ship to another framework? What about the developers who depend on my framework?

“Send them to me!” Dr. Pear said calmly, washing his hands for no reason. “If anyone thinks JS frameworks have more than 18 months of fame, they need a Perspective Readjustment”.

As I was leaving, he gave a parting shot, “Just remember one thing, if it makes you feel any better: no matter which framework, you are solving only a single problem: Delivering html to a browser. And by the way, if you haven’t selected your styling framework, I am open on Saturday afternoon”.

“Er… thanks, I will think about it”, I said as I flipped my phone open to follow the @ReactJS feed on Twitter.

As I was walking out I could see him frantically typing on the screen “Successfully readjusted for component framework. Will likely come here in the next 12 months looking for Web Component and Polymer adjustment.”

Note: Title is a tribute to AC Doyle’s His Last Bow.

Defining React Routers via API

One of the issues I faced while using React for my project was: which router to use? React being the new framework on the block, there are plenty of plugins/components being actively developed, and not many are (yet) stable. Some plugins are just different enough that you want one for a particular feature, but it won't support other features you require. You can see how bullet-point engineering does not help much; you've got to try out a few and see what works best.

For the router, react-router is the most popular, most fully featured, and best documented, but it (v0.12.0) did not have one feature I was looking for: the ability to define routes closest to their handlers.

All the routes go into a single file, usually your app.js, like this:

var routes = (
  <Route name="app" path="/" handler={App}>
    <Route name="login" path="/login" handler={Login}/>
    <Route name="logout" path="/logout" handler={Logout}/>
    <Route name="page1" path="/page1" handler={Page1}/>
    <Route name="page2" path="/page2" handler={Page2}/>
    <DefaultRoute handler={Login}/>
  </Route>
);

Router.run(routes, function(Handler) {
  React.render(<Handler/>, document.body);
});

As the application grows, this is less than ideal and can bloat the app.js. And it somewhat breaks modularity too (even though app.js will eventually know everything, it forces you to wire the routes statically). react-mini-router defines the routes as part of the component via a mixin, and I think that's pretty convenient.

With react-router v0.12.2+, the routes can be configured via the API. This is not officially documented yet, as far as I know.

approute.js

module.exports = Router.createRoute({ name: "app", path: "/", handler: App});

auth.js

var AppRoute = require('./approute.js');
Router.createRoute({ name: "login", path: "/login", parentRoute: AppRoute, handler: Login});
Router.createRoute({ name: "logout", path: "/logout", parentRoute: AppRoute, handler: Logout});

page1.js

var AppRoute = require('./approute.js');
Router.createRoute({ name: "page1", path: "/page1", parentRoute: AppRoute, handler: Page1});


app.js

var AppRoute = require('./approute.js');
Router.createDefaultRoute({parentRoute: AppRoute, handler: Login});

Router.run([ AppRoute ], function (Handler) {
  React.render(<Handler/>, document.body);
});

Notice how the variable routes is replaced with [ AppRoute ]. This is important!

In auth.js, if you are defining multiple routes and don't want to use the parentRoute property, you can also do this:

AppRoute.addRoutes([ LoginRoute, LogoutRoute ]);
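Stripped of the react-router specifics, the idea is just a shared route tree that each module appends its own routes to, instead of one central route table. Here is a hypothetical plain-JavaScript sketch of that pattern – createRoute and addRoute are made-up names for illustration, not react-router APIs:

```javascript
// Build a minimal route tree: each node has a name, a path and children.
function createRoute(def) {
  return { name: def.name, path: def.path, children: [] };
}

// Modules call this to register their own routes against a shared parent.
function addRoute(parent, def) {
  var route = createRoute(def);
  parent.children.push(route);
  return route;
}

// approute.js would export the root:
var appRoute = createRoute({ name: 'app', path: '/' });

// auth.js and page1.js would each register their own routes:
addRoute(appRoute, { name: 'login', path: '/login' });
addRoute(appRoute, { name: 'page1', path: '/page1' });
```

The router then only needs the root of the tree – which is exactly what passing [ AppRoute ] to Router.run() achieves.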

AngularJS – Startup time environment configuration

Starting to learn a web framework – Spring, AngularJS, Grails, Rails etc., it doesn't matter which platform – is easy. But once you go past elementary-school level status, a question always looms: how do you accomplish tasks the framework way versus the non-framework way?

A seemingly simple task like configuring environment properties at runtime (startup) took me through the internals of AngularJS. For server-side folks, the technique is pretty standard: use any of .properties, .xml, .yaml and then apply a built-in framework feature. For example, in Spring you can use @ConfigurationProperties; in Grails, use Config.groovy and the framework will automatically place it in grailsApplication, similar to what Rails does. But how do you do it for a purely client-side application (i.e. a 'static' site that relies only on REST calls to some other server)?

AngularJS provides the Module.constant() method, where you can just specify a JavaScript object and DI it wherever you want. Neat. But often the config values are environment-specific. One very nice solution is to use grunt-replace and simply provide the environment-specific json at build time, letting grunt replace the config values for that build. One may argue that this is the correct way to do it, because it provides consistency and traceability in CI.

But in some cases the values may not be known before the build and are known only at application startup. So the question is: what is the AngularJS way of loading the configuration file at startup? It would have been nice to have AngularJS look for a config.json properties file at the root and automatically load and inject it into a variable like $env.

Obviously, AngularJS offers so much stuff – constant, value, service, factory, provider. I thought I needed to stick $http somewhere, load the config.json file and set it to an AngularJS constant or value, then inject the constant into any controller or service.

1. Inject $http into Module.constant()

Will not work: constant() accepts only an object, not services. At this point it is useful to get familiar with the config and run phases of AngularJS. This Stack Overflow question explains it succinctly.

2. Inject $http into Module.config()

Will not work. You cannot inject an AngularJS service into config(). You can inject only a provider into config().

var envconfig = angular.module('envconfig', []);

//This will NOT work: $http cannot be injected into a config block
envconfig.config(['$http', function($http) {
  var envConfig = {
    restUrl: 'http://localhost:8080/rest/api/'
  };

  $http.get('/config.json').success(function(data) {
    envConfig = data;
  });
}]);

So, how about injecting a provider into config, and let the provider use the $http ?

3. Inject $http into the provider

Will not work. You cannot inject a service ($http) directly into a provider, like this:

envconfig.provider('EnvConfig', function($http) {
});

Though you can use a service within the provider, like this:

envconfig.provider('EnvConfig', function() {
  var envConfig = {
    restUrl: 'http://localhost:8080/rest/api/'
  };

  var loadConfig = ['$http', function($http) {
    $http.get('/config.json').success(function(data) {
      envConfig = data;
    });
    return envConfig;
  }];

  //This $get is kinda confusing - it does not return the provider, but the "service".
  //In our case, the "service" is the environment configuration object.
  //The $get is called automatically when AngularJS encounters a DI.
  this.$get = loadConfig;
});

If you want to further configure the provider, you can do this. Notice that the “Provider” suffix must be appended to EnvConfig.

envconfig.config(['EnvConfigProvider', function(EnvConfigProvider) {
  //Do something with EnvConfigProvider
}]);

While this almost works, there is a problem: $http is an asynchronous call, so it is not guaranteed that the values read from config.json will be in place by the time the object is returned. Very likely the local envConfig defaults will always be returned.
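The failure mode is easier to see outside AngularJS. In this plain-JavaScript sketch (makeGetter and loadAsync are made-up names, just to model the provider's $get), the async callback fires only after the getter has already returned the defaults:

```javascript
// Model of the $get above: it kicks off an async load, then returns immediately.
function makeGetter(loadAsync) {
  var envConfig = { restUrl: 'http://localhost:8080/rest/api/' }; // defaults
  return function $get() {
    loadAsync(function (data) {
      envConfig = data; // too late: $get has already handed out the defaults
    });
    return envConfig;
  };
}

// Simulate a slow HTTP call with setTimeout.
var $get = makeGetter(function (callback) {
  setTimeout(function () { callback({ restUrl: 'http://prodserver/rest/api' }); }, 0);
});

var config = $get(); // config.restUrl is still the default localhost value
```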

To get around this you would have to make $http synchronous, which is not possible. So we bypass AngularJS's $http and drop down to jQuery, which still supports synchronous ajax calls.

4. Load config.json synchronously

In the provider, you can do the following:

envconfig.provider("EnvConfig", function() {
  var envConfig = {
    restUrl: 'http://localhost:8080/rest/api/'
  };

  this.$get = function() {
    var q = jQuery.ajax({
      type: 'GET', url: '/config.json', cache: false, async: false,
      contentType: 'application/json', dataType: 'json'
    });
    if (q.status === 200) {
      angular.extend(envConfig, angular.fromJson(q.responseText));
    }
    return envConfig;
  };
});

Notice that q.responseText is a string and is converted to a JSON object using angular.fromJson(). Also, by using angular.extend() you can merge the local envConfig with the loaded one – this allows you to keep some default values that you don't want to immediately expose.
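The merge behavior can be illustrated in plain JavaScript, with Object.assign standing in for angular.extend (retryCount is a hypothetical default added just for illustration):

```javascript
var envConfig = {
  restUrl: 'http://localhost:8080/rest/api/', // default, gets overridden
  retryCount: 3                               // hypothetical default, survives the merge
};

// Like angular.fromJson(q.responseText): parse the loaded string into an object.
var loaded = JSON.parse('{"restUrl": "http://prodserver/rest/api"}');

// Like angular.extend(envConfig, loaded): copy loaded keys onto the defaults.
Object.assign(envConfig, loaded);
// envConfig.restUrl -> 'http://prodserver/rest/api', envConfig.retryCount -> 3
```

Keys present in config.json win; defaults that the server never mentions stay intact.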

5. How about profiles?

If you have multiple profiles in the config.json, you can simply use the following technique:

this.$get = ['$location', function($location) {
  var profile = $location.search().profile || 'default';
  var q = jQuery.ajax({
    type: 'GET', url: '/config.json', cache: false, async: false,
    contentType: 'application/json', dataType: 'json'
  });
  if (q.status === 200) {
    angular.extend(envConfig, angular.fromJson(q.responseText)[profile]);
  }

  return envConfig;
}];

And your config.json is like:

{
  "development": { "restUrl" : "http://devserver/rest/api" },
  "qa": { "restUrl" : "http://qaserver/rest/api" }
}

Then call your URL as http://localhost:9000/index.html?profile=development

While adding a profile query string may not be recommended for production, if you are testing your UI against several backend instances, this might just help.
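The profile lookup itself is trivial and can be sketched in plain JavaScript – selectProfile is a hypothetical helper written for this post, not part of AngularJS:

```javascript
// The profiles object mirrors the config.json above, plus a 'default' section.
var configJson = {
  "development": { "restUrl": "http://devserver/rest/api" },
  "qa": { "restUrl": "http://qaserver/rest/api" },
  "default": { "restUrl": "http://localhost:8080/rest/api/" }
};

// Pick one section by profile name; unknown or missing names fall back to 'default'.
function selectProfile(config, profileName) {
  return config[profileName || 'default'] || config['default'];
}

var qaConfig = selectProfile(configJson, 'qa');
// qaConfig.restUrl -> 'http://qaserver/rest/api'
```

This is exactly what the indexing by [profile] in the $get above does, with the fallback made explicit.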

Thoughts on SpringBoot

In its new avatar, Spring enters the opinionated-frameworks marketplace and finally takes the Sinatra/Scalatra/Ratpack/Dropwizard/Spark route, where embedded containers (i.e. no external server containers) will start dictating the future. What's an opinionated framework anyway? Ain't all frameworks opinionated? To me, every piece of code is just as opinionated as the developer who wrote it. Here are my observations about SpringBoot, after having worked with it for about 6 weeks.

1. Build support for gradle (+)
SpringBoot apps can be built using both Maven and Gradle. But I believe Gradle will eventually dominate the world of buildscripts. So the sooner your team moves to Gradle, the better your quality of this life and, perhaps, the next life too. Are you listening, .Net?

2. Starter poms (+)
Remember your 500-line pom.xml, where the tag <dependency> occurs 25000 times? Say goodbye and start using starter-poms. Combined with Gradle, a typical build file is now about 30 lines.

3. Configurations (+/-)
Spring is among the last standing action heroes of xml-based application configuration in the jvm world. A super hit when introduced, but it soon became a liability. Thankfully, many developers challenged the xml tag soup and created innovative web frameworks (Wicket, Play etc.). Those who work with the Spring-based framework Grails may not have touched an xml file in a good while. If Spring MVC developers are still stuck with Spring without adopting Grails, I fail to understand why (well, except for business impositions).

Ironically, SpringBoot seems to have many opinions when it comes to configuration – it supports 3 flavors of bean configuration: xml, annotations and the groovy dsl. Having worked with Grails and other non-xml based applications for the last few years, picking up SpringBoot was a bit of a chore. Grails autowires beans by name, and when custom beans are required, there is this wonderful bean builder. For me, java annotations are no better than xml configuration. Xml configuration breeds verbosity, but annotations defeat the purpose of wiring objects separately from code. While they bring type-safety, they also let you mix code, configuration and logic, and I think it's a total mess. Annotations are like antibiotics – good in small doses (@ToString, @RestController, @AutoClone etc.), too many side-effects in overdose – building whole applications out of annotations is @NotMyStyleOfTea.

A typical SpringBoot application has many, many annotations – @EnableAutoConfiguration, @Configuration, @ComponentScan, @Conditional etc. Sometimes you end up with more annotations than code, and it's not intuitive which is doing what, when or how. SpringBoot is certainly simpler for developers from an xml background, but to me as a Grails developer, the annotations have been a bit intimidating and sprawling. I don't see CoC (Convention-over-Configuration); instead I see CoC (Configuration-over-Code).

Thankfully, SpringBoot has great (although not 1:1 xml-equivalent) support for the groovy beans dsl (via the GroovyBeanDefinitionReader). The groovy dsl is elegant, concise, intuitive and very readable. For some loss of type-safety (which could have been compensated by good tooling support), it packs a great punch – wiring beans, environmental configuration, programmatic startup logic (as opposed to declarative) etc. I feel Spring could standardize on the groovy bean dsl as the only configuration mechanism and shed all the fat of xmls and annotations. It would make the framework pretty lean and competitive in the already crowding lean-frameworks market. Maybe that's what grails-boot is?

4. Properties (+/-)
Just like configurations, there are a few ways of defining and injecting properties. Coming from a Grails background, the many ways of injecting properties were a bit confusing. SpringBoot supports both .properties and .yaml files. AutoConfiguration uses many default values for properties, but there is no comprehensive documentation of these properties and their defaults. Again there are many annotations related to properties – @ConfigurationProperties, @PropertySource, @Value, @EnableConfigurationProperties etc. Grails has this amazing single-point grailsApplication bean based on ConfigObject (a glorified Map) that allows nested and runtime-evaluatable configuration – helpful in dynamic scenarios. Again, I wish Spring had de facto support for this (injecting properties from a Config.groovy).

5. Rest Support (+)
SpringBoot makes it very easy to create rest controllers via @RestController. Instead of creating full-blown web applications, Spring's ecosystem can be used to create well-rounded rest services backed by Batch/Data/Integration/Security, with js frameworks like Angular/Knockout/Backbone etc. for the front end. If using rest over json, Groovy 2.3 promises a fast json marshaller. While I like the cleanliness of Thymeleaf, the modern js frameworks have a clear advantage over server-side html generators.

6. Logging (+/-)
Yet again, too many logging frameworks in the bag: log4j, log4j2, slf4j, commons-logging, java.util.logging, logback. I spent some time resolving dependency conflicts, until I finally gave up and switched to logback. The Spring team strongly recommends logback – just go with it – there is probably a good reason.

7. Data Access (+)
No question about Spring's versatility here. Name any db and you have SpringBoot autoconfiguration support. Plus the Grails team has done a great job of spinning off Gorm into a standalone component. TexMex, I would say.

8. Testing (+/-)
Many examples still show the use of JUnit, but here is a good start on how to use the incredible Spock framework in SpringBoot. Spock is like mom's recipe – once you taste it, nothing else measures up.

9. Documentation (+)
It has improved a lot with the newer minor releases. It takes time to sift through some old examples in the community, but let's just blame Google for not knowing what exactly you want, though it seems to know all about what food you want to eat when you are near a restaurant.

10. Finally
I think Grails3 (grails-boot?) is going down the trending route of embedded container deployments. I think that's the only thing against Grails in the current trend. SpringBoot got there first, but I still feel it lacks the simplicity of Grails or the leanness of Play. It has certainly been simplified, but it is not simple.

If your existing ecosystem depends a lot on Spring-based frameworks, it is worthwhile to adopt SpringBoot. Honestly, I'm hoping Grails3 isn't far off!

The GroovySpringBootBatchGormGroovyDslBeanFactory

See spring-boot-batch-sample at github for the updated source code.

I was recently working on setting up a not-so-trivial Spring Batch application and wanted to use SpringBoot because of its embedded-web-container capabilities. I could have used Grails and the Spring Batch plugin, but I wanted to give our Ops/DevOps team a peek at running and maintaining apps via a plain "java -jar". I hadn't used Spring since 2009 (switching between SharePoint and Grails). Xml is no longer a necessity for Spring, and it has good support for java annotation configuration. But I am not very comfortable with annotation-oriented programming. Xml provided the separation of wiring dependencies from code, but it is verbose. Annotations are less verbose, but they allow you to mix configuration, logic and code, which I feel can spiral out of control pretty soon. I still feel that wiring dependencies independently of "code" is a very desirable feature for large applications. Both xml and annotations seem to be at opposite ends of the spectrum.

Fortunately, there is a middle ground. SpringBoot provides support for the groovy beans dsl via the GroovyBeanDefinitionReader. Groovy bean dsls solve the problems of xml and annotations: DI wiring + concise, readable syntax + some logic (e.g. environment-based). But this comes at the price of losing type-safety. I am surprised that the groovy dsl has not yet gone mainstream with Spring apps. If Spring came with a Groovy DSLD schema (maybe there is one already?), it could be a killer feature. For example, Spring Integration already offers a dsl-based "workflow", which is pretty elegant to read, write and maintain.

I was also new to SpringBatch, so it took a while to get them all wired up. So here is a starting point, if you want to use the Groovy language + SpringBoot + SpringBatch + the Groovy DSL + Gorm. I haven't figured out dsl equivalents of @EnableScheduling and @Scheduled yet.

Part 1: appcontext.groovy

//Note that config is not defined as a bean, but directly evaluated and then injected into other beans
//the 'grailsApplication' equivalent
ConfigObject configObject = new ConfigSlurper().parse(Config)

//Note the syntax beans {} not beans = {} like in Grails resources.groovy
beans {

	xmlns([ctx: 'http://www.springframework.org/schema/context', batch: 'http://www.springframework.org/schema/batch'])
	ctx.'component-scan'('base-package': 'org.mypackage')
	ctx.'annotation-config'()

	myController(MyController) {
		config = configObject 
	}

	myService(MyService) {
		config = configObject 
  }

  //MyItemReader implements ItemReader
	myItemReader(MyItemReader) { bean ->
		bean.initMethod = 'init' //required if initializing some data from external dao
		bean.scope = 'step' //for job restartability
	}

	myItemProcessor(MyItemProcessor) {
		myService = ref('myService')
	}

	myItemWriter(FlatFileItemWriter) {
		lineAggregator = new DelimitedLineAggregator(delimiter: ',', fieldExtractor: new BeanWrapperFieldExtractor(names: ["id", "title"]))
		resource = '/apps/springboot/myproject/output'
	}

  //create a job
	batch.job(id: 'job1') {
		batch.step(id: 'step1') {
			batch.tasklet {
				batch.chunk(
					reader: 'myItemReader',
					writer: 'myItemWriter',
					processor: 'myItemProcessor',
					'commit-interval': 10
				)
			}
		}
	}

  //the following beans are minimum mandate because there is no equivalent of xml's <batch:job-repository /> in groovy dsl
  //http://stackoverflow.com/questions/23436477/groovy-bean-syntax-for-spring-batch-job-repository
  //thanks to https://github.com/johnrengelman/grails-spring-batch/blob/master/SpringBatchGrailsPlugin.groovy for the bean definitions
	jobRepository(MapJobRepositoryFactoryBean) {
		transactionManager = ref('transactionManager')
	}

	jobRegistry(MapJobRegistry) { }

	jobLauncher(SimpleJobLauncher) {
		jobRepository = ref('jobRepository')
		taskExecutor = { SyncTaskExecutor executor -> }
	}

	jobExplorer(JobExplorerFactoryBean) {
    //dataSource is auto-configured
		dataSource = ref('dataSource')
	}

	jobOperator(SimpleJobOperator) {
		jobLauncher = ref('jobLauncher')
		jobRepository = ref('jobRepository')
		jobRegistry = ref('jobRegistry')
		jobExplorer = ref('jobExplorer')
	}

}

Part 2: Application and Scheduler

@Configuration
@ComponentScan
@EnableAutoConfiguration
@EnableScheduling
class MyJobApplication {

	private static final Logger logger = LoggerFactory.getLogger(MyJobApplication.class.getName())

	@Autowired
	JobLauncher jobLauncher

	@Autowired
	Job myJob

//  You can also create configObject bean like this and refer back in beans.groovy using ref('config')
//	@Bean(name="config")
//	ConfigObject configObject() {
//		return new ConfigSlurper().parse(Config)
//	}

	
  @Scheduled(fixedDelayString = '${myJobFixedDelay}', initialDelayString = '${myJobInitialDelay}')
	public void startMyJob() {
		logger.info "startMyJob()"
    //Add time if your job runs repeatedly with different parameters - this makes it a unique entry in the batch-job tables
		JobParameters jobParameters = new JobParametersBuilder().addLong("time",System.currentTimeMillis()).toJobParameters()
		jobLauncher.run(myJob, jobParameters)
	}

	public static void main(String[] args) {
		logger.info "Starting MyJobApplication..."
		Object[] sources = [MyJobApplication.class, new ClassPathResource("appcontext.groovy")]
		SpringApplication.run(sources, args);
	}
}

Part 3: Datasource

The datasource bean is autoconfigured via @EnableAutoConfiguration if you define application.properties (or application.yaml). Just add the right driver to your build.gradle. If you don't specify any, an in-memory hsql db is used.

Part 4: build.gradle

buildscript {
	ext {
		springBootVersion = '1.0.2.RELEASE'
		spockVersion = '0.7-groovy-2.0'
	}
	repositories {
		mavenLocal()
		mavenCentral()
		maven { url "http://repo.spring.io/libs-snapshot" }
	}
	dependencies {
		classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
	}
}

apply plugin: 'groovy'
apply plugin: 'idea'
apply plugin: 'spring-boot'

jar {
	baseName = 'myapp'
	version = '0.1'
}

repositories {
	mavenCentral()
	mavenLocal()
	maven { url "http://repo.spring.io/libs-snapshot" }
}

dependencies {
	compile("org.springframework.boot:spring-boot-starter-web")
	//if you want to use Jetty, instead of Tomcat, replace the above line with the next two lines
	//compile("org.springframework.boot:spring-boot-starter-web:${springBootVersion}") { exclude module: "spring-boot-starter-tomcat" }
	//compile("org.springframework.boot:spring-boot-starter-jetty:0.5.0.M2")
	compile("org.springframework.boot:spring-boot-starter-actuator")
	compile("org.springframework.boot:spring-boot-starter-batch")
	compile("org.springframework.boot:spring-boot-starter-logging")
	compile("org.springframework.boot:spring-boot-starter-jdbc")

  compile("org.codehaus.groovy:groovy-all:2.2.2")
	
  compile("org.springframework:spring-orm:4.0.3.RELEASE")
  //For those Grails guys, just throw in the new and shiny standalone gorm dependency
	compile("org.grails:gorm-hibernate4-spring-boot:1.0.0.RC3") {
    //currently brings in spring-orm:3.2.8, exclude it and explicitly include new one above
		exclude module: 'spring-orm'
	}

	testCompile "org.springframework.boot:spring-boot-starter-test"
	testCompile "org.spockframework:spock-core:${spockVersion}"
	testCompile "org.spockframework:spock-spring:${spockVersion}"
}

task wrapper(type: Wrapper) {
	gradleVersion = '1.11'
}

A few more features for the future:

1. Defining multiple datasources in the groovy dsl
2. Using spring-loaded for hot-swapping at runtime (as of now I can't get this to work with IntelliJ IDEA)
3. Using spring-batch-admin to control jobs via a UI