Saturday, June 25, 2011

Javascript objects and jQuery ajax callback scope issues

I was feeling good the other day while writing some Javascript objects, defining instance values and methods on the prototype chain. In my awesome object organization effort, I decided why not add my jQuery Ajax success, error, and pre-submit callback functions to an object. I also thought it would be a good idea to have the object generate its own Ajax options.

Here is a sample object:

function AjaxObject(){
   this.ajaxObjectState = undefined;
}

AjaxObject.prototype.callbackSuccess = function (){
   this.ajaxObjectState = "success";
};

AjaxObject.prototype.callbackError = function (){
   this.ajaxObjectState = "error";
};

AjaxObject.prototype.getAjaxOptions = function (){
   var options = {
      type: 'POST',
      error: this.callbackError,
      success: this.callbackSuccess
   };
   return options;
};

var ajaxObjectInstance = new AjaxObject();
$.ajax(url, ajaxObjectInstance.getAjaxOptions());

This blog post is about what's wrong with this and why it won't work.

The issue is that the options definition assigns references to the callback functions themselves, detached from the object instance they were defined on. When jQuery invokes a callback, "this" is bound to the Ajax settings object, not the outer object instance the method belongs to.

In the example above, when the success or error callback runs, it sets "ajaxObjectState" on the wrong "this", so the instance's "ajaxObjectState" remains undefined.

So how do you get around this? I did the following:

AjaxObject.prototype.getAjaxOptions = function (){
   var options = {
      type: 'POST',
      error: this.callbackError,
      success: this.callbackSuccess,
      callingThisReference: this
   };
   return options;
};

I added a reference to the instance into the Ajax options as "callingThisReference". The "this" assigned in the options is the outer object instance created via "new". Because jQuery binds the callbacks' "this" to the options object, the callbacks can now reach the instance through this reference to read or change its state.

In addition, the success and error callbacks need to change a bit, which breaks the normal object organization effort in my initial approach:

AjaxObject.prototype.callbackSuccess = function (){
   this.callingThisReference.ajaxObjectState = "success";
};

AjaxObject.prototype.callbackError = function (){
   this.callingThisReference.ajaxObjectState = "error";
};

Note the "this.callingThisReference" in the callbacks: "this" is the options object the callback was bound to, and "callingThisReference" is the object instance's "this".

The addition somewhat breaks the object itself, because the callbacks now only work when invoked this way. Creating the object with "new" and then calling the success or error callbacks directly would leave "callingThisReference" undefined. This is easily fixed by checking whether "callingThisReference" is defined and falling back to "this" if not.
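The binding problem can also be avoided without the extra reference, by capturing the instance in a closure when the options are built. Here is a minimal sketch of that approach; it runs on its own, with a simulated callback invocation standing in for jQuery:

```javascript
function AjaxObject() {
   this.ajaxObjectState = undefined;
}

AjaxObject.prototype.getAjaxOptions = function () {
   // Capture the instance so the callbacks close over it,
   // no matter what "this" is when jQuery invokes them.
   var self = this;
   return {
      type: 'POST',
      success: function () { self.ajaxObjectState = "success"; },
      error: function () { self.ajaxObjectState = "error"; }
   };
};

var instance = new AjaxObject();
var options = instance.getAjaxOptions();

// Simulate jQuery calling the callback bound to an unrelated "this":
options.success.call({});
console.log(instance.ajaxObjectState); // "success"
```

jQuery's Ajax "context" option is another route: setting it to the instance makes "this" inside the callbacks refer to the object itself.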

Monday, June 20, 2011

Old-fashioned password safe using openssl

I don't understand password safe software. What's wrong with just encrypting a plain text file? Maybe it's old-fashioned. Maybe I am missing something, but this seems pretty good. If I am wrong, set me straight.

Encrypt your file (enter passcode):
openssl enc -aes-256-cbc -a -salt -in passwords.txt -out passwords.txt.encrypted
Then remove the old file. You might want to shred it, since simply deleted files can often be recovered.

Un-encrypt your file (enter passcode):
openssl enc -d -aes-256-cbc -a -in passwords.txt.encrypted
The decrypted data is written to standard output; add "-out passwords.txt" if you want it written to a file instead.

See also OpenSSL: Documents, enc(1). You can use whatever cipher you want.

Of course, you now have a main password to your password file. So protect that and remember it.

Oh, you have Windows? Install Cygwin. Don't know how to do that? Then just keep your passwords on paper and put them in a physical safe in your home.

Friday, June 17, 2011

Websphere Commerce URL flattening and Search Engine Optimization (SEO)

There is a lot of information out there about Websphere Commerce URL flattening and search engine optimization, and I wanted to share the best of the best based on my professional experience. Mostly I've provided links to reference materials, which apply both to the IBM stack and to any site in general.

The search engines offer many standards and guidelines for getting your pages discovered and indexed properly. The goal is not purely the appearance of the URL, which I think is what sometimes confuses flattening with optimization, but also getting your site ranked higher in results.

You should also be aware of your site's state management, given the dynamic values in the URL and the asynchronous state of the page. This addresses the bookmarking and linking behavior a user expects from a URL indexed by a search engine. If your page fails to restore its state on reload, you will find yourself disappointing your visitors.

The following are the reference links I collected on search engine optimization, URL flattening, shortening, and the other terminology used to describe the web URI:

In general, be cautious of those who talk a big game about SEO. It likely means they know how to purchase keywords from search engines and tell you to make your URLs look pretty, but not how to do the real work themselves.

What you really want is organic growth in page visits driven by your site's technical implementation, while letting the search engines figure out your page structure and relevance on their own. You also want your HTML and meta tags organized properly so the search engines can do their jobs better.

Wednesday, June 15, 2011

Why websphere commerce developers don't want to be "websphere commerce developers"

I've been doing websphere commerce development now for 2 years. If you aren't familiar with IBM's product, it's basically what it says: a commerce platform. This article isn't about the technical strengths or weaknesses of the platform; it's about why developers who implement it don't want to be known for doing so, myself being one of them. I am not speaking for anyone else; these are my observations.

First, the platform is very capable and therefore complex. Complex systems that are changed frequently and customized continuously fail more and have more errors than less complex, less frequently customized systems. This error-proneness is a turn-off for smart people because it makes them look bad: it looks bad to your managers, and your technical peers might eye you differently.

Next, implementation and customization of a platform is not glamorous. Someone else has the glory of writing the core application, and we, the implementors, just work on top of it, customizing for our needs. The issue is that "real" companies would implement their own platform rather than use another's because, of course, the users of software are always smarter than those who write it (sarcasm). Most developers would rather have their employers let them bottom out and waste time writing something from scratch versus using something that already exists.

Websphere equals Java, and Java is heavy, slow, and, again, not glamorous anymore. You have to compile code, there are all kinds of layers in the application, then deploy, and so on. There are enterprise concepts built into the stack, and application layers are boring. The development and management tools for Websphere are also IBM-branded: Eclipse, Tomcat, Apache, etc. are all twisted for Websphere rather than left pure. This means special development tooling that is likely licensed and costs money. It also requires a lot of machine resources to run the server and development tools, and server resources for production systems are expensive and large scale.

Nobody talks about Websphere in the open community. The web growth and innovation stories don't mention this platform. They won't be talking about it at any O'Reilly conferences. Amazon isn't going to use it. Google Checkout isn't going to use it. Ebay isn't going to use it. Because of this, the core web innovators of commerce aren't pushing the limits of Websphere Commerce to make it better. They aren't releasing their own products either, but it begs a comparison. Technical people don't like this. It's fun to read about the stack and tools you are using: how are they being used at other companies? How are those companies giving back?

So why are websphere commerce developers still websphere commerce developers? Big companies implement this platform, meaning you get to work on major retail sites that you can tell your mom, dad, and friends to check out. It also pays pretty well, and not everyone is good at it, so I think the role is in demand. It's also still mostly web work, meaning you can work in different areas while inside the Java / IBM stack (performance, HCI, information architecture, search, etc.). There is plenty of infrastructure work as well: scripting, automation, and many distributed and scalable architectures. You learn many things beyond your direct role, and most technical people like that.

Why do I do it? It's the web and I am still learning. Plus places like Google, Twitter, Amazon, etc, won't hire me. What would pull me away? Something more glamorous that enables me to learn, and pays my bills.

Sunday, June 12, 2011

Sharethis getScript timing and interval retries

I recently posted on some Sharethis work I did. Well, now I had an issue with HTTP versus HTTPS and needed to switch the script load based on the protocol in use, so I gave jQuery's getScript a go. The problem was that the script load finishes in a callback, and timing became an issue for later events that depend on the Sharethis code being loaded.

In my last post, I mentioned the Ajax and button loading setup process. This was failing because, with the asynchronous loading of getScript, the main Sharethis object didn't exist from time to time.

Oddly, the reference to the stButtons object was throwing an exception even though I had checks in place to ensure it was defined before I acted on it (referencing an undeclared identifier throws a ReferenceError before a guard like mine can help; only a bare typeof check is safe). Since this didn't work, I was forced to wrap the stButtons reference in a try/catch block; the exception fired when the asynchronous script load had not completed. In the catch block, I added a timer to keep retrying the button setup code until it was ready.

try {
   if (typeof stButtons !== "undefined" && stButtons) {
      stButtons.locateElements();
   }
} catch (stException) {
   var intervalId = window.setInterval(function () {
      try {
         stButtons.locateElements();
         window.clearInterval(intervalId);
      } catch (retryException) {}
   }, 2000);
}


The above code tries to run the "locateElements" function; when that fails because the object is not yet defined, the window is given a function to retry every 2 seconds. When a retry succeeds, the interval is cleared and the process ends.
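The mystery of the guard that never helped comes down to a Javascript rule worth a minimal, standalone sketch: reading an undeclared identifier throws a ReferenceError immediately, while a bare typeof on the same identifier is safe and just returns the string "undefined".

```javascript
var result;
try {
   // stButtons is never declared here, so reading it throws
   result = stButtons && stButtons.locateElements;
} catch (e) {
   result = "threw: " + e.name;
}
console.log(result);           // "threw: ReferenceError"
console.log(typeof stButtons); // "undefined"
```

This is why a guard must test `typeof stButtons !== "undefined"` before touching the variable itself.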

See the setInterval and clearInterval methods.

Classic timing issue of threads; I just didn't think about it at the time. The call to "stButtons.locateElements" could also be placed in the getScript success callback, which runs once the script has loaded.

Thursday, June 9, 2011

Merging from a deleted branch in SVN with pegged revisions

A co-worker showed me a trick the other day that allows merging from a deleted branch in SVN.

My problem was that I had merged my branch to trunk, then deleted it, but some work for a specific ticket needed to be merged to another branch:
  • Branch A - My branch (deleted @ revision 1000)
  • Trunk - ....
  • Branch B - Other branch
When I merged to trunk, I reintegrated the whole branch, so the commit wasn't broken down by ticket. I needed just one ticket's changes for Branch B. What I didn't know is that you can specify an SVN path in a merge at a given revision, for example one where a since-deleted path still existed.

Here is the merge command example:

$ svn merge -r 960:965 https://svn.hostname.com/project/branches/branch_a@999 svn_branch_b


Where "svn_branch_b" is a local working copy of "Branch B", merging from the deleted "Branch A". Notice the "@999" on the SVN path in the merge: this is the revision just before the delete, so the path still existed at that point. The -r range must also fall before the delete.

This syntax is a "peg revision", or at least that's what my co-worker called it.

Sunday, June 5, 2011

Asynchronous success and failure

Some background
I was working on a fix the other day that required some data changes in a web application backend without a page refresh. Pretty standard these days, nothing special here: that's AJAX, or asynchronous (I like to just say "asynchronous" because who uses XML anyway).

As I was working, I needed to know if my asynchronous post to the server was successful or failed.

Again, basic: you make a controller call, and it forwards to a success or failure view depending on what happened on the server. I don't mean a plain HTTP 500 failure; I mean a failure the controller handled and knows how to report in some way.

The bad
To my amazement, this simple success / failure concept was not in place. The mere fact that the asynchronous call triggered its success callback was the "success". I was unhappy.

First, the response page was "just some page", meaning it was an unrelated view the controller was forwarding to. The page was HTML meant to be rendered in the browser as UI, not a data response.

Second, that's heavy: why do you need an HTML page to come across the wire to tell your UI that your asynchronous post succeeded?

The success and failure views are very simple and vital to your asynchronous success callback. Here they are as JSON, as consumed by a jQuery AJAX success callback.

Success
{"success":"true"}

This one is easy: success = "true" means the controller was successful. Now you can carry on with whatever happy path occurs in the application.

Failure
{"success":"false",
"errorKey":"12345",
"errorMessage":"display this message as error to user"
}

This one is easy, but has a little more to it.
  • success = "false", the controller failed
  • errorKey = ?, the controller failed because of this error, can be whatever key your application implements for tracing down error
  • errorMessage = ?, whatever message the UI should present to the user because of the error
Having "errorMessage" in the response gives you the advantage of managing the error text on the server, not in the browser; you can also localize the text on the server in a single place. Your controller likely threw this exception and reported it in your application logs; now it's shared with the UI.
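A success callback consuming these responses only needs to branch on the flag. The sketch below is hypothetical (the function name and return strings are my own); the failure object mirrors the JSON above:

```javascript
// Branch on the "success" flag of the JSON response.
function handleResponse(response) {
   if (response.success === "true") {
      return "ok"; // happy path: carry on in the application
   }
   // failure: surface the server-managed message to the user
   return "error " + response.errorKey + ": " + response.errorMessage;
}

console.log(handleResponse({ success: "true" }));
// "ok"
console.log(handleResponse({
   success: "false",
   errorKey: "12345",
   errorMessage: "display this message as error to user"
}));
// "error 12345: display this message as error to user"
```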

Summary
Both are lightweight responses to your UI that allow you to act properly in your success event. I used "true" and "false" as the success values in my JSON; you can use whatever values you want, but the true / false limit keeps it simple. Even if you are executing the controller via wget or curl, the response structure is more easily managed versus some generic page load.

Saturday, June 4, 2011

Some simple parameter passing with XSLT in ANT

There is a simple way to manage environment differences in your build process when managing configuration files: using XSLT in ANT.

Say, for example, your configuration file contains database connection details. These are likely different per environment (testing, QA, staging, etc.). Rather than keeping many copies of the configuration with the unique values, you can create an XSL template.

Let's call the template "database.xsl". The simple template would look like this:



First, take note of the "xsl:param" values at the top of the file. These declare simple variables, with default values, that you would like to apply to your template. Second, note the "{$....}" formatting: this marks where a parameter's value is inserted into your template.

Next, let's look at the ANT XSLT task:
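A hypothetical sketch of the corresponding ANT target; the target name, file names, and parameter values are assumed for illustration:

```xml
<!-- Transforms the input through database.xsl to generate config.xml -->
<target name="generate-config">
   <xslt in="config-template.xml"
         out="config.xml"
         style="database.xsl">
      <!-- Runtime values overriding the template's defaults -->
      <param name="db.hostname" expression="qa-db.example.com"/>
      <param name="db.username" expression="qa_user"/>
      <param name="db.password" expression="qa_pass"/>
   </xslt>
</target>
```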



A couple of things here. First, the input, output, and style: the "database.xsl" we just defined above is the style, and it transforms the input and generates the output when your ANT target runs. To pass runtime build values into the transformation, see the "param" entries within the "xslt" block. These can differ per environment.

The result is that you manage the dynamic variables separately versus the entire XML structure, or whatever the file is (you can run XSLT on text files, for example; it doesn't matter). In this example the values are hard coded, but you can put them in your build.properties file as well.

Why mention it if it's so simple?

I think this is a major plus versus having per-environment files. It reduces structural updates to the file when changes are needed. For example, if a new value is added to the configuration, rather than updating several files, you update the template. If a value is static, you don't have to transform it; if it's dynamic, it can easily be managed this way.

Why mention it? I learned this trick about a year ago, not sure how "used" the concept is, so I thought I would share. I didn't invent it, but I like to use it.
