Friday, December 17, 2010

How to load text files with jQuery

I was looking into this to help out @tarelli with one of his crazy tasks from hell (I'll spare you the details). Here's how you go about loading local files using jQuery:

// load the file and append each line to a container
$.get('file:///C:/myPath/myFile.txt', function(data) {
    var lines = data.split("\n");

    $.each(lines, function(n, elem) {
        $('#myContainer').append('<div>' + elem + '</div>');
    });
});

This will only work if you double click the file that executes the script; obviously a web server shouldn't let you go mess around in the file system (I tried on IIS and couldn't fool it, damn). The same snippet can of course be used to load files from a web server by providing a URL to an accessible file.

  • I couldn't get it to work without specifying the file's full path in the format file:///C:/myPath/myFile.txt
  • to get this to work on Chrome you'll have to launch it with the --allow-file-access-from-files command line argument
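On Windows, for example, that launch might look something like this (the path to chrome.exe is an assumption, adjust it to your install):

```shell
"C:\Program Files\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files
```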
P.S. happy xmas

Wednesday, September 29, 2010

Javascript - strip off illegal characters from string

Recently I had to come up with a piece of javascript to strip a set of illegal characters from strings before passing them down to the persistence layer.

Took me a while to come up with a regex for the replace, not because it's particularly difficult, but because I suck at regexes (and I am no js expert either).

I thought it could be handy to have this functionality as a string prototype:
// strips off illegal chars &%$
String.prototype.stripOffIllegalChars = function() {
    return this.replace(/[&%$]/g, "");
};
The /g above makes the replace global, so every occurrence of those characters is replaced, not just the first.

It can be used like this on any string:
var dirtyString = "blah$blah%blah&";
var cleanString = dirtyString.stripOffIllegalChars();
Hopefully it'll save some time for the next in line.

Thursday, May 27, 2010

WCF client hangs on big response: make it streamed

I have to blog about this - as I spent a few days having nightmares about it.

An operation on a WebSphere service was returning a pdf string, for a payload of about 500KB. The WCF client consuming the service worked fine in the test fixture, but when the operation was integrated into the web solution, with the exact same binding and endpoint, the client hung for more than a minute on the response of this particular operation (every other operation, with smaller payloads, was OK) before coming back with the deserialized response.

I initially blamed the service, but then noticed (sniffing traffic via Fiddler) that the response was coming back quickly enough, and only then would the client hang for more than a minute, obviously trying to deserialize or God knows what.

After quite a bit of hacking around on and off, I changed the transferMode setting on the binding from Buffered to Streamed (I had nothing else left to try!), and it did the trick. In light of this it's pretty obvious that the response was being chunked into parts the size of the buffer, and that was probably slowing down the whole process.

Moral of the story: if you've got big payloads, transferMode="Streamed" could save your sorry ass.
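For reference, a minimal sketch of what that looks like in the client config (the binding name and message size limit here are illustrative, not from my actual setup):

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- Streamed avoids buffering the whole response before handing it over -->
      <binding name="streamedBinding"
               transferMode="Streamed"
               maxReceivedMessageSize="10485760" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
```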

Wednesday, April 7, 2010

You have requested an outdated version of PayPal. This error often results from the use of bookmarks.

Recently started getting this error when linking to paypal:
You have requested an outdated version of PayPal. This error often results from the use of bookmarks.
Solved it by commenting out the line of my code that set the form encoding (when setting up the PayPal form in GWT). Apparently PayPal recently implemented changes to prevent it from accepting posts with encoding type multipart/form-data.

Hope it helps.

Friday, March 12, 2010

CWWSS7200E: Unable to create AxisService from ServiceEndpointAddress

This is what I got back when I tried to call a Java WebSphere Axis service from a WCF client:
CWWSS7200E: Unable to create AxisService from ServiceEndpointAddress
Took me a while, but I eventually figured out what was wrong: long story short, WebSphere did not like the HTTP headers generated by the WCF binding (basicHttpBinding) when sending the request.

I started sniffing traffic with Fiddler, but initially I was paying attention only to the SOAP envelope. I then tried to invoke the service directly using SoapUI and it basically worked fine (no Unable to create AxisService error message, and I was getting back the result I expected). With the SOAP envelope ruled out, the next logical suspect was the HTTP header, so I compared the header generated by WCF with the one automatically generated by SoapUI (by pointing the tool at the endpoint URL):

WCF Generated HttpHeader:
POST /MyWebServiceDomain/aWebService HTTP/1.1

SoapUI Generated HttpHeader:

At this point it was fairly obvious that I needed to tweak the POST line of the HTTP header to include the full definition of the endpoint, hopefully through WCF configuration - and after asking around, a co-worker pointed me in the right direction: hostNameComparisonMode="Exact" on the WCF binding is what I was looking for (it is set to StrongWildcard by default).
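In config terms that's a one-attribute change on the binding (the binding name here is illustrative):

```xml
<basicHttpBinding>
  <!-- the fix: Exact instead of the default StrongWildcard -->
  <binding name="axisFriendlyBinding"
           hostNameComparisonMode="Exact" />
</basicHttpBinding>
```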

Couldn't find anything on the web about any of the above - I hope this helps someone else with the same problem.

Friday, March 5, 2010

Add Service Reference duplicates properties on Faults

Spent the last few days fighting with this, finally found what seems to be a workaround so I thought I'd share hoping that it can be of some use to some other poor devil who's stuck.

Dealing with WebSphere-generated web services, I ran into the curious occurrence of duplicate properties on Faults (on different partial classes) when generating proxies through Add Service Reference in Visual Studio 2008, pretty much as described in this post on the msdn forum.

Solution: long story short, you have to use svcutil to generate your proxy classes and data types with the /useSerializerForFaults switch. This causes the XmlSerializer to be used for reading and writing faults (but only those), instead of the default DataContractSerializer (which will still be used for everything else).
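A minimal invocation might look like this (the service URL and output file name are placeholders, not from my actual project):

```shell
svcutil /useSerializerForFaults /out:MyServiceProxy.cs http://myhost/MyService?wsdl
```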

Note 1: using the option /serializer:XmlSerializer instead of /UseSerializerForFaults on svcutil will cause the Faults to be wrapped in a sub-namespace (the same namespace they were defined in the xsd contract).

Note 2: setting the corresponding option item UseSerializerForFaults to false in the ServiceMap file does not give the same results (instead of generating duplicated properties it started generating duplicated attributes, two on each partial class).

This seems to be a genuine bug. Let's just hope it gets fixed because it's a pain to import stuff manually.

Note that if you kick it old school and import the service as web reference (the .NET 2.0 way) it should work fine as well, but for me this was not a choice.

Saturday, February 27, 2010

Russell's antinomy and stack overflows

Russell's Antinomy goes like this (an extract from Gödel's Proof):
Classes seem to be of two kinds: those which do not contain themselves as members, and those which do. A class will be called normal if, and only if, it does not contain itself as a member; otherwise it will be called non-normal. Let N by definition stand as the class of all normal classes. We ask whether N itself is a normal class. If N is normal it is a member of itself (because by definition it contains all the normal classes); but, in that case, N is non-normal because by definition a class that contains itself as a member is non-normal. On the other hand, if N is non-normal it is a member of itself (by definition of non-normal); but, in that case, N is normal, because by definition the members of N are normal classes. In short, N is normal if, and only if, N is non-normal. It follows that the statement "N is normal" is both true and false.

Reading this sort of stuff reminds us that everything we work with as software engineers comes from mathematicians.

Anyway, this is what I came up with, inspired by the above:

public class NonNormal
{
    // every instance creates another instance in its initializer,
    // which recurses until the stack blows up
    NonNormal _nonNormal = new NonNormal();
}
Now just instantiate NonNormal and you'll get a pretty sweet StackOverflowException.

I don't know about you, but this looks like a pretty elegant fuck-up to me.

Wednesday, February 24, 2010

Falling in line to the (micro)templating frenzy

A few weeks ago I would've laughed in your face if you told me I was gonna fall in line to the templating frenzy that seems to be spreading like a virus between coders. Now - one way or the other - I seem to be infected.

I recently tackled the challenge of generating a DTO layer (including mapping logic) from a WCF service client using T4 (with no editor whatsoever - don't get me started). I honestly thought I was gonna blog about that sooner or later, but then I immediately got shuffled around like a puppet to some front-end work (I am talking web here) and faced the lame-ass ordinary challenge of dynamically injecting repetitive structures into a given page in response to a given event (click-click-click-click ... click).

Coming from that T4 work, templating obviously came to mind as a way not to get bored: wouldn't it be great if there was something like T4 for js? It sounded crazy at first, but I started looking into it and immediately found the John Resig Micro-Templating engine.

There was no way I was gonna pass on that and, to be honest, the only alternatives were pretty lame:
  • implement your own templating engine 
  • shamelessly hard-code the markup to inject in your .js functions (as I always did before)
So I started playing with it and managed to stumble upon the Rick Strahl variation to it, which actually uses T4 syntax (it sure doesn't look like a coincidence) and also has a nice addition for error handling.

Anyway, this is getting too long: here's an example where I am adding divs with a bunch of input fields to a container; ids are generated at runtime depending on how many divs are already in there. It's as ugly as it gets, but it gets the message across (I think).

Let's start: you need to shove the templating engine function into a file. I called it templating.js, and you can just copy-paste whatever Rick Strahl has in his article. Once you've done that, add templating.js and jQuery.js as external script files to your html.

Once you have that in place, it's time to populate your jQuery init function and add your micro-template (the micro-template goes in a script element defined as text). This should be pretty straightforward if you read the comments:
<!-- all this goes into the head section -->
<script type="text/javascript">
    $(document).ready(function() {
        // IDs for the first div
        var idsArray = { divId: "div_0", input1Id: "input1_0", input2Id: "input2_0" };

        // logic to add the first div
        function onLoad() {
            var templ = $("#myRepeaterTemplate").html();
            var parsed = parseTemplate(templ, idsArray);
            $("#myTarget").html(parsed);
        }

        // inject the first div
        onLoad();

        // Add onclick handler to button w/id addBtn
        $("#addBtn").click(function() {
            //1. count how many divs
            var size = $("#myTarget > div").size();
            //2. generate name value pairs
            var myArray = { divId: "div_" + size, input1Id: "input1_" + size, input2Id: "input2_" + size };
            //3. invoke parseTemplate
            var templ = $("#myRepeaterTemplate").html();
            var parsed = parseTemplate(templ, myArray);
            //4. append
            $("#myTarget").append(parsed);
        });
    });
</script>

<script id="myRepeaterTemplate" type="text/html">
    <div id="<#= divId #>">
        <input id="<#= input1Id #>" type="text" value="some input" />
        <input id="<#= input2Id #>" type="text" value="some input" />
    </div>
</script>

And the following is what you need in the body of the page for this to work:

<div id="myTarget">
   <p>this stuff should be wiped on load</p>
</div>
<input id="addBtn" type="button" value="Add" />

This is just a very basic example that should be suitable when you just want to inject some markup given a template, but you can also put actual js logic into the template - have a look here for a nice example of that.

I am a lazy-ass late adopter, and if I am using this stuff (talking about the templating frenzy in general) it generally means it can't be ignored much longer. Ignore it at your own risk.

Wednesday, February 3, 2010

GetProperties(BindingFlags.DeclaredOnly) returns no properties

If you're trying to use reflection to retrieve a list of properties, and you want only the properties declared in a given type but not the inherited ones, msdn says you need to call GetProperties passing down the DeclaredOnly binding flag.

What msdn doesn't say is that if you pass down just DeclaredOnly you'll get nothing back (and if you ended up on this post through a google search, that's probably why).

In order to get back the properties you're looking for, you also need to pass down the Public and Instance binding flags - something like this should work:
var properties = type.GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);
This is probably working "as designed" but could definitely make you waste a bit of time (as it did for me).
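A minimal repro of both behaviors (the DTO classes here are made up for illustration):

```csharp
using System;
using System.Reflection;

public class BaseDto
{
    public int Id { get; set; }
}

public class DerivedDto : BaseDto
{
    public string Name { get; set; }
}

public class Program
{
    public static void Main()
    {
        // DeclaredOnly alone matches nothing: you must also say which
        // kinds of members you want (Instance + Public in this case)
        var none = typeof(DerivedDto).GetProperties(BindingFlags.DeclaredOnly);
        Console.WriteLine(none.Length); // 0

        var declared = typeof(DerivedDto).GetProperties(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);
        Console.WriteLine(declared.Length); // 1 - only Name; Id is inherited
    }
}
```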

In the spirit of human hive intelligence - hope it helps some other average Joe out there.

Friday, January 22, 2010

Are MVP and AJAX a good match?

Can't seem to get through a conversation these days without talking about the MVP pattern - so here's a boring post about just that.

If you need extensive unit testing coverage, adopting the MVP pattern in your app is probably going to make your life easier. In the specific case of web apps though there are a number of considerations to take into account.

First of all, not everyone seems to be aware that MVP was retired a while ago and split by its author into two different patterns: Passive View and Supervising Controller. In my experience, when talking about MVP, most people implicitly refer to the Passive View model.

Passive View looks like a pretty good fit for the ASP.NET post-back model, where your presenter triggers updates on the view server-side and everyone can feel comfortable they're following the pattern. This is all good till you start sneaking loads of AJAX calls into your page, resulting in a view that now has inherent behavior (in this discussion I am considering all the client-side scripting as inherent behavior of the view - which might be a wrong assumption, if so please slap me hard). Still nothing wrong, till the AJAX calls from the view go straight down to the model to fetch data which the view uses to update itself. Bear in mind that the problem here is not so much the view updating itself, but the fact that, by going straight to the model, the view is no longer passive and the presenter doesn't trigger the update (as we obviously don't have a client-side equivalent of the presenter).

The considerations above seem to rule out the Passive View approach for AJAX-intensive apps. In fact, in the Passive View pattern the view is only allowed to talk to the presenter (usually through some messaging mechanism). All this to say that if you're adopting MVP in a web project that requires AJAX calls from the client representation of the view to the service layer (the model), you'd better specify you're not adopting Passive View, because - to all effects - you're not!

But maybe not all is lost.

If Martin Fowler makes a clear distinction between Passive View and Supervising Controller, there must be a reason (and a pretty good one for sure). In fact, in a scenario where the view selectively talks to the model and updates itself and "the controller/presenter defers as much as it is reasonable to the view", you're basically adopting the Supervising Controller variation of the MVP pattern (even if you don't know it). This scenario is particularly well suited to data binding, and even if that's not strictly what's happening with loads of AJAX calls down to the model, I would see this AJAX scenario as a good fit for this variation of the pattern (compared to Passive View).

I find it might be easier for people to get it right if they feel they're closely following the pattern - this is definitely true for me!

So in answer to the title question: Are MVP and AJAX a good match?
I think they can be - as long as you're not talking about Passive View!

Saturday, January 9, 2010

What a .NET guy likes about AppEngine

I recently completed work on an app built on AppEngine and Google Web Toolkit:

It's basically a logo design app with a twist - follow the link if you wanna find out more on the app itself.

To the point: the team on this project came mostly from a Java background, and even though I am mainly a .NET nut I also have a bit of Java under my belt, so this project represented the perfect opportunity for me to get some sweet AppEngine + GWT action.

So here we go, here's the good old list of things I really liked about the setup on this project (Java/AppEngine/GWT on Eclipse + Google plugins) compared to the setup I am more familiar with (C#/ASP.NET + SQLServer + Azure hosting, all on VS):
  • With AppEngine and the Google-Datastore you don't have to cope with SQL or SQLServer (and if you follow this blog you know how I feel about SQL)
  • You can literally deploy your app with one-click from the eclipse plug-in (all extremely easy to setup - and it works)
  • Hosting is free on google appspot (if you don't go over the free quotas, and quite cheap after that anyway)
  • GWT = virtually no messing around with javascript (I do like javascript but not if I am in a hurry)
  • .NET/VS to JAVA/Eclipse transition turned out to be OK (Eclipse is pretty cool) with people around to rely on
So far AppEngine has proven reliable - the app still runs a bit slow but we did not put any effort into improving performance (I've seen ASP.NET apps running much slower, and it rarely boils down to hosting anyway) - stay tuned for more.