Welcome to BeNOW.ca, Andy's tech musings, open source projects and other misc comp shite.

News

Browser testing brain dump.
So, ye silent ones of internets vast, this is a braindump of an idea that just hit me... it's probably undeveloped and possibly not very helpful, but here it is: image comparison browser testing. I've had server testing going for a while in the benow code, tho not used too much. A sequence of pages can be recorded and the requests (with cookie state, etc) resubmitted and compared. This can help to detect unexpected differences in server response... most likely bugs (with solid test data and exclusion of dynamic content). It does not test rendering or dhtml issues or render completion time, which is where my latest brainfart comes in.

I thought screenshot comparison might be a way to test rendering, with javascript onload events used to measure render time (and cause the screenshot). Some way from javascript, a screenshot could be taken and saved for offline comparison... I'm thinking a hidden iframe with a java applet, or like voodoo. The reference mode could be enabled and a shot taken and stored, along with render time and perhaps other stats. The sequence could then be triggered via java applet fetches of url progressions (or repeated requests), each onload triggering a screenshot save. (The signed java applet could trigger an os-level screenshot using the wonderful ImageMagick.)

An analysis process could run (perhaps in parallel, or after) which compares the retry image with the original, along with the timings. It could spit out a report and perhaps demonstrate the differences, if any. Timing differences, if any, could perhaps be compared with server state to determine processing hiccups... maybe.
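The comparison step could be sketched roughly like this. Pure illustration: it assumes the two screenshots have already been decoded into equal-length flat pixel arrays (say, dumped beforehand with ImageMagick), and the function name and tolerance idea are mine, not anything in the benow code.

```javascript
// Sketch only: compares two same-sized screenshots represented as flat
// pixel arrays. diffImages and its tolerance parameter are hypothetical.
function diffImages(reference, retry, tolerance) {
  if (reference.length !== retry.length) {
    throw new Error('screenshots differ in size');
  }
  var differing = [];
  for (var i = 0; i < reference.length; i++) {
    // Flag pixels whose values differ by more than the tolerance,
    // which helps ignore minor anti-aliasing noise.
    if (Math.abs(reference[i] - retry[i]) > tolerance) {
      differing.push(i);
    }
  }
  return {
    differing: differing,                       // indices of changed pixels
    ratio: differing.length / reference.length  // fraction of image changed
  };
}
```

A report could then threshold on the ratio: a tiny fraction is probably font rendering noise, a large one is a real layout difference.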

The differences might highlight browser or javascript issues... or maybe code issues affecting delivery (tho it should be constant). At El 9-5, we're using a (good but) thick web framework in which we're seeing browser-based issues. Isolating the issues is tricky. Screenshot comparison might give a reason, perhaps even more so when coupled with spidering or random url browsing thru a url pool or something.

Another level of funk would be to let the server know that it was being recorded for screenshots and that the screenshot comparison was running. When recording, it could save the join between url and browser output to files. When comparing, it could deliver from the file (or from memory) for the requested url, perhaps from file or even from memory cache. This could be used to determine the breaking point, if any, of the browser.
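The record/replay server mode above could be sketched as a simple url-to-output store. Everything here is hypothetical illustration (names, shape), not benow framework API; a real version would save the join to files rather than memory.

```javascript
// Sketch: an in-memory record/replay store for the proposed server mode.
// All names are illustrative, not actual benow framework API.
function ReplayStore() {
  this.recorded = {};   // url -> browser output captured while recording
}
// Recording mode: save the join between url and rendered output.
ReplayStore.prototype.record = function (url, output) {
  this.recorded[url] = output;
};
// Comparison mode: deliver the captured output for a requested url,
// falling back to live rendering when nothing was recorded.
ReplayStore.prototype.replay = function (url, renderLive) {
  return this.recorded.hasOwnProperty(url)
    ? this.recorded[url]
    : renderLive(url);
};
```

Serving from the store takes server-side variation out of the equation, so any screenshot difference that remains points at the browser.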

Quirk generator, or bug tool... maybe both, perhaps neither.

Have a good one, y'all.

- 01:18 AM, 15 Sep 2011

Moving to RackSpace
The good, old benow.ca box is being taken down. The site and all vhosts are being moved across to RackSpace. The box is still working well, but I've put CJSW on RackSpace for reliability/redundancy reasons, and since they were paying the bill, it's now a bit much for me to foot. There should be no interruption to the recent silence.

FYI, my contract was extended, and I'm still down in Sacramento working on a large Documentum document scanning, indexing and presentation project. The benow codebase hasn't seen much work, but then outside of myself it doesn't have many users... sigh.

- 10:19 AM, 13 Aug 2011

In Sacramento, JavaScript pages are soon possible
First post in a while... I'm working down in Sacramento, and not working on the BeNOW code much. It's a nice place, and the 8hr days are nice in comparison to what I do if I'm working on the BeNOW stuff ;) Something to do with nice workstation, speakers and noise. I miss it slightly, but it's fun here. Nice people, lots to do on weekends, etc. Here's some recent pics of the area.

I did cut a nice Proof of Concept tonight, however. I've been experimenting with the JavaScripting API, with a focus on JavaScript, and am well on my way to adding server side javascript pages (SJS) to the web framework. SJS pages are files containing javascript which is run on the server and creates content which is delivered back to the browser. It's somewhat like XSL .page files or .php files. The request comes in (say to /some/page.sjs), the page's getContent() runs, and it might return 'some text';. The site-wide structure of the page would be returned, with the body containing that text. In addition to straight returned text, it's also possible to build up a document (via createElement(String), addElement(Element), etc) and call back into Java. I've also implemented a javascript loader (thanks to this tip), which allows for JS libraries to be included. This is what I was really after, as it allows for reuse of javascript libraries both on the server and in the web browser... unified scripting. It should also prove to be useful for other reasons... and as Mozilla Rhino is included by default in JDK6, it will be a standard part of the Web project! It's not in there yet, but it will be soon.

As it is using the JavaScripting API, it is also possible to support other scripting languages. With this feature should come support for other page languages, such as Python (.spy), Ruby (.srb) and Groovy (.sgy)!
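In miniature, the .sjs evaluation described above might look like this. A sketch under loud assumptions: the wrapper markup and renderSjs name are invented for illustration, and the real framework would evaluate through the JavaScripting API (Rhino), not a bare Function constructor.

```javascript
// Sketch of the .sjs idea: a page source containing getContent() is
// evaluated on the server and the result wrapped in the site structure.
// renderSjs and the wrapper markup are hypothetical, not framework API.
function renderSjs(pageSource) {
  // Evaluate the page's javascript and pick out its getContent function;
  // the real framework would do this via the scripting engine.
  var getContent = new Function(pageSource + '; return getContent;')();
  // Wrap the returned body in the site-wide page structure.
  return '<html><body>' + getContent() + '</body></html>';
}
```

So a request to /some/page.sjs containing `function getContent() { return 'some text'; }` would come back wrapped in the site chrome with 'some text' as the body.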

- 02:34 AM, 10 Jul 2011

Recent RipMaster Activity
I've been putting a push on the RipMaster, in order to get it ready for CJSW before I head to California on Sunday. (Yay, paying work, hopefully tolerable.)

Recent additions include:

  • Library Location: location of physical media in library. It was being done by genre, but media might fit into multiple categories, so tracking library location means that the media can be decoupled from the genre system. Programmers will know where to find the thing (if required) and what the genre actually is. Multiple genres are being debated, but will probably be done via tags for now.
  • Media Type: the physical media type. This is usually audio cd when ripping, but manual album creation, importing, and ripping of other media (dvd, etc) mean that this is not always the case.
  • Albums with multiple discs: ripping a disc in a multi-disc set (Album). I put this off for too long. It's non-trivial, for a couple of reasons... disc selection and album merging. It's now possible to select which position a disc occupies within the album and to merge it into an existing album. So, disc 2 can be ripped and it helps out with the selection of disc 2, then it's persisted. Disc 1 can then be ripped and merged in with the existing disc 2, so that there is only one album with two discs. Subdirectories are created within the album directory (Disc1/, Disc2/, etc) for each disc in a multi-disc album.
  • It has a favicon.ico!
  • Many small fixes
Hopefully I can have it in working order for some on-site debugging tomorrow.

- 12:13 AM, 23 Apr 2011

Vinyl Ripping Ponderous Noodlings
Well, two posts in one day. Like the Calgary spring, when it snows, it blizzards.

I added missing album support to the ripper last week. A missing album is album information without audio files. The ripper now provides creation of blank albums where the album/artist/label/track information can be looked up or entered. There is currently partial support for adding audio files after-the-fact via upload or drag into a directory. This quiet little feature is the wedge towards vinyl ripping.

I was talking to the great JD of MegaWatt Mahem fame on Tuesday, and he mentioned that he was ripping his large vinyl collection... to make it more manageable and easier to reference. He said it was a long and tedious task (as have others). The process requires much babysitting and is basically:

  • clean the record - minimize the unwanted noise
  • play and record - position the needle, hit record on audacity, start playing the record
  • process (depop/denoise/normalize/etc) - apply audacity plugins to improve the capture quality
  • mark tracks and split - identify silence in audacity and put track markers and split
  • encode - encode each track to output format (flac, etc)
  • rename - rename the output directory and track files appropriately
  • tag - tag the output files appropriately
Such a pain! Fortunately, I think it all can be automated!!

First off is auto recording. I did a bit of research this week, and I think it's possible via gstreamer (and gst-java). A line recording can be continually occurring, piping to a level element and a circular queue (buffer). The level can be tuned to trigger on volume detection, and cause the queue to dump to a file. This is nifty, as it is always recording (and discarding)... even with silence. On noise, the buffer is big enough to start saving directly to a wav file without missing a beat. Level detection logic could also be put in to identify track breaks and end of album. The processing could then be done via audacity scripting. Encoding could be done, and then renaming and tagging from the information entered about the missing album. End result: operator has to clean the record and hit play... that's it!
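The track-break detection above boils down to segmenting by silence. Here's a sketch of that logic over pre-computed sample levels; the function name, threshold and minGap parameters are mine for illustration, not anything in gstreamer or gst-java.

```javascript
// Sketch of the level-detection idea: given absolute sample levels and a
// trigger threshold, find track segments separated by runs of silence at
// least minGap samples long. All names here are illustrative.
function findTracks(levels, threshold, minGap) {
  var tracks = [];
  var start = -1;   // index where the current track began; -1 = in silence
  var quiet = 0;    // consecutive below-threshold samples seen
  for (var i = 0; i < levels.length; i++) {
    if (levels[i] >= threshold) {
      if (start < 0) start = i;   // noise after silence: a track begins
      quiet = 0;
    } else if (start >= 0 && ++quiet >= minGap) {
      // Silence long enough: close off the track where the quiet began.
      tracks.push([start, i - quiet + 1]);
      start = -1;
      quiet = 0;
    }
  }
  if (start >= 0) tracks.push([start, levels.length]); // album ran out
  return tracks;
}
```

In the real pipeline the same logic would run against gstreamer level messages rather than raw samples, with minGap set to a couple of seconds so between-song quiet doesn't split a track in half.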

With multiple soundcards (or, more likely, multi-channel cards), multiple tables could be set up (say 10) and recording would go _much_ faster (tho still slow in comparison to CD ripping).

A little nice subproject. Anyone want to help?

- 02:45 AM, 18 Apr 2011

Optimizing page delivery in Web Framework
I had an 'experimental' day today (and about time too). I'm looking at optimizing page delivery. I'm using firebug and google page-speed to diagnose speed issues and have made the following optimizations:
  • Caching of dynamic server js to files
  • Moving away from imports in CSS
  • Collection, concatenation, minification and caching of javascript and CSS
  • Adding of gzip filter for dynamic content
If you're interested, read on for the details... and strategies that you might want to apply to alternate html delivery.
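The collect-and-concatenate optimization could be sketched like this. Purely illustrative: the AssetBundler name, version counter and stubbed minifier are assumptions of mine, not the framework's actual mechanism (the full details are in the longer post).

```javascript
// Sketch of collect/concatenate/cache: join registered js or css sources
// once, and rebuild only when a source changes. Minification is stubbed.
// All names are hypothetical, not benow framework API.
function AssetBundler(minify) {
  this.minify = minify || function (s) { return s; };
  this.sources = [];
  this.version = 0;     // bumped whenever a source is registered
  this.builtAt = -1;    // version the cached bundle was built against
  this.cache = null;
}
AssetBundler.prototype.add = function (text) {
  this.sources.push(text);
  this.version++;
};
AssetBundler.prototype.bundle = function () {
  if (this.builtAt !== this.version) {   // rebuild only when stale
    this.cache = this.minify(this.sources.join('\n'));
    this.builtAt = this.version;
  }
  return this.cache;
};
```

One bundled request instead of a dozen small ones is where most of the page-speed win comes from, with gzip on top of the concatenated result.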

- 01:01 AM, 18 Apr 2011

Autocomplete Demo
There's now a working autocomplete demo. It gives a nice idea of the basic features of the autocompletion library within the web project.

- 02:35 PM, 04 Apr 2011

Mass code checkin.
I've checked in and built many of the projects (primarily repository, web, java and RipMaster). There have been many changes, including
  • many ripper fixes and enhancements (barcode, statistics, etc)
  • New AutoComplete code in web. It's much better and took me two full-time weeks to do
  • A move to Object Oriented JavaScript in web. Old functions still exist but are deprecated, and a deprecation notice will be dumped into the firebug console.
  • Postgres and MySQL support into Repository. Improved Derby support and better schema migration.
  • ... and many other small fixes

- 11:47 PM, 03 Apr 2011

Web javascript libraries undergoing overhaul
I'm lazily refactoring many of the javascript libraries within the web framework. The three main javascript libraries are
  • /js/org.benow.util.Util.js: miscellaneous frequently used utilities. Included in any dynamic page (*.page)
  • /js/org.benow.util.DOM.js: utilities associated with DOM objects (Elements, Documents, etc)
  • /js/org.benow.util.Request.js: ajax request/response handler used for calling services
The util functions are being moved into a Util object and as prototypes. For example, util_log(msg), which logs to the firebug console if present, is now Util.log(msg). The scoping avoids function name collision. Some common methods are prototyped. For example, addClass(elem,aClass), which adds a class to the class attribute of an element, is now [Element].addClass(aClass) and can be called directly: elem.addClass(aClass).
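The scoping move might look something like this in miniature. A sketch only: the exact benow internals differ, and the return value here is added just so the shim is easy to exercise.

```javascript
// Sketch of the util_log -> Util.log move. The old global is kept as a
// deprecated shim so existing pages don't break; benow internals differ.
var Util = {
  log: function (msg) {
    // Log to the firebug/browser console when one is present.
    if (typeof console !== 'undefined' && console.log) console.log(msg);
    return msg;   // returned purely to make the shim observable here
  }
};
// Deprecated global kept for backwards compatibility; just delegates.
function util_log(msg) {
  return Util.log(msg);
}
```

Any number of libraries can now define their own log without clobbering each other, which is the whole point of the Util namespace.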

Element and document prototyping is taken further in org.benow.util.DOM.js. The getByPath(node,'/some/xsl/@path') has been prototyped onto the node, as [Element|Document].getByPath(path). This makes the code much cleaner: node.getByPath('/some/xsl/@path'). The DOM utils are now even better for easily working with in-memory DOM objects. Another nice change is the migration of DOM_toString(doc), which now returns an XML string via doc.toString().

The refactorings also work quite well with the Eclipse javascript plugin, which I'm really starting to like. I'm moving towards using a javascript documenter (like javadoc for javascript) in the build process, so that javascript libraries can be easily discovered and reused by developers. Hopefully I'll be able to move forward with plans to create dev jars, which contain packaged documentation and dynamically plug into the web development functionality.

I've also modified the javascript generator for services. This is the thing that generates javascript for ajax calls of java service methods. The generated javascript is now scoped. For example, a call to org.benow.ripper.RipperService.getAlbum(String):Album is now done via RipperService.getAlbum(string) (instead of getAlbum(string)). This will lessen name collisions.

I've seen this already, with DiscogsService and MusicBrainzService, which both have a getRelease(string) method. Previously the hacky response was to name the methods differently, but now they can have the same name and be referenced by class: MusicBrainzService.getRelease(key) and DiscogsService.getRelease(key).
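The scoped stubs the generator emits could be sketched like this. Everything here is hypothetical: makeServiceStub and the transport callback are illustration stand-ins, and real generated stubs would issue ajax requests to the server.

```javascript
// Sketch of scoped service stubs: one namespace object per service, so
// same-named methods (like getRelease) don't collide. The transport
// callback stands in for the real ajax round trip; names are illustrative.
function makeServiceStub(serviceName, methodNames, transport) {
  var stub = {};
  methodNames.forEach(function (method) {
    stub[method] = function (arg) {
      // Real generated code would post to the server's service endpoint.
      return transport(serviceName, method, arg);
    };
  });
  return stub;
}
```

With this shape, MusicBrainzService.getRelease(key) and DiscogsService.getRelease(key) coexist happily because each lives on its own stub object.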

The javascript transition is being done gradually. All replaced functions still exist, but are marked via Util.deprecated(reason). There is still much more that could be done with objectifying javascript libraries, and there will be more changes in time.

- 03:16 PM, 30 Mar 2011

Deprecating javascript functions
With my recent experience with object oriented javascript during the autocomplete refactor, I've noticed some older javascript library functions which should be in prototypes. To make the changes without breaking existing code, I've kept the method declarations and had them call the new prototyped functions. I now just needed a way to nicely deprecate the functions. Here it is, using firebug and stack traces:
    var _shownDeps = new Array();
    function deprecated(because) {
      if (typeof console != 'undefined' && console.warn) {
        var sig = '';
        var from = '';
        try {
          i.dont.exist += 0; // cause an exception
        } catch (e) {
          var lines = e.stack.split('\n');
          sig = lines[1].split(')')[0] + ')';
          from = lines[2].split(')')[1].substring(1);
        }
        if (!_shownDeps[from]) {
          console.warn('DEPRECATED: ' + sig + '\ncalled from: ' + from + '\n' + because);
          _shownDeps[from] = 'true';
        }
      }
    }
A simple call to deprecated('reason for deprecation') causes a nice display in the firebug log. Due to the shown tracking, deprecation messages are only shown once for each calling line.
Having the call location shown (and only shown once) is very useful for quickly fixing the deprecated call. Note that this only works for firefox due to the firebug console and stack mechanism. It will require rework for other browsers. Thanks to Eric Wendelin for the stack code.

... and, yes, the deprecated function should be deprecated as it's not been prototyped ;)

- 04:24 PM, 29 Mar 2011
