Friday, March 28, 2014

A simple alternative to C++ exceptions

I’ve been doing a lot of C++11 research and coding lately, and I’m still upset with C++ exceptions. On paper they seem to be a reasonably good alternative to simply returning a boolean, but as you try to stuff them into your code, you begin to feel like you’re handling a stressed hedgehog. And it’s very frustrating, since you have to stop thinking about your code – what really matters – to think about the implications the exception paths have upon the code itself.

Usually my needs are really simple: I just need an extended way to report a success or a failure from a function. I never wrote something that really needed to tackle different kinds of errors: it’s just success, or failure with a reason, which usually ends up in an alert message box.

After giving up on exceptions, my first move to solve this problem was to use an extra string pointer as the last parameter, to hold an eventual error message. It goes like this:
bool Func(int p1, int p2, wstring *pErr=nullptr) {
	if(false) {
		if(pErr) *pErr = L"Error message.";
		return false;
	}
	if(pErr) *pErr = L""; // all good
	return true;
}

bool Second(float nn, wstring *pErr=nullptr) {
	if(!Func(42, 1337, pErr)) { // will return error message from inner function
		return false;
	} else if(false) {
		if(pErr) *pErr = L"Another error.";
		return false;
	}
	if(pErr) *pErr = L""; // all good
	return true;
}

{
	wstring err;
	if(!Func(42, 1337, &err))
		scream(err.c_str());
}
There is no problem with this approach, and I used it for quite some time. It is clear, and one would have no doubts about what is going on. It is, however, a bit cumbersome to write all those pErr checks. So at some point I started considering something else.

My second approach was to use a standard pair as the return type to the function. Something like this:
pair<bool, wstring> Func() {
	if(false)
		return make_pair(false, L"Error description message.");
	return make_pair(true, L"");
}

{
	pair<bool, wstring> ret = Func();
	if(!ret.first)
		scream(ret.second.c_str());
}
I never really used this. It doesn’t look clear enough, and having to declare a pair variable just to hold the return value doesn’t please me.

Then I started thinking about writing a “bool on steroids” class, specifically overloading operator bool, so there would be no need to declare a variable just to hold the result value of the invoked function. This is the class:
#include <string>

class Failed {
public:
	Failed(bool allGood)                   : _hasFailed(!allGood) { }
	explicit Failed(std::wstring reason)   : _hasFailed(true), _reason(reason) { }
	explicit Failed(const wchar_t *reason) : _hasFailed(true), _reason(reason) { }
	virtual ~Failed()                      { }
	operator bool() const         { return _hasFailed; }
	const wchar_t* reason() const { return _reason.c_str(); }
private:
	bool _hasFailed;
	std::wstring _reason;
};
And this is the usage:
Failed Func() {
	if(false)
		return Failed(L"This function failed.");
	return true;
}

Failed Second() {
	if(Failed f = Func()) { // true if Func() failed
		return f; // return from inner function
	} else if(false) {
		return Failed(L"Another error message.");
	}
	return true;
}

{
	if(Failed f = Second())
		scream(f.reason());
}
This approach looks a lot more elegant to me. It’s clean, it allows chaining of error messages, and it even allows inheriting from the Failed class to add more specific error data – although I never really needed this.
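
Just to illustrate that last point, here is a rough sketch of such a subclass – the FailedWin32 name and the numeric error code are hypothetical, not something I actually use:
// Hypothetical subclass (assumes the Failed class above is in scope),
// carrying an extra numeric error code on top of the message.
class FailedWin32 : public Failed {
public:
	FailedWin32(const wchar_t *reason, unsigned long errCode)
		: Failed(reason), _errCode(errCode) { }
	unsigned long errCode() const { return _errCode; }
private:
	unsigned long _errCode;
};
Note that a function would have to return FailedWin32 by value for the extra data to survive; returning it as a plain Failed would slice the error code away.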

The only thing that makes me uncomfortable with this class is the _hasFailed member, which holds a boolean with the reversed meaning of the object: if there’s a success, it’s false; if there’s a failure, it’s true. I implemented it this way so that I could declare the object right inside an if statement, with no need for a separate variable just to hold the result, as I did in the previous example when I returned a standard pair. It also explains why I chose the name “Failed” instead of something like “BigBool”: to make explicit the idea of a failure being handled by the if.

Other than that, I consider it a neat approach, and that’s the best solution I could come up with so far.

Saturday, February 15, 2014

Queen 2011 remasters: horrible

As an audiophile, I care a lot about my music collection. Some albums have multiple pressings, with new remastered versions of the same old material. These remasters rarely surpass the originals – they tend to be overcompressed due to the infamous loudness war – but I always seek them out, hoping for a miracle.

Right now I’m listening to the Queen 2011 remasters, which all sound really bad to my ears. I’ve read somewhere that Bob Ludwig was responsible for this work, but it’s hard to believe that such a great professional has done such a shitty job. Some songs – particularly the heavier ones – have so much compression that the distortion is openly audible. The Jazz album is the worst one so far, with burnt peaks on all tracks, a mess of noise.

The song Sheer Heart Attack, from the News of the World album, is a solid block of deafening noise that gave me a headache before the second minute ended, and I had to skip it. The same goes for the heavy parts of Brighton Rock and Stone Cold Crazy, from the Sheer Heart Attack album. Basically, the songs are ruined, sounding really harsh, overcompressed at every single opportunity.

Label managers of today are not only stupid, they are deaf too. When I see things like this, I can only hope that digital music distribution kills this industry as soon as possible, as is already happening. With home studios and the internet, we don’t need their greed anymore.

Wednesday, January 29, 2014

JavaScript anonymous closures

Today I was going to write about my fruitful experiments in JavaScript using anonymous closures, scope and namespacing in general. But I stumbled across two articles that are so good that I just felt the need to publish the links to both, because they explain everything I was about to write in great detail.
Despite being more than three years old as of today, both articles are still current and relevant to JavaScript development. Highly recommended reading.

Thursday, January 9, 2014

Logitech, please bring back the wired Trackman Wheel

About 2 years ago I was suffering from some wrist pain, due to countless hours of computer work with an ordinary mouse. I started searching for some device to relieve the pain, and after testing several gadgets, I came across the Logitech Trackman Wheel. It’s a trackball whose ball is moved with the thumb, leaving the whole hand resting on the desk. And it’s absolutely great: precise, comfortable, I could use that thing for hours without any sign of fatigue.

This week, however, the left click started acting numb and unresponsive, and I found a video teaching a way that could possibly fix it. After three days, the problem seemed to be unfixable, so I decided to buy another unit of this wonderful thing. But to my surprise, I found out that Logitech discontinued the Trackman Wheel in favor of a wireless version called M570, which, like any laggy wireless device, relies on batteries. That means you have to figure out when they start dying and buy replacements in time, unless you want them dying in the middle of an online game, in which case you’d be screwed. Oh, and it also has an annoying on/off switch, so that’s another thing you have to remember to do. The Trackman is supposed to stay fixed on the desk, so why the hell would you need a wireless version of it?

I’m not the only one who wants the wired Trackman Wheel back – there’s even a group on Facebook with people discussing this. Unfortunately, Logitech didn’t reply to the user complaints, and so far there’s no hope of seeing a wired version of our beloved Trackman Wheel again.

I could not find any unit to buy, not even on eBay. For now, I’m desperately trying to find a way to fix my good old Trackman Wheel, to use it for as long as I can. And to Logitech, for blindly jumping on this wireless bandwagon, I just want to leave a sincere fuck you, Logitech.

Update, Jan 11:
This morning I found a skillful electrician who managed to swap the bad left click component with the middle click one, which was working fine. Now I don’t have a middle click anymore, but at least my beloved Trackman Wheel is usable again.

Tuesday, December 17, 2013

Creating project templates for Visual Studio 2013

There are a couple of project settings that I set for every native C++ Win32 project I start – basically, more aggressive optimizations for the release build.

Right now, messing around with my brand new Visual Studio 2013 – which I’m enjoying so far – I discovered how to create a template project with all the settings I need, including subfolders and files. I found this article which explains the process for an ASP.NET MVC project, but the steps are the same for a C++ one.

The templates are exported as ordinary ZIP files, and importing existing ones is trivial: you just need to copy the ZIP file into the base directory. On my computer, this is the full path for the custom user templates:
C:\Users\Rodrigo\Documents\Visual Studio 2013\Templates\ProjectTemplates
In addition to the project settings, I also put in the template a subdirectory called “src” and my very basic main.cpp file, which contains my wWinMain entry point. It seems that I’ll never have to write it again.
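
Just to give an idea, a bare-bones Unicode wWinMain skeleton looks something like this – not necessarily the exact contents of my file:
#include <Windows.h>

// Minimal Win32 entry point; a real program would register a window
// class and run a message loop here.
int APIENTRY wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR cmdLine, int cmdShow)
{
	MessageBoxW(nullptr, L"Hello from the project template!",
		L"My app", MB_ICONINFORMATION);
	return 0;
}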

Monday, December 16, 2013

A simple C++ smart pointer class

Until I started using the move semantics of C++11, I used this smart pointer class, which is template-based, relies heavily on operator overloading, and uses an instance counter:
template<typename T> class Ptr {
public:
	Ptr()                 : _ptr(NULL), _counter(NULL) { }
	Ptr(T *ptr)           : _ptr(NULL), _counter(NULL) { operator=(ptr); }
	Ptr(const Ptr& other) : _ptr(NULL), _counter(NULL) { operator=(other); }
	~Ptr() {
		if(_counter && !--(*_counter)) {
			delete _ptr;     _ptr = NULL;
			delete _counter; _counter = NULL;
		}
	}
	Ptr& operator=(T *ptr) {
		this->~Ptr(); // release current ownership, if any
		_ptr = NULL;
		_counter = NULL;
		if(ptr) {
			_ptr = ptr; // take ownership
			_counter = new int(1); // start counter
		}
		return *this;
	}
	Ptr& operator=(const Ptr& other) {
		if(this != &other) {
			this->~Ptr();
			_ptr = other._ptr;
			_counter = other._counter;
			if(_counter) ++(*_counter);
		}
		return *this;
	}
	bool isNull() const         { return _ptr == NULL; }
	T& operator*()              { return *_ptr; }
	const T* operator->() const { return _ptr; }
	T* operator->()             { return _ptr; }
	operator T*() const         { return _ptr; }
private:
	T   *_ptr;
	int *_counter;
};
Example usage:
struct When {
	int month;
	int day;
};

Ptr<When> GetLunchTime()
{
	Ptr<When> ret = new When(); // alloc pointer to be owned
	ret->month = 12;
	ret->day = 16;
	return ret;
}

int main()
{
	Ptr<When> lunchTime = GetLunchTime();
	// ...
	// no need to delete lunchTime
	return 0;
}
It works fine, but now that I’ve adopted move semantics – which are truly great – it seems that I don’t need it anymore. So I’m publishing it here for historical reasons.

Sunday, December 15, 2013

C++11 move semantics are amazing

Right now I’m testing Visual C++ 2013, which implements some interesting innovations from the C++11 specification. I started taking a look at some of them, and move semantics instantly caught my attention, because they solve a problem I was just facing. I had implemented a smart pointer class so that I could return a string object from a function – my own String class, I don’t use STL – and it was something like this:
Ptr<String> Function() {
	Ptr<String> ret = new String();
	return ret;
}
This worked perfectly fine. What bugged me was that two more allocations are needed with this approach: the smart pointer’s internal instance counter, and the String object it points to. These two allocations are added to the internal String array allocation, so I ended up with three memory allocations for a trivial string return. That sounded like too much to my optimization paranoia.

Now, with the new C++11 move semantics, when we write the constructor and the assignment operator for a class, we can write specific code for when we receive a temporary object, one which will be destroyed right after the operation completes – meaning we can do whatever we want with this temporary object, nobody will care. After implementing move semantics on my String class, and on the underlying Array class that powers it, I could rewrite the above function like this:
String Function() {
	String ret;
	return ret;
}
My return object is allocated on the stack, not on the heap, so we have saved one allocation. And since I’m not returning a pointer – but rather returning the object by value – we don’t need the smart pointer anymore, so we saved another allocation. In the end, we shrank from three allocations down to a single one, the internal string array itself. With move semantics implemented, the internal string array just flies from one object into another, without cloning the whole array.
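
To give an idea of what this looks like, here is a minimal sketch of a move constructor and a move assignment operator on a hypothetical string-like class – not my actual String class, just the general shape:
#include <cstddef>

class MyString {
public:
	MyString() : _data(nullptr), _len(0) { }
	~MyString() { delete[] _data; }

	// Move constructor: steal the internal array from the temporary object.
	MyString(MyString&& other) : _data(other._data), _len(other._len) {
		other._data = nullptr; // leave the temporary in a safe, empty state
		other._len = 0;
	}

	// Move assignment: release our own array, then steal the other one.
	MyString& operator=(MyString&& other) {
		if(this != &other) {
			delete[] _data;
			_data = other._data;
			_len = other._len;
			other._data = nullptr;
			other._len = 0;
		}
		return *this;
	}

private:
	wchar_t     *_data;
	std::size_t  _len;
};
With these two members in place, returning a MyString by value from a function just moves the internal pointer around, instead of cloning the whole buffer.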

Right now I feel I won’t even need my smart pointer class anymore, as I’ll refactor all my classes to take advantage of the move semantics: the implications are huge. This will result in a general optimization, saving several memory allocations all around. And that’s amazing. All right, call me a C++11 guy now, it has just got me, I’ve been converted.

Saturday, November 2, 2013

Sound Forge 11: no FX chain apply?

I’m a long-time Sound Forge user, since the golden pre-Sony Sonic Foundry days, when the horrendous .NET Framework, a toy for script kiddies, was not needed. I’ve been following the version upgrades until today, when I gladly installed version 11. One of my most used features – applying the FX chain right away – simply disappeared. I’ve been searching around, and I found that I’m not the only one complaining about this.

There’s an alternative way to apply the FX chain through the FX Favorites menu, but it’s cumbersome, and you cannot work on the waveform while it’s open. Ironically, the new FX chain window is one of the “new” things Sony is marketing. Well, congratulations for messing it up, Sony. I’ll pass.

I rolled back to Sound Forge 10, and I’m keeping it. Let’s see if a future Sound Forge 12 will bring that facility back.

Saturday, August 31, 2013

Plotting icons in OpenStreetMap with GPS coordinates

I’ve been working on a project with GPS data, where a map plotting feature would be handy. At first I thought about using Google Maps, but it’s always better to go open source, and since I’m an enthusiast of OpenStreetMap, I started searching for a way to do it, and I found the OpenLayers project, which is insanely cool.

My goal: plot some small images on the map (icons), and when the cursor goes over an image, a popup should appear. On my way to making it happen, I had to develop some interesting things, which I now share with everyone, in the hope that they can be useful.

The first difficulty I found was making OpenLayers understand GPS coordinates without much fuss. To do so, you must perform a coordinate transformation, which is not exactly trivial. Below is a function that makes this conversion trivial, yes, so that you’ll never need to worry about it again. The map argument is the map object you’re working on, and lat and lon are floating-point numbers.
function Geo(map, lat, lon) {
	return new OpenLayers.LonLat(lon, lat)
		.transform(new OpenLayers.Projection('EPSG:4326'),
			map.getProjectionObject());
}

// Example usage:
var map = new OpenLayers.Map('myMapDivId');
map.addLayer(new OpenLayers.Layer.OSM());
var coor = Geo(map, -5.773274, -35.204948);
map.setCenter(coor, 8);
Second, the image (icon) plotting. When you plot an image at a given point, the image is centered at that point. I wanted to plot an image of a pushpin, which points at the wrong location if centered – the tip of the pushpin ends up below the actual point. The image had to be shifted up, so that its bottom sits exactly over the point.

The function below creates the image and plots it with a bottom offset, if needed. The img argument is a valid image URL – the image size in pixels is read from the loaded image itself – and bottomOffset is a boolean value. The function is asynchronous and waits for the image to be loaded; use the onDone callback, which receives the new marker as its argument.
function PlotMarker(map, img, lat, lon, bottomOffset, onDone) {
	var imgObj = new Image();
	imgObj.src = img;
	imgObj.onload = function() {
		var sz = new OpenLayers.Size(imgObj.width, imgObj.height);
		var off = new OpenLayers.Pixel(-(sz.w / 2),
			bottomOffset ? -sz.h : -(sz.h / 2)); // bottom-aligned or centered
		var ico = new OpenLayers.Icon(img, sz, off);
		if(onDone !== undefined && onDone !== null)
			onDone(new OpenLayers.Marker(Geo(map, lat, lon), ico));
	};
}

// Example usage:
var map = new OpenLayers.Map('myMapDivId');
map.addLayer(new OpenLayers.Layer.OSM());
var markers = new OpenLayers.Layer.Markers('Markers');
map.addLayer(markers);
PlotMarker(map, 'pushpin.png', -5.773274, -35.204948,
	true, // it's a pushpin, I want it bottom-aligned
	function(mk) {
		markers.addMarker(mk);
	});
Below you have a fully functional web application composed of three files, which displays an OpenStreetMap map over the whole page, loads points from a separate JSON file, and renders them accordingly. There’s no server-side processing involved, but the JSON file could easily be something generated by the server, which would load the data from some sort of database. I used the jQuery library to handle the mouseover event which shows the popup over the images, but it’s not really necessary if you want to use something else.

1) index.html
<!DOCTYPE html>
<html>
<head>
	<meta charset="UTF-8"/>
	<title>Rodrigo's map</title>
	<style>
	* { -moz-box-sizing:border-box; -webkit-box-sizing:border-box; box-sizing:border-box; }
	body { margin:0; font:10pt Arial; color:#242424; }
	#mapArea { position:absolute; width:100%; height:100%; }
	</style>
	<script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.0.3.min.js"></script>
	<script src="http://openlayers.org/api/2.13.1/OpenLayers.js"></script>
	<script src="index.js"></script>
</head>
<body>
	<div id="mapArea"></div>
</body>
</html>
2) index.js
function PlotData(divId, data) {
	function Geo(map, lat, lon) {
		return new OpenLayers.LonLat(lon, lat)
			.transform(new OpenLayers.Projection('EPSG:4326'), map.getProjectionObject());
	}
	function PlotMarker(map, img, lat, lon, bottomOffset, onDone) {
		var imgObj = new Image();
		imgObj.src = img;
		imgObj.onload = function() {
			var sz = new OpenLayers.Size(imgObj.width, imgObj.height);
			var off = new OpenLayers.Pixel(-(sz.w / 2),
				bottomOffset ? -sz.h : -(sz.h / 2));
			var ico = new OpenLayers.Icon(img, sz, off);
			if(onDone !== undefined && onDone !== null)
				onDone(new OpenLayers.Marker(Geo(map, lat, lon), ico));
		};
	}

	var map = new OpenLayers.Map(divId);
	map.addLayer(new OpenLayers.Layer.OSM());
	map.setCenter(Geo(map, data.center[0], data.center[1]), data.zoom);

	var markers = new OpenLayers.Layer.Markers('Markers');
	map.addLayer(markers);

	var $map = $('#'+divId),
	$det = $('<div></div>').css({
			'position':'absolute',
			'padding':'0 4px',
			'display':'none',
			'box-shadow':'2px 2px 3px #999',
			'background-color':'rgba(255,255,255,.85)',
			'border':'1px solid #AAA'
		}).appendTo('body');
	$.each(data.points, function(i, pt) {
		PlotMarker(map, pt.icon, pt.coor[0], pt.coor[1], true, function(mk) {
			markers.addMarker(mk);
			mk.events.on({
				mouseover: function(ev) {
					$det.html(pt.text).css({
						left: (ev.pageX < $(document).width() / 2) ?
							ev.pageX+'px' : (ev.pageX - $det.outerWidth())+'px',
						top: (ev.pageY < $(document).height() / 2) ?
							ev.pageY+'px' : (ev.pageY - $det.outerHeight())+'px',
						display: 'block'
					});
					$map.css('cursor', 'pointer');
				},
				mouseout: function(ev) {
					$det.empty().css('display', 'none');
					$map.css('cursor', 'auto');
				}
			});
		});
	});
}

$(document).ready(function() {
	$.getJSON('data.json', function(data) {
		PlotData('mapArea', data);
	});
});
3) data.json – assumes image “pushpin.png” exists
{
	"center": [ -5.797942,-35.211782 ],
	"zoom": 14,
	"points": [
		{
			"coor": [ -5.799500,-35.21951 ],
			"icon": "pushpin.png",
			"text": "First point<br/>Anything here"
		},
		{
			"coor": [ -5.790982,-35.19409 ],
			"icon": "pushpin.png",
			"text": "Other <b>point</b> here"
		},
		{
			"coor": [ -5.802083,-35.20877 ],
			"icon": "pushpin.png",
			"text": "Something else<br/>placed here"
		}
	]
}
Enjoy.

Thursday, August 8, 2013

Firefox 23 is slower than Firefox 22

Last June, I was amazed by the Firefox 22 launch. It featured OdinMonkey, a new optimization module for the JavaScript engine, and particularly on Linux the browser became really fast and smooth. The upgrade was instantly noticeable with Firebug, which became a lot more responsive.

Yesterday, however, Mozilla rolled out Firefox 23 – with a horrible new logo, which lacks contrast and looks blurry at small sizes. But mainly, to my dismay, right after the upgrade everything became slower. It felt like last year’s versions: sluggish performance, essentially a pain to use. With Firebug, this is felt very strongly.

I forgot to back up my profile, but luckily the profile structure was not modified, so I was able to downgrade: I downloaded Firefox 22 again and removed this horrible Firefox 23, in the hope that they fix it in the next version.

Update, Sep. 19:
I’ve just tested Firefox 24, and it seems to be even slower than 23. So, I’m still keeping Firefox 22.

Update, Nov. 7:
Apparently the slowness is fixed on Firefox 25, which I’m testing right now. Finally.

Tuesday, July 30, 2013

PHP and JavaScript internationalization

These days I’ve been working on a PHP/JavaScript project which needed to be translated from English into other languages. I did a search, and the results were pretty disappointing: all the solutions I came across were either too complicated, too heavy or too messy. I wanted something cleaner, which could be easily integrated into any project, so I ended up writing my own.

Basically, all you have to do is add a subdirectory called “i18n” (or any other name you want) and copy the “i18n.php” file there, then create a file for each language you need. One of those language files will be your default – probably the English one – where the strings will also serve as the mapping keys. The other language files will simply have all the lines translated, one by one.

Once the “i18n.php” file is included in your PHP script, you set up the translation by calling i18n_set_map(), choosing the source (default) language file and the target language. And that’s it: every string passed to the I() function will be translated, both in PHP and in JavaScript.

There is a comprehensive example on the GitHub repository, where I published everything under the MIT license, in the hope that it can be useful to someone else. The repository is at github.com/rodrigocfd/php-js-i18n.

Saturday, July 20, 2013

Sibelius 7 sucks

As a musician with classical training, I love sheet music. It is not only beautiful graphic art, but also a great repository of musical knowledge, always ready to have music pulled out of it. And music writing is universal: sheet music written in China is the same as sheet music written in Canada.

Obviously, when it comes to computers, I was always interested in sheet music editors. My first editor was Encore, a very simple one. But soon I moved to a more powerful editor: Sibelius 2. And since then, I’ve been a faithful Sibelius user. Well, up to version 6.

Sibelius Software was acquired by Avid, known for its sluggish Pro Tools. Unfortunately, Sibelius 7 suffered a huge shift in development direction, jumping on the ribbon bandwagon – the single worst thing ever invented in the history of computer user interfaces – and bringing up a very confusing and keyboard-unfriendly screen. Now all the actions and option dialogs – previously organized into regular menus, easy to reach with keyboard shortcuts alone – are spread among the ribbon tabs, with distracting and childish icons, wasting precious screen space and requiring many, many additional mouse clicks while you search for something in that mess. Oh, and there’s still a File ribbon tab, with even more tabs on the left and options scattered like mucus from a strong sneeze!

But Sibelius 7 didn’t change only the interface – it also brought improvements, right? Wrong. There is only one improvement, related to text flowing inside text boxes. Everything else is insignificant for music writing. I felt really embarrassed watching the what’s-new videos from Sibelius Software, with nothing new to show besides that horrible user interface. Don’t they have any critical sense, at least?

So it seems that I’m stuck with Sibelius 6, which is a bit slow, but it’s usable. Well, until some other company comes out with something better. And without a ribbon.