
Monday, February 23, 2009

Open Source Software, Self Service Software


Have you ever used those self-service checkout machines at a grocery store or supermarket?


[image: self-service checkout]


What fascinates me about self-service checkout devices is that the store is making you do work they would normally pay their employees to do. Think about this for a minute. You're playing the roles of both paying customer and cashier. Under the watchful eyes of security cameras and at least one human monitor, naturally, but still. We continue to check ourselves out. Not only willingly, but enthusiastically. For that one brief moment, we're working for the supermarket at the lowest possible pay scale: none.


That's the paradox of self-checkout. But to me it's no riddle at all: nobody else in that store cares about getting Jeff Atwood checked out nearly as much as Jeff Atwood does. I always choose self-service checkout, except in extraordinary cases. The people with the most vested interest in the outcome of the checkout process are the very same people that self-checkout puts in charge: me! How could it not work? It's the perfect alignment of self-interest.


I don't mean this as a dig against supermarket employees. They're (usually) competent and friendly enough. I should know; I worked my way through high school and part of college as a Safeway checker. I tried my level best to be good at my job, and move customers through my line as quickly as possible. I'm sure I could check someone out faster than they could do it themselves. But there's only one me, and at most a half-dozen other checkers working the store, compared to the multitudes of customers. It doesn't scale.


If you combine the self-interest angle and the scaling issue, self-service checkout seems obvious, a win for everyone. But self-service is not without issues of its own:



  • What if the item you're scanning isn't found, or can't be scanned?
  • Some of the self-service machines have fairly elaborate, non-obvious rules in place to prevent fraud and theft, and the user interface can be less than ideal.
  • How do you handle coupons? Loyalty cards? Buying 20 of the same item? Scanning the wrong item?
  • The self-service stations are lightly manned. The ratio of employee monitors to self-checkout machines runs about 1:4 in my experience. If you have a problem, you might end up waiting longer than you would in a traditional manned checkout line.
  • How do you ring up items like fruits and vegetables, which don't have UPC codes and have to be weighed?
  • What about unusual, awkwardly shaped, or oversize items?
  • Customers who have trouble during self-checkout may feel they're stupid, or that they did something wrong. Guess where they're going to lay the blame for those feelings?


There are certain rituals to using the self-service checkout machines, and we programmers fundamentally grok the hoops they make customers jump through. They are, after all, devices designed by our fellow programmers. Every item has to be scanned, then carefully and individually placed in the bagging area, which doubles as a scale to verify the item was moved there. One at a time. In strict sequence. Repeated exactly the same way every time. We live this system every day; it's completely natural for a programmer. But it isn't natural for average people. I've seen plenty of customers in front of me struggle with self-service checkout machines, puzzled by the workings of this mysterious device that seems so painfully obvious to a programmer. I get frustrated to the point that I almost want to rush over and help them myself. Which would defeat the purpose of a... self-service device.


I was thinking about this while reading Michael Meeks' article, Measuring the true success of OpenOffice.org. He reaches some depressing conclusions about the current state of OpenOffice, a high-profile open source competitor to Microsoft Office:



Crude as they are, the statistics show a picture of slow disengagement by Sun, combined with a spectacular lack of growth in the developer community. In a healthy project we would expect to see a large number of volunteer developers involved, in addition - we would expect to see a large number of peer companies contributing to the common code pool; we do not see this in OpenOffice.org. Indeed, quite the opposite. We appear to have the lowest number of active developers on OO.o since records began: 24, this contrasts negatively with Linux's recent low of 160+. Even spun in the most positive way, OpenOffice.org is at best stagnating from a development perspective.


This is troubling, because open source software development is the ultimate self-service industry. As Michael notes, the project is sadly undermining itself:



Kill the ossified, paralysed and gerrymandered political system in OpenOffice.org. Instead put the developers (all of them), and those actively contributing, into the driving seat. This in turn should help to kill the many horribly demotivating and dysfunctional process steps currently used to stop code from getting included, and should help to attract volunteers. Once they are attracted and active, listen to them without patronizing.


Indeed, once you destroy the twin intrinsic motivators of self-determination and autonomy on an open source project, I'd argue you're no better off than you were with traditional closed source software. You've created a self-service checkout machine so painful to use, so awkward to operate, that it gives the self-service concept a bad name. And that's heartbreaking, because self-service is the soul of open source:



Why is my bug not fixed? Why is the UI still so unpleasant? Why is performance still poor? Why does it consume more memory than necessary? Why is it getting slower to start? Why? Why? The answer lies with developers: Will you help us make OpenOffice.org better?


In order for open source software projects to survive, they must ensure that they present as few barriers to self-service software development as possible. And any barriers they do present must be very low -- radically low. Asking your customers to learn C++ programming to improve their OpenOffice experience is a pretty far cry indeed from asking them to operate a scanner and touchscreen to improve their checkout experience. And if you can't convince an audience of programmers, who are inclined to understand and love this stuff, who exactly are you expecting to convince?


So, if you're having difficulty getting software developers to participate in your open source project, I'd say the community isn't failing your project. Your project is failing the community.









How to fix underexposed images

This technique will show you how to fix an underexposed photo. I have fixed pictures that were underexposed by as much as two stops using this technique. Many people are unaware of how easy it is to correct underexposed pictures with Photoshop.

Step One:
Drag your layer onto the Create a New Layer icon in the Layers palette. Change the blend mode from Normal to Screen.


[image: Duplicate layer and select Screen]

St ...







The Sad Tragedy of Micro-Optimization Theater


I'll just come right out and say it: I love strings. As far as I'm concerned, there isn't a problem that I can't solve with a string and perhaps a regular expression or two. But maybe that's just my lack of math skills talking.


In all seriousness, though, the type of programming we do on Stack Overflow is intimately tied to strings. We're constantly building them, merging them, processing them, or dumping them out to an HTTP stream. Sometimes I even give them relaxing massages. Now, if you've worked with strings at all, you know that this is code you desperately want to avoid writing:



static string Shlemiel()
{
    string result = "";
    for (int i = 0; i < 314159; i++)
    {
        result += getStringData(i);
    }
    return result;
}


In most garbage collected languages, strings are immutable: when you add two strings, the contents of both are copied. As you keep adding to result in this loop, more and more memory is allocated each time. This leads directly to awful quadratic O(n²) performance, or as Joel likes to call it, Shlemiel the painter performance.
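For contrast, here is a minimal sketch of the linear-time alternative, reusing the hypothetical getStringData helper from above:


static string NotShlemiel()
{
    // A StringBuilder grows its internal buffer geometrically instead of
    // copying the entire accumulated string on every append, so the total
    // work is O(n) rather than O(n^2).
    var sb = new StringBuilder();
    for (int i = 0; i < 314159; i++)
    {
        sb.Append(getStringData(i));
    }
    return sb.ToString();
}


This is exactly the StringBuilder pattern that shows up as method 5 below.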



Who is Shlemiel? He's the guy in this joke:


Shlemiel gets a job as a street painter, painting the dotted lines down the middle of the road. On the first day he takes a can of paint out to the road and finishes 300 yards of the road. "That's pretty good!" says his boss, "you're a fast worker!" and pays him a kopeck.


The next day Shlemiel only gets 150 yards done. "Well, that's not nearly as good as yesterday, but you're still a fast worker. 150 yards is respectable," and pays him a kopeck.


The next day Shlemiel paints 30 yards of the road. "Only 30!" shouts his boss. "That's unacceptable! On the first day you did ten times that much work! What's going on?"


"I can't help it," says Shlemiel. "Every day I get farther and farther away from the paint can!"



This is a softball question. You all knew that. Every decent programmer knows that string concatenation, while fine in small doses, is deadly poison in loops.


But what if you're doing nothing but small bits of string concatenation, dozens to hundreds of times -- as in most web apps? Then you might develop a nagging doubt, as I did, that lots of little Shlemiels could possibly be as bad as one giant Shlemiel.


Let's say we wanted to build this HTML fragment:



<div class="user-action-time">stuff</div>
<div class="user-gravatar32">stuff</div>
<div class="user-details">stuff<br/>stuff</div>


Which might appear on a given Stack Overflow page anywhere from one to sixty times. And we're serving up hundreds of thousands of these pages per day.


Not so clear-cut, now, is it?


So, which of these methods of forming the above string do you think is fastest over a hundred thousand iterations?


1: Simple Concatenation



string s =
@"<div class=""user-action-time"">" + st() + st() + @"</div>
<div class=""user-gravatar32"">" + st() + @"</div>
<div class=""user-details"">" + st() + "<br/>" + st() + "</div>";
return s;


2: String.Format



string s =
@"<div class=""user-action-time"">{0}{1}</div>
<div class=""user-gravatar32"">{2}</div>
<div class=""user-details"">{3}<br/>{4}</div>";
return String.Format(s, st(), st(), st(), st(), st());


3: string.Concat



string s =
string.Concat(@"<div class=""user-action-time"">", st(), st(),
@"</div><div class=""user-gravatar32"">", st(),
@"</div><div class=""user-details"">", st(), "<br/>",
st(), "</div>");
return s;


4: String.Replace


string s =
@"<div class=""user-action-time"">{s1}{s2}</div>
<div class=""user-gravatar32"">{s3}</div>
<div class=""user-details"">{s4}<br/>{s5}</div>";
s = s.Replace("{s1}", st()).Replace("{s2}", st()).
Replace("{s3}", st()).Replace("{s4}", st()).
Replace("{s5}", st());
return s;


5: StringBuilder



var sb = new StringBuilder(256);
sb.Append(@"<div class=""user-action-time"">");
sb.Append(st());
sb.Append(st());
sb.Append(@"</div><div class=""user-gravatar32"">");
sb.Append(st());
sb.Append(@"</div><div class=""user-details"">");
sb.Append(st());
sb.Append("<br/>");
sb.Append(st());
sb.Append("</div>");
return sb.ToString();


Take your itchy little trigger finger off that compile key and think about this for a minute. Which one of these methods will be faster?


Got an answer? Great!


And... drumroll please... the correct answer:

It. Just. Doesn't. Matter!



We already know none of these operations will be performed in a loop, so we can rule out brutally poor performance characteristics of naive string concatenation. All that's left is micro-optimization, and the minute you begin worrying about tiny little optimizations, you've already gone down the wrong path.


Oh, you don't believe me? Sadly, I didn't believe it myself, which is why I got drawn into this in the first place. Here are my results -- for 100,000 iterations, on a 3.5 GHz Core 2 Duo.








1: Simple Concatenation    606 ms
2: String.Format           665 ms
3: string.Concat           587 ms
4: String.Replace          979 ms
5: StringBuilder           588 ms


Even if we went from the worst performing technique to the best one, we would have saved a lousy 392 milliseconds over a hundred thousand iterations. Not the sort of thing that I'd throw a victory party over. I guess I figured out that using .Replace is best avoided, but even that has some readability benefits that might outweigh its minuscule cost.
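If you want to reproduce numbers like these yourself, a bare-bones harness along the following lines will do. Note that st() is a stand-in here -- its definition never appears above -- so any function returning a short string works, and absolute timings will vary by machine:


using System;
using System.Diagnostics;

class ConcatBenchmark
{
    // Stand-in for the st() helper used in the examples above;
    // any short string-returning function will do.
    static string st() { return "0123456789"; }

    static void Main()
    {
        const int iterations = 100000;
        long totalChars = 0; // consume the result so the work can't be skipped

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            string s = string.Concat(@"<div class=""user-action-time"">", st(), st(),
                @"</div><div class=""user-gravatar32"">", st(),
                @"</div><div class=""user-details"">", st(), "<br/>",
                st(), "</div>");
            totalChars += s.Length;
        }
        sw.Stop();

        Console.WriteLine("string.Concat: {0} ms ({1} chars built)",
            sw.ElapsedMilliseconds, totalChars);
    }
}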


Now, you might very well ask which of these techniques has the lowest memory usage, as Rico Mariani did. I didn't get a chance to run these against CLRProfiler to see if there was a clear winner in that regard. It's a valid point, but I doubt the results would change much. In my experience, techniques that abuse memory also tend to take a lot of clock time. Memory allocations are fast on modern PCs, but they're far from free.
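If you're curious but don't want to set up CLRProfiler, a crude sketch like this gives a ballpark figure. GC.GetTotalMemory is coarse, so treat the result as an estimate, not a measurement:


// Force a full collection first so we start from a clean baseline.
long before = GC.GetTotalMemory(true);

for (int i = 0; i < 100000; i++)
{
    // ... build the HTML fragment with the technique under test ...
}

// Pass false here so garbage still awaiting collection is counted;
// anything already collected mid-loop is lost, making this a rough
// lower bound on total allocations.
long after = GC.GetTotalMemory(false);
Console.WriteLine("approximate bytes allocated: {0}", after - before);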


Opinions vary on just how many strings you have to concatenate before you should start worrying about performance. The general consensus is around 10. But you'll also read crazy stuff, like this:



Don't use += concatenating ever. Too many changes are taking place behind the scene, which aren't obvious from my code in the first place. I advise you to use String.Concat() explicitly with any overload (2 strings, 3 strings, string array). This will clearly show what your code does without any surprises, while allowing yourself to keep a check on the efficiency.


Never? Ever? Never ever ever? Not even once? Not even if it doesn't matter? Any time you see "don't ever do X", alarm bells should be going off. Like they hopefully are right now.


Yes, you should avoid the obvious beginner mistakes of string concatenation, the stuff every programmer learns their first year on the job. But after that, you should be more worried about the maintainability and readability of your code than its performance. And that is perhaps the most tragic thing about letting yourself get sucked into micro-optimization theater -- it distracts you from your real goal: writing better code.









Free Adobe Lightroom Presets

I have decided to share the set of black and white presets that I use in Adobe Lightroom. The set has 3 black and white presets and 4 split tone presets for each black and white preset, making a total of 15 presets, which I am giving away for free.

The beauty of using presets is that they are really easy to use and results are instant.

To install the presets, switch to the Develop module in Adobe Lightroom, right-click on the Presets pane, and select Import.
< ...







IE ACCESS DENIED

I have set up a scheduled task on Windows Server 2003, which is giving me the following error:

C:\WINDOWS\system32>cd c:\Program Files\Internet Explorer
Access is denied.

C:\WINDOWS\system32>iexplore.exe
'iexplore.exe' is not recognized as an internal or external command,
operable program or batch file.


How To Test if your ISP is manipulating BitTorrent traffic (throttling P2P traffic)

Does your ISP throttle P2P traffic? Many ISPs around the world do, but of course not all of them.
ISPs might be throttling P2P traffic without letting their users know about these practices. There are not many good tools available to test your ISP's behavior and find out whether it is throttling certain kinds of traffic. Google along [...]


Related posts:
  1. How To Bypass Your ISP And/or Your Country Restrictions



Default gateway problem

Hey, I am new to this forum and I was wondering if you guys could help?

I think my Default Gateway has a problem. I can't click through some Google links, and MSN won't sign in. I read somewhere it's due to a faulty Microsoft registry entry, and I am scared.

Is there any way I can fix this?


How To Bypass Your ISP And/or Your Country Restrictions

“Site is blocked”, “the administrator has blocked this site”, “Error 403 Forbidden Unauthorized”. These are just examples of the frustrating error messages you get when trying to access a website that has been blocked. Maybe you're at work and the administrator is blocking certain sites, or maybe your ISP [...]


Related posts:
  1. How to Access Blocked Websites
  2. How To Test if your ISP is manipulating BitTorrent traffic (throttling P2P traffic)
  3. Making Vista Faster a Complete How To Guide Part One



A Visit With Alan Kay


Alan Kay is one of my computing heroes. All this stuff we do every day as programmers? Kay had a hand in inventing a huge swath of it:



Computer scientist Kay was the leader of the group that invented object-oriented programming, the graphical user interface, 3D computer graphics, and ARPANET, the predecessor of the Internet


So as you might imagine, I was pretty thrilled to see he was dabbling a little in Stack Overflow. It's difficult to fathom the participation of a legend like Alan in a site for regular programmers. Maybe we should add a Turing Award badge. At least people can't complain that it is unobtainable.


Jeff Moser, an avid Stack Overflow user with an outstanding blog of his own, had the opportunity to meet Alan recently and ask him about it. Jeff gave me permission to reprint his field report here.



Since I knew I'd be seeing Alan Kay at Rebooting Computing, I decided to verify his Stack Overflow usage in person. According to Alan, he found the original question using an automated search alert just like Atwood had guessed.


We then proceeded to discuss how it's sad that identity is still hard online. For example, it's hard to prove if I'm telling the truth here. As for that, the best I can offer is to look at my picture on my blog and compare with this picture from the Summit:


[image: moser-kay.jpg]


(Alan is on my right)


Alan is a great person to talk to because of his huge experience in the computing field.


He's currently working at the Viewpoints Research Institute, where they're doing some classic PARC-style research: trying to do for software what Moore's Law did for hardware. A decent explanation by Alan Kay himself is available here (wmv). For specifics, you might want to check out the recent PhD thesis of Alessandro Warth, one of Alan's students.


One of the greatest lessons I've personally learned from Alan is just how important computing history is in order to understand the context of inventions. One of Alan's greatest heroes is J.C.R. Licklider (a.k.a. "Lick"). Our discussions a few months ago led me to read "The Dream Machine" and write a post about it.


A consequence of studying history well is that you'll notice that a ton of the really cool and interesting stuff was developed in the ARPA->PARC days and it's slowed down since. I'd assume that's why he's curious about anything post-PARC's peak days (e.g. 1980+).


I'd say that Alan firmly believes that the "Computer Revolution Hasn't Happened Yet" (still) even though he's been talking about it for decades.


For example:

[embedded video]
Speculating from discussions, I'd say that the problem he sees is that computers should help us become better thinkers rather than "distracting/entertaining ourselves to death." Alan likes to use the example that our "pop culture" is more concerned with "air guitar" and "Guitar Hero" than with appreciating the genuine beauty and expressiveness of real instruments (even though they take a bit longer to master). Check out 1:03:40 of this video from Program for the Future. In effect, we're selling our potential short.


I think that's my biggest take away from Alan about computing: computers can do so much more than we're using them for now (e.g. provide "a teacher for every learner").


Hope this helps provide some context.



Indeed it does, Jeff. If you'd like to get a sense of what Alan is about and the things he's working on, I recommend this Conversation with Alan Kay from the ACM.



It's not that people are completely stupid, but if there's a big idea and you have deadlines and you have expedience and you have competitors, very likely what you'll do is take a low-pass filter on that idea and implement one part of it and miss what has to be done next. This happens over and over again. If you're using early-binding languages as most people do, rather than late-binding languages, then you really start getting locked in to stuff that you've already done. You can't reformulate things that easily.


Let's say the adoption of programming languages has very often been somewhat accidental, and the emphasis has very often been on how easy it is to implement the programming language rather than on its actual merits and features. For instance, Basic would never have surfaced because there was always a language better than Basic for that purpose. That language was Joss, which predated Basic and was beautiful. But Basic happened to be on a GE timesharing system that was done by Dartmouth, and when GE decided to franchise that, it started spreading Basic around just because it was there, not because it had any intrinsic merits whatsoever.


This happens over and over again. The languages of Niklaus Wirth have spread wildly and widely because he has been one of the most conscientious documenters of languages and one of the earlier ones to do algorithmic languages using p-codes (pseudocodes) -- the same kinds of things that we use. The idea of using those things has a common origin in the hardware of a machine called the Burroughs B5000 from the early 1960s, which the establishment hated.



Any similarity between the above and PHP is, I'm sure, completely coincidental. That sound you're hearing is just a little bit of history repeating.


To me, the quintessential Alan Kay presentation is Doing with Images Makes Symbols: Communicating With Computers.

[embedded video: Doing with Images Makes Symbols]
As the video illustrates, computers are almost secondary to most of Alan's work; that's the true brilliance of it. The real goal is teaching and learning. I'm reminded of a comment Andrew Stuart, a veteran software development recruiter, once sent me in email:



One subtle but interesting observation that I would make - your article points out that "what software developers do best is learn" - this is close to the mark, though I would rearrange the words slightly to "what the best software developers do is learn." Not all software developers learn, but the best ones certainly do.


And this, I think, lies at the heart of everything Alan does -- computing not as an end in itself, but as a vehicle for learning how to learn.




