Tuesday, December 26, 2006

Provider Injection, not Dependency Injection

The term "provider injection" should replace the term "dependency injection."

The term misleads by seeming to say that you are "injecting" a dependency. That would mean that the thing that got the "dependency" injected into it now depends upon something it didn't depend upon before. This is not what "dependency injection" means.

The term "dependency injection" actually means that if Object A depends upon having a "B" object that you "inject" a B object into object A (instead of object A fetching or creating a B on its own.) A classic example would be a Person object that depends upon having a database connection in order to create/update/edit/delete people in the database. Instead of the Person object creating its own database connection it gets one passed to it (i.e. "injected") perhaps as an argument to its constructor. But, see, you are not injecting a dependency. You are injecting a provider.

A "dependency" is not the thing upon which you depend - a dependency is the state of depending. The thing you depend upon is a tool, resource, skill, or capability, and you get those things either by making them yourself or from a provider.

So, I think the term "provider injection" should replace the term "dependency injection."

By the way, I want to thank Mike Ward for originally introducing me to the term "dependency injection" - whatever phrase is used, the actual practice is one I have done in the past and it is good to have a phrase to describe this technique.

See also:
Wikipedia entry on Dependency Injection (which I have attempted to edit.)
Inversion of Control Containers and the Dependency Injection pattern - 2004 article by Martin Fowler
Spring in Action - Book on Spring


Saturday, December 09, 2006

countperl - count lines, packages, subs and complexity of Perl files

I recently released a new version of Perl::Metrics::Simple (0.03) that includes the countperl script. The program reports on how complicated your Perl code is - giving you direction on where to start refactoring it to make it easier to understand, debug, and maintain. Try to keep the McCabe Complexity at 9 or less for every subroutine and "main" section of your code.
countperl is a command-line tool that you execute with a list of one or more files and/or directories. The program examines the named files and recursively searches named directories for Perl files.
The countperl program produces a report on STDOUT of total lines, packages, and subroutines/methods, along with the minimum, maximum, mean, standard deviation, and median of both size and mccabe_complexity (aka cyclomatic complexity) for subroutines and for the 'main' portion of each file (everything not in a subroutine.)
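
For example (the paths here are just illustrative):

countperl lib/ bin/my_script.pl > complexity_report.txt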

Output Format

Line counts do not include comments or POD.
The current output format is human-readable text. For example, a report based on analyzing three files might look like this:
 Perl files found:                3

Counts
------
total code lines: 856
lines of non-sub code: 450
packages found: 3
subs/methods: 42

Subroutine/Method Size
----------------------
min: 3 lines
max: 32 lines
mean: 9.67 lines
std. deviation: 7.03
median: 7.50

McCabe Complexity
-----------------
Code not in any subroutine:
min: 1
max: 1
mean: 1.00
std. deviation: 0.00
median: 1.00

Subroutines/Methods:
min: 1
max: 5
mean: 1.00
std. deviation: 1.36
median: 1.00

Tab-delimited list of subroutines, with most complex at top
-----------------------------------------------------------
complexity sub path size
5 is_perl_file lib/Perl/Metrics/Simple.pm 11
5 _has_perl_shebang lib/Perl/Metrics/Simple.pm 13
5 _init lib/Perl/Metrics/Simple/Analysis/File.pm 30
4 find_files lib/Perl/Metrics/Simple.pm 11
4 new lib/Perl/Metrics/Simple/Analysis.pm 10
4 is_ref lib/Perl/Metrics/Simple/Analysis.pm 8

Chris Chedgey has a posting describing a common situation: you have accumulated a huge pile of "complexity debt." Using countperl is one tool to help "keep a lid on it."




Thursday, November 02, 2006

Very Simple Wrapper for Perl DBI

I've released a new Perl module: DBIx::Wrapper::VerySimple which provides a very simple (object-oriented) wrapper around DBI.

DBIx::Wrapper::VerySimple provides three methods that between them cover about 98% of the SQL calls I've seen in Perl code over the years:
  • fetch_all() - run a SQL statement (typically a SELECT statement) and get back an arrayref of hashrefs (one hashref for each result row.)

  • fetch_hash() - run a SQL statement (typically a SELECT statement) and get back a single hashref (for just one row.)

  • Do() - run a non-SELECT statement such as CREATE or DELETE.
DBIx::Wrapper::VerySimple will use bind-variables if you pass them as additional arguments to fetch_all(), fetch_hash() or Do().
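
Usage looks something like this (I'm assuming here that new() takes the same arguments as DBI->connect(); see the module's POD for the exact interface):

use DBIx::Wrapper::VerySimple;

# Assuming the constructor passes its arguments through to DBI->connect():
my $db = DBIx::Wrapper::VerySimple->new( $dsn, $user, $password );

# Bind values are passed as additional arguments after the SQL:
my $rows = $db->fetch_all( 'SELECT name FROM people WHERE zipcode = ?', $zip );
my $row  = $db->fetch_hash( 'SELECT * FROM people WHERE id = ?', $id );
$db->Do( 'DELETE FROM people WHERE id = ?', $id );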

DBIx::Wrapper::VerySimple also provides a get_args() method and a dbh() method. get_args() returns the arguments originally passed to new() so you can re-connect, etc. if need be. dbh() returns the raw DBI database handle so you have ready access to all the features of DBI.

DBIx::Wrapper::VerySimple is available on the CPAN: http://search.cpan.org/dist/DBIx-Wrapper-VerySimple/

Technorati tags: CPAN, DBI, Perl

Tuesday, October 03, 2006

Perl::Metrics::Simple

Counts files, packages, subroutines, and calculates cyclomatic complexity of Perl files.

I recently released an alpha version of a new Perl module: Perl::Metrics::Simple

There is an included script in the examples/ directory which produces output like this:


Perl Files: $file_count

Line Counts
-----------
lines: $lines
packages: $package_count
subs: $sub_count
all main code: $main_stats->{lines}

min. sub size: $lines{min} lines
max. sub size: $lines{max} lines
avg. sub size: $lines{average} lines
median sub size: $lines{median}

McCabe Complexity
-----------------
min. main: $main_complexity{min}
max. main: $main_complexity{max}
median main: $main_complexity{median}
average main: $main_complexity{average}

subs:
min: $complexity{min}
max: $complexity{max}
avg: $complexity{average}
median: $complexity{median}
std. deviation: $complexity{standard_deviation}
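
A rough sketch of how a script can pull those numbers out of the module (the method names follow the Perl::Code::Analyze example elsewhere on this page; the alpha API may differ):

use Perl::Metrics::Simple;

my $analyzer = Perl::Metrics::Simple->new;
my $analysis = $analyzer->analyze_files(@ARGV);

printf "Perl Files: %d\n", $analysis->file_count;
printf "lines: %d\n",      $analysis->lines;
printf "packages: %d\n",   $analysis->package_count;
printf "subs: %d\n",       $analysis->sub_count;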

Friday, September 22, 2006

Ruby plug-in for Eclipse

I recently found out there is a Ruby plug-in for the Eclipse IDE: The "Ruby Development Tool."

Quick install instructions:
  1. Start up Eclipse.
  2. Help > Software Updates > Find and Install...
  3. Select "Search for new features to install"
  4. Click "Next"
  5. Click "New Remote Site..."
  6. Enter the URL of the stable release branch: http://updatesite.rubypeople.org/release
  7. Click "OK"
  8. Click "Finish"
Ruby Development Tool (RDT) web site: http://rubyeclipse.sourceforge.net/index.rdt.html

Article on using RDT: http://www-128.ibm.com/developerworks/opensource/library/os-rubyeclipse/

Sunday, September 03, 2006

Count Perl Code

I've started working on a module to analyze Perl code and report on files, lines, packages, subroutines, etc.

A report could look like this:


% analyze.pl path/to/directory/of/perl/code

files: 39
lines: 15929
packages: 39
subs: 336


The module is (very tentatively) named Perl::Code::Analyze and uses
Adam Kennedy's PPI module for the real work.

A (very) alpha version of the module is at
http://g5-imac.matisse.net/~matisse/Perl-Code-Analyze-0.01

Here's an example of a script that would use Perl::Code::Analyze
to produce the report shown above:


#!/usr/bin/perl

use strict;
use warnings;
use Perl::Code::Analyze;
my $analyzer = Perl::Code::Analyze->new;

my $analysis = $analyzer->analyze_files(@ARGV);

my $file_count = $analysis->file_count;
my $package_count = $analysis->package_count;
my $sub_count = $analysis->sub_count;
my $lines = $analysis->lines;

print <<"EOS";

files: $file_count
lines: $lines
packages: $package_count
subs: $sub_count

EOS

exit;

Thursday, August 31, 2006

Apple releases Launchd as Open Source

Apple has released their launchd process manager under the Apache Open Source License (version 2.0)


Launchd combines most of the features of the venerable Unix cron and init facilities. You can use launchd to make sure a process is always running ("watchdogging"), to run jobs at specific times, to run jobs when/if a file appears in a specified directory, etc.
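
For example, a minimal launchd job ticket - a property-list file - that keeps a daemon running and also starts it at a scheduled time might look roughly like this (the label and path are made up; see the launchd.plist man page for the authoritative key list):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>net.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/mydaemon</string>
    </array>
    <!-- Relaunch the process if it ever dies ("watchdogging") -->
    <key>KeepAlive</key>
    <true/>
    <!-- Also start it every day at 3:15 AM -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>3</integer>
        <key>Minute</key>
        <integer>15</integer>
    </dict>
</dict>
</plist>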

I cover the basics of launchd in my book, "Unix for Mac OS X Tiger", and Ars Technica covers it in their Tiger review.

There is already a FreeBSD port of launchd.

Macscripter.net has an article describing how to use launchd with AppleScript to access a Flashdrive.

Sunday, August 20, 2006

Mock Classes in Perl Unit Tests

UPDATE:
November 2, 2006: I finally released the Wrapper module described here on the CPAN system.

Today I added unit tests to a Perl module that uses the DBI module. My module, DBIx::Wrapper::VerySimple is one that I wrote years ago.

The tests use three tiny mock classes that take the place of the real DBI module. This allows the units tests to run in complete isolation from the actual DBI module.

Back when I wrote DBIx::Wrapper::VerySimple I wasn't experienced enough to bother creating unit tests for my code. These days I create unit tests any time I create a new module for a project, and sometimes for scripts as well.

The challenge in this case was that the constructor method, DBIx::Wrapper::VerySimple->new(), calls DBI->connect(), and I didn't want my unit tests to require connecting to an actual database. My solution is to have three tiny mock classes in my test code, including a DBI.pm that the tests load before DBI::Wrapper can load the real DBI.pm.

Here's what my mock DBI.pm looks like:

# Mock class - for testing only
package DBI;
use strict;
use warnings;

# warn 'Loading mock library ' . __FILE__;
my $MOCK_DBH_CLASS = 'DBI::Mock::dbh';

my %ARGS = ();

sub connect {
    my ( $class, @args ) = @_;
    my $fake_dbh = {};
    bless $fake_dbh, $MOCK_DBH_CLASS;
    $ARGS{$fake_dbh} = \@args;
    return $fake_dbh;
}

1;
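
The DBI::Mock::dbh class is similarly tiny. I haven't reproduced the actual test code here, but a sketch along the same lines might be:

# Mock class - for testing only (an illustrative sketch, not the actual file)
package DBI::Mock::dbh;
use strict;
use warnings;

my $MOCK_STH_CLASS = 'DBI::Mock::sth';

sub prepare {
    my ( $self, @args ) = @_;
    my $fake_sth = {};
    return bless $fake_sth, $MOCK_STH_CLASS;    # hand back a mock sth
}

1;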




My test script loads the mock DBI.pm and two other mock classes before DBI::Wrapper is loaded for testing. This way, when DBI::Wrapper is compiled my mock DBI.pm is used instead of the real one:

# Ensure that DBI::Wrapper loads our mock DBI.pm
use FindBin qw($Bin);    # $Bin is the directory containing the test script
use lib "$Bin/mock_lib";
use DBI;
use DBI::Mock::dbh;
use DBI::Mock::sth;


This approach works very well: I was able to create unit tests that intercept all the calls DBI::Wrapper makes that would normally go to the real DBI, and by intercepting these I can check that DBI::Wrapper is passing the expected values to DBI. I am deliberately not testing the real DBI module, which has its own extensive set of unit tests. I merely test that my module makes the expected calls to DBI.


More about DBI::Wrapper

The module is available online at http://www.matisse.net/perl-modules/DBI/Wrapper/
The version with the new unit tests is version 0.04.

From the README:

DBI::Wrapper is a simple module that provides a high-level interface
to the Perl DBI module. The provided methods are for fetching
a single record (returns a hash-ref), many records (returns
an array-ref of hash-refs), and for executing a non-select statement
(returns a result code).

The intention here is that your application will have much cleaner code,
so instead of writing:

$sql = "SELECT name,address FROM $table WHERE zipcode=?";
$sth = $dbh->prepare($sql);
$rv = $sth->execute($zipcode);
my @found_rows;
while ( my $hash_ref = $sth->fetchrow_hashref ) {
    push( @found_rows, $hash_ref );
}

You would write:

$sql = "SELECT name,address FROM $table WHERE zipcode=?";
my $found_rows = $wrapper->FetchAll($sql,$zipcode); # An array_ref of hash_refs


I wrote my version of DBI::Wrapper several years ago, after I got the idea from a co-worker while doing a gig at TechTV.

Friday, June 02, 2006

Comparing Pair Programming to Solo Programming

Brian Slesinsky wrote recently comparing Pair Programming to Code Reviews and argues that Pair Programming is better. I agree with his reasoning. I also think we need more hard data comparing pairing with solo programming. It is a multi-dimensional issue, involving at least:
  • Total developer time spent.
  • Code quality and increase/reduction in the cost of product quality (more bugs == higher cost.)
  • Time-to-market. Two people spending 70 hours in parallel is faster to market than one person spending 100 hours.
  • Personalities and social dynamics. People have widely differing and strongly held feelings about pairing vs. solo programming.


Thursday, June 01, 2006

How much eXtremism is too extreme?

Blaine Buxton recently wrote about how Pair Programming all the time could be bad, and in general makes the case for not being extreme about being eXtreme.

I think the most important point Blaine makes is that one should always be focused on what is best for the process, not on adhering to any particular technique.


Wednesday, May 31, 2006

Pair-Programming Experiment at Stanford University

Stanford University is in the middle of a formal experiment on Pair Programming. I participated in the experiment this past weekend and had a lot of fun and, hopefully, helped advance the state of knowledge about this controversial software development technique.

A PDF document describing the experiment is available at http://hci.stanford.edu/research/pairs/PairProgramming-WhenWhy.pdf

The experiment is currently in the second ("Laboratory Experiment") phase as described in that document.
In the first phase ("Ethnographic Field Study") of the experiment the researchers visited two companies where pair programming is regularly practiced, observed programming pairs, and recorded audio of their sessions. These observations helped the researchers develop some hypotheses which may be checked by further work in the experiment.

The researchers are focusing on the interaction between the people in the pairs (the "socio-cognitive factors", as they say.) The current phase of the experiment involves controlled tests in which a pair of programmers or a solo programmer completes a warmup task and then a larger task (which takes around 5-6 hours altogether.) The programmers are recorded via audio, video, and screen captures.

During my pairing session this past weekend we talked a bit about the notion that once we humans have a goal we often have a strong impulse to start acting towards that goal without necessarily doing much planning or testing - like driving nails before deciding where, exactly, to drive them - and that we continue with our actions without doing much checking of whether we are in fact heading towards our goal. I don't think that Pair Programming stops individuals from having this impulse, but the presence of a second person makes it more likely, perhaps, that someone else will stop us. Hence my new joking slogan: "Pair Programming: It's not twice as smart, it's half as dumb!"

Pair Programming as a formally recognized technique has been around for over a decade now, and as an informal reality I would guess it has been around for as long as there has been programming. There has been only a little formal study of Pair Programming, though, and Pair Programming is perhaps the most controversial technique in the eXtreme Programming methodology, touching as it does directly on people's self-image and personalities.

I hope this experiment will help advance the state of knowledge about software development in general, and Pair Programming in particular, so that we can make better decisions about how to build things.

The experiment is led by Robert P. Plummer, PhD., a Lecturer in the Stanford Department of Computer Science. Inquiries about the experiment can be sent to experiment at cs.stanford.edu.


Test-Driven Development in the Physical World

It's occurred to me that we routinely practice Test-Driven Development in the physical world - when building houses, for example.

Consider how we use a "test first" approach in construction:

Typically, if you want a hole in the wall 30 inches below the ceiling:
1. You take a ruler and make a mark on the wall 30" below the ceiling.
2. You drill a hole where the mark is.
3. If your hole is where the mark is/was, your test passes, otherwise, go to step 1.

Now consider what that would be like without doing it "test first":

1. You drill a hole in the wall.
2. You measure how far from the ceiling the hole is.
3. If the hole is 30" from the ceiling, you win. Otherwise, go to step 1.

Wednesday, May 24, 2006

Software Best Practices Wiki?

I've been thinking about creating a wiki for software development "best practices."
Something that might contain entries like this:

1. Where Applications Save Data


Many applications need to save data between invocations of the application.
Some examples are:
  • User Preferences.
  • Documents or other work product from the application (web pages, spreadsheet, reports, images, etc.)
  • Cache files.
  • Other long-lived data (bookkeeping data, emails, audit trails, etc.)

1.1. Forces at Work


Some of the forces at work in determining where an application should save data include:
  • Access: the application needs read and write access.
  • Upgradeability: some data must survive a software upgrade.
  • Multiple users: some applications are used by many users.
  • Privacy and security: Some data must be protected from unauthorized access.
The types of data an application saves are:
  • Temporary vs. Long-Lived
  • User-associated vs. Application-associated
  • Internal use by the application vs. Suitable for distribution outside the application environment.

1.2. General Principles


Applications shall:
  • Save Long-Lived data where it will survive an upgrade.
  • Save Temporary data where it is easily garbage-collected.
  • Save per-user data in a place belonging to the user.
  • Save data for distribution in a place the user is explicitly informed about.
  • Use encryption to help secure privacy.


Monday, May 15, 2006

Wikis in the Workplace

Peter Thoeny and Dan Woods, co-authors of a Book on Wikis in the Workplace, have started a company to help businesses use wikis: StructuredWikis: http://www.structuredwikis.com/

I've seen wiki engines used at three very different organizations: the Burning Man tech-team, Technorati.com, and Barclays Global Investors. I think wikis have an important role to play in almost all modern businesses that have an Intranet of any kind.


Monday, April 24, 2006

Ruby / Flash Gateway: Alph

I saw a cool demo yesterday of Alph. Alph allows you to use Ruby to control a Flash movie.


Friday, April 14, 2006

The WELL is Down!

My long-time online home, The Whole Earth 'Lectronic Link, has been unavailable for several hours now - the longest unplanned outage I can recall in over 10 years.

I no longer have any staff member phone numbers, but I called their office and spoke with Gail Williams, Director of Community, and she pointed me to http://www.salon.com/wellstatus/ - where recent status info should be available. She also told me that people have sent flowers and left goodwill messages ("beams") on the WELL's voice mail.

Update: Looks like the WELL's command-line interface came back up just before midnight April 14th, Pacific Time.

Tuesday, April 11, 2006

Wanted: Blogging Platform with Revision History

I'd love to have a blogging platform that supports a Revision History of Published Posts:

Desired features:
  • Each published version of a posting would be saved.
  • Readers would be able to see a list of versions.
  • Readers could choose to view a specific version.
  • Readers could compare any two versions.
I'm imagining that each posting would be stored under some kind of Revision Control system, similar to how wikis treat pages.

I'd be glad to help make this happen.

I've posted this in the BloggerDev group - if you are reading this and agree, please go there and post a "me too" comment.


Sunday, April 09, 2006

Making Things is a Fractal Process

/*
 * All building and making of things involves a fractal (recursive) process
 * like this pseudo-code:
 * Takes a set of Requirements and returns something new
 */
public Thing makeNewThing ( Requirements requirements ) {
    Thing newThing = new Thing();
    while ( requirements.not_satisfied ) {
        Things requiredPieces = requirements.whatCanWeMakeOneFrom;
        Things piecesWeHave   = this.whatDoWeHaveAlready( requirements, requiredPieces );
        Things piecesNeeded   = requiredPieces - piecesWeHave;
        newThing.assemblePieces( piecesWeHave );
        foreach Thing neededPiece ( piecesNeeded ) {
            piecesWeHave.add( makeNewThing( neededPiece.requirements ) ); // recurse
            newThing.assemblePieces( piecesWeHave );
        }
        requirements.check( newThing );
    }
    return newThing;
}


Run Windows, Mac OS X, Solaris, and Linux Simultaneously

See: Parallels Workstation: http://www.parallels.com/en/products/workstation/

The folks at Parallels, Inc. have made a real breakthrough - building upon the virtualization feature of Intel's newest CPUs, Parallels allows you to boot into one operating system and then run one or more others as "hosted" operating systems, in windows inside the primary one.

In other words: You boot into say, Mac OS X, and then run a real copy of Windows XP, in a window. The hosted OS is not running on a chip emulator - each OS is running as a virtual machine.



Software Development is like Building Construction

The process of Software Development is a lot like the process of Building Construction, and although they also have important differences, these differences are mainly differences of degree rather than kind.

Software folks occasionally make a big deal of the differences between software development and building construction, and building trades/architecture folks often think you cannot do iterative design on buildings.

The fact is, software development and building construction have many important similarities, and many processes that are effective in both.

Both software development and building construction involve the marshaling of numerous interdependent systems designed, built, and assembled by a variety of highly specialized people.


Dependencies

In the software world we have architecture issues (client/server, message-passing, batch processing, micro-kernel, etc.) data structures (databases), business logic, graphic and other user-interface design, etc.

In the world of buildings we have architecture, foundation, frame or shell, electrical, plumbing, wall coverings, HVAC, etc.

Both software development and building construction have numerous layers of dependencies. There are many variations on the actual stack of what depends upon what, but there are always dependencies.

Typical examples of the stack of dependencies in software are hardware, then the operating system, then the programming language, then some kind of data structure, then something to manipulate the data, etc. Buildings too have their stack of dependencies: connection to the Earth (foundation), the load-bearing frame or envelope, utilities, skin/weather-proofing, etc.

Some might say that software may be "built" in many different ways, but that buildings must be built from the ground up. Well, if you think buildings must be built ground-up, do a search for "Linn Cove Viaduct" and see http://www.dot.state.oh.us/se/SI9/19Freeby.pdf, and consider how a space station can be built in orbit.


Incremental Design

These days (early 21st Century) the concept of incremental design is reasonably well known in the software world, and is arguably becoming more popular. There is a long history of incremental design in building - think of the Wright brothers' many iterations on their aircraft designs and their invention and use of wind tunnels to test and refine (shall we say "refactor"?) those designs.

Both mathematical and physical models have been in the repertoire of the making process for thousands of years.

These days (2006) we in the software world try to use intensive tests (such as unit tests) and clever modeling in our design/development processes.

Almost 20 years ago I was building full-scale mockups, tests, and experiments for buildings as part of the design process, and it works. I was working with Chris Alexander's Center for Environmental Structure, and among many other examples of iterative design, complete with unit tests, was the work we did on the exterior walls of the Julian Street Homeless Shelter in San Jose (see this exterior photo of the main building, and this interior of the dining hall - those are sprayed-concrete trusses, also the result of a test-driven design process.)

The exterior wall system went through many iterations, and there were many kinds of tests, of both the visual and physical composition. In each test we would establish a certain standard or required result - sort of making an assertion about the design or structure, and then make a change or build something to implement the assertion.

Some of the tests were physical, such as "Is the cover glaze sufficiently free of cracks?" Probably the most frequent kind of test was a test to verify that a particular feature or system maintained or improved the life in the overall structure. For example, such a test might be "Check that the way the tiles are embedded creates a tangible border that increases the life of the whole wall." And then we would embed tiles using one method or another and check whether the test passed.


Conclusion

The design and development of software and the design and development of the built environment, like all building processes, are iterative processes that involve the repeated assembly of existing materials in new combinations and structures, informed by human creativity and driven by human needs and desires. Humans have been building physical structures for many thousands of years, and some techniques have remained essentially unchanged in all that time, while other techniques and materials have come into existence more, or less, recently. Those who build - builders of any kind - would do well to learn from each other as builders, to search for the commonalities in our processes, successes, and failures, and to recognize objectively the differences, which are fewer than is commonly believed.

Thanks to Jeff Thalhammer for 'code review' on this document.




Saturday, April 08, 2006

The WELL Gopher

UPDATE - (June 2009) The WELL's Gopher server was decommissioned some months back, but the content is still available via a web server at: https://www.matisse.net/the-well/gopher/

The WELL is an online discussion system that started in 1985. At the very end of 1991, in fact, on New Years' Eve, The WELL connected to the Internet (via 56K connection to BARRNet, router configuration by Erik Fair.)

In those days I was the Customer Support Manager at The WELL, and I wanted us to give something back to the Internet community. At the time, "anonymous FTP" sites were a common way for organizations to share information with the public over the Internet. I suggested that The WELL create an anonymous FTP site where we would provide access to interesting information, drawing heavily on our association with The Whole Earth Catalog (The WELL was half owned by The POINT Foundation, publisher of The Whole Earth Catalog, and we were in the same building with the catalog staff.)

I recruited a small group of volunteer WELL users to be the editorial staff of the new site: Jerod Pore, Eric S. Theise, Jon Lebkowsky, and Paul Holbrook, and we started discussing what should be in the new site. During this (online) discussion a WELL user who was very involved with the Internet, Ed Vielmetti, suggested we check out a cool new rodent-based technology developed at the University of Minnesota (home of the "Golden Gophers"). This new thing was called "gopher", and I immediately saw that it was much easier to use than FTP.

Eric Theise and I had a bit of a fight over that - Eric felt that FTP was the widely used and well-known standard and that we'd be cutting out all the people who didn't yet have access to a gopher client. I was all steve-jobsian about it: "the interface is so much better, people will get a gopher client, this is the future, blah blah blah". I think it was the one place where I strongly asserted my WELL-staff role. We went with gopher.

The content in The WELL Gopher was a direct outgrowth of the WELL's connection to The Whole Earth Review - part of my evil plan was to bring the editorial approach of WER to cyberspace - in fact the top-level categories in the gopher were originally taken directly from the categories that the Whole Earth Catalog used.

There was a point when the WELL gopher and the spies.com gopher were the two hottest places in gopherspace.  This all came down to three really important factors:

  1. Content: We had great editors like Jon, Jerod, Eric, and Paul choosing great stuff. See this great story, "Forces Adrift, a Tale of Our Forces Afloat", by Chuck Charlton.
  2. Content: We were choosing from a pool of ideas that mapped very well onto the population using cyberspace. Roger Karraker's "Highways of the Mind" article is still relevant.
  3. Content: We had exclusive content - no offsite links for the first year or so. With maybe a couple exceptions the WELL gopher was the exclusive online location of all the content. This made us editors think more, and choose less.
These days there are many things like the WELL gopher, and nothing like it - like a magazine with a particular editorial group, the result is really all about the people and historical circumstances at the times of creation.

Notes:

  • Gopherspace is part of webspace. "The web", at least circa 1993 was all the resources reachable via (at least) the FTP, gopher, and HTTP protocols. You could throw in WAIS, telnet and maybe a couple of others.
  • The WELL gopher used a server that responded to both the gopher and HTTP protocols, so you could reach it with the URL gopher://gopher.well.com/ or http://gopher.well.com:70/ Initially the server only handled the gopher protocol but was soon switched to the GN server which handles both protocols, so even if you mistakenly think that a web site must utilize HTTP, the WELL gopher still qualified.


Saturday, April 01, 2006

The WELL Turns 21

Jennifer made this excellent post about The WELL:
http://fierce.jnfr.com/archives/2006/04/the_well_turns.html

I've been using The WELL since 1987, and it's been around since 1985. If you want to read about it on the web, read this, but if you want to use it, learn to use the command-line interface and connect to The WELL via SSH, not with a web browser.

Sunday, March 26, 2006

Net Neutrality - where are we now?

Does this sound familiar:
The battle is about who will build, own, use and pay for the high-speed data highways of the future and whether their content will be censored.
Roger Karraker wrote that in his 1991 article, "Highways of the Mind" and the article is still relevant today. Have a look.

(See also: The Communications section of The WELL gopher.)

Saturday, March 18, 2006

Perl BEGIN Blocks Considered Harmful

Adam Kennedy pointed this out to me:
  1. Create a new Perl file containing the following:

    BEGIN {
    print "hello world\n";
    }

  2. Check the syntax with

    perl -c filename

This will print hello world, even though "all" you did was compile the file.

What's going on here?

Well, BEGIN blocks get executed when Perl compiles the file.

So what?

If you run perl -c on a file that contains nasty code in a BEGIN block, you're screwed.

Consider this:
  1. You associate the filename extension .pm with a fancy Perl editor, like Komodo, or Affrus.
  2. You click on a link in your web browser.
  3. The web server redirects you to a URL for nasty_file.pm
  4. Your browser opens the associated editor and performs a "syntax check" by compiling the file
  5. The file's BEGIN block has naughty code in it...

You're screwed.

So, keep in mind that "just compiling" Perl code may actually execute the code, so, be careful!

Here's a bug report on this subject that I filed for the Eclipse/EPIC Perl IDE.


Thursday, March 16, 2006

Testing if a Perl script compiles - compile_ok

A co-worker and I were pair programming a test suite the other day, and needed to test if a Perl script compiles - not a library or module but an executable script - so the typical
    require_ok()
    use_ok()
functions provided by Test::More don't do exactly what we wanted.

The short story is that we ended up doing something like this:
  eval {
      $output = `perl -c $script 2>&1`;
      chomp $output;
  };
  is( $output, "$script syntax OK", "$script compiles" );
We later looked for a more general, better solution, and a colleague posted on the perl-qa mailing list, but the bottom line is that a more elegant solution still involves executing a new perl interpreter, but in a platform-independent way (perhaps using IPC::Open3 or something.) It's also likely that a better test is to ignore the output altogether and just check the return code from the compile process (the call to perl -c.)
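
For example, a minimal sketch of the return-code idea (untested, and note it does not suppress the compiler's output) might be:

use Test::More;

sub compile_ok {
    my ($script) = @_;

    # $^X is the path to the perl that is running this test, and calling
    # system() with a list avoids the shell, so this is fairly portable.
    my $status = system( $^X, '-c', $script );
    is( $status, 0, "$script compiles" );
    return;
}
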

March 18, 2006 update: Jeff Thalhammer found that a colleague, Pierre Denis, provides a syntax_ok function in Test::Strict that does what we want.
More update - I was prodded by Randal Schwartz to post why you should be careful when "only" compiling Perl code - see BEGIN Blocks Considered Harmful.

Swimming Snake Robot

OK, ready to be creeped-out and fascinated?

http://www.hackaday.com/entry/1234000957073583/

Tuesday, February 21, 2006

RFC: Mozilla Architecture Changes

Dave Liebreich is looking for people to participate in conversations about proposed major changes in the architecture of Mozilla.

Dave provided the following URLs for current discussions about the issues (garbage collection approach and using exceptions) and asks that interested people jump in and participate:
Mozilla Wiki
Mozillazine.org
Benjamin Smedberg on Exceptions
Benjamin Smedberg on Garbage Collection

Should Arab Media Take Lessons from the West?

An excellent debate among journalists from, and working in, the Arab world, on the motion "This house believes that the Arab media need no lessons in journalism from the West."

Saturday, February 18, 2006

Bunchball - put interactive stuff in your blog

http://www.bunchball.com/

The idea is that you put a code snippet in your web page, blog, etc., and then you can upload photos and the like to Bunchball's servers and they show up in your page. It also does games and various other interactive thingies.

Friday, February 17, 2006

Perl IDE Demonstration and Comparison

The San Francisco Perl Mongers are sponsoring a demo of three Perl IDEs on Tuesday, February 28, 2006:

When & Where:
When: Tuesday, February 28, 2006, 8:00 p.m.
Where: Perpetual Entertainment, Fifth floor;
149 New Montgomery Street, San Francisco, CA
RSVP: qw@sf.pm.org - please let us know if you are attending, and if you want Thai food, bring some $$

If you would like to attend, please contact Quinn Weaver at the address above - and bring cash if you want Thai food!

Sunday, February 12, 2006

Let's Make A Deal Game Theory

(originally posted 10:08 AM, Feb 11, 2006)

A couple of days ago (yes, in a bar) I became intrigued with a fairly well-known game problem; here's my version:

  1. There are three doors, one of which is a winner.
  2. You pick one door, but it isn't opened yet.
  3. At this point at least one of the unpicked doors (maybe both) is a loser.
  4. A losing unpicked door is opened.
  5. You now have the option of either sticking with your original choice, or switching to the remaining unopened door. Which has a better chance of being the winner?
Here is a Perl script I wrote which will play the game using both the "switch" and "stay" strategies: http://eigenstate.net/misc_scripts/make_a_deal.pl
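
For reference, a minimal simulation along the same lines (a fresh sketch, not the linked script) might look like:

#!/usr/bin/perl
use strict;
use warnings;

my $trials = 10_000;
my %wins = ( stay => 0, switch => 0 );

for my $strategy (qw(stay switch)) {
    for ( 1 .. $trials ) {
        my $winner = int rand 3;    # the winning door
        my $pick   = int rand 3;    # the contestant's first pick
        # The host opens a losing door that is neither the pick nor the winner:
        my ($opened) = grep { $_ != $pick && $_ != $winner } 0 .. 2;
        if ( $strategy eq 'switch' ) {
            ($pick) = grep { $_ != $pick && $_ != $opened } 0 .. 2;
        }
        $wins{$strategy}++ if $pick == $winner;
    }
}

printf "%s: %.1f%%\n", $_, 100 * $wins{$_} / $trials for qw(stay switch);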

What do you think?








12 Feb 2006 - addition:

Here is one way of explaining what is going on:

When the losing choice is revealed, you gain some new information.

At the start, you have equal knowledge of all the choices, so at first every door has 1/n chance (1/3 if there are three doors).

You then divide doors into two groups: one door in group A (your first pick) and the rest of the doors in group B. With three doors group A carries 1/3 of the chances and group B carries 2/3.

When one of the doors in group B is revealed to be a loser you know something new:

You know that the group of chances that was in group B at the start must now be redistributed among the remaining doors in group B.

So if group B had 2/3 of the chances at the start, it *still has* 2/3 of the chances, but those all ride on the one remaining door in group B. So, the remaining door in group B has a 2/3 chance of winning, whilst the door you originally picked, in group A, still has only a 1/3 chance.

What has happened is that the revelation of the loser in group B has told you that all the other doors in group B are more likely to be winners.

If you try it with 10 doors it is still better to switch - to pick any remaining door in group B - than to stay with the door in group A.

Where N is the number of doors, the chance of the first pick winning is 1/N, and group B as a whole retains (N-1)/N after a loser is eliminated from it - so each of the N-2 remaining group B doors has an (N-1)/(N(N-2)) chance.

It is the -1 that makes switching worth it.

We can generalize the problem further by saying that T is the Total number of doors, A is the number of doors in group A, and B is the number in group B before a loser is revealed. Then group A as a whole has an A/T chance (each of its doors 1/T), and group B as a whole keeps its B/T chance after the elimination, so each of the B - 1 remaining group B doors has a B/(T(B-1)) chance.

So if there are 10 door total, and you put 3 doors in group A at the start:

T = 10
A = 3
B = 7

Each group A door has a 1/10 chance (3/10 for the group as a whole.) After eliminating a loser from group B, the group as a whole still has 7/10 of the chances, and thus each remaining group B door has a 7/60 (11 2/3%) chance of being the winner. (Thanks to Keith for the correction.)

Thursday, February 09, 2006

Unit tests for Mozilla / Firefox coming soon

Dave Liebreich of Mozilla.com has posted the source code for jssh-driver - a unit test framework for use in Mozilla browsers (e.g. Firefox). This little framework will allow you to have a directory of HTML files and test whether the browser renders them correctly. Basically, the test loads the "golden master" versions into the browser and compares the rendered version to the version on disk. (When the browser renders an HTML page it takes it apart and puts it back together - hopefully correctly.)


Thursday, February 02, 2006

US Internet biz profits from oppression

From the Washington Post:

House: Internet Companies Give in to China

By FOSTER KLUG
The Associated Press
Wednesday, February 1, 2006; 10:37 PM

WASHINGTON -- Lawmakers on Wednesday accused U.S.-based Internet companies of
giving in to pressure from China and helping to censor Web users in violation
of American principles of free speech.

I think it's disgusting that these companies are seeking to profit from cooperation with oppression while simultaneously benefiting from the freedoms afforded them here in the US.

Wednesday, February 01, 2006

More tests, fewer bugs

Mozilla is looking for a few good test-driven engineers...

I visited the Mozilla Corporation offices yesterday, at the invitation of Dave Liebreich, who is heading up the effort to bring more testing to the Mozilla code base. There is a wiki page: http://wiki.mozilla.org/SoftwareTesting describing some projects and ideas. Dave is a good guy doing good work - let's help!

Tuesday, January 31, 2006

Concrete Canvas

http://www.concretecanvas.org.uk/

Instant semi-permanent structure.
Fill the bag with water, activate the chemical gas-pack, and it inflates, hardens, and is ready for use in 12 hours.

Update (1 Feb 2006):
My friend Bob Theis (http://www.bobtheis.net/) raised these important issues:

Clever idea. Two reactions:

1. Hard to imagine a shell that thin being dimensionally stable enough for such a span in compression ( how does it resist local buckling? ). ESPECIALLY when you cover it with earth to gain some insulation ( see below ).

2. The plastic sheet interior would condense ALL the water vapor that hits it when the temperature outside is low ( despite the website claims, the insulation value of the skin should be next to nil ), where it would then freeze in colder situations. So insulation and ventilation would be serious habitability issues.

Saturday, January 14, 2006

Adding Assertions in Perl

I've been working on a Perl module that provides a set of assertion methods that will work in Perl 5.6.1 or later. (Note that Perl 5.10 should have some form of builtin support for assertions.)

So far I've implemented:

assert( $expr, $optional_message ); # passes if expr is true
assert_is( $this, $that, $opt_msg ); # compare with eq
assert_isnt( $this, $that, $opt_msg ); # compare with ne

# all values must be == to each other
assert_num_equals( $arrayref, $opt_msg );

save_data( $key, $ref_to_data ); # saves clone of data

# Assert that some data has (or not) the same values as the
# previously saved data - does a deep compare.

assert_data_not_different( $key, $ref_to_data, $opt_msg );
assert_data_different( $key, $ref_to_data, $opt_msg );


I've also implemented methods to set the pass and fail behaviors:

set_pass_behavior('silence');
set_pass_behavior('warn');
set_pass_behavior( \&my_sub ); # CODE ref


set_fail_behavior('silence');
set_fail_behavior('warn');
set_fail_behavior('confess'); #die with stack trace
set_fail_behavior( \&my_sub ); # CODE ref
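
Since the module isn't public (yet), here is only a hypothetical usage sketch (My::Assertions is a placeholder name):

use My::Assertions qw(assert assert_is set_fail_behavior);   # placeholder module name

set_fail_behavior('confess');    # die with a stack trace on any failed assertion

sub withdraw {
    my ( $account, $amount ) = @_;
    assert( $amount > 0, 'withdrawal amount must be positive' );
    assert_is( ref $account, 'HASH', 'account must be a hash reference' );
    $account->{balance} -= $amount;
    return $account->{balance};
}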


I'm doing this for a client and am not sure if we'll be allowed to release the code publicly, but I hope so.

Saturday, January 07, 2006

Adding Unit Tests to Legacy Code

I've started adding Unit Tests to a "legacy" code library. So far, the basic approach I am taking is:

  1. Create the test harness.
  2. The first test is to compile the old code library. Of course that fails at first because of all the things the library depends on.
  3. Create enough "fake" class files that the library compiles.
  4. Pick one subroutine (method) to test, and add a test that runs that subroutine. Of course it fails because of all the missing dependencies - the subroutine under test uses a bunch of subroutines defined in other files.
  5. In the test suite, create a FakeMethods class that defines stub versions of the missing external methods/subroutines, for example, the subroutine I am testing calls GetTotalAmount($cost,@items) so in the test suite I have something like this:

    sub GetTotalAmount {
    cluck('fake sub called'); # Print a stack trace showing we were called
    my($cost,@items) = @_; # Document the arguments expected
    return; # Return nothing for now
    }

  6. Some of the fake subroutines will need to return some actual values for the subroutine I am testing to run. Add just enough input so the test passes, even when this seems ridiculous (see the sketch after this list). For example, I found that one subroutine wanted the name of a file to open, and that the subroutine would run even if I passed in the name of a non-existent file. OK, that's what I did. Later, we can add a test expecting the subroutine to throw an exception if the file doesn't exist, and then add defensive code to the subroutine.
  7. Continue repeating steps 5 and 6 until the subroutine I am testing runs without throwing an exception.
  8. Add tests to run the subroutine with variations on its arguments and/or environment. I may need to add more fake subroutines, mock data, etc.
  9. The result is that I have a pretty clearly documented view of what the subroutine/method actually requires to run as of today.
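
Putting steps 5 and 6 together, a stub package might look like this (the package and subroutine names are illustrative):

# FakeMethods.pm - stub versions of the missing external subroutines
package FakeMethods;
use strict;
use warnings;
use Carp qw(cluck);
use Exporter 'import';
our @EXPORT = qw(GetTotalAmount);

sub GetTotalAmount {
    cluck('fake GetTotalAmount called'); # print a stack trace showing the call site
    my ( $cost, @items ) = @_;           # document the arguments expected
    return 0;                            # just enough of a value for the test to pass
}

1;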