Archive for the ‘Development’ Category

New site design is live (also upgraded to the latest version of WordPress – 2.5) Apr 1 2008

New site design is finally live!

In preparation for upgrading a whole mess of sites to the latest version of WordPress, I decided it was time to finally upgrade my own site and implement the new design I’d been working on for over a year now.

Check it out:

How to fix PEAR installation of XML_RSS / XML_Tree invalid package.xml file problem Dec 18 2007

As part of getting an upgraded server set up and transferring sites from the older server, I had to re-install PEAR XML_RSS. One of its dependencies is XML_Tree.

Normally, this command should have worked with no problems:

pear install XML_RSS

It downloaded XML_RSS just fine, but then choked on XML_Tree, a required dependency.

Initial output:

# pear install XML_RSS
Did not download dependencies: pear/XML_Tree, use --alldeps or --onlyreqdeps to download automatically
pear/XML_RSS requires package "pear/XML_Tree"
No valid packages found
install failed

My next try was to specify installing XML_Tree:

# pear install XML_Tree
WARNING: "pear/XML_Tree" is deprecated in favor of "pear/XML_Serializer"
downloading XML_Tree-1.1.tgz ...
Starting to download XML_Tree-1.1.tgz (4,826 bytes)
.....done: 4,826 bytes
XML error: not well-formed (invalid token) at line 10
Download of "pear/XML_Tree" succeeded, but it is not a valid package archive
Error: cannot download "pear/XML_Tree"
Download failed
install failed

After some troubleshooting, here is how I fixed this problem:

  1. Change directory into the PEAR download cache directory: cd /tmp/pear/cache/
  2. Clear out all the old cache files: pear clear-cache
  3. You should see the downloaded file: XML_Tree-1.1.tgz
  4. Extract the files: tar xvfz XML_Tree-1.1.tgz
  5. Now edit the extracted package.xml file: vi package.xml
  6. Go to line 10 and change the ö to an o; this non-ASCII character is what makes XML validation of the package file fail. Save the file (ESC, then :wq)
  7. Next, remove the old .tgz file: rm -rf XML_Tree-1.1.tgz
  8. We’re going to recreate the package: tar cfv - package.xml XML_Tree-1.1/ | gzip > XML_Tree-1.1.tgz
  9. Next, we’re going to install this package and not use the network: pear install -O XML_Tree-1.1.tgz

XML_Tree is now installed. You should now be able to continue installing XML_RSS normally.
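The heart of the fix is step 6: a single stray ö in package.xml trips the XML parser. If you’d rather not open vi, sed can make the same one-byte fix non-interactively. Here’s a sketch demonstrating it on a stand-in file (the contents below are made up for illustration, not XML_Tree’s real metadata):

```shell
# Create a stand-in package.xml containing a non-ASCII ö
# (\303\266 is the octal escape for the UTF-8 bytes of ö):
printf '<maintainer>Test Auth\303\266r</maintainer>\n' > package.xml
# Replace the ö with a plain o, the same one-byte fix as step 6 above:
sed 's/ö/o/g' package.xml > package.fixed && mv package.fixed package.xml
cat package.xml   # prints: <maintainer>Test Author</maintainer>
```

The substitution is a literal byte replacement, so it works regardless of locale; after that you can repackage and install offline exactly as in steps 8 and 9.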

Google Chart API is very cool Dec 6 2007

Gabriel Serafini 2007

Google’s new Chart API is very cool. I am interested to see how we might use it with Collabofit to do additional charting.

Here’s the code used to generate that graph above:
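(The original snippet didn’t survive the redesign, but the API’s appeal is easy to show: an entire chart is described by a single URL, with nothing to install and nothing to render yourself. The parameter values below are illustrative, not the ones used for the chart above.)

```shell
# Build a Google Chart API URL by hand:
#   cht = chart type (lc = line chart), chs = size in pixels,
#   chd = data points (t: = simple text encoding), chl = axis labels.
CHART_URL='http://chart.apis.google.com/chart?cht=lc&chs=300x150&chd=t:10,25,40,80&chl=Q1|Q2|Q3|Q4'
echo "$CHART_URL"

# Drop it into an <img> tag and Google renders the PNG for you:
echo "<img src=\"$CHART_URL\" alt=\"example line chart\" />"
```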

Very simple, clean, and elegant.

New car insurance site is now up Oct 18 2007

I’ve had this domain name for a while and finally built something on it. The idea is to collect good ways to save money on car insurance. We have USAA for ours, which is nice because the rate is pretty good. Some people need extra help, though, to find the best prices. Hopefully this site will assist them in their search.

Check it out: Car Insurance Connection

It is built using WordPress, and will hopefully pay for itself through Google AdSense ads.

WP-Cache 2.0 – useful for surviving the digg/reddit effect for your WordPress blog Aug 7 2007

Found this useful plugin from this site while working on a site that was in the middle of getting dugg/reddited:

WP-Cache is an extremely efficient WordPress page caching system that makes your site much faster and more responsive. It works by caching WordPress pages and storing them in a static file, serving future requests directly from the file rather than loading and compiling the whole PHP code and then building the page from the database. WP-Cache lets you serve hundreds of times more pages per second, and reduces the response time from several tenths of a second to less than a millisecond.

Get the plugin here: Ricardo Galli, de software libre – WP-Cache 2.0

Some excellent WordPress plugins I found recently – Breadcrumbs and better Links page management Aug 2 2007

I’m working on a new site relating to Gramm-Leach-Bliley Act compliance (it will replace the useless site that is currently up) and was looking for breadcrumb functionality and a better way to manage outputting links from the Blogroll on the links.php page.

I had searched for a good WordPress breadcrumbs plugin a while ago, and the ones I found didn’t really impress me. I’m really happy with the one I ended up finding this time. It’s called Breadcrumb Navigation XT and it does exactly what I was looking for. I’ll be using this probably for any future needs when it comes to breadcrumbs and WordPress.

The second plugin fixes how WordPress currently outputs links via the default get_links_list() function used in most links.php template pages. On the links page I had been using another WordPress function, wp_list_bookmarks(), which only works in WordPress 2.1 and higher and is still undergoing active development and documentation. That function, however, only lets you output the description, and doesn’t appear to include functionality for showing notes.

Since the notes field allows a larger amount of text, and that is what I needed, I still needed to find (or write) a way to output all links, ordered by category, displaying the notes field as well as the link title, URL, image, etc. This is the plugin I found that does just that (after I had written about 80% of the same functionality myself): it shows all the categories that contain links and the links within each category.

Plugin author’s latest post about WordPress 2.1 support: Link Library now supports WordPress 2.1

The site is fast again, thanks to a new and improved SQL database schema Jul 26 2007

For quite a while, the site has been slow. Ever since it hit, oh, around 1.4 million items in its database (now over 2 million), the queries responsible for building the home page view had gotten slower and slower.

The old design

The problem was one of normalization, and the desire to not have data going into two places. The design originally worked this way:

  1. Look up the latest posts. Find all the latest ones that have a distinct feed_id. This was running against the feed_data table, which now has over 2 million rows.
  2. Next, look up the feed information for each feed_id found. Originally, I couldn’t figure out a way to do this all in one query. So this was another 10 individual SQL queries, looking up information about the feed needed to display on the home page (like the name of the feed, URL, etc.) This query did a join of the feed_data with the feed_info tables. This is what took progressively longer and longer, the more rows in the feed_data table.
  3. Next, combine the results of finding the 10 most recent posts from the 10 most recently updated feeds into an array using PHP, then pass those results from the Data Access Object (DAO) to the presentation layer for output on the home page.

This ended up being really, really slow on the homepage load (like over 2 minutes). This was obviously too slow. I implemented caching at a number of layers to mitigate the problem, which made for a very speedy second page load, but the first one was still way too slow.

After consulting with my friend Ben, he agreed that it was an inherently expensive query to run. We agreed that it needed to be re-architected to be able to significantly improve performance.

The new design

The new design required creating a new table dedicated to the home page posts: when an RSS feed is read, the most recent new item is inserted into this new table.

The new process works like this:

  1. RSS feed is read. Any new items are first inserted into the main feed_data table, and the feed_info table is updated as well with a last checked timestamp.
  2. Next, the new table is updated using a MySQL REPLACE command, with the combined feed information as well as the item information.
  3. The complicated 11 queries that used to be run are now replaced by a dead-simple single query that will only ever have to search through the total number of feeds in the system (currently 753 rows), instead of the over 2 million rows before.
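The REPLACE in step 2 is what keeps the new table at one row per feed: it inserts the row when the key is new and overwrites it when the key already exists. A minimal sketch using SQLite’s REPLACE, which behaves like MySQL’s here (the table and column names are made up, not the site’s actual schema):

```shell
# feed_id is the PRIMARY KEY, so REPLACE overwrites the existing
# row for a feed instead of adding a new one.
sqlite3 :memory: <<'SQL'
CREATE TABLE homepage_posts (
    feed_id    INTEGER PRIMARY KEY,   -- one row per feed, ever
    feed_name  TEXT,
    item_title TEXT
);
-- First item seen for feed 42:
REPLACE INTO homepage_posts VALUES (42, 'Example Feed', 'Older post');
-- A newer item arrives for the same feed: the row is overwritten, not appended.
REPLACE INTO homepage_posts VALUES (42, 'Example Feed', 'Newest post');
-- The home page now needs only this one cheap query:
SELECT feed_name, item_title FROM homepage_posts ORDER BY feed_id LIMIT 10;
SQL
```

Because the table can never grow past the number of feeds, the home-page query cost stays flat no matter how many items pile up in feed_data.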

These new changes have made the site much more usable again, and it is way faster. The home page loads in under half a second now, just like it should.

Check it out: – Really Simple News For You

A List Apart: Articles: Conflicting Absolute Positions Jul 17 2007

Nice article on ALA about creating a liquid layout in almost pure CSS, with a fixed-width sidebar and a dynamically resizing right-hand main content area, and no JavaScript needed to resize the page.

All right, class. Using CSS, kindly produce a liquid layout that contains a fixed-width, scrolling side panel and a flexible, scrolling main panel. Okay, now do it without JavaScript.

Check it out: A List Apart: Articles: Conflicting Absolute Positions

Web Devout Jul 11 2007

Some excellent articles here about good web building practices.

Welcome to Web Devout! This site is aimed at promoting the health of the Web by providing web developers and the public with the knowledge and tools needed to help the Web move forward.

Check it out: Web Devout

Thanks for the pointer, Ben.

Home – Find Car Wallpapers, Car Desktops and Car Pictures at Jun 7 2007

New logo

Bruce and I just finished the template for the new site.

Porsche CGT wheel picture that Bruce took

Check it out: Home – Find Car Wallpapers, Car Desktops and Car Pictures at
