Archive for the ‘Development’ Category
As part of getting an upgraded server set up and transferring sites from the older server, I had to re-install PEAR XML_RSS. One of its dependencies is XML_Tree.
Normally, this command should have worked with no problems:
pear install XML_RSS
It downloaded XML_RSS just fine, but then choked on XML_Tree, which was a required dependency.
# pear install XML_RSS
Did not download dependencies: pear/XML_Tree, use --alldeps or --onlyreqdeps to download automatically
pear/XML_RSS requires package "pear/XML_Tree"
No valid packages found
My next try was to specify installing XML_Tree:
# pear install XML_Tree
WARNING: "pear/XML_Tree" is deprecated in favor of "pear/XML_Serializer"
downloading XML_Tree-1.1.tgz ...
Starting to download XML_Tree-1.1.tgz (4,826 bytes)
.....done: 4,826 bytes
XML error: not well-formed (invalid token) at line 10
Download of "pear/XML_Tree" succeeded, but it is not a valid package archive
Error: cannot download "pear/XML_Tree"
After some troubleshooting, here is how I fixed this problem:
- Change directory into the PEAR download cache directory.
- Clear out all of the old cache files.
- You should see the downloaded file: XML_Tree-1.1.tgz
- Extract the files:
tar xvfz XML_Tree-1.1.tgz
- Now edit the extracted package.xml file: go to line 10 and change the ö to an o. This is the character that is causing the XML validation of the package file to fail. Save the file.
- Next, remove the old .tgz file:
rm -rf XML_Tree-1.1.tgz
- We’re going to recreate the package:
tar cfv - package.xml XML_Tree-1.1/ | gzip > XML_Tree-1.1.tgz
- Next, we’re going to install this package and not use the network:
pear install -O XML_Tree-1.1.tgz
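The character fix in the steps above can be sketched as a shell session. This is a synthetic demonstration: the path below is made up for the demo, the real file lives in PEAR's download cache, and the byte value is an assumption (a raw Latin-1 ö inside XML that declares UTF-8 is the classic cause of this parser error):

```shell
#!/bin/sh
set -e
# Demo directory (an assumption, not PEAR's real cache location).
mkdir -p /tmp/xmltree-demo && cd /tmp/xmltree-demo
# Simulate the broken file: a raw Latin-1 o-umlaut (byte 0xF6) inside
# XML that declares UTF-8 is what makes the parser choke on line 10.
printf '<?xml version="1.0" encoding="UTF-8"?>\n<maintainer>M\366ller</maintainer>\n' > package.xml
# Swap the offending byte for a plain "o" instead of hand-editing:
LC_ALL=C tr '\366' 'o' < package.xml > package.xml.fixed
mv package.xml.fixed package.xml
grep 'Moller' package.xml
# Then repack and install offline, exactly as in the steps above:
#   tar cfv - package.xml XML_Tree-1.1/ | gzip > XML_Tree-1.1.tgz
#   pear install -O XML_Tree-1.1.tgz
```

Using tr to rewrite the single bad byte avoids any risk of your editor re-encoding the rest of the file while you fix it.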
XML_Tree is now installed. You should now be able to continue installing XML_RSS normally.
Google Chart API is very cool Dec 6 2007
Here’s the code used to generate that graph above:
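A Chart API graph is generated by a single URL; Google's server returns a PNG built from the query parameters. The values below are illustrative stand-ins, not the original graph's data (cht picks the chart type, chs the size, chd the data, chds the data scaling):

```
http://chart.apis.google.com/chart?cht=lc&chs=250x100&chd=t:10,40,60,80,30&chds=0,100
```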
Very simple, clean, elegant and neat.
New car insurance site is now up Oct 18 2007
I’ve had this domain name for a while and finally built something on it. The idea is to figure out good ways to save money on car insurance. We have USAA for ours, which is nice because the rate is pretty good. Some people need extra help, though, to find the best prices. Hopefully this site will assist them in their search.
Check it out: Car Insurance Connection
It is built using WordPress, and will hopefully pay for itself through the Google Adsense ads.
Found this useful plugin from this site while working on a site that was in the middle of getting dugg/reddited:
WP-Cache is an extremely efficient WordPress page caching system that makes your site much faster and more responsive. It works by caching WordPress pages and storing them in a static file, serving future requests directly from the file rather than loading and compiling the whole PHP code and then building the page from the database. WP-Cache allows you to serve hundreds of times more pages per second, and to reduce the response time from several tenths of a second to less than a millisecond.
Get the plugin here: Ricardo Galli, de software libre – WP-Cache 2.0
Some excellent WordPress plugins I found recently – Breadcrumbs and better Links page management Aug 2 2007
I’m working on a new site relating to Gramm-Leach-Bliley Act compliance (it will replace the useless site that is currently up at grammleachblileyact.com) and was looking for breadcrumb functionality and a better way to manage outputting links from the Blogroll on the links.php page.
I had searched for a good WordPress breadcrumbs plugin a while ago, and the ones I found didn’t really impress me. I’m really happy with the one I ended up finding this time. It’s called Breadcrumb Navigation XT and it does exactly what I was looking for. I’ll be using this probably for any future needs when it comes to breadcrumbs and WordPress.
The second plugin that I found relates to fixing how WordPress currently outputs links using the default get_links_list() function used in most links.php template pages right now. On the XyzAnt.com links page I had used another WordPress function (wp_list_bookmarks()), which only works for WordPress 2.1 and higher and is still undergoing active development and documentation. That function, however, only lets you output the description, and doesn’t appear to include the functionality to show notes. Since the notes field allows a larger amount of text, and that is what I needed, I still had to find (or write) a way to output all links, ordered by category, displaying the notes field as well as the link title, URL, image, etc. The plugin I found does just this (I found it after writing about 80% of the same functionality myself): it shows all the categories that contain links, and the links within each category.
Plugin author’s latest post about WordPress 2.1 support: Link Library now supports WordPress 2.1
For quite a while, RSS2.com has been slow. Ever since it hit, oh, around 1.4 million items in its database (now up over 2 million), the queries responsible for building the home page view had gotten slower and slower.
The old design
The problem was one of normalization, and the desire to not have data going into two places. The design originally worked this way:
- Look up the latest posts: find all of the latest items that have a distinct feed_id. This ran against the feed_data table, which now has over 2 million rows.
- Next, look up the feed information for each feed_id found. Originally, I couldn’t figure out a way to do this all in one query, so this was another 10 individual SQL queries, looking up the information about the feed needed to display on the home page (like the name of the feed, URL, etc.). Each query joined the feed_data table with the feed_info table, and this is what took progressively longer and longer as the feed_data table grew.
- Next, combine the results of finding the 10 most recent posts from the 10 most recently updated feeds into an array using PHP, then pass those results from the Data Access Object (DAO) to the presentation layer for output on the home page.
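The old flow above can be sketched against a toy schema. SQLite stands in for MySQL here, and every table and column name beyond feed_data, feed_info, and feed_id is an assumption, since the real schema isn't shown:

```shell
sqlite3 /tmp/rss2_old.db <<'SQL'
CREATE TABLE feed_info (feed_id INTEGER PRIMARY KEY, name TEXT, url TEXT);
CREATE TABLE feed_data (item_id INTEGER PRIMARY KEY, feed_id INTEGER,
                        title TEXT, posted_at TEXT);
INSERT INTO feed_info VALUES (1, 'Feed One', 'http://example.com/1');
INSERT INTO feed_info VALUES (2, 'Feed Two', 'http://example.com/2');
INSERT INTO feed_data (feed_id, title, posted_at) VALUES (1, 'old post', '2007-07-01');
INSERT INTO feed_data (feed_id, title, posted_at) VALUES (1, 'new post', '2007-07-20');
INSERT INTO feed_data (feed_id, title, posted_at) VALUES (2, 'only post', '2007-07-10');

-- Step 1: the latest distinct feed_ids, scanning all of feed_data.
SELECT feed_id, MAX(posted_at) AS latest FROM feed_data
GROUP BY feed_id ORDER BY latest DESC LIMIT 10;

-- Step 2: repeated once per feed_id found -- the 10 extra queries,
-- each joining the big feed_data table against feed_info.
SELECT fi.name, fd.title FROM feed_data fd
JOIN feed_info fi ON fi.feed_id = fd.feed_id
WHERE fd.feed_id = 1 ORDER BY fd.posted_at DESC LIMIT 1;
SQL
```

With millions of rows in feed_data, step 1's scan plus ten step-2 joins is exactly the 1+N query pattern that made the first page load so slow.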
This ended up being really, really slow on the homepage load (like over 2 minutes). This was obviously too slow. I implemented caching at a number of layers to mitigate the problem, which made for a very speedy second page load, but the first one was still way too slow.
After consulting with my friend Ben, he agreed that it was an inherently expensive query to run. We agreed that it needed to be re-architected to be able to significantly improve performance.
The new design
The new design required creating a new table dedicated to the home page posts where, when a RSS feed is read, it inserts the most recent new item into the new table.
The new process works like this:
- The RSS feed is read. Any new items are first inserted into the main feed_data table, and the feed_info table is updated as well with a last-checked timestamp.
- Next, the new table is updated, using a MySQL REPLACE command, with the combined feed information as well as the item information.
- The complicated 11 queries that used to run are now replaced by a dead-simple single query that only ever has to search through the total number of feeds in the system (currently 753 rows), instead of the over 2 million rows before.
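The new flow can be sketched the same way (SQLite again standing in for MySQL; the home-page table's name and columns are assumptions). Because the feed id is the primary key, REPLACE keeps exactly one row per feed, so the home page query never touches feed_data at all:

```shell
sqlite3 /tmp/rss2_new.db <<'SQL'
-- One row per feed; the feed_id primary key makes REPLACE overwrite
-- the previous "latest item" row for that feed.
CREATE TABLE home_page (feed_id INTEGER PRIMARY KEY, feed_name TEXT,
                        item_title TEXT, updated_at TEXT);
REPLACE INTO home_page VALUES (1, 'Feed One', 'old post',  '2007-07-01');
REPLACE INTO home_page VALUES (1, 'Feed One', 'new post',  '2007-07-20');
REPLACE INTO home_page VALUES (2, 'Feed Two', 'only post', '2007-07-10');

-- The dead-simple home page query: at most one row per feed.
SELECT feed_name, item_title FROM home_page
ORDER BY updated_at DESC LIMIT 10;
SQL
```

The write side pays a small extra cost on every feed check, but the read side drops from an 11-query join over millions of rows to a single scan of a few hundred rows.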
These new changes have made RSS2.com much more usable again; it is way faster, and the home page now loads in under half a second, just like it should.
Check it out: RSS2.com – Really Simple News For You
Check it out: A List Apart: Articles: Conflicting Absolute Positions
Web Devout Jul 11 2007
Some excellent articles here about good web building practices.
Welcome to Web Devout! This site is aimed at promoting the health of the Web by providing web developers and the public with the knowledge and tools needed to help the Web move forward.
Check it out: Web Devout
Thanks for the pointer, Ben.