More than ever these days I'm living in the cloud. Google has my mail, Apple has my calendar, del.icio.us has my bookmarks, Flickr has my photographs, and Amazon S3 has my files.
Day-to-day I rely on a lot of cloud infrastructure, and while I'm old enough to remember having to wade through card catalogues, and still know five fun things to do with microfiche, I no longer go to the library when I need a journal article. NASA's ADS and Cornell's pre-print archive provide instant, semantically tagged access to both the historic and the latest literature. I haven't physically set foot in a library in several years now.
I've moved away from my old arrangement, where I had a desktop machine in the office and a laptop for travelling. My main machine is now one of the new 13-inch aluminium MacBooks. When I'm in the office I hook this up to one of Apple's new LED displays, off which hang several 500GB disks for backup and scratch space, a full-sized keyboard, and a mouse. So whether I'm on a plane, a train, or sitting in my office, everything is just the same. The screen gets a bit bigger or smaller, and my desktop background changes, but that's about it.
That said, when I'm travelling long haul I've even started to leave the MacBook behind rather than lug it around. I'm using my Dell mini 9 netbook as a thin client to the cloud and, at least for short trips, this seems to be going fairly well.
I'm on tenterhooks to see whether Apple is going to venture into netbook territory; after all, I've been waiting for a replacement for my old 12-inch Powerbook for a long time now. However, if an officially sanctioned Apple netbook doesn't show up in the next few months I might get round to installing OS X on my mini 9. Then again, I might not. It's surprising how tolerable Windows XP turns out to be, at least if all you're using it for is running Google Chrome and some web applications.
But there is a dark side to the cloud: it isn't always there. And here I'm not talking about the offline problem; after all, that's what Gears is there to fix...
Recently I had my AdSense account shut down. Quite apart from the loss in future revenue, Google also locked me away from my data: the information about which ads sold, on which page, and when. I'm paranoid about backups, and expect other people to be too, but that isn't data I had elsewhere. While I could have exported it, I didn't, mainly because it would be fairly hard to analyse outside of Google's own infrastructure.
Google also hosts my email and my blog, and its RSS feed now that they've acquired Feedburner, which puts them locking me away from my own data in a very different light. Blogger doesn't have an export function, and it's not alone. With Yahoo in trouble I've started to worry about all the pictures I have hosted on Flickr; they also don't have any way to back up your content.
To be clear, I'm not just talking about the raw content. Especially in the case of Flickr, the meta-data attached to the content, the date, time, geo-location and associated tags, is just as important as the content itself. If you can't export the content with the meta-data attached, it's hardly worth doing. Even worse, there are services where taking your own data out of the context of the service makes it worthless. Exporting my data from Twitter, taking it out of the Twitter timeline, is fairly pointless.
Which of course brings me to the well trodden path of data portability. My calendar, address book and email are all portable because they are in standard formats. I can easily migrate between services, and some of those services even encourage me to do so...
Other content is not as portable, and that is of course because there aren't any standards to make it portable. How would you go about writing the export service for Flickr, or Blogger? Especially one that made sure it exported all the meta-data in a decently digestible format. Who would implement the code to read that format? Could the network even support thousands of users making a run on Flickr, for instance, and grabbing all their archived pictures?
This is a problem we're all going to face as our lives, and the data trail we generate, move into the cloud. Because that's our data I'm talking about. It doesn't belong to the companies that host it. They may be providing the services that display it, but the data is ours. They really need to remember that...
The often deranged postings of yet another hacker, pretending to be an Astronomer, pretending to be a hacker who has written a book or two for O'Reilly Media.
Wednesday, November 19, 2008
Solving the iPhone Calendar Colour Problem
A long standing irritation of mine, and something many people have stumbled across when syncing multiple calendars to their iPhone, is the calendar colour problem. While the calendar entries are synchronised correctly, the entries show up in the wrong colours. This is actually far more irritating than you might immediately suspect...
The problem is, theoretically at least, solved if you use MobileMe to sync rather than syncing directly from your Mac, but at least for some people this doesn't seem to resolve the problem.
It currently looks like, if you were a .Mac subscriber and your calendars were already synced to .Mac, you still get randomly assigned colours when the calendars sync to your iPhone from MobileMe. Interestingly, however, the calendar colours are correct inside the MobileMe web application. This recently suggested a fix to me...
Go to MobileMe and select each calendar in turn. Hit the calendar actions button, which is just to the right of the Month drop-down at the top of the calendar, and select Calendar Info. You'll get a pop-up with the calendar name and a selector to choose the colour for the calendar. Re-select the correct colour and hit OK to save the choice. Now, after syncing with your iPhone, the calendar will show up with the correct colour, even if the colour you just selected was the same colour you always had.
Irritating, but there you go...
Labels: Apple, Calendar, iCal, iPhone, iPod touch, MobileMe, Web Application
Tuesday, November 18, 2008
The Tablet PC Trial
I've had a tablet PC on loan from the Open University for the last six months or so and, as it's getting shipped back to them tomorrow, I thought I'd bounce a few ideas around about how I got on with it...
The OU lent me a Toshiba Tecra M7, which is about two years behind the cutting edge and had fairly lacklustre reviews even back then. However, I know at least one person who, despite the relatively poor uptake of tablet PCs in general, swears by theirs and wouldn't go back to a normal laptop, so I was really interested to get my hands on one for an extended test.
However, even after six months with the Toshiba, using it for all my OU teaching support and marking, I'm not a convert. In practice I found the tablet an ergonomic nightmare to use. While in the end I worked out a method of propping the tablet and my elbows up to different levels using stacks of books, so that I could use it for several hours at a stretch to mark scripts, it was hardly an elegant solution. Using the tablet on its own for any length of time severely exacerbated my RSI, making it almost entirely un-portable.
I don't really want to get into issues specific to the tablet model I was testing, though placing the power jack directly under where you'd normally want to put your elbow was an act of twisted genius; suffice to say there were many.
However I can see why the OU lent it to me, in theory being able to write comments, draw freehand diagrams, and scribble equations onto student work allows a much more flexible approach to marking work submitted electronically by the students. In practice the tablet only partially lives up to what, in theory, it should be easily capable of...
It really didn't help that the software integration of the tablet into the OS is also pretty poor. Writing large chunks of text you intend to be read by the OCR software is a laborious process, and spinning the display around so I could use the keyboard to do so wasn't really practical, or particularly convenient. I'll draw a polite veil over the possible comments I could make about painstakingly spelling out words on the software keyboard.
Ergonomically, therefore, the tablet PC was a total bust; I'd almost go as far as saying it was unusable. It was certainly almost entirely un-portable, and it counts as one of the heaviest laptops I've ever had the misfortune to carry around. If you've followed the blog for any length of time, you'll know that I subscribe to the notion that there are two core demographics for laptop users: the road warriors, who would kill for another half hour of battery or half a kilogram less of laptop, and the power users, who desperately want another couple of inches of screen real estate and another hundred gigabytes of hard drive.
I definitely fall into the road warrior category; the tablet PC I had on trial weighed three or four times as much as the Dell mini I recently picked up to use while travelling.
So it's not exactly with a heavy heart that I'm saying goodbye to my loaned PC. I can see the problem the tablet PC is trying to solve, but at least for me, it doesn't even come close to living up to the hype.
Monday, November 17, 2008
Worryingly senior...
Astronomy is one of the more computing-intensive of the sciences, and historically we've pushed the boundaries of the available computing resources. But we're also dependent on a thinning cadre of dedicated hero programmers...
Despite industry-led criticisms of the hero programmer paradigm, such software-scientists are a requirement. Building complicated bespoke systems to do science takes domain knowledge, not just of software engineering, but also of the underlying science behind what you want to accomplish. Simply put, scientists, and the institutions that employ them, can't afford to support the large structured software teams that would be necessary if those hero programmers didn't consistently punch above their weight. Scientists also generally aren't that keen to get involved in the more formal software design processes that larger teams would entail.
Unfortunately, those same scientists have historically been reluctant to provide the support and career advancement that would be required to keep people like me around, sometimes through a misguided belief that software is easy, and that robust software can be produced by any wet-behind-the-ears graduate student.
There is of course a huge oversupply of hopeful candidates for any long-term posting in astronomy, but if you talk to software-scientists at those watering holes where we usually congregate, like the recent ADASS conference in Quebec, you'll find more than the expected amount of doom-and-gloom going around. My situation isn't unique; I'm not the only worryingly senior programmer living contract-to-contract...
Of course up until recently, despite our complaints, it's been other people worrying how senior we've become, not us. Most of the programmers that have managed to stick around inside academia for any length of time, and there are many that just come and go, are usually fairly good at what they do. That means they knew they could go out and get a 'real job', probably paying more than they were earning in academia, when or if it came to it...
Unfortunately, amongst other things, the current economic turmoil has taken away our comfort blanket and left us very much out in the cold. Although, perhaps, with a better winter coat and a set of decent boots than many these days. Nonetheless, it's not a situation that's going to encourage people to specialise in software.
I don't see any of this changing in the near future. In fact I see the situation getting worse; the current generation of students are further away from the software, and the underlying hardware, than I've ever seen. A culture of black boxes is very much in evidence. But you have to ask: what happens when the black boxes break?
"Our ability to provide comprehensive software suites to our users hinges on our ability to hire staff experienced in both scientific data analysis and software engineering... In the absence of such people, much larger teams containing both astronomers and industry programmers under formal project management need to be formed." - Economou et al. 2004
However, with the data reduction systems and the telescopes themselves becoming more and more automated, replacements for those hero programmers are becoming hard to find, both because of the lack of experienced developers with an appropriate astronomical background, and because it's not really seen as a respectable profession...
"We have found it extremely hard to hire good people to work on astronomical software. There is no career path within the universities for software specialists, despite the fact that there's no logical distinction between building hard- and soft-ware instruments. Smart and sensible graduate students, desirous of a career in astronomy, simply don't choose to specialise in the software required to reduce modern observational datasets." - Lupton et al. 2001
Which of course is the reason we're having to replace those hero programmers in the first place; without some sort of established career path the astronomical software community is suffering from 'leakage' around the edges. My own situation is typical: I'm generally described by faculty as a "worryingly senior" fellow.
Labels: Astronomers, Programming, Scientists, Software, University
Friday, November 14, 2008
The non-arrival of the (next) Dell mini 9
So after buying one of Dell's new netbooks, the Inspiron mini 9, for myself as a travel laptop and living with it for a month or so, my wife was sufficiently impressed with it to order one herself.
She placed the order on the 16th of October with an expected delivery date of the 31st of October. A couple of days before it was due to arrive her expected delivery date was put back until the 17th of November. Today, a couple of days before it was due to arrive, she received another revised delivery date of the 26th of November. That's a full month lead time now, and two slips in the shipping date.
She isn't happy, and since the reason she was getting a new laptop in the first place was an incident involving a dog, a baby, a low table, a full cup of milky tea and her previous Dell laptop, you can probably figure out why. Her unhappiness hasn't really been helped by the fact that my original mini 9 was delivered over a week early...
This has started me wondering why the shipping dates for Dell's "off the shelf" mini 9s are slipping, and whether this has anything to do with their deal with Vodafone. Are UK-destined netbooks having to be diverted to fulfil Dell's obligation to its partner? Is Vodafone putting pressure on Dell to slow down shipments of stock netbooks to encourage sales of their own WWAN-enabled version? You have to wonder...
Update (17/Nov): Well you have to be reasonably impressed by that. Having spotted my complaint on the blog someone, somewhere, did something. One phone call and an email later, the laptop shipped. We all know things go wrong, and delays happen. But if you point out a problem, and the problem gets fixed, that's good customer service.
Update (19/Nov): The laptop has now been delivered.
Missing Google ads?
So if you follow the blog by actually going to the website, rather than getting posts via my RSS feed, which accounts for most of my readership anyway, you'll have noticed something over the last few days: no advertisements. My AdSense account has been disabled.
At this stage I'm not entirely sure what's going on. I'm presuming it's something to do with out-of-the-ordinary click activity originating on the site and, considering Google's track record with such things, I don't really anticipate finding out either way, even if I do by some sort of miracle get my account reactivated.
So, for now at least, enjoy your Daily ACK advert free...
Update: ...and that, is very much, that:
...after thoroughly reviewing your account data and taking your feedback into consideration, we have re-confirmed that your account poses a significant risk to our advertisers. For this reason, we are unable to reinstate your account.
Monday, October 27, 2008
This is the Earth I was looking for...
Well, I wasn't waiting all that long, as today Google released Google Earth for the iPhone.
Sure enough there is geo-location support and the controls are fairly intuitive, including the use of the accelerometer to control your viewing angle which is a fairly neat trick.
Disappointingly, at least from my point of view, there isn't any support for Google Sky. Or at least there isn't any support yet; I'm still hopeful. Time to start lobbying the people I know in Google. I guess the Google Sky and MS WWT tutorial at ADASS will be a good place to start; I'm already talking there anyway.
To get Google Earth on your iPhone, visit the App Store in iTunes...
Labels: Apple, Earthscape, Google, Google Earth, Google Sky, iPhone, iPod touch
Sunday, October 26, 2008
Living with the Dell mini 9 and Apple's iDisk
This is another quick note about living with Dell's new netbook...
I'm storing most of my files off-board in the cloud using Jungle Disk and Amazon S3. But since it's been around longer and I've got a bunch of files on it, at least for now, I also needed to mount my Apple iDisk. While there is the iDisk Utility for Windows XP from Apple, it grates that you have to use a separate bit of software for something like this. Fortunately, you don't actually need it...
Like a lot of seemingly proprietary bits and pieces from Apple, the iDisk isn't; it's basically just a simple WebDAV share, and Windows has built-in support for that. All you need to do to connect to your iDisk is go to My Computer, click on Tools > Map Network Drive, enter \\idisk.mac.com\username in the pop-up, and select an unused drive letter. Enter your username and password when asked, and your iDisk should now show up as a network disk in Windows Explorer.
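If you'd rather skip the GUI, the same mapping can be made from an XP command prompt, since XP's WebClient service accepts WebDAV paths with net use. This is a sketch rather than anything Apple documents; the drive letter is arbitrary, and you should substitute your own member name for username.

```
rem Map the iDisk WebDAV share to drive Z: (XP routes http:// paths
rem through its WebClient service; the \\idisk.mac.com\username UNC
rem form used above should work here too).
net use Z: http://idisk.mac.com/username /user:username

rem Disconnect the mapped drive again when you're finished with it.
net use Z: /delete
```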
Friday, October 17, 2008
Living with the Dell mini 9 and ISO images
Just over a month ago I picked up one of Dell's new netbooks...
I rarely use the DVD drive on my Macbook; generally there is only one reason I need to fire it up, and that's to install commercial software. Which is exactly what I need to do with the mini today.
However, rather than go out and buy an external USB DVD drive, I decided to work around the mini's lack of an internal drive by using my Macbook to create an ISO image, transferring the ISO onto a USB memory stick, and then mounting it directly on my mini 9.
Inserting the CD into my Macbook, I opened up a Terminal window and unmounted the disk from the command line,
$ diskutil unmountDisk /dev/disk1
Then I created an ISO file with the dd utility. You'll either need to do this,
$ dd if=/dev/disk1 of=image.iso bs=2048
or this,
$ dd if=/dev/disk1s0 of=image.iso bs=2048
depending on how the disc is partitioned. You can test the ISO image by mounting the new file from the command line,
$ hdid image.iso
or simply by double-clicking on it in the Finder. If all is well, copy the ISO image onto a USB memory stick and plug it into your netbook.
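If you want to convince yourself of what dd is doing before pointing it at a real disc, the same invocation works against an ordinary file; dd is just a raw block copy, with bs=2048 matching the ISO 9660 sector size. A minimal sketch, using a random scratch file in place of the /dev/disk1 device:

```shell
# Create a small scratch file standing in for the optical disc device.
dd if=/dev/urandom of=disc.img bs=2048 count=16 2>/dev/null

# Same invocation as for a real disc: a raw block copy in
# 2048-byte blocks, the ISO 9660 sector size.
dd if=disc.img of=image.iso bs=2048 2>/dev/null

# The image should be byte-for-byte identical to the source.
cmp disc.img image.iso && echo "identical"
```

The command prints "identical", which is the whole point: the ISO you carry to the netbook is an exact sector-level copy of the disc.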
If your mini is running Linux, you've now got everything you need. Log in as root, create a directory to use as your mount point, and then mount the image onto it as follows,
# mkdir /mnt/iso
# mount -t iso9660 image.iso /mnt/iso/ -o loop
On the other hand, if your mini is running Windows XP like mine, there isn't anything pre-installed that will let you mount an ISO image. Fortunately, there is an unsupported, and more or less unadvertised, freeware utility from Microsoft that lets you do just that: the "Virtual CD-ROM Control Panel for Windows XP" allows you to map an ISO image and make it look just like a normal drive to the operating system.
At which point you should be able to install your software as normal and even, for those bits of software that demand the original disk in the (non-existent) drive, run it as if a disk were present by leaving the ISO image mounted as a mapped drive. Although, depending on how picky your particular bit of software's thrice-cursed DRM turns out to be, your mileage may vary on that one...
Thursday, October 16, 2008
Three's 3G Wi-Fi router
Back in May I noted that Three were thinking about rolling out a line of home routers...
So this is just to note that they're now offering their D100 Wireless Router for £69.99 when purchased with one of their USB broadband dongles, which I recently had for review. The nice thing here is that, unlike some other 3G routers, this one uses the USB dongle itself to provide the network connection, which means you can pull the dongle out and take it with you when you're travelling.
It's nice to see this sort of technology trickling down into the consumer market at last. Of course, I'm still more interested in getting my hands on a femtocell. Is there any network that's even doing a closed beta trial of femtocells in the UK?
Update: Unboxing video from KCJH (via 3mobilebuzz)...
Labels:
Broadband,
D100,
Femtocell,
Home Router,
Router,
Three,
WiFi,
Wireless,
Wireless Router
Monday, October 06, 2008
The Mini 9 from Vodafone
So as I've mentioned before Dell's new netbook, the Inspiron mini 9, is going to be available for free with an 18 month mobile broadband contract on Vodafone here in the UK.
The launch date is the 13th of October, but the netbooks have already started to arrive at Vodafone's offices in Newbury. Disappointingly it looks like the rumours were correct, and I won't be able to just install a WWAN card in the off-the-shelf mini 9 which I picked up a couple of weeks ago.
Perhaps I should grab another from Vodafone, and then install Mac OSX on my current mini 9?
Tempting, but I'd really like to see how integrated the WWAN is into Windows before signing up for an 18 month contract. Or, thinking about it, whether anyone can get Vodafone's card to work under OSX if it comes to that. One of the things that seriously put me off the HSDPA USB dongle I had on loan from 3 was the hassle involved in actually using it...
Update: Dean Bubley has a cost analysis of Vodafone's offer, comparing it against an off-the-shelf mini direct from Dell with a 3G dongle. To cut a long story short it's more expensive which, at least to me, isn't exactly unexpected. You're paying, or at least being charged, for the extra convenience of having things built-in rather than having to carry around extra "stuff". Essentially you're paying an early-adopter premium.
I must admit I'm still very disappointed that I was unable to specify a vanilla 3G module when I bought my mini 9 directly from Dell. To be honest I wouldn't even mind having to buy a WWAN card afterwards from Dell's co-marketing partner, in this case Vodafone, and slot it into my mini myself. However if the off-the-shelf minis really are lacking the internal antenna infrastructure needed to support the card, that's probably a non-starter. Oh well...
Netbooks causing a stir at the Vodafone offices... (photo: jonmulholland, via Flickr)
The mini 9 running OSX (credit: UNEASYsilence)
Sunday, September 28, 2008
Exploding custard
Not something I'd normally talk about, but since I was not more than a couple of miles away at the time I thought I'd point everyone towards the exploding custard truck near Chagford yesterday...
Fire crews raced to the blaze after being alerted but the desserts were too well alight and the whole lorry was consumed in just 20 minutes.
...you couldn't make this stuff up.
(Photo credit: The Telegraph/SWNS)
Friday, September 26, 2008
An ADS to KML mashup
The idea of ADS to KML came up over morning coffee on the last day of the .astronomy meeting, and by the close of the conference I had most of it hacked together...
What am I talking about? A lot of papers on ADS now have links to the SIMBAD database for further information on the objects they discuss. For instance I was recently a co-author on an exo-planet paper which links to the relevant objects in SIMBAD...
The mashup at that point was obvious. Do an ADS query and look for all the papers with links into SIMBAD, then do a series of follow-up queries on SIMBAD and grab all of the objects mentioned in the papers. Then generate a KML file of your publication history, which you can either display directly in Google Sky, or embed into a Google Maps for Sky as I've done above.
Of course not all papers reference objects, and not all papers with objects have SIMBAD links, especially older papers. None the less, having run my script to generate a KML file for several colleagues now it actually gives a fairly good representation of their research interests.
You can grab the Perl source code and have a play around with it yourself; you'll need my Astro::ADS module, which you can grab from CPAN.
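The actual script is Perl, but the KML-generation step at the end of the pipeline is simple enough to sketch. Here's a hedged Python version that assumes you've already done the ADS and SIMBAD queries and ended up with a list of (name, RA, Dec) tuples; the `objects_to_kml` helper is my invention, and the longitude = RA − 180 mapping is the convention I believe Google Sky's KML uses.

```python
# Hypothetical sketch: turn resolved objects into a KML file of
# placemarks for Google Sky. The SIMBAD-lookup stage is assumed done;
# in Sky-mode KML, longitude = RA - 180 and latitude = Dec.
from xml.sax.saxutils import escape

def objects_to_kml(objects):
    """objects: iterable of (name, ra_deg, dec_deg) tuples."""
    placemarks = []
    for name, ra, dec in objects:
        placemarks.append(
            "  <Placemark>\n"
            "    <name>%s</name>\n"
            "    <Point><coordinates>%f,%f,0</coordinates></Point>\n"
            "  </Placemark>" % (escape(name), ra - 180.0, dec)
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            '<Document>\n%s\n</Document>\n</kml>' % "\n".join(placemarks))

# One hypothetical exo-planet host star as an example input.
kml = objects_to_kml([("HD 189733", 300.18, 22.71)])
```

Point the resulting file at Google Sky (or a Maps for Sky embed) and each paper's objects show up as push pins.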
You could imagine several ways to extend my quick hack. If you had a large enough group of astronomers, and therefore a large enough number of papers, you could produce heat maps of the sky instead of using simple push pins. You could cross-correlate your own publications with that of a group or institute where you're thinking of applying for a job, or the publication output of a survey team with the footprint of their survey...
Comments welcome, but yes, I already know it's an interesting but essentially pointless hack. I mean other comments...
Publications for Allan, A. as KML
Labels:
ADS,
API,
Astronomy,
Google,
Google Maps,
Google Sky,
Hack,
KML,
NASA,
Perl
Tuesday, September 23, 2008
What are you waiting for?
So one of the more interesting posters here at the .Astronomy conference is from the WETI Institute. Who ask, "What are you waiting for?"...
More from Stuart and Chris...
Waiting is a notoriously underappreciated method in our efforts to search for extraterrestrial intelligence. It is cheaper and less stressful than any other type of research. It is also environmentally friendly and does not cause global warming, terrorism or nuclear conflicts.
Monday, September 22, 2008
.astronomy
I'm currently sitting in the .astronomy conference in Cardiff, talking about astronomy and the new media. You can watch along with us over the next few days; we're broadcasting live on Ustream, Twitter and slightly delayed on YouTube...
Update: My conference talk is now online...
(Video via Ustream.TV)
Tuesday, September 16, 2008
GDD08: Wrapping Up
After picking up my free Google t-shirt, I'm back downstairs in Space Invaders wrapping up the day with the closing keynote...
Closing keynote...
After announcing the launch of the UK Developer Blog, we're turning around and heading back upstairs for beer, food and random fun...
Developer Day Wrap-up Video
We're done for the day, another year, and another Google Developer Day. Pictures from throughout the day can be found on my Flickr photo-stream...
GDD08: The Google Web Toolkit
I was a bit undecided about the last session; in the end I decided to go to Google Web Toolkit: The Technical Advantage, given by Sumit Chandel.
Sumit Chandel talking about GWT
What are the advantages of GWT? Firstly you get faster AJAX applications; it's faster than hand-written code because the compiler takes care of cross-browser issues for you. You get free optimization, but of course that doesn't mean you can throw general good programming practices out of the window, so inefficient algorithms in GWT are still going to be inefficient after optimization.
The next advantage is deferred binding. Why give the user more than they asked for? Users only download what they need to run your application. The compiler makes different bindings for your application at compile-time and chooses the right one later.
Another advantage is that, with deferred binding in place you get to skip the browser quirks, you only need to code the abstraction of a given widget rather than having to handle them by hand.
Next, no more memory leaks. It's almost impossible to trace memory leaks in Javascript because there are so many ways to cause them. So provided you only code in GWT, this shouldn't happen to you.
GWT also means that your application gets history support, an implementation of the RSH protocol...
You also get code reuse through design patterns, something that, as a Perl person, I'm not sure I believe in all that much. Although possibly that's just because I think loosely typed languages are a good idea and have never really understood Java programmers' obsession with the Gang of Four and patterns.
Another advantage is (supposedly?) faster development with IDEs and code support. Now here again, I'm not sure. I've never really been sold on development environments in general. I know good people who swear by them, and good people who think they're horrible. Perhaps I'm getting old?
Next advantage is proper testing of your AJAX application, and debugging with hosted mode. This is a definite advantage, testing AJAX applications, or Javascript code in the browser, is really hard.
Moving on, we're talking about what's new in GWT 1.5. Released at the end of August it includes Java 5 support, easier interoperability with JavaScript using JSO overlays, enhanced DOM class for full specification compliance and better application performance.
...and we're done.
Labels:
Developer,
Developer Day,
GDD08,
GDD08UK,
Google,
GWT,
Sumit Chandel,
Tutorial
GDD08: What's New in Geo
After lunch I decided to skip the code labs and head for What's new in Geo with Jean-Laurent Wotton and Russell Middleton. Which means, oddly enough, I'm back in Donkey Kong...
Jean-Laurent Wotton and Russell Middleton
Russell kicked the session off with a Google Maps introduction to get everyone up to speed with the API. Handing over to Jean-Laurent we're being shown how to use the Maps geocoding service.
Moving on from the introductory Maps material we're talking about cool new features. First up is the AJAX Search API, which has actually been around for a while...
Next up is Static Maps API, which lets you embed a Google Maps image on your webpage without requiring JavaScript or any dynamic page loading, and last week started serving satellite imagery as well as the normal map type. Interestingly there is also a Static Map Wizard to allow you to build a (moderately) sophisticated map without any knowledge of coding.
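Since the Static Maps API is just an image URL, you can build requests by hand. A minimal sketch below; the endpoint and parameter names (center, zoom, size, maptype, key) are the 2008-era API as I remember it, so treat them as assumptions rather than gospel.

```python
# Sketch of hand-building a Static Maps request URL. Parameter names
# are assumptions from memory of the 2008-era API, not checked against
# the docs; "YOUR_KEY" is a placeholder for a real Maps API key.
from urllib.parse import urlencode

def static_map_url(lat, lng, zoom=12, size=(400, 300),
                   maptype="satellite", key="YOUR_KEY"):
    params = {
        "center": "%f,%f" % (lat, lng),
        "zoom": str(zoom),
        "size": "%dx%d" % size,      # width x height in pixels
        "maptype": maptype,          # satellite imagery arrived last week
        "key": key,
    }
    return "http://maps.google.com/staticmap?" + urlencode(params)

url = static_map_url(50.72, -3.53)  # roughly Exeter
```

Drop the resulting URL into an img tag and you have a map with no JavaScript at all, which is the whole point.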
Now Russell is talking about the Flash API, which lets you write the code in ActionScript 3, compile it against the Google interface library and output a SWF containing the Map. I'm not a Flash guy, no pun intended, but it looks fairly solid.
Jean-Laurent and the Google Earth API
Back to Jean-Laurent and the Google Earth API which was introduced a few months ago. Although of course, as a Mac user, I still can't get at the Earth API, and there doesn't seem to be any news on the arrival of Linux and Mac versions of the plug-in as yet. Cool demo though...
Next is Google's Street View Service and how to display panoramas both in and outside the Maps interface. Also pretty cool, although it's not yet possible to overlay anything on top of the panorama.
Moving on, the final new feature is location detection. Until recently the user had to centre and zoom to their own location themselves; the solution is to detect the user's location automatically using the Maps AJAX API. The API now automatically tries to geocode the user's IP address and, if successful, makes the location available to the application, along with the city, country, country code and region.
Next up is image overlays, and how the Google Maps interface can be used to navigate custom images by defining a custom overlay.
Finally, we're moving on to KML and network links, where as it happens, I'm on fairly solid ground so to speak...
...and we're done.
Update: Except we're not. Jean-Laurent and Russell have handed over to Angela Rele from the Met Office, who is talking about using Google Earth to show the global impacts of climate change, and the Google Outreach project.
Labels:
Developer,
Developer Day,
GDD08,
GDD08UK,
Google,
Google Gears,
Russell Middleton,
Tutorial
GDD08: What's New in Gears
After the break I'm back in Donkey Kong and listening to What's new in Gears with Aaron Boodman.
Aaron Boodman talking about Gears
The point of Gears is to add functionality to web applications, but Gears isn't just about "offline"; what Google is trying to do is expose the capabilities of the local machine, whether that's your desktop or your mobile phone, to your web applications.
Every Google Chrome installation has Gears pre-installed, but Gears now supports IE, Firefox, Opera, Safari as well as Chrome. Although the Safari port was only launched yesterday. However the latest Android build also comes with a Gears stub, not full support, but it is coming soon.
We're spending some time talking about Gears' Desktop API and shortcut icons, and the File System API. The file system allows multi-file selection, filtering by extension or mime-type, native OS look-and-feel, and makes sure the user has full control...
Obviously, once you've selected a file you want to do something with its contents, and web applications normally can't. Which is what the Blob is for: it's a generic interchange format.
Next up is the Resumable Upload API, which sits on top of the Blob API and is apparently now live on YouTube. You can in theory parallelize the upload, but most browsers have a fairly low per-domain connection limit, so you can only parallelize uploads up to that limit.
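The idea is easy to sketch in pseudo-server-free form. This is purely illustrative, not the actual Gears or YouTube API: `upload_chunk` is a stub standing in for the real HTTP call, and resuming just means skipping chunk indices already recorded as done.

```python
# Illustrative sketch only: chunked upload with a capped number of
# parallel connections and resume support. upload_chunk() is a stub
# for the real HTTP call; nothing here is the actual Gears API.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4
CONNECTION_LIMIT = 2  # browsers cap per-domain connections

def split_chunks(data, size=CHUNK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def upload_all(data, upload_chunk, done=None):
    """Upload chunks in parallel, skipping any already in `done`."""
    done = done if done is not None else set()
    with ThreadPoolExecutor(max_workers=CONNECTION_LIMIT) as pool:
        futures = {}
        for index, chunk in enumerate(split_chunks(data)):
            if index in done:
                continue  # already uploaded before the interruption
            futures[index] = pool.submit(upload_chunk, index, chunk)
        for index, future in futures.items():
            future.result()  # wait, re-raising any upload error
            done.add(index)
    return done

# Pretend chunk 0 was sent before we were interrupted, then resume.
sent = {}
def fake_upload(index, chunk):
    sent[index] = chunk

done = upload_all(b"hello world!", fake_upload, done={0})
```

The per-domain connection limit shows up as `max_workers`: more threads than that wouldn't actually buy you any extra parallelism in a browser.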
I'm going to use Chrome, because they told us to use Chrome as much as possible. But it does work on the other browsers... Perhaps one of the coolest of the new APIs is the GeoLocation API, which can make use of on-board GPS, cell-tower and Wi-Fi access point triangulation, and developers can implement plug-ins to provide more methods of location. It should degrade cleanly; the API will provide its best guess of the user's location to your code.
What's next for Gears? More of the same: continuing to unlock the capabilities of the host system.
...we're out of slides, and the floor is open for questions.
Are there any plans to allow applications to share data between domains? This is a problem Google has; initially they thought about sharing databases, but that seemed like a recipe for disaster. What Gears has instead is Cross-origin Workers, a secure solution to the cross-origin restriction policy of the browser.
...and we're breaking for lunch. Still not sure whether I'll end up in a code lab or the main tracks for the rest of the afternoon.
Labels:
Aaron Boodman,
Developer,
Developer Day,
GDD08,
GDD08UK,
Gears,
Google,
Google Gears,
Tutorial
GDD08: A Deeper Look at Google App Engine
It was a long walk between the keynote room, dubbed Space Invaders, and the App Engine talk here in Donkey Kong, and Google has set up a number of feeding stations along the way for weary developers...
Mano Marks talking about App Engine
But I'm now in "A Deeper Look at Google App Engine" given by Mano Marks.
We've got the first estimate of how much App Engine is going to cost above and beyond the amount Google is giving away for free: about US$40 if you use double the traffic in your preview allocation. There is also support for cron'd jobs, SSL and languages other than Python, presumably including the already semi-public effort to port Perl to App Engine, coming soon.
After a brief discussion about what Mano can't talk about, mostly when new languages are coming to App Engine and what those languages will be, we've dived directly into the code, and we're looking through the example that will be used in the App Engine code lab this afternoon. Which I still haven't decided whether I'll go to yet...
We're talking about Bigtable, the storage mechanism underlying App Engine, and Mano is really trying to emphasize that it's not a relational database, it's an object-oriented (schema-less) database.
After running through request handlers and entities, we're now talking about counters, which expose one major difference between a relational database and a distributed datastore like Bigtable. By design Bigtable doesn't keep counts, so counting means scanning every entity row. Google is encouraging developers to create a separate counter entity that you increment every time an entity is inserted, and decrement every time one is removed. However if you're doing frequent updates you'll end up with requests queuing up to update that one counter. The solution is to use a sharded counter: you create a number of counter shards, and when you go to increment the counter you pick a shard at random. Mano is now running through how this works in practice...
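The pattern itself fits in a few lines. This sketch uses a plain in-memory dict as a stand-in for Bigtable and memcache, since the storage isn't the point; the sharding logic is — random shard on write so increments don't contend, sum across shards on read.

```python
# The sharded-counter pattern, sketched against a plain dict instead
# of Bigtable/memcache: writers pick a shard at random so increments
# don't all contend on one row; reads sum across every shard.
import random

NUM_SHARDS = 20
shards = {}  # stand-in for one datastore entity per (name, shard)

def increment(name):
    key = (name, random.randint(0, NUM_SHARDS - 1))
    # In App Engine this get-and-put would run in a transaction.
    shards[key] = shards.get(key, 0) + 1

def get_count(name):
    return sum(v for (n, _), v in shards.items() if n == name)

for _ in range(1000):
    increment("hits")
```

Reads get more expensive (one fetch per shard, hence the memcache layer in Mano's version), which is the trade you make for write throughput.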
Mano is showing an implementation of sharded counters using Bigtable and memcache. I'm wondering why this isn't available as a default Google library, so it just becomes the way counters are done with Bigtable..?
...and we're out of slides. Opening the floor to questions, the first one is exactly that: why aren't counters built into Bigtable? The answer is "good question". They're trying to keep the environment as clean a Python environment as possible, but I'm not entirely convinced that answers the question.
Interestingly, the recommended workaround for the lack of cron support is to set up a remote call that polls a known end point inside App Engine periodically. However you need to remember that every job on App Engine only has 10 seconds to run, and is killed after that time limit is reached, so if you're trying to do something periodically that might take a long time to complete (for instance re-indexing) you might have to split the work up into chunks.
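The chunking idea generalizes nicely: each poll processes items only until a time budget is spent, then returns a cursor so the next poll resumes where it left off. A sketch, with the budget standing in for App Engine's 10-second limit and the cursor mechanism my own invention; the injectable clock is just there so the example is deterministic.

```python
# Sketch of chunked periodic work: handle items[cursor:] until a time
# budget runs out, then hand back the cursor for the next poll. The
# budget stands in for App Engine's ~10-second request limit.
import time

def process_some(items, handle, cursor=0, budget_seconds=10.0,
                 clock=time.time):
    """Process items from `cursor` until the budget is spent."""
    deadline = clock() + budget_seconds
    while cursor < len(items) and clock() < deadline:
        handle(items[cursor])
        cursor += 1
    return cursor

# Fake clock that advances one "second" per call, to show two polls
# each doing part of the work before handing the cursor over.
ticks = [0.0]
def fake_clock():
    ticks[0] += 1.0
    return ticks[0]

seen = []
cursor = process_some(list("abcdef"), seen.append,
                      cursor=0, budget_seconds=3.0, clock=fake_clock)
cursor = process_some(list("abcdef"), seen.append,
                      cursor=cursor, budget_seconds=3.0, clock=fake_clock)
```

In the real thing the cursor would be persisted in the datastore between polls rather than passed around in memory.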
...and we're done.
Labels:
Developer,
Developer Day,
GDD08,
GDD08UK,
Google,
Google App Engine,
Mano Marks,
Tutorial
GDD08: The Opening Keynote
The keynote is apparently about what Google is doing for developers, and why we should care...
The opening keynote
The keynote is a hard sell for the "open web". Google believes that the browser is the client, but that modern web applications are pushing the limits of what is possible in the browser. We're getting a demo of some of the multi-process architecture of Google Chrome, and it's actually pretty impressive. I've already managed to test Chrome out, despite it currently being Windows only, and so far I must admit I'm pretty happy with it...
Next up is Gears, designed to allow you to extend the browser and enable richer web applications. The latest release has some interesting new APIs: the GeoLocation API, the Blob API and onprogress() events.
We're now talking about the cloud and, amongst other things, Google App Engine and the scalability advantages of using the Google infrastructure instead of your own.
Android running on mystery hardware
Moving on, Mike Jennings is taking the stage and demoing Android running on real hardware, amusingly with the vendor's logo taped over, although it looks like an HTC handset. The device has wireless, 3G, GPS, a touch-screen and accelerometers. Looks good...
After the hardware demo we're back to talking about client, cloud, connectivity and GWT: a set of open source tools and libraries for writing really large scale AJAX applications. At a high level GWT is about writing your web applications in the Java programming language and cross-compiling to Javascript that is guaranteed to work on IE, Firefox, Safari, Opera and Chrome.
The final topic in the keynote is Open Social, many sites, one API. But not, unfortunately, Facebook...
...and we're done. The rest of the day is devoted to more in-depth technical sessions.
The opening keynote
The keynote is a hard sell for the "open web". Google believes that the browser is the client, but that modern web applications are pushing the limits of what is possible in the browser. We're getting a demo of some of the multi-process architecture of Google Chrome, and it's actually pretty impressive. I've already managed to test Chrome out, despite it currently being Windows only, and so far I must admit I'm pretty happy with it...
Next up is Gears, designed to allow you to extend the browser and enable richer web applications. The latest release has some interesting new APIs, including the GeoLocation API, the Blob API and onprogress() events. We're now talking about the cloud and, amongst other things, Google App Engine and the scalability advantages of using Google's infrastructure instead of your own.
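For flavour, calling the new GeoLocation API from a page looks roughly like this. This is a sketch from memory and only runs in a browser with the Gears plugin installed; the 'beta.geolocation' factory string and the position fields are my recollection of the API, so check the Gears docs before relying on them:

```
// Sketch only: requires the Gears browser plugin, won't run standalone.
var geo = google.gears.factory.create('beta.geolocation');
geo.getCurrentPosition(
  function (position) {
    // The position object carries latitude/longitude (plus an accuracy estimate).
    alert('You are at ' + position.latitude + ', ' + position.longitude);
  },
  function (error) {
    alert('GeoLocation failed: ' + error.message);
  }
);
```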
Android running on mystery hardware
Moving on, Mike Jennings is taking the stage and demoing Android running on real hardware, amusingly with the vendor's logo taped over, although it looks like an HTC handset. The device has wireless, 3G, GPS, a touch-screen and accelerometers. Looks good...
After the hardware demo we're back to talking about client, cloud and connectivity, and GWT: a set of open source tools and libraries for writing really large-scale AJAX applications. At a high level, GWT is about writing your web applications in the Java programming language and cross-compiling them to JavaScript that is guaranteed to work on IE, Firefox, Safari, Opera and Chrome.
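To make that concrete, a minimal GWT entry point looks something like the following. This is a hedged sketch from memory of the GWT 1.5 client API (the class and package names are my own invention); the GWT compiler turns the Java into cross-browser JavaScript:

```java
package com.example.client;

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.ClickListener;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;

// The entry point is the Java equivalent of a page's onload handler;
// everything here is cross-compiled to JavaScript by GWT.
public class Hello implements EntryPoint {
  public void onModuleLoad() {
    Button button = new Button("Click me");
    button.addClickListener(new ClickListener() {
      public void onClick(Widget sender) {
        Window.alert("Hello from Java, running as JavaScript");
      }
    });
    RootPanel.get().add(button);
  }
}
```

Compiling and running this needs the GWT SDK and a module definition, so treat it as illustrative rather than a working project.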
The final topic in the keynote is OpenSocial: many sites, one API. But not, unfortunately, Facebook...
...and we're done. The rest of the day is devoted to more in-depth technical sessions.
Google Developer Day 2008
I'm currently holed up in "Space Invaders" waiting for the first keynote of Google Developer Day 2008.
Space Invaders
For those of you who didn't manage to talk your boss into letting you blow an entire day on this thing, the Google Developer YouTube channel should have all talks.
I'm currently intending to go to Deeper look at Google App Engine followed by Google Gears. Then after lunch I'll either be going to the Building a simple application using Google App Engine code lab, or What's New in Geo and the Google Web Toolkit. Either way I'll try and keep blogging all day, and while, unlike last year, the wireless network is holding up under the strain remarkably well, we still don't have any power sockets.
This year's event is, at least so far, fairly light on free stuff. We've been given a gift-wrapped USB key drive that's actually faintly unsettling when in use. No silly putty, t-shirts, or yo-yos this year. You can't have it all...
Update: Posts from the Opening Keynote and Deeper look at Google App Engine session.
Update: Post from the What's new in Gears session.
Update: Posts from both the What's new in Geo and the Google Web Toolkit sessions.
Update: The closing keynote. Now time for beer, food and random fun...
Update: Pictures from throughout the day can be found on my Flickr photo-stream, and videos of most of the day will be uploaded to the Google YouTube channel real soon now...
Thursday, September 11, 2008
First impressions of the Dell mini 9
Quite unexpectedly on Thursday morning, over a week before my predicted ship date, my new Dell Inspiron mini 9 arrived. I wasn't alone of course, it seems that they were arriving on doorsteps everywhere, and after playing with it over the weekend I thought I'd post my first impressions of Dell's new netbook.
The mini 9 with my 13-inch Macbook for comparison (posted on Flickr by aallan)
It's hard to convey how small this thing is, none of the pictures I've taken so far show that, it just looks laptop sized, although the above with the mini perched on top of my Macbook comes close...
Ergonomically, then, things are a bit of a challenge. I'm still getting used to the keyboard, and I'm unsure whether I'll ever be able to touch type properly on it; the somewhat eccentric placement of some of the keys doesn't help. However, it's usable. The same can be said of the processor: I've found the mini somewhat sluggish, although that could be my frustration with Windows shining through. However, again, it's usable...
More than usable, in fact; I'm impressed. The screen is just big enough, at least for me, and the keyboard isn't really all that bad. It's fast enough and it hardly weighs anything. The power brick isn't huge and, so far at least, the battery seems to last the predicted three and a half hours.
My current plan is to try and use the netbook as it's intended to be used, and fortunately Dell hasn't loaded XP down with the traditional bloatware. I've installed Google's new Chrome browser and generated desktop icons for Google Mail and Google Reader. I've installed Jungle Disk to access my Amazon S3 buckets, and as much as possible I intend to live in the cloud.
We'll see how that goes...
Labels:
Amazon S3,
Cloud Computing,
Dell,
Google Chrome,
Inspiron,
Laptop,
Mini 9,
Netbook,
Review,
Unboxing,
Web Application
Thursday, September 04, 2008
The 3 HSDPA Dongle Review
I've just dropped the HSDPA dongle I've had on loan into a prepaid envelope to return the hardware to 3, so I thought I'd better write up my experiences with it now while it's still on my mind...
After some initial teething troubles I got the dongle working under OSX on my Intel Macbook, and gave it a fairly thorough work out over the course of the last couple of months.
As can be seen from their rollout map, 3 doesn't yet have any HSDPA coverage down here in the South West. Locally, then, I'm suffering the same sorts of problems I had with 3's Skypephone. The list of places where there isn't any 3 coverage is rather long: my house, my office, the cities and towns I visit regularly. The list of places where there is coverage is considerably shorter, and that's bad. What this also implies is that the problems seen with HSDPA on the fringes of coverage are perhaps more significant than you might think.
However, whether a wireless modem works when you're in your own living room isn't, perhaps, as relevant as how well it works when you aren't. I used it extensively when I was out in Italy for the Trieste meeting, roaming onto the 3 network there, and if I hadn't had the dongle on loan, I'd only have been paying UK rates to do so...
I've also made use of it on various trips up and down the country, while stuck in hotel rooms, on trains and in coffee shops. I found it to be a good backup if wireless wasn't available. That said, performance was noticeably more sluggish than wireless, and if wireless access was available I generally still ended up paying for that rather than using the dongle.
That's because, somewhat unfortunately, I found the process of using the dongle clunky and inconvenient. Coverage wasn't always there, and when it was, it wasn't there automatically. If I wanted to make use of it I had to dig the dongle out of my bag, plug it in, wait for it to find the 3 network, wait for it to connect, then wait for authorization. A lot of waiting...
I think I would have found the process a lot less inconvenient if HSDPA was built in to my laptop and, like Wi-Fi, automatically connected to a network when one was present. I'd like my data connection to seamlessly switch between wired, Wi-Fi and HSDPA as needed, without having to do any fiddling around. Which is why I found the Dell and Vodafone announcement earlier today so interesting. You have to wonder how well integrated Vodafone's HSDPA card and Dell's mini 9 are going to be.
Three blew the roof off the mobile data market late last year when they started offering flat-rate mobile broadband. Except, of course, it's not unlimited; their biggest plan has a data allowance of 15GB a month for £30, which by mobile network standards is pretty good going. But despite the fact that I wasn't paying for the bandwidth, I found myself obsessively checking how much of the data allowance I was using, and the days when that's acceptable to me are long gone...
So the question I'm asking myself is "what's it for?" With a 15GB per month allowance this would never replace my home ADSL connection; I'd blow through that within the first week. So this is strictly for when you're out of the house and the office, traveling. Perhaps this isn't normal, but most of the traveling I do is to the US. I don't spend much time in Europe, and less time than that traveling around the UK. Which means I'd be paying £3 per MB when roaming, which is just totally unacceptable.
So perhaps what I'm really saying here is that for me, this isn't the solution. Even when in Europe, and paying 10 pence per MB rather than £3 per MB, it isn't really good enough. However if you do most of your traveling in the UK, or within the coverage of a 3 sister network, perhaps you should take a look. It could be well worth your while.
As always then, your mileage may vary...
The Mini 9 with built-in HSDPA?
Hot on the heels of the official release of Dell's new netbook, the Inspiron mini 9, is the news that Dell has shaken hands with Vodafone on a co-marketing deal.
However the rumour is that, unlike similar deals, the Dell netbook will ship (at least here in the UK) with built-in HSDPA broadband. Which will certainly set the cat amongst the pigeons...
If true, and initial reviews of the netbook certainly suggest that there could be more than a grain of truth here, this is exciting stuff.
Update: Okay, it's official. Although there isn't any news on cost as yet, Vodafone would be mad not to significantly subsidise the already fairly moderate cost of the mini 9. Free with a contract data plan sounds like a decent price point to me...
Now I have to decide whether I should pick one up now, or wait? If I buy now, can I get an HSDPA board for it later, or will I be stuck without WWAN access? Decisions, decisions...
Update: The Vodafone press release...
Update: If true, the news that the stock version of the mini 9 "...doesn't have the internal antenna infrastructure needed to support mobile broadband", isn't good...
The Dell Inspiron Mini 9
The much rumoured and long awaited Dell Inspiron mini 9 was released officially today, both in the US and in the UK. Although from the looks of things the US rollout isn't going that smoothly, with XML errors and web pages appearing and disappearing at random...
The bad news is that while there are three models in the US, priced at US$349, US$399 and US$449, only the top end model has seen the light of day here on the other side of the pond, priced at £299. The UK version is also only shipping with Windows XP, there isn't an option for an Ubuntu installation, as there is in the US...
However, taking the exchange rate into account, and the fact that the US prices aren't quoted with sales tax included, the UK price is actually (for once) fairly comparable with the US price for the same hardware. Well done Dell. But unfortunately there is more bad news...
There isn't any sign of the red version of the new netbook, either here or in the US. In the US you can have the mini 9 in either white or black; shades of the Apple Macbook there? On this side of the pond you can have any colour you like, so long as it's black. Unfortunately for Dell, the red version was the reason I wanted one in the first place, and it's certainly the reason my wife wants (wanted?) one.
The good news? Apparently additional colours and a version shipping with Ubuntu are "coming soon"...
Update: Also coming soon is a version of the mini 9 shipping with built-in HSDPA broadband from Vodafone...
Tuesday, September 02, 2008
This is not the Earth you are looking for...
After spending more time than I should hacking Sky support into Maps.app on my iPod touch I'm somewhat ambivalent about the arrival of Earthscape on the App Store (via the Google Earth Blog).
This is not the Earth I was looking for...
Earthscape has poor imagery outside of the continental United States, and the current version has no KML or accelerometer support and no search capability. Right now at least it's a cool toy. I've bought a copy because I quite like cool toys and I'm sure a bunch of other people will buy it for the same reason, and as a technical demonstrator it's impressive. But as a useful tool? Not at the moment.
At which point I guess I'm still waiting for Google Earth, and Google Sky, for my iPod touch. Of course I can't yet get Google Earth in a browser on my Mac, so I might be waiting a while...
Labels:
Apple,
Earthscape,
Google,
Google Earth,
Google Sky,
iPhone,
iPod touch
Wednesday, August 27, 2008
The iPhone NDA
So last night I pre-ordered a copy of Erica Sadun's "iPhone Developer's Cookbook" from Amazon. The expected ship date is sometime late in October, but I'll be surprised if that's even vaguely accurate considering the ongoing problems with the NDA. Developers are now resorting to paying each other US$1 so they can be a sub-contractor, and presumably have some sort of legal protection against Apple's legal team and the NDA, before sharing information about developing against the official iPhone SDK.
So I doubt Erica's publisher will let her release the book until the NDA is lifted, and she isn't alone in having that problem; there are no doubt a bunch of books, tutorials and other such things waiting in the wings for Apple to lift the NDA.
However it currently seems to be a case not of when the NDA will be lifted, but of whether it's going to be lifted at all. In what is now being called the fourth age of software distribution, might yet more companies adopt this bullying approach? That's a faintly scary prospect for independent developers like me...
Friday, August 22, 2008
Border Gateway Protocol
Close on the heels of the publicity surrounding cookie hijacking, there is now another, potentially much more serious, problem, this time with the Border Gateway Protocol, the core routing protocol underlying the Internet...
Wednesday, August 20, 2008
Cookie Hijacking
Things are looking a bit grim on the security side. Close on the heels of the DNS cache poisoning flaw discovered by Dan Kaminsky last month, there is now a new bogie man, automated HTTPS cookie hijacking...
Time progression showing vulnerable DNS servers: Red dots represent unpatched servers, yellow dots patched servers with NAT problems, green dots are patched servers.
The problem has gotten a lot of attention with respect to unencrypted GMail sessions; in fact, there is now a widely available automated tool which allows you to steal session cookies on HTTP and HTTPS sites that do not set the cookie secure flag.
Surf Jacking Gmail demonstration from Sandro Gauci on Vimeo
However the problem is more widespread than just GMail, although there are still problems even there, and potentially affects a much broader range of sites.
Since so many sites are likely vulnerable, the actual reporting process is probably going to fall on the shoulders of users. To check your sites under Firefox, go to the Privacy tab in the Preferences window, and click on "Show Cookies". For a given site, inspect the individual cookies, and if any have "Send For: Encrypted connections only", delete them. Then try to visit your site again. If it still allows you in, the site is insecure and your session can be stolen. You should report this to the site maintainer. - Mike Perry
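Perry's manual check can be scripted, at least in spirit. The sketch below (plain JavaScript; the header strings are invented for illustration) tests the property that matters: whether a Set-Cookie header carries the secure flag, which tells the browser to only ever send the cookie over HTTPS.

```javascript
// Return true if a Set-Cookie header value marks the cookie as Secure,
// i.e. the browser will only send it over encrypted connections.
function isSecureCookie(setCookieHeader) {
  return setCookieHeader
    .split(';')
    .map(function (attr) { return attr.trim().toLowerCase(); })
    .indexOf('secure') !== -1;
}

// Hypothetical headers, for illustration only:
var headers = [
  'SID=abc123; Domain=.example.com; Path=/',          // hijackable over HTTP
  'SSID=def456; Domain=.example.com; Path=/; Secure'  // HTTPS only
];

headers.forEach(function (h) {
  console.log(isSecureCookie(h) ? 'secure' : 'vulnerable');
});
// prints 'vulnerable' then 'secure'
```

A cookie without that flag will happily travel over plain HTTP, which is exactly what the surf-jacking tool exploits.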
Of course, we can't all go hide in a darkened room, and realistically, unless you're a high-profile target, your chance of getting caught by this vulnerability is fairly low. Potentially, however, this is serious. Your email, merchant account, banking and other personal information are all potentially at risk. Right now it's not clear how widespread this problem actually is, so be careful out there...
Tuesday, August 12, 2008
Poor indexing?
Nick Carr passes on James Evans's argument in a recent issue of Science that the chief advantage of print media is "poor indexing". How bizarre...
Ironically, my research suggests that one of the chief values of print library research is its poor indexing. Poor indexing—indexing by titles and authors, primarily within journals—likely had the unintended consequence of actually helping the integration of science and scholarship. - James Evans in the Britannica Blog
Friday, August 08, 2008
Paper Phishing
So we're all used to identifying and avoiding phishing attempts via email, but what about when it happens on paper? Today I received an actual paper letter, purporting to be from one of my banks, advising me that they had contacted me some time ago and hadn't had a reply, and that, due to a change in the law, they needed to update the information about my extra card holder.
The letter looked genuine, and included an 'Extra Cardholder Information Form' and a prepaid envelope to provide the details, as well as a freephone number that I could alternatively call to provide them. It went on to advise me that if I still needed my extra cardholder I must provide the information within 28 days, or they would remove the extra card from my account.
So it looked genuine, except it sort of didn't. My finely tuned spider sense was tingling; if this had been an email it wouldn't even have made it past my spam filter.
Despite the fact that the letter had my account number on it, and was sent to my address, I was suspicious. So I called the fraud division of the bank in question: they had no record on my account of such a letter being sent, and the freephone number didn't, as far as they knew, belong to them. I'd just been the (almost) victim of a paper-based phishing attack.
Both of us were surprised, this is the first example of a paper-based phishing attack that I, and perhaps more worryingly the bank, had come across. If you get a letter that doesn't look quite right from your bank and is asking for personal information that, as far as you know, they should already have, call your bank on a number you know is genuine to confirm that it was actually from them.
It looks like the bad guys just raised the stakes, and we're now playing a new game entirely. It also looks likely that there has been some sort of major compromise with this specific bank, there were too many details in the letter to have come from a retail source. So this is your warning, keep your guard up...
Thursday, July 31, 2008
Interrupted Journeys
For those of you puzzled by my early departure from OSCON a week ago, and my non-appearance at HTN IV this week, I'd like to announce the very unexpected early arrival of my son, Alexander Michael. Born late yesterday evening, just under two months premature, and weighing just over 4 lbs.
Both mother and baby are doing well...
Wednesday, July 23, 2008
OSCON: Wednesday Morning Keynote
My jet lag caught up with me last night and I ended up not making it to the Tuesday Night Extravaganza, although other people did, and cruelly didn't blog Damian's talk for those of us who couldn't be there. So I don't get to find out anything more about "Temporally Quaquaversal Virtual Nanomachine Programming In Multiple Topologically Connected Quantum-Relativistic Parallel Timespaces", which is a pity...
The keynote kicked off with Allison Randall and Edd Dumbill, talking about the history of Open Source and OSCON. This is the first OSCON without Nat at the helm, and while I've seen him around, it's pretty weird not to have the keynote kick off with "...and here's your conference chair, Nat Torkington".
Update: Looks like I'll see you all next year. While I was in the keynote I got a phone call, and I'm now heading back to the UK somewhat earlier than planned.
Update: More than one interrupted journey...
Perl on Google App Engine
I woke up this morning to some of the best news I've heard in a while: it looks like there is some progress with putting Perl onto Google App Engine.
More from Brad Fitzpatrick. If you'd like to discuss this or help out, join the perl-appengine mailing list, and submit code to the appengine-perl project on Google Code. For more information see the Perl-on-AppEngine FAQ.
Maybe I won't have to learn Python after all...
Labels:
Cloud Computing,
Google,
Google App Engine,
Perl,
Web 2.0,
Web Services
Tuesday, July 22, 2008
OSCON: Practical Erlang Programming
This afternoon I'm sitting in "Practical Erlang Programming" given by Francesco Cesarini. Erlang has been around for almost twenty years, but remains a niche language. However, we're starting to hear more about it due to the growth in the number of multi-core machines. So I figured I should go and find out what all the fuss was about...
Update: I think Francesco has really overestimated the capacity of the wireless network, he's just told ninety people to download the source bundle and install Erlang.
Update: Okay, we're kicking off with data types; integers, floats, and atoms are the simple types. Then we have tuples and lists. Interestingly, variables in Erlang are single assignment, and the value of a variable cannot be changed once it has been bound. Puzzling; variables are not very variable, at that point?
1> A = 123.
123
2> A.
123
3> A = 124.
** exception error: no match of right hand side value 124
4> f().
ok
5> A = 124.
124
Update: Pattern matching is used for assigning values to variables, controlling the execution flow of programs, and extracting values.
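To make that concrete, here's a quick sanity check of my own in the shell (not from the slides), showing pattern matching doing assignment and extraction in one step:

```erlang
1> {Width, Height} = {640, 480}.   % bind two variables at once
{640,480}
2> [First | Rest] = [a, b, c].     % split a list into head and tail
[a,b,c]
3> First.
a
4> Rest.
[b,c]
5> {point, X, _} = {point, 3, 7}.  % extract one field, ignore another
{point,3,7}
6> X.
3
```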
Update: Moving on to function calls; this looks, well, odd.
area({square, Side}) ->
    Side * Side;
area({circle, Radius}) ->
    3.14 * Radius * Radius;
area({triangle, A, B, C}) ->
    S = (A + B + C)/2,
    math:sqrt(S*(S-A)*(S-B)*(S-C));
area(_Other) ->
    {error, invalid_object}.
Functions have clauses separated by ';'. Erlang programs consist of a collection of modules that contain functions that call each other. Function and module names must be atoms.
factorial(0) ->
    1;
factorial(N) ->
    N * factorial(N-1).
Variables are local to functions and allocated and deallocated automatically.
Update: Modules are stored in files with the .erl suffix; module and file names must be the same. Modules are named with the -module(Name). directive.
-module(demo).
-export([double/1]).
% Exporting the function double with arity 1
double(X) ->
times(X, 2).
times( X, N ) ->
X * N.
Compiling and running this from the Erlang shell,
1> cd("/Users/aa/").
2> c(demo).
{ok,demo}
3> demo:double(10).
20
4> demo:times(1,2).
** exception error: undefined function demo:times/2
Update: We've now looked at the basics; we're moving on to sequential Erlang: conditionals, guards, and recursion.
case lists:member(foo, List) of
true -> ok;
false -> {error, unknown}
end
In a conditional one branch must always succeed; you can put '_' or an unbound variable in the last clause to ensure this happens.
if
    X < 1 -> smaller;
    X > 1 -> greater;
    X == 1 -> equal
end
Again, one branch must always succeed; by using true as the last guard you ensure that the last clause will always succeed should the previous ones evaluate to false. See it as an 'else' clause. So we can have,
factorial(N) when N > 0 ->
    N * factorial(N - 1);
factorial(0) ->
    1.
instead of having this,
factorial(0) ->
    1;
factorial(N) ->
    N * factorial(N-1).
Of these two the top one is faster, but apparently you shouldn't really worry about that when using Erlang...
All variables in guards have to be bound. If all guards have to succeed, use ',' to separate them; if only one has to succeed, use ';' to separate them. Guards have to be free of side effects.
Update: On to recursion,
average(X) -> sum(X) / len(X).
sum([H|T]) -> H + sum(T);
sum([]) -> 0.
len([_|T]) -> 1 + len(T);
len([]) -> 0.
Note the pattern of recursion is the same in both cases. Taking a list and evaluating an element is a very common pattern...
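In fact that "walk the list, combine the elements" shape is exactly what the standard library's higher-order functions capture; my own aside here, not from the slides, but sum and len could equally be written with lists:foldl,

```erlang
% The explicit recursion above, rewritten as folds over the list.
sum(L) -> lists:foldl(fun(X, Acc) -> X + Acc end, 0, L).
len(L) -> lists:foldl(fun(_, Acc) -> Acc + 1 end, 0, L).
```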
Update: Now on to Built-In Functions (BIFs)...
date()
time()
length(List)
size(Tuple)
atom_to_list(Atom)
list_to_tuple(List)
integer_to_list(234)
tuple_to_list(Tuple)
BIFs are by convention regarded as being in the erlang module. There are BIFs for process and port handling, object access and examination, meta programming, type conversion, etc.
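A quick check of my own in the shell (not from the tutorial), showing what the conversion BIFs in that list actually return:

```erlang
1> atom_to_list(hello).
"hello"
2> list_to_tuple([1,2,3]).
{1,2,3}
3> integer_to_list(234).
"234"
4> length([a,b,c]).
3
```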
Update: We're running through all the possible run time errors.
Update: We're breaking (late!) for coffee...
Update: ...and we're back, and walking through some examples, and onwards to concurrent Erlang.
Pid2 = spawn(Mod, Func, Args)
Before the spawn the code is executed by Pid1; afterwards a new process Pid2 is created. The identifier Pid2 is only known to Pid1. A process terminates abnormally when a run-time error occurs, and normally when there is no more code to execute. Processes do not share data; the only way to communicate is message passing. Sending a message will never fail; messages sent to non-existing processes are thrown away. Received messages are stored in a process mailbox, and are retrieved inside a receive clause,
receive
    {reset, Board} -> reset(Board);
    {shut_down, Board} -> {error, unknown_msg}
end
Unlike a case block, receive suspends the process until a message which matches a clause arrives. Message passing is asynchronous; one of the things you look for when stress testing Erlang systems is running out of memory because of full mailboxes.
Update: We're getting into a static versus dynamic typing argument; the bizarre thing is that even Francesco seems to think that static typing is a good thing. Why is that? I'm really surprised; after all, I'd argue that there are a bunch of reasons to use loosely typed languages in preference to statically typed ones.
Update: It's also interesting that some people in the audience here aren't getting the "let it crash" mantra coming from Francesco. In a highly concurrent language where everything is a process, letting a process crash is just how you handle errors. A process crash is essentially the same as throwing an exception.
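To sketch what that means in practice (my own example, not Francesco's): rather than defending the worker with error-handling code, the parent links to it, traps exits, and reacts when the crash arrives as an ordinary message,

```erlang
%% "Let it crash": the parent traps exits, so a worker crash
%% is delivered as an {'EXIT', Pid, Reason} message to handle.
start_worker() ->
    process_flag(trap_exit, true),           % turn crashes into messages
    Pid = spawn_link(fun() -> exit(boom) end), % worker dies immediately
    receive
        {'EXIT', Pid, Reason} ->             % the crash is now just data
            {worker_died, Reason}            % ...log it, restart it, etc.
    end.
```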
Update: I'm starting to lose the thread of the talk now. Pity, Francesco has just got to the interesting bit. It's been a long day...
Update: ...and we're done. Chris was also blogging the tutorial so head over to his post for more coverage.
Labels:
Erlang,
Francesco Cesarini,
OSCON,
OSCON08,
OSCON2008