It is good to see the TSSG campus company Nubiq, which has a mobile web site generator called Zinadoo, getting some good coverage in the blogosphere Mobile Web 1.0 again and again and again. It really is very easy to set up a mobile website using Zinadoo.
An interview with John Curran, chairman of the American Registry for Internet Numbers Internet addresses drying up fast - CBRonline.com
This article is a bit easier to read than some of the more technical discussions, and agrees with my core argument that predictions of IPv4 address exhaustion are converging on late 2009/early 2010 (for the central IANA blocks).
Well the mood is really shifting towards IPv6 in May 2007. Within one month we have revised predictions of the IANA /8 IPv4 address pool exhaustion (Dec 2009/Jan 2010) from Geoff Huston, and an announcement from ARIN that it now encourages people to move to IPv6.
On 7th May 2007, the ARIN Board of Trustees passed a resolution advising the Internet technical community that migration to a new version of the Internet Protocol, IPv6, will be necessary to allow continued growth of the Internet.
Internet Protocol defines how computers communicate over a network. IP version 4 (IPv4), the currently prevalent version, contains just over four billion unique IP addresses, which is not enough to last indefinitely. IPv6 is a replacement for IPv4, offering far more IP addresses and enhanced security features. To date, ARIN has performed technical coordination of both versions and has not advocated one over the other.
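The scale of the difference between the two address spaces is easy to check: IPv4 uses 32-bit addresses and IPv6 uses 128-bit addresses. A quick back-of-the-envelope sketch (plain Python, assuming nothing beyond the bit widths):

```python
# Back-of-the-envelope comparison of the IPv4 and IPv6 address spaces.
ipv4_total = 2 ** 32    # 32-bit addresses: just over four billion
ipv6_total = 2 ** 128   # 128-bit addresses

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.2e}")
print(f"IPv6/IPv4 ratio: {ipv6_total // ipv4_total:.2e}")
```

The ratio works out at 2^96, which is why address scarcity is simply not a concern under IPv6.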
Thanks to David Malone of The Hamilton Institute in NUI Maynooth for alerting me to the ARIN announcement.
In the TSSG we have been tracking a number of technologies that converge around the idea of the web browser as a flexible docking station for various applications. I am borrowing a lot from what my colleague Eamonn de Leastar has investigated for this posting. The source of these innovations stems from the desire to create a "My X", such as "My Netscape" or the Google customised home page.
You could say this trend for personalised portals started with the Netscape portal idea of the late 1990s that led to the original RSS 0.90 (RDF Site Summary) in 1999. See this History of RSS if you're interested in that journey. There are things before RSS 0.90, but they weren't called RSS (most importantly Dave Winer's Scripting News format). Since then the alternative Atom format has been developed and standardised in the IETF. This history is also linked to the early W3C semantic web standard RDF (Resource Description Framework), and some RSS versions are subsets of RDF.
So the basic idea was that these feed formats could be used to allow syndication and aggregation of content across many different types of content sources, such as newspapers and later blogs, but also weather and other sorts of information flows.
The bigger picture with portals is to allow functionality to be bundled into widgets (small sets of functionality) that allow a host platform to grow through the use of 3rd party information sources (mini-programs). So the feed is the basic low-level entry point here: a simple flow of textual data, marked up with XML in RSS or Atom, into headline, content and a link to the original source. As widgets become more complex, the requirement becomes a program rather than just an XML parser to allow the widget to function.
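That low-level entry point really is this simple. A minimal sketch, using a hypothetical RSS 2.0 fragment (the feed content and URL are invented for illustration), shows each item reducing to the headline/content/link triple:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 fragment (hypothetical feed, for illustration only).
rss = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item>
    <title>Headline</title>
    <description>Summary content of the item.</description>
    <link>http://example.com/original-article</link>
  </item>
</channel></rss>"""

# Each <item> reduces to the headline / content / link triple
# described above -- all an aggregator needs is an XML parser.
channel = ET.fromstring(rss).find("channel")
for item in channel.findall("item"):
    headline = item.findtext("title")
    content = item.findtext("description")
    link = item.findtext("link")
    print(headline, "->", link)
```

Anything beyond displaying that triple (a calendar, an IM client) needs executable widget code rather than a parser, which is exactly the step the portal platforms below are taking.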
The most recent clutch of portal sites are showing some startling effects. Of particular interest were these two:
The former is very slick, the latter is intriguing - have a look at this calendar widget.
It looks like a standard calendar widget. However, look at the four buttons along the top; one of them is "Copy to Desktop", i.e. if you have installed the Adobe Apollo engine on your PC (a Flash host of some kind) then there is no distinction between these widgets appearing on your desktop or in your browser.
There is a somewhat similar effect in the latest meebo. The IM (Instant Messaging) client emulation has a button which apparently permits the IM window to escape from the browser and live on decoupled. It is in fact another browser instance - but so customised that it looks more or less like Exodus (the open IM Jabber client). No downloads required of course.
Finally, AB5k (Widgets for the World) is the first Java-based widget framework taking advantage of the major (perhaps Eclipse-inspired) renovation and improvement of Swing over the past 2-3 years. This is currently in pre-alpha but it might have potential once it gets going.
All of these seem a good bit more sophisticated than Google, Yahoo! or netvibes. Whether they are just toys or not remains to be seen...
Well I went to the WebCamp on the Emerging Mobile Internet (UCD, 17th May 2007) and really enjoyed it.
Here are my slides PDF Slides.
My talk was Talk 6 and there's a linked blog discussion here: Talk 6 Blog.
I've posted a short summary of my talk there, and I'll cross post here.
The talk took the title of the WebCamp literally, and looked at issues of the "Emerging Mobile Internet".
For me the Internet is the TCP/IP suite of protocols, which includes application layer stuff like HTTP, but is based on the underlying architecture of packet switching and naming at lower layers. For me the biggest issue here is the broken architecture of the Internet itself, where NAT forces services and applications to try and get around the issue of machines not having real publicly addressable identities (IP addresses), and where every packet is changed en route by middle boxes to the detriment of efficiency. My slides showed that we'll run out of centrally assigned IANA /8 blocks of IPv4 addresses in 2009/2010, and then regional registries will have problems fulfilling requests within another year or two from then. IPv6 solves the address shortage issue, and also opens up the Internet architecture to again allow any device to offer a service to another device (the end-to-end principle), allowing innovative p2p services.
At the high service and application levels the main issues for me are: (i) how Internet telephony can be made to work without opening up telephony to spam (as email, blog comments, blog trackbacks, and the like are, due to the open nature of identity on the Internet and the low level of default/assumed security); and (ii) how the mobile web should ideally be designed to be the same as the real web: there should just be one web, with different presentations on desktops and mobile phones, but the same back-end services.
Update 2007-06-03 My slides have been published on slideshare.
Netcraft's survey of SSL sites has now been running for over ten years. The first survey, in November 1996, found just 3,283 sites; since then, the number of SSL sites has had an average compound growth of 65% per annum.
The survey is a good guide to the growth of online trading and services. The survey counts sites by collecting SSL certificates; each distinct, valid SSL certificate is counted in the results. Each SSL certificate typically represents one company's details, and each certificate must be approved by a certificate authority, so the data is typically more consistent and less volatile than other attributes of the Internet's infrastructure.
Netcraft: Internet Passes 600,000 SSL Sites.
This is many fewer than the total number of websites in the current Netcraft web server survey; as I access it now in May 2007 it reads 118,023,363 sites.
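As a rough sanity check on Netcraft's quoted figure of 65% average compound growth, we can work backwards from the two data points in the survey (the exact survey dates are approximated here as roughly 10.3 years apart):

```python
# Sanity check on Netcraft's quoted growth figure: compound annual
# growth rate from 3,283 SSL sites (Nov 1996) to about 600,000 (2007).
sites_1996 = 3283
sites_2007 = 600000
years = 10.3  # approximate span, Nov 1996 to early 2007

# CAGR = (end/start)^(1/years) - 1
cagr = (sites_2007 / sites_1996) ** (1 / years) - 1
print(f"Average compound growth: {cagr:.0%} per annum")
```

The result comes out in the mid-60s percent, consistent with the quoted 65% per annum.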
This article gives a good summary of the security issues with WiFi and VPNs (Virtual Private Networks) and argues for using both - but the right tool for the job.
Why VPN can't replace Wi-Fi security | George Ou | ZDNet.com
IPv4 Address Report Geoff Huston has updated his model predicting IPv4 address space exhaustion. Now the projected IANA Unallocated Address Pool Exhaustion date is 18-Dec-2009. Last month this prediction was 06-Jul-2011 (c.f. my earlier blog post).
(Note: 8 May 2007) Frequent visitors to this page will have seen that the projected date of IANA unallocated pool exhaustion has moved by some months as of the 8th May. The reason for this change of the projection is the use of a different mathematical model as of this date. Previous reports used a best fit of an exponential model to the most recent 1000 days of data in order to generate the predictive model. As indicated in Figure 22 of this report, this exponential model now deviates from the recent data, and the predictive model is now based on a quadratic equation.
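It may seem surprising that a change of curve-fitting model alone can move a projected date by a year and a half. A toy illustration (this is not Huston's actual model; the pool size, rates and coefficients below are invented purely for the sketch) shows how two trend curves fitted to the same recent history can diverge sharply when extrapolated against a fixed remaining pool:

```python
import math

# Toy illustration, NOT Huston's model: how the choice of trend curve
# shifts a projected exhaustion date for a fixed remaining pool.
pool = 47.0  # /8 blocks remaining (the mid-2007 figure mentioned below)

def years_to_exhaustion(monthly_rate):
    """Step forward month by month until the pool is consumed."""
    remaining, months = pool, 0
    while remaining > 0 and months < 600:
        remaining -= monthly_rate(months / 12.0)
        months += 1
    return months / 12.0

# Two hypothetical allocation-rate curves (blocks/month after t years),
# both starting near the same current rate but curving differently:
exponential = lambda t: 0.9 * math.exp(0.1 * t)
quadratic = lambda t: 0.9 + 0.3 * t + 0.15 * t * t

print(f"Exponential fit: pool empty in ~{years_to_exhaustion(exponential):.1f} years")
print(f"Quadratic fit:   pool empty in ~{years_to_exhaustion(quadratic):.1f} years")
```

With these made-up coefficients the two projections differ by over a year, despite agreeing on the present-day rate - which is why the choice of model that best tracks the recent data matters so much.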
I learned a lot reading this interview with Bob Metcalfe, co-inventor of Ethernet, including the fact that he first used the term "best-effort": Ethernet papa makes Invent Now Hall of Fame | Newsmakers | CNET News.com. He comments on the net neutrality debate and on his view of the problems with the Internet in general.
Thanks to John Breslin for publicising the event on his blog; I'm prompted to do the same, as I should, since I will be speaking at the event.
The speakers include:
* Keith Bradley - Changing Worlds
* Frederic Herrera - The National Digital Research Centre (NDRC)
* TBC - NewBay Software
* Hélène Haughney - Nubiq Ltd.
* Steven Strachen - Hamilton Institute, NUI Maynooth
* Mícheál Ó Foghlú - TSSG, Waterford IT
* Kieran Mahon - Vodafone Ireland
* Karen Church - Adaptive Information Cluster, UCD
If you're interested in any of the following topics, it sounds like it's going to be a very interesting day.
* Mobile Internet applications
* Mobile search and browsing
* Behavioural studies / analysis of mobile Web usage
* Challenges of the mobile Internet
* HCI on the mobile Internet
* Usability of mobile devices and services
* User interfaces for mobile devices
* Improving / enhancing content, infrastructure and billing models
* Mobile Web 2.0
* Content authoring/user generated content from mobile devices
I have to admit I am impressed by the setting for the W3C Advisory Committee (AC) meeting this week in the Fairmont Hotel, Banff National Park, Alberta, Canada.
We're just gathering now in the 8am-9am coffee/breakfast/registration slot and there's a good buzz about the place. Having seen the range of the discussions on the agenda I'm looking forward to a productive meeting.
Phase 1 of my involvement in web technologies began when I first heard about the web at the NSC92 conference (Network Services Conference) in November 1992 in Pisa, Italy, and went back to University College Galway (UCG), now called NUI Galway, where I worked in computer services, very enthused. Within a year I had my own webserver and I was the webmaster for UCG's first website. I was very happy when the front page image from the website, a picture of the old quadrangle in UCG, was featured in the Irish Times in an article about the emerging web with the title "The West's Awake" (a reference to a famous Irish rebel song); I think it was around 1994. At the time we would email some guys in UCD in Dublin running a server called Slarti (a reference to the Hitch-Hiker's Guide to the Galaxy character), where a list and a map of active Irish websites was maintained - my personal server and the UCG server are still listed there, as are the servers of other UCG web-heads, many still active, including Joe Desbonnet and John Breslin. I ended up getting really into Perl and CGI programming and published some books on this, as well as being a webmaster in the first iteration of the web.
Phase 2 of my engagement in web technologies began when I moved to Waterford Institute of Technology back in 1996, and got involved in some EU funded projects linked to the Telecommunications Software & Systems Group (TSSG) there. I became really enthused by the whole content aggregation technology suite, with the early versions of RSS from Netscape in the late 1990s, and later with weblogging/blogging, and the first version of this blog (Greymatter then MovableType).
The third phase of my web technology engagement has been via the telecommunications management work in the TSSG, based on the use of emerging semantic modelling techniques, in particular OWL-based solutions, and on applying these to the telecommunications network and service management space in the TeleManagement Forum (TM Forum) and in the new Autonomic Communications Forum (ACF) linked to our research programme on Autonomic Management of Communications Networks and Services.
So coming to the W3C for the first time is a bit like coming home for me, even though the TSSG only joined the W3C very recently, and this is my first AC meeting.
The Advisory Committee meeting is co-located with the WWW2007 conference, and I'll be staying on for two days of that before returning to Ireland.
This blog entry on the ICANN site shows some interesting maps of the assigned IPv4 address space and some links to discussions on IPv4 address space exhaustion: ICANN Blog: Blog Archive: Mapping the Internet, one node at a time
I have recently noted to my work colleagues in the TSSG that Geoff Huston's (of APNIC) predictions are now converging with Tony Hain's (of Cisco) predictions at around late 2010 to mid 2011 for the last IANA /8 to be allocated; that's less than 1 year divergence. This means that there is broad agreement on this prediction.
Geoff Huston's prediction is updated regularly. As of yesterday he predicts no IANA /8s left after 06-Jul-2011.
UPDATE 2007-05-08 Geoff's prediction revised to December 2009!
Tony Hain's prediction was published in September 2005 in the Cisco Internet Protocol Journal, and is well worth a read, both for the article itself and for the discussion printed at the end of the article. His predictions are summarised in these two statements:
"So this view of the sustained trend in allocation growth rate suggests that the lifetime of the remaining central IPv4 pool is 4 years +/-1." [i.e. Sep 2009 +/-1]
"The various projections in Figures 5 and 6 show different mathematical models applied to the same raw data. Depending on the model chosen, the nonlinear historical trends in Figure 6 covering the last 5- and 10-year data show that the remaining 64 /8s will be allocated somewhere between 2009 and 2016, with no change in policy or demand (though as discussed previously there are already reasons to err toward 5-year based nonlinear models)."
His model graph is updated quarterly.
In fact it is worth quoting the concluding few sections of Tony Hain's original article in full, as they address a number of other possible issues that may arise:
There are occasionally arguments that the 16 /8s reserved in the experimental space could be used. Although this is likely to be possible for some IP stack implementations, for others it is not. At a minimum, some quick tests show that Windows 95 through Windows 2003 Server systems consider that block to be a configuration error and refuse to accept it. The operational ability to restrict the space to a select stack implementation is limited, and the amount of space there does not really help even if deployment and operations were trivial. Assuming the sustained growth trend in allocations continues, by the time the remaining 64 /8s in the IANA pool are finished the rate would be approaching 3 /8 allocations per month, so the entirety of the old Class E space would amount to about 6 months of run rate.
Another debate occasionally resurfaces about reclaiming some of the early allocations to further extend the lifetime of IPv4. Hopefully this article has shown that the ROI for that approach is going to be extremely low. Discussions around the Internet community show there is an expectation that it will take several years of substantive negotiation (in multiple court systems around the globe) to retrieve any /8s. Then following that effort and expense, the likelihood of even getting back more than a few /8 blocks is very low. Following the allocation growth trend, after several years of litigation the result is likely to be just a few months of additional resource added to the pool - and possibly not even a whole month. All this assumes IANA does not completely run out before getting any back, because running out would result in pentup demand that could immediately exhaust any returns.
Network Address Translation (NAT) and CIDR did their jobs and bought the 10 years needed to get IPv6 standards and products developed. Now is the time to recognize the end to sustainable growth of the IPv4-based Internet has arrived and that it is time to move on. IPv6 is ready as the successor, so the gating issue is attitude. When CIOs make firm decisions to deploy IPv6, the process is fairly straightforward. Staff will need to be trained, management tools will need to be enhanced, routers and operating systems will need to be updated, and IPv6-enabled versions of applications will need to be deployed. All these steps will take time - in many cases multiple years. The point of this article has been to show that the recent consumption rates of IPv4 will not be sustainable from the central pool beyond this decade, so organizations would be wise to start the process of planning for an IPv6 deployment now. Those who delay may find that the IANA pool for IPv4 has run dry before they have completed their move to IPv6. Although that may not be a problem for most, organizations that need to acquire additional IPv4 space to continue growing during the transition could be out of luck.
So it is clear to me that IPv4 addresses are running out, and IPv6 is the only viable alternative. Whilst the TSSG do engage in "post-IP" research, looking at potential alternatives, there is no question that IPv6 will need to be deployed to meet IPv4 exhaustion, and any post-IP research will require 10-15 years to be ready for deployment, and that's simply too late for IPv4 exhaustion.
This is why I am Director of the Irish National IPv6 Centre and Chair of the Irish IPv6 Task Force and Ireland's representative on the European IPv6 Task Force and the world-wide IPv6 Forum. When we started the Irish National IPv6 Centre in September 2005 there were 64 IANA /8 blocks left; last month this was down to 47. The waiting game is over, it is time to start planning IPv6 deployment today.
The Economist Intelligence Unit in co-operation with the IBM Institute for Business Value have released the 2007 e-Readiness Rankings.
The full PDF Report is available for download.
Interestingly, Ireland has dropped from 16th to 21st place, fitting the overall trend for Asia to catch up on North America and Western Europe. However, the criteria have changed so direct comparison with previous rankings may be invalid...
Defining the next generation of e-readiness
Several new ranking criteria have been introduced to the e-readiness model in 2007, and primary categories have been changed. In addition, some individual criteria have been retired or had their weighting reassessed.
We have refined our notion of connectivity. It remains a defining indicator of how a country's population is able to access the Internet and digital channels; the more telephones and Internet accounts a country has, the easier that e-readiness is to achieve. But it is also true that certain types of connectivity are proving better than others in enabling e-readiness. Broadband Internet access enjoys greater influence in 2007 - not only its penetration, but also its affordability to households. We have also eliminated fixed-line phones as an indicator and increased the weight of mobile penetration, as mobile phones are generally cheaper, easier to access and, with text messaging and mobile commerce applications, increasingly powerful digital devices.
Another key refinement has come in our analysis of the part that legal structures play in creating e-ready economies. We have also placed greater emphasis on the role of governments in fostering digitalisation, both as providers of vision and policy direction, and also as creators of digital channels for their constituents.
Lastly, we have re-focused the consumer and business adoption category to evaluate the utilisation of digital channels by individuals and businesses. We have also slightly increased its weight relative to connectivity and other categories in recognition of the fact that, ultimately, it is actual users who determine a country's e-readiness, not its networks.