silicon.com
By Will Sturgeon
Published: Friday 25 November 2005
All content to be available online, says controller...
The controller of BBC2 has said he intends to make the channel the first mainstream TV station to broadcast via broadband.
Roly Keating's announcement is very much in keeping with the BBC's broader ethos of a move to emerging forms of broadcasting content – a charge led by Ashley Highfield, a silicon.com Agenda Setter and the Beeb's director of new media and technology.
A pilot of the broadband service, pencilled in for 2006, will run concurrently with further trials of the MyBBCPlayer technology which will enable viewers to download and watch BBC content on demand.
The broadband incarnation of BBC2 looks set to offer a mix of streamed media and downloads.
The BBC website quotes Keating saying: "A broadband channel could of course offer simulcast programming [broadcasting simultaneously on multiple media] and the kind of comprehensive catch-up currently being piloted in the BBC Player tests.
"But there's more to it than that and you'll see our first steps on this journey next year. Whatever the broadband revolution means for audiences and channels in the future, we intend to be there, in the front line."
In related news, the BBC today announced that Radio 1 is to trial Bluetooth broadcasting this weekend for attendees at Chart Show Live at The Shepherds Bush Empire, an event which will feature artists such as Charlotte Church and McFly.
Visitors will be able to download media onto their mobile phones including interviews, backstage photos and video content.
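The BBC hasn't detailed how the trial works, but Bluetooth proximity broadcasting of this kind generally starts by scanning for discoverable handsets in range and then pushing files to each one over OBEX. Below is a minimal sketch of just the discovery step, assuming the PyBluez library and a local Bluetooth adapter; nothing here reflects the BBC's actual system, and the push step itself is omitted.

```python
# Sketch of the discovery step behind Bluetooth "proximity broadcasting":
# scan for discoverable handsets in range; a real deployment would then
# push content (interviews, photos, video clips) to each address over OBEX.
# Assumes the PyBluez library and a working local Bluetooth adapter.
import bluetooth

def find_nearby_handsets(scan_seconds=8):
    """Return a list of (address, name) pairs for discoverable devices."""
    return bluetooth.discover_devices(duration=scan_seconds, lookup_names=True)

if __name__ == "__main__":
    for addr, name in find_nearby_handsets():
        print(f"Found {name} at {addr} - candidate for an OBEX push")
```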
Daniel Heaf, interactive editor of Radio 1, said in a press release: "Radio 1 recognises the increased importance of delivering its content to a young audience on mobile platforms. I see this as one small part of our digital future."
Source here
Monday, November 28, 2005
Peter Cochrane's Blog: GPS changes everything
silicon.com
11.45 Monday 28th November 2005
Written while flying from Heathrow to Istanbul and dispatched to silicon.com from Istanbul Airport via a free Wi-Fi service
I remember buying a car when the heater was optional and a radio wasn't even on the list. Today our expectations are much higher, as are production standards and quality, and it is unthinkable that any vehicle wouldn't come complete with these items as standard. We have moved on, and the options list now includes automatic transmission, air conditioning, sound system upgrades, spoilers and GPS navigation systems. Soon these will all be standard too!
For the past six years I have enjoyed the benefit of having GPS in three successive vehicles, and in the past two years, in a succession of hire cars across the US. Beyond the comfort of air conditioning, GPS has made the greatest impact on my driving and me. No more reading of maps, and making errors on freeways when tired - GPS just takes care of it. Likewise, driving in a strange town or city is now a relatively stress-free experience. Just punch in the address or zip code and go - everything is taken care of. And should I find myself in a traffic jam, what could now be easier than a rapid re-route?
Some GPS systems have better interfaces than others but in my estimation the benefit always outweighs the interface pain. On the upside I see far more good interfaces than bad, and in my own car I think the design is one of the best applications of multimedia I have encountered anywhere. When national traffic management systems are linked in so I can be warned of developments such as traffic jams and be advised of the best alternative routings, the service will be complete. And should it be extended even further to advise on the optimum departure time, travel stops etc, then I will be ecstatic.
So is there a downside to all this? Well, because of my age I have developed a mind map of locations and distances/times across the UK and the US, along with maps of all the cities and towns I most commonly visit. For example, I feel equally at home in London, San Jose and San Francisco. Not so for young people it seems. Some are now buying their first car complete with GPS right off the bat and so no longer refer to a map from day one. Even cab and logistics companies are gradually adopting GPS to broaden the employee base and reduce operating costs.
Does this all matter? This seems to me to be on a par with the abandonment of mental arithmetic skills for the pocket calculator and spreadsheet. Older people worried about that too but the world still spins on its axis. At the same time, not having a sense of geography has, I think, a much broader impact that we should be more concerned about.
Having absolutely no idea where you are necessitates a GPS unit in your hand as well as in your car. Without a rudimentary mind map, people just make mistakes when entering data into a GPS unit. And when satellite lock is lost due to high buildings, wet trees and intense rain, the user is truly lost and prone to make erroneous decisions. The good news is that nano-gyros will soon afford us an inertial navigation facility that does not rely on satellites alone, and it will also work indoors. In addition, there are mobile telephone base stations capable of providing location estimates through triangulation and waypoint references.
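The base-station fallback Cochrane mentions amounts to trilateration: given approximate distances to towers at known positions, solve for the handset's location. Here is a minimal two-dimensional sketch using NumPy least squares; the tower coordinates and ranges are purely illustrative.

```python
# Minimal trilateration sketch: estimate a 2-D position from distances to
# base stations at known coordinates, by linearising the range equations
# (subtracting the first from the rest) and solving the resulting
# least-squares problem. Positions and ranges below are made-up values.
import numpy as np

def trilaterate(towers, ranges):
    """towers: (n, 2) array of known x/y positions; ranges: (n,) distances."""
    towers = np.asarray(towers, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = towers[0]
    r0 = ranges[0]
    # Subtracting the first range equation eliminates the quadratic terms,
    # leaving a linear system A @ [x, y] = b.
    A = 2 * (towers[1:] - towers[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(towers[1:]**2, axis=1) - (x0**2 + y0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Towers at (0,0), (5,0), (0,5); ranges measured from a point at (2,1).
print(trilaterate([(0, 0), (5, 0), (0, 5)], [2.236, 3.162, 4.472]))
```

With real measurements the ranges are noisy, so the least-squares solution gives an estimate rather than an exact fix.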
We stand on the edge of a new era in terms of maps and mapping where individuals will contribute their location data, photographs and floor plans using mobile devices. In North America this has now started with people using GPS, cameras, PDAs and laptops to take photographs of views and intersections to augment and populate public maps built from satellite images. One motivation for this is the 'thin' nature of maps there compared to the EU, where accurate and extensive mapping has been a necessity of the near-continuous wars of previous centuries. In contrast, North America was never afforded this advantage! So in a way we are again becoming the builders of maps, we are again to become the navigators - but this time around on a micro and a macro scale.
My guess is that soon we will have gone full circle and we will all have gained a new and a more intimate knowledge of our world, but through our own hand, and the use of the latest technology.
Source here
T-Mobile boss slams mobile Internet cynics
zdnet.co.uk
Jo Best
silicon.com
November 28, 2005, 10:00 GMT
Chief executive Rene Obermann claims that Internet access will soon revolve around the mobile phone
T-Mobile chief executive Rene Obermann today hit out at mobile Internet doubters, comparing them to the analysts who predicted mobiles would only be used by 25 percent of the population.
Obermann criticised the "cynicism" around mobile Internet use and "experts" unimpressed with Web access on mobiles. "Experts are usually wrong with these predictions," he said.
The T-Mobile boss, however, announced some pretty bold predictions himself. "To date, Internet traffic has been fixed line," he said. "Mobile will be the centre of the Internet."
"The growth of data and Internet traffic will displace fixed line," he added.
However, Obermann did acknowledge there have been some problems with the Internet experience on phones to date, with WAP giving the Web on mobiles a bad name.
"The user interface is not that easy... We told people they could surf the Web [but] until now that hasn't been true," he said.
Obermann has his own ideas on how user interfaces should develop to cope with the demands of surfing on a mobile.
"Ultimately, you'll be able to ask your mobile — literally ask with your personal voice — and your hopefully T-Mobile will deliver," he said. "We all know that future is a long way off."
Source here
Wireless Toronto/ St Lawrence Market
theglobeandmail.com
St Lawrence Market Offers Free WiFi
Friday, October 7, 2005 Posted at 11:04 AM EST
Globe and Mail Update
Toronto's St. Lawrence Market and its partner Wireless Toronto are offering free internet access to the market's customers.
This initiative is designed to re-imagine how technology, community groups, the arts sector and businesses can come together to revitalize public spaces.
Wireless Toronto is a volunteer-based, not-for-profit community group committed to encouraging the growth of no-cost WiFi to the city's public spaces. Working with local civic organizations, artists, and technologists, Wireless Toronto explores how wireless technologies can be used to build community in innovative and interesting ways.
Source here
O'Reilly Emerging Telephony Conference, Jan 24-26, 2006
The future is calling. Web telephony technologies are hitting the big time. eBay bought Skype, Google launched GoogleTalk, and Yahoo introduced Yahoo Messenger with Voice. Web developer voice platforms such as Tellme and Voxeo are creating entirely new voice services opportunities for developers and enterprises. Pingtel and Digium's open source IP PBX platforms are striking fear into the hearts of traditional telcos. How are the leading IP telephony companies charting this new frontier? How can you stay ahead of the curve in this industry in transition?
The O'Reilly Emerging Telephony Conference is not "Yet Another VoIP Conference." ETel brings you the best of what's happening at the cutting edge of the entire IP telephony spectrum now, and how new technology is being deployed by forward-thinking pioneers. O'Reilly's commitment, support, and access to the alpha geek community and top-tier innovative startups act as radar for decision makers in the industry who need to know what the next big technology will be.
(...)
[Speakers include:]
Keynote with Peter Cochrane
Peter Cochrane, Co-founder, ConceptLabs
(...)
One of the true visionaries in telecoms and the former CTO of British Telecom's prestigious research labs, Cochrane shares his wisdom and current interests in the communications and technology space. A peer of MIT Media Laboratory's Nicholas Negroponte, Cochrane outlines the fundamental shifts in business and technology that have many telco execs exclaiming that the barbarians are at the gate.
Source here
Build-it-yourself cell phones
cnet.com
By John Borland
Staff Writer, CNET News.com
Published: November 15, 2005, 12:09 PM PST
Surj Patel is building his own cell phone, bit by soldered bit.
It's not easy. It starts with parts that cost around $400. Then Patel and his partner, Deva Seetharam, have to write code to run on the tiny Linux-based computer that he's hoping will serve as the brains of his new phone.
So why bother? After all, it's not like cell phones are hard to find or terribly expensive.
Patel says he has lost patience with even the slimmest Motorolas and most advanced Nokias. He has been trying to build new features for cell phones for years, and he--like a growing number of other impatient developers--has concluded that phones have to be as flexible as ordinary computers if he's going to make progress.
"I want the phone to be much more open," Patel said. "The world's best research and development lab is all the hackers out there. Enable them, and they'll do it."
Thirty years ago, this could have been Apple Computer co-founder Steve Wozniak or any of his peers in their garages, building "homebrewed" computers without any inkling of the impending PC explosion. But the mobile world is in a way the inverse of that curve: Cell phone use has already exploded all over the world, but it is only recently that falling component prices have made it practical for homebrew phone hackers to build their own.
Certainly, the phone tinkerers are chafing at the boundaries set by the handset makers and the big phone carriers. They want phones to be programmable, so they can create their own services, either as start-up companies or just for their own use.
"The world's best research and development lab is all the hackers out there. Enable them, and they'll do it."
This is already happening rapidly outside the realm of the hardware itself. Tech-savvy activists are turning phones into political tools. Programmers have built gateways between cell phones and the Skype Internet calling network, allowing inexpensive international calls on mobile phones.
Patel, who studied at the Massachusetts Institute of Technology's Media Lab for several years before moving to U.K.-based carrier Orange, and ultimately to his own start-up, on Monday talked with CNET News.com about melding social applications like LinkedIn or MySpace with a phone's address book.
That type of service, which connects sprawling lists of people into overlapping groups of "friends," and allows visitors to see who is online or active, would be a much better model for a cell phone's lists of contacts, he said. But today's cell phones are virtually impossible to tweak in that way.
Casey Halverson, a Seattle-based mobile developer who is also working on homemade phone projects, has similar complaints. Commercial cell phones don't let developers write at a basic level that talks directly to the hardware, which makes some programming tasks impossible or hugely inefficient, he said.
"I think as more people move to mobile devices, they will be running into more and more limits with closed systems," Halverson said. "For now, this kind of project is limited to tinkerers, but in the future there might be some kind of open platform for people to do these kinds of things."
Challenge to carriers' power?
None of this is likely to become mainstream soon. In a market increasingly filled with cheap phones that take pictures, play music and show television reruns, a make-your-own kit isn't likely to turn many heads outside the Radio Shack crowd.
But innovations that happen at the technological fringe have a way of filtering into the mainstream over time. The first Apple computer created by Wozniak and Steve Jobs set the foundation for a desktop computer revolution. The wonky dial-up bulletin board systems of the 1980s evolved into today's near-universal Internet access. Peer-to-peer programs developed in dorm rooms transformed the biggest media companies in the world.
"I think as more people move to mobile devices, they will be running into more and more limits with closed systems."
--Casey Halverson, mobile developer
A generation of cell phones that are as open and programmable as computers could be unpopular with cell phone companies, which have relied on control of their networks and the associated phones to keep people paying subscription fees. The music industry likes the idea of selling songs over phones, for example, in part because the tight control of networks makes piracy more difficult.
For the most part, this hardware-hacker activity hasn't yet come to the attention of the big carriers. A spokesman for Cingular Wireless stressed that any cell phone radio has to be approved by the Federal Communications Commission and the carrier itself before using the network, but said that the company supported experimentation.
"We do encourage competition and innovation in the marketplace," Cingular spokesman Clay Owen said. "It's great for people to experiment, given the right regulatory approvals."
Today, the phone hackers are largely in the prototype stage, keeping track of their progress and looking for ideas from the community on their blogs. (Patel and Seetharam's blog is here, and Halverson's is here.)
Most aren't starting entirely from scratch. To comply with regulatory requirements, the projects have to find radio components that have already been approved by the FCC.
In this vein, tinkerers have found several companies that sell components originally meant for embedded systems such as surveillance cameras or GPS (Global Positioning System) receivers, which are allowed to transmit on the big cell phone networks.
Small screens and keypads are relatively easy to come by. The recent emergence of tiny Linux-based computer systems, each about the size of a pack of gum, has given them the brains for the phones. Cheaper "microcontrollers" are also available, which are simpler to install but provide far less flexibility for applications.
Once all of those parts are connected, a homebrew phone needs a Subscriber Identification Module, or SIM, card, the little chip that stores information about which carrier network to use, what the phone's number is, and other personal data. These can be taken out of any GSM (Global System for Mobile Communications)-based phone, or can be purchased on a prepaid basis.
Cingular Wireless and T-Mobile phones both use the GSM wireless standard in the United States. Other carriers use a different technology, which makes it harder for the tinkerers to adapt their equipment.
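In practice, projects like these usually drive the pre-approved GSM radio module from the Linux board over a serial link using the standard GSM 07.07 AT command set, with the SIM seated in the module. Here is a minimal sketch assuming the pyserial library; the device path, baud rate and phone number are hypothetical and depend on how the module is wired up.

```python
# Minimal sketch of driving an FCC-approved GSM radio module from a small
# Linux board over a serial link, the general approach homebrew phone
# projects describe. Uses standard GSM 07.07 AT commands; the device path,
# baud rate and dialled number below are assumptions for illustration.
import serial  # pyserial

PORT = "/dev/ttyS0"   # hypothetical serial device for the GSM module
BAUD = 115200

def send_at(link, command, wait=1.0):
    """Send one AT command and return the module's raw reply."""
    link.timeout = wait
    link.write((command + "\r").encode("ascii"))
    return link.read(256).decode("ascii", errors="replace")

with serial.Serial(PORT, BAUD, timeout=1) as modem:
    print(send_at(modem, "AT"))         # basic "are you there?" check
    print(send_at(modem, "AT+CPIN?"))   # is the SIM ready, or is a PIN needed?
    print(send_at(modem, "ATD+441234567890;"))  # dial a voice call (illustrative number)
```

The same AT channel handles SMS, network registration and call control, which is what lets the module act as the phone's radio while the Linux board supplies the user interface and applications.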
The early phone tinkerers are hoping that their work sparks a broader response in the open-source community. Once a few people show a way forward in hardware, interesting things can be done by other software developers, they say.
Patel is helping organize an Emerging Telephony Conference with tech publisher O'Reilly Media in January, where he hopes to show off as many grassroots development projects as he can find.
"I have very selfish goals," Patel said. "I want to create demos and prototypes to show clients, and I can't demonstrate the future to you if I can't actually access it. But it's very clear that it's the hacker kids that are doing all the cool stuff."
Source here
CNET'S FREE "Setting up VoIP" online class
cnet.com
Nov 28, 2005
Is an Internet phone right for you?
Find out everything you need to know about switching to Voice over IP (VoIP) in this FREE class
If your old-fashioned phone is working fine, why bother switching to an Internet-based VoIP system? For one, calls can be extremely cheap or even free to anywhere in the world.
Plus, VoIP offers unique features such as the ability to take your phone number with you when you move, choose your own area code, and have your voice mail messages sent to your e-mail inbox.
This class covers different types of phone solutions and looks at more than a dozen different Internet phone providers and how CNET editors rank them.
ENROLL today in CNET'S FREE "Setting up VoIP" online class to learn:
How VoIP technology works;
The pros and cons of different technologies;
How to choose a VoIP provider and plan;
What VoIP services may be available in the future.
To find out more about the VoIP course and other FREE upcoming classes, click here.
Source here
Sunday, November 27, 2005
ITU- Best Practice Guidelines for Spectrum Management to Promote Broadband Access
ITU
Nov 2005
Best Practice Guidelines for Spectrum Management to Promote Broadband Access
Introduction
Wireless broadband technologies hold promise for all countries seeking to ensure the availability of access to information communication technologies (ICT) and the creation of the Information Society. The ICT sector can improve standards of living and quality of life and boost productivity and competitiveness in the global and national economies. Broadband is an essential component of ICT. It is bringing new multimedia services to consumers for work and leisure, making them better-informed and more involved citizens and promoting economic and societal progress. With the advent of digital convergence and the Internet, wireless broadband offers the prospect of faster rollout of services, portability and mobility, making a reality of the vision of ‘any content, any time, any place, anywhere’ in the global information society. Wireless broadband technologies are set to close the broadband divide that exists between developing and developed countries. Wireless broadband, of course, will also require more spectrum (see http://www.itu.int/ITU-D/treg/Events/Seminars/2003/GSR/WSIS-Statement.html and http://www.itu.int/ITU-D/treg/Events/Seminars/2004/GSR04/consultation.html).
Spectrum is a scarce resource that needs to be managed effectively and efficiently in order to derive maximum economic and social benefit, including encouraging growth and rapid deployment of infrastructure and services for consumers. This requires innovative approaches to managing the spectrum dynamically to succeed in making spectrum available for broadband and other new services. As recognized by the 2004 Global Symposium for Regulators (GSR), within the spirit of transparency, objectivity, non-discrimination, and with the goal of the most efficient spectrum use, the onus is on legislators and regulators to adjust, alter or reform their regulatory codes, wherever possible, to dismantle unnecessary rules which today may adversely affect the operation of wireless technologies and systems. A new set of spectrum management principles and practices, within regulators’ respective mandate, will enable countries to harness the full potential of wireless broadband technologies. However, this cannot be done in isolation. A broad approach, including other regulatory instruments, as outlined in the GSR 2003 and 2004 Best Practice Guidelines to promote universal access and low-cost broadband, is necessary.
We, the regulators participating in the 2005 Global Symposium for Regulators, have identified the following set of best practice guidelines for spectrum management to promote broadband access:
1. Facilitate deployment of innovative broadband technologies. Regulators are encouraged to adopt policies to promote innovative services and technologies. Such policies may include:
* Managing spectrum in the public interest.
* Promoting innovation and the introduction of new radio applications and technologies.
* Reducing or removing unnecessary restrictions on spectrum use.
* Adopting harmonized frequency plans defined by ITU-R recommendations in order to facilitate competition.
* Embracing the principle of minimum necessary regulation, where possible, to reduce or eliminate regulatory barriers to spectrum access, including simplified licence and authorization procedures for the use of spectrum resources.
* Allocating frequencies in a manner to facilitate entry into the market of new competitors.
* Ensuring that broadband wireless operators have as wide a choice as possible of the spectrum they may access, and releasing spectrum to the market as soon as possible.
2. Promote transparency: Regulators are encouraged to adopt transparent and non-discriminatory spectrum management policies to ensure adequate availability of spectrum, provide regulatory certainty and to promote investment. These policies may include:
* Carrying out public consultations on spectrum management policies and procedures to allow interested parties to participate in the decision-making process, such as:
o public consultations before changing national frequency allocation plans; and
o public consultations on spectrum management decisions likely to affect service providers.
* Implementing a stable decision-making process that provides certainty that the grant of radio spectrum is done in accordance with principles of openness, transparency, objectivity (based on a clear and publicly available set of criteria published on the regulator’s website) and non-discrimination, and that such grants will not be changed by the regulator without good cause.
* Publication of forecasts of spectrum usage and allocation needs, in particular on the regulator’s website.
* Publication of frequency allocation plans, including frequencies available for wireless broadband access, in particular on the regulator’s website.
* Publication of a web-based register that gives an overview of assigned spectrum rights, vacant spectrum, and licence-free spectrum, balancing any concerns for confidential business information or public security.
* Clearly defining and publishing radio frequency spectrum users’ rights and obligations, including on the regulator’s website.
* Clearly defining and publishing licensing and authorization rules and procedures, including on the regulator’s website.
* Publication of legal requirements for imported equipment and foreign investment, in particular on the relevant government agency website.
3. Embrace technology neutrality. To maximize innovation, create conditions for the development of broadband services, reduce investment risks and stimulate competition among different technologies, regulators can give industry the freedom and flexibility to deploy their choice of technologies and decide on the most appropriate technology in their commercial interest, rather than specifying the types of technologies to be deployed or making spectrum available for a preferred broadband application, taking into consideration the need for and cost of interoperable platforms.
* Regulators can take into consideration technological convergence, facilitating spectrum use for both fixed and mobile services, ensuring that similar services are not subject to disparate regulatory treatment.
* Regulators can provide technical guidelines on ways to mitigate inter-operator interference.
* Regulators can ensure that bands are not allocated for the exclusive use of particular services and that spectrum allocations are free of technology and service constraints as far as possible.
4. Adopt flexible use measures: Regulators are encouraged to adopt flexible measures for the use of spectrum for wireless broadband services. Such measures may include:
* Minimizing barriers to entry and providing incentives for small market players by allowing broadband suppliers to begin operations on a small scale at very low cost, without imposing onerous rollout and coverage conditions, to enable small market players to gain experience in broadband provision and to test market demand for various broadband services.
* Recognizing that wireless broadband services may be used for both commercial and non-commercial uses (e.g., for community initiatives or public and social purposes) and that broadband wireless spectrum can be allocated for non-commercial uses with lower regulatory burdens, such as reduced, minimal or no spectrum fees; regulators can also allocate and assign spectrum for community or non-commercial use of broadband wireless services.
* Recognizing through flexible licensing mechanisms that wireless broadband technologies can provide a full range of converged services.
* Adopting lighter regulatory approaches in rural and less congested areas, such as flexible regulation of power levels, the use of specialized antennas, the use of simple authorizations, the use of geographic licensing areas, lower spectrum fees and secondary markets in rural areas.
* Recognizing that in markets where spectrum scarcity is an issue, the introduction of mechanisms such as secondary markets can in some cases foster innovation and free up spectrum for broadband use.
* Recognizing the role that both non-licensed (or licence-exempt) and licensed spectrum can play in the promotion of broadband services, balancing the desire to foster innovation with the need to control congestion and interference. One measure that could be envisaged is, for example, to allow small operators to start operations using licence-exempt spectrum and then move to licensed spectrum once the business case is proven.
* The promotion of shared-use bands, as long as interference is controlled. Spectrum sharing can be implemented on the basis of geography, time or frequency separation.
* Developing strategies and implementing mechanisms for clearing bands for new services, as appropriate.
* Recognizing the need for cost-effective backhaul infrastructure from rural and semi-rural areas, regulators can consider the use of point-to-point links within other bands, in line with national frequency plans, including any bands for broadband wireless access.
5. Ensure affordability. Regulators can apply reasonable spectrum fees for wireless broadband technologies to foster the provision of innovative broadband services at affordable prices, and minimize unreasonable costs that are barriers to entry. Higher costs of access to spectrum further reduce the economic viability of services in rural and under-served areas. Auctions and tender processes can also be managed to meet these goals.
6. Optimize spectrum availability on a timely basis. Regulators are encouraged to provide effective and timely spectrum use and equipment authorizations to facilitate the deployment and interoperability of infrastructure for wireless broadband networks. Regulators are also encouraged to offer all available spectrum bands, subject to overall national ICT master-plans, so that prices are not pushed up by restrictive supply and the limited amount of spectrum made available, and so that opportunities to use new and emerging technologies can be accommodated in a timely manner. In addition, special research or test authorizations could be issued to promote the development of innovative wireless technologies.
7. Manage spectrum efficiently. Spectrum planning is necessary to achieve efficient and effective spectrum management on both a short-term and long-term basis. Spectrum can be allocated in an economic and efficient manner by relying on market forces, economic incentives and technical innovations. Regulators can promote advanced spectrum-efficient technologies that allow co-existence with other radio communications services, using interference mitigation techniques such as dynamic frequency selection. Regulators can provide swift and effective enforcement of spectrum management policies and regulations.
8. Ensure a level playing field. To prevent spectrum hoarding, especially by incumbents, regulators can set a limit on the maximum amount of spectrum that each operator can obtain.
9. Harmonize international and regional practices and standards. Regulators can, as far as practicable, harmonize effective domestic and international spectrum practices and utilize regional and international standards whenever possible, and where appropriate, reflect them in national standards, balancing harmonization goals with flexibility measures. This could include harmonization of spectrum for broadband wireless access that could generate economies of scale in the production and manufacture of equipment and network infrastructure. Likewise, global harmonization of standards to ensure interoperability between different vendors’ user terminals and network equipment can be promoted. The use of open, interoperable, non-discriminatory and demand-driven standards meets the needs of users and consumers. Coordination agreements with neighbors, either on a bilateral or multilateral basis, can hasten licensing and facilitate network planning.
10. Adopt a broad approach to promote broadband access. Spectrum management alone is inadequate to promote wireless broadband access. A broad approach is necessary, including other regulatory instruments such as effective competitive safeguards, open access to infrastructure, universal access/service measures, the promotion of supply and demand, licensing, roll-out and market entry measures, the introduction of data security and users’ rights where appropriate, encouraging the lowering or removal of import duties on wireless broadband equipment, and the development of backbone and distribution networks.
Source here
pdf version here
+ Related
Comment by openspectrum.info here
Nov 2005
Best Practice Guidelines for Spectrum Management to Promote Broadband Access
Introduction
Wireless broadband technologies hold promise for all countries seeking to ensure the availability of access to information communication technologies (ICT) and the creation of the Information Society. The ICT sector can improve standards of living and quality of life and boost productivity and competitiveness in the global and national economies. Broadband is an essential component of ICT. It is bringing new multimedia services to consumers for work and leisure, making them better-informed and more involved citizens and promoting economic and societal progress. With the advent of digital convergence and the Internet, wireless broadband offers the prospect of faster rollout of services, portability and mobility, making a reality of the vision of ‘any content, any time, any place, anywhere’ in the global information society. Wireless broadband technologies are set to close the broadband divide that exists between developing and developed countries. Wireless broadband, of course, will also require more spectrum (see http://www.itu.int/ITU-D/treg/Events/Seminars/2003/GSR/WSIS-Statement.html and http://www.itu.int/ITU-D/treg/Events/Seminars/2004/GSR04/consultation.html).
Spectrum is a scarce resource that needs to be managed effectively and efficiently in order to derive maximum economic and social benefit, including encouraging growth and rapid deployment of infrastructure and services for consumers. This requires innovative approaches to managing the spectrum dynamically to succeed in making spectrum available for broadband and other new services. As recognized by the 2004 Global Symposium for Regulators (GSR), within the spirit of transparency, objectivity, non-discrimination, and with the goal of the most efficient spectrum use, the onus is on legislators and regulators to adjust, alter or reform their regulatory codes, wherever possible, to dismantle unnecessary rules which today may adversely affect the operation of wireless technologies and systems. A new set of spectrum management principles and practices, within regulators’ respective mandate, will enable countries to harness the full potential of wireless broadband technologies. However, this cannot be done in isolation. A broad approach, including other regulatory instruments, as outlined in the GSR 2003 and 2004 Best Practice Guidelines to promote universal access, and low cost broadband, are necessary.
We, the regulators participating in the 2005 Global Symposium for Regulators, have identified the following set of best practice guidelines for spectrum management to promote broadband access:
1. Facilitate deployment of innovative broadband technologies. Regulators are encouraged to adopt policies to promote innovative services and technologies. Such polices may include:
* Managing spectrum in the public interest.
* Promoting innovation and the introduction of new radio applications and technologies.
* Reducing or removing unnecessary restrictions on spectrum use.
* Adopting harmonized frequency plans defined by ITU-R recommendation in order to facilitate the implementation of competition.
* Embracing the principle of minimum necessary regulation, where possible, to reduce or eliminate regulatory barriers to spectrum access, including simplified licence and authorization procedures for the use of spectrum resources
* Allocating frequencies in a manner to facilitate entry into the market of new competitors.
* Ensuring that broadband wireless operators have as wide a choice as possible of the spectrum they may access, and releasing spectrum to the market as soon as possible.
2. Promote transparency: Regulators are encouraged to adopt transparent and non-discriminatory spectrum management policies to ensure adequate availability of spectrum, provide regulatory certainty and to promote investment. These policies may include:
* Carrying out public consultations on spectrum management policies and procedures to allow interested parties to participate in the decision-making process, such as:
o public consultations before changing national frequency allocation plans; and
o public consultations on spectrum management decisions likely to affect service providers.
* Implementing a stable decision-making process that provides certainty that the grant of radio spectrum is done in accordance with principles of openness, transparency, objectivity--based on a clear and publicly available set of criterion which is published on the regulator’s website--and non-discrimination and that such grants will not be changed by the regulator without good cause.
* Publication of forecasts of spectrum usage and allocation needs, in particular on the regulator’s website.
* Publication of frequency allocation plans, including frequencies available for wireless broadband access, in particular on the regulator’s website.
* Publication of a web-based register that gives an overview of assigned spectrum rights, vacant spectrum, and licence-free spectrum, balancing any concerns for confidential business information or public security.
* Clearly defining and publishing radio frequency spectrum users’ rights and obligations, including on the regulator’s website.
* Clearly defining and publishing licensing and authorization rules and procedures, including on the regulator’s website.
* Publication of legal requirements for imported equipment and foreign investment, in particular on the relevant government agency website.
3. Embrace technology neutrality. To maximize innovation, create conditions for the development of broadband services, reduce investment risks and stimulate competition among different technologies, regulators can give industry the freedom and flexibility to deploy their choice of technologies and decide on the most appropriate technology in their commercial interest rather than regulators specifying the types of technologies to be deployed, or making spectrum available for a preferred broadband application, taking into consideration the need for and cost of interoperable platforms.
* Regulators can take into consideration technological convergence, facilitating spectrum use for both fixed and mobile services, ensuring that similar services are not subject to disparate regulatory treatment.
* Regulators can provide technical guidelines on ways to mitigate inter-operator interference.
* Regulators can ensure that bands are not allocated for the exclusive use of particular services and that spectrum allocations are free of technology and service constraints as far as possible.
4. Adopt flexible use measures: Regulators are encouraged to adopt flexible measures for the use of spectrum for wireless broadband services. Such measures may include:
* Minimizing barriers to entry and providing incentives for small market players by allowing broadband suppliers to begin operations on a small scale at very low cost, without imposing onerous rollout and coverage conditions, to enable small market players to gain experience in broadband provision and to test market demand for various broadband services.
* Recognizing that wireless broadband services may be used for both commercial and non-commercial uses (e.g., for community initiatives or public and social purposes) and that broadband wireless spectrum can be allocated for non-commercial uses with lower regulatory burdens, such as reduced, minimal or no spectrum fees; regulators can also allocate and assign spectrum for community or non-commercial use of broadband wireless services.
* Recognizing through flexible licensing mechanisms that wireless broadband technologies can provide a full range of converged services.
* Adopting lighter regulatory approaches in rural and less congested areas, such as flexible regulation of power levels, the use of specialized antennas, the use of simple authorizations, the use of geographic licensing areas, lower spectrum fees and secondary markets in rural areas.
* Recognizing that in markets where spectrum scarcity is an issue, the introduction of mechanisms such as secondary markets can in some cases foster innovation and free-up spectrum for broadband use.
* Recognizing the role that both non-licensed (or licence-exempt) and licensed spectrum can play in the promotion of broadband services, balancing the desire to foster innovation with the need to control congestion and interference. One measure that could be envisaged is, for example, to allow small operators to start operations using licence-exempt spectrum, and then moved to licensed spectrum when the business case is proved.
* The promotion of shared-use bands, as long as interference is controlled. Spectrum sharing can be implemented on the basis of geography, time or frequency separation.
* Developing strategies and implement mechanisms for clearing bands for new services as appropriate.
* Recognizing the need for cost-effective backhaul infrastructure from rural and semi-rural areas, regulators can consider the use of point-to-point links within other bands, in line with national frequency plans, including any bands for broadband wireless access.
5. Ensure affordability. Regulators can apply reasonable spectrum fees for wireless broadband technologies to foster the provision of innovative broadband services at affordable prices, and minimize unreasonable costs that are barriers to entry. Higher costs of access to spectrum further reduces the economic viability in rural and under-served areas. Auctions and tender processes can also be managed to meet these goals.
6. Optimize spectrum availability on a timely basis. Regulators are encouraged to provide effective and timely spectrum-use and equipment authorizations to facilitate the deployment and interoperability of infrastructure for wireless broadband networks. Regulators are also encouraged to offer all available spectrum bands, subject to overall national ICT master plans, so that prices are not pushed up by restricted supply and so that opportunities to use new and emerging technologies can be accommodated in a timely manner. In addition, special research or test authorizations could be issued to promote the development of innovative wireless technologies.
7. Manage spectrum efficiently. Spectrum planning is necessary to achieve efficient and effective spectrum management on both a short-term and a long-term basis. Spectrum can be allocated in an economic and efficient manner by relying on market forces, economic incentives and technical innovation. Regulators can promote advanced, spectrum-efficient technologies that allow co-existence with other radiocommunication services by using interference-mitigation techniques, for example dynamic frequency selection (a brief illustrative sketch follows this list). Regulators can provide swift and effective enforcement of spectrum management policies and regulations.
8. Ensure a level playing field. To prevent spectrum hoarding, especially by incumbents, regulators can set a limit on the maximum amount of spectrum that each operator can obtain.
9. Harmonize international and regional practices and standards. Regulators can, as far as practicable, harmonize effective domestic and international spectrum practices and utilize regional and international standards whenever possible and, where appropriate, reflect them in national standards, balancing harmonization goals with flexibility measures. This could include harmonization of spectrum for broadband wireless access, which could generate economies of scale in the production and manufacture of equipment and network infrastructure. Likewise, global harmonization of standards to ensure interoperability between different vendors' user terminals and network equipment can be promoted. The use of open, interoperable, non-discriminatory and demand-driven standards meets the needs of users and consumers. Coordination agreements with neighbors, either on a bilateral or multilateral basis, can hasten licensing and facilitate network planning.
10. Adopt a broad approach to promote broadband access. Spectrum management alone is inadequate to promote wireless broadband access. A broad approach is necessary, encompassing other regulatory instruments such as effective competitive safeguards, open access to infrastructure, universal access/service measures, the promotion of supply and demand, licensing, roll-out and market-entry measures; the introduction of data security and users' rights, where appropriate; encouraging the lowering or removal of import duties on wireless broadband equipment; as well as the development of backbone and distribution networks.
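Guideline 7 above mentions dynamic frequency selection as an interference-mitigation technique. As a purely illustrative, hypothetical sketch (the channel numbers and threshold below are assumptions, not values from any regulation or standard), the basic idea fits in a few lines: measure each candidate channel, vacate any channel where a protected signal is detected, and operate on the quietest channel that remains.

```python
import random

# Illustrative sketch only: the channel list, threshold and simulated
# measurement are assumptions, not values from any standard.
CANDIDATE_CHANNELS = [52, 56, 60, 64, 100, 104]  # e.g. 5 GHz channel numbers
PROTECTED_SIGNAL_THRESHOLD_DBM = -62             # vacate above this level

def measure_interference_dbm(channel):
    """Stand-in for a real spectrum measurement on one channel (in dBm)."""
    return random.uniform(-95, -55)

def select_channel(channels):
    """Pick the quietest channel that shows no protected-signal energy."""
    usable = []
    for ch in channels:
        level = measure_interference_dbm(ch)
        if level >= PROTECTED_SIGNAL_THRESHOLD_DBM:
            continue  # protected service detected: this channel must be avoided
        usable.append((level, ch))
    if not usable:
        raise RuntimeError("no clear channel; back off and rescan later")
    return min(usable)[1]  # least-interfered channel wins

if __name__ == "__main__":
    print("operating on channel", select_channel(CANDIDATE_CHANNELS))
```

The same skeleton also illustrates the shared-use idea in guideline 4: replace the measurement step with a lookup against a database of protected users, or with a check of an allotted time slot or geographic area, and the selection logic is unchanged.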
Source here
pdf version here
+ Related
Comment by openspectrum.info here
LIQUID MEDIA/ Mobile Data Association
mobiledata.org
LIQUID MEDIA
(updated August 2005)
21st Century mobile services in 2005/6 are set to become more multi-access, multi-speed, multimedia and multicolour. This will provide more investment opportunities for Businesses, Government and Consumers, and the mobile industry is preparing for it. The MDA is a UK trade body that is both supporting this trend and encouraging its growth.
The Mobile Data Association was formed just over 10 years ago to promote all forms of mobile data, even if its 95 or so members tend to have a stronger cellular mobile focus. Alongside some good mobile interoperability and marketing work, for the last 6 years the MDA has published monthly text figures (see www.text.it) charting growth from 1999, when 1 Billion text messages were sent, to 26 Billion for the whole of 2004 in the UK alone. 15 Billion have already been sent in 2005. Recent estimates suggest this is part of a global total of around 1,000 Billion (1 Trillion) forecast for 2005 on GSM networks worldwide. However, it is clear from the trade partnerships in the MDA that much broader mobile data growth trends are under way.
The 4th screen of Mobile (after Film, TV and the PC) is now much more pervasive and personal. It may not compete directly with these other screens, but it is now often in colour and with a larger viewing or display area. MDA members can see from usage levels how customers are moving from a verbal to a visual world of mobile content. The members can then share and identify mobility needs and usage that can be met through faster access speeds, web content, Mobile Internet access and downloadable delivery to the mobile. With the 4th screen there are also “Push” and “Pull” services, Location based facilities, billing/pricing and customer care models (and revenue shares) often not available with the other screens. The MDA announced in October 2004 a forecast of 15 Billion W@P page impressions for 2005, which already looks as if it will be exceeded. GPRS-capable handsets in the UK already exceed 50% of the total handsets in use, so access speed concerns have significantly reduced; these handsets are also paving the way for customers towards a wider dual-mode world of 3G or next-generation services.
The trend towards multi mode (2G/3G, 2G/WIFI, and 2G/DVB-H to name only three), higher speeds and more functionality will continue. These developments, coupled with the likely global mobile volume heading towards 2 Billion cellphones in use, and 750 Million new handsets shipped in the year by the end of 2005, will provide many new opportunities. Over 75 3G Networks globally are now open with fast growth rates already stimulating further 3G network investment, and a flourishing market for Content partnerships.
The MDA announced in February 2005 that there were now “more phones than people” in the UK, illustrating that the UK has already embraced mobile extensively, with “data” being a key driver of this high adoption rate. Ofcom announced in July 2005 that gross mobile revenues now exceed fixed revenues.
Ongoing patterns of innovation and growth in usage can also be foreseen. GPRS handset penetration should approach 75% of all handsets in use, and 3G should exceed 5 Million handsets, all by the end of 2005. We also see that, now that UK subscriptions exceed the population, the mobile trade will look at many more innovative ways to stimulate usage, rather than retaining a pure emphasis on connections alone. A lot of this usage will relate to new voice requirements, but perhaps with faster growth in Data/Content, including Messaging, Mobile Content and Enterprise applications.
The variety of Content on the move, or Liquid Media, will also grow and diversify in areas of
* Participation TV (from text voting, to text to screen, to text response; Mobile Messaging, mobile cameras and picture to screen taking us beyond reality TV into a new world of celebrity downloads?),
* Mobile Music and Entertainment,
* Mobile Marketing (including bar codes / interactive posters).
* Mobile Commerce and more web/Intranet activities for consumers and business. All the mobile portals should develop into a much wider shopping mall of content and services for all our pockets. Search, storage, sharing and security will all become more important.
Interest is also growing in the Enterprise world, with significant adoption of Mobile Messaging, Mobile E-mail, Fleet Management, and field/service/sales automation via mobility solutions. Remote access will become a vital part of any blue-collar toolbox. Regulations around lone workers, as well as efficiency gains, are also driving wider adoption.
Public sector interest will develop in information alerts (Education / Health) and transport management (e.g. parking, congestion and road user charging). The need to make Government investment more productive will be a theme that continues to grow, and mobile communications is better placed than ever to play its part.
Although the wireless world to date has really been the Cinderella of the broadband (wireline) world, this is set to change rapidly, with new technologies and higher speeds.
Capability and choice will also continue to evolve with investment and new technologies including more 3G roll out, WiFi, RF ID, Fixed Wireless Access, High Speed downlink services, and GPRS developments during 2005/6. Digital Video Broadcast trials will also encourage early examination of this mobility alternative.
Balancing the benefits of mobile with societal and regulatory concerns will remain important. Codes of practice for mobile content and location-based services, the requirement that subscription-based services be clearly advertised, and anti-spam self-regulatory controls are all in place – I am sure they will be both tested and refined.
Mobile commerce will continue to develop, subject to (EU or national) pragmatic solutions on Premium Rate Services regulation, DRM and Emoney.
Voice is already going mobile but, with Liquid Media flowing, lifestyles are ready to go more wireless than ever before.
MIKE SHORT
CHAIRMAN MOBILE DATA ASSOCIATION
A Brief History of UK Text Messaging
* The first text message was sent in December 1992
* SMS was launched commercially for the first time in 1995
* 1998 - Interconnect between UK Operators O2, Orange, Vodafone and T-Mobile
* The first recorded monthly text message total was 5.4 million, in April 1998
* The first TV programme to use text messaging in a storyline was EastEnders, in 2000
* August 2001 was the first month in which over one billion messages were sent in the UK.
* The first local and mayoral electoral vote in the UK by text message took place on 23rd May 2002.
* December 2002- 1 billion SMS per day were exchanged globally
* On New Year's Day 2003, the number of text messages sent in one day topped one hundred million for the first time.
* 78 million text messages were sent by Britons on Valentine's Day 2003, 6 times more than traditional cards and a 37% increase on text figures for 2002.
* In December 2003, 1.9 billion text messages were sent in Britain as the traditional Christmas card was dumped in favour of a seasonal text message.
* On A-level results day, August 19th 2004, 81 million messages were sent throughout the UK, compared to 67 million text messages on A-level results day, August 14th 2003.
* Text messages sent in the UK during November 2004 totalled 2.27 billion, compared to 1.7 billion in November 2003 and 1.5 billion in November 2002.
* The Rt. Hon Tony Blair MP became the first UK Prime Minister to use text message technology to talk directly to the people on 25th November 2004, answering questions submitted in advance by text message from members of the public as well as in real-time in a mobile phone chat-room, transmitted live from No. 10 Downing Street.
* On New Year's Day 2004, the total number of text messages sent reached 111 million, the highest recorded daily total.
* Annual SMS totals: 1999 – 1 billion; 2000 – 6.2 billion; 2001 – 12.2 billion; 2002 – 16.8 billion; 2003 – 20.5 billion; 2004 – 25 billion
* The MDA has forecast that a total of 30 billion text messages will be sent in the UK by the end of 2005, compared to the figure of 25 billion for 2004 (a quick growth calculation based on these annual figures follows this list)
* 78% of the UK population own a mobile phone, of which over 70% send text messages
* Text messages contribute up to 16% of operator revenues
* 95% of 16-24 year olds use text messaging regularly, each sending an average of 100 texts per month
* UK mobile phone owners now send 73 million text messages on a typical day across the four UK GSM network operators
* On average, over 3 million messages are sent every hour in Britain.
* The peak hours for texting are between 10.30pm and 11.00pm.
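As a quick sanity check on the figures in the list above, using only the numbers quoted there, the annual totals from 1 billion in 1999 to 25 billion in 2004 imply a compound annual growth rate of roughly 90 per cent, and the 30 billion forecast for 2005 would be a further 20 per cent rise. A minimal sketch of that arithmetic:

```python
# Arithmetic on the UK SMS totals quoted in the list above; no new data.
annual_totals_bn = {1999: 1.0, 2000: 6.2, 2001: 12.2,
                    2002: 16.8, 2003: 20.5, 2004: 25.0}
forecast_2005_bn = 30.0   # MDA forecast for 2005
sent_so_far_bn = 15.0     # already sent in 2005 at the time of writing

years = sorted(annual_totals_bn)
span = years[-1] - years[0]
cagr = (annual_totals_bn[years[-1]] / annual_totals_bn[years[0]]) ** (1 / span) - 1
print(f"Compound annual growth {years[0]}-{years[-1]}: {cagr:.0%}")  # roughly 90%

# Year-on-year changes from the same table.
for prev, curr in zip(years, years[1:]):
    change = annual_totals_bn[curr] / annual_totals_bn[prev] - 1
    print(f"{prev} -> {curr}: {change:+.0%}")

print(f"2005 forecast vs 2004: {forecast_2005_bn / annual_totals_bn[2004] - 1:+.0%}")
print(f"Still to come in 2005 if the forecast holds: "
      f"{forecast_2005_bn - sent_so_far_bn:.0f} billion")
```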
Source here
+ Related
mobilemarketingmagazine.co.uk
25 Nov 2005
UK Goes Even More Text Mad
Figures released today by the Mobile Data Association (MDA) show that a record 2.9 billion text messages were sent during October, at an average of 93.5 million texts per day. The cumulative total for 2005, to the end of October, stands at 29 billion texts, with the MDA forecasting a total of 32 billion for the whole year.
The rise reflects growth in both person-to-person messaging, and in commercial messages from businesses to consumers. Person-to-person texts sent across the UK GSM network operators in October were up by 25.7% over October 2004, while in the b2c arena, retailers are using SMS to drive store traffic and business in the run up to Christmas.
(...)
Source here
Consumers Union, USA- DTV legislation must Expand Availability of Unlicensed Spectrum to Promote Affordable Broadband Access
October 31, 2005
United States Senate
Washington, D.C. 20515
Dear Senator:
While the digital television transition legislation soon to be addressed on the Senate floor raises a number of issues for the undersigned groups, on one crucial element of the bill, we speak with one voice: DTV legislation must expand availability of unlicensed spectrum to promote affordable broadband access.
Congress should set aside portions of the digital broadcast band for unlicensed use and direct the FCC to complete its stalled rulemaking to open unassigned TV channels in each market (TV band “white space”) for unlicensed wireless broadband services. Use of these airwaves via an unlicensed wireless broadband platform would be of enormous benefit to consumers, public safety agencies, and small businesses that seek low-cost communications to promote job growth.
Greater availability of unlicensed spectrum in the high-penetration frequencies below 700 MHz will improve our local emergency communications networks, create broadband competition, and help reduce the digital divide by ensuring that low-income, minority and rural households have both universal and affordable high-speed Internet access. From towns as diverse as Chaska, Minnesota, Coffman Cove, Alaska, Granbury, Texas and Philadelphia, Pennsylvania, hundreds of communities are opting to use unlicensed spectrum to facilitate high-speed wireless broadband networks to better serve their residents.
Greater availability of high-quality unlicensed spectrum will also create a booming marketplace for high-speed, high-capacity broadband and the technology and applications that accompany it. Hardware manufacturers, computer software makers, network operators, and Internet service providers all view unlicensed spectrum as a huge economic opportunity.
But the promise of new technology is stymied by our current spectrum policies. The best and most innovative uses of the public’s airwaves are restricted to a tiny sliver of our broadcast spectrum (the 2.4 GHz “Wi-Fi” band) that is shared with more than 250 million consumer gadgets—everything from baby monitors and cordless phones to garage door openers. Moreover, the capital cost of deploying wireless broadband networks is roughly three times higher at 2.4 GHz than below 1 GHz; battery life for mobile devices is shorter; and quality of service (particularly indoor coverage) is considerably worse.
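The cost and coverage comparison in the paragraph above is consistent with basic propagation physics: free-space path loss rises with the square of frequency, so a signal at 700 MHz loses roughly 10-11 dB less than one at 2.4 GHz over the same distance, before even counting better building penetration, which is why equivalent coverage below 1 GHz needs far fewer base stations. A minimal sketch using the standard Friis free-space formula follows; the frequencies and distance are illustrative assumptions, not figures from the letter.

```python
import math

def free_space_path_loss_db(freq_hz, dist_m):
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light in m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Illustrative comparison: a 1 km link at 700 MHz (TV band) vs 2.4 GHz (Wi-Fi).
dist_m = 1_000.0
for label, freq in [("700 MHz", 700e6), ("2.4 GHz", 2.4e9)]:
    print(f"{label}: {free_space_path_loss_db(freq, dist_m):.1f} dB over 1 km")

advantage = (free_space_path_loss_db(2.4e9, dist_m)
             - free_space_path_loss_db(700e6, dist_m))
print(f"Advantage at 700 MHz: {advantage:.1f} dB")  # about 10.7 dB less loss
```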
The DTV transition legislation should include two key provisions that, together, will go far in securing spectrum for an unlicensed communications marketplace.
First, and most imperatively, Congress should direct the FCC to complete its work on rules that would open up the “white space” between TV channels that now lies fallow and wasted, for non-interfering unlicensed use. In most rural markets where broadband availability is badly needed, there are more than a dozen empty broadcast channels (in some cases two or three dozen). Using today’s “smart radio” technologies, Congress can leverage this vast swath of dormant public spectrum to generate local economic development (particularly in areas under-served by broadband), enhance our nation’s economic competitiveness, and create opportunities for entrepreneurs.
It is clear that the Commission needs to know that Congress wants the wasted spectrum below Channel 52 reallocated for broadband, subject to strict interference protections for television viewers (which are already outlined by the FCC in its rulemaking). The positive outcomes of this public policy are extraordinary.
Second, Congress should reserve portions of the broadcast bands for unlicensed use. One approach would be to set aside channels 2, 3, and 4 as dedicated unlicensed space. Few broadcasters have selected these channels for digital transmission and they are otherwise dormant. Another approach that would ensure a full range of applications for these new technologies would be to reserve some of the 10 returned analog channels on 700 MHz for unlicensed use, withholding that portion from auction. Reserving three channels (18 MHz) for unlicensed services – and auctioning seven (42 MHz) – would pay dividends to the economy far exceeding any temporary loss of auction revenue.
It is vital that the American people benefit from the public airwaves in specific, concrete ways. The DTV bill may be the Congress’s best opportunity to promote affordable broadband nationwide and close the growing gap between the U.S. and our international competitors. The U.S. has fallen from 3rd to 16th in the world in broadband subscribers in the last few years. We remain among the worst performers in the industrialized world in terms of bit-speeds per dollar paid by the consumer for monthly service. This gap is both unacceptable and unsustainable for our long-term global competitiveness. Access to unlicensed spectrum will help close it.
Any legislation that fails to address the spectrum needs of Americans in the 21st century fails to serve the public interest. The DTV transition represents an historic opportunity to maximize efficient use of public resources to meet public needs. We urge you to ensure that all Americans benefit from it.
Sincerely,
Action Coalition for Media Education (ACME)
Alliance for Community Media
Center for Creative Voices in Media
Center for Digital Democracy
Chicago Media Action
Citizens for Independent Public Broadcasting
Common Assets Defense Fund
Common Cause
Consumer Federation of America
Consumer Project on Technology (CPTech)
Consumers Union
Free Press
Hawaii Consumers
Industry Ears
Media Alliance
Media Access Project
Media Channel
Media Democracy Chicago
National Hispanic Media Coalition
Native Networking Policy Center
New America Foundation
Public Knowledge
Prometheus Radio
Reclaim the Media
U.S. Public Interest Research Group/National Association of State PIRGs
Source here
Mobility key to converged broadband world
ft.com
Mobility key to converged broadband world
By Frank Sixt
Published: November 6 2005 19:45 | Last updated: November 6 2005 19:45
[Vietnamese girl on a mobile phone]
Steve Jobs clearly believes in it, Rupert Murdoch wants a piece and Jorma Ollila has high hopes for it: the executives behind well-known companies such as Apple, News Corporation and Nokia agree that convergence has finally arrived in technology, media and telecommunications.
I think each of these leaders, icons in their industries, is telling us the same thing: the convergence of communications, media and the internet can no longer be ignored. In the global media and communications sector, technology has brought us again to one of those extraordinary inflection points where entirely new mass consumer product categories are created.
Source here
+ Commentary: Benton Foundation
[SOURCE: Financial Times, AUTHOR: Frank Sixt, Hutchison Whampoa ]
[Commentary] The convergence of communications, media and the Internet can no longer be ignored. In the global media and communications sector, technology has brought us again to one of those extraordinary inflection points where entirely new mass consumer product categories are created. If that sounds like hype, just pause for a second and try to remember the world before the Internet, before the mobile phone, before the Walkman, before multi-channel television. That was about 20 years ago. I believe that the distribution structure for Internet, media and communications products has already changed, and has changed for good. All the trends from the music industry, pay-per-view television and Internet search indicate that “mobile” is what users want. Mobiles are at the center of the converged world of communications, the Internet and media. We now find ourselves at a similar point to where television was in the late 1940s. In 1949, just 2 per cent of American households had a television set. Five years later, more than half of all Americans were glued to the box. Today, television is ubiquitous globally. But the winners of this new world order will not be all the “usual suspects”. A mobile operator that does not understand its consumers’ media needs will fail, as will a media operator that does not anticipate what customers want in mobility. An Internet operator that does not understand its customers’ needs in mobility will be off-line for much of the time that his customers are online. This will lead to important new alignments between the mobile, Internet and media worlds.
Source here
Eli Noam/Thomas W. Hazlett: A First Amendment for the internet
ft.com
Eli Noam: A First Amendment for the internet
By Eli Noam
Published: November 15 2005 19:18 | Last updated: November 15 2005 19:18
On Wednesday, the United Nations’ world summit on the information society is opening in Tunis. Much of the attention has centred on reducing American control over the internet.
European countries are leading the charge, together with developing countries in need of more resources. Opponents of the US role have had a hard time identifying concrete misdeeds. But the issue has taken on a life of its own.
That is too bad, because the real question is not so much who regulates the overall aspects of the internet, but to what purpose. One of the fundamental questions is whether and how to regulate television programmes that are delivered over the emerging broadband internet.
There are three basic models. The first is to treat the providers of broadband services, such as cable TV and telephone companies, like a print publisher. Like the Financial Times, they would have the right to determine what content they wanted to carry and what other information providers could be accessed from their websites. Market forces are supposed to generate access to providers of information. This is the approach the Federal Communications Commission set for the US.
The second approach is that of “common carriage”, which has been the basic system for telecommunication carriers. Users can access any lawful content or application and the broadband provider cannot be a gatekeeper. This approach is known as “net-neutrality” and is advocated by public interest groups. It is also the traditional way in which the internet has functioned so successfully. But it is not entirely non-regulatory in that the broadband providers are legally obliged to keep their connections open and non-discriminatory.
Both these models have solid free-speech arguments in their favour, the difference being whose rights are given priority: those of the network providers or those of the users. But the third approach is one of state intervention. It is to treat TV over the internet just like a variant of regular broadcast TV, to require its licensing by a governmental body or adherence to various rules.
This is the approach taken by South Korea, the world’s leader in broadband internet, which requires government licensing of internet TV providers. It has not issued such licences yet, perhaps to protect cable TV. It is also the policy that the European Commission is developing. Brussels intends to require commercial internet protocol TV providers to follow rules on impartiality, decency, accuracy, right of reply and content import quotas.
The licensing and regulation of over-the-air broadcasting had a reason – there were only a few frequencies available for TV and they had to be allocated with public interest conditions. But for TV over the internet, no such rationale exists. An unlimited amount of content is possible, just as it is for the print press, which is under no obligation of impartiality or content quotas. Thus, these rules, while in pursuit of laudable public goals, establish the broadcast regulatory model for non-broadcast media, instead of the other way around. If the future of all media is on broadband, that future will be one of media regulation.
Sovereign countries can restrict their internet media, and many do so, including the summit’s host country, Tunisia. But the internet offers a loophole: content can be readily provided from across borders. The closing of that loophole by firewalls could be legitimised by the rules of an international regulator of the internet. Thus, the stakes in this debate are much higher than web address systems.
For that reason, it is important that any international internet regulation be based in advance on constitution-like principles. What is needed is a strong rule against governmental restrictions on the international flow of information over the internet, such as the First Amendment of the US constitution, which protects free speech and the press in America. Such a rule must be clear and unambiguous. Anything less will be undermined, since it will be easy to find an international majority to support various qualifications.
This gives the US a constructive opportunity. Instead of clinging to the status quo in internet governance it should move forward to pursue positive goals. Thus, any new international system of internet governance, as contemplated now at the summit in Tunis, should be conditional on a clear declaration of freedom for the global flow of all internet content. If such a resolution is passed, the US can declare victory for its First Amendment principles of free information flows and their expansion into the international arena, and make way for a broader international body. But if such a declaration is unachievable, it should give supporters of international democracy pause about what it is that they stand to gain from displacing the US from continuing to set the tone for the internet. They may be helping to establish the global internet media system of the future as one of state licensing and controls, which is vastly more troubling than temporary American over-representation.
The writer is professor of finance and economics at Columbia University and director of its Columbia Institute for Tele-Information
(...)
Thomas W. Hazlett: A brilliant bit of choreography
Prof. Noam’s compromise, urging the US to declare victory – should a First Amendment for the internet be enacted – and then go home, is a brilliant bit of choreography. The question it prompts, however, is one that Prof. Epstein seizes upon: will the ‘internet constitution’ be sufficiently potent to protect free speech? Alas, what we know from the United States’ own history suggests the reverse. The right to free speech and a free press has been sharply compromised by “public interest” regulation of electronic media.
Eli Noam’s essay rightly argues that licensing of the press, and the content controls that inevitably ensue, is the path to avoid. But the suggestion that US regulation of TV broadcasters resulted from only having “a few frequencies available for TV” is incorrect. Government regulation purposely restricted stations; indeed, of the initial 82 TV channels set aside for television broadcasting (and more were available), just three government licenses were awarded in most areas. The “Big Three” networks that resulted were a product of licensing policy, not nature or markets.
Even with such artificial scarcity, broadcasting licenses can be assigned by auction rather than by political discretion. And content controls are entirely optional. The “fairness doctrine” and “equal time rule” have nothing to do with spectrum allocation – except as a legal strategy to gain special exemption from the First Amendment.
Governments have little difficulty creating ex post rationalisations for regulation. Content controls then are used to justify restrictions on competition, bringing favoured private interests (licensees) on as key allies in pursuit of the “public interest.” The history of broadcasting in the US (and elsewhere) has seen this conspiracy in restraint of trade play out repeatedly.
US courts have allowed content controls that would clearly violate the First Amendment in print publishing. The grounds for side-stepping the Constitution include the logically vacuous “physical scarcity” doctrine, and the “pervasiveness” of broadcasting, said to give government a stake in regulating content when it wafts into the citizen’s home or office without permission. This was said to happen in radio broadcasting. Could it not be said to happen in wireless internet transmissions?
I would delight in seeing a First Amendment for the internet, but one that is up to the challenge of extending print protections to electronic media. America’s First Amendment has failed to do that. How would the internet’s First Amendment prove tougher?
The writer is professor of law and economics at George Mason University, where he is director of the Information Economy Project of the National Center for Technology and Law
Source here
Ofcom/ Digital Dividend Review
guardian.co.uk
Ofcom to sell off analogue spectrum after switchover
Owen Gibson, media correspondent
Friday November 18, 2005
The Guardian
Media regulator Ofcom yesterday announced plans to auction the analogue spectrum after the terrestrial channels switch to digital television, a move eagerly awaited by technology, telecoms and broadcasting companies.
But Ofcom said yesterday that, to maximise revenue, it would not be prescriptive about what the spectrum could be used for, opening up the possibility of bids from mobile operators, broadband providers, new local television services and other wireless services. Existing broadcasting platforms such as Freeview are likely to bid for the spectrum to allow channels to launch next-generation high-definition services.
The regulator said it would abandon the existing planning regime, which allocates a specific use for each part of the spectrum and guided the auction for 3G licences in April 2000. Instead, it will be up to the bidders to propose how the so-called "digital dividend" should be used.
Stephen Carter, Ofcom chief executive, said: "The benefits of digital switchover, in terms of efficient use of spectrum and subsequent innovation, are becoming clearer. This review is intended to maximise the digital dividend."
The cleared spectrum, which will become available on a region-by-region basis as analogue television is switched off between 2008 and 2012, is seen as valuable because it occupies the versatile UHF band, offering a combination of high capacity and long range.
Analysts said bidders will not pay anywhere near the £22.5bn raised from mobile operators for the 3G networks that allowed them to launch high bandwidth services such as mobile TV and music downloads. Any money raised will flow back to the Treasury, prompting some critics to question why the government is ordering the BBC to pay for analogue switch off through the licence fee.
Liberal Democrat culture, media and sport spokesman Don Foster said: "Ofcom's welcome initiative will test current government estimates that the Treasury will gain half a billion pounds a year from switchover. If this figure is correct, serious questions must be asked as to why the BBC, not government, should pay for the government's policy of switchover."
Ofcom's market-led approach could be bad news for the BBC, which will have to rely on the Department of Culture, Media and Sport to intervene and make a valid case for handing some of the spectrum to the corporation to enable it to launch high definition channels on Freeview.
"We make no bones about the fact that the market is best placed to decide the best use for spectrum rather than the regulator," said an Ofcom spokesman. However, he said the regulator would listen carefully to the case for intervention if it was demonstrably in the wider public interest.
It plans to consult the major stakeholders in the coming months and publish its final proposals in the last quarter of 2006, just over a year before the first analogue transmitter is due to be switched off in the Scottish borders.
Another factor will be the Regional Radio Conference due to take place in May 2006 with other European regulators to ascertain how spectrum that crosses national borders should be divided. Parts of the south and south-east, which are most likely to interfere with signals from France, Belgium and Holland, will be among the last to convert to digital in 2012.
Using the 'digital dividend'
New mobile services, with fast video and interactive applications
Widespread wireless broadband offering voice calls and fast internet
Outside broadcast services
High-definition Freeview channels
Extra interactive services and television channels
Local television
Extending high-speed wireless services to rural areas
New ideas yet to emerge
Source here
The ideas interview: Ray Kurzweil
guardian.co.uk
Monday November 21, 2005
The Guardian
Expect the human of the future to be at least part computer, the inventor and futurologist tells John Sutherland
Inventor and futurologist Ray Kurzweil
The cyber-man ... 'By 2030 we will have achieved machinery that equals and exceeds human intelligence'. Photo: Steven Senne/AP
Ray Kurzweil has enormous faith in science. He takes 250 dietary supplements every day. He is sure computers will make him much, much cleverer within decades. He won't rule out being able to live for ever. Even if medical technology cannot prevent the life passing from his body, he thinks there is a good chance he will be able to secure immortality by downloading the contents of his enhanced brain before he dies.
What is more, he says, his predictions have tended to come true. "You can predict certain aspects of the future. It turns out that certain things are remarkably predictable. Measures of IT - price, performance, capacity - in many different fields, follow very smooth evolutionary progressions. So if you ask me what the price or performance of computers will be in 2010 or how much it will cost to sequence base pairs of DNA in 2012, I can give you a figure and it's likely to be accurate. The Age of Intelligent Machines, which I wrote in the 1980s, has hundreds of predictions about the 90s and they've worked out quite well."
Although he has written some of the defining texts of modern futurology, Kurzweil is not just a theorist: he has decades of experience as an inventor. As a schoolboy he created a computer that could write music in the style of the great classical composers. As an adult, he invented the first flat-bed scanner, and a device that translated text into speech, to help blind people read. There is much, much more.
His current big idea is "the singularity", an idea first proposed by computer scientist and science fiction writer Vernor Vinge, and expounded by Kurzweil in his new book, The Singularity is Near: When Humans Transcend Biology. The nub of Kurzweil's argument is that technology is evolving so quickly that in the near future humans and computers will, in effect, meld to create a hybrid, bio-mechanical life form that will extend our capacities unimaginably.
"By 2020, $1,000 (£581) worth of computer will equal the processing power of the human brain," he says. "By the late 2020s, we'll have reverse-engineered human brains."
What form will the computer take by the middle of the century: a kind of superhuman clone or just a terrific prosthesis? "I would lean more towards the prosthesis side. Not a prosthetic device that just fixes problems, like a wooden leg, but something that allows us to expand our capabilities, because we're going to merge with this technology. By 2030, we will have achieved machinery that equals and exceeds human intelligence but we're going to combine with these machines rather than just competing with them. These machines will be inserted into our bodies, via nano-technology. They'll go inside our brains through the capillaries and enlarge human intelligence."
It sounds creepily wonderful. But will humans have the political and social structures to accommodate and control these super-enhancing technologies? Look at the problems that stem-cell research is currently having in America, for example.
"That's completely insignificant," he replies. "I support stem-cell research and oppose the government restrictions, but nobody can say that this is having any significant impact on the flow of scientific progress. Ultimately, we don't want to use embryonic stem-cells anyway. Not because of any ethical and political issues. If I want artificial heart cells, or if I want pancreatic cells, it will be done from my own DNA and there'll be an inexhaustible supply. These barriers are stones in the river. The science just flows around them."
OK. But what if the bad guys get hold of the technology? Does that possibility keep Kurzweil awake at night?
"I've been concerned about that for many years," he concedes. "But you can't just relinquish these technologies. And you can't ban them. It would deprive humanity of profound benefits and it wouldn't work. In fact it would make the dangers worse by driving the technologies underground, where they would be even less controlled. But we do need to put more stones on the defensive side of the scale and invest more in developing defensive technology. The main danger we have right now is the ability of some bio-terrorist engineering a brand new type of virus that would be very dangerous. Bill Joy and I had an op-ed piece in the New York Times a couple of weeks ago criticising the publication of the genome of the 1918 avian virus on the web. We do have to be careful."
Kurzweil has plenty of critics. Some are horrified by his vision of a future that doesn't seem to need humans. Others suggest his predictions are based on assertion rather than evidence. Some, such as Steven Pinker, argue that Kurzweil has oversimplified evolution by wrongly claiming it to be a pursuit of greater intellectual complexity and applying it to technology.
"It is truly an evolutionary process," Kurzweil insists. "You have different niches and technology competes for them. The better ones survive and the weaker ones go to the wall. Technology evolves in a virtually straight line. The first important point is that we can make accurate predictions and I've been doing that for several decades now. The other important point is the exponential rate at which technology is moving under what I call the Law of Accelerating Return. It's not just Moore's Law."
Kurzweil is referring to the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented and would continue to do so, a key foundation of Kurzweil's thinking.
"It's not just computers. In 1989, only one ten-thousandth of the genome was mapped. Sceptics said there's no way you're gonna do this by the turn of the century. Ten years later they were still sceptical because we'd only succeeded in collecting 2% of the genome. But doubling every year brings surprising results and the project was done in time. It took us 15 years to sequence HIV - a huge project - now we can sequence Sars in 31 days and we sequence other viruses in a week."
All this is moving towards "the singularity", is it? "Yes. Consider how important computers and IT are already. Then go on to consider that the power of these technologies will grow by a factor of a billion in 25 years. And it'll be another factor of a billion by the time we get to 2045".
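As a rough illustration (not from the interview itself), the doubling arithmetic Kurzweil leans on is easy to check: one ten-thousandth of the genome mapped in 1989, doubled annually, reaches completion around 2003, and a factor-of-a-billion improvement over 25 years implies a doubling time of roughly ten months. A minimal Python sketch of that calculation:

```python
# Back-of-envelope check on the exponential claims quoted above.
# The input figures come from the interview; the arithmetic is ours.
import math

# Genome example: one ten-thousandth mapped in 1989, doubling every year.
coverage, year = 1e-4, 1989
while coverage < 1.0:
    coverage *= 2
    year += 1
print(f"Doubling annually, full coverage arrives around {year}")   # ~2003

# "A factor of a billion in 25 years" implies roughly 30 doublings,
# i.e. one doubling about every ten months.
doublings = math.log2(1e9)
print(f"Implied doubling time: ~{25 * 12 / doublings:.0f} months")
```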
Source here
digital cameras
guardian.co.uk
Seeing the big picture
Thursday November 24, 2005
The Guardian
Cheap and simple digital cameras are turning the world of photography on its head. Will this revolution be the profession's biggest challenge, asks Tom Ang
The photographic coverage of the July 7 London bombings was an unexpected aftershock for media professionals. The realisation that almost every image seen on television news and front pages was captured by amateurs sent shivers down the collective media spine. It was confirmation, if any were needed, that digital photography had come of age. Thanks to sales exceeding 200m a year worldwide, including cameraphones, access to digital photography is all but universal among the urban population.
What is astonishing is to realise that the important changes - from nearly zero adoption to near-saturation of the market - took place in only five years. While the first patents for a filmless camera were filed in 1972, we had to wait until 1986 for the first digital camera system, the Canon RC-701. It was aimed at press photographers but its $27,000 price tag was hardly an encouragement. The first consumer digital camera under $1,000 arrived in 1994: the Apple QuickTake had a fixed lens and took 640x480 pixels (similar to the quality of an average TV screen). It was apparent that, while it was fun to get a picture without the hassle of processing film, the price was too high for image quality that was, to put it kindly, dilapidated.
It was not digital cameras that blocked progress. As with any new technology, the environment had to be conducive. Digital photography needed computers with fast processors capable of dealing with large files - one image file can be bigger than a year's worth of text documents. In the late 1990s, computers were being loaded with ever more memory, hard-disk capacity and extra software to encourage flagging sales.
Right time, right place
Digital photography took off only when, at the turn of the millennium, every computer could comfortably handle image files. As the market grew, manufacturing costs dropped. The adoption of cost-saving measures such as sharing components (the imaging chip, image processors, LCD screens and minor optics) between camera models also helped to drive down prices.
The impact of digital photography on modern life is in part due to marketing. Aggressive competition between camera makers has forced product cycles - the time between new models - to shorten. Replacement models are being announced almost before the previous camera has reached the market.
The arrival of each new model offering more features and more quality at lower prices means that consumers are the winners. And don't they know it: half of all digital camera sales (not including cameraphones) in the US and Europe are to those who already own a digital camera.
The intense activity has fuelled a parallel growth in technological awareness: it is no longer remarkable when a grandmother asks her teenage grandson to explain the difference between optical and digital zoom. Retired schoolteachers shop for cameras with a checklist of specifications in a way they would never have done for film-based models.
The increased awareness of the technology together with wide access to digital photography has, in turn, thrown the industry into disarray. The headlines - such as Dixons removing film cameras from its shelves, Kodak no longer making black-and-white printing paper and the near-extinction of household names such as Leica, Agfa and Polaroid - all signal the obvious changes.
The restructuring of the profession is more subtle, profound and distressing: experienced photographers are finding themselves marginalised, their darkroom skills discounted with a rapidity that makes the destruction of craft traditions by the industrial revolution appear snail-paced in comparison.
To join the digital world, these professionals not only have to abandon large investments in equipment and experience, they must retrain to use computers and imaging software. And as film-using professionals are supplanted by digital photographers, so their largely obsolete equipment can be bought for a song.
Working practices have changed. The industry has now dumped ultimate responsibility for image quality on the lap of photographers - amateur and professional alike. All the quality control processes formerly ensuring that you got good results when films were developed and printed now sit in your hands.
Not only is it your job to download your pictures, you must adjust or manipulate the image if you don't like the results, before printing it out on your printer. And if the results are not perfect, there is no one to blame but yourself.
After enjoying a brief respite from having to placate increasingly demanding customers, the industry woke to the horrified realisation that no one makes prints any more. From the happy days of a print made for every image captured, it's now one miserable print for every few hundred images.
This is particularly galling because we now take far more images than ever: stories of those who once exposed a roll of film per holiday but now return to find hundreds of images in their new digital camera are not rare. But we show the images on our computers using slideshow features of software such as iPhoto or PowerPoint; we attach image files to the emails we send home; we share our holiday snaps on picture-sharing websites.
For the professional, the honeymoon of sensual joy in reviewing pictures immediately and of not having to dash to the lab to get films processed has been replaced by a colder reality. The working day suddenly grew hours longer: with nightfall, we can't put our feet up. No, we sit at the laptop downloading images, captioning and backing up. And, if we are press photographers, we then have to edit the day's shoot before transmitting them.
Any attempt to gaze into the crystal ball will be obscured by the sheer number of images being taken. In 1998, 67bn images were made worldwide. We know that because 3bn rolls of film were sold. It is impossible to be accurate, but with a world population of digital cameras exceeding a third of a billion on top of millions of film-using cameras still in use, it is likely that more pictures are taken every year than in the previous 160 years of photography put together. In addition to the other pollutions we have unleashed on ourselves, we may well have to thank digital photography for giving us image pollution.
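Those volume figures are easy to sanity-check. The sketch below reproduces the arithmetic; the exposures-per-roll and shots-per-camera numbers are assumptions of ours, not figures from the article.

```python
# Rough check of the image-volume figures quoted above.
# Exposures per roll and digital shots per camera per year are assumptions.
film_rolls_1998 = 3e9
exposures_per_roll = 22               # assumed mix of 24- and 36-exposure rolls
film_images = film_rolls_1998 * exposures_per_roll
print(f"Film images in 1998: ~{film_images / 1e9:.0f}bn")          # ~66bn

digital_cameras = 0.33e9              # "a third of a billion" cameras
shots_per_camera = 300                # assumed annual figure per camera
digital_images = digital_cameras * shots_per_camera
print(f"Digital images per year: ~{digital_images / 1e9:.0f}bn")   # ~100bn
```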
· Tom Ang is a photographer, broadcaster and author of Eyewitness Companion: Photography, published by Dorling Kindersley, £14.99. He was the presenter of A Digital Picture of Britain, shown on BBC2 and BBC4
Source here
+ Related
guardian.co.uk
Ghosts of the digital future
Victor Keegan
Thursday November 24, 2005
The Guardian
Anyone watching the Sunday night television series about the Pharaohs will be impressed with the data preservation techniques used 3,000 years ago, which have enabled their words and images to be viewed today. Because they wrote on stone, they had none of today's problems associated with viruses or changes in formats and storage techniques that threaten the longevity of our images.
There are more photographs around than ever before and, thanks to the growth of digital photography and cameraphones, there may well be more photos taken this year than in the whole of history. But how many will still be there 50 years hence, let alone 3,000?
This may seem a silly question to ask when we are being bombarded with wonderful, easy-to-use, websites offering to store online our digitised photos at the click of a button for nothing (for a list see www.andromeda.com/cgi-bin/photo/showsites.pl). Of course, there is a danger some of these could go bust in a few years, and there is also the question of privacy. Who is responsible if a newspaper publishes your private photos stored on a public server?
A survey by McAfee has found that 68% of adults archive their photos only in digital form - because it is so easy to do. But even if our snaps will still be there decades hence, it is an open question whether they will be in a format future computers can access.
One problem is recurrence of familiar failures that have consigned much of our past digital data to oblivion: the failure of a hard disk, scratched CD-Roms or changes in storage technology, from floppy disks to external hard drives, to CDs to DVDs, to USB devices.
Second, if photos are stored in several ways, some of them could include proprietary technologies that might be superseded 20 years on. Even if you use non-proprietary formats such as JPeg or Tiff, they do not, as Jeff Schewe (in PhotoshopNews.com) and others have pointed out, provide a format for storage of the unprocessed raw sensor data of which there are more than 100 formats from more than 15 camera manufacturers. Third, if you fall back on old technology and use a photo album, remember that ordinary digital photo printers do not deliver as long a life as old-style processed snaps.
For the moment, there is not really much alternative to multiple back-ups if you want to be reasonably sure your digital photos will be there in 50 years' time. This means keeping a copy on your hard disk, another on a CD, DVD or external storage unit or one of the burgeoning online storage sites, plus printed copies of treasured ones.
The trouble is it takes superhuman discipline to update back-ups regularly in multiple places, let alone ensure they are all labelled and dated so you are not faced with thousands of anonymous tags in future years. This may sound like a lot of hassle. It is. But it is worth doing it until all the preservation efforts around the world bear fruit - not least the $100m Library of Congress project in the US to preserve "a universal collection of knowledge and creativity for future generations".
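For readers who want to automate some of that discipline, here is a minimal sketch of a dated, multi-destination back-up routine of the kind described above. It assumes Python, JPEG files and placeholder folder paths; it is an illustration of the approach, not a recommendation of any particular tool.

```python
# Sketch of a dated, multi-destination photo back-up with a checksum manifest.
# SOURCE and DESTINATIONS are hypothetical placeholders; adapt to your setup.
import hashlib
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path("~/Pictures").expanduser()
DESTINATIONS = [
    Path("/Volumes/ExternalDrive/photo-backups"),
    Path("/Volumes/BurnToDVD/photo-backups"),
]

def checksum(path: Path) -> str:
    """MD5 digest so each copy can later be verified against the original."""
    return hashlib.md5(path.read_bytes()).hexdigest()

def backup() -> None:
    stamp = date.today().isoformat()          # dated folder, e.g. 2005-11-24
    manifest_lines = []
    for photo in sorted(SOURCE.rglob("*.jpg")):
        manifest_lines.append(f"{checksum(photo)}  {photo.name}")
        for dest in DESTINATIONS:
            target_dir = dest / stamp
            target_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy2(photo, target_dir / photo.name)
    # A labelled, dated manifest goes alongside each set of copies.
    for dest in DESTINATIONS:
        (dest / stamp).mkdir(parents=True, exist_ok=True)
        (dest / stamp / "manifest.txt").write_text("\n".join(manifest_lines))

if __name__ == "__main__":
    backup()
```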
None of this should deter people from buying cameras or cameraphones. They are getting better and cheaper all the time and offer rich opportunities to record life as never before. We now have a wonderful record of Victorian life, thanks to so many early photographs being preserved. If photographers do not think seriously about digital preservation, there is a danger that the information revolution could turn into a new dark age. But if most of the photos now being taken are preserved, it will leave for posterity, as well as our own descendants, an amazing record of what life was like in the 21st century. It would be a tragedy if much of this were to be lost because of a failure to agree a common approach.
Source here
Murdoch predicts demise of classified ads
ft.com
Murdoch predicts demise of classified ads
By Andrew Edgecliffe-Johnson
Published: November 24 2005 17:59 | Last updated: November 24 2005 17:59
Rupert Murdoch has added to the gloom surrounding the US newspaper industry, saying that the business model of most titles is under threat as classified advertising moves online and circulations fall further.
The News Corporation chairman, who once described classified revenues as “rivers of gold”, said: “Sometimes rivers dry up.”
Interviewed by Press Gazette, a UK trade publication part-owned by his son-in-law, he added: “I don’t know anybody under 30 who has ever looked at a classified advertisement in a newspaper.”
His comments came as Knight Ridder, the US’s second-largest newspaper group, yielded to shareholder pressure and retained advisers to explore “strategic alternatives”, including a possible sale, in response to rising newsprint costs, declining circulation and more competition from the internet.
Mr Murdoch, who owns the New York Post, indicated that US newspapers’ editorial strategies were to blame for their financial problems. “Outside New York, it’s all monopoly newspapers. Some have good work in them, but it tends to be overwritten, boring and elitist, not a reflection of the general mood in the public.”
After spending $1.5bn so far this year on acquisitions such as MySpace.com, a fast-growing online community, and IGN Entertainment, a games and content site, Mr Murdoch disputed the recent claim by Sir Martin Sorrell, chief executive of WPP, that some traditional media owners were panic buying new media assets.
“There’s no panic, and there’s certainly no overpayment,” he said. “It was a very careful strategy to go for the two biggest community sites for people under 30. If you take the number of page views in the US, we are the third biggest presence already.”
He admitted that News Corp was not the most profitable online media group, and said it had “a huge amount of work ahead to get that whole thing right”.
Mr Murdoch also criticised newspapers in the UK, where he owns titles including The Sun and the Sunday Times. Recent attempts to boost circulation by giving away a DVD with each copy must stop, he insisted.
Source here
+ Related
ft.com
Murdoch shares ‘vision for internet’
By Andrew Edgecliffe-Johnson, Media Editor
Published: September 12 2005 17:25 | Last updated: September 12 2005 17:25
Rupert Murdoch has told his senior executives they must find ways to integrate their traditional publishing and broadcasting operations with the internet assets in which News Corporation has invested in recent months.
Delegates at a weekend News Corp summit in Carmel, California, were told that different parts of the media conglomerate needed to start talking more to each other to find more ways of collaborating online.
The summit also focused on identifying which of the group’s existing news and sports brands – which range from the Fox TV channels in the US to The Sun and The Times in the UK – could best be used on News Corp’s new online platforms.
A further priority was to ensure that different divisions were using technology that was compatible across the company, according to one person who attended.
Neither James Murdoch, his younger son who was in charge of the group’s internet strategy before becoming chief executive of British Sky Broadcasting, nor his older brother Lachlan, who resigned from his executive duties in July, attended the summit.
Mr Murdoch also invited outside speakers including George Gilder, author of a technology investment newsletter called The Gilder Report, and representatives of Kleiner Perkins Caufield & Byers, a Silicon Valley venture capital group.
The summit, the second in seven months to address News Corp’s internet strategy, follows Mr Murdoch’s speech to the American Society of Newspaper Editors in April, in which he said that digital technology was driving “a revolution in the way in which young people are accessing news” which would require “a transformation of the way we think about our product”.
Since then, News Corp has formed a new online division, Fox Interactive Media. It has announced the $580m acquisition of Intermix Media, which operates a social networking site called MySpace.com; bid $60m for Scout Media, which publishes sports websites; and last week unveiled a $650m takeover of IGN Entertainment, an internet game information company.
Source here
Friday, November 25, 2005
innovation strategy
ft.com
Businesses to be offered a bigger role in shaping innovation strategy
By Clive Cookson, Science Editor
Published: November 24 2005 02:00 | Last updated: November 24 2005 02:00
Business has an unprecedented opportunity to shape the government's innovation policy, including the distribution of £200m a year in technology grants, the head of the UK Technology Strategy Board says today.
The days when the Department of Trade and Industry decided the policy with token industrial consultation - and then offered funding to companies on a take-it-or-leave-it basis - are over.
Graham Spittle, the board's chairman, launches its first annual report today and a separate "call to action" for companies to become more involved in the strategy. The business-led board, set up a year ago, has already put some of the elements in place but much of the strategy is still up for discussion.
"We are pleased with the way companies have taken part in our competitions [for grants] so far but we need them to be engaged with us in driving the strategy forward," says Mr Spittle, who runs the IBM Hursley Laboratory, the US computer group's main software development centre in Europe. "We want to hear from our 'customers' so that we can solve real problems rather than push new technologies for their own sake."
The technology programme is shaping up so far with two prongs. Most of the early funding is going into "key emerging technologies": electronics and photonics; advanced materials; information and communications; bioscience and healthcare; sustainable production and consumption; energy; design engineering and advanced manufacturing.
But Mr Spittle is keen to develop the second prong: "innovation platforms" where a range of technologies can be integrated and co-ordinated with government policy and procurement, to improve public services and "the ability of UK business to provide solutions".
The board is committing £10m each to two pilot "innovation platform" projects, in "network security" and "intelligent transport systems/services", and it expects substantial contributions from other government departments, particularly the Ministry of Defence, the Home Office and the Department for Transport. "We went first for the platforms we could start quickly but we probably have another six to eight that are worth looking at," Mr Spittle says.
In the past, DTI technology grants have been distributed between too many small projects, the board has decided. In future there will be fewer competitions with larger grants, aimed particularly at projects that integrate several technologies, for example combining medicine, diagnostics and IT to treat more patients at home.
Although two of the six business members of the board come from the life sciences industry, comparatively little of its budget has been directed to the biotechnology and pharmaceutical sector, which carries out 38 per cent of all corporate research and development in the UK, according to the DTI R&D Scoreboard.
But Mr Spittle says he is happy with the broad balance of spending across different sectors: "We want to help companies in areas where the UK is already strong, such as pharmaceuticals and aerospace, but we also want to go for new areas."
The CBI, the employers' organisation, whose annual conference next week will have innovation as a theme, is happy with the way things are going. "The board is right to focus on creating the right environment for strategic technology development," says John Cridland, CBI deputy director-general. "The criteria they have adopted are sound and the TSB is well placed to work right across government and make the strategy happen. Now we need to see it deliver."
Source here
Westminster Council widens Wi-Fi network
zdnet.co.uk
Karen Gomm
ZDNet UK
November 25, 2005, 14:45 GMT
A pioneering public sector wireless network is being extended, and should eventually bring Wi-Fi to local residents
Westminster residents could soon get Wi-Fi access as part of Westminster City Council's plans to extend its wireless networking project. The council, which last year began deploying a high-speed wireless network to try and improve the delivery of public services to citizens, announced on Friday that it is now extending the network.
Plans for residential access to Wi-Fi hotspots are also in the pipeline, said a council spokeswoman on Friday.
"At the moment residents can't access the network but this is something that will happen and is something the council is planning to do," she revealed, adding that this access would not be free.
ZDNet UK reported in April 2004 that Westminster council was planning to extend its Wi-Fi network, after a pilot trial using just four cameras. Eighteen months later, that expansion is complete.
Until now, the wireless network covered just one part of Soho, but it is being extended to include all of Soho and two housing estates within Westminster, Lisson Green in Marylebone and Churchill Gardens in Pimlico.
Ten wireless CCTV cameras will be installed on each estate, attached to lamp posts.
As in the original Soho trial, they will be used to deter antisocial behaviour and alert council or emergency services to potential problems. There are no plans to install noise monitoring equipment in any residential area until further consultation has taken place.
In Soho, the Council will deploy another twenty wireless CCTV cameras in an attempt to ensure that visitors and residents in the heart of London remain safe. Because they are networked, the cameras can be monitored in real time by trained personnel who can then quickly respond to any criminal behaviour.
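The arrangement described here is, in principle, just video streamed from networked cameras to a central monitoring point. As a purely illustrative sketch, and not a description of the council's actual system, the Python snippet below reads frames from one such camera, assuming it exposes a standard RTSP stream at a hypothetical address.

# Illustrative only: watch a single networked CCTV camera in real time.
# Assumes the camera exposes a standard RTSP stream; the URL below is
# hypothetical and is not Westminster City Council's actual system.
import cv2

CAMERA_URL = "rtsp://192.0.2.10/stream1"  # hypothetical camera address

def monitor(url: str) -> None:
    capture = cv2.VideoCapture(url)            # open the network stream
    if not capture.isOpened():
        raise RuntimeError(f"Could not open stream: {url}")
    try:
        while True:
            ok, frame = capture.read()          # grab the next frame over the network
            if not ok:
                break                           # stream dropped; a real system would reconnect
            cv2.imshow("CCTV monitor", frame)   # show it to the operator
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break                           # operator quits with 'q'
    finally:
        capture.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    monitor(CAMERA_URL)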
In January, the council will assess this next stage of the project. If it proves successful, the network could be rolled out across more of Westminster.
The project was first launched with a pilot scheme in 2003 with Intel and Cisco, using Wi-Fi to link a small number of CCTV cameras to a central network.
Back in 2004, council staff said they hoped to eventually use the network to keep council staff connected when out of the office, and to improve delivery of local government services such as bill payment, waste management, and parking.
Islington Council, in North London, has also embarked on a wireless project. It has installed a mile-long Wi-Fi network that gives free access to local residents and businesses.
Source here
Broadband take-up tipped to level out in two years
zdnet.co.uk
Karen Gomm
ZDNet UK
November 25, 2005, 18:25 GMT
Marketing campaigns, lower prices and wider coverage mean that 60 percent of the UK will have broadband by 2008
Almost two-thirds of consumers in Western Europe will have adopted broadband by early 2008, thanks to aggressive marketing campaigns and a rise in awareness, a report published on Friday said.
Broadband adoption will settle at a rate of 60 percent in Western Europe and other advanced markets within two years, according to Datamonitor. With PC ownership likely to settle at 70 percent, this would represent a large proportion of the available market.
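A rough calculation shows why this would cover most of the addressable base. The two shares below are Datamonitor's forecasts; treating their ratio as broadband's reach among PC-owning households is my own reading of the figures, not a number taken from the report.

# Back-of-the-envelope check of the two forecast shares quoted above.
# Interpreting their ratio as broadband's reach among PC-owning households
# is an assumption of this sketch, not a figure from the Datamonitor report.
broadband_share = 0.60   # forecast broadband adoption across all households
pc_share = 0.70          # forecast PC ownership across all households

share_of_available_market = broadband_share / pc_share
print(f"{share_of_available_market:.0%} of PC-owning households")  # -> 86%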
Consumer broadband adoption is increasing so rapidly that the market is already approaching maturity, even as it grows at its fastest rate ever in many markets across Western Europe, according to the Datamonitor report, Consumer broadband markets approaching maturity.
Datamonitor also forecast that eight million households in the UK will have a broadband connection by the end of 2005.
Tim Gower, enterprise communications analyst at Datamonitor, said aggressive marketing campaigns were driving increased broadband adoption. "The current situation in many markets can be best described as one of rapidly increasing penetration, where broadband has effectively entered its growth sweet spot," he said.
Other factors behind the broadband boom are a dramatic increase in broadband coverage, lower prices, and new voice, video and data applications.
According to Datamonitor's research, in many countries broadband is the fastest growing consumer technology of all time, outpacing the uptake of mobile phones and dial-up Internet access.
Technologies such as broadband and managed services are helping service providers to counter a decline in traditional markets such as fixed-line voice revenues, Gower added.
Source here
Thursday, November 24, 2005
Broadband Access for Rural Development/ A-BARD
Analysing Broadband Access for Rural Development (A-BARD) is a 24-month Coordination Action, started in January 2005, to research rural broadband provision and use as part of Scientific Support to Policies (SSP) in the EU Sixth Framework Programme.
A-BARD aims to continuously identify views on the issues and barriers to widespread broadband provision and the extent to which broadband can act as an external driver of change in rural economies.
Targeted to the needs of rural communities, A-BARD addresses questions of direct concern to eRural actors, such as:
* Broadband deployment in rural Europe: what are the issues, models and best practices, and how affordable and accessible is provision?
* Broadband applications and services: what is emerging, and can some of those applications and services directly address the digital divide?
Broadband is a very dynamic area, with technology, applications and services moving fast. Timely information is essential to making the right decisions, yet while there is a vast amount of information about broadband, little of it is of particular relevance to rural areas.
A-BARD will provide rolling monitoring, reporting and analysis of current trends and recent developments in broadband provision, access and use across rural Europe, which all interested actors can draw on.
Source here
The Promise of Broadband Wireless Communities
The Promise of Broadband Wireless Communities | The Wireless Internet Institute and the United Nations ICT Task Force | 2005
“This book, a collaborative effort of the United Nations ICT Task Force and the Wireless Internet Institute (W2i), is intended as a resource and toolkit for local authorities seeking to plan, fund and deploy broadband-wireless networks in their communities. It also aims to raise awareness of the immense potential of wireless technologies in meeting the real needs of the poor. Indeed, the exponential growth in new technologies continues to offer vast opportunities. I look forward to working with all concerned to build an information society that benefits and empowers all the world’s people.”
—From the Foreword by UN Secretary-General Kofi Annan
“Broadband-wireless technologies open up a quantum leap in the future of information and communications technologies by making high-speed access to the Internet affordable anywhere and any time. Populations that were isolated can get connected. People scattered over large areas can interact as integrated communities. And cities and local governments can serve their citizens with greater speed and effectiveness thanks to unlimited new applications to improve quality of life.
“Moreover, local-government officials recognize that broadband connectivity is a utility essential to the broad-scale social, economic, and educational development of their communities, and they have begun taking a more active leadership role in developing sustainable broadband service for their constituencies.
“Despite their growing popularity worldwide, deploying community broadband-wireless infrastructures still presents significant challenges. This volume is intended to serve as a knowledge base, including a primer on the technology and regulatory issues, a record of case studies and best practices, and a how-to implementation and toolbox. Whether built and/or operated by the local government or by private-sector investors, the best implementations appear to grow out of a broad community consensus and a careful analysis of local needs.”
—From the Preface to “The Promise of Broadband Wireless Communities”
Source here