Imagine tapping out text messages on a device the size of an index card and as flat as a piece of paper, then folding it in thirds to hold it to your ear and make a phone call. Refold it in a slightly different shape and wrap it around your wrist, where it becomes a watch and also communicates with an ear bud that lets you talk hands free.
Nokia researchers, along with researchers at the University of Cambridge in England, have created an animated video describing such a vision for mobile devices, which could come in the future through nanotechnology developments.
The animation shows practical applications of several specific technologies the scientists are developing through their nanotechnology research, said Tapani Ryhanen, head of multimedia devices research at Nokia Research Center. The concept video was created at the prodding of New York's Museum of Modern Art, which is opening an exhibit Sunday called "Design and the Elastic Mind," he said.
In another segment of the video, the user flaps the paper-thin device in front of an apple. Tiny particles fly off the apple, landing on the device, which quickly analyzes them. It then flashes a warning signal, recommending that the user wash the apple before eating it.
That's one of the most interesting potential uses that Ryhanen sees. "Personally, I'm mostly interested about the bigger issue of how we can make our mobile devices more intelligent and so they can sense something from the environment," he said. One day, a device like the one in the video could sense harmful elements in the air. With potentially millions of such devices communicating globally, they might be able to warn people about a disease that could spread into a pandemic, identifying dangerous areas around the world, he said.
The device in the animation is covered in minuscule "grass" that can absorb solar energy to power it. It's also "superhydrophobic," making it incredibly dirt-repellent. The animated woman in the video, sitting at an outdoor café, accidentally drops a bit of honey on the device, and the drop slides off without leaving anything behind.
Just before she walks away, she places the device on top of her brightly colored purse and snaps a photo. When she folds the device around her wrist, she sets a new wallpaper and the entire surface of the device displays the same pattern as her purse.
Currently, the researchers have developed "bits and pieces" of the technologies envisioned in the concept "but we are not yet at the level that we could integrate those things together into a device that we're showing in this animation," Ryhanen said. Some features of the device could start appearing in commercial products as soon as seven years from now, Nokia said.
Around 18 Nokia researchers and 25 University of Cambridge researchers have been working together for about a year at the university's West Cambridge site.
The concept animation video is expected to be available for viewing on Nokia's site on Monday. Nothing about the concept, called Morph, will be on exhibit at the museum, but it will feature in the exhibition catalog and on MoMA's Web site, Nokia said.
Tuesday, February 26, 2008
SAP ships 'enhancements' for ERP
SAP is expected on Monday to ship a third "enhancement package" to its ERP (enterprise resource planning) application, with new features focusing both on core functionality, such as financials and procurement, and functionality aimed at verticals like the retail and manufacturing industries.
The release also has more than 50 "enterprise services bundles." These are sets of existing SAP ERP service interfaces, packaged in various ways to address specific business processes such as order-to-cash, the company said.
The vendor doesn't charge existing customers for enhancement packages, which stem from a strategy shift it announced in 2006. Instead of issuing major ERP platform releases every 12 to 18 months, it is parceling out incremental updates to the current core platform, SAP ERP 6.0. According to the company, 4,000 implementations of SAP ERP 6.0 have gone live since January 2007.
Given the "historically painful" process of implementing an ERP, it is wise for SAP to move in this direction, said Marc Songini, an analyst with Nucleus Research, by e-mail on Friday.
SAP's rival, Oracle, is also basing its Fusion strategy around pain-free upgrades, making it a competitive play as well, he said.
The move is also a good way for SAP to preserve its installed base, he added. "If you have a choice between the agony of re-implementing SAP or turning to a new vendor such as Oracle or Lawson, you might be tempted to jump ship. But if you're already on SAP, know it warts and all, and want to keep that investment, and SAP is making it easy to get new features without a rip and replace, you won't be as tempted."
Microsoft kills off HD DVD drive for Xbox 360
Microsoft will stop making external HD DVD drives for its Xbox 360 game console, but won't say whether it will offer a Blu-ray Disc drive instead.
The company will continue to provide warranty and product support for existing HD DVD players, it said.
The Xbox 360 has a standard DVD drive built in: support for high-definition content came only with an add-on. Sony's PlayStation 3 console, however, has a Blu-ray Disc drive built in, which helped grow support for the rival high-definition format.
Microsoft's announcement comes barely a week after HD DVD's main backer, Toshiba, said it will stop making the drives in the face of declining support for its high-definition format from retailers and studios. HD DVD's other supporters included Microsoft, Intel, HP and Universal Studios. Blu-ray also had the support of Panasonic and Samsung.
Warner Bros., which initially supported HD DVD, said early this year it would switch to Blu-ray Disc, a decision widely seen as a mortal blow to the format. Retailer Wal-Mart also recently said it would no longer sell HD DVDs.
A Microsoft spokesperson said Monday morning that the company is taking the long-term view that support for specific high-definition drives is less important as people increasingly look to download movies and content from the Internet.
Microsoft's Xbox Live Marketplace lets people download content to their Xbox or PC from major studios such as Paramount Studios and Warner Bros., with recent titles such as "Ocean's Thirteen."
That movie, which costs £19.99 (US$39.26) to download from the site, lets a user keep one copy on their PC and one on their mobile device. The movie is encoded in Microsoft's Windows Media format.
Sunday, February 24, 2008
Microsoft accidentally leaks SP1
On Thursday, some Windows Vista users began finding Service Pack 1 in Windows Update, even though the upgrade isn't supposed to be available broadly until the middle of March.
Microsoft acknowledged the error. "Yesterday, a build of SP1 was posted to Windows Update and it was inadvertently made available to a broad group. The build was intended only for our more technically advanced testers, and was meant to only be offered to those with a specific registry key set on their PC," Microsoft said in a statement. It also reiterated plans to make SP1 broadly available in mid-March.
Some customers on a Windows Vista forum reported that they successfully downloaded SP1 from Windows Update, but most others said that the download didn't work for them.
The accidental posting to Windows Update follows another recent issue with an update designed as a prerequisite for downloading SP1. Some users, after trying to install the update, got stuck in a reboot cycle. Earlier this week, Microsoft posted a fix for that problem.
Microsoft issued a second refresh of SP1 to beta users in late January, raising hopes that the final version would be out within a couple of weeks. The company had long said that SP1 would come out in the first quarter.
The final broad release of SP1 could boost Vista sales, particularly among enterprise users, because some companies have said that they are waiting for SP1 before upgrading to Vista.
EA offers $2 billion for Grand Theft Auto publisher
Take-Two Interactive Software, publisher of the popular Grand Theft Auto series of games, has received and rejected a US$2 billion acquisition bid from Electronic Arts but left the door open to a possible acquisition later.
The EA bid, which wasn't made public until shortly before Take-Two announced its rejection Sunday, offered $26 cash per share for Take-Two. At the time the bid was made on Feb. 19, the price represented a 64 percent premium on Take-Two's Feb. 15 closing price of $15.83. It is currently a 49 percent premium on Take-Two's Friday closing price.
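A quick back-of-the-envelope check, sketched in TypeScript, shows how those premium figures follow from the prices quoted above; the implied Friday close is derived from the rounded 49 percent figure, so it is only approximate.

```typescript
// Quick check of the premium figures quoted above, using the prices from the
// article. The implied Friday close is derived from the rounded 49 percent
// figure, so treat it as approximate.
const offerPerShare = 26.0;   // EA's cash offer per Take-Two share
const closeFeb15 = 15.83;     // Take-Two's closing price before the Feb. 19 bid

const premiumAtBid = (offerPerShare / closeFeb15 - 1) * 100;
console.log(`${premiumAtBid.toFixed(0)}% premium at the time of the bid`); // ~64%

const impliedFridayClose = offerPerShare / 1.49; // from the stated 49% premium
console.log(`implied Friday close: about $${impliedFridayClose.toFixed(2)}`); // ~$17.45
```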
In its rejection the board of Take-Two said it judged the bid to be "inadequate in multiple respects."
"Electronic Arts' proposal provides insufficient value to our shareholders and comes at absolutely the wrong time given the crucial initiatives underway at the company," Take-Two Chairman Strauss Zelnick said in a statement.
Take-Two is scheduled to release the latest installment in the popular Grand Theft Auto series, "Grand Theft Auto IV," on April 29. The release of "GTA IV" was slated for October last year, but was delayed in order to give the development team more time for certain game elements. The series has sold more than 65 million copies to date, and the company said that it wants to hold off on talks with EA until after that game hits the market. It therefore proposed starting talks on April 30.
EA had originally told Take-Two the offer was subject to Take-Two agreeing to start talks by Feb. 22, but it noted Sunday that it would hold the offer open "for the present time" in the hope that discussions can begin.
In an open letter to investors, EA CEO John Riccitiello wrote that EA believes its offer is a good one for Take-Two shareholders. He said Take-Two's future is uncertain and that "there is a strong likelihood that the company will be sold in the not-too-distant future."
"So, that's it. We've made a proposal to buy Take-Two. Our preference is to make this a friendly transaction and I'm hopeful we can achieve that. We've sent this proposal in the genuine belief that combining EA and Take-Two would be good for the people who make games and good for the people who play them," he wrote.
Goolag makes Google hacking a snap
The hacking group Cult of the Dead Cow has released a tool that should make Google hacking a little easier for novices.
Called Goolag, the open-source software lets hackers use the Google search engine to scan Web sites for vulnerabilities.
This is something that hackers have been doing for years, but it can be tricky work -- involving custom scripts and tools that sift through the mountain of data available via Google.
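The article doesn't show Goolag's internals, but the general technique -- often called "Google dorking" -- simply combines ordinary search operators into queries that surface files and pages a site probably didn't mean to expose. Below is a minimal, defense-oriented sketch in TypeScript; the operators are standard Google syntax, while the specific checks are illustrative examples, not Goolag's actual signature list.

```typescript
// Illustrative only: builds defensive "Google dork" queries that a site owner
// could run against a domain they control. These are generic examples, not
// Goolag's actual check list.
const domain = "example.com"; // audit only domains you own or administer

const checks: string[] = [
  "filetype:sql",               // exposed database dumps
  "filetype:log",               // stray server or application logs
  "inurl:admin",                // forgotten administrative interfaces
  'intitle:"index of" backup',  // open directory listings containing backups
];

const queries = checks.map((check) => `site:${domain} ${check}`);
queries.forEach((q) => console.log(q));
// Each resulting line can be pasted into Google to see what its index
// has already picked up for that domain.
```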
The Cult of the Dead Cow is best known for creating the Back Orifice software 10 years ago, which could be used to remotely control a Windows machine.
Like Back Orifice, the software could be used by both legitimate security professionals and criminals. Goolag comes with an easy-to-use graphical interface. It is based on techniques developed by Computer Sciences Corp. researcher Johnny Long, a well-known computer hacker who has spent years documenting the way that Google's search engine can be used to uncover security vulnerabilities in the Web sites it indexes.
In a statement, The Cult of the Dead Cow said that the software is "one more tool for Web site owners to patch up their online properties."
"It's no big secret that the Web is the platform," the statement said. "And this platform pretty much sucks from a security perspective."
There are already free Web vulnerability search tools available -- such as the Wikto scanning software -- but the Cult of the Dead Cow's notoriety will probably help make Goolag popular, security experts said Friday.
"I don't think it's particularly new, but maybe it makes [Google hacking] more accessible," said Robert Hansen, CEO of Sectheory.com and author of the Ha.ckers.org Web security blog.
"It is interesting because it could theoretically represent a lower burden of entry for the novice Google hacker," he added.
Amichai Shulman, chief technology officer with security vendor Imperva, agreed that there are still far too many security vulnerabilities on Web sites. "Maybe the headlines that this release is getting will serve as a wake-up call for application owners," he said.
Microsoft letter hopeful, vague on Yahoo deal
In a letter to employees, Microsoft put an upbeat spin on its attempt to take over Yahoo.
While noting that no acquisition agreement is in place, Kevin Johnson, president of Microsoft's platforms and services division, wrote that he expects such a transaction to close in the second half of this year. "If and when Yahoo! agrees to proceed with the proposed transaction, we will go through the process to receive regulatory approval, and expect that this transaction will close in the 2nd half of calendar year 2008," he wrote.
Microsoft made its US$44.6 billion offer for Yahoo on Feb. 1. More than a week later, Yahoo rejected the bid as too low. Microsoft maintains that the offer is fair.
Johnson addressed some of the most pressing questions surrounding the potential acquisition in the letter, which Microsoft distributed to the media, but answered few of them definitively.
Acknowledging that there would likely be overlap in terms of staffing, he also noted that Microsoft has hired more than 20,000 people since 2005. "We have no shortage of business and technical opportunities, and we need great people to focus on them," he said. Microsoft would retain locations in both Silicon Valley and Redmond if the deal went through, he said.
He didn't shed any more light on the fate of either company's brands. "It is premature to say which aspects of the brands and technologies we would use in our combined offerings," he said.
Johnson also revealed little about how Microsoft would handle Yahoo's wide use of open-source software in its systems, an issue that some industry watchers have wondered about. Yahoo often uses open-source software in its back-end systems, while Microsoft prefers its own proprietary software. In the past, after acquisitions, Microsoft has sometimes migrated systems to its own software and in other cases maintained the existing software, Johnson said. "Yahoo! has made significant investments in both its skills and technologies, so we would work closely with Yahoo! engineers to make pragmatic platform and integration methodology decisions as appropriate, prioritizing above all how those decisions would impact customers," he said.
Johnson indicated that the process of integrating the companies would be critical to a combination's success. He pointed to recent Microsoft acquisitions, including aQuantive and Tellme, as examples of successful integrations.
Earlier this week, The New York Times reported that Microsoft planned to soon launch a proxy fight to replace Yahoo's board and force the takeover in a hostile bid. Neither company confirmed that report.
Johnson reiterated Microsoft's belief that a combination of the two companies would create a "more compelling alternative in search and online advertising," something that major media companies are looking for, he said.
Motorola finds new counter for shrinking pile of beans
Motorola President and CEO Greg Brown added another piece to the company's new management team on Friday with the announcement that Paul Liska will become executive vice president and chief financial officer.
Liska, who has been a partner in several private equity firms and played financial and general executive roles in transportation, publishing and retail companies, will take over Motorola's finances on March 1. Tom Meredith, who has been acting CFO since last year, will remain on Motorola's board and help Liska with the transition, the company said in a statement. It praised Meredith for cost-cutting efforts.
Motorola's last permanent CFO, David Devonshire, resigned last March. The company had run into rough waters after it failed to come up with a popular successor to the slim Razr clamshell phone. Former President and CEO Ed Zander handed those two jobs over to Brown in November, though he remains chairman until the next Motorola shareholder meeting in May.
Since Brown took Zander's place, Chief Technology Officer Padmasree Warrior has also left, and the company has said it might spin off its handset business.
Motorola has fallen behind both Nokia and Samsung in the hotly contested mobile-phone market, but its handset division still brought in US$4.8 billion of the company's US$9.6 billion revenue in the fourth quarter of last year. The company as a whole saw revenue fall from $11.8 billion a year earlier and earnings per share drop to $0.04 from $0.25.
Developers: OpenSocial OK, but needs tuning
Google's OpenSocial initiative to simplify the creation and adaptation of applications for social-networking sites pursues a valuable goal, but its technology platform needs further improvement.
That's the consensus from several developers who have been testing the OpenSocial APIs (application programming interfaces) and the OpenSocial implementations, or "containers," of participating Web sites.
However, the technical bumps they have encountered, while annoying and frustrating, haven't prompted them to give up on OpenSocial. Instead, the developers remain hopeful that the project, announced almost four months ago, will continue to mature.
Chris McCormick, a games industry contractor based in Australia, has encountered "a few rough edges" when working with OpenSocial, especially bugs in the partner sites' containers, but is "pretty satisfied" with the project.
"The API is intelligently designed and seems to cover all bases quite comprehensively. It should be possible to do some really fun stuff with it," McCormick said via e-mail.
Meanwhile, Aakash Bapna, an information sciences student in Bangalore, has also run into technical issues. "Bugs, bugs and lots of bugs. There are lots of issues with OpenSocial specs as they are launched. You can't tell when your smoothly working application can break," he said via e-mail.
For Bapna, a big hole is the unavailability of the server-side REST (Representational State Transfer) API, which will allow applications to tap servers, something that Thiago Santos, a Brazilian developer of an upcoming application called Partyeah, also misses.
Like McCormick, Santos also has encountered many bugs in partner site containers. Santos would also like Google to do a better job of communicating changes and updates to OpenSocial components. Still, he's confident OpenSocial will get over its growing pains eventually. "I have no doubt that [OpenSocial's promise] will be fulfilled," Santos said via e-mail.
That promise is to establish a standard application-development platform for social applications so developers don't have to remake an application for each social-networking site. While Facebook hasn't signed up for OpenSocial, other big social-networking sites have, like MySpace, Bebo and LinkedIn, as well as major enterprise software players like Oracle and Salesforce.com, which see social features emerging within business applications.
With OpenSocial, developers will be able to build the core portions of social applications and then adapt them if necessary, with, they hope, minor tweaks and changes for specific sites.
"It's not 'write once, run everywhere.' It's more 'learn once and write everywhere.' You learn the OpenSocial model once. For most applications there will be a core of code that's common to all platforms," said Patrick Chanezon, developer advocate at Google.
Then it's likely that participating Web sites will make additional extensions available to developers in their OpenSocial containers, allowing developers to take advantage of specific features in their sites that aren't included in the standard, Chanezon said.
Developers don't seem worried that OpenSocial will splinter if partner sites add too many proprietary functions to their containers. "I think it should be reasonably easy to write apps that run on all social-networking sites that support OpenSocial without much modification," McCormick said. "The core of OpenSocial contains the most important parts of the social-networking experience ... Anything which does end up adding something drastically new and wonderful will more than likely become part of the standard anyway."
Regarding the technical bumps, Google was clear that the first version of the OpenSocial APIs, labeled 0.5, was far from final, and that it was putting it out in the market in order to get feedback from developers. Now, with version 0.7, Google says that developers can create production applications. Moreover, OpenSocial's technology will continue to improve. "If it turns out this round of OpenSocial provides good applications and we want to get to stellar applications, we'll enhance it," said David Glazer, an engineering director at Google.
The server-side REST API is also coming, but Google and its partners need to agree on exactly how it will be done, Chanezon said. "It will be super-useful for mobile applications," he said. Mobile phones whose browsers aren't powerful enough to run the OpenSocial JavaScript APIs will use the REST API to get the data they need from a server.
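Since the REST interface hadn't been finalized at the time, any concrete example is necessarily speculative; but the pattern Chanezon describes -- a thin mobile or server-side client fetching social data over plain HTTP instead of running the in-browser JavaScript APIs -- might look roughly like the TypeScript sketch below. The host, path, auth scheme and response fields are hypothetical placeholders, not a published Google API.

```typescript
// Hypothetical sketch only: a thin client fetching a user's friend list over
// REST instead of via the in-browser OpenSocial JavaScript APIs. The host,
// path, auth header and response shape below are placeholders, not a real,
// published API.
interface Person {
  id: string;
  displayName: string;
}

async function fetchFriends(userId: string, token: string): Promise<Person[]> {
  const url = `https://social.example.com/people/${userId}/friends`; // placeholder endpoint
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${token}` }, // auth scheme assumed
  });
  if (!res.ok) {
    throw new Error(`Friend list request failed: ${res.status}`);
  }
  const body = await res.json();
  return body.entries as Person[]; // field name assumed
}

// A phone client could call fetchFriends() and render the names directly,
// with no gadget container or JavaScript API support required.
```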
Google is also working on a security technology for OpenSocial applications called Caja, which the company calls an open-source JavaScript "sanitizer" that aims to provide a security layer to prevent the spread of phishing scams, spam and malware via applications.
Also in the works is Shindig, an open-source reference implementation of OpenSocial overseen by The Apache Software Foundation, whose purpose is to let Web site operators implement an OpenSocial container in a matter of hours.
Meanwhile, Google's social-networking site Orkut will soon make OpenSocial applications available to its end users, as will some of the other participating sites. "That's what we're looking forward to: opening the doors and watching the party get started," Glazer said.
AOL's Userplane, a maker of Web-based communication applications, has been involved in the OpenSocial effort and is eager to see it continue to evolve, said Userplane CEO Michael Jones. "As application developers, we're excited about reducing the code we have to write, so I love the concept behind OpenSocial," Jones said.
"Although it has some uncertainties, I feel we're seeing an initiative that can have a great role in the future," Santos said.
Friday, February 22, 2008
17 arrested in Canadian hacking bust
Quebec provincial police conducted raids on Wednesday, breaking up a hacking ring that police say is responsible for an estimated CDN$45 million (US$44.3 million) in damage to computer systems.
The hackers installed remote-controlled "botnet" software on victims' computers in order to run phishing and spamming operations, said Capt. Frederick Gaudreau, of the Surete du Quebec, in a videotaped press conference posted to the police agency's Web site. "The hackers managed to install botnets on the victims' computers, which permitted them to control at a distance the victims' computers," he said. "These said computers were then used to attack Web sites in order to steal victims' data."
If convicted of computer hacking charges, the accused could face 10 years in prison, he said.
Although the hackers operated from about a dozen towns all over Quebec, their botnet network was international in scope, infecting 39,000 computers in Poland, 28,000 in Brazil, and 26,000 in Mexico -- the top three countries affected by the group. In all, they hacked into more than 100,000 computers in 100 countries.
The accused range in age from 17 to 26; police did not release their names. Three of them are minors, Gaudreau said.
This is the first time that Canadian authorities have dismantled such a network, he added. The investigation was done in collaboration with the Royal Canadian Mounted Police.
Europe makes moves towards Internet censorship
A debate over the use of Internet filtering is heating up in Europe, with privacy advocates and carriers going head to head with authorities.
In Finland, programmer Matti Nikki is under investigation for publishing a secret list of domains that authorities had allegedly censored in an effort to stop the spread of child pornography. Nikki published his list to prove the system was being abused, and was himself censored as a result. The Finnish Chancellor of Justice has received a complaint about police handling of the matter.
The authorities distribute their list to the country's twenty largest Internet service providers, which then block access to the sites. The rest of Finland's 200 ISPs haven't implemented the technology, so protection is far from complete.
The problem with filtering is that it is a very blunt tool, according to Swedish Internet activist Oscar Swartz.
"I have seen the list Nikki published and it includes links to sites with regular pornography, so they shouldn't be censored," said Swartz.
The Finnish police force is aware of the problems with filtering.
"The technology we currently use works well with sites that only include child pornography. To filter sites with a mixture of content we need to use other technologies as well," said Lars Henriksson, chief superintendent at the National Bureau of Investigation.
Finland isn't the only country where the temperature is rising. Danish authorities recently decided to block file-sharing site The Pirate Bay after pressure from the International Federation of the Phonographic Industry (IFPI). ISP Tele2 decided to fight the court order. It is so far the only ISP that has been ordered to shut off access to The Pirate Bay, but IFPI plans to expand the blocking.
Other organizations are starting to show an interest in the use of filtering, including mobile network operators. They are banding together to combat the distribution of child pornography.
"We are here to tackle a very disturbing and damaging phenomenon," said Craig Ehrlich, chairman of the GSM Association, a group of mobile network operators, launching the initiative at a conference in Barcelona last week.
The use of emotive issues to justify the introduction or extension of censorship worries some.
"It's easy to ignore the negative aspects of filtering and censorship when talking about something so universally disliked as child pornography," said Swartz.
But state censorship proposals don't stop there: the European Union's Justice and Security Commissioner Franco Frattini called last September for ISPs to block access to Web sites hosting information about bomb-making, and U.K. Home Secretary Jacqui Smith said in January that she wanted action taken against sites that encouraged terrorism, including social networking sites.
Such actions could have wider consequences: "If the E.U. starts to filter sites related to piracy, terrorism and child pornography, it will have some serious effects on the freedom to communicate," said Swartz.
White spaces group: Device testing on track
A wireless broadband device tested by the U.S. Federal Communications Commission for interference with television and wireless microphone signals has not failed, as a broadcasting group claimed last week, members of the White Spaces Coalition said Thursday.
The National Association of Broadcasters (NAB) on Feb. 11 said a so-called prototype device submitted by Microsoft lost power during tests being run by the FCC. The power failure comes after another white spaces device malfunctioned in tests run by the FCC last year.
But Ed Thomas, a tech advisor to the White Spaces Coalition and a former chief of the FCC's Office of Engineering and Technology, said Thursday that while the device's power supply failed after many hours of continuous testing, the failure did not cause the device to interfere with television signals.
Thomas, during a press briefing, said the NAB was engaged in "rhetoric" designed to complicate the FCC's device testing. "Let this be based on science, not politics," Thomas said of the ongoing testing at the FCC. "Let the facts prevail."
The White Spaces Coalition, including Microsoft, Philips, Dell and Google, is asking the FCC to allow wireless devices to operate in the so-called white spaces of the television spectrum, space allocated for television signals but vacant. The coalition wants the white spaces opened up to give consumers more wireless broadband options, and the white spaces devices would be targeted at longer-range broadband than traditional Wi-Fi.
If the FCC approves the devices this year, commercial white spaces wireless devices could be available as soon as late 2009.
The FCC's in-house testing of four devices will continue for a couple more weeks, then the agency will conduct field tests for up to eight weeks. A second white spaces device has experienced no power failure problems, Thomas said.
But television broadcasters have opposed the coalition, saying it's likely that the wireless devices will interfere with TV signals. The NAB has suggested the FCC should focus instead on a successful transition of TV stations to digital broadcasts, required by February 2009.
White spaces devices are "not ready for prime time," said Dennis Wharton, the NAB's executive vice president.
Wharton responded to Thomas' assertion that the Microsoft device did not interfere with TV signals.
"The devices they've tested haven't performed the way they were expected to perform," Wharton added. "That, in our view, constitutes a failure."
Open APIs may help Microsoft repair reputation
If Microsoft executes effectively on its new interoperability promises, it could repair its tarnished reputation in the technology industry and help the company get out of its own way to compete more effectively with Google.
At first glance, Microsoft's news on Thursday that it would provide access to documentation for its major software products, including Windows Vista, Office 2007 and Exchange Server 2007, appeared to be a way to appease the European Commission in its ongoing antitrust case. It also seemed an acknowledgment that Microsoft can't ignore the open-source community's impact on its business and prominence in the industry any longer.
"[The news] validates and places a Microsoft acknowledgment that the open models that have emerged -- which Microsoft has denied almost as vociferously as tobacco companies have fought the idea that smoking causes cancer -- are a perfectly reasonable way to go," said Nick Selby, a senior analyst and research director at The 451 Group.
Still, many remain skeptical: providing easier access to APIs (application programming interfaces), and vowing to let developers build open-source implementations on those APIs without interference, doesn't mean Microsoft is a friend to open source or that the company will change how it does business. Already, open-source companies like Red Hat are adopting a wait-and-see approach to the news -- and rightfully so, as Microsoft has cloaked its own business interests in interoperability announcements before. For example, last year Microsoft struck a so-called interoperability pact with Linux vendor Novell, while at the same time saying it would go after people who violated more than 200 patents Microsoft says it holds for technologies in Linux.
But Thursday's news could, if played correctly, repair the long-held notion in the industry that Microsoft is a proprietary bully that buries anyone who jumps in its sandbox. By making a companywide commitment to being more transparent about its technology and friendly to open-source developers and companies that build interoperable technology, Microsoft proves it realizes it can no longer embrace proprietary principles -- and expect the entire industry to go along with it.
"This is the new Microsoft," said Chris Swenson, an analyst at NPD Group. "They really are changing." However, he acknowledged that because of Microsoft's previous business practices and reputation, it's highly likely that "no one is going to give them credit for it."
Still, people should keep an open mind about Microsoft's extension of a new olive branch to open source, he said. If critics take a few steps back, they'll see that Microsoft's decision did not happen overnight.
Microsoft's new attitude is the result of many years of antitrust tussling, berating at the hands of the open-standards community and product-interoperability challenges that have pushed the company to change its ways in order to stay relevant, analysts said. Under increased global pressure, the company has been slowly coming around to the idea of open source -- through key initiatives like the Open Specification Promise -- over the past few years.
Mike Gilpin, an analyst with Forrester, suggested that many of Microsoft's recent executive changes also represent a shift in mind-set to a more open policy, and noted the rise of executives such as Bill Hilf, general manager of platform strategy and a former IBM Linux specialist, as part of this attitude adjustment.
"I wouldn't be surprised if there wasn't a relationship between the two things," he said. "This does come from the top. I think in the way this is being communicated inside of Microsoft, it places a lot of requirements on developers and product managers to behave in a certain way -- and if they don't do that, they'll be in a lot of trouble with [Chairman] Bill [Gates] and [CEO] Steve [Ballmer]."
Gilpin acknowledged that he has always been skeptical of Microsoft's intentions toward being more open and transparent, but in the past two years, he said the company "has really changed its stripes around interoperability."
In a blog post on Thursday, Hilf himself noted that Microsoft's new commitment has evolved over time, though he called the changes to Microsoft's strategy "broad-reaching" and said they "go above and beyond any prior incremental changes in Microsoft's DNA."
These changes are happening not only because of market forces that have given rise to the success of open source, but also because Microsoft has suffered from its own proprietary legacy. Aside from its embroilment in lengthy and costly antitrust cases both in the U.S. and overseas, a lack of support for open standards and interfaces has also hurt the adoption of its technology. By being more open, the company could also be more successful in areas where it has struggled, like the Internet, analysts said.
For example, when Microsoft created a new version of its Internet Explorer browser, IE 7, to keep up with the latest Internet standards -- and to compete with Mozilla's Firefox browser -- many people who'd built sites to work with previous versions of IE found that those sites no longer worked, because they had been designed around Microsoft's proprietary technologies. In trying to do the right thing and support more open and broadly adopted technologies, Microsoft found that its own proprietary software got in the way of its best intentions.
In fact, the changing business models on the Internet that have made Google so successful are another example of where Microsoft could have benefited if it had embraced open standards and more technological transparency sooner, Selby said. Google right away gave developers access to APIs to create a community around its Web-based products and services -- and used this fact to criticize Microsoft, he said.
Microsoft's decision to be more open takes a bit of the wind out of the sails of that argument, he added. "It's a simple way to do the right thing and also manage a poke in Google's eye," Selby said.
Providing more open access to technologies also could give Microsoft leverage if it is indeed successful in its bid to purchase Yahoo, which recently said it would open up more APIs to developers in its own pursuit of Google.
Hard drive encryption has Achilles heel
If you think that encrypting your laptop's hard drive will keep your data safe from prying eyes, you may want to think again, according to researchers at Princeton University.
They've discovered a way to steal the hard drive encryption key used by products such as Windows Vista's BitLocker or Apple's FileVault. With that key, hackers could get access to all of the data stored on an encrypted hard drive.
That's because of a physical property of the computer's memory chips. Data in these DRAM (dynamic RAM) chips disappears when the computer is turned off, but it turns out that this doesn't happen right away, according to Alex Halderman, a Princeton graduate student who worked on the paper.
In fact, it can take minutes before that data disappears, giving hackers a way to sniff out encryption keys.
For the attack to work, the computer would have to first be running or in standby mode. It wouldn't work against a computer that had been shut off for a few minutes because the data in DRAM would have disappeared by then.
The attacker simply turns the computer off for a second or two and then reboots the system from a portable hard disk, which includes software that can examine the contents of the memory chips. This gives an attacker a way around the operating system protection that keeps the encryption keys hidden in memory.
"This enables a whole new class of attacks against security products like disk encryption systems that have depended on the operating system to protect their private keys," Halderman said. "An attacker could steal someone's laptop where they were using disk encryption and reboot the machine ... and then capture what was in memory before the power was cut."
Some computers wipe the memory when they boot up, but even these systems can be vulnerable, Halderman said. Researchers found that if they cooled down the memory chips by spraying canned air on them, they could slow down the rate at which memory disappeared. Cooling chips down to about -58 degrees Fahrenheit (-50 degrees Celsius) gave researchers time to power down the computer and then install the memory in another PC that would boot without wiping out the data. "By cooling the chips we were able to recover data perfectly after 10 minutes or more," Halderman said.
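To illustrate the kind of analysis such software could perform, here is a minimal sketch, purely hypothetical and not the researchers' actual tool, of scanning a raw memory dump for AES-128 keys. Disk-encryption software typically keeps the cipher's expanded 176-byte key schedule in RAM, so any 16 bytes that expand into exactly the 160 bytes that follow them are very likely a live key. The dump file name and command line below are assumptions made for the example.

import sys

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return r

def build_sbox():
    """Construct the AES S-box from its definition (GF(2^8) inverse + affine map)."""
    sbox = [0] * 256
    for x in range(256):
        inv = next((y for y in range(256) if gf_mul(x, y) == 1), 0) if x else 0
        s, res = inv, inv
        for _ in range(4):                      # affine transformation
            s = ((s << 1) | (s >> 7)) & 0xFF
            res ^= s
        sbox[x] = res ^ 0x63
    return sbox

SBOX = build_sbox()

def expand_key(key):
    """Expand a 16-byte AES-128 key into its full 176-byte key schedule."""
    w = list(key)
    for i in range(4, 44):
        t = w[4 * (i - 1):4 * i]
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]    # RotWord then SubWord
            t[0] ^= RCON[i // 4 - 1]                # round constant
        w += [a ^ b for a, b in zip(w[4 * (i - 4):4 * (i - 3)], t)]
    return bytes(w)

def find_aes_keys(dump):
    """Yield offsets where 16 bytes expand into the 176-byte schedule that follows."""
    for off in range(len(dump) - 175):
        candidate = dump[off:off + 16]
        if expand_key(candidate) == dump[off:off + 176]:
            yield off, candidate.hex()

if __name__ == "__main__":
    # Hypothetical usage: python find_keys.py memory.dump
    image = open(sys.argv[1], "rb").read()
    for offset, key in find_aes_keys(image):
        print(f"possible AES-128 key at offset {offset:#x}: {key}")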
Led by Princeton University, the team included researchers from the Electronic Frontier Foundation and Wind River Systems.
U.S. states have enacted a series of tough data disclosure laws over the past five years which force companies to notify residents whenever they lose sensitive information. Under these laws, a missing laptop can cost a company millions of dollars as well as public embarrassment as it is forced to track down and notify those whose data was lost.
However, many state laws, such as California's SB 1386, make an exception for encrypted PCs. So if a company or government agency loses an encrypted laptop containing sensitive data, it is not compelled to notify those affected.
The team's research may spur legislators to rethink that approach, Halderman said. "Maybe that law is placing too much faith in disk encryption technologies," he said. "It may be that we're not hearing about thefts of encrypted machines where that data could still be at risk."
Laws like SB 1386 treat encryption as if it's a "magic spell" and ignore the fact that there's such a thing as bad encryption, said encryption expert Bruce Schneier, who is chief technology officer with BT Counterpane.
The underlying problem is that if someone gains access to your machine, it is very difficult to protect the data on your hard drive, Schneier said. "That's an extremely hard problem for a lot of reasons, and this is one example of that."
Hardware-based encryption would probably reduce the risk, Halderman said, but he agreed that "it's a difficult problem."
Hard-drive makers Seagate and Hitachi both offer hardware-based disk encryption options with their hard drives, although these options come with a premium price tag.
EMC buys Pi to round out cloud computing unit
Storage giant EMC continues to push into consumer territory: Its latest move is to acquire Pi, a company whose software and services will help users keep track of their personal data.
Seattle-based Pi develops software and online services that let users control how they find, access, share and protect their photos, videos, music and other personal data. The data can be stored online or locally.
The company name stands for personal information, not the number 3.14.
The rapidly growing amount of personal data is what prompted EMC to open its wallet, according to CEO Joe Tucci. It's a cash transaction, but EMC won't disclose the amount.
Pi hasn't actually launched any products or services yet; they are still in beta testing, according to EMC.
EMC sees Pi not only as part of its consumer push, but also an element of its cloud computing strategy, the next big thing in storage, according to one analyst.
"Cloud computing is the next storage hype. It's all about moving storage, back up, and even clock cycles to the net," said Per Sedihn, chief technology officer at Swedish storage integrator Proact.
EMC expects to complete the deal during the first quarter, at which point Pi and its 100 employees will join EMC's newly minted Cloud Infrastructure and Services Division. It already includes Mozy, an online backup service, and Fortress, a platform for cloud-based services. Pi founder and CEO Paul Maritz (who used to be an executive at Microsoft), will join EMC's executive management team as president and general manager of the divsion.
EMC is far from the only company interested in the area. Amazon launched its Simple Storage Service (S3) two years ago; it provides data storage through a Web-services interface.
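For a sense of how simple that interface is from a developer's point of view, here is a minimal sketch of storing and retrieving an object in S3 using the boto3 Python SDK, a modern library used here only for illustration (at the time, S3 was accessed through REST and SOAP calls directly). The bucket and key names are hypothetical, and credentials are assumed to be configured in the environment.

import boto3

s3 = boto3.client("s3")

# Store a small piece of personal data in the cloud...
s3.put_object(Bucket="example-personal-data", Key="notes/todo.txt",
              Body=b"pick up groceries")

# ...and read it back later, from any machine with the same credentials.
obj = s3.get_object(Bucket="example-personal-data", Key="notes/todo.txt")
print(obj["Body"].read().decode())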
Proact's Sedihn also likes Nirvanix, a company that counts Intel among its investors. "They have a very nice user interface," said Sedihn, adding that Google is also waiting in the wings.
"I think cloud services will mainly be used by consumers and smaller companies. But I also expect larger companies to build their internal infrastructure with this model," said Sedihn.