More than half a million Web sites have been compromised in a new round of attacks that hacked domains in order to infect unsuspecting users' PCs with a variety of malware, a security researcher said today.
"This is an on-going campaign, with new domains [hosting the malware] popping up even this morning," said Paul Ferguson, a network architect with anti-virus vendor Trend Micro. "The domains are changing constantly."
According to Ferguson, over half a million legitimate Web sites have been hacked by today's mass-scale attack, only the latest in a string that goes back to at least January. All of the sites, he confirmed, are running "phpBB," an open-source message forum manager.
Ferguson didn't know how the sites were compromised; Trend Micro's investigation is in progress, he said. "We're not sure if it's [because of] improper configuration of phpBB or a vulnerability. Open-source applications like phpBB tend to be targeted quite a bit."
Visitors to a hacked site are redirected through a series of servers, some clearly compromised themselves, until the last in the chain is reached; that server then pings the PC for any one of several vulnerabilities, including bugs in both Microsoft Corp.'s Internet Explorer and RealNetworks Inc.'s RealPlayer media player. If any of the vulnerabilities is present, the PC is exploited and malware is downloaded to it.
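To make the mechanics of that redirect chain concrete, here is a minimal sketch, in Python, of how an analyst might log the hops a hacked page bounces a visitor through. It is not Trend Micro's tooling, the URL is a placeholder, and it only follows plain HTTP redirects; real drive-by chains often use script or iframe redirects that a simple HTTP client will not see.

```python
# Minimal, illustrative redirect tracer (assumes the third-party "requests" library).
import requests

def trace_redirects(url):
    """Fetch a URL, follow HTTP redirects, and return every hop in order."""
    response = requests.get(url, timeout=10, allow_redirects=True)
    hops = [r.url for r in response.history]  # intermediate servers in the chain
    hops.append(response.url)                 # final server that probes the visitor
    return hops

if __name__ == "__main__":
    # Placeholder URL -- substitute a page you are authorized to investigate.
    for hop in trace_redirects("http://example.com/compromised-forum-page"):
        print(hop)
```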
Some of the compromised sites have been hijacked before, said Ferguson. "Some had recently been used for keyword search ranking manipulation, and others to pitch fake pharmaceuticals or just malware," he said.
While other research by Trend Micro identified the malware hitting users' PCs as a variant of the Zlob Trojan horse, Ferguson said that more than just one piece of malware is being served. "We're seeing some new stuff coming out of this one," he said.
The last massive site attack was less than three weeks ago, when sites that included government URLs in the U.K. and some domains operated by the United Nations were hacked. At the time, some researchers said that bugs in Microsoft's SQL Server or Internet Information Services (IIS) server software were to blame. A few days later, however, Microsoft denied responsibility.
Don't expect the run of site infections to stop anytime soon, said Trend Micro's Ferguson. "As long as attacks are tied to site development and as long as sites don't secure their content, we'll see these attacks," he said.
Tuesday, May 13, 2008
HP in talks to buy EDS for up to $13 billion
Hewlett-Packard is close to acquiring IT services company Electronic Data Systems for around $13 billion, according to a report in the Wall Street Journal published on Monday.
The deal could be announced as early as Tuesday, according to the news report, which cited sources close to the matter.
The acquisition could boost HP's services business.
A spokeswoman from HP declined comment.
Google Friend Connect serves up social networking
Google Monday released a preview version of Friend Connect, a service designed to let Web publishers add social networking features to their sites.
Friend Connect, which will be available on the Web at some point on Monday, lets publishers add social networking applications by inserting "a snippet of code" in their sites, Google said.
"We're seeing social capabilities get baked into the infrastructure of the Web. [They're] increasingly not tied to any one site, to any one source of friends, or any one type of application. We see the Web moving towards an end state where people can use any apps on any Web sites with any of their friends," said David Glazer, director of engineering at Google, during a press conference to discuss Friend Connect.
Thus, sites will be able to add features like user registration, friend invitations and message posting, as well as allow visitors to interact with existing friends in social networking sites like Facebook, Google's Orkut, Plaxo and Hi5, according to Google.
"Google Friend Connect is like giving Webmasters a saltshaker full of 'social' that they can sprinkle on their sites to add social capabilities," Glazer said.
Google's move is yet another in a recent string of data-portability efforts at tearing down the walls in social networking sites and letting users export the data and content they have stored in those sites. MySpace and Facebook took steps in that direction with announcements last week.
As the popularity of social networks keeps rising and people set up multiple profiles on such sites, they are demanding the ability to carry their data, content and connections from one site to another, so that they don't have to reenter all that information.
At the same time, Web publishers of all sizes are eager to latch on to the craze by adding social networking features to their sites, now that a critical mass of Internet users have embraced the interaction and sharing that social applications provide.
Friend Connect makes use of open standards for authentication and authorization like OpenID and OAuth, and de facto makes any Web site a potential "container" of social applications built with Google's OpenSocial APIs, Glazer said.
"The entire Web has become a container for OpenSocial apps," he said.
Monday night, Web publishers will be able to sign up to a waiting list to get access to the Friend Connect service, but Google expects to make the service available to anyone within a matter of months, officials said.
Mozilla slates Firefox 3.0 RC1 for late May
Mozilla Corp. announced that it has stopped making changes to the first release candidate of Firefox 3.0 and is working to get that build to users by the end of the month.
"We are code complete for Firefox 3 Release Candidate 1 (RC1)," said Mike Schroepfer, Mozilla's vice president of engineering, in a post to the company's development blog on Saturday. "If all goes well we should have the Release Candidate publicly available in late May."
The release candidate -- typically the final stage before software goes final -- will be pushed to more than 1.2 million users when it launches, Schroepfer said.
It's possible that RC1 will be the one and only release candidate. "The QA cycle for RC1 is more extensive than the betas since this may be our last milestone," Schroepfer said in a message posted to the "mozilla.dev.planning" message forum. However, if serious bugs are uncovered, "we will continue to release new Release Candidates until we are ready for final ship," he said.
Mozilla developers quashed several bugs starting Friday morning to make the Saturday "code freeze" deadline, according to the mozilla.dev.planning forum. Among the fixed flaws was a regression bug that made Firefox 3.0 incorrectly convert characters when loading URLs.
Mozilla issued three release candidates in the run-up to the final code of Firefox 2.0 in 2006; as recently as late March, Schroepfer said that he expected Firefox 3.0 to follow that same pattern.
The open-source developer last updated its under-construction Firefox 3.0 nearly six weeks ago when it released Beta 5 to testers. Days before that, Schroepfer said Mozilla was shooting for an early-May RC1, but warned that the target might slip. "The release candidates will move a little slower than beta," he said in late March, because of the need to account for more public feedback than with earlier builds.
Also in late March, Schroepfer said that the final version of Firefox would likely ship in June. Monday, he said that Mozilla is still on track for a final release by the end of next month.
Firefox currently accounts for about 17.7% of the browser market, according to Net Applications Inc.'s most recent data. Microsoft Corp.'s Internet Explorer retains the browser lead with 74.8%, while Apple Inc.'s Safari holds down third place with 5.8%.
"We are code complete for Firefox 3 Release Candidate 1 (RC1)," said Mike Schroepfer, Mozilla's vice president of engineering, in a post to the company's development blog on Saturday. "If all goes well we should have the Release Candidate publicly available in late May."
The release candidate -- typically the final stage before software goes final -- will be pushed to more than 1.2 million users when it launches, Schroepfer said.
It's possible that RC1 will be the one and only release candidate. "The QA cycle for RC1 is more extensive than the betas since this may be our last milestone," Schroepfer said in a message posted to the "mozilla.dev.planning" message forum. However, if serious bugs are uncovered, "we will continue to release new Release Candidates until we are ready for final ship," he said.
Mozilla developers quashed several bugs starting Friday morning to make the Saturday "code freeze" deadline, according to the mozilla.dev.planning forum. Among the fixed flaws was a regression bug that made Firefox 3.0 incorrectly convert characters when loading URLs.
Mozilla issued three release candidates in the run-up to the final code of Firefox 2.0 in 2006; as recently as late March Schroepfer said that he expected Firefox 3.0 to follow that same pattern.
The open-source developer last updated its under-construction Firefox 3.0 nearly six weeks ago when it released Beta 5 to testers. Days before that, Schroepfer said Mozilla was shooting for an early-May RC1, but warned that that target might slip. "The release candidates will move a little slower than beta," he said in late March, because of the need to account for more public feedback than with earlier builds.
Also in late March, Schroepfer said that the final version of Firefox would likely ship in June. Monday, he said that Mozilla is still on track for a final release by the end of next month.
Firefox currently accounts for about 17.7% of the browser market, according to Net Applications Inc.'s most recent data. Microsoft Corp.'s Internet Explorer retains the browser lead with 74.8%, while Apple Inc.'s Safari holds down third place with 5.8%.
Srizbi grows into world's largest botnet
The prodigious Srizbi botnet has continued to grow and now accounts for up to 50 percent of the spam being filtered by one security company.
If the latest figures from security company Marshal can be taken at face value -- its engines scan much the same traffic as others in the industry do -- then Srizbi is now the biggest single menace on the Internet, dwarfing even the feared and mysterious Storm.
Having compromised 300,000 PCs around the world, the botnet is now sending out an estimated 60 billion spam emails per day pitching "watches, pens, male enlargement pills," a torrent that consumes huge amounts of processing power to keep in check.
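A quick back-of-envelope check shows just how modest the per-machine load is, which helps explain how such volume can go unnoticed by individual victims; the figures below are simply Marshal's numbers as reported above.

```python
# Back-of-envelope math on Marshal's reported figures.
TOTAL_SPAM_PER_DAY = 60_000_000_000   # 60 billion messages a day
COMPROMISED_PCS = 300_000             # size of the botnet

per_bot_per_day = TOTAL_SPAM_PER_DAY / COMPROMISED_PCS
per_bot_per_second = per_bot_per_day / (24 * 60 * 60)

print(f"{per_bot_per_day:,.0f} messages per bot per day")       # 200,000
print(f"{per_bot_per_second:.1f} messages per bot per second")  # about 2.3
```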
"Srizbi is the single greatest spam threat we have ever seen. At its peak, the highly publicized Storm botnet only accounted for 20 percent of spam. Srizbi now produces more spam than all the other botnets combined." said Marshal's Bradley Anstis.
In March of this year, Marshall's Threat Research and Content Engineering team (TRACE) reported the botnet as a growing problem among a small family of super-botnets, a sign that a few highly-successful bots were starting to monopolize traffic.
If it's growing, what is it about this botnet that has made it so successful? Srizbi appears to spread by as part of the spam messages it sends, meaning that its lifecycle extends to reproducing itself and not just distributing email. This is not a unique feature, but it could be that it is either evading detection at this stage or tricking people using more sophisticated social engineering.
What makes Srizbi slightly baffling is that botnet controllers like bots to stay away for the headlines. At the point they become as large as Srizbi has become, the chances of them being detected and countered increases. It's possible that Srizbi has been more successful that its creators expected.
If there's hope, it's in the fate of the infamous Storm, which appeared in early 2007, and became the malware phenomenon of that year. Marshall's figures suggest it now accounts for less than 1 percent of spam traffic, which suggests that Sribzi will one day go the same way. However, by the time that this happens, it is also possible that a new super-botnet will have taken its place.
"Microsoft recently announced its success combating the Storm botnet with their Malicious Software Removal Tool (MSRT). The challenge now is for the security industry to collectively turn its sights on Srizbi and the other major botnets. We look forward to seeing Microsoft target Srizbi with MSRT in the near future," said Marshal's Anstis.
Hackers create their own social network
Hackers now have their own social network, backed by GnuCitizen, a high-profile "ethical hacking" group.
The network, called House of Hackers, has signed up more than 1,000 members since its launch earlier this week, according to the site.
GnuCitizen set up the network in order to promote collaboration among security researchers. The site's founders said they use "hacker" in the complimentary sense.
The term "should all express admiration for the work of the most skilled, creative, clever, unique, provocative, intelligent, intense, intriguing and interesting people among the human society," said GnuCitizen in a message on the House of Hackers website.
"From our perspective, a hacker is a person people express admiration for his/her work, skills, creative edge, cleverness, uniqueness, intelligence, etc," said GnuCitizen founder Petko D. Petkov in a blog post.
"We do not promote criminal activities. The network is designed to enable its members to exchange ideas with each other, communicate, form groups, elite circles and tiger/red teams, conglomerate around projects and participate in a hacker recruitment market."
Petkov said the ability to create groups on the network could be useful for setting up ad-hoc penetration testing teams. He suggested organizers could use the site's events features to test the water for planned events.
GnuCitizen is encouraging businesses to use the site to seek out security researchers for jobs or particular projects.
The network is built on Ning, a site allowing the creation of ad-hoc social networks, and programmers can create customized add-ons using the Google-backed OpenSocial API, meaning the add-ons are reusable on other sites.
GnuCitizen was founded in 2005 and has been credited with some high-profile security research of late, including vulnerabilities involving SNMP and BT Home Hub Wi-Fi routers.
BlackBerry Bold beats iPhone to 3G
Amid swirling rumors about the impending announcement of a 3G iPhone, Research in Motion today introduced its slickest, speediest, most powerful, and most connected BlackBerry to date: the BlackBerry Bold 9000.
Equipped with support for tri-band HSDPA and quad-band EDGE (which means that it will support the highest-speed GSM-family data networks wherever they are available worldwide), 802.11a/b/g Wi-Fi, stereo Bluetooth, and both assisted and autonomous GPS, the Bold could prove a formidable challenger to Apple's next-gen iPhone on connectivity alone.
It even looks a bit iPhone-esque, with its glassy display area, generally flat profile, and rounded corners. Still, the Bold comes configured with a hardware QWERTY keyboard, and it retains the general dimensions of its predecessors, so it's much shorter and somewhat thicker than the iPhone.
The Bold's removable back is covered in black leatherette, and you'll be able to personalize the device by buying replacement backs in different colors (blue, brown, green, gray, and red).
The redesigned keyboard has guitar-inspired frets--thin metal strips--between each row. The keys themselves are sculpted to help users avoid fingertip slippage. The device also carries a 2-megapixel camera capable of up to 5X digital zoom.
Fast CPU, High-Res Display
The Bold's 624-MHz StrongARM processor with full MMX (multimedia extensions) is the most powerful CPU on a handheld to date (the BlackBerry Curve, in contrast, uses a 312-MHz chip without MMX). The Bold's extra power enables the device to handle full-motion video on its 480-by-320-pixel, 65,000-plus-color display (that resolution is double the Curve's at basically the same screen size): In a demo at PC World's offices last week, video clips on the Bold looked smooth and exceptionally sharp.
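The "double" claim is easy to verify with a little arithmetic; the Bold's 480-by-320 figure comes from the article above, while the Curve's 320-by-240 display is an assumption based on published Curve 8300-series specifications.

```python
# Pixel-count comparison (the Curve resolution is an assumed spec, not from the article).
bold_pixels = 480 * 320    # 153,600 pixels
curve_pixels = 320 * 240   #  76,800 pixels
print(bold_pixels / curve_pixels)  # 2.0 -- twice the pixel count at roughly the same screen size
```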
Of course, little commercial video content is available as yet for non-Apple media players. Further, the Bold's screen is diminutive compared to the current iPhone's roomy 3.5-inch display, and it isn't a touch screen. (RIM president and co-CEO Mike Lazaridis simply smiled when we asked about reports that the company is working on a touch-screen BlackBerry.)
But since the Bold's smaller display holds the same number of pixels as the current iPhone's, images look much higher-res on it than on its competitor.
The Bold's 1GB of on-board secure memory (on top of its 128MB of flash) will appeal to BlackBerry's core enterprise community, providing storage for items that companies would rather not make available for transport on a micro SD card. But users who want to carry their music and video libraries on their handsets will be able to do so via micro SD.
Carriers will determine pricing, and RIM had no details on which U.S. carrier will introduce the Bold (though AT&T, with the largest HSDPA network in the United States, seems a likelier candidate than T-Mobile, which has just begun to roll out 3G service stateside). RIM said that it expects the Bold to be shipping worldwide this summer.
AMD refreshes low-power Quad-Core Opterons lineup
Advanced Micro Devices is shipping B3 versions of its low-power Quad-Core Opteron processors.
AMD first detailed these processors in September 2007, when it unveiled the Quad-Core Opteron processor. However, earlier versions of the chips were affected by a bug discovered in December that reportedly forced AMD to suspend some processor shipments. The B3 version of the chips announced Monday fixed that bug.
The five chips run at clock speeds ranging from 1.7GHz to 1.9GHz. Three of the chips -- the 2344 HE, 2346 HE, and 2347 HE -- are designed for servers with two processors, while the other two -- the 8346 HE and 8347 HE -- can be used in servers with four or eight processors. They are priced from US$255 to $873 in 1,000-unit quantities, a standard way of quoting chip prices.
The low-power Quad-Core Opteron chips have an average power consumption of 55 watts, AMD said.
MSI's upcoming Wind laptop priced from $560
Taiwanese hardware maker Micro-Star International's upcoming Wind laptop can be preordered starting from US$560.
The Wind, which is expected to use Intel's upcoming 1.6GHz Atom N270 processor, is just one of an expected flood of low-cost systems based on the new chip that will be on show at the Computex exhibition in Taipei during June.
While the Atom processor has yet to be released by Intel, online retailer Expansys has begun accepting orders for the U.K. version of the Wind running either Windows XP Home for $604 or Linux for $560. The laptops are available in three colors: white, black and pink.
The same laptops are priced at £350 ($684) and £320, respectively, on Expansys' U.K. Web site.
The Wind systems available for preorder on Expansys have a 10-inch screen with a resolution of 1,024 pixels by 768 pixels and an LED (light-emitting diode) backlight, which helps lower power consumption. The system, which weighs roughly 1 kilogram (2.2 pounds), also ships with 1G byte of memory, Wi-Fi, Bluetooth, a 1.3-megapixel camera, and an 80G-byte hard drive.
Expansys did not list pricing for a planned version of the Wind that has an 8.9-inch screen.
Powerset unveils test version of Google-killer
The public will get its first chance Monday to test a search engine from start-up Powerset that eschews conventional keyword technology and instead is designed to understand the meaning of Web pages.
As such, Powerset's search engine holds the promise of fundamentally changing people's expectations for search engines by, in theory, offering a smarter, more efficient experience.
However, Powerset's beta version, while delivering impressive results, has a limited scope and index, leaving unanswered questions about its ability to work its magic at the massive scale of Google's keyword-based search engine.
"We're changing the way information is searched by doing a much deeper analysis of the pages we index," said Scott Prevost, Powerset's product director.
Keyword engines treat pages as word bags, indexing their content without grasping its meaning, he said. Meanwhile, Powerset's engine, applying technology developed in-house as well as licensed from Xerox's PARC subsidiary, creates a semantic representation by parsing each sentence and extracting its meaning. "Meaning is what we index," he said.
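The difference Prevost describes can be illustrated with a deliberately tiny sketch: a keyword engine stores which words appear in a document, while a semantic engine stores who did what to whom. This is a toy contrast only, with a naive three-word "parse"; it is not Powerset's or PARC's actual technology.

```python
# Toy contrast between bag-of-words indexing and a minimal "semantic" record.
from collections import defaultdict

sentence = "Powerset indexes meaning"

# Keyword approach: every word points back to the document; order and
# relationships between words are thrown away.
inverted_index = defaultdict(set)
for word in sentence.lower().split():
    inverted_index[word].add("doc-1")

# "Semantic" approach (naively simplified): keep a subject-verb-object triple,
# the kind of structure a real linguistic parser would extract and index.
subject, verb, obj = sentence.split()
fact_index = {"doc-1": (subject, verb, obj)}

print(dict(inverted_index))  # {'powerset': {'doc-1'}, 'indexes': {'doc-1'}, 'meaning': {'doc-1'}}
print(fact_index)            # {'doc-1': ('Powerset', 'indexes', 'meaning')}
```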
In an interview in October with IDG News Service, Marissa Mayer, Google's vice president of Search Products & User Experience, acknowledged that the company's search engine should -- and will -- overcome its keyword dependence in time.
"People should be able to ask questions and we should understand their meaning, or they should be able to talk about things at a conceptual level. We see a lot of concept-based questions -- not about what words will appear on the page but more like 'what is this about?'. A lot of people will turn to things like the semantic Web as a possible answer to that," she said.
But she added that Google's search engine acts smart thanks to the humongous amount of data it crunches. "With a lot of data, you ultimately see things that seem intelligent even though they're done through brute force," she said. As examples, she cited a query like "GM," which the engine interprets as "General Motors" but if the query is "GM foods," it delivers results for "genetically-modified foods." "Because we're processing so much data, we have a lot of context around things like acronyms. Suddenly, the search engine seems smart, like it achieved that semantic understanding, but it hasn't really," she said.
For now, Powerset's index is very limited, consisting only of millions of pages from Wikipedia and Metaweb Technologies' Freebase, a Web-based structured database of information. However, Prevost vows that the index will begin growing within a month after its launch and eventually rival in size those of Google, Yahoo and others. "Our technology fully scales," he said.
Still, it's impressive to see Powerset's search engine in action and the promise it holds. Instead of returning the proverbial 10 blue links for search results, Powerset can do more, such as assembling a collection of facts related to the query, as well as summarize the found information. It can also provide direct answers to factual questions.
Because the content from Wikipedia and Freebase can be re-published, Powerset can remain relevant after a user clicks through to a search result, by providing an outline to navigate through the page and a summary of facts. This, of course, isn't something that Powerset could do with copyrighted content, but the company will seek partnerships with publishers to obtain permission, Prevost said. "We think it'll be a situation where publishers will want their content to be served up in this way," he said.
Industry analyst Greg Sterling of Sterling Market Intelligence calls Powerset's capabilities "impressive" and particularly likes its search results interface. "What they've created is both a better search engine for Wikipedia and a massive 'proof of concept' for their algorithm and technology," he said in an e-mail interview.
Now Powerset has to prove that its search engine can scale and deliver against an index of billions upon billions of Web pages while serving millions of concurrent end users. "There's certainly potential there to build a better mousetrap, it would appear. But bringing what Powerset has done for Wikipedia to the entire Internet seems an enormous challenge that will take both time and lots of additional resources," Sterling said.
Prevost acknowledges that to do this type of deep processing takes a lot of computational power, although once indexed, retrieving pages' information doesn't pose any special challenge.
Powerset also faces the challenges of a start-up technology company, such as generating revenue and going through growing pains. The company has already had some management upheaval, announcing in November the departure of co-founder and Chief Operating Officer Steve Newcomb and its search for a CEO, as co-founder Barney Pell gave up that post to become chief technology officer. "The CEO search is still in process, but we have a strong internal management structure and board of directors," he said.
Prevost said the company's investors are committed to the company and to seeing that it has the resources necessary to scale up the search engine to the level of those with indexes of 20 billion pages.
Powerset's business model is based on advertising, although the search engine will not serve up ads from the beginning. "There's a lot of cool stuff we can do in the ad space by matching the meaning of queries to the relevance of ads, but that's much more longer term," he said.
The search engine will be limited to Web search at first, although Powerset has contemplated adding specialty engines for things like images and video later, as well as targeting verticals such as health, product reviews and travel, he said.
"We've only shown the tip of the iceberg in language analysis," he said.
China earthquake takes out mobile network in Chengdu
An earthquake registering 7.8 on the Richter Scale knocked out mobile phone service in the western Chinese city of Chengdu, although fixed-line networks remained in service, Chinese state television reported Monday afternoon.
About 2,300 base stations were affected by power outages or transmission problems, China Mobile's Sichuan office told the state-run Xinhua News Agency, adding that repairs were under way. China Mobile is the nation's -- and the world's -- largest mobile service provider.
Service was affected both in southwestern Sichuan province and in neighboring Shaanxi province to the northeast, Xinhua reported. China Mobile also said that call volume had risen to 10 times the normal level, but connections were down by half as a result of the earthquake.
China's online video sites were quick to receive footage shot during the earthquake by users, footage that did not appear on CCTV's nightly newscast, which is carried by most major channels. One clip, labeled "Chengdu Earthquake," showed students in a classroom or dormitory room hiding under their desks, as debris falls from the ceiling. "Don't move, don't move, it's ok," the photographer says to a student who emerges from cover too quickly. Footage from Chengdu would also seem to confirm the availability of Internet service there.
The semiconductor industry and China's growing software outsourcing industry take advantage of Chengdu's status as China's fifth-largest city and southwest China's largest academic center.
Although the Chengdu region is not considered a major manufacturing center for semiconductors, Intel began semiconductor manufacturing there in 2005, and employs 600 at a testing and assembly facility in Chengdu.
"We are now determining if this has implications for Intel's operation in Chengdu. Our first priority is the safety of our people," said Danny Cheung, an Intel spokesman based in Singapore, in an e-mail.
Semiconductor Manufacturing International (SMIC) also operates a testing and assembly facility there, according to its Web site. Sources said that SMIC evacuated a fabrication plant and halted production as a result of the quake.
The earthquake occurred at 2:28 p.m. Beijing local time. The State Seismological Bureau (SSB) originally reported that the quake registered 7.6 on the Richter Scale, but later upgraded it to 7.8. The epicenter was approximately 55 kilometers (33 miles) northwest of Chengdu in Wenchuan County. Shaking lasted for approximately one minute, dislodging lights from ceiling fixtures and knocking over water coolers, a reporter told CCTV.
CCTV did not report aftershocks, but the U.S. Geological Survey's Web site reported at least 10 by 8:45 p.m. Beijing local time. The quake was felt as far away as coastal Zhejiang province and Beijing. Beijing experienced a separate magnitude-3.9 earthquake at 2:35 p.m., the SSB confirmed.
CCTV's first pictures of the event, broadcast at 4:23 p.m. Beijing time, showed people talking on mobile handsets, although it is not known which networks they were using at the time. They showed traffic moving in the street, and a woman with her head bleeding getting into a car. Footage broadcast during the nightly newscast showed visible cracks in some residential buildings, but no collapsed structures or pictures of people injured or killed by the earthquake.
The strength of Monday's 7.8 earthquake equals that of China's most famous temblor in modern history, a July 1976 event in Tangshan, east of Beijing. Estimated deaths for the Tangshan earthquake range from over 200,000 to more than 700,000.
Early reports put Monday's confirmed death toll at 107, but by the end of the day 8,533 people were confirmed dead as a result of the earthquake, and as many as 900 children may be buried at a high school in an unspecified location, according to the state-run Xinhua News Agency.
(Sumner Lemon in Singapore contributed to this report.)
Vint Cerf supports municipal broadband networks
Municipal broadband networks could help boost the availability of high-speed Internet access and even help to ensure Net neutrality in the U.S., said Vint Cerf, vice president and chief Internet evangelist at Google.
Cerf, known as one of the fathers of the Internet for his role in creating its basic architecture, spoke at a lunch in Seattle, a city that is investigating the possibility of building its own broadband network. Seattle would follow its southern neighbor Tacoma, which has been operating its own fiber network for several years.
Cerf disputed arguments that operators sometimes give for why they should be able to limit or block bandwidth-hungry applications on their networks, and suggested that, since the technical facts don't back up those arguments, people should be able to build their own networks to meet their needs.
"Many people raise the issue that video use on the Net is somehow going to drive it into congestion," he said. While in certain scenarios that could be true, the reality is that increasing the throughput solves the problem, he said.
A person could transfer an hour's worth of video over a gigabit channel in about 16 seconds, he said. That means that rather than streaming video, which is indeed taxing on the Internet, users would download it instead. "It's much easier on the network, and people have more than enough storage to download," he said.
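The arithmetic behind that figure is straightforward if one assumes an hour of standard-definition video is roughly a 2GB file, which is an assumption about typical encoding rates of the time rather than a number Cerf gave.

```python
# Rough check of the "16 seconds over a gigabit channel" example.
video_size_bits = 2 * 8 * 10**9       # ~2 GB of video expressed in bits (assumed file size)
channel_bits_per_second = 10**9       # 1 gigabit per second
print(video_size_bits / channel_bits_per_second)  # 16.0 seconds
```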
Some operators also talk about the capacity of the Internet backbone itself. "As for running out of capacity, we've barely touched the surface of the fiber capacity. We are far from having exhausted this capacity," he said.
Operators may simply not want to invest in their networks to bring higher bandwidth to users, he said. "That comes back to the municipal argument. Citizens that want the capacity should be able to decide among themselves to put the resources in place to get that kind of capacity," he said.
Some operators contend that municipal networks create competition between the government and private companies. "That's nonsense," Cerf said. Governments would contract with the private sector to build the network and maybe even operate it, he said, so the two would be partners. In Tacoma the city maintains the network, but other companies serve as ISPs (Internet service providers), selling access to end-users.
Cerf's comments come as U.S. lawmakers this week introduced a bill that would expose broadband providers to antitrust liability if they block or slow Internet traffic. Some lawmakers and operators argue that such legislation is unnecessary and would slow investment in broadband networks. The bill follows discussions across the industry and by government leaders about practices at Comcast, which says it has slowed some customers' access to the BitTorrent peer-to-peer protocol during times of network congestion.
Cerf has been a vocal opponent of operators that limit access to certain applications. "I still think it's not a bad idea to have legislation that says don't discriminate unfairly simply because you happen to have control over this shared resource," he said on Friday.
Hackers find a new place to hide rootkits
Security researchers have developed a new type of malicious rootkit software that conceals itself in an obscure part of a computer's microprocessor, out of view of current antivirus products.
Called a System Management Mode (SMM) rootkit, the software runs in a protected part of a computer's memory that can be locked and rendered invisible to the operating system, but which can give attackers a picture of what's happening in a computer's memory.
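The "locked" region in question is SMRAM, which the chipset can seal off once the firmware has finished setting it up. As a rough illustration of what that lock looks like in practice, the sketch below reads the SMRAM control register out of the host bridge's PCI configuration space on Linux; the 0x9D offset and bit layout are assumptions drawn from Intel desktop chipsets of that era and vary by platform, so treat this as an illustrative sketch rather than a general-purpose tool.

```python
# Illustrative sketch: check whether SMRAM is locked (D_LCK set) by
# reading the host bridge's SMRAMC register from PCI config space on
# Linux. The 0x9D offset and bit positions are assumptions based on
# mid-2000s Intel chipsets; other chipsets place the register elsewhere.
# Reading past the first 64 bytes of config space requires root.

HOST_BRIDGE_CONFIG = "/sys/bus/pci/devices/0000:00:00.0/config"
SMRAMC_OFFSET = 0x9D   # assumed location of the SMRAM control register
D_LCK = 1 << 4         # once set, SMRAMC is read-only until reset
D_OPEN = 1 << 6        # when set, SMRAM is visible to ordinary code

with open(HOST_BRIDGE_CONFIG, "rb") as config_space:
    config_space.seek(SMRAMC_OFFSET)
    smramc = config_space.read(1)[0]

print(f"SMRAMC = {smramc:#04x}")
print("SMRAM locked (D_LCK):", bool(smramc & D_LCK))
print("SMRAM open  (D_OPEN):", bool(smramc & D_OPEN))
```

In broad terms, whether that lock bit has been set before any untrusted code runs determines whether an attacker can install a handler of their own; machines that leave SMRAM unlocked are the natural targets for this kind of rootkit.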
The SMM rootkit comes with keylogging and communications software and could be used to steal sensitive information from a victim's computer. It was built by Shawn Embleton and Sherri Sparks, who run an Oviedo, Florida, security company called Clear Hat Consulting.
The proof-of-concept software will be demonstrated publicly for the first time at the Black Hat security conference in Las Vegas this August.
The rootkits used by cyber crooks today are sneaky programs designed to cover up their tracks while they run in order to avoid detection. Rootkits hit the mainstream in late 2005 when Sony BMG Music used rootkit techniques to hide its copy protection software. The music company was ultimately forced to recall millions of CDs amid the ensuing scandal.
In recent years, however, researchers have been looking at ways to run rootkits outside of the operating system, where they are much harder to detect. For example, two years ago researcher Joanna Rutkowska introduced a rootkit called Blue Pill, which used AMD's chip-level virtualization technology to hide itself. She said the technology could eventually be used to create "100 percent undetectable malware."
"Rootkits are going more and more toward the hardware," said Sparks, who wrote another rootkit three years ago called Shadow Walker. "The deeper into the system you go, the more power you have and the harder it is to detect you."
Blue Pill took advantage of new virtualization technologies that are now being added to microprocessors, but the SMM rootkit uses a feature that has been around for much longer and can be found in many more machines. SMM dates back to Intel's 386 processors, where it was added as a way to help hardware vendors fix bugs in their products using software. The technology is also used to manage the computer's power, putting it into sleep mode, for example.
In many ways, an SMM rootkit, running in a locked part of memory, would be more difficult to detect than Blue Pill, said John Heasman, director of research with NGS Software, a security consulting firm. "An SMM rootkit has major ramifications for things like [antivirus software products]," he said. "They will be blind to it."
Researchers have suspected for several years that malicious software could be written to run in SMM. In 2006, researcher Loic Duflot demonstrated how SMM malware would work. "Duflot wrote a small SMM handler that compromised the security model of the OS," Embleton said. "We took the idea further by writing a more complex SMM handler that incorporated rootkit-like techniques."
In addition to a debugger, Sparks and Embleton had to write driver code in hard-to-use assembly language to make their rootkit work. "Debugging it was the hardest thing," Sparks said.
Being divorced from the operating system makes the SMM rootkit stealthy, but it also means that hackers have to write this driver code expressly for the system they are attacking.
"I don’t see it as a widespread threat, because it's very hardware-dependent," Sparks said. "You would see this in a targeted attack."
But will it be 100 percent undetectable? Sparks says no. "I'm not saying it's undetectable, but I do think it would be difficult to detect." She and Embleton will talk more about detection techniques during their Black Hat session, she said.
Brand new rootkits don't come along every day, Heasman said. "It will be one of the most interesting, if not the most interesting, at Black Hat this year," he said.
FBI worried as DoD sold counterfeit networking gear
The U.S. Federal Bureau of Investigation is taking the issue of counterfeit Cisco equipment very seriously, according to a leaked FBI presentation that underscores problems in the Cisco supply chain.
The presentation gives an overview of the FBI Cyber Division's effort to crack down on counterfeit network hardware, the FBI said Friday in a statement. "It was never intended for broad distribution across the Internet."
In late February the FBI broke up a counterfeit distribution network, seizing an estimated US$3.5 million worth of components manufactured in China. This two-year FBI effort, called Operation Cisco Raider, involved 15 investigations run out of nine FBI field offices.
According to the FBI presentation, the fake Cisco routers, switches and cards were sold to the U.S. Navy, the U.S. Marine Corps, the U.S. Air Force, the U.S. Federal Aviation Administration, and even the FBI itself.
One slide refers to the problem as a "critical infrastructure threat."
The U.S. Department of Defense is taking the issue seriously. Since 2007, the Defense Advanced Research Projects Agency has funded a program called Trust in IC, which does research in this area.
Last month, researcher Samuel King demonstrated how it was possible to alter a computer chip to give attackers virtually undetectable back-door access to a computer system.
King, an assistant professor in the University of Illinois at Urbana-Champaign's computer science department, has argued that by tampering with equipment, spies could open up a back door to sensitive military systems.
In an interview on Friday, he said the slides show that this is clearly something that has the FBI worried.
The Department of Defense is concerned, too. In 2005, its Defense Science Board cited concerns over just such an attack in a report.
Cisco believes the counterfeiting is being done to make money. The company investigates and tests counterfeit equipment it finds and has never found a "back door" in any counterfeit hardware or software, said spokesman John Noh. "Cisco is working with law enforcement agencies around the world on this issue."
The company monitors its channel partners and will take action, including termination of a contract, if it finds a partner selling counterfeit equipment, he said. "Cisco Brand Protection coordinates and collaborates with our sales organizations, including government sales, across the world, and it's a very tight integration."
The best way for channel partners and customers to avoid counterfeit products is to buy only from authorized channel partners and distributors, Noh said. They have the right to demand written proof that a seller is authorized.
The FBI doesn't seem satisfied with this advice, however. According to the presentation, Cisco's gold and silver partners have purchased counterfeit equipment and sold it to the government and defense contractors.
Security researcher King believes that the government is better off focusing on detection rather than trying to secure the IT supply chain, because there are strong economic incentives to keep it open and flexible -- even if this means there may be security problems. "There are so many good reasons for this global supply chain; I just think there's no way we can secure it."