Saturday, January 12, 2008

Choosing the right virtualization technology for your environment

Fewer and fewer IT organizations are asking whether they should virtualize systems. The focus is now on how they should leverage virtualization in their environment. The maturation of virtualization solutions in the x86 and UNIX realms has opened the door to an endless array of choices. More choices offer organizations greater flexibility, but they can also introduce confusion and complexity. Every virtualization technology operates in a slightly different manner, and every IT environment is vastly different, with its own unique operating patterns, technical composition, and business constraints. Because of this, there will probably never be one ideal virtualization technology for every IT scenario, so it's better to focus resources on choosing the right technology for a specific situation.

Following are six factors to consider when evaluating virtualization software.

1. Mobility and Motioning
Motioning enables running applications to move between physical servers without disruption. Offered by VMware's VMotion, XenMotion, and IBM P6 LPARs, motioning has the potential to transform capacity management. However, it's not without its problems. Motioning can introduce volatility and create vexing challenges for management groups tasked with incident management and compliance. To gauge whether motioning is a good option in a given environment, organizations first need to analyze maintenance windows, the consistency of workload patterns, and disaster recovery strategies.

Maintenance windows - When applications are combined on a single physical platform, their maintenance windows become intermingled. This can easily create scenarios where no window of time is available for hardware maintenance. The same problem arises with software freezes. The ability to motion virtual machines can alleviate this by allowing workloads to be moved off a host so it can be taken down for scheduled maintenance or software updates. Without motioning in place, the proper initial placement of applications on virtual hosts becomes extremely important. In either case, making the right placement decisions is critical, since the mere act of motioning may itself constitute a change that violates a software freeze.

Consistency of workload patterns - The advantages of motioning may vary widely depending on the level of volatility in workload patterns. Motioning can be very useful for leveraging spare capacity when workloads are highly volatile, but those benefits diminish in low-volatility scenarios.

Organizations can analyze ideal placements based on the variation in utilization patterns over a day, a week, or both. If patterns do not vary widely from day to day, a static placement may be sufficient and the volatility of motioning avoided. If the patterns differ significantly from day to day, a more dynamic solution is warranted.
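
As a rough illustration of that analysis, the following sketch (in Python, with invented utilization samples and an arbitrary volatility threshold) compares day-to-day averages to suggest static versus dynamic placement. It is a simplified model, not a vendor capacity-planning tool.

# Hypothetical sketch: decide whether a static VM placement is "good enough"
# based on how much daily CPU utilization averages move from day to day.
# The threshold and sample data are illustrative assumptions.

from statistics import mean, pstdev

def daily_averages(samples_by_day):
    """samples_by_day: e.g. {'mon': [hourly CPU %], 'tue': [...], ...}"""
    return {day: mean(vals) for day, vals in samples_by_day.items()}

def placement_recommendation(samples_by_day, volatility_threshold=10.0):
    averages = daily_averages(samples_by_day)
    spread = pstdev(averages.values())  # day-to-day volatility of the averages
    if spread <= volatility_threshold:
        return "static placement is likely sufficient (low day-to-day volatility)"
    return "consider dynamic placement and motioning (high day-to-day volatility)"

if __name__ == "__main__":
    web_server = {
        "mon": [20, 35, 60, 55], "tue": [22, 33, 58, 52],
        "wed": [21, 36, 61, 57], "thu": [70, 80, 90, 85],  # one spike day
    }
    print(placement_recommendation(web_server))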

Disaster recovery strategy - If application-level replication or hot spares are part of the disaster recovery plan, motioning may undermine those efforts. For example, one might inadvertently place a production server in the same locale as its disaster recovery counterpart. To avoid such pitfalls, organizations should undertake a detailed analysis of disaster recovery strategies and roles, cluster strategies and roles, and replication structures.

2. Overhead and Scalability
There are numerous aspects of an operational model that may affect the success of virtualization, including the way I/O is handled, the maximum number of CPUs per VM, and the way vendors license their software on the platform. Organizations can address these overhead and scalability concerns by considering the following factors.

I/O rates - Software components that are I/O intensive, such as database servers, may be better suited to virtualization technologies that don't rely on virtual device drivers. These drivers "tax" the CPUs with every I/O transaction they perform, causing the system to hit its limits sooner than it otherwise would. Techniques such as VMware's raw device mapping also provide higher efficiency in this area, but using such features prevents motioning.

To determine the best approach, organizations can use a strategy-specific overhead model that adds up CPU utilization numbers based on the I/O activity on the physical servers. This is an easy way to catch any workload types that are unsuitable for a given virtualization solution.
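
A minimal sketch of such an overhead model follows (Python). The per-IOPS and per-megabit CPU "tax" figures are placeholder assumptions; real numbers would come from benchmarking the chosen hypervisor on representative hardware.

# Hypothetical overhead model: estimate the extra CPU cost of virtualized I/O.
# The per-operation "tax" constants are made-up placeholders, not measurements.

CPU_TAX_PER_1K_DISK_IOPS = 1.5   # % of one CPU per 1,000 disk IOPS (assumed)
CPU_TAX_PER_100_MBIT_NET = 2.0   # % of one CPU per 100 Mbit/s of traffic (assumed)

def virtualization_cpu_overhead(disk_iops, net_mbit_per_sec):
    """Return estimated extra CPU utilization (%) added by virtual device drivers."""
    disk_tax = (disk_iops / 1000.0) * CPU_TAX_PER_1K_DISK_IOPS
    net_tax = (net_mbit_per_sec / 100.0) * CPU_TAX_PER_100_MBIT_NET
    return disk_tax + net_tax

def flag_unsuitable_workloads(servers, headroom_pct=20.0):
    """servers: list of dicts with measured CPU %, disk IOPS and network Mbit/s."""
    flagged = []
    for s in servers:
        projected = s["cpu_pct"] + virtualization_cpu_overhead(s["disk_iops"], s["net_mbps"])
        if projected > 100.0 - headroom_pct:
            flagged.append((s["name"], round(projected, 1)))
    return flagged

if __name__ == "__main__":
    fleet = [
        {"name": "db01", "cpu_pct": 55.0, "disk_iops": 12000, "net_mbps": 400},
        {"name": "web01", "cpu_pct": 30.0, "disk_iops": 800, "net_mbps": 150},
    ]
    print(flag_unsuitable_workloads(fleet))  # db01 is flagged in this example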

Non-compute intensive applications - Theoretically, it is possible to place many non-compute intensive applications together on a virtual host. However, there are many factors that may limit the scalability of this scenario. Moreover, pinpointing which factors are constraining the environment can be tricky.

The first step is to apply a CPU "quantization" model. If a technology uses a rigid virtual-CPU-per-physical-CPU model, the number of virtual systems is limited by the number of physical CPUs. This issue is fading as fractional models that allow allocation of partial physical CPUs become available, but it is still prudent to be aware of the constraint to prevent unpleasant surprises.

Memory is a more complex part of the equation. Applications that aren't doing very much will often utilize the same amount of memory as similar applications that are more active. Combining even a small number of these applications can quickly tax the memory capacity of the target system while making very little impact on CPU utilization.
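
To make the CPU quantization and memory constraints concrete, here is a hypothetical packing check (Python). The host size, per-VM figures, and the 25 percent fractional CPU share are assumptions chosen purely for illustration.

# Hypothetical packing check for low-activity VMs: even when CPU is plentiful,
# rigid vCPU quantization or memory footprints can cap how many guests fit.

def max_guests(host_cores, host_mem_gb, vm_vcpus, vm_mem_gb,
               fractional_cpu=False, cpu_share=0.25):
    # CPU limit: a rigid model pins each vCPU to a physical core;
    # a fractional model lets a vCPU consume only a slice of a core.
    core_cost = vm_vcpus * (cpu_share if fractional_cpu else 1.0)
    cpu_limit = int(host_cores / core_cost)
    # Memory limit: idle guests still hold their full memory footprint.
    mem_limit = int(host_mem_gb / vm_mem_gb)
    limit = min(cpu_limit, mem_limit)
    constraint = "CPU quantization" if cpu_limit < mem_limit else "memory"
    return limit, constraint

if __name__ == "__main__":
    # Assumed 16-core, 64 GB host packed with mostly idle 1-vCPU, 2 GB guests.
    print(max_guests(16, 64, 1, 2))                        # (16, 'CPU quantization')
    print(max_guests(16, 64, 1, 2, fractional_cpu=True))   # (32, 'memory')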

The scalability of the underlying architecture also complicates matters. Some architectures buckle when running too many images, regardless of what those images are doing. Others leverage robust backplane interconnects, caching models, and high context-switching capacity to host a large number of virtual machines without compromising reliability. To determine whether moving to "fat nodes" makes sense, organizations should factor in platform scalability and the workload blend.

3. Software Licensing Models
Some applications are not supported on specific virtualization technologies. Even where support isn't an issue, software licensing models may play a major role in the ROI gained from virtualization. If, for example, applications are licensed per physical server, the potential gains from virtualization shrink drastically. This leaves businesses seeking a physical configuration that will support the workload, typically requiring abandonment of vertically scaled infrastructures in favor of smaller, commoditized servers deployed in a horizontally scaled fashion.

4. Security
Organizations often have little guidance for ensuring security in a physical-to-virtual transition, as there are no published guidebooks or best practices for securing a virtual environment. The following are key considerations to minimize risk.

Security zones - Mixing security zones in a virtual environment is a bad idea, since most virtualization technologies don't provide for strong enough security isolation models. For example, it wouldn't make sense to place systems that are connected to sensitive internal networks on the same physical host as those connected to a DMZ.

In addition, many virtualization solutions have administrator-level roles that allow viewing of the disk images of all the virtual machines on a host. This introduces huge vulnerabilities by allowing sensitive security zones to be bridged. The problem is exacerbated by virtualization solutions with internal network switches that control traffic between VMs on the same physical host. These switches let virtual systems completely bypass the established port-level firewall filters, deep packet sniffers, and QoS rules governing traffic in the environment, opening it up to threats that network-level security tools cannot detect.

Information privacy - Many virtualization technologies allow access to the information stored in offline virtual images simply by mounting them as disk images. While this is convenient, it also introduces considerable liability, since anyone who walks off with the underlying storage can read its contents. It also makes it all the more important to be careful when virtualizing any application that leaves residual data in temp files or other local storage.

5. Financial Differences
Savvy businesses run "what if" scenarios to determine the most lucrative virtualization solution for their environment, taking into consideration licensing costs, implementation expenses, and hardware/software savings. The cost of implementation, for example, varies widely based on the cost of transitioning to next-generation servers and storage, application engineering, and so on. If application-level changes are required, costs can skyrocket. As a general rule, if functional or user-acceptance-level testing is involved, ROI quickly disappears.

Another consideration is the type of hardware in use. Virtualizing high-density physical infrastructures such as blades vastly reduces the server footprint, but the cost of the associated cooling systems may outweigh the benefits. Likewise, the use of fat nodes and large vertically scaled servers offers high scalability and efficiency but comes with higher up-front costs. Commoditized rack-mounted servers are simple and easy to deploy, but because they share fewer hardware components they may offer fewer economies of scale.

6. Chargeback Models
An important and often unexpected issue arising from virtualization is the lack of viable chargeback models. Where no such model exists, virtual environments must be designed so that they do not cross departmental boundaries. If chargebacks are in place, the solution must provide a way to obtain accurate utilization information to ensure equitable billing for compute resources.
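
As a back-of-the-envelope illustration of utilization-based chargeback, the sketch below (Python) splits a shared host's monthly cost across departments in proportion to measured CPU-hours. The cost figure, department names and CPU-hour metric are invented for the example.

# Hypothetical chargeback sketch: apportion a shared host's monthly cost
# across departments by the CPU-hours their VMs consumed, per monitoring data.

from collections import defaultdict

def chargeback(monthly_host_cost, vm_usage):
    """vm_usage: list of (department, cpu_hours) tuples gathered from monitoring."""
    per_dept = defaultdict(float)
    for dept, cpu_hours in vm_usage:
        per_dept[dept] += cpu_hours
    total = sum(per_dept.values()) or 1.0
    return {dept: round(monthly_host_cost * hours / total, 2)
            for dept, hours in per_dept.items()}

if __name__ == "__main__":
    usage = [("finance", 1200.0), ("marketing", 300.0), ("finance", 500.0)]
    print(chargeback(9000.0, usage))  # {'finance': 7650.0, 'marketing': 1350.0}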

Conclusion
The sheer volume of virtualization offerings makes choosing the right solution a confusing and often overwhelming task. Having the foresight to analyze key factors impacting the virtualization effort enables businesses to avoid critical missteps. Moreover, conducting "what if" scenarios that analyze business and technical constraints, from security to workloads, drives more informed decision making that will ultimately mean the difference between success and failure in this complex era.

Google denies infringing search patent

Google responded on Friday to a lawsuit filed against it by Northeastern University, denying claims that its search service infringes on patented technology.
Google denied all charges in the suit, which was filed jointly by Northeastern, of Boston, and a search technology company called Jarg, of Waltham, Massachusetts. Google also filed a counterclaim asking the court to declare the patent invalid.

The suit was filed in November in the U.S. District Court for the Eastern District of Texas. It seeks an injunction preventing Google from further infringement, as well as royalty payments and damages.

The patent in question describes a distributed database system that breaks queries into fragments and distributes them to multiple computers in a network to get faster search results.
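
For readers unfamiliar with the general idea, the following toy sketch (Python) splits a query into terms, scores them against document shards in parallel, and merges the results. It is purely illustrative of scatter-gather searching and is not the patented system or Google's implementation.

# Illustrative only: a toy scatter-gather search. Work is farmed out to
# "shards" (standing in for networked machines) and the partial results merged.

from concurrent.futures import ThreadPoolExecutor

SHARDS = [
    {"doc1": "distributed database systems", "doc2": "query processing"},
    {"doc3": "parallel search engines", "doc4": "database fragments"},
]

def search_shard(shard, terms):
    # Score each document in this shard by how many query terms it contains.
    return {doc: sum(term in text for term in terms) for doc, text in shard.items()}

def scatter_gather(query):
    terms = query.lower().split()
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(search_shard, SHARDS, [terms] * len(SHARDS))
    merged = {}
    for part in partials:
        merged.update(part)
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    print(scatter_gather("distributed database query"))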

The plaintiffs say that Google uses this system to run its search engine, and that the system was invented by Kenneth Baclawski, an associate professor at Northeastern and one of Jarg's founders. Northeastern was awarded a patent for the system, which it has licensed exclusively to Jarg.

In its response Friday, Google argued that the patent is invalid and should not have been awarded in the first place. It cites various sections of U.S. patent law, including those that deal with the novelty of an invention and prior art. It also cites the doctrine of "laches," which essentially requires plaintiffs to file lawsuits in a timely manner.

Its counterclaim asks the court to declare the patent invalid and unenforceable.

Both parties have requested a jury trial and legal experts have said the case could be resolved in 18 months to two years.

The U.S. patent, number 5,694,593, is dated Dec. 2, 1997, and can be viewed by searching the Web site of the U.S. Patent & Trademark Office. U.S. patent law is also described on the USPTO Web site. Google's reply cites sections 101, 102, 103 and 112.

CES: Plasma and LCD TVs getting thinner

Ultra-thin flat-panel displays were the highlight of this year's International Consumer Electronics Show, with many vendors showing thinner and sleeker high-definition TVs and giving attendees a peek at what LCD, plasma and OLED screens will look like in a few years.
Visitors thronged the booths of Sony, Samsung, Panasonic, Pioneer and Hitachi, where the companies were showing larger flat-panel TV prototypes with reduced thickness, ranging from 3 millimeters to 39 mm, depending on the screen size.

The thinnest perhaps was Sony's 11-inch OLED (organic light-emitting diode) TV, the XEL-1, which is 3 millimeters thick. At $2,500, the panel is much thinner than LCDs, which start at 24 mm in thickness. The XEL-1 went on sale in Japan in December and was launched in the U.S. this week. Sony also showed off a 27-inch prototype OLED TV at CES.

Samsung showed off a thin OLED display prototype with a 31-inch screen, the largest of its kind on display at CES. Measuring around 4 mm thick, it is thinner than LCD panels and displayed more vivid pictures than LCD TVs.

TVs based on OLED technology have a slender design thanks to the use of an organic material that emits its own light. LCD (liquid crystal display) TVs require a backlight, which takes up space at the back of the panel.

OLED screens may display the best images, but their expected life span is only three to four years, and production issues plague the screens. While Sony and Samsung are investing heavily in OLED technology, a Sharp executive said the company is exploring the technology but won't commit to it until its life span is at least 10 years.

Until OLED technology resolves those problems, LCD and plasma TVs will rule the high-definition TV roost.

Panasonic displayed a prototype of its ultra-thin 50-inch plasma TV, which the company said is lighter and more power efficient. The display is 24.7 millimeters (0.97 inches) thick and weighs around 22 kilograms (48 pounds), half the weight of current high-definition TV models of similar size, said Toshihiro Sakamoto, president of Panasonic AVC Networks, during a Monday keynote at the show.

The company also demonstrated a prototype of its massive 150-inch plasma display, which Panasonic said is the largest flat-panel display in the world and is based on the slim design model used by the 50-inch prototype. It slimmed down the plasma TVs by using a thinner backlight unit, Sakamoto said.

Pioneer has minimized the significance of a backlight to deliver more vibrant colors on its 50-inch plasma display prototype shown at CES. However, the 9 mm display won't be brought to market this year, Pioneer said. Dubbed "extreme contrast," the concept display stops idle luminance in a TV, filling the screen with black levels that will allow the company's future plasma displays to offer a deeper spectrum of colors, said Russ Johnston, executive vice president of marketing and product planning, during a CES press conference.

JVC showed the 42-inch LT-42SL89 and the 46-inch LT-46SL89 flat-panel LCD TVs, both of which are 39 millimeters thick across most of the back of the panel, significantly thinner than its earlier models, JVC said. Both models will hit the U.S. market in "early summer," with pricing to be announced at that time, JVC said.

Hitachi unveiled thinner LCD TVs in three sizes -- 32, 37 and 42 inches -- with a 1.5-inch (38.1 mm) thickness. The LCD TVs' name, 1.5, comes from that thickness. The products will be available later this year; Hitachi did not provide pricing information.

Not only do slim TVs look cooler, they are also lighter and more power efficient, which makes them easier to carry and mount on a wall.

(Martyn Williams and Dan Nystedt of the IDG News Service contributed to this report.)

Congressional report rips US TSA Web site security

A Web site commissioned by the U.S. Transportation Security Administration (TSA) to help travelers whose names were erroneously listed on airline watch lists originally had multiple security problems that could lead to identity theft, says a congressional report released Friday.
In addition, the TSA awarded the $48,816 contract for the Traveler Redress Web site based on a request for quotes with requirements that only one Web design firm could meet, says the report, released by the House of Representatives Committee on Oversight and Government Reform. The TSA's technical lead and author of the request for quotes for the project was a longtime friend of the owner of Desyne Web Services and had briefly worked for the Virginia firm, the report says.

"This redress Web site had multiple security vulnerabilities: It was not hosted on a government domain; its homepage was not encrypted; one of its data submission pages was not encrypted; and its encrypted pages were not properly certified," the report says. "These deficiencies exposed thousands of American travelers to potential identity theft."

The TSA press office did not immediately respond to a request for comments on the House report. A receptionist at Desyne said the appropriate person for commenting was not available.

The redress Web site went live in October 2006 and blogger Christopher Soghoian, a graduate student in informatics at Indiana University, pointed out security problems there last February. The TSA took the Desyne Web site down that month and now hosts a traveler redress form on its own Web site.

"This begs the question: Who are these guys, why don't they know how to use SSL and how were they awarded this sweet contract?" Soghoian wrote in February 2006. "Why can't TSA do a simple form submission themselves?"

One of the biggest concerns raised by Soghoian and the House report is that the Desyne Web site did not use SSL (Secure Sockets Layer) encryption on its home page or on its submission pages, where travelers were asked to submit personal information such as their Social Security numbers and birth dates. The site was also not hosted on a government domain, meaning visitors "lost any assurance they were visiting a legitimate government Web site," the House report says.
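
For context, this kind of check is straightforward to automate. The sketch below (Python, with a placeholder host name) attempts a TLS handshake and verifies that the server's certificate validates against the system trust store, roughly the sort of test that flags an unencrypted or improperly certified page. It is an illustration, not the tool Soghoian or the committee used.

# Rough sketch: does a site that collects personal data present a certificate
# that validates and matches its host name? The host below is a placeholder.

import socket
import ssl

def https_certificate_ok(host, port=443, timeout=5.0):
    """Return True if a TLS handshake succeeds with a validating certificate."""
    context = ssl.create_default_context()  # verifies the chain and host name
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert() is not None
    except (ssl.SSLError, ssl.CertificateError, OSError):
        return False

if __name__ == "__main__":
    print(https_certificate_ok("example.gov"))  # placeholder host name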

The House report was also critical of Desyne's "no-bid" contract to operate the redress Web site. Desyne had done work for TSA since 2004, and as of late 2007, it continued to host the TSA's Web site where travelers could file claims for damaged property. The TSA's April 2006 request for quotes for the redress site said the design had to be consistent with the claims management site, and it had to be hosted on the same server that hosted the claims management site, the House report says.

As of September, Desyne continued to operate Web sites for TSA, and the company has received more than $500,000 in business from the agency since 2004, the report says. "TSA did not take action ... to sanction Desyne for poor performance," the report says.

WSO2 bringing Ruby to SOA

With open-source software being formally introduced Monday, WSO2 seeks to bridge the Ruby programming language and the Ruby on Rails Web framework with the SOA and Web services spaces.
The company is set to debut WSO2 WSF/Ruby (Web Services Framework for Ruby) 1.0, a Ruby extension supporting the WS-* Web services stack. With it, Ruby developers can incorporate the security and reliable-messaging capabilities needed for trusted, enterprise-class SOAP-based Web services, WSO2 said. The product also supports the alternative REST (Representational State Transfer) style of Web services.

"Ruby, as you know, has become a very popular language the last few years, and what we are enabling is for Ruby to become part of an enterprise SOA architecture," WSO Chairman/CEO Sanjiva Weerawarana said.

While Ruby has been popular in the Web 2.0 realm, sometimes it needs to talk to legacy architectures, he said. With the new framework, developers could build a Web application using Ruby and then hook into enterprise infrastructures, such as JMS (Java Message Service) queues. For example, a Web site might be built with Ruby that then needs to link to an order fulfillment system based on an IBM mainframe or minicomputer, Weerawarana said.

With the December release of Ruby on Rails 2.0, the builders of Rails swapped out a SOAP library and replaced it with REST capabilities. In doing so, David Heinemeier Hansson, the creator of Rails, stressed that SOAP and its attendant WS-* stack had become too complex.

But Weerawarana stressed REST may not always be sufficient. "[The REST preference] is a perfectly fine position to take if you don't need any kind of these security and reliability infrastructure [capabilities]," he said. WSO2's framework would replace the SOAP capabilities removed in Rails 2.0, he said.
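
To illustrate the difference being drawn here, the sketch below (written in Python rather than WSF/Ruby, and using an invented order-service schema and endpoint) contrasts a bare REST call with the same request wrapped in a SOAP envelope, whose header block is where WS-Security and WS-ReliableMessaging metadata would travel. It is a language-neutral illustration, not WSO2's API.

# Illustrative comparison of REST vs. SOAP message construction.
# The /orders resource and getOrder schema are invented for this example.

import urllib.request
from xml.sax.saxutils import escape

def rest_get_order(base_url, order_id):
    # Plain REST: the resource is addressed by URL, no envelope required.
    with urllib.request.urlopen(f"{base_url}/orders/{order_id}") as resp:
        return resp.read().decode()

def soap_get_order(order_id):
    # SOAP: the same request as an envelope; WS-* headers (security,
    # reliable messaging) would be placed inside <soap:Header>.
    return f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header><!-- WS-Security / WS-ReliableMessaging headers go here --></soap:Header>
  <soap:Body>
    <getOrder xmlns="urn:example:orders">
      <orderId>{escape(str(order_id))}</orderId>
    </getOrder>
  </soap:Body>
</soap:Envelope>"""

if __name__ == "__main__":
    print(soap_get_order(42))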

Weerawarana's stance was seconded by a user of the company's products, Stefan Tilkov, CEO of infoQ, a consulting firm near Dusseldorf, Germany. While he likes Rails because it picks and chooses technologies, such as opting for REST over SOAP, Tilkov said businesses may not be ready to welcome both Rails and the lesser-known REST style of Web services at the same time.

"Sometimes, you have to decide whether you want to fight all of the possible battles at once. Trying to introduce both Rails and [REST] at the same to a company can really be a challenge," he said. WSO2 is providing WS-* and SOAP capabilities for Rails, Tilkov stressed.

Still, Tilkov likes REST. "I'm a very big REST fan, and I [advocate] the use of REST whenever I can," he said.

WSF/Ruby 1.0 binds WSO2's Web Services Framework for C into Ruby to provide an extension based on three Apache projects: Axis2/C, a Web services runtime supporting SOAP and REST; Sandesha/C, supporting WS-ReliableMessaging; and Rampart/C, for WS-Security capabilities.

Also, WSF/Ruby 1.0 uses Ruby on Rails as its deployment model for providing services.

Client and service APIs are offered, with support for SOAP 1.1 and SOAP 1.2. The software is interoperable with Microsoft .Net, the WSO2 Web Services Application Server, and J2EE implementations. SOAP Message Transmission Optimization Mechanism and WS-Addressing capabilities are included as well.

WSF/Ruby 1.0 is available for download under the Apache License 2.0. While the software is free, WSO2 sells development and production support as well as training for it. Production support prices start at US$2,000, while development support begins at $2,500 for 10 hours. Training costs $400 per day per person, with a minimum of five people required.

WSO2's long-term strategy includes allowing scripting languages like Ruby, Perl, and PHP (Hypertext Preprocessor) to participate in an enterprise SOA.

Data centers take to the high seas

International Data Security, a U.S. startup, plans to open the first of 50 ship-borne floating data centers at Pier 50 in San Francisco in April.
Floating data centers are said to be much more environmentally friendly than land-based ones. Other green data center locations have previously included Siberia, Iceland, Smartbunker (a former UK NATO military site), and a Japanese coal mine.

The company aims to have 22 container ships housing data centers around the U.S. coastline and 28 elsewhere around the globe. The data centers will be constructed in the ship's cargo space and also housed in shipping containers stacked on the deck, using products such as Sun's Blackbox and Rackable's ICE Cube. It is said that each ship will have a minimum of 200,000 square feet of potential data center space.

The Pier 50 ship already has its 'anchor' tenants. The full fleet of 50 decommissioned cargo ships is said to have already been purchased.

The ships will be moored in ports and have power and network connections run out to them. Power demands will be supplemented by on-board generators running on the ships' bio-diesel supply, allowing operations to continue through sustained power outages of up to one month. To reduce the demands on the cooling system for the generators and data containers, sea water will be used to cool the air-conditioning towers, with a 30-40 percent power reduction expected. Waste heat from the data centers will be re-used to heat the ships' accommodation.

As well as sea-water cooling, bio-diesel and re-use of waste heat, the floating data center's environmental credentials are increased by the ships themselves being recycled instead of scrapped.

These marine data centers are being targeted at the disaster recovery market initially. The San Francisco area is well known for its San Andreas fault earthquake risk.

IDS says that a ship-borne data center can be commissioned in just a few months whereas building a land-based data center can take a year or more and be hindered by real estate constraints.

IDS stands for International Data Security and is run by CEO Ken Choi and president Richard Naughton, an ex-US Navy admiral.

IDS is a private company and is so new that it doesn't yet have a Web site, or indeed much of a presence in Web search engines, which is curious given how close it is to launching its first shipboard data center. A somewhat basic data sheet has come to light. It is not known whether IDS itself is going to be floated.

Don't upgrade to Vista, UK gov't agency tells schools

British schools should not upgrade to Microsoft's Vista operating system and Office 2007 productivity suite, the British Educational Communications and Technology Agency (BECTA) said in a report on the software. It also supported use of the international standard ODF (Open Document Format) for storing files.
Schools might consider using Vista if rolling out all-new infrastructure, but should not introduce it piecemeal alongside other versions of Windows, or upgrade older machines, said the agency, which is responsible for advising British schools and colleges on their IT use.

"We have not had sight of any evidence to support the argument that the costs of upgrading to Vista in educational establishments would be offset by appropriate benefit," it said.

The cost of upgrading Britain's schools to Vista would be £175 million (US$350 million), around a third of which would go to Microsoft, the agency said. The rest would go on deployment costs, testing and hardware upgrades, it said.

Even that sum would not be enough to purchase graphics cards capable of displaying Windows Aero Graphics, although that's no great loss because "there was no significant benefit to schools and colleges in running Aero," it said.

As for Office 2007, "there remains no compelling case for deployment," the agency said in its full report, published this week.

The agency was equally skeptical about the benefits of Vista and Office 2007 last January, when it published an interim report based on its evaluation of beta versions of the new software. Then, it advised that the added value of Vista's new features was not sufficient to justify the cost of deployment, while Office 2007 contained no "must-have" features.

In this year's report, BECTA warned schools that do choose to use Office 2007 to avoid Microsoft's OOXML (Office Open XML) document format because of concerns about compatibility between different applications -- even though interoperability is one of the benefits Microsoft claims for the format.

It called on schools to make teachers, parents and pupils more aware of free alternatives to Microsoft's products, and asked the IT industry to facilitate their use.

The agency also recommended setting up desktops to make it easy to use such open-source applications, and advised schools to insist their suppliers deliver office productivity software that can open and save ODF documents, setting it as the default file format.

However, it slammed Microsoft for dragging its feet with incorporating support for ODF in Office 2007.

"While the product includes the functionality to read virtually every other relevant file format 'out of the box', the processes for dealing with ODF files are very cumbersome," BECTA wrote.

In addition, it said, ODF file converters provided by Microsoft are not intuitive because they behave differently from the regular file save dialogs.

"We believe that these arrangements present sufficient technical difficulties for the majority of users to make them disinclined to use competitor products and this may weaken competition," the agency said.

Toshiba shows prototype TV running on Cell chip

What happens when you take the powerful Cell microprocessor, the chip that sits at the heart of the PlayStation 3 games console, and put it to use inside a television? Toshiba demonstrated just such a TV at this week's International Consumer Electronics Show and the results are impressive.

The Cell chip was developed by Toshiba along with IBM, Sony and Sony Computer Entertainment, and is dedicated to graphics processing. Each chip contains a single Power PC core and eight co-processors to make heavy-duty processing of video a breeze.

While Sony developed the chip for its PlayStation 3, Toshiba invested money in the project with an eye to using the device in consumer electronics products. Until CES, the company hadn't shown a Cell-powered consumer device, but a pair of flat-panel TVs on its booth at the trade show have changed that.

The first and perhaps most relevant benefit of putting the Cell inside a television is the ability to handle real-time upscaling of standard-definition TV to high definition. As HDTV channels proliferate, viewers grow accustomed to the crisp, sharp quality of HD, which makes standard definition look poor by comparison. With a Cell-powered TV you'd be able to enjoy regular channels at a quality much closer to that of HD, said Hiroaki Komaki, a specialist at Toshiba's core technology center in Tokyo.

The upscaling doesn't stop there. The same feature can be used to zoom in on an area of an HDTV picture, enlarge that single area, and then improve its image quality. Imagine zooming in on a home movie of a sports event and getting closer to the action.

The Cell also makes it possible to easily navigate a number of video channels simultaneously. In a demo at CES, the chip was streaming 48 chapters from a standard-definition video file in real-time, with each appearing as a video thumbnail on the screen. Clicking on one of the clips would bring it up on the lower half of the screen, with 16 chapters still running in the upper half. Another push on the button would move it to full screen.

If the video streams were HD, it would be able to process six in real-time and display them on the screen, said Komaki.

Toshiba still hasn't decided exactly what features it will build into a Cell-based TV, nor has it decided when such a set will go on sale. One thing Toshiba isn't planning on doing is building a PlayStation 3 gaming system into its TVs. The chips may be the same but Komaki said such a combination isn't likely.

The company has been chasing the idea of Cell-based consumer electronics since it signed on with Sony and IBM to develop the chip in 2001.

Holiday spirit helped double Storm worm

Some clever, sexy Christmas-themed spam and a long holiday season helped the criminals behind the notorious Storm Worm more than double their network of infected PCs over the past few weeks, security experts say.
Storm kicked off its holiday spam-and-malware campaign on the day before Christmas, sending off a flurry of e-mail that invited victims to visit a Christmas-themed strip show on Web sites such as Merrychristmasdude.com. Victims who downloaded the strip show found their PCs attacked by malicious software.

This site, and about 14 other Storm-related domains, was registered using a Russian domain name registrar called Nic.ru, where staff was largely unavailable during the holidays, according to Richard Cox, the chief information officer with the Spamhaus anti-spam effort.

"The trouble was they were quite simply out to Christmas lunch," Cox said. "And they didn't get back until Wednesday."

Spamhaus representatives tried to contact the registrar on Dec. 26, but they soon discovered that the company was essentially shut down. Four days later they received an e-mail from a Nic.ru employee saying that they would have to wait until staff returned to work in January before anything could be done, Cox said.

Storm's creators took advantage of another common problem in the domain name registration system. They targeted a registrar that did not have an established policy for taking down malicious domains, so that gave the criminals a little more time to run their scam, Cox said.

By Wednesday of this week, Nic.ru had removed the Storm-related domains from its database, knocking the criminal network's Web sites offline.

But now security experts say that Storm has more than doubled in size, adding about 25,000 PCs to a network that had been 20,000 strong.

Storm's creators also changed the configuration of their malware, making it harder for security software to detect it, and this helped inflate the number of infections, Cox said.

The Christmas campaign was "pretty effective at growing the botnet," said Jose Nazario, senior security engineer at Arbor Networks. "The contributing factors there were clearly the successful timing, tied to a major Western holiday, coupled with tweaks to the malware to avoid antivirus detection."

Storm Worm has been attacking computer users since January 2007, when it began tricking victims into downloading malicious software, claiming that it was a video of violent storms that had been ravaging Europe.

Since then it has been one of the most virulent sources of malware, although it has shrunk in size as detection methods have improved. Recently the network appears to have begun renting out its infected PCs to phishers, according to some researchers.

"Storm is an insidious pest. We've learned essentially how to manage it," said Nazario. But, he added, the success of this Christmas campaign proves that the network can still be a serious threat. "They're great marketers," he said.

CES: UMPCs whipped for hardware and design flaws

Continued criticism from industry insiders didn't stop vendors from OQO to Lenovo and LG from showing off ultramobile PC products with a range of innovative features at the International Consumer Electronics Show (CES), held in Las Vegas this week.
Although many of the prototypes on display are due to hit the market later this year, UMPCs have been panned for their inconvenient keyboards, small screens and poor battery life ever since the first UMPC from OQO was introduced at CES in 2004.

OQO showed off a WiMax-capable OQO Model 2 UMPC, powered by Via Technologies' C7-M mobile processor and running the Windows Vista OS. It comes with hard-drive or flash-based solid-state drive options, supports up to 1G byte of RAM, and has a sliding display that pops up to reveal a keyboard. The device weighs around 1 pound (453 grams), and prices start at US$1,299.

Eyes were locked on UMPC prototypes from companies including Lenovo and Founder at Intel's booth. The Lenovo device includes the Linux OS from Chinese developer Red Flag Software, and boasts a 4.8-inch touchscreen, an onboard camera, and other features. The Founder Mini-Note features a 7-inch screen, a 60G-byte hard drive, Wi-Fi and Bluetooth wireless networking, and weighs around 800 grams.

Intel's prototypes are based on its Menlow platform, a code name given to a set of Intel chips for ultramobile PCs due out next year. Menlow will include a new low-power microprocessor, code-named Silverthorne, and a chipset code-named Poulsbo.

One prototype that may never ship is a slider UMPC displayed in LG's booth, also based on the Menlow platform. The device runs Windows Vista, comes with a 4.8-inch screen, 1G byte of memory, a 40G-byte hard disk drive, a touchscreen, Bluetooth, Wi-Fi and 3G HSDPA cellular data. A representative at the LG booth said the company had not decided whether to market the device, as it suffered from poor battery life and keyboard usage issues.

UMPCs create a design challenge by virtue of being a tweener -- neither a cell phone nor a laptop, said Phil McKinney, vice president and chief technology officer at HP's personal systems group. "The UMPCs -- OQO and those guys -- are trying to be too much on the small side, very heavy, not great battery life, they get hot in your hand too when you use it. But when you get north of 9-inch screens, you're getting pretty close to a laptop," McKinney said.

Screens of up to 7 inches are not an appropriate scale for touch-based applications, McKinney said.

UMPCs have floundered for a while because a killer application for the devices hasn't yet been discovered, McKinney said. "There's a lot of people coming out with products, I don't think anybody's found what the killer application or what that killer use case model really is," McKinney said.

Alp Sezen, a sales director for Via Technologies based in Fremont, California, said that a reason UMPC sales have not increased dramatically, at least in the U.S., is that wireless bandwidth for mobile devices up to now has been slow.

"The biggest problem with ultramobile devices is they need more bandwidth. When the user experience for mobile wireless is better, that's when you will see ultramobile devices really take off," Sezen said. "Right now you typically get 116K [bits per second] when you are mobile, which isn't a great user experience." A true ultramobile experience is the ability to pull out a mobile device and easily surf the Web. "Right now, you can't get that experience," Sezen said.

It will take a year or two for the mobile wireless experience to get better, Sezen said. "WiMax, and the opening up of Verizon's EV-DO network in the third quarter this year will help give a better experience for ultramobile users."

Intel has categorized UMPCs under its Mobile Internet Device (MID) nomenclature, and segments the devices further based on applications such as entertainment, productivity and navigation, said Pankaj Kedia, director of Intel's global ecosystem program for mobile Internet and UMPC platforms. The look and feel of the devices, the marketing technique and what users want to buy are all different, Kedia said.

Clarion's UMPC, for example, will be marketed as a next-generation navigation device, Kedia said. "It might have the capability of a PC under the hood, but from a user perspective it is a portable navigation mobile Internet device," Kedia said. UMPCs are more like MIDs aimed at productivity with PC capabilities inside.

When asked if a prototype UMPC that Qualcomm showed off at CES would replace cell phones, company chief technology officer Sanjay Jha thought for a second and then replied: "I don't know."

Different people might use them in different ways, Jha said. Also, how consumers use UMPCs might depend on how successful Bluetooth becomes, he said. Some consumers might be happy to use a UMPC instead of a cell phone if they can use a Bluetooth headset to make and receive voice calls, rather than holding the larger device up to their ears, he said.

Fujitsu's Paul Moore, senior director of mobile product marketing, also didn't have a definitive description of the ideal UMPC user. Someone who works on their feet a lot and is OK with typing with their thumbs might be an ideal user, he said. But he said the UMPC wouldn't necessarily replace a laptop.

Terminology doesn't matter though, HP's McKinney said. "Let a marketing person loose for 10 minutes and they'll come up with a category. You can say UMPC or MID, what the hell's the difference?"

(Nancy Gohring, Marc Ferranti, Dan Nystedt and Martyn Williams contributed to this report).

Oracle to ship critical security patches next week

Oracle plans to fix dozens of flaws in its software products next Tuesday, including critical bugs in the company's database, e-business suite and application server.
In its first security update of 2008, Oracle will ship 27 security fixes, some of which will affect several products. Oracle outlined some details of the upcoming patches in a pre-release announcement posted to the company's Web site Thursday afternoon.

Oracle releases security patches every three months, a process known as the Critical Patch Update (CPU). January's bug-fix total is low by Oracle's standards. In October, the company patched 51 vulnerabilities.

As usual, the company's database will be a major focus of the CPU. Oracle plans to ship eight security fixes for the Oracle Database, addressing bugs in the software's advanced queuing, core RDBMS (relational database management system), Oracle Agent, Oracle Spatial and XML (Extensible Markup Language) database software.

None of the database vulnerabilities can be exploited over a network without the attacker first obtaining a username and password for the database.

Oracle's next most-patched product will be the E-Business Suite, which will receive seven updates, three of which are for bugs that can be remotely exploited by attackers who do not have usernames or passwords for the system.

The Oracle Application Server will get six bug-fixes, addressing flaws in components such as the product's BPEL (Business Process Execution Language), Worklist Application, Oracle Forms and Oracle Internet Directory software.

Finally, Oracle is planning four updates for its PeopleSoft and JD Edwards products, as well as one fix each for Oracle Enterprise Manager and the Oracle Collaboration Suite.

Microsoft sends patch to wrong users

A day after Microsoft Corp. accidentally sent a patch to some users running the Windows Vista operating system, the company released an updated preview of Vista Service Pack 1 (SP1) to a small group of testers, Microsoft confirmed Thursday.
"Microsoft [has] released the latest prerelease build of SP1, Windows Vista SP1 RC Refresh, to approximately 15,000 beta testers," a spokeswoman said in an e-mail this morning. "This group includes corporate customers, consumer enthusiasts, software and hardware vendors, and others. The code is not available for public download."

Four weeks ago, Microsoft made Vista SP1 Release Candidate available to the general public for the first time. The 15,000 testers, however, had earlier beta versions to work with, as well as this most recent update.

The company has slated Vista SP1 for final delivery this quarter, and Thursday said it remained on track. "We are still on schedule to deliver SP1 RTM in Q1 [calendar year 2008]," said the spokeswoman.

In a separate issue, though, the company Wednesday admitted a snafu in a Windows Vista update it issued Tuesday to prep PCs for the later release of SP1.

The update, which is described in the support document KB935509, was one of three prerequisites for SP1 unveiled Tuesday, and was supposed to end up only on Vista Enterprise and Vista Ultimate machines, since it targeted BitLocker, the full-drive encryption technology bundled with those premium versions of the operating system. Instead, the update was also offered to PCs running Vista Home Basic and Home Premium.

"We had a small number of early customer reports, that in some cases, this update was being offered for installation on all Windows Vista editions versus just Ultimate and Enterprise," said an anonymous poster on the Microsoft company blog devoted to the Windows Update development team. "For systems set to download and install updates automatically, the update will not install even if it has already downloaded, so most people will not be affected by this," the post continued. "Customers who installed the initial release of the update on editions other than Ultimate or Enterprise should not be concerned as the update will have no negative impact on their systems."

Although some users on Microsoft's support forums wondered why they had seen the BitLocker patch when it didn't apply to their machines, no one running Home Basic or Home Premium had reported problems as of midday Thursday.

The remaining pair of prerequisites tweak Vista so that users will be able to roll back to the debut version of the operating system by uninstalling SP1 if they find that necessary.

This week's glitch was the latest in a series of Windows Updates snafus that include the September revelation that, contrary to users' instructions, Windows' update code had updated itself on their PCs, and charges in October that the company's OneCare security suite was also monkeying with users' update settings. Microsoft denied doing anything untoward with OneCare.

Microsoft to provide virtual access to Library of Congress

Microsoft will provide the technology that allows visitors to the U.S. Library of Congress (LOC) to first take a virtual tour of historic documents and map out what exhibits they want to see, the two organizations announced Thursday.
The project will include the Myloc.gov Web site, to be launched in April, linked to information kiosks at the LOC's Thomas Jefferson Building in Washington, D.C. Interactive galleries will allow visitors to the Myloc.gov site to view and sometimes interact with items such as a rough draft of the U.S. Declaration of Independence, the Gutenberg Bible and a 1507 map that first used the word "America."

The new technology is designed to assist people who want to visit the library in person, said John Sampson, director of federal government affairs at Microsoft. Visitors to the Web site will be able to bookmark areas of interest, then use a bar code at the LOC's information kiosks that will point them to more information in person, he said. Visitors both online and on-site can also engage in a game called Knowledge Quest that sends them searching for clues in the LOC's art and artifacts, Microsoft said.

The library has thought hard about how to bridge the online experience with an in-person visit, Sampson said. The on-site kiosks will help visitors build a custom tour of the library, he said.

The library's plan is to cycle online exhibits in and out, but gradually make more information available online, Sampson said. "What they've realized is they have a vast collection of amazing, historic artifacts, documents and manuscripts that would take far too long to put on display," he said. "To say they have great content ... is almost an understatement. It's the nation's most prized treasures."

Microsoft is donating software, funding and training to the project and in return the company gets to work with one of the premier libraries in the world, Sampson said.

"For us, it's a unique showcase to show the breadth and the depth of the technology," said Keith Hurwitz, a platform strategy advisor at Microsoft.

Microsoft is helping put the library's "unparalleled educational resources literally at the fingertips of students and lifelong learners alike, both on site at the Library of Congress and virtually through the Web," Librarian of Congress James Billington said in a statement. "The Library of Congress and the causes of inspiration and creativity will benefit immensely from this act of generosity and expertise."

Interactive presentation software for kiosks will run on Windows Vista and its Web equivalent, built using Microsoft Silverlight. The project will also use Microsoft Office SharePoint Server 2007 Web content management software.

The library's "Exploring the Early Americas" exhibition, which opened Dec. 13, offers a sampling of the new experience.

Mitch Kapor to phase out involvement in OSAF

The Open Source Applications Foundation has announced a major funding and personnel shakeup, including that Lotus Development founder Mitchell Kapor will begin to phase out his involvement and investment in the nonprofit organization, which he founded in 2001.

"Strategically, we find ourselves at a crossroads," OSAF's general manager, Katie Capps Parlante, said in a blog post.
"OSAF has been primarily funded by one person up to this point, Mitch Kapor. Our goal going forward is to modify our organization and our funding model to grow into a publicly supported community project, not propelled by one individual," Parlante wrote.

Parlante said moving forward, OSAF's paid staff headcount will be cut by roughly two-thirds, going from 27 to 10.

"I will be leading the next phase of the project, and Mitch will be winding down his role on the project. Mitch will provide transitional financial assistance to support the organization through 2008. Mitch will step down from the board, and I will replace him," she added.

In September, OSAF shipped a preview of its Chandler group collaboration software, which includes Chandler Desktop, Chandler Server and Chandler Hub, a Web application. The software lets users share information, such as calendars and tasks.

The release was a long time coming for a project once dubbed an "Outlook killer." Its rocky development process became fodder for a recent book, "Dreaming in Code."

"I would say I had a lot of ambitions that we wound up, for very good and practical reasons, scaling back on," Kapor said in an interview Thursday. He described the outcome as "a working subset of a grand vision."

Kapor said his interest in continuing waned. "We found ourselves in the situation that the team wanted to continue on very much," he added. "I found myself in a different place. I did not have that same level of commitment and desire, because I had the original dream in mind."

Kapor said the saga has proven to be a "huge learning experience" for him. "It's been a mixture of many different emotions. I would say it would be unfair to single out disappointment as a leading factor [in withdrawing my support]," he said.

"It felt like the right thing to do is provide this transitional support but now it has to find its own way, and its own funding. I've chosen to decouple from it but I think Katie and the team have a real shot," Kapor added.

Kapor's pending departure prompted a head-shaking eulogy from Web developer Hank Williams, who had been active in the project.

"From my perspective, Chandler was a rudderless ship. I tried to make suggestions which, though small, I felt could greatly reduce the complexity of the product. But their design process seemed to be insular and, honestly, broken," Williams said on his blog. "The failure of Chandler is sad. But indeed after six years with no viable product or even strategy, it is finally time to die."

In an interview Thursday, Parlante said OSAF intended to wean itself off Kapor's support all along.

"I don't think the project has any animosity toward Mitch," she added. "I just think through the next execution phase, he's going to be spending his energy on projects that are in an earlier phase. ... I am not unhappy and this is the right decision. It's a mutual decision between the two of us."

Parlante said she expects OSAF to be funded through a mixture of sources, including grants, partnerships and contributions.

She expressed confidence in the Chandler project's future: "We have something usable now. The user base is growing. At the end of the day we should be judged on the project we deliver ... We're not there yet, but there's a lot of promise, and I think we're going to make it."

Kapor seconded the notion. "Don't forget, Mozilla's obituary was written in 48-point type over and over again during the period before Firefox. I don't see any of those people coming back and eating crow," he said.

Android invades hardware

Google's Android developer kit for mobile phones has been successfully installed on several hardware devices, a step toward turning it into a genuine mobile-phone platform.
The Android Software Developer Kit (SDK), first released in November, has been developed by Google and others as part of the Open Handset Alliance with the goal of spurring innovation in the mobile space. The platform, based on the Linux 2.6 kernel, will comprise an operating system, middleware stack, customizable user interface and applications.

Google released the preview version of Android without support for actual hardware - instead, developers were given a software emulator based on Qemu. However, running the software on actual hardware can give developers a more accurate idea of how their applications will run.

And while the Open Handset Alliance (OHA) has more than 30 corporate members, open source developers are crucial to the project's success. Google released the SDK with a few demonstration applications, and is relying on third parties to come up with the rest.

Some developers have criticized the OHA's hands-off approach, citing the lack of support for developers as well as bugs and missing functionality.

Possibly the first hardware platform to run Android was the Armadillo-500 from Atmark-Techno, based on Freescale's i.MX31L mobile processor, according to a blog post, which credited Australian developer Ben Leslie for the initial work.

Japanese telecommunications company Willcom has demonstrated another prototype Android reference board, also running on a Freescale-based chip, according to a Japanese gadget news website.

Fujitsu has published instructions on running Android on one of its reference boards.

Several developers said they had used Leslie's development work to run Android on different versions of Sharp's Linux-based Zaurus handheld computer.

One of these was software development firm EU Edge, which demonstrated Android running on a Zaurus SL-C760.

Another developer used a similar technique to run the platform on a Zaurus SL-C3000.

Installation has become easier via an installable Android image for Zaurus. A developer using the handle "cortez" and running a Zaurus SL-C3100 combined the Android SDK with the Poky Linux kernel, creating an installer that can be booted within a few minutes.

So far the Zaurus implementations have certain limitations - for instance, Android's Bluetooth doesn't yet work with Zaurus, and it can't yet interact with the handheld's touchscreen.

Google hasn't yet announced support for an official hardware development board. The first commercial handsets running Android are expected in the second half of this year.

New York launches antitrust investigation of Intel

New York state Attorney General Andrew Cuomo has launched an antitrust investigation of Intel, and on Thursday, his office served a wide-ranging subpoena on the company.
Cuomo is investigating whether Intel violated state and federal antitrust laws by coercing customers to exclude its main rival, Advanced Micro Devices, from the worldwide market for PC CPUs (central processing units), Cuomo said in a news release.

The subpoena seeks information on Intel's pricing practices and possible attempts to exclude competitors through its market power, Cuomo's office said.

Intel's conduct warrants "a full and factual investigation," Cuomo said in a statement. "Protecting fair and open competition in the microprocessor market is critical to New York, the United States, and the world. Businesses and consumers everywhere should have the ability to easily choose the best products at the best price and only fair competition can guarantee it."

An Intel spokesman confirmed that Intel has received a subpoena from Cuomo's office. Intel intends to "work very hard to comply with the request," said Chuck Mulloy.

"We believe our business practices are lawful, and we believe the microprocessor market is competitive," Mulloy added.

Cuomo's office, in the subpoena, is seeking information on whether Intel penalized customers, including computer manufacturers, for purchasing CPUs from competitors, his office said. Cuomo also wants to know whether Intel improperly paid customers for exclusivity and whether the company illegally cut off competitors from distribution channels.

Intel sells about 80 percent of the CPUs contained in PCs, Cuomo noted.

Authorities in Europe and Asia have also investigated Intel for monopolistic practices, with the European Commission accusing Intel in July 2007 of abusing its dominant position in the microprocessor market. Intel filed a response to the European complaint earlier this month.

In 2005, the Japanese Fair Trade Commission concluded that Intel violated its competition laws.

Macworld show guide for iPhone, iPod touch offered

iViewr.com has introduced an iPhone and iPod touch-based guide to Macworld Expo 2008.
Free to access, the 'Pod SnapShot' provides details of every aspect of the show, from conference schedules to speaker profiles, travel directions, disabled access, details of the Moscone Center's facilities and more.

"Rather than carrying around a jumble of papers, maps and leaflets, iViewr provides visitors to the show with all of the important information especially formatted for display on their iPhones or iPods" said Rod Cambridge, founder of iViewr.

To access the guide, simply browse iViewr.com using an iPhone or iPod touch and select the Events USA category there.

After furor, Network Solutions stands by name policy

Network Solutions is standing by its controversial policy of automatically registering some domain names that are the subject of searches on the company's Web site.
After testing the concept in December, the domain name registration company quietly began doing this over the past weekend. Potential customers who used the company's "Find a domain" search engine would suddenly find the domain names they had been searching for were registered to Network Solutions itself, making them temporarily unable to purchase the domain from another provider.

Industry watchers were quick to blast the new policy, saying that it either forced searchers to become Network Solutions customers, or exposed their ideas to scammers, who would be able to snatch up the domains the second they were released. "It is a deplorable action that Network Solutions would announce potential domain names to the entire world," wrote Jay Westerdal, on the DomainTools blog.

If cutting down on domain name scamming was the goal, "someone should be fired over the implementation," wrote Andrew Allemann, a blogger with Domain Name Wire.

On Wednesday, Network Solutions CEO Champ Mitchell said that his company planned to change the site's design to ensure that users are notified of this policy. The company is also looking into adding a feature that would give users the option of keeping their searches unregistered, although that would require cooperation from the domain name registries, he said.

Ironically, Mitchell said that Network Solutions came up with the search registration process in an effort to cut down on the scamming that has plagued the industry over the past two years. "We are not trying to make a bunch of money off of this," he said.

By registering the domains immediately, Network Solutions is keeping them out of the hands of scammers who take advantage of a loophole in the way names are registered. It has become increasingly common for scammers to register large numbers of domains for a short period of time and then keep only the ones that generate Web traffic, a practice called domain tasting. Because a domain can be held without charge for up to five days, this practice costs the scammer almost nothing, but it can be lucrative.

In another practice, called front running, scammers have found ways -- some of them illegal -- to keep track of domain name searches and then hold onto those domains themselves, hoping to sell them to the people doing the searching.

Some critics have said that Network Solutions' new practice amounts to front running, but Mitchell disagrees, saying the point of the system is to protect customers from the front-runners.

His company has developed an algorithm designed to identify legitimate domain name searches and then automatically register the domain names being searched for on behalf of Network Solutions. These domains are held for a four-day period with a Web-page notice saying that they are available for sale. This gives the Network Solutions customer a window of opportunity to purchase the domain before it is snatched up by a scammer, Mitchell said.

Mitchell added that if ICANN (Internet Corporation for Assigned Names and Numbers), the organization that oversees the domain name system, would move to cut down on these types of scams, then his company wouldn't have to engage in this kind of automatic search registration. "We would be perfectly happy to end this process if ICANN or the registries would do something to protect small businesses or other small users," he said.

A $0.25 non-refundable domain name registration fee would probably be enough to make domain tasting or front running unprofitable, he added.
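
A rough, back-of-the-envelope calculation shows why. The short Python sketch below uses purely hypothetical figures; the batch size, keep rate, and revenue per kept domain are invented for illustration, and only the $0.25 fee comes from Mitchell's remark. It compares a taster's profit under today's free grace period with the same batch under a non-refundable fee.

    # All figures except the fee are hypothetical, chosen only to illustrate the point.
    tasted_domains = 100000    # hypothetical batch registered during the grace period
    keep_rate = 0.01           # hypothetical share that earns enough traffic to keep
    revenue_per_kept = 20.00   # hypothetical ad revenue per kept domain
    fee = 0.25                 # the proposed non-refundable registration fee

    revenue = tasted_domains * keep_rate * revenue_per_kept   # 20,000.00
    profit_today = revenue - 0.0                              # the grace period makes tasting free
    profit_with_fee = revenue - tasted_domains * fee          # 20,000 - 25,000 = -5,000

    print("profit today:    %10.2f" % profit_today)
    print("profit with fee: %10.2f" % profit_with_fee)

Under these assumptions, a batch that is pure profit today becomes a $5,000 loss once every sampled name costs a quarter, which is the effect Mitchell is describing.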

Ask.com names new CEO, president

Ask.com CEO Jim Lanzone is quitting after six years with the company to join a venture capital firm. His replacement, Jim Safka, currently heads Primal Ventures, the venture capital division of Ask.com's parent company IAC.
The moves are intended to streamline the operating structure of IAC as it prepares to spin off some of its operations, said Barry Diller, CEO of IAC.

The group plans to turn its HSN (Home Shopping Network), Ticketmaster, Interval International and LendingTree activities into separate, publicly traded companies.

Lanzone was responsible for a "turnaround" at Ask.com over the last two years, Diller said in a statement.

However, Ask.com's share of the U.S. search market is small: it had just 4.6 percent of the market in November, compared to 9.8 percent for Microsoft, 22.4 percent for Yahoo, and 58.6 percent for Google, according to market watcher Comscore. Ask.com's share has fallen from 5.2 percent in March, according to Comscore figures.

Ask.com isn't giving up without a fight, though: in November, it struck a deal with Google, said to be worth up to $3.5 billion, to display sponsored search listings from its larger rival.

It has also tried to differentiate itself from the other search engines by promising to protect its users' privacy and erase personal data stored about the searches they make.

Ask.com's new CEO Safka will be aided by Scott Garell who, in the role of president, will manage Ask.com's daily business operations, the company said. Garell is currently head of IAC's consumer applications and portals business, which includes the Evite service.

Ex-CEO Lanzone becomes an entrepreneur-in-residence at Redpoint Ventures. That company's past investments include Excite, MySpace, TiVo -- and Ask.com itself.

Storm splinters, starts phishing, say researchers

Part of the Storm botnet appears to have been rented out to identity thieves, who are using it to conduct traditional phishing attacks that target customers of a pair of U.K.-based banks, researchers said Wednesday.
Two recent phishing attacks -- one aimed at customers of Barclays, the second at account holders of the Bank of Scotland -- appear to be coming from domains associated with known campaigns designed to build out the botnet of Storm-infected PCs.

Fortinet Inc. was the first security company to confirm that the Barclays attack came from Storm-controlled machines. In a post Monday, Fortinet research engineer Derek Manky noted that the phishing e-mails originated from a Storm fast-flux domain that the botnet had used since the middle of 2007.

In fast-flux, IP addresses are rapidly added to and removed from the DNS (Domain Name System) records for either a single host name or an entire DNS zone. In both cases, the strategy masks the IP address of the malware site by hiding it behind an ever-changing array of compromised machines acting as proxies. In extreme cases, the addresses change every second.
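
Fast-flux churn can be observed from outside the botnet simply by resolving the same name repeatedly and noting how quickly the answers change. The following is a minimal Python sketch of that idea, not tied to Storm or to any particular phishing domain; the hostname, number of checks, and interval are placeholders. Against an ordinary domain the address set stays stable, while a fast-flux domain keeps turning up new entries.

    import socket
    import time

    def watch_a_records(hostname, checks=5, interval=60):
        # Resolve the hostname several times and tally the distinct A records returned.
        seen = set()
        for i in range(checks):
            try:
                _, _, addresses = socket.gethostbyname_ex(hostname)
            except socket.gaierror:
                addresses = []  # the domain may already have been taken down
            new = sorted(set(addresses) - seen)
            print("check %d: %s (new: %s)" % (i + 1, sorted(addresses), new or "none"))
            seen.update(addresses)
            time.sleep(interval)
        print("distinct addresses observed: %d" % len(seen))

    watch_a_records("example.com")  # placeholder; substitute a domain under investigation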

Tuesday, after the domain used in the Barclays phish was shuttered by a Web domain registrar, the botnet switched domains and started sending mail to customers of Halifax, a division of the Bank of Scotland, Manky said. Like the first campaign, the second tried to dupe recipients out of their banking account usernames and passwords.

The Finnish security firm F-Secure Corp. connected one of the IP addresses used in the Halifax phish to domains previously used by the Storm botnet, including postcards-2008.com, one of several referenced in New Year's Day greeting spam that began appearing just after Christmas.

"Somebody is now using machines infected with and controlled by Storm to run phishing scams. We haven't seen this before," said Mikko Hypponen, F-Secure's chief research officer, in a blog post Wednesday. "But we've been expecting something along these lines."

Paul Ferguson, network architect with Trend Micro Inc., echoed Hypponen in a warning of his own on Wednesday. "We can only suspect that perhaps a portion of the Storm botnet is being rented out to phishers," said Ferguson.

But Joe Stewart, a senior security researcher at SecureWorks Inc., and an expert on Storm, wasn't so sure. Through a spokeswoman, Stewart said that he had seen no hard evidence of the botnet being leased to phishers. In October, Stewart said the Trojan had added encryption to its command and control traffic, and speculated that the move was one way the hackers could partition the army of zombie PCs in preparation for renting pieces to other criminals.

Stewart said he had not found any additional encryption keys used by Storm, the kind of evidence that would indicate such a split had occurred.

Storm's first anniversary is rapidly approaching; the Trojan was first identified on Jan. 17, 2007, as the malicious payload in a large spam run that used news of severe weather battering Europe as bait to get people to open a file attachment.