Does Facebook really lead to divorce?

Facebook is popping up in divorce cases as all those pokes, instant messages and high school flames catch up with folks with cheatin’ eyes, according to one rather unscientific story.
The Telegraph in the UK reports that Facebook is fueling divorce claims:
One law firm, which specialises in divorce, claimed almost one in five petitions they processed cited Facebook.
Mark Keenan, Managing Director of Divorce-Online, said: "I had heard from my staff that there were a lot of people saying they had found out things about their partners on Facebook, and I decided to see how prevalent it was. I was really surprised to see 20 per cent of all the petitions containing references to Facebook.
“The most common reason seemed to be people having inappropriate sexual chats with people they were not supposed to.”
Read more of "Does Facebook really lead to divorce?" at ZDNet.com.
READ MORE - Does Facebook really lead to divorce?

White House names Howard Schmidt as cyber tsar

The White House has appointed Howard Schmidt as its new cybersecurity chief.
Schmidt, a former Microsoft and eBay executive who is president of the Information Security Forum (ISF), was introduced as the new cybersecurity chief in a White House blog post on Tuesday.
"The president has chosen Howard Schmidt to be the White House cybersecurity co-ordinator," said John Brennan, assistant to the president for homeland security and counterterrorism. "Howard will have the important responsibility of orchestrating the many important cybersecurity activities across the government."
Schmidt has over 40 years' experience in government, business and law enforcement, and was a cybersecurity advisor to the Bush administration. He will be a member of Barack Obama's national security staff, and will work closely with the president's economics team, Brennan said.
Various cybersecurity experts on Tuesday offered congratulations to Schmidt. Marcus Sachs, director of the SANS Institute, said in a blog post that Schmidt will do well in the position.
Read more of "White House names Howard Schmidt as cyber tsar" at ZDNet UK.
READ MORE - White House names Howard Schmidt as cyber tsar

Indian services to hit US$12.8B by 2013

India's IT services market is set to double over a five-year period, between 2008 and 2013, to hit US$12.8 billion, according to a new research note released Wednesday.
According to Springboard Research, the country's domestic IT services industry is expected to grow at a CAGR (compound annual growth rate) of 18.6 percent from 2008 to 2013, driven primarily by India's infrastructure services segment, worth US$7.2 billion, which will account for 53 percent of the overall market.
Within infrastructure services, hosting is marked for the highest growth rate over the predicted period, followed by enterprise outsourcing, network integration and network management.
The report also highlighted the application services segment as the fastest-growing, at a CAGR of 19.6 percent. IT consulting services will account for the smallest share at 5 percent, growing at a CAGR of 16.4 percent.
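For readers who want to check the arithmetic, the CAGR figures compound as follows. The sketch below is illustrative only: the 18.6 percent rate and the US$12.8 billion 2013 figure come from the report, while the implied 2008 base is a back-calculation, not a Springboard number.

```python
# Sanity-checking the reported figures; the implied 2008 base is a
# back-calculation, not a number from the Springboard report.
final_2013 = 12.8e9  # forecast market size for 2013, in US$
cagr = 0.186         # reported compound annual growth rate
years = 5            # 2008 to 2013

# CAGR compounds as: final = initial * (1 + cagr) ** years
implied_2008_base = final_2013 / (1 + cagr) ** years
growth_multiple = (1 + cagr) ** years

print(f"Implied 2008 base: US${implied_2008_base / 1e9:.1f}B")   # ~US$5.5B
print(f"Growth multiple over five years: {growth_multiple:.2f}x")  # ~2.35x
```

At 18.6 percent compounded over five years, the market grows about 2.35 times, so "double" is, if anything, a conservative reading of the forecast.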
Sudip Saha, Springboard's senior research analyst for services, noted in the report: "[India's overall IT services market is] on par with international levels in terms of average gross margin, and provides immense opportunity to the vendors."
Saha added that vendors should focus on raising efficiency through reusable tools, templates and replicable models, in order to meet customer expectations.
Top verticals to be served by the country's services market are the banking, financial services and insurance (BFSI) sector, followed by the public sector and telecoms.
Phil Hassey, vice president of services research at Springboard, added in the report: "With industries such as the public sector, healthcare, energy and utilities, and transportation and logistics stepping up their IT spending, the appeal for the Indian domestic market has increased tremendously and is drawing the attention of domestic and multinational IT service providers."
According to a recent Ovum report, fellow Asian powerhouse China may not be able to catch up with India in the IT services industry in the near future.
The lack of big Chinese IT services players to rival the likes of India's Tata Consultancy Services or Infosys would see the country's sector dominated by Western and Indian outsourcing vendors, Ovum said.
READ MORE - Indian services to hit US$12.8B by 2013

Death by ITIL: How IT departments streamline themselves into oblivion

When it comes to your IT shop, don't put frameworks and methodologies ahead of objectives, an expert advises.

You may or may not know magpies. They are rather large birds from the crow family, in pretty black and white suits, who exhibit an irresistible attraction to small shiny objects such as spoons, foil and small mirrors. Most of the objects procured by magpies end up stashed at the bottom of their nests, neglected after the initial fit of fascination.
It strikes me that many if not most IT departments fall victim to the same kind of fascination when it comes to the various frameworks and methodologies du jour: CMM and CMMI, PMBoK or Prince 2, ITIL and COBiT, Agile (I cannot bring myself to list its gazillion flavors), TQM and Lean, Six Sigma, RUP, and all that crackle and pop ad infinitum.
If you have been in this profession long enough, you know that every few years a new fad diet comes along.
Please don't get me wrong: every IT or management methodology has at least some value to it. The Toyota Production System is behind one of the most efficient car manufacturers in the world. Agile, when used correctly in the right environment, helps create a product when the "big upfront design" approach is ineffective or impossible. Key PMBoK or Prince 2 concepts are a must for any project management professional.
What I do find troubling is that IT management nearly always disregards the fact that methodology is secondary to objectives. If only they could ask themselves the simple question "What are we trying to achieve?" more often! They could achieve results more expeditiously, while avoiding unnecessary and time-consuming undertakings.
When the only tool you have is a hammer, everything starts to look like a nail. When the only answer one has to any IT management question is "ITIL", you can be sure that it's not going to end well.
A couple of years ago, one IT leader proudly told me about his department's great success with ITIL. When I asked for an example, he said: "Well, for instance, we used to close tickets without asking the user whether they can be closed. Now we always ask and users appreciate that."
If you are in charge of IT operations and cannot figure this out on your own without ITIL, you are probably in the wrong line of work.
How to paint yourself into a corner using best practices
Whenever I am engaged by an organization to advance its IT department, I look at the current state with the following two variables in mind:
  1. Operational capabilities. How do internal clients rate IT service? Are outages common? Are people knowledgeable in their respective disciplines? In other words, if this were a standalone company, would it be known for rendering good service?
  2. Strategic awareness. Does the CIO appear engaged in the corporate strategy on par with other C-level colleagues? Is the IT department seen as a valuable asset, an inseparable vital organ of the corporate body? Do IT management and staff understand the business their organization is in? Does IT innovate incessantly, propelling the organization forward?
This approach is similar to the application of Gartner's Magic Quadrant, except that Gartner uses it to look at whole industry sectors, while I apply it internally to IT departments.
Here are the four states I usually find organizations to be in, depending on the behavior of these two variables:
1. Morass (Ops -, Strategy -)
The quality of IT service is below par. Outages are common. Business often finds itself in a situation where the technology is seen as a limiting factor. Project management is haphazard and the rate of project failure is high.
The IT department views itself as a support function, akin to facilities management. There is often a strong "them vs. us" sentiment among the IT staffers in reference to the "rest of the business".
This state was a common occurrence until outsourcing became the norm. If you are a new CIO entering a department like this, be warned (as you likely have been!) that you don't have decades to turn things around.
2. Growing pains (Ops -, Strategy +)
IT services are unpredictable. Outages may be common. Operations may be haphazard, with key tools missing or jury-rigged. There is a sense that "too many things are on the go".
At the same time, the CIO is one of the key people within the organization. IT managers have a very good understanding of the core business. IT comes up with solutions that wow their business colleagues. There is a lineup of future projects and noteworthy ideas on a whiteboard.
This state is usually transient and is typical of startups or organizations that have undergone major surgery.
3. Reliable service provider (Ops +, Strategy -)
The department is seen as a reliable provider of IT services, no more, no less.
4. Vital asset (Ops +, Strategy +)
Excellent in what they do operationally, IT staff and management see themselves (and are seen the same way from outside the IT department) as a major catalyst in propelling the company forward. Innovation is the norm, and it is not mindless tinkering but a quest guided by excellent knowledge of the industry, keen business sense and an understanding of business priorities.
The CIO is one of the most respected executives within the organization. He or she reports to the CEO and is never looked at as a senior "propeller head" but as a wise decision maker, a strategist and a businessperson.
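The two variables and four states above amount to a simple truth table. For the programmatically inclined, here is a toy sketch of the classification; it is an illustration of the framework, not code from the author:

```python
# Toy encoding of the two-variable assessment described above
# (an illustration of the framework, not the author's code).
def classify(ops_ok: bool, strategy_ok: bool) -> str:
    """Map the two assessment variables to one of the four states."""
    if ops_ok and strategy_ok:
        return "Vital asset"
    if ops_ok:
        return "Reliable service provider"
    if strategy_ok:
        return "Growing pains"
    return "Morass"

print(classify(ops_ok=True, strategy_ok=False))  # Reliable service provider
```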
IT departments that become infatuated with ITIL and pour enormous resources into aligning with it will achieve, at best, the third state: reliable service provider. That is often seen as the best outcome one could hope for. It is not.
These IT departments will find themselves rather more expensive than before, with new staff--IT bureaucrats--hired to monitor and enforce compliance with procedures.
At the same time, they will have established an almost arm's-length relationship with the business, having documented the services they render and the SLAs that come with them, much like a third-party vendor. Their service, even if it is excellent, is now a commodity.
On top of that, they will have lost their flexibility and agility to numerous documentation steps, signoffs and approvals--even though these exist with the good intention of protecting the integrity of vital systems.
In today's economic environment, where responsible fiscal management (often, ruthless cost cutting) is a must, the only question that can pop into the head of a CEO in this scenario is "Can I get a comparable service somewhere else for less?"
They can and they do. Having painted themselves into a corner by following a "state of the art" methodology, many IT departments stand a good chance of becoming history.
On the other hand, those IT departments that exhibit both operational excellence and strategic awareness find themselves all but immune to outsourcing, because they are not merely service providers; they are an indispensable part of the organization's economic engine.
What kind of IT department are you running today? What is your vision for your organization's future? How are you going to get there?
I have recently co-authored a white paper that outlines the vision and ideas that will help you turn your department into a vital asset. You can download "Transformation or Travails: The imperative for IT's shift from support function to strategic asset" by clicking this link.
READ MORE - Death by ITIL: How IT departments streamline themselves into oblivion

Decaf loses sting in Cofee attack

A controversial tool that claims to render useless a cyber-forensics program used by law enforcement agencies remains a hot topic on the Internet, even after being removed by its creators.
A pair of hackers reportedly developed Detect and Eliminate Computer Assisted Forensics (Decaf), touting it as able to detect Microsoft's Computer Online Forensic Evidence Extractor (Cofee) and obstruct the program by, among other things, locking down a machine, according to an article on the Dark Reading Web site on Dec. 14.
Distributed by Interpol, Cofee is used by law enforcement agencies worldwide, including the Hong Kong Police Force. The software was officially made available last October and, according to various reports, its code was leaked on the Internet in November.
Decaf since removed
Since the public announcement of Decaf, its developers have professed the move was a publicity stunt to "raise awareness for security and the need for better forensics tools". The software download on its Web site was also removed.
In a YouTube video posted on Dec. 16, a spokesperson for the Decaf creators maintained they were "not hackers". The duo simply wanted to put a stop to Cofee, as other "pretty good" forensics tools were overlooked.
"We saw Microsoft release Cofee and it got leaked, and we checked it out just like any kid's first day at the fair, where you walk up and you see that cotton candy machine," he said. "And it smells so good and you see it's all fluffed up…you get up there and you bite into it--there's nothing in your mouth. That's the same thing we did with Cofee."
An update released Dec. 22 on the Decaf Web site further asserted that Decaf does indeed exist. "It did what it was set out to do and did it well," said the creators, adding that they "did not remove the tool because of Microsoft".
Richard Boscovich of Microsoft's Digital Crimes Unit noted in an e-mail interview with ZDNet Asia that, despite the emergence of Decaf, Cofee remains relevant to forensics work.
"Not only do we believe the current Cofee technology still holds great value for law enforcement use in the field, we will of course also continue to work with our industry and academic partners to evolve and update tools like Cofee, to meet the needs of law enforcement over time," Boscovich said.
Pointing out "there are far more effective and responsible ways of advancing forensics", he added: "Feedback and constructive criticism are the cornerstones of any development process, and we remain committed to working with experts in this area in the responsible manner required to effectively address digital crime."
READ MORE - Decaf loses sting in Cofee attack

Psystar closes up shop...or does it?

Last one out at Psystar, turn off the lights.
After being ordered by a federal judge on Tuesday to stop selling its Mac clones, Psystar was reported to be going out of business, according to a Dow Jones Newswires story last Thursday. Psystar's Web site was also inaccessible late last week.
Psystar attorney Eugene Action told Dow Jones that founder and President Rudy Pedraza will be "shutting things down immediately," and that all eight employees will be let go. However, Computerworld reported last Friday that another attorney for Psystar, K.A.D. Camara, of the Houston, Texas, firm Camara & Sibley, said the company is not shutting down, that Action was "misquoted," and that Psystar "does not intend to shut down permanently."
The Florida-based company was found to be infringing on Apple's copyrighted Mac OS software in a November ruling by the U.S. District Court for the Northern District of California.
Last Tuesday, U.S. District Judge William Alsup granted Apple's request for a permanent injunction against Psystar's sales of its Open Computer. In that ruling, the judge wrote: "Defendant must bring its conduct into compliance with the injunction by midnight on Dec. 31, 2009, at the latest...Defendant must immediately begin this process, and take the quickest path to compliance; thus, if compliance can be achieved within one hour after this order is filed, defendant shall reasonably see it done."
READ MORE - Psystar closes up shop...or does it?

Nothing 'to worry about' in customized search

Google's recent move to extend personalized search to all users is no cause for concern over user privacy, the search giant has maintained. Now an analyst has backed up the claim, noting that the move is really a "money-making" endeavor.
Personalized search was previously offered only to Google account holders who were logged in and had Web History enabled, according to a blog post by Google software engineer Bryan Horling and product manager Matthew Kulick. On Dec. 4, the pair explained Google's plan to provide the option to users without requiring them to be signed in.
"This addition enables us to customize search results for [users] based upon 180 days of search activity linked to an anonymous cookie in [their] browser," Horling and Kulick wrote in the post.
The service is currently offered on an opt-out rather than opt-in basis, which a Google spokesperson said was aimed at reducing "the barrier to receiving customized search results".
In an e-mail interview with ZDNet Asia, the spokesperson explained: "From our personal experiences, we often forget to sign in before searching and we wanted to help improve search for these users."
However, he denied that the move would cross the user privacy line, adding that Google "offers control" for both signed-in and signed-out users to opt out of the service if they wish to.
Chris Perrine, COO and executive vice president of Springboard Research, also told ZDNet Asia that the move was not about invading privacy, but about generating more ad revenue for the company. "In this specific instance I don't feel there is anything to worry about, but one can sense a growing trend that consumers, businesses, and governments are starting to be more wary [of] Google," he noted in an e-mail.
Perrine added that in terms of "making money", the extension of the feature helps better equip Google to target more relevant ads for the different user groups.
Privacy advocates, on the other hand, are unhappy with Google's move. According to an article in The New York Times, using the Google search engine without logging in was a means of minimizing exposure to Google's data collection practices.
However, this was now no longer possible, Marc Rotenberg, executive director of the Electronic Privacy Information Center, said in the report. "The key point is that Google is now tracking users of search who have specifically chosen not to log in to a Google account. They are obliterating one of the few remaining privacy safeguards for Google services," he noted.
Rival search engine and browser operators ZDNet Asia spoke to claimed that users' choice and need for privacy come first for them.
Peter Joblin, Yahoo Southeast Asia's senior communications manager, said in an e-mail that the company believes that each user should decide individually how much information they want to share.
"We have not adopted a personalized search approach for non-logged-in Yahoo users," he said. "At Yahoo we believe in choice, and our search policies enable choice."
Joblin added that the company does not use personalized rankings for results from its algorithmic Web search engine. Instead, Yahoo recommends sites based on users' interests and on sites they have saved previously via its social bookmarking service, Delicious.
Microsoft, which launched its Bing search engine in May this year, said in an e-mail it remains "focused on developing innovative and personalized services for search while also protecting people's privacy".
A Microsoft spokesperson noted the company's search history function provides clear controls to manage history and lets the user know what information is collected and how it will be used. Furthermore, the company has a policy to "anonymize" search data after 18 months by permanently removing the entirety of the IP (Internet Protocol) address and other "cross session identifiers", including cookies, he said in an e-mail.
"We believe that partial approaches such as removing only IP addresses, or portions of an IP address, are less effective in protecting user privacy," added the spokesperson.
Striking a similar tone is Opera Software, creator of the Opera Web browser. Jan Standal, Opera's vice president of desktop products, said in an e-mail that any potential benefits of search results based on a user's browsing history "needs to be balanced with people's rights and needs for privacy and security" when browsing the Web. He added that protecting the privacy of its users is "one of the company's core values".
According to Standal, during a search process, the Opera Web browser would relay the user's search requests directly to third-party Web sites. "What is sent to the third-party site is only the information needed to perform the particular search query; no personal information is sent," he noted.
READ MORE - Nothing 'to worry about' in customized search

Abu Dhabi firm finalizes Chartered deal

Advanced Technology Investment Company (ATIC), owned by the government of Abu Dhabi, has finalized its acquisition of Singapore's Chartered Semiconductor Manufacturing.
In a statement, ATIC said the acquisition took effect Friday, following approval from Singapore's High Court. Chartered Semicon's shareholders had also given their blessing to the deal last month.
"With the approval process behind us, we will now focus our attention on combining two great companies, Chartered and Globalfoundries," ATIC CEO Ibrahim Ajami said in the statement. ATIC has the majority stake in Globalfoundries, the manufacturing arm of AMD (Advanced Micro Devices).
"Customers have told us they are excited about the combination of technology, talent and capacity this new entity will bring to the marketplace," he added. "We share their enthusiasm."
According to the statement, Chartered Semicon will be delisted from Nasdaq and the SGX (Singapore Exchange) on Dec. 28 and Dec. 29, respectively. Its American Depositary Share (ADS) program will also cease trading Friday.
According to a report released Thursday by research firm Gartner, worldwide semiconductor industry revenue will reach US$226 billion by year-end, a decline of 11.4 percent from 2008. AMD was ranked No. 9 among semiconductor vendors this year, with US$4.8 billion in revenue and a market share of 2.1 percent.
READ MORE - Abu Dhabi firm finalizes Chartered deal

Firefox, Adobe top buggiest software list

Firefox was the application that had the most reported vulnerabilities this year, while holes in Adobe software more than tripled from a year ago, according to statistics compiled by Qualys, a vulnerability management provider.

Qualys tallied 102 vulnerabilities that were found in Firefox this year, up from 90 last year. The numbers are based on running totals in the U.S. National Vulnerability Database.
However, the high number of Firefox vulnerabilities doesn't necessarily mean the Web browser actually has the most bugs; it just means it has the most reported holes. Because the software is open source, all holes are publicly disclosed, whereas proprietary software makers, like Adobe and Microsoft, typically only publicly disclose holes that were found by researchers outside the company, and not ones discovered internally, Qualys Chief Technology Officer Wolfgang Kandek said late on Wednesday.
Meanwhile, Adobe took the second-place spot from Microsoft this year. The number of vulnerabilities in Adobe programs rose from 14 last year to 45 this year, while those in Microsoft software dropped from 44 to 41, according to Qualys. Internet Explorer, Windows Media Player and Microsoft Office together had 30 vulnerabilities.
A shift in focus
The numbers illustrate the trend of attackers turning their focus away from operating systems and toward applications, Kandek said.
"Operating systems have become more stable and harder to attack and that's why attackers are migrating to applications, he said. "Adobe is a huge focus for attacks now, around 10 times more than Microsoft Office. However, other widely used targets like Internet Explorer and Firefox are still far from secure."
Research from F-Secure earlier this year provides further evidence that holes in Adobe applications are being targeted more than Microsoft apps. During the first three months of 2009, F-Secure discovered 663 targeted attack files, the most popular type being PDFs at nearly 50 percent, followed by Microsoft Word at nearly 40 percent, Excel at 7 percent, and PowerPoint at 4.5 percent.
That compares with 2008, when Word represented nearly 35 percent of all 1,968 targeted attacks, followed by Reader at more than 28 percent, Excel at nearly 20 percent, and PowerPoint at nearly 17 percent.
As a result, Adobe needs to respond the way Microsoft did in 2002 when it launched its Trustworthy Computing initiative, and make securing its software a company-wide priority, researchers say. F-Secure even recommended that people stop using Reader and use an alternative PDF reader.
Adobe has taken some action, announcing in May that it would release its security updates on a regular schedule, quarterly and coinciding with every third Microsoft Patch Tuesday.
Another study released this week focuses on which applications are the riskiest to users. Based on the most severe vulnerabilities in popular applications that run on Windows and which are not updated automatically, Firefox again tops the list, followed by Adobe Reader and Apple QuickTime, according to Bit9, a provider of application whitelisting technology.
The list of risky software compiled by Bit9 based on the National Vulnerability Database also includes Java, Flash Player, Safari, Shockwave, Acrobat, Opera, Real Player, and Trillian. Last year, the Bit9 list of the most risky apps included Skype, Yahoo IM, and AOL IM, but those three were not on this year's list.
Not included on the list are programs from Microsoft and Google because of the ability for users of their software to have patches installed automatically. Microsoft software can be automatically and centrally updated via the Microsoft Systems Management Server and Windows Server Update Services, and Google Chrome is automatically updated when users are on the Internet, Bit9 said.
The lists do not take into account the amount of time it takes for companies to release patches, particularly when there is an exploit in the wild. Bit9 noted that Microsoft Internet Explorer was given an "honorable mention" because of a zero-day vulnerability related to ActiveX that went unpatched for three weeks in July.
Microsoft isn't alone in taking longer than customers would like to fix holes. In March, Adobe released a patch for a zero-day vulnerability in Reader and Acrobat--about two weeks after it was disclosed to users and nearly two months after exploits had been discovered in the wild.
Adobe customers will have to wait about a month for a fix to the latest critical zero-day hole in Reader and Acrobat. The company announced on Wednesday it would not patch the vulnerability until its next scheduled quarterly security update release on January 12.
READ MORE - Firefox, Adobe top buggiest software list

Integrated inboxes wave of the future?

Some users see the new slew of integrated inboxes as limited to the workplace, but Web companies such as Google and the Mozilla Foundation see these platforms as an evolution of the humble e-mail inbox as we know it today.
In May this year, Google debuted its Google Wave project at its I/O developer conference: an integrated messaging platform that ties functions such as instant messaging and embedded applets into an e-mail-like interface.
Firefox maker Mozilla, too, announced an experimental "aggregator" called Raindrop, which aims to park messages from a plethora of media, such as Twitter, Facebook, instant messaging and Google Docs, into one "inbox".
But while public interest in these platforms has been high--Google noted in September it had received over a million requests from users to preview Wave--some users see such integrated platforms as limited to business collaboration.
Reene Ho, a Singapore-based marketing executive, said in an e-mail interview with ZDNet Asia that an integrated platform would be most suited to collaborative work, where multiple users can join a conversation thread for planning.
These platforms, she noted, offer increased functionality over the vanilla inbox via apps and plug-ins. "Maps and 'yes/no' gadgets can [help] make quick polls," she said.
Another Singapore-based user, Angeline Yeo, said her company was trialing Google Wave to share and edit documents.
In an e-mail interview, Yeo said she would be open to e-mail evolving into a Wave-like interface: "If e-mail can look and act like Google Wave it would help with keeping track of the latest e-mail [messages]. I don't have the habit of replying e-mail immediately, and sometimes the message that needs [attention] gets lost in the tsunami of e-mail that comes in after."
Ho noted, however, that the "live typing" feature in Google Wave would pose a privacy issue for enterprise users. The platform allows participants in a thread to view what others are typing as they type it; users joining the thread later can also view amendments made, via a playback function.
"With e-mail, I can type out a draft, look at it, review it, and if I don't like what I'm writing, or can't think of a good way to phrase things, I can chuck it in the drafts folder," she pointed out. "But with Wave's live typing, there's no way I can ensure my message goes out the way I want it to be.
"Anyone can just look at the playback and know exactly what I've typed. Not a good idea."
According to Ho, the learning curve involved may turn casual users off. On top of that, there are also inconveniences. For example, Ho's Google Wave account incorporates her Twitter and MSN messaging accounts as widgets, and she has to physically open up the Wave window to find out whether a new message is a tweet, an IM (instant message) or a Wave message.
She noted: "Personally, I don't think [platforms such as] Wave will replace e-mail anytime soon. It's too 'gadgety' to catch on like e-mail."
An inbox evolution
In an e-mail response to ZDNet Asia, a Google spokesperson said the creation of Wave was to update e-mail and instant messaging, which were "originally designed in the '60s to imitate analog formats--e-mail mimicked snail mail and instant messages mimicked phone calls".
The evolution of online communication, which has seen people hop onto blogs, wikis and collaborative documents, led Google to initiate Wave as a fresh starting point of communication from these "advances", said the spokesperson. "It's still early days, but we feel that the integration of different data formats...like photos, videos and maps...and interactive tools presents a new way for users to communicate and collaborate online," she explained.
David Ascher, CEO of Mozilla Messaging, the Foundation's subsidiary focused on the Thunderbird e-mail client, said in an e-mail to ZDNet Asia that the integrated inbox is not just for the power user, but for the generation that has grown up on a staple of social networking platforms.
"Today's power users aren't just the BlackBerry-addicted business types of the past," he noted. "These days, even young consumers have multiple e-mail accounts, twitter and facebook accounts, and more. With the rise of social networking and microblogging, lots of types of users feels the need for identifying what's relevant or urgent, or even just struggle with finding messages that they know they saw, but can't remember where."
Ascher said openness will pave the way to more integration between messaging platforms online. "It will really depend on whether the companies that produce products are willing and able to recognize that their users use services that go beyond their own offerings.
"The integrated inboxes that will likely be most successful are those that realize users don't limit their contacts to specific social networks," he pointed out.
On the eventuality of a Raindrop-type interface finding its way into the enterprise, Ascher said the integrated movement will be consumer-driven.
"I think the consumer market is leading the charge here, because people are communicating in a huge variety of new ways, and enterprises tend to be slower to adopt new technologies," he said, noting that worker demand will help move the way enterprises choose their IT products, "even if [these] were not on the CIO's plan" initially.
READ MORE - Integrated inboxes wave of the future?

Symantec confirms zero-day Acrobat, Reader attack

Symantec on Tuesday confirmed a vulnerability in Adobe Acrobat and Reader and said it was being exploited by a Trojan hidden in e-mail attachments.
The malicious Adobe Acrobat PDF file is distributed via an e-mail attachment that "drops and executes when opened on a fully patched system with either Adobe Acrobat or Reader installed," Symantec said in a statement.
Symantec identified the file as Trojan Pidief.H, which targets Windows 95, 98, Me, NT, 2000, XP, Vista and Server 2003.
The rate of infection is extremely limited and the risk assessment level is very low, according to Symantec.
The exploit has been in the wild since at least last Friday, according to the Shadow Server blog.
"Several tests have confirmed this is a 0-day vulnerability affecting several versions of Adobe Acrobat [Reader] to include the most recent versions of 8.x and 9.x. We have not tested on 7.x, but it may also be vulnerable," the post says. "We did not discover this vulnerability but have received multiple reports of this issue and have examined multiple different copies of malicious PDFs that exploit this issue. This is legit and is very bad."
The vulnerability is in a JavaScript function within Adobe Acrobat Reader itself, the Shadow Server post says, before advising users to disable JavaScript.
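For Windows users who want to follow that advice, JavaScript can be switched off in the Reader preferences (Edit > Preferences > JavaScript). The registry sketch below automates the same setting, but the key path and value name are assumptions based on common reports of Reader 9's registry layout, so verify them against Adobe's documentation before relying on them.

```python
# Hedged sketch: disable JavaScript in Adobe Reader via the Windows registry.
# The key path and value name are ASSUMPTIONS based on common reports of
# Reader 9's registry layout; verify against Adobe's documentation.
import winreg

KEY_PATH = r"Software\Adobe\Acrobat Reader\9.0\JSPrefs"  # assumed path

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    # 0 disables JavaScript; 1 re-enables it (assumed value name).
    winreg.SetValueEx(key, "bEnableJS", 0, winreg.REG_DWORD, 0)
```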
Adobe posted a security advisory late on Tuesday saying that it had confirmed a critical vulnerability in Adobe Reader and Acrobat 9.2 and earlier versions that could crash the system and allow an attacker to take control of the computer.
Affected software is Reader 9.2 and earlier for Windows, Macintosh, and Unix, and Acrobat 9.2 and earlier for Windows and Macintosh, Adobe said. The company recommended disabling JavaScript to protect the system.
Adobe had said on Monday night that it was investigating reports of a vulnerability in Adobe Reader and Acrobat 9.2 and earlier versions being exploited in the wild.
Adobe has increasingly had to deal with holes in, and exploits targeting, its popular software. Adobe issued updates in October that fixed nearly 30 holes in Reader and Acrobat 9.2. Earlier that month, Trend Micro reported on a zero-day exploit targeting 9.1.3 and earlier versions of Adobe Reader and Acrobat.
In July, Adobe warned of attacks in which malicious PDF files were exploiting a vulnerability in Flash. And in April a new Reader hole emerged after Adobe fixed a two-month-old critical vulnerability in Adobe Reader 9 and Acrobat 9.
READ MORE - Symantec confirms zero-day Acrobat, Reader attack

Adobe to patch zero-day Reader, Acrobat hole

Adobe on January 12 will patch a critical hole in Reader and Acrobat that is being exploited in attacks. That date is the company's next scheduled quarterly security update release.
The zero-day hole, which affects Reader and Acrobat versions 9.2 and earlier, could crash the system and allow an attacker to take control of the computer.

Malicious Adobe Acrobat PDF files are distributed via an e-mail attachment that, when opened, executes a Trojan that targets Windows systems, according to Symantec. The rate of infection is extremely limited and the risk assessment level is very low, the company said.
Adobe decided to issue the patch in cycle, about four weeks away, rather than work on an earlier patch release, because an out-of-cycle patch would take between two and three weeks to deliver and would put the regular quarterly update off schedule, the company said in a blog post.
"The team determined that by putting additional resources over the holidays towards the engineering and testing work required to ship a high confidence fix for this issue with low risk of introducing any new problems, they could deliver the fix as part of the quarterly update on January 12, 2010," Adobe's Brad Arkin wrote.
In the meantime, customers can use a new JavaScript Blacklist mitigation feature that allows for easy disabling of JavaScript, Arkin said.
"Additionally, an informal poll we conducted indicated that most of the organizations we talked with were in favor of [releasing the patch in cycle] to better align with their schedules," he wrote.
Meanwhile, Webroot analyzed the payload of the malware and found that it installs three files that look like Windows system files that are digitally signed with a forged Microsoft certificate. Unlike legitimate Microsoft-signed certificates, these lack an e-mail address and a time stamp, the company said in a blog post.
"Authors of Trojan horse apps rarely go to the trouble of digitally signing files in this way," writes Webroot researcher Andrew Brandt. "It's not clear why they would be digitally signing files, but clearly the person or people behind this are up to no good."
READ MORE - Adobe to patch zero-day Reader, Acrobat hole

Microsoft top lawyer: EU deal opens new chapter

Perhaps the next time Brad Smith heads to Brussels, it will be for a vacation.
After years of wrangling with Microsoft, the European Commission announced an accord with the software giant Wednesday on several fronts that seems poised to put an end to its antitrust concerns with Redmond.
In the wake of the announcement, I spoke to Smith, Microsoft's general counsel, about the decision, what it means for the future of Windows, and whether the company sees its spot on the antitrust hot seat now being taken up by other companies, including Google.
Here's an edited transcript of our conversation:
Q: Is this really it as far as Europe is concerned?
Smith: This is definitely a major milestone for Microsoft. Today's announcement reflects a broad set of agreements that really address a wide array of issues. At the same time, we obviously need to keep our eye on the ball. Antitrust issues will continue to be important for us, just as they are going to continue to be important for a number of other leaders in our industry. We're going to have to do an excellent job implementing these agreements. We are going to have to do an excellent job addressing any new issues that arise in the future. Having said all that, I also think it is fair to say, as Commissioner [Neelie] Kroes did when she spoke in Brussels, this does represent the closing of one chapter and gives us the opportunity to open a new chapter. We're definitely enthused about that opportunity and we're committed to ensuring the next chapter is a positive and constructive one.
One of the things that Steve Ballmer talks a lot about in terms of antitrust issues is getting legal clarity on what one can and can't do. Do you feel like you now have that understanding with the EU?
Smith: I think this gives us a great deal more clarity. I think it gives the industry as a whole more clarity. It's perhaps most helpful in the area of interoperability because it really implements a new framework. It applies to a broad array of Microsoft products--Windows, Windows Server, Exchange, SharePoint--and for all of these products it has certain principles that we have to adhere to. It addresses the way we implement file formats.
At the same time, no advance on any single day can ever answer all questions for all companies for all time.
Essentially the EU has said through its very objections that you can't put a media player in Windows and you can't put a browser in Windows. What do you feel Microsoft can include in future versions?
Smith: There are two things to think about. First is what gets included in Windows, and second, what's the right way to address something that is included.
Our basic approach is to include in Windows, software that has APIs (application programming interfaces) that will be beneficial for other applications to call on and use. The browser is definitely an example of that. It's quite probably even more important in that role today than it was, say, when the browser issues first arose in the 1990s. The media player plays a similar role in terms of some broad APIs that are used by a wide variety of other applications.
There are other things that we have put in Windows in the past that don't necessarily involve the same role. A good example of that is Windows Live Messenger. We had Windows Messenger in Windows XP. It's not in Windows Vista or Windows 7. We're trying to make thoughtful decisions about what is included.
Then the second question that arises is how do things get included. How do we document APIs that our browser is using so that other browsers can use them as well? That's part of the U.S. consent decree.
How do we ensure that [computer makers] have flexibility to offer competing choices? How do we ensure that consumers are aware of competing choices and can use them if they wish? That latter part is an area where different governments have chosen different approaches at different times. The U.S. Department of Justice chose one approach in its consent decree. The Korean Fair Trade Commission chose a second approach. The European Commission in the media player case in 2004 chose a third approach. Today's announcement on the browser reflects the European Commission choosing a fourth approach.
Some people have the opinion that as a result of these different antitrust issues, Microsoft really finds itself with one hand tied behind its back as it competes in the battles of today. Do you believe Microsoft in the current antitrust environment competes on an even footing with some of the other Internet giants?
Smith: I do believe it is very important for all technology leaders in our industry to follow the same laws and obey the same rules. The rules don't necessarily apply in the same way when a company has a small market share as it does when a company has a large market share. But there are a number of companies that have large market shares for very important products. We've taken a number of steps to get into line with new legal rules in this field. The law has evolved and we've needed to evolve to address these new obligations.
We do believe our competitors need to play by the same rules. They've often been at the forefront of asking regulators to evolve the law in new directions. Now that the regulators have done so, we believe they need to pay attention as well.
Do you anticipate a period of time over the next few years where Microsoft is more likely to be the subject of antitrust inquiries or the company on the other side of the table for a change?
Smith: I think that we have addressed a very wide array of issues, perhaps in part because we were the first company to have to go through these inquiries, at least since the dawn of the PC era. We've probably had to go farther and sooner than other companies have had to do. We're now in an era where a different company seems to be in the headlines for competition law issues, if not every day, at least every month.
I think that what we are going to see in the next decade is this field of law being applied to a wide number of technology leaders that have high market share. We're going to see that, not only in Washington and Brussels, but we're likely to see that in more countries around the world simply because the global economy has evolved.
Have you expressed concerns specifically to Europe or Washington, D.C., about some of Google's behaviors?
Smith: We were very transparent last year when Google entered into its agreement with Yahoo. We felt that that was an illegal agreement that Google had entered into for the sole purpose of preventing Microsoft from becoming a more successful competitor, together with Yahoo, in the search space.
It was only when the Department of Justice informed the parties that it was on the verge of filing suit that Google decided to drop that agreement. We have not been shy about raising concerns when we have them.
It was only a couple hours after you guys settled with Brussels that we heard from D.C. with regards to Intel. When you initially heard that the FTC was filing suit against Intel, did you have feelings of empathy toward what their lawyers are going through, or what were your initial reactions?
Smith: I obviously know from a lot of firsthand experience the challenges that arise when a company needs to address these kinds of issues. Our road was a long one and it had its share of difficult moments. Antitrust issues are never easy for a company to address.
This isn't a case where Microsoft has taken a public stance or even voiced to the regulators a position, is it?
Smith: We have not taken any public or nonpublic positions on the issues.
Are you guys looking to reach an agreement with Plurk? You guys said that you used code you shouldn't have. I'm curious if you are trying to negotiate some sort of settlement with them?
Smith: I wouldn't want to say anything that goes beyond the public statement we put out.
It does seem when I look at any particular issue with regards to the Internet, Microsoft tends to have a much more cautious approach. It seems like it is tough to compete when others are bundling more than you.
Smith: I think our goal is to be thoughtful but also fast-moving. As we look at the Internet today, it is increasingly a regulated space. That wasn't the case a decade ago. I think a thoughtful company needs to really think through how its products and services are going to comply with the regulations that are going to be enforced or likely to be applied in many different countries around the world. At the same time, one cannot let that get in the way of moving forward quickly. I think it's striking that balance that is really quite important. One needs to move fast. One shouldn't move faster than speed of thought and yet one shouldn't be so thoughtful that one simply analyzes problems and fails to solve them.
Do you think Microsoft has erred a little too much on side of caution in recent years?
Smith: I don't know that we've erred too much on the side of caution, but I do think it's extremely important we move quickly. This is a very dynamic space and it is certain to remain a very dynamic space. Customers are interested in deploying new products and services, whether it is on the client, on the server, or in the cloud. The real key is to develop the capability to be both thoughtful and fast moving.
READ MORE - Microsoft top lawyer: EU deal opens new chapter

Businesses get a Second Life

The spotlight on Second Life as a marketing tool has dimmed with the emergence of social media tools like Facebook and Twitter, although the virtual world is increasingly used by enterprises internally for virtual meetings and events, says its developer, Linden Lab.
Second Life saw its hype peak two years ago as users flocked to the platform. According to reports at the time, however, businesses setting up virtual shop there did not profit from their online ventures, nor were they expected to in the immediate years to come.
Some companies today are putting more emphasis on social media platforms like Facebook--which reached 300 million users in September--and Twitter.
Greg Fisher, head of advertising and marketing services at Intel Technology Asia, told ZDNet Asia in an e-mail interview that Intel, which launched a Second Life marketing campaign in 2007, is increasingly leveraging "more traditional" social media channels such as Facebook, Orkut and Twitter to reach mainstream consumers in the Asia-Pacific region.
IDC Asia-Pacific's market analyst for digital marketplace and new media, Debbie Swee, thinks Second Life was not able to capture a mass audience because of the learning curve it presented.
"When Second Life was launched, it was considered way ahead of its time... It would have thus required [people] to be fairly advanced users of the Internet, or even possess some technical know-how," she explained.
"Now that a larger majority of Internet users have become savvier, Second Life has pretty much lost its allure. Coupled with the bad press about 'griefing' and sexual harassment in the virtual world, it will take some time for Second Life to pick up adoption again," she said.
Swee added that Second Life as a virtual world operates on a vastly different dimension, compared to Facebook and Twitter which are online applications that facilitate social activity on the Internet.
On whether Facebook and Twitter might see interest surrounding them dip as well, Swee replied that "some attrition may occur after people get bored of these tools" but she believes that "social network sites are here to stay, whether it is Facebook, Friendster or an upcoming brand in the future".
Second Life complementing social media
Linden Lab, Second Life's creator, does not see Facebook and Twitter as competitors but as complements to its virtual world platform. Chris Collins, general manager for enterprise at Linden Lab, told ZDNet Asia in an e-mail interview that organizations can "tweet" their location in Second Life for followers to meet them, or share photos of their in-world experiences on Facebook.
Collins added: "Like Facebook and Twitter, Second Life is a social tool. What makes Second Life unique is the immersive and rich experience it offers as a tool for communication, collaboration and more."
"Whereas platforms like Facebook and Twitter are often used for one-to-many communication and direct customer outreach, Second Life is increasingly being used today as an internal enterprise tool for collaboration, prototyping, meetings and conferences, and training," he said.
Collins said that more than 1,400 organizations including universities and Fortune 500 companies are set up in Second Life. He added: "While a broad range of companies are currently using Second Life, the virtual world is particularly well-suited for organizations with a distributed workforce that needs to meet and collaborate, and organizations that need to rapidly and inexpensively prototype or train workers."
To that end, Linden Lab in November launched the beta of Second Life Enterprise, a version of Second Life that can be hosted behind an organization's firewall.
"Second Life Enterprise provides all of the benefits of working in Second Life--communication tools, resources, content creation and collaboration--with the added benefits of enhanced privacy and centralized administrative controls," said Collins, adding that 14 organizations are currently using Second Life Enterprise.
IBM is one of the organizations that has adopted Second Life Enterprise. Neil Katz, a distinguished engineer in IBM's Office of the CIO, told ZDNet Asia in an e-mail interview that the company holds internal business meetings "nearly every day in Second Life", and uses it to work with clients as well.
IBM, which has had a virtual Business Center set up in Second Life since 2007, uses different social media channels to address different audience segments, said Katz. "One is not a replacement for another and, in fact, they can complement each other depending on the application."
He said IBM's Second Life audience includes its "technical, non-technical, and services oriented population, spread across multiple countries and geographies".
"In today's world, where there is a need to focus on customer needs and minimize internal travel, there remains a strong need for employees to continue to collaborate, communicate, and meet even when face-to-face meetings are impossible," he said, adding that Second Life helps to provide "an immersive, 3D, real-time experience that is impossible by other means, and very valuable when it is impossible to have a face-to-face meeting".
READ MORE - Businesses get a Second Life

Analyst: 'Classic' feature phone will survive

The "classic" feature phone form factor will persist, in spite of dominant feature phone maker Nokia's decision to focus on the smartphone segment, according to an analyst.
Facing pressure in the smartphone arena from the likes of Apple and Research In Motion, the Finnish phone giant announced last month plans to consolidate its handset lineup and focus on smartphones. Nokia is the veritable king of the low-end phone segment, and has over the last few years relied on emerging-market sales to boost its overall handset share.
John Strand, CEO of Strand Consult, thinks Nokia's shift in focus to the smartphone segment is indicative of how the lines between smartphones and feature phones have blurred.
In an e-mail interview with ZDNet Asia, Strand said one example of the blurring lines is phone makers moving their smartphone OSes onto their mid-range handsets. In Nokia's case, it is placing its high-end Series 60 Symbian platform onto mid-range Series 40 devices while it revamps its high-end range with Maemo, he said. The move, he pointed out, merely represents a "technology shift"--the increasing capability of mobile phones--and not an attempt to abolish low-end phones.
Strand added that manufacturers are also expected to churn out an increasing number of smartphones, due to the falling cost of producing these devices. However, he noted that the upward trend in smartphone production has been largely due to cost efficiencies in technology, not because consumers are specifically demanding smartphone devices.
According to him, "only a few customers deliberately purchase smartphones" while most want a "reasonably-priced" phone that is capable of several "smart" features. Whether a customer ends up with a smartphone or a feature phone was a matter of "coincidence", he added.
As a result, the "classic" feature phone will survive, as there exist consumers who prefer that form factor.
Strand Consult estimates that entry-level devices make up the bulk, or 45 percent, of Nokia's sales. Its mid-range Series 40 devices account for 40 percent of total units shipped, while smartphones contribute the remaining 15 percent.
Nokia, in contrast, told ZDNet Asia that it expects the smartphone market to grow faster from next year, and eventually overshadow feature phones. In response to an e-mail query from ZDNet Asia, a Nokia spokesperson said the company intends to take its smartphones to the mass market at lower price points.
"We expect that in 2010, volume for the overall mobile device market will increase about 10 percent year over year," she said. " And we expect the smartphone segment will grow much faster."
And while feature phones may survive, they could eventually be eclipsed by smartphones, the spokesperson indicated. "People can do more with smartphones which offer advanced capabilities and PC-like functionality, and this is definitely more attractive to consumers compared to feature phones.
"As we drive our smartphones into new markets [at] more accessible price points to [present greater value to customers], we would be able to see it [become] the choice phone form factor."
READ MORE - Analyst: 'Classic' feature phone will survive

Project management: Escaping the vortex

Have you been on a project where everywhere you look a process or procedure or design is broken? Here's how to deal.

I attended a conference several months ago where the CTO of a small development company was adamant about one thing: making excellence a standard should be the focus of getting teams to work together. This is not a new concept, but it is one any successful organization embraces.
Around the same time, I ran across Robert Martin's "Whiners that Fail" post at objectmentor.com. In it, he addresses those who complain about having to pay for things like white papers, and how such people expect employers to shell out cash to make their personal lives better. His point? "YOU are responsible for YOUR career and no one else."
I strongly agree with that statement, but not just because I realize my boss or company is not my mother. This mindset extends to how we deal with our work lives, especially when confronted with managing a problem project with difficult people, systems and requirements. How we take responsibility for our own lives/work/projects makes all the difference.
Making excellence a standard has to be central to getting teams to work together. And part of that excellence model? There is no room in it for whining.
A somewhat recent example jumps to mind. A project I'll call Vortex does a bunch of stuff: It retrieves some records, manipulates them, massages them, then makes them reportable (allegedly). There are several environments at play, including flat files generated from a mainframe, a monolithic database, some half-working Java-based Windows apps, and some database engine processing. It's slow, it has jobs that run jobs, which may or may not run, it has entirely too many moving pieces, and it is a mess overall.
The point is this: Even the very nature of its architecture is suspect, but it's way too late to even entertain gutting the overall project and rewriting it. While it can be shown we spend thousands of dollars on it, the millions it would take to recreate it from scratch mean we're pretty much stuck with it. Most large companies have at least one of these.
Of course, there are way too many voices in the conversation when it comes to making changes to it, and in determining how those changes should be made. The agreed-upon process is regularly overruled by political demands. Roles are unclear or dismissed altogether, process is conditional, and the support requirements are incredible. The ongoing support alone is a medieval gauntlet: those who survive may not have been eaten by lions, but they would probably prefer it.
It becomes very easy to complain about a project like this. It almost encourages a bonding chant, where everyone involved fully agrees on only one thing: It sucks. Have you been on a project like this, where everywhere you look a process or procedure or design is broken? A project like this truly has the potential to become the thing that ushers us all into Armageddon--or at least makes us wish for it.
If you are (un)fortunate enough to manage a "Vortex"-type project, you can bet everyone involved already knows how bad the project is. The very last thing you can allow is the introduction of complaining into the equation. As a manager, you must not tolerate it. All whining does is choke the team and encourage an attitude of negativity and blame all around. Encouraging a bad attitude is the exact opposite of pulling a problem project out of the miry clay it has lived in until now.
Here are some steps to start shifting the mindset on a "Vortex" project into one of excellence:
  1. As the blessed manager, make the decision to stop the negativity in its tracks, beginning with you. You hold the keys to raising the bar, so you need to check yourself before you take the position of checking anyone else's behavior.
  2. Regardless of your role, make the decision to be solutions-minded. That means for each of the items you find wrong, think of possible solutions to fix them. Don't just complain; think of ways to fix what you're complaining about.
  3. Take the solutions mindset into every meeting or discussion you have with anyone attached to the project. As a manager, you are responsible for establishing the atmosphere of the meeting. If needed, draft some 'rules of engagement' spelling out what will and will not be tolerated.
  4. If a role identity crisis has occurred, clarify the roles and who fills them. This can take some time, but it must be done so there are no misunderstandings about who should do what. Chances are upper management must make this clear to everyone involved to ensure it sticks.
  5. Hold a separate meeting or brainstorming session to identify and consolidate the issues causing the torment. It is a safe bet everyone has their own list of what is wrong. You may need more than one session to hammer out all of the details.
  6. Draft an attack plan with your team to deal with each of the items recorded in the above meeting or meetings. You are not "the hero" here. Your team must help create the plan and buy into it. Period.
  7. Execute the plan. Execution will take some time to do well and will likely face resistance, even after the pretty plan is drafted. Stick to the plan and revisit it often, tweaking it when necessary. It should be recognized this will be painful, but not any more painful than the grip the Vortex already has on you and your team.
Ultimately, it is YOUR responsibility to set the atmosphere and direction for your team. Do it already. Otherwise, you will only have yourself to blame for its continual failure. Just like you are ultimately responsible for your own career and professional growth, you are ultimately responsible for your (project's) success. Foster and fan the flames of an attitude of excellence.
If you're the one whining or allowing a spirit of complaint to dominate your project, you are taking energy away from the one thing that can fix a bad project: your team. Spend that ill-spent energy getting into small groups with your team to knock out an attack plan, then focus on conquering the enemy--disorganization, undefined issues, stakeholder strangleholds--one battle at a time. If you realize that you and your team hold the keys to escaping the Vortex, you may just do it.
READ MORE - Project management: Escaping the vortex

Indian SMBs up hosted apps spend

Small and midsize businesses (SMBs) in India will have spent US$45 million on hosted applications by the end of 2009, according to a study by AMI-Partners.
In particular, midsize businesses (MBs)--defined as companies with 100-999 employees--will account for 63 percent of the country's total SMB expenditure on software-as-a-service, the analyst firm said Tuesday in a statement. Small businesses, it added, are also increasingly adopting basic hosted applications.
Sumeeta Misra, Bangalore-based research associate at AMI-Partners, noted in the statement that the current economic environment has led cost-conscious SMBs in India to spend less on technology and IT infrastructure. As a result, companies are favoring a hosted model over customized or package-based software.
AMI-Partners attributed the popularity of hosted applications to the lack of upfront investment and "little or no allocation" of time or resources for software deployment and maintenance.
"During the current scenario, it makes more sense for companies to host IT applications with a third-party than managing all of them in-house. The total cost of ownership is decreased further [with the reduction in] hardware and support costs," said Misra.
According to AMI-Partners, the number of Indian businesses adopting software-as-a-service has on the whole grown 8 percent this year and the trend is expected to continue. It expects the country's SMB spending on hosted customer relationship management (CRM) this year to register a 10-percent increase over 2008, while hosted enterprise resource planning (ERP) and supply chain management (SCM) will grow at a "marginally lower rate".
Expenditure on hosted VoIP (voice over Internet Protocol) will also grow at more than 5 percent year-on-year, AMI-Partners predicted, as the Indian SMB market is increasingly attracted to on-demand collaboration applications.
Interest in cloud computing, the analyst firm added, is also on the rise in India, as the technology enables customers to pay only for the computing resources they use. AMI-Partners noted that many new products are being launched on the cloud, allowing Indian organizations to use a wide range of services, including development and off-premise hosting of managed applications.
READ MORE - Indian SMBs up hosted apps spend

Install MAP to see if servers are ready for Windows Server 2008 R2

With Windows Server 2008 R2's release, you may want to upgrade existing servers to the new version of Windows Server.

If your organization is considering an upgrade to Windows Server 2008 R2 on existing hardware, now is a good time to see whether your servers are ready for the new server operating system.
Microsoft's solution in this space is the Microsoft Assessment and Planning (MAP) Toolkit. This free tool is used for a number of Microsoft solutions, including Hyper-V virtualization, Windows Vista readiness, and Microsoft Office applications. MAP assesses an inventory of existing systems for their readiness for new roles on Windows Server 2008 R2.
Installing MAP is straightforward, except that several prerequisites may need to be installed first: the .NET Framework 3.5 SP1, Microsoft Office, and a recent version of Windows Installer. These components are not needed for Windows Server 2008 R2 tasks, but they are required for the MAP tool to install.
Installing MAP also sets up a local SQL Server database. The database installation and the prerequisites are likely the largest drawbacks of MAP. Figure A shows the MAP installation.
Figure A

Once MAP is installed, the first step is to run the Inventory and Assessment wizard; this will discover computers by scanning an IP address range, searching an Active Directory domain, or importing a list of server names from a text file (a sketch for preparing such a list appears below). Figure B shows the first screen of the Inventory and Assessment wizard.
Figure B

Once you complete the Inventory and Assessment wizard, it launches the discovery scan and searches for the systems you selected to assess. Depending on the size of the networks or Active Directory domains, the scan can take some time. Figure C shows the result of the scan.
Figure C
At this point, you can perform various consolidation and analysis functions, including Windows Server 2008 R2 readiness. Hyper-V consolidation recommendations can also be made with an additional performance-metrics data collection run.
For most situations, MAP is worth a try, if only to catch the obvious omissions that may be overlooked as part of an upgrade project.
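For the text-file import option mentioned earlier, the wizard needs a list of server names. Below is a minimal, hypothetical sketch of one way to generate such a list with Python--it assumes MAP accepts one hostname per line, and it is not a tool that ships with the toolkit:

# build_server_list.py -- hypothetical helper, not part of the MAP Toolkit.
# Expands an IPv4 range, reverse-resolves each address, and writes any
# hostnames found to a text file for the wizard's import option
# (assuming MAP accepts one server name per line).
import ipaddress
import socket

def hostnames_in_range(cidr):
    """Yield hostnames for addresses in the range that have PTR records."""
    for ip in ipaddress.ip_network(cidr).hosts():
        try:
            name, _, _ = socket.gethostbyaddr(str(ip))
            yield name
        except (socket.herror, socket.gaierror):
            continue  # no reverse DNS entry; skip this address

if __name__ == "__main__":
    with open("servers.txt", "w") as out:
        for host in hostnames_in_range("192.168.1.0/28"):
            out.write(host + "\n")

The CIDR range and output filename here are placeholders; substitute your own server subnet before pointing the wizard at the resulting file.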
READ MORE - Install MAP to see if servers are ready for Windows Server 2008 R2

Is it game over for Microsoft on consumer front?

With Microsoft's Windows Mobile unit having run in slow motion for the past several years, it doesn't surprise me that there are calls for the company to get out of the phone business.
The more interesting question raised in a New York Times blog post, to me, is whether Microsoft flat out just doesn't get the consumer.
That, to me, is a much broader issue for Microsoft, given that more and more parts of computing--even enterprise software--are taking their lead from consumer trends: think Facebook, Gmail and Twitter.
Microsoft certainly has its challenges on this front, and no business illustrates those challenges more clearly than the phone business, where Microsoft has squandered an early position in smartphones and now faces a massive task to catch up to Apple, Research In Motion, and even upstart Google, which has not been at the game nearly as long.
Luckily for Redmond, I don't think the problem is that it doesn't get the consumer at all. Products like Surface, Windows 7 and Zune HD show that Microsoft is thinking about the consumer experience and does have some sense of what appeals to the average user.
So it's not that Microsoft totally doesn't get the consumer. Rather, I would argue, the consumer it understands best is the nerd, as opposed to the mainstream user. That's why, from my way of thinking, its products tend to start as niche products for gearheads and work their way toward the average consumer.
And Microsoft's nerd focus isn't always a bad thing, particularly in the enterprise, where it is nerds who tend to be making the decisions. Windows has fared pretty well against the Mac, although the PC's lower cost also has a hand in that.
What's happened on the mobile side, though, shows that the focus of power is shifting. It's only a matter of time before similar trends more deeply affect the corporate desktop, whether it is e-mail, collaboration, or social networking.
The consumer business also represents a huge opportunity on its own for Microsoft. When it comes to connected entertainment, for example, Microsoft has what should be a big advantage. Because of its size and breadth, Microsoft's software powers multiple living room devices (Xbox, Windows Media Center, and Mediaroom digital TV) as well as devices that deliver media to phones, cars, and other portable devices.
And of course, a huge part of the battle has moved off of the PC or any single device and onto the Internet. That explains Microsoft's huge investment in Bing, but also its other online moves, including offering Office via the browser, and projects like Live Mesh that aim to bring together our myriad gadgets.
There is still a huge win to be had for the first company that lets people access their media seamlessly in all these places. The best experience right now, I would argue, is taking your iPod or iPhone with you into all of these different locations. That's a good experience, but not as good as being able to buy content once and have it automatically show up, on demand, in all of these places.
The company has shown glimmers of hope in some areas, though clearly more change needs to happen. Its new retail stores, though similar to Apple's, show Microsoft knows how to highlight its coolest side. There are more products coming out with memorable names like Silverlight, and fewer with mouthfuls like Windows XP 64-bit Edition for 64-bit Extended Systems.
There are pockets of understanding, particularly in the entertainment unit, which is developing things like the eminently cool Project Natal. But, then, as The New York Times blog post points out, there is the Windows Mobile unit where it seems the phone has been ringing for years and Microsoft has yet to answer the call.
READ MORE - Is it game over for Microsoft on consumer front?

Microsoft investigating charges it stole rival's code

Microsoft said on Monday afternoon that it is investigating allegations that a recently launched microblogging site in China lifts the code and interface of a rival start-up's service.
"Microsoft takes intellectual property seriously, and we are currently investigating these allegations," company spokesman Mark Murray said in a statement. "It may take some time due to the time zone differences with Beijing."
Earlier on Monday, Canada's Plurk went public with charges that Microsoft's Juku service "rips off" the look and feel of its microblogging service and also appears to use more than 80 percent of the same code, all without permission.
"Imitation may be the sincerest form of flattery, but blatant theft of code, design, and [user interface] UI elements is just not cool, especially when the infringing party is the biggest software company in the world," Plurk said on its blog.
Plurk said it is still evaluating what to do in the case.
"We're not entirely sure but we are exploring our options," Plurk said. "We have been seeking advice from respected colleagues, responding to press inquiries and gathering facts on the timeline of events and parties involved here to understand why and how this took place."
It's the second time in recent weeks that Microsoft has been accused of lifting others' work in its products. Last month, the company was forced to pull down a tool for loading Windows 7 onto Netbooks after allegations that the product improperly included open-source code. Microsoft later apologized, and last week it re-released the tool under the GPL open-source license.
READ MORE - Microsoft investigating charges it stole rival's code

Web accessibility no longer an afterthought

Yahoo's Victor Tsaran knows how much time Web designers spend agonizing over color and font-width choices when laying out an application. So when he started Yahoo's accessibility push two years ago, he had little sympathy for engineers grousing about how much extra time was needed to create accessibility features.
Fortunately for Tsaran, Yahoo's accessibility manager, he's running into that problem less and less. Web designers are starting to take accessibility as seriously as button placement or heading layout when they develop their products, improving the Web experience not only for people like Tsaran--who lost his sight at the age of five--but for Web users in general.
"We're seeing a lot more awareness and involvement in Web accessibility than we did a few years ago, particularly among big companies," said Judy Brewer, director of the Web Accessibility Initiative (WAI) at the World Wide Web Consortium. "It's becoming a solid business expectation that Web sites need to meet the needs of all users."
At the two biggest Internet companies in the world, accessibility is seen as an increasingly important part of the business. Yahoo requires every new hire to receive accessibility training from Tsaran and Alan Brightman, senior policy director of special communities, and it books engineering teams for tours of its Accessibility Lab.
Google recently rolled out a service that will let YouTube users add captions to their videos, and believes that as the Web moves more from an era of presentation to an era of two-way "data-driven" communication, accessibility becomes even more important, said Jonas Klink, accessibility program manager.
Web accessibility has come a long way in the decade since many of these proposals were first floated. It's still a challenge, however, for the Web community to remember that as it pushes forward with exciting new technologies like HTML5 that could reinvent the Internet experience, it must keep in mind the needs of those who can't type 60 words per minute, operate a mouse like a scalpel, or see the unobtrusive pop-up windows that point to the next destination on the page.
"As the Web gets more and more dynamic, the accessibility requirements get more and more interesting, and sometimes challenging, to implement," Brewer said.
The challenges
There are about 60 million people in the U.S. who can't use a computer to get on the Internet in the normal fashion, said Yahoo's Brightman. For those people, a mix of screen reader software, keyboards with special buttons, and even motion-sensing Web cameras must take the place of the mouse and QWERTY keyboard.
That can cause problems for Web designers who rely too heavily on mouse navigation, or who design pages with special multimedia whiz-bang effects that look cool only to the people that can see them. "There can be an assumption of homogeneity on the Web," said Naomi Bilodeau, technical program manager for Google.
Users of screen readers--software that essentially reads out loud a description of text, links, and buttons on a page--are confounded the most by Captchas and Flash Web pages, according to a recent survey of screen-reader users conducted by WebAIM.
But simple things like photos or images can also create problems if the Web publisher doesn't add alt text to those photos, or relies primarily on images as a way of explaining what is happening on the page. And as Web designers push forward with JavaScript and AJAX-based technologies that overlay content on the primary Web page, there's great potential to confuse screen readers.
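Missing alt text is also one of the easiest problems to detect automatically. As a minimal, hypothetical sketch--not a tool used by anyone quoted in this story--the following Python script flags img tags that lack an alt attribute, using only the standard library:

# alt_audit.py -- hypothetical illustration, not a tool from Yahoo or Google.
# Flags <img> tags with no alt attribute; an empty alt="" is left alone,
# since it is the accepted way to mark an image as purely decorative.
from html.parser import HTMLParser
from urllib.request import urlopen

class AltTextAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            if "alt" not in attr_dict:
                self.missing.append(attr_dict.get("src", "<no src>"))

if __name__ == "__main__":
    # example.com is a placeholder; point this at a page you publish.
    page = urlopen("https://example.com").read().decode("utf-8", "replace")
    auditor = AltTextAuditor()
    auditor.feed(page)
    for src in auditor.missing:
        print("img missing alt text:", src)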
The good news is that most of these problems aren't as much technology issues as design issues; content created with things like Flash can be made accessible if designers start off with that principle in mind.
"There are a bunch of things (in Web design) that are not features," said Nicholas Zakas, principal front-end engineer for Yahoo's home page, meaning that while you can jazz up a page all you like with additional features, there are certain things that should be standard fare. "Performance is not a feature, internationalization is not a feature, and accessibility is not a feature."
However, features can make the Web more accessible. As mentioned, Google recently rolled out automatic captioning software for YouTube videos, making it much easier for deaf people to enjoy the world's largest collection of cute cat videos.
In all seriousness, the automatic captioning technology is being rolled out first on YouTube's Educational channel, allowing deaf or hearing-impaired people to take advantage of distance learning programs or other educational systems. It's most definitely a work in progress (check out this YouTube video of a lecture by a University of California at Berkeley professor by clicking on the "cc" tab, the left arrow, and then "Transcribe Audio"), but with refinement it could really add to the amount of knowledge that can be consumed by disabled people.
"I wanted this so badly (that) it's good enough, I don't care if there are some bad captions," said Google's Ken Harrenstien, a deaf software engineer who played an instrumental role in bringing the project to life.
The reasons
There are no explicit laws requiring companies to design Web sites to be accessible to the disabled, but many disability experts and Web companies believe that portions of the U.S. Americans with Disabilities Act of 1990 do apply to the Internet, despite having been written several years before the Web emerged as a mainstream phenomenon.
And in order to do business with the U.S. government, companies must comply with Section 508 of the Rehabilitation Act, which insists that electronic and information technology products sold to government agencies be designed with disabled employees in mind, and that government services produced by contractors consider disabled citizens in equal measure.
But these are businesses, after all: Yahoo's Brightman estimated that there's about $220 billion in discretionary spending available to disabled people. Making a Web site accessible to as many people as possible isn't just the right thing to do, it also makes business sense, he said.
Also, with a rapidly aging population in many parts of the world--notably the U.S.--accessibility requirements will become useful for today's crop of baby boomers as they grow older. People over 65 are increasing their use of the Internet, according to Nielsen, and features designed for accessibility could aid those who aren't technically disabled but wouldn't mind a little extra help.
The future
The immediate challenge for those working on Web accessibility is to ensure that accessibility standards are not trampled in the rush to finalize the HTML5 collection of standards that Google and other Web browser companies are currently debating. Brewer said it's "extremely important to be sure that HTML5 can support accessibility fully," and her group is working closely with the other parts of the W3C to realize that goal.
But beyond that goal, Web accessibility advocates have reason to feel optimistic about their cause. Long-awaited technologies like sophisticated speech recognition are finally coming to fruition after decades of joking about how such capabilities were just two years away. And 46 percent of respondents to the WebAIM survey reported that Web content has become more accessible in recent years.
"Anybody should be able to use anything on this page," said Yahoo's Zakas, keeper of the all-important Yahoo.com page. "If anybody can't use it, it shouldn't be there."
READ MORE - Web accessibility no longer an afterthought

APAC C-level execs warm up to green

The importance of green IT has increased in the Asia-Pacific region, with senior executives leading sustainability efforts, a new study has revealed.
In a telephone briefing Monday, Philip Carter, associate research director at IDC Asia-Pacific, noted that in the research firm's 3rd annual Green IT and Sustainability Survey, about 26 percent of respondents in some countries indicated that being green was "very important".
Conducted in August, the poll involved 450 organizations of varying sizes across the region.
Carter, who also heads up IDC's green IT and sustainability research in the region, added that senior management was also likely to lead the green charge. In Australia and Japan, for example, C-level executives including CEOs and CIOs were identified as responsible for green efforts and initiatives, while in China, the burden fell on IT management.
The study also shed light on three broad areas in which respondents indicated they would like help: measuring and monitoring energy consumption; planning and designing green IT projects, particularly developing metrics; and services associated with videoconferencing.
As for why more organizations are embracing green IT, the survey showed 60 percent of respondents in the region cited the cost of energy as the primary driving force. Carter also noted that the recycling of IT assets was found to lack accountability, whether conducted in-house or outsourced to a third party.
Asia key piece to carbon reduction puzzle
During the briefing, Carter reiterated that the region played a significant role in helping to reduce greenhouse gases within the next decade.
According to IDC's G20 ICT Sustainability Index, released last week at the 15th United Nations Climate Change Conference (COP 15) in Copenhagen, the region's economies are expected to play a big role in the reduction of carbon emissions worldwide. The research firm predicted the six Asia-Pacific markets in the G20--Australia, China, India, Indonesia, Japan and Korea--will contribute 41.4 percent, or 2.4 billion tons, of a targeted 5.8-billion-ton decrease brought about by the use of ICT-based products and services by 2020.
Japan, said Carter, has the highest potential amongst the G20 nations to lower carbon levels using 17 technologies highlighted by IDC, including smart grids and energy management systems.
China, on the other hand, has the potential to reduce about 1.4 billion tons of carbon emissions by 2020--a significant proportion for a single economy.
Carter added that going forward, policy makers in these countries need to invest "a significant amount of time" in providing incentives and "to a certain extent, regulation where possible, for the great[er] usage of ICT in their jurisdictions".
READ MORE - APAC C-level execs warm up to green

Larrabee pullout: GPU battle 'far from over'

Intel's decision to shutter what would have been its first discrete GPU (graphics processor unit) offers more breathing space for graphics market leaders Nvidia and AMD, but the battle is far from over, an analyst has pointed out.
On Monday, the chip giant announced it would not release a standalone graphics chip as its first Larrabee product, contrary to its earlier plans.
The chipmaker had performed a public demonstration of the Larrabee platform at the Intel Developer Forum in September, and followed up with another presentation at the SC09 conference last month--leading many to believe the company would deliver on its "2009 or 2010" timeframe for Larrabee.
According to In-Stat's chief technology strategist Jim McGregor, however, the latest development was not totally unexpected. "History in the electronics industry indicates that few new technologies meet their initial schedules and adoption of new technologies and methodologies takes two to three times longer than anticipated," he said in a research note this week.
McGregor noted that the decision to drop the standalone graphics chip was nevertheless "a major blow to Intel", and would allow AMD and Nvidia "more breathing room in the higher margin discrete GPU space". But in terms of integrated graphics, Intel still has an edge.
"[Intel's] plan was to enter the discrete GPU market on the high-end and [subsequently scale] the technology down to the integrated graphics solutions," he said. "Now, [it] will have to rely on the older graphics architecture for integrated solutions while it regroups and reinforces the Larrabee development efforts.
"This does put Intel at a disadvantage in term of graphics technology, but with at least a twelve-month lead on rival AMD [in terms of] introducing processors with integrated graphics, the upcoming Westmere processor generation should still provide Intel with a price-, power- and performance-competitive offering for the value and mainstream PC segments."
Tough task at hand
From the start, in developing Larrabee, Intel tried to create a graphics architecture "that was programmable just like a standard x86 processor", which required both a new hardware architecture and a new programming model. Both were significantly challenging tasks, McGregor noted.
Tom Halfhill, senior analyst for In-Stat's Microprocessor Report, concurred. "Larrabee was a potential threat to [AMD and Nvidia's] GPU businesses, but now it should be apparent that designing a state-of-the-art graphics processor is very hard, even for the world's biggest semiconductor company. Anyone who thought Intel would easily stomp AMD and Nvidia needs to rethink their position," he said.
To achieve its goals for graphics performance, Intel may have to compromise on x86 compatibility, he pointed out: "Intel is trying very hard to jam a square peg into a round hole. It may be possible, but obviously, it isn't easy."
Robert Sherbin, vice president of corporate communications at Nvidia, added in an e-mail: "The fact that a company with Intel's technical prowess and financial resources has struggled so hard to succeed with parallel computing shows just how exceptionally difficult a challenge this is."
Graphics efforts to continue
In-Stat's Halfhill noted, however, that Intel is not likely to give up on its GPU ambitions anytime soon.
"AMD is working on x86-compatible PC processors with integrated ATI graphics, so Intel will need a competing product," he said. "In addition, the growing market for general-purpose high-performance computing on GPUs (GPGPU) demands a response from Intel. And finally, even if Intel's graphics technology isn't competitive with the discrete GPUs from AMD and Nvidia, Intel could still adapt it for PC chipsets with integrated graphics."
McGregor pointed out that Intel could not afford to consider ending graphics development, given its growing focus on consumer electronics, as graphics "is a critical technology to all the major consumer and computing platforms". According to him, the battle in the graphics space is "far from over", given the rate of innovation in the industry.
On GPU handling of computing tasks, Nvidia's Sherbin elaborated: "GPU computing has surely reached the tipping point. CUDA (Compute Unified Device Architecture) has been adopted in a wide range of applications."
"In consumer applications, nearly every major consumer video application has been, or will soon be, accelerated by CUDA," he said, naming computational biology and chemistry, and fluid dynamics simulation, as examples of high-performance computing applications.
With the launch of Microsoft Windows 7 and Apple Snow Leopard, GPU computing also "went mainstream", said Sherbin. "In these new operating systems, the GPU was not only [a] graphics processor, but also a general-purpose parallel processor accessible to any application."
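To make the general-purpose part concrete: in the GPGPU model, the same data-parallel operation runs across thousands of GPU threads at once. Below is a minimal, hypothetical sketch of the idea in Python, using the numba library's CUDA support--an assumption made purely for illustration; CUDA itself is exposed through Nvidia's C/C++ toolkit, and this snippet is not code from any company quoted here:

# vector_add.py -- hypothetical GPGPU illustration, assuming an Nvidia GPU
# and the numba package are available.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global index of this GPU thread
    if i < out.size:          # guard against out-of-range threads
        out[i] = a[i] + b[i]  # each thread handles one element

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # launch on the GPU

assert np.allclose(out, a + b)

The same one-thread-per-element pattern, scaled up, is what underlies the video, biology, chemistry and fluid-dynamics workloads Sherbin describes.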
Responding to ZDNet Asia's queries, a Hong Kong-based Intel spokesperson reiterated that Larrabee silicon and software development are behind where the chipmaker wanted to be at this point. "Additional plans for discrete graphics products will be discussed some time in 2010," he said.
READ MORE - Larrabee pullout: GPU battle 'far from over'