Microsoft Worms its Way Into The Silicon Woodwork
"The objective is (1) to find the factors that limit the effectiveness of the individual's basic information-handling capabilities in meeting the various needs of society for problem solving in its most general sense; and (2) to develop new techniques, procedures, and systems that will better match these basic capabilities to the needs, problems, and progress of society."

Pop quiz: Was this Microsoft hyping its new .Net and how it was going to forever change the PC user's universe, or some Sun marketing maven extolling the manifold civic virtues of using Java Everywhere?
These prophetic, almost naively hopeful words are from a 1962 report prepared for the Headquarters of the US Air Force Office of Scientific Research. And none other than Douglas Engelbart of Stanford Research Institute (which eventually became SRI International) prepared this piece of now ancient computer history. Engelbart, among other innovations, is credited as being the father of the mouse-driven user interface. The title of his 1962 report to the USAF was "Augmenting Human Intellect: A Conceptual Framework."
And here we are, almost forty years later, and using computers is still a huge pain in the ass, regardless of whether it's a Macintosh, WinTel box, Linux system, Palm PDA, smart cell phone, or network appliance. The takeaway here is that when you try to more deeply accommodate a user's changing whims and needs in varying social contexts, current computer systems fall apart. Worse, no matter how disgusted and disgruntled you become with these devices and their all too often screwed-up ways, you still think you can't live without them.
As a consequence of their rigid explicitness, we have been forced to change our naturally human behaviors and now live in a world filled with ignorant machine tyrants. This oppressive digital regime is going to get worse thanks to always-on wireless connections. Your new body-part devices will soon surround you in a constantly heaving sea of beeping information and attention-draining intrusions. Welcome to the brave new world of Invasive Computing, Bunky. There is no longer a computer there, there; it is everywhere. But if things keep going at this inflexibly dumb rate, it will soon be time for a digital divorce. So where do you go after the break-up? Back to mother mainframe? Duh.
Anthropologists, sociologists, psychologists, and all the rest of the twentieth century's social science grab bag have long tried to nail down the complex issues concerning the dynamic interactions between people, culture, and their environment. In the 1960s, a new bunch of very bright people -- Douglas Engelbart among them -- joined the what-is-this-and-how-do-we-improve-it social order fracas. The difference was that they were intent on injecting computers into the already complex social and environmental equation. The highly interactive, all-encompassing systems/human environment promulgated by these techno-arrivistas came to be known as pervasive computing.
As with the social scientists, it's no surprise that we also find factional thinking dividing computer scientists as to the best approach to pervasive computing. Conveniently clear boundaries between the various factions can be hard to see, as things can get philosophically loose and technically porous. However, two generalized distinctions can be made about how to implement pervasive computing: one approach lies in creating smart spaces, the other in enabling ubiquitous computing.
The 1980s and the pioneering work at Xerox PARC (Palo Alto Research Center) marked the arrival of ubiquitous computing. Generally speaking, ubiquitous computing seeks to augment human intellect via embedded or otherwise unnoticed smart systems. Under PARC's research director, Mark Weiser, a highly talented group of people took ID badges, notebooks, pads, and a slew of other everyday "dumb" items and started transforming them into "smart" devices. Users continued their regular, natural interactions with their "stuff," yet were rewarded with the much richer experience intelligent devices can offer. The ubiquitous challenge started at PARC was how to use the already in-place physical attributes of an object, however prosaic, to enable an implicitly smart system.
For example, if you get in over your head or into an emergency situation in some new-model cars, their smart suspensions will attempt to keep you from spinning out of control, regardless of road conditions. But despite such computer-enabled sophistication, there is no explicit interaction between you and the myriad microprocessors in the car's smart suspension. You don't have to expressly command them to do anything. Their on-demand distributed smarts comprise an implicit, user-invisible system, and there is nothing new you have to learn about driving that car. Obviously, this is a very simple example of ubiquitous computing, and the "ubiquity" extends only to the confines of the vehicle.
One of the more extreme examples of ubiquitous computing can be found in the "Tangible Bits" work of Hiroshi Ishii and Brygg Ullmer of the Tangible Media Group at the MIT Media Lab. As envisioned by Ishii and Ullmer, Tangible Bits is an "attempt to bridge the gap between cyberspace and the physical environment by making digital information (bits) tangible." In their all-pervasive scenario, we will live surrounded by such things as "Interactive surfaces, whereby walls, desktops, ceilings, doors, windows, etc. become an active interface between the physical and virtual worlds." Moreover, there will be "seamless coupling of everyday graspable objects (e.g., cards, books, models) with the digital information that pertains to them."
And finally, "ambient media" such as sound, light, airflow, and water movement will serve as background interfaces, surrounding us with cyberspace at the periphery of human perception. In other words, the very air you breathe could also be linked into a pervasive computing system! Ishii and Ullmer state that "Ultimately, we are seeking ways to turn each state of physical matter - not only solid matter, but also liquids and gases - within everyday architectural spaces into 'interfaces' between people and digital information." This future-tense world is about as extreme an example of ubiquitous computing as you will likely find (wake up and jack in to your semi-real, semi-virtual life).
Smart spaces are the second major theme found in pervasive computing. Engelbart, as noted above, is famous for his efforts in promoting rodent-driven interfaces in the 1960s, which carbon-dates this interface methodology as a thoroughly dog-eared concept. But he is more highly regarded by those in the community for his seminal work on the bigger, broad-brush issues, and specifically his notion of "smart space," generally defined as the seamless integration of people, computation, and physical reality. In the smart space scenario, space is no longer an empty and passive construct; it is imbued with high intelligence. Fully realized, a smart space is an active or even proactive participant with the people interacting in its confines. Some smart space researchers are also focusing on linking people into AI systems, creating spaces far more intelligent than their human inhabitants.
In the 1970s the Architecture Machine Group at the Massachusetts Institute of Technology undertook their "Media Room" project. The goal of this early smart space group (arguably the digital DNA daddy of today's MIT Media Lab) was to understand how people interacted in room-sized spaces and then design new types of system interfaces that turned the entire room into a computational environment. To make their smart space work, new interfaces were developed for the MIT Media Room that used a combination of speech recognition, gesture input, and text and graphics. This was a multi-modal "atmospheric" interface that transcended the highly scripted mouse and rigorous small-screen GUI. However, despite such sophistication and design elegance, this MIT effort still used a highly explicit system interface. The smart environment was coldly clueless about a user's implicit needs. Typically, a smart room, even today, only reacts if spoken to, so to speak. A smart space is an explicit system, however indirectly we may interact with it.
Research is also going on in computer-mediated smart spaces. IBM has an R&D effort in this particular area (www.research.ibm.com/journal/sj/384/mark.html), and the company has also made a strategic commitment overall to pervasive computing (www-3.ibm.com/pvc/pervasive.shtml). In a mediated scenario, the space has a minimal or even no role to play. The focus is instead on how computers can enrich person-to-person interactions. The invisible computers and A/V sensors silently eavesdrop on our interpersonal communications and watch us. The system attempts to follow along with what we are doing and occasionally offers a helping hand (mentions relevant information, volunteers to do something for us, etc.) whenever it "thinks" it might be useful.
For example, a car's interior could be turned into a mediated smart space if, in addition to voice recognition, the car also "understood" the emotional tone of your voice, your facial or hand gestures, or how you were interacting with other passengers. Based on the observed interactions, the car might be able to differentiate whether the passengers were family or business associates and react accordingly to the different social contexts. If it were a business group riding in the vehicle and the car heard the word "lunch" mentioned, it might use its onboard GPS and restaurant database to make a spoken suggestion about an appropriate nearby place for a professional luncheon. On the other hand, if it were your kids in the car, it might tell you where the nearest McDonald's was. Obviously, much more complex and diverse systems could also be built, serving many different types of user space environments.
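Strip away the sensors and the GPS, and the mediated-car scenario reduces to a context-dispatch rule: the same overheard trigger word yields a different suggestion depending on who is riding along. The following is a deliberately toy sketch of just that rule; every class and method name here is invented for illustration and is not any vendor's API.

```java
// Toy sketch of a mediated smart space's context-dispatch logic.
// All names are hypothetical -- this is not a real automotive API.
public class CarMediator {

    // Given an inferred social context and an overheard trigger word,
    // return a suggestion appropriate to that context.
    static String suggest(String context, String overheardWord) {
        if (!"lunch".equals(overheardWord)) {
            return "";  // stay silent unless the trigger word is heard
        }
        if ("business".equals(context)) {
            return "Suggesting a nearby restaurant suitable for a business luncheon";
        }
        if ("family".equals(context)) {
            return "Directions to the nearest McDonald's";
        }
        return "";  // unknown context: better to say nothing than to guess
    }

    public static void main(String[] args) {
        System.out.println(suggest("business", "lunch"));
        System.out.println(suggest("family", "lunch"));
    }
}
```

The real research problem, of course, is inferring `context` from tone of voice and gesture; the dispatch itself is the trivial part.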
One MIT researcher currently focused on creating easy-to-implement smart spaces is Michael Coen, with his group's "Intelligent Room" project. As Coen puts it in his 1998 paper, "A Prototype Intelligent Environment," his "approach is to advocate minimal hardware modifications and 'decorations' (e.g., cameras and microphones) in ordinary spaces to enable the types of interactions in which we are interested." Coen makes it clear that his approach is very different from the ubiquitous computing model, which, for example, could have the room's inhabitants sitting on smart-sensor-equipped chairs and wearing smart ID badges.
In Coen's work, distributed, inexpensive sensors like cameras and microphones are tied into inexpensive real-time systems. This is very different from embedding computational smarts into everything the user may come in contact with. He argues that real-time systems and highly capable A/V sensors are getting ever more powerful and cheaper, and that his pervasive approach is much easier to broadly and quickly implement than relying on -- and waiting for -- ubiquitous computing (especially the variant proposed by Ishii and Ullmer) to fully enter our daily scene. However, being a graduate of the MIT AI Lab, Coen is not at all shy about using any and all of the tools and tricks of the AI trade developed over the past several decades to make his Intelligent Room as smart as possible.
Coen's most recent Intelligent Room project is "Hal," which uses his unique AI language creation known as "Metaglue." Some 100 Metaglue agents control Hal and interconnect its room components. Metaglue is a specialized language for building systems of interactive, distributed computations, which, Coen notes, "are at the heart of so many IEs (Intelligent Environments)." Metaglue is an extension to the Java programming language and provides "linguistic primitives that address the specific computational requirements of intelligent environments." In his 1999 paper, "Meeting the Computational Needs of Intelligent Environments: The Metaglue System," Coen lists the needs of an IE as follows:
Interconnect and manage large numbers of disparate hardware and software components;
Control assemblies of interacting software agents en masse;
Operate in real time;
Dynamically add and subtract components from a running system without interrupting its operation;
Change/upgrade components without taking down the system;
Control allocation of resources; and
Provide a means to capture persistent state information.
Coen says the invention of Metaglue was necessary because "traditional programming languages (such as C, Java, and Lisp) do not provide support for coping with these issues." His group hopes eventually to make Metaglue available for more widespread use by others.
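Several of the IE requirements above -- managing many disparate components, adding and subtracting them from a running system, swapping implementations without a restart -- amount to a dynamic agent registry. The sketch below is a minimal illustration of that idea in plain Java; it is emphatically not Metaglue itself (whose actual primitives are not detailed here), and every name in it is invented.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a dynamic agent registry for an intelligent
// environment. Hypothetical names throughout; not the Metaglue API.
interface RoomAgent {
    String handle(String request);
}

class AgentRegistry {
    private final Map<String, RoomAgent> agents = new ConcurrentHashMap<>();

    // Hot-plug a component into the running system (or replace one).
    void add(String name, RoomAgent agent) { agents.put(name, agent); }

    // Retire a component without taking the system down.
    void remove(String name) { agents.remove(name); }

    // Route a request to a named agent; degrade gracefully if absent.
    String dispatch(String name, String request) {
        RoomAgent a = agents.get(name);
        return (a == null) ? "unavailable: " + name : a.handle(request);
    }
}

public class IntelligentRoomSketch {
    public static void main(String[] args) {
        AgentRegistry room = new AgentRegistry();
        room.add("lights", req -> "lights turned " + req);
        System.out.println(room.dispatch("lights", "on"));
        room.remove("lights");                              // component goes away...
        System.out.println(room.dispatch("lights", "on"));  // ...system keeps running
    }
}
```

What this toy version omits is exactly what Coen's list demands and stock Java lacked: real-time guarantees, en-masse control of agent assemblies, resource allocation, and persistent state.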
Coen's IE requirement list reads like a Java Everywhere marketecture piece written by Scott McNealy. So what may be most interesting here is Coen's position on Java. If you follow Sun's marketing logic to its natural conclusion, then Java should end up being everywhere and thus running Intelligent Environments. But Coen is asserting that Java will not work in such environments without being stuffed with new -- and nonstandard -- extensions. Bottom line: Java could be sadly broken when used for pervasive computing.
Ultimately, pervasive computing may turn out to be all about distributed AI systems. In this scenario, Sun can proudly point to all the AI work being done with Java as pervasive proof positive. Unfortunately, a little deeper digging shows that Java may also flunk the grade when it comes to doing the required AI heavy lifting. For example, a paper entitled "Lisp as an Alternative to Java," prepared by Erann Gat at the Jet Propulsion Laboratory, California Institute of Technology, discusses a series of experiments that pitted Java against Lisp, the powerful patriarch of AI-oriented languages.
Gat's research found that "development time for the Lisp programs was significantly lower than the development time for the C, C++ and Java programs. It was also significantly less variable." Next, the researcher found that "Development time for Lisp ranged from a low of 2 hours to a high of 8.5, compared to a range of 3-25 hours for C/C++ and 4-63 hours for Java." In addition, "The Lisp programs were also significantly shorter than the C, C++ and Java programs." And there is more: "While execution times of the fastest C/C++ programs was faster than the fastest Lisp programs, the runtime performance of the Lisp programs in the aggregate was substantially better than C/C++ (and vastly better than Java)."
The researcher was left with this summary conundrum: "Our results beg two questions: 1) Why does Lisp seem to do as well as it does and 2) If these results are real why isn't Lisp used more than it is?" The answer to question two, of course, is that in today's coding climate, Lisp is not politically correct like Java. Nor does Lisp have a major marketing maven like Sun putting fervent religious power behind it. Gat closes out his paper with these comments: "Our results suggest that Lisp is superior to Java and comparable to C++ in terms of runtime, and superior to both in terms of programming effort, and variability of results. This last item is particularly significant as it translates directly into reduced risk for software development." This paper leaves the pervasive developer with an interesting coding choice: religion or results?
But rather than religion or politics, an area much more worthy of your pervasive attention is the law, and in particular, the Fourth Amendment in the Bill Of Rights of the U.S. Constitution, which states:
"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
In 1976, the Supreme Court ruled that personal data given to a third party loses its Fourth Amendment protection. Thus, the minute you put all your digital records, e-mail, calendars, preferences, etc. into somebody else's on-line hands, you automatically lose all Fourth Amendment safeguards. In the Internet era, the implications of this Supreme Court ruling concerning possible governmental abuse are mind-boggling. So much so that the U.S. Congress has discussed specifically extending Fourth Amendment protection to on-line/electronic use. To date, however, nothing has happened in Congress, and a large, uncertain cloud still surrounds the issue.
These Fourth Amendment protection issues obviously loom large over the future of pervasive computing -- especially if you are contemplating putting all of your life's data into .Net, Microsoft's hugely ambitious first foray into pervasive computing. As Gates sees his new all-invasive MS world, .Net will make pervasive computing much easier for everyone because, instead of your personal data meandering all over the place, it's all kept in a secure central location. The beneficent Redmond potentate will act as your custodian, doling out your appointment books, e-mail information, credit card information, purchasing patterns, etc., to only "trusted Microsoft partners."
Of course, centrally storing all that data with .Net means you can kiss your Fourth Amendment protection good-bye. All an overzealous government agency has to do is attach a fat pipe to the centralized .Net servers and instantly suck out everything on the personal lives of millions of users. And there is nothing unconstitutional about it. It positively makes the FBI's Carnivore system look like a wimpy vegetarian. The various legal food fights over the Fourth Amendment and its lawful extension into cyberspace will be interesting to watch.
"By posting messages, uploading files, inputting data, submitting any feedback or suggestions, or engaging in any other form of communication with or through the Passport Web Site ... you are granting Microsoft and its affiliated companies permission to:
1. Use, modify, copy, distribute, transmit, publicly display, publicly perform, reproduce, publish, sublicense, create derivative works from, transfer, or sell any such communication.
2. Sublicense to third parties the unrestricted right to exercise any of the foregoing rights granted with respect to the communication.
3. Publish your name in connection with any such communication."
This little legal ditty sat on Microsoft's web site for a couple of years before some sharp-eyed folks spotted it. When publicly confronted with its own words, Microsoft immediately issued a very public mea culpa, saying it was all just a simple oversight. But in truth, Microsoft is historically notorious for its Draconian contracts and legal intimidation, which, if anything, are getting even more heavy-handed as time goes by, e.g., Windows XP and its onerous licensing policies. If you don't register your new XP copy within a set period of time, your OS is illegal, even though you paid for it. Plus, all XP drivers, in order to be "certified," must be stored on central MS update servers. In sum, you no longer own a copy of your system; you rent it and pay ongoing service fees. The same legal use practice will no doubt be applied to .Net. As a consequence, Microsoft will be in a position to exact a toll on every transaction in your new pervasive computing life. Thus the new .Net question: whose life is it, anyway?
For example, Microsoft has already laid trademark claim for Hailstorm (the MS code name for its .Net distributed services) to such items as myCalendar, myDocuments, myContacts, myProfile, myWallet, and myNotifications. Obviously, the central .Net server repositories containing all this rich, highly sensitive personal user data will be the ultimate Fort Knox hacker challenge. Unfortunately, Microsoft's system security track record, even on its best days, is abysmal and does not exactly inspire confidence.
The complete architecture of Hailstorm and .Net is still difficult to discern, for in typical MS fashion they are a puzzle palace of ongoing marketecture construction. However, the Open Source community might take heart in the fact that .Net is built upon several open standards. HTTP is used for Internet data transport and remote application access. .Net also uses XML, which allows easy transformation of data between systems and applications. And .Net uses SOAP (Simple Object Access Protocol) for remote execution services.
For a while it looked like the Microsoft-championed SOAP and the Sun-backed ebXML were going to square off against each other. (IBM backed both approaches.) ebXML is a joint initiative between the United Nations (UN/CEFACT) and the standards body OASIS. ebXML, like SOAP, uses such well-established open standards as HTTP, TCP/IP, MIME, SMTP, FTP, UML, and XML. However, in February 2001, UN/CEFACT and OASIS announced efforts to integrate the SOAP 1.1 and SOAP with Attachments specifications into ebXML.
The ebXML Messaging Specification is potentially quite attractive when used in a pervasive computing context. ebXML encompasses a set of services and protocols that allow a client to request services from servers over any application-level transport protocol, including SMTP, HTTP, and others. ebXML defines a general-purpose message, with a header that supports multiple payloads, while allowing digital signatures within and among related messages. Although the header is XML, the body of the message may be XML, MIME, or virtually anything digital. The messaging infrastructure of ebXML will now be built on top of SOAP.
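The general shape just described -- an XML header carrying message metadata, wrapped around one or more payloads -- can be mocked up in a few lines. The following is a simplified, hand-assembled illustration of a SOAP-style envelope, not a conformant ebXML message: the `MessageId` header element and the payload content are placeholders, though the SOAP 1.1 envelope namespace is the real one.

```java
// Hand-assembled sketch of a SOAP-style envelope with an XML header
// and body. Illustrative only; a conformant ebXML message carries a
// richer eb:MessageHeader and may attach non-XML payloads via MIME.
public class EnvelopeSketch {

    static String envelope(String messageId, String payloadXml) {
        return ""
            + "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">\n"
            + "  <SOAP-ENV:Header>\n"
            + "    <MessageId>" + messageId + "</MessageId>\n"  // placeholder header element
            + "  </SOAP-ENV:Header>\n"
            + "  <SOAP-ENV:Body>\n"
            + "    " + payloadXml + "\n"
            + "  </SOAP-ENV:Body>\n"
            + "</SOAP-ENV:Envelope>\n";
    }

    public static void main(String[] args) {
        System.out.println(envelope("msg-001", "<Order><Item>widget</Item></Order>"));
    }
}
```

The point of the design is visible even in this toy: because the metadata lives in the header, routing and signing can be handled without the transport or the intermediaries ever needing to understand the payload.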
UDDI (Universal Description, Discovery, and Integration) is also part of .Net. UDDI is a DNS-like distributed Web directory that allows distributed services to discover each other, and it defines the rules of distributed engagement. Interestingly, Microsoft will also turn its UDDI over to UN/CEFACT and OASIS for incorporation into the ebXML specification after MS beta testing is complete, which will be in a year or so.
So far it appears that Microsoft is taking a truly open approach to pervasive computing and its new .Net. But as always with Redmond, one must peel back the curtain to see what's really afoot. All of these open standards are finding their way into a panoply of very proprietary Microsoft products. At last count, the .NET Server family of systems included (but is not necessarily required to implement .Net): Windows 2000 Datacenter Server; Microsoft Exchange 2000; Microsoft SharePoint Server; Microsoft Mobile Information Server 2001; Microsoft SQL Server 2000; Microsoft BizTalk Server 2000; Microsoft Commerce Server 2000; Microsoft Internet Security and Acceleration Server 2000; Microsoft Application Center 2000; and Microsoft Host Integration Server 2000. Whether all of this will truly work together in enterprise .Net unison, or be delivered as promised, is another story. Moreover, apart from SQL Server 2000 and Host Integration Server 2000, which run on Windows NT 4.0, all the rest of the .Net servers will only run on Windows 2000. You want your .Net? You must buy all-Microsoft. The fact that cross-platform XML, SOAP, etc. are used in the .NET Servers becomes moot.
And of course, there are still the Microsoft .Net developer tools to reckon with, such as the .NET Framework and Visual Studio.NET. The .NET Framework attempts to outdo Java -- which is not supported in .Net -- with its C# programming language and its portable runtime environment, the Common Language Runtime (CLR). Both C# and a subset of the CLR have been proposed by Microsoft to ECMA as open standards. In this regard, Microsoft can legitimately point out that Sun first promised and then hastily backed away from submitting Java as a true open standard. However, like Sun, Microsoft first giveth then taketh away, just in a different fashion. As it did with its .NET servers when it made them Windows platform specific, Microsoft has strongly tied the .NET Framework to Windows-only methods and procedures, and these coding dependencies run very deep in the MS operating system. In sum, there is open and interoperable pervasive computing, and then there is Microsoft's idea of what those words mean.
None of this comes as news to those who followed a very ugly episode on Slashdot a while back concerning Kerberos and Microsoft. Currently, MS Hotmail users use Passport to log into their e-mail accounts. But Passport also has a much larger role in the coming .Net scheme of things, as it will be used to authenticate users. To get to myWallet, you will need to get past Passport.
Hailstorm's Passport uses Kerberos, the open source user authentication system that began its life way back when at MIT's Project Athena. More specifically, this Microsoft system is known as the Microsoft Authorization Data Specification v. 1.0 for Microsoft Windows 2000 Operating Systems. A much-publicized firestorm blew up around this Microsoft Kerberos implementation back in the spring of 2000, when a well-informed Slashdot user (Michael Chaney) noted that "Microsoft states that their (Kerberos) specification is 'confidential information and a trade secret,'" and that "you must take reasonable security precautions... to keep the Specification confidential" (http://slashdot.org/features/00/05/16/1321225.shtml).
In brief, Microsoft took open source Kerberos code, modified it somewhat, and slapped a proprietary legal label on it. Given that the whole point of open source Kerberos is to ensure true interoperability, as well as to allow the open source community to continually scrutinize the code and improve its robustness, this does not make much sense -- unless, of course, you live in Redmond. Slashdot went on to publicly post the formerly secret details of how Redmond implemented Kerberos, details which were available only under this strict Microsoft license. Microsoft's lawyers then threatened legal action against Slashdot if it allowed such postings to continue. Slashdot willfully ignored them, and there it all ended, quite ignominiously for Redmond.
The way Microsoft sees it, Kerberos interoperability extends only to the authentication process, which is clearly defined in the specification. As a matter of course, a vendor is allowed to make proprietary extensions to undefined aspects of the Kerberos code, and as it happens, the guarantee of interoperability does not extend to the authorization process. It is in this technical area that Microsoft made its own changes and declared them off-limits. Thus, to gain authorization to distributed .Net services, you must purchase a Windows 2000 Server, even if you already own several other Kerberos systems from other vendors. What it all boils down to is that MS-Kerberos servers will not act as clients for true open source Kerberos servers (typically running Unix or Linux). But the latter can, however, be clients to MS-Kerberos servers. This, in the MS worldview, makes its new .Net Passport service "interoperable." Organizations with such multi-Kerberos environments must now duplicate and maintain essentially the same data in separate systems. As pervasive computing will absolutely rely on robust authentication/authorization services to be successful, this is not a trivial matter.
Via such onerous licensing policies and over-the-top legal intimidation, .Net might accomplish what Microsoft could never do on its own: carve up the Internet, and cyberspace in general, and lay legal claim to it -- as well as to your newly pervasive life.
On the other hand, it also appears that Sun's Java does not offer a complete and compelling platform to fully enable pervasive computing, either. As a result, someone or something totally unexpected could surprisingly upend both of these highly ambitious, control-freak vendors. And thus, Pervasive Computing might truly herald a new "PC" era. But Doug Engelbart knew that forty years ago.
Copyright 2001 Francis Vale, All Rights Reserved
21st Pub Date: November 2001
21st, The VXM Network, http://www.vxm.com