Category Archives: technology

Despite my skeptical blog post in 2017, I did go out and buy the first version of the Apple Watch. And the second version. And I became an Apple developer, learning a whole new development framework and a new language (Swift) and then, I wrote my first app for the Apple Watch. The Apple Watch quickly proved to be useful. My treasured self-winding watches sit in storage. The Apple Watch doesn’t really replace anything – you can get a basic watch or a health-tracker at a fraction of its price. Instead, it augments what other devices can do. I love being able to filter calls and read transcribed messages without having to find my phone.

When Apple released the new Vision Pro last month, I thought it looked kind of bulky, and adopted the nickname one of my kids gave it: “The Face Computer.” My wife had to convince me to go get a demo. I have seen my share of VR and AR demos including devices that were still in the prototype stage, and I did not think I needed to see another one. But having spent most of my career building 3D graphics technology, I couldn’t resist. 

Having fun with the demo

The demo was easy to arrange and the Apple Store demo folks were great as usual. Unfortunately, we couldn’t get eye tracking to work, spending over an hour on calibration attempts. So we had to proceed with the rest of the demo with the much more basic and somewhat awkward finger-pointing accessibility mode. But even without the eye tracking, it was clear that this is the most advanced visual computer ever made. The graphics are spectacularly clear, due to the incredibly high density of the OLED panels, and the fast refresh rate makes motion appear fluid and natural. But it is the immersive 3D content that is truly compelling – better than anything I’ve ever seen. The device is a technological tour-de-force – like a supercomputer you can wear on your head.

So did I buy one? No. Not because of the eye-tracking issue, or the high price. I can’t think of a practical application for this device in my life. Despite the amazing capabilities, I also can’t think of an app that I could write on my own that would make any sense for the experience. Vision Pro versions of my altimeter-barometer or weather station app for a mostly-stationary device? Probably not worth my time. The device shines with immersive content, which is expensive to produce and requires teams of people. There’s a reason there isn’t an immersive Vision Pro version of a full-length movie, let alone a TV series. Even with Apple’s resources, creating immersive experiences is daunting.

Perhaps one way to think about the Vision Pro is that it’s the immersive computing equivalent of Apple’s Lisa.  When launched in 1983, the Lisa was the first mass-market personal computer with a graphical user interface. It was a commercial failure, but it paved the way for the Macintosh that Apple shipped a year later. Vision Pro may not be a commercial mass-market success either. Instead, it establishes a certain inevitability of the direction for human-computer interaction. Apple’s Macintosh popularized the core concepts introduced with the Lisa, and I can’t wait to someday buy the Macintosh-equivalent of the Vision Pro.

Xbox founder's shipit

Xbox founder’s award

While working on DirectX, I became friends with Ted H who was responsible for DirectX developer evangelism. His team was on the front lines with the game developer community and provided information, education, and hands-on support. Because of the team’s close relationships with game developers, they had a good sense of what the community wanted, and helped guide the evolution and prioritization of DirectX functionality.

Ted’s office was in Building 4, and at the time, the cafeteria there had a rare resource – an external food vendor called Pasta Ya Gotcha. They were a welcome change from the generic on-site institutional food fare, and Ted and I started having regular lunch meetings to discuss the state of the technology world and where things were headed. For me, it was always a toss-up between the Texas Tijuana, Tennessee Jack, and Thai Peanut entrees.

Although we were just starting work on DirectX 8, Ted and I both could squint and see the day of diminishing returns with Windows PC computer graphics. We were starting to ask “What’s next?” We had made significant strides legitimizing the PC as an entertainment-capable platform, but entertainment was not a natural destination for the PC. The PCs on the market were big and bulky, and engineered for productivity.

We were also keenly aware of the ongoing battle for supremacy in the living room between Sony, Sega, and Nintendo, with Sony in rapid ascendency. Where was Microsoft? Was it going to get into this fight, and if so, how?

At the time, Microsoft had a partnership with Sega to ship Windows CE on Sega’s platform. Sega already had their own development tools, infrastructure, and content development pipeline, though, so CE brought nothing fundamentally new or compelling for game developers. Windows CE had a very limited implementation of DirectX, so there was little incentive for a Sega developer to use anything other than Sega’s own tools. Windows CE checked the box in terms of having a Microsoft living room platform strategy, but it offered neither better economics nor higher quality of content.

DirectX was part of Microsoft’s Windows division, and Ted and I felt that Microsoft was missing an opportunity to lead with its strengths. We had a vibrant developer ecosystem, operating systems leadership, great relationships with silicon vendors, and a name-brand game API. Why not bring DirectX, in the form of a platform, to the living room?

With a kernel of an idea, we needed recruits. Ted brought in Kevin B from his team to help think through business models, and I drafted Seamus B, a recent PM hire for DirectX, to help plan and coordinate.

A handful of intrepid and talented developers on my team signed up to build the proof-of-concept of a Windows/DirectX-based living room device. Colin M led the effort to construct a prototype that would prove the idea: that a PC-based architecture could boot up in a few seconds and play games without input from a mouse or keyboard.

This was an all-volunteer effort, and our managers were not yet aware of what we were up to. We all had our regular jobs to do, and we didn’t want to jeopardize progress by asking for official support. We chose to stay in stealth mode knowing we could be punished with the dreaded 3.0 annual review score – or worse. But we believed we were onto something important.

Word was getting around. As we made progress, people came on board to help, advocate, and advance the cause of what Ted had named “Project Midway”. I no longer recall when my manager became aware of the effort.

At a critical point, Nat B offered to help build support in the right executive circles and to make the pitch. He was the technical assistant to the guy running Microsoft’s consumer version of Windows. I remember talking to Nat on my chunky, red Nokia on a typically horrific commute from Redmond to Seattle across the 520 floating bridge. Among other things, he was eager to have a more marketable name for our effort. At its essence, what we were proposing was a DirectX device – a box – in the living room. A DirectX box. I think it was Nat who suggested “X-box” for short.

At this point, X-box wasn’t about Microsoft building its own hardware; hardware is capital-intensive and Microsoft’s expertise was software. We thought partners could build the hardware and Microsoft would build the software ecosystem. Getting into the console hardware business was not yet a consideration.

Somehow, we got a billg meeting. We were expecting an intimate gathering in the usual windowless conference room with twelve or so chairs. Colin wheeled in a small but sturdy cart with the prototype hardware. We passed out our printed PowerPoint slides and readied ourselves for the pitch as more and more people kept showing up. By the time we were underway, we found ourselves seated with two rows of people behind us, sitting, standing, leaning, and filling the space. Nat spoke. I went over the technical details. There was some other discussion. Colin pressed the power button. A few anxious seconds later, he was playing a game on a TV with a game controller in his hand. Bill was convinced. There was also a very nice PowerPoint presentation about a future version of Windows CE for consoles, but it didn’t stand a chance next to a real game running on our own technology stack.

An expensive agency was later hired to come up with a real product name to replace “X-box”. They removed the hyphen, and came back with “Xbox”.

This is part three; part one is OpenGL, part two is Early 3D hardware.

As the team continued to work on improving OpenGL’s capabilities, another graphics technology effort – DirectX – had also been moving forward in parallel. In this earlier era at Microsoft, it was not uncommon to have overlapping product efforts, or even efforts that competed directly against each other. The prevailing attitude about such cases was to let the market decide. DirectX was focused on enabling games on Microsoft’s Windows 95 consumer operating system. DirectX got its start as a 2D-based technology providing game developers direct access to the graphics card’s frame buffer. This was very useful functionality up to a certain point, but games like Doom and Quake made it clear that 3D graphics would play a dominant role in the future of interactive games.

Microsoft purchased one of the few available software-based 3D APIs – Rendermorphics’ Reality Lab – early in 1995. The goal was to jumpstart the addition of 3D graphics capabilities to DirectX by incorporating the Reality Lab technology. I was asked to do the technical due diligence of the Reality Lab stack.

Unlike OpenGL, Reality Lab was a scene graph API. This meant that the full geometric representation of a 3D environment was fed to the API. Based on the viewer’s position and other variables, the scene graph engine would compute the necessary details for rendering the objects in the 3D environment all the way down to the individual primitives rendered at the end of the pipeline. As a scene graph API, I thought Reality Lab was pretty good, but it seemed to me an odd match for the intended game developer audience.

Games like Doom and Quake built their own engines for rendering 3D environments, and these engines were highly optimized and specialized for the game’s specific characteristics. What most game developers were looking for was a lower-level, immediate-mode 3D API that allowed them to plug their game engines in at the stage of processing where primitives were ready to be rendered.

DirectX hoped to satisfy this need by providing a mechanism for developers to pre-construct lists of low-level graphics primitives that could then be handed to the graphics card driver to execute. In theory, this allowed direct hardware execution of 3D primitives. In practice, the interface was cumbersome and difficult to use, provided no performance benefit for all the extra trouble, and had limited functionality compared to OpenGL.

What game programmers like John Carmack were discovering was that OpenGL already provided a clean, efficient immediate-mode 3D API that could be used very effectively for game development as well as non-game applications. Whereas DirectX required adhering to overhead-inducing COM-based protocols, OpenGL was a clean, straightforward, well-documented C API.
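
To give a sense of that simplicity, here is a minimal immediate-mode fragment using classic OpenGL 1.x calls (the calls are real; the triangle data is just an illustration, and window/context setup is omitted):

```c
/* Minimal immediate-mode sketch using the classic OpenGL 1.x API.
 * The engine has already done its own visibility and geometry work; it
 * simply hands finished primitives to the API one vertex at a time.
 * On Windows, <windows.h> must be included before <GL/gl.h>.
 */
#include <GL/gl.h>

void draw_triangle(void)
{
    glBegin(GL_TRIANGLES);          /* start submitting primitives */
    glColor3f(1.0f, 0.0f, 0.0f);    /* per-vertex color (Gouraud shading) */
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();                        /* the triangle is drawn immediately */
}
```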

The game developer community tends to be data-driven and immune to market-speak and spin; their bullshit detectors can be exceptionally good. They were being told to use DirectX for 3D games, but they wanted to use something proven that already existed and just worked. John Carmack’s observations on the topic are still a good read.

More and more developers were coming to Microsoft and demanding that it fully support OpenGL on both Windows NT and Windows 95. The technical work for Windows 95 had already been done, and hardware vendors were working with a pre-release version of the OpenGL MCD driver development kit in anticipation of its final release. Developers wanted the OpenGL driver update for Windows 95 released as soon as possible.

An influential group of game developers even banded together and signed an open letter to Microsoft asking that OpenGL be fully supported on both Windows NT and Windows 95.

This was not how it was supposed to work out. DirectX was supposed to be Microsoft’s consumer graphics technology, and OpenGL was supposed to be “just” for high-end CAD for non-consumer markets. But the reality was that visualization technology didn’t care about somewhat arbitrary market segmentations: a good 3D API was a good 3D API. Many developers felt that OpenGL was the best 3D API at the time and wanted to use it for their games. This has been described as a religious issue, but game developers simply wanted tools that would help them get their work done.

Microsoft was in a tough spot. It now had two competing efforts on its hands, and it couldn’t simply kill either. OpenGL was necessary for continuing to compete in the workstation market, and DirectX was its high-profile brand and technology for enabling PC games. And while OpenGL was an external standard, Microsoft owned DirectX and was free to shape it on its own. Even though Direct3D had gotten off to a rough start, Microsoft couldn’t afford to retrench from DirectX.

The open letter had pushed things to a crisis point, and shortly after it went public I had a meeting with the senior manager in Windows who had recently been tasked with cleaning up the OpenGL/Direct3D mess. I remember that the central point of the conversation was that I was being asked to take responsibility for both OpenGL and DirectX graphics, and to focus on ensuring that DirectX improved as quickly as possible while continuing to support OpenGL for non-game applications.

This would mean a number of things for me. I would have to leave programming as anything other than an occasional hobby since I would have my hands more than full with providing engineering leadership. I would never see my MCD driver work available in the much larger Windows 95/98 market. And I would also have to disappoint a large contingent of game developers who had hoped for a different outcome.

Of course I could have gone and worked on something else, but that would have served no constructive purpose. Microsoft was sticking with DirectX graphics, and the best thing for developers, users, graphics vendors, and Microsoft was to make DirectX as good as possible. Combining the OpenGL and DirectX graphics development efforts was probably the best way to achieve that; the OpenGL team had at this point a deep bench of 3D graphics talent.

So I said yes, and began the hard work of integrating and re-balancing the teams and getting people on board with a slightly awkward new mission and new priorities.

Over the course of the next three years we steadily improved DirectX. I felt we had finally accomplished our goal of building a world-class 3D API when we released DirectX 8.

The experience was a formative one for me. I had to step outside the emotions surrounding the situation to find a way to make the greatest positive contribution possible. I had to earn the trust of a new team that had viewed my team and me as the enemy. And I had to take something that I felt was technologically inferior and make it great.

But great challenges can provide great opportunities…

A couple of years into my work on OpenGL I was given responsibility for running the OpenGL team as the dev lead. The team – like many of Microsoft’s development efforts back then – was small enough so that I could continue to have a hands-on role in coding while being the team’s manager. Although I knew that I would have less time to code, I was comforted knowing that I wouldn’t have to leave it behind completely. I was excited to have the opportunity to lead the OpenGL development effort but also apprehensive about managing people, especially people who had been my peers.

By this time, I had turned my focus to enabling graphics hardware to accelerate OpenGL performance. Although OpenGL on Windows already had an available hardware acceleration model – the Installable Client Driver, or ICD – it required the hardware vendor to license the OpenGL technology from SGI. In addition to that obstacle, developing an ICD was also complex; it required implementing the entire OpenGL API stack rather than just the parts that the hardware could accelerate. The specific method for accelerating whatever parts made sense for a given piece of hardware was entirely up to the vendor to design and implement from scratch. There was no common template or protocol or framework to follow. This approach provided maximum flexibility but at a very high implementation and maintenance cost. An ICD vendor wasn’t simply maintaining a device driver; they were maintaining an entirely separate implementation of all of OpenGL.

OpenGL ICDs and the Windows OS kernel had a mechanism to exchange chunks of generic data, which allowed the hardware portion of the ICD to communicate with the rest of the ICD implementation. Again, this was very flexible, but it also meant that every vendor had to come up with their own custom approach to structuring the communications between client mode (where regular programs ran) and kernel mode (the trusted execution environment where the OS and hardware-level drivers executed).

At the time, ICDs were fine for their intended target of high-end workstations. But my passion was to continue to push 3D graphics into the mainstream. A number of hardware vendors had become interested in providing dedicated 3D-acceleration hardware at lower cost, and given Moore’s Law and the volume-based economics of the PC business, hardware-accelerated 3D graphics was well positioned for mass adoption. To help move things forward, I wanted to make it much easier for hardware vendors who were new to 3D to bring their products to market. The difficulty and investment required to write an ICD driver was a significant obstacle to an emerging ecosystem of commodity 3D hardware.

Having done a bunch of driver work in the past, I set out to architect an OpenGL driver model that would provide a standardized interface to lower-cost hardware and remove as much of the software complexity of ICDs as possible. I focused on exposing only the functionality that lower-cost hardware could reasonably support, namely, the rasterizing of 3D primitives near the bottom of a complex 3D graphics pipeline. For example, 3D transformations and lighting operations would be done on behalf of the driver; the driver just had to render the computed 3D primitives on the screen. This division of labor could provide massive increases in overall graphics performance.

I called the driver model the Mini Client Driver, or “MCD” since it was similar in flow to an ICD, but the vendor only had to implement the rendering-specific part of the OpenGL stack.
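
As a rough illustration of that division of labor (the structures and names below are hypothetical, not the actual DDK interfaces), a driver in this style only needs to supply rasterization entry points, while the runtime handles transformation, lighting, and the rest of the OpenGL state machine:

```c
/* Hypothetical sketch of an MCD-style driver interface. The real MCD
 * structures and entry points in the DDK differed; this only illustrates
 * the split. The OpenGL runtime does transform and lighting, then hands
 * the driver screen-space primitives; the driver either rasterizes them
 * or declines so the software pipeline can fall back.
 */
typedef struct {
    float x, y, z;              /* screen-space position, already transformed */
    unsigned char r, g, b, a;   /* lit vertex color, already computed */
} MCD_VERTEX;                   /* hypothetical name */

typedef struct {
    /* return nonzero if the hardware drew the primitives, 0 to fall back */
    int  (*DrawTriangles)(const MCD_VERTEX *verts, int count);
    int  (*DrawLines)(const MCD_VERTEX *verts, int count);
    void (*Flush)(void);        /* kick any queued commands to the hardware */
} MCD_DRIVER_FUNCS;             /* hypothetical name */
```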

I wrote a corresponding sample driver (if I remember correctly, it used S3’s Virge hardware), and with the help of the OpenGL team, got the sample code and the corresponding MCD documentation into the next releases of the Windows NT DDK (Driver Development Kit).

It’s worth making a few comments on driver development in general. Writing driver code can be one of the most satisfying and frustrating experiences possible as a developer. It’s incredibly exciting to have a new driver you’re building actually do something useful with a piece of hardware for the first time (for example, rendering a test triangle on the screen). But drivers run as part of the operating system, so bugs and driver crashes can take down the whole OS. And with graphics drivers in particular, you always risk screwing up the thing you rely on the most to program and interact with the machine – the display. Add to this the fact that hardware doesn’t always work as documented, and that it’s very easy to miss setting a needed bit on some register, or to have an off-by-one or some other error send the hardware into oblivion.

With enough persistence, lots of reboots, and the occasional debug print when all else fails, a robust driver will eventually emerge. And with any luck, you will never hear about your device driver because the only time that you do is when it’s NOT working. As with so many jobs in technology, writing drivers can be a thankless and invisible job despite being critical to making the technology we take for granted actually work.

Back to the main story: with MCD released for Windows NT, any graphics card vendor could now quickly and relatively easily implement OpenGL hardware acceleration using a standard driver model. Since the driver model itself was largely OS-agnostic, I then shifted our focus to providing a Windows 95 version of MCD to satisfy both growing developer and hardware vendor interest in OpenGL and 3D graphics. Windows NT had a growing but still relatively small share of the market compared to Windows 95, and I wanted to see OpenGL fully enabled on both operating systems. We engaged the hardware community around making OpenGL MCD drivers available on Windows 95, got the Windows 95 version of the code up and running quickly, and everyone was expecting the DDK update to be released very soon.

And then, I was asked to do something that would change everything.

My wife has been pestering me to write a post or two about some of my early years at Microsoft. Thinking about what to write took me back to a time when my entire focus at work was writing the best code possible. In those days, I would even sometimes dream about code. This post talks about software implementation details and may leave some readers behind in a few spots. Bear with me; future posts will be less technical.

For many years, one of my main pursuits as a developer was software-based graphics acceleration. This meant using the CPU to render graphics on the screen as fast as possible by carefully tuning software and algorithms. The goal was to extract every last bit of performance from the CPU. One of the key reasons I joined Microsoft in 1993 was for the opportunity to work for Mike Abrash, who even then was renowned for his mastery of CPU-based performance optimization, and who was aware of some of my work. He was running the Windows NT GDI (Graphics Device Interface) team at the time, and I had come on board to focus on optimizing GDI performance in the first version of Windows NT (somewhat confusingly named “Windows NT 3.1”).

GDI was a 2D graphics framework doing the important work of rendering fonts, lines, rectangles, and all of the primitives that rendered the Windows desktop UI and programs written for it. But being strictly 2D-based, GDI was never going to include 3D graphics, and 3D was as fascinating to me as VR is to some people today (including Mike, who is now Oculus’ chief scientist).

One of NT’s goals was to take a share of the workstation market, and Microsoft had licensed the OpenGL graphics API from Silicon Graphics (SGI). SGI’s business was based on selling expensive high-end hardware, and they had put little effort into the performance of their reference software implementation of the API. Without very expensive hardware, OpenGL was pretty useless.

I had fallen in love with 3D graphics in the late 80’s, and I joined the recently-formed OpenGL team soon after Windows NT shipped. Even though speeding up OpenGL wasn’t part of my job when I joined the team, I was excited to be part of an effort that would broaden access to 3D graphics capabilities. My dream was to have 3D graphics be standard on every PC. My job was to help integrate OpenGL into the Windows operating system.

But official responsibilities aside, I couldn’t resist the temptation and challenge of speeding up OpenGL to make it useful without requiring very expensive hardware. I immediately started tinkering with the OpenGL stack when I had free time. This was the mid-90’s – the era of Intel’s new Pentium processor line. This family of CPUs allowed overlapping of floating-point and integer instructions – a primitive form of parallel processing. A floating-point instruction could be started, and then integer instructions could continue to be executed while the much lengthier floating point command was processed. This mixing of floating-point and integer operations was perfect for speeding up 3D rendering operations which could be broken into floating-point setup and fixed-point, integer-based scanline fragment processing.
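
In plain C and greatly simplified (the original was hand-scheduled assembly), that split looked something like this: floating-point math computes a per-span gradient once, converts it to 16.16 fixed point, and the inner loop then runs entirely on the integer unit:

```c
/* Simplified sketch of the floating-point-setup / integer-inner-loop split.
 * Illustrative only; variable names and structure are mine, not the
 * original code's. Assumes width > 0.
 */
#include <stdint.h>

static void draw_span(uint8_t *dst, int width, float c_left, float c_right)
{
    /* floating-point setup: color delta per pixel, converted to 16.16 fixed point */
    float   dc    = (c_right - c_left) / (float)width;
    int32_t c_fx  = (int32_t)(c_left * 65536.0f);
    int32_t dc_fx = (int32_t)(dc * 65536.0f);

    /* integer-only inner loop: one shift, one store, one add per pixel */
    for (int x = 0; x < width; x++) {
        dst[x] = (uint8_t)(c_fx >> 16);
        c_fx  += dc_fx;
    }
}
```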

A complicating factor in speeding up OpenGL was the complexity of its state machine. There were many possible combinations of rendering modes based on attributes such as color depth, z-buffering, shading model, transparency, texture-mapping, etc. An early (Windows 3.1 or Windows 95) solution for optimizing GDI rendering was to build the rasterizer on the fly on the stack based on the GDI state (I believe this was all or mostly Todd Laney’s handiwork). But I was working on Windows NT, and such clever hackery was not allowed in a next-generation operating system. After considering my options, I determined that my best bet was to pre-compile a set of renderers that represented the most common cases for rendering (for example, Gouraud-shaded, 16-bit color, 16-bit z-buffered triangles). I did this by building rendering functions that consisted of groupings of macro statements that themselves were chunks of hand-crafted inline assembly code. These macros could then be grouped together to perform unique sets of rendering operations. In order for this approach to work, I constructed a framework for the chunks of assembly code in the macro blocks to be able to access variables and registers in a common and consistent way so that they could interoperate predictably and efficiently.
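
A hypothetical sketch of the approach, with plain C statements standing in for the chunks of inline assembly: each macro is a small building block, and stitching different combinations of blocks together produces a specialized inner loop for each common rendering state.

```c
/* Hypothetical reconstruction of macro-composed renderers (not the original
 * code). Each macro is a building block; a pre-built renderer glues a
 * specific combination together, here Gouraud-shaded, 5:6:5 16-bit color,
 * 16-bit z-buffered spans. Color and z values are in 16.16 fixed point.
 */
#include <stdint.h>

#define PIXEL_ZTEST(i)      if ((uint16_t)(z_fx >> 16) < zbuf[i]) {         \
                                zbuf[i] = (uint16_t)(z_fx >> 16);
#define PIXEL_ZTEST_END     }
#define PIXEL_WRITE_565(i)  dst[i] = (uint16_t)(((r_fx >> 19) << 11) |      \
                                                ((g_fx >> 18) << 5)  |      \
                                                 (b_fx >> 19));
#define PIXEL_STEP_GOURAUD  r_fx += dr; g_fx += dg; b_fx += db;
#define PIXEL_STEP_Z        z_fx += dz;

/* One pre-built renderer composed from the blocks above. */
static void span_gouraud_565_z(uint16_t *dst, uint16_t *zbuf, int width,
                               int32_t r_fx, int32_t g_fx, int32_t b_fx,
                               int32_t dr, int32_t dg, int32_t db,
                               int32_t z_fx, int32_t dz)
{
    for (int i = 0; i < width; i++) {
        PIXEL_ZTEST(i)
            PIXEL_WRITE_565(i)
        PIXEL_ZTEST_END
        PIXEL_STEP_GOURAUD
        PIXEL_STEP_Z
    }
}
```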

This was definitely not how the C language was intended to be used. It wasn’t pretty – I believe I even used “goto” statements out of necessity – but the code was highly effective. The approach also entailed risk because things could go horribly wrong with unanticipated edge cases. I remember one embarrassing bug that in certain cases failed to return the floating-point control register to its previous state, which affected floating-point operations in the rest of the operating system. I quickly found and fixed the issue, but it was a reminder that there was no safety net with what I was doing.

It’s interesting how thinking about past work jogs the mind. In the middle of writing this post, I had a dream about how I may have implemented dithering. Dithering is an old technique going back to the print business that allows gradations in tone and color to appear smoother by breaking up transitions using patterns of dots or pixels at different densities. For example, an area halfway between one color and another would have half the pixels in that area set to one color, and the remaining half set to the other color. Today, even our phones are capable of producing millions of colors eliminating the need for such techniques, but in the mid-90’s, the capabilities of most PCs were far more limited. In my dream, I implemented dithering look-up-tables (LUTs) for red, green, and blue values so that I could construct the right 15 or 16-bit RGB value using three highly-cached indexed memory operations and two OR instructions. I probably did something along those lines, but who knows…I’d love to have access to that old code (of no use to anyone at this point) just to remind myself how I did what I did.
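
To make the idea concrete, here is a rough sketch of how such lookup tables could work. This is a reconstruction of the idea, not the original code: each table entry holds the dithered, quantized component already shifted into its final 5:6:5 bit position, so assembling a pixel really is three indexed loads and two ORs.

```c
/* Sketch of LUT-based ordered dithering to 16-bit 5:6:5 color (my own
 * reconstruction, not the original implementation). Each table entry holds
 * the dithered, quantized component pre-shifted into its bit position.
 */
#include <stdint.h>

static uint16_t rlut[4][256], glut[4][256], blut[4][256];

/* 2x2 ordered-dither pattern; 'cell' below is (y & 1) * 2 + (x & 1). */
static const int dither2x2[4] = { 0, 2, 3, 1 };

static void init_dither_luts(void)
{
    for (int cell = 0; cell < 4; cell++) {
        for (int v = 0; v < 256; v++) {
            /* red/blue keep 5 bits (step 8), green keeps 6 bits (step 4) */
            int q5 = (v + dither2x2[cell] * 2) >> 3; if (q5 > 31) q5 = 31;
            int q6 = (v + dither2x2[cell])     >> 2; if (q6 > 63) q6 = 63;
            rlut[cell][v] = (uint16_t)(q5 << 11);
            glut[cell][v] = (uint16_t)(q6 << 5);
            blut[cell][v] = (uint16_t)q5;
        }
    }
}

/* Three highly-cached indexed loads and two ORs per pixel. */
static inline uint16_t dither_pixel(int cell, uint8_t r, uint8_t g, uint8_t b)
{
    return rlut[cell][r] | glut[cell][g] | blut[cell][b];
}
```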

My initial performance improvements were compelling enough to justify making the software acceleration work a full-time endeavor. I added functionality over time, culminating with real-time texture-mapping using quadratic subdivision. Even though it was still part of a fully compliant general-purpose OpenGL implementation, textured rendering throughput got reasonably close to that of Doom, the texture-mapping speed champion at the time. In fact, my software-accelerated pipeline became competitive with high-end hardware graphics cards and beat them in some instances (anti-aliased lines come to mind).

I had taken rendering performance from being measured in seconds per frame to dozens of frames per second; many operations were over 100 times faster than the original versions. Ah, the power of highly-tuned, efficient coding! I often worry that Moore’s Law is being buried under so many layers of frameworks, objects, interpreters, and interfaces that it’s hard to tell what the hardware is actually doing at the bottom of the pile of indirection. Then again, even CPUs now hide some of their internal operation with sophisticated out-of-order instruction execution engines. Sadly, the last time I tried to out-optimize a modern C compiler with an integer-only routine, the results were a draw.

Our OpenGL implementation was incorporated into both the Windows NT and Windows 9x code bases and was starting to get significant traction. Being part of the operating system and eliminating the need for hardware acceleration meant that an OpenGL application could now target a very wide audience. As a means of promoting our OpenGL’s capabilities, I wrote the first set of 3D screen savers (“Flying Objects”), which proved popular and inspired other people on the team to write their own as well (such as “Pipes” and “Maze”). For many years afterward, I would get a chuckle when I saw one of our screen savers running in the background of a TV or movie set. Our efforts had taken what had been an expensive workstation technology and made it readily available on millions of desktops.

Despite the effort’s success, the writing was already on the wall for the future of software-accelerated computer graphics. Hardware acceleration and the rise of the GPU were just around the corner.

This is the text of my 5/17/2014 commencement address for the University of Vermont’s graduate college:

Thank you very much for the introduction.  It’s an honor and a privilege to be speaking to you today.

This is a very special day for me as well because I received my master’s degree from UVM at a similar May commencement 25 years ago.  When I graduated, I did not imagine that I would be returning one day to address a future class of graduates.

I confess that getting ready to speak with you today has posed a real challenge for me.  I’m a perfectionist.  I wanted to find something to say that each one of you would find useful or at least thought provoking.  I wasn’t really sure that I could give the same advice to someone studying historic preservation as someone studying biochemistry or public health, so that goal seemed like a tough engineering problem to me.  Also, my preferred presentation format includes lots of Q&A and interactive dialogue rather than simply talking to an audience.  I did ponder the possibility of trying something a bit different, but I ultimately decided that it may be too soon to innovate with the commencement address format just yet.  And finally, as a UVM graduate, I felt that I had an extra measure of responsibility to this audience given my shared connection to this school and this community.

My path to UVM and to computer science was not a direct one.

My family and I came to the United States as political refugees.  It was the late sixties, and my native Hungary was still behind the Iron Curtain.  In addition to lacking many other basic freedoms, education was highly controlled and censored by the political system in place, and my parents didn’t want to raise their children in such an artificially limiting environment.  We ended up gaining political asylum in the United States, and I had to get busy learning English.  Back then, there were no classes in elementary schools for English language learners.  But I remember learning lots of English watching The Three Stooges and Bugs Bunny cartoons on our small black-and-white TV, and trying to figure out what the characters were saying.

A few years later in the mid-1970’s, the first generation of video game consoles was coming to market, and the first real blockbuster video game was Pong.  For those of you who haven’t ever heard of Pong, the game consisted of two electronically simulated paddles that could be moved up or down on the screen with a pair of controllers to try to keep a ball – really, just a crude square – bouncing between them.  If you missed the ball, your opponent got a point.  That was it.  But it was a simple, fun, and intuitive game, and the market was eventually flooded with Pong game consoles hooked up to TVs.  My brother and I received one as a gift at some point, and after the novelty of playing the game finally wore off, I took it apart to find out what made it work.  How did the paddles and the ball get painted on the TV screen?  How did the ball know whether or not it missed the paddle?  I was determined to find out.  Inside the device, there was a printed circuit board with a bunch of components soldered to it, including funny-looking rectangular parts with lots of legs.  As I discovered, these were chips.  And as I found out, the mysterious process that made the Pong machine work involved those chips, and digital electronics.

In the process of taking the Pong machine apart, I broke it, and when I put it back together it no longer worked.  But I still wanted to get to the bottom of the mystery of what made the device work.

The late 70’s and early 80’s were a golden age for digital electronic hobbyists.  Technology was still simple enough to be able to build your own projects from the ground up.  I learned to solder, and how to design and make my own printed circuit boards to do things like count up or down, or measure things like temperature, or create sound effects.  With enough trips to the local Radio Shack, it seemed you could build anything.

Eventually, my projects got complicated enough that they required being controlled by a computer to be useful, so I taught myself how to program.  The majority of my early software efforts were simply a way to bring my hardware projects to life.

I went to college and ended up being a physics major.  Middlebury didn’t have a computer science major yet, and besides, computers were still just a side interest of mine.

That changed when I got my first real job after graduating from college.  I applied for a programming job advertised in the “help wanted” section of the newspaper.  It might be hard to imagine now, but answering help wanted ads in the paper was how people actually found jobs back then.  This was the mid-80’s, the PC revolution was just starting to take off, and people who knew how to write software were in high demand.  In my case, the people looking for software help looked past the self-taught nature of much of my knowledge and hired me.  I dove right in, and re-wrote the tiny company’s bread-and-butter product over the course of a number of months.  By now, I was completely hooked.  Not only did I love what I was doing, but I was getting paid for it!  It was also fantastic to be able to thumb through a magazine and point to an ad for the product I was responsible for and to be able to say “I wrote that”.

But I also knew that much of what I was able to do was self-taught, and as valuable as teaching yourself is, it has its limits.  I felt that there would eventually be a gap between what I wanted to do with technology, and the deeper knowledge that more advanced work would require.  I loved the science and the craft of building software, and I wanted to be as good as I could possibly be.  That’s how I finally ended up studying computer science at UVM.

When I graduated with my master’s, I could not imagine everything that would unfold in the computer technology area over the next 25 years.  In 1989, the PC was still emerging as a mainstream product, the Internet was essentially a research project, and so many things that we take for granted today – everything from mobile phones and connected devices to seamless access to information and connectivity – were still in the future, yet to be invented, created, and developed.

My next three jobs after graduation were software programming jobs, and I wrote many thousands of lines of code and loved programming.  But there came a point when I was asked to become a development lead at Microsoft.  This role entailed management responsibility in addition to continuing to write software.  After some consideration, I agreed, figuring that I could go back to pure software development if the management part of the job became too distracting.

You may know where this is going.  About a year later, I was asked to take on even more responsibility as a development manager.  This meant an end to me writing code.  But it did not mean an end to me being an engineer, and everything I had learned in grad school continued to be incredibly useful, just applied in different ways.

In fact, this was the period of time when I co-founded Xbox.  The Xbox effort started as an unofficial side project that was not approved by senior management.  I was able to formulate an engineering and technology plan, but now as a manager, I was also able to assemble a small team of volunteers within my group to build the prototype software for Xbox.  This working prototype convinced Bill Gates that the idea of creating a console platform using Windows technology was actually feasible.

Later, I led an effort in Microsoft Research developing and patenting new technologies in anticipation of a future boom in mobile computing and touch-based interaction, for product categories that did not yet exist such as today’s smartphones and tablets.

I also served in general management and architecture roles developing products, product concepts, and designs that were predecessors to modern tablets and e-reader devices.

When I graduated from UVM, I never imagined that I would have a product design portfolio, or patents, or management experience leading teams of hundreds of people.  Much of my work since graduate school may not seem directly related to a computer science degree, but from my perspective, all of it was built on the foundation of engineering that I established here.

The basic principles of my field are still true today.  Sound engineering practices don’t go out of style, and creative problem solving and innovation still look very much as they did when I graduated.

I have a whiteboard in my office, and I use it to map out designs, processes, architectures, and potential solutions in the same basic way as I would have used it 25 years ago even though today I may be solving organizational or business challenges rather than engineering ones.

Trust the foundation you have established here, and your ability to build your future upon it.  Remain open to new possibilities to develop and grow as the future reveals itself to you.  And stay curious about how things work, even if it means that you occasionally take something apart and can’t put it back together, as I did with Pong.

I want to wish you the best of luck, and congratulations on your achievement.

Thank you.

I’ve tried to keep the number of obsolete reference manuals and technical books I have to a minimum over the years.  That stuff has been getting outdated at the same rapid rate as the evolution of the technology industry.  And with on-line references available for all things technology-related, there is almost no need to keep paper copies of anything.

Despite best intentions, however, possessions tend to accumulate, and when we moved from Seattle to New York a few years ago after being in the same house for close to two decades, it was necessary to do some significant culling.  If I had a book or manual that didn’t pass the “will you ever use this again” question, it went into the donation pile.  The Friends of the Seattle Public Library organizes book sales every year to support the library, and this made saying goodbye to about thirty boxes of books our family assembled much easier.  In this process, I did make allowances for sentimental reasons.

One of the exceptions I made was to hang on to my original copies of Borland Turbo Pascal.  Each came on a single 5.25” floppy disk along with a paperback reference manual.  This is a picture of the original 1.0 and 2.0 versions that I’ve kept:

I credit this product as much as any other for taking me down a path that would lead me to become a professional software developer.

I was an undergraduate at Middlebury College when I bought it.  Much of the software development I was doing was self-taught using one of the earliest IBM PC clones available – a Sanyo MBC-555.  The Sanyo was not a very good machine and had lots of problems with compatibility, but it was the cheapest PC I could convince my parents to buy.

I had reached the limits of what I could do with Basic, and let’s face it – a real program was a compiled, self-contained executable package (a proper “app” for all the young readers out there), not some Basic file that you had to run through a slow interpreter.  Also, I had been involved with assembly-level programming since the beginning of my interest in computers, and wanted a tool that allowed access to BIOS- and hardware-level functionality, even if it meant hand-compiling the opcodes using the 8086 CPU reference manual.

Turbo Pascal would let me do all of this, and at a price that a college student could justify to his parents: $49.95.  This was a bargain compared to the high cost of any of the Microsoft tools available then; Microsoft’s Pascal compiler was $400.  That was a lot of money back in the early 80’s, and a $400 compiler for a student was out of the question.  At the time, I couldn’t have imagined that I would eventually go work for the Microsoft that wanted so much money for a software development tool.

I bought Turbo Pascal mail order, sight unseen.  There was no Internet as we know it today, no Amazon, no on-line reviews, and my connectivity consisted of a 300 baud modem (that translates to 0.00029 megabits per second).  Everything I knew about the product was contained in a glossy advertisement in Byte Magazine.  I realize how quaint that all sounds, but when I got the package with the small paperback reference manual and the floppy, I was in programming heaven.  The compiler was incredibly fast even by today’s standards, and produced real executable programs, even if they were limited to the smaller .com variant rather than .exe files.  And the fact that Middlebury’s math department taught a few Pascal classes (the college did not have a computer science department back then) was a big plus.

I would remain a big Turbo Pascal fan for a number of years until I fell in love with the C programming language, but that’s another story that also involves a thin paperback that I have also kept to this day.

My wife Maggie was recently looking for pictures of our old dog Pluto for her blog, and she came across a lone CD loose in a photo box full of old, uncategorized photos.  The disk had a Ritz Camera logo and was “Powered by Microsoft PictureIt 2000.” Back when the photo world was just starting to transition to digital, you could get your film pictures put on a CD when you got your film developed.

We do not have many pictures of Pluto; he was not one to hold still for long, but 2000 was the right time frame for Pluto pictures.  The box turned out to have only a few shots of him, so she put the CD in her laptop in the hope that it would have a few more pictures of our first dog.  Her Panasonic laptop did not even recognize that there was a CD in the drive.  I figured I would try the CD with a different PC, and stuck it in the DVD drive on our desktop.  I immediately heard bad sounds coming from the drive.  I took it out, and tried sticking the CD in the PC’s second DVD drive (yes, two DVD drives since the desktop machine is a Dell XPS gaming rig).  Same bad result, with ugly sounds coming from the drive as it tried in vain to read the disk.  I popped the disk out to do a visual inspection that in retrospect I should have done sooner.  It turned out to be seriously warped.

Maggie’s assumption was that the CD was toast, but I was more hopeful.  After all, the bits were probably still there.  And something warped can be un-warped, right?  After thinking about the problem for a bit, I concluded that some time in the oven might do the trick.  An Internet search revealed nothing consistently useful on the topic of un-warping CDs or DVDs, so I thought I’d improvise.  First, I needed a flat surface.  My son was recently home from college and had taken apart an ancient hard drive.  He likes to see how things work and had plenty of time on his hands.  Hard drive platters are incredibly smooth and rigid, so I thought that would make a good straightening platform.  I also needed something heavy to place on top of the CD to help the flattening process.  I settled on a flat-bottomed drinking glass filled with water.  I put the platter-CD-glass sandwich in the oven, and set the temperature to 200 degrees Fahrenheit.  It was a guess that it would be hot enough to soften the CD but not cause damage.

I meant to set a timer for perhaps 30 minutes, but then promptly forgot about the whole affair.  About an hour later, I smelled the faint but distinctive odor of hot plastic.  I peeked in the oven expecting to see a gooey mess, but things actually looked pretty good.  Whew!

I wanted to make sure the CD stayed flat as it cooled, and I settled on a piece of scrap melamine with a few sheets of paper to avoid scratches.  I put the hot CD on the melamine slab, and then put three volumes from an encyclopedia on top to keep the thing flat.

About 30 minutes later, I took the now-cool CD and put it into the desktop drive.  No bad sounds, and I could access the directory!  There were a bunch of random folders and icons all ready to install PictureIt 2000 Express, but there was a folder in the mix that had our actual pictures including some of Pluto:

I had the opportunity to be part of the opening of the Microsoft Store at the Mall of America in Minneapolis today.  The energy and the excitement were amazing, and the store itself is gorgeous.


The Mall of America has an amusement park in the middle including a roller coaster.

The day before…the store was still under wraps.

At a separate Kinect experience demo in the mall, people were lining up and having fun trying out the controller and some of the new games.

The dancing title was very popular but my colleague and I chose table tennis.  I lost on match point.

Next day…ready for the opening.  A very large crowd showed up.

Microsoft cut some big checks in support of community groups like this $300,000 gift to the High School Technology Program.

The curtain was finally removed and the store was revealed and officially opened…

…and people who had invested a lot of time in line started to make their way into the store.

The store staff greeted everyone coming in with high-fives.

This view shows the competition directly across from the store.  They were looking rather empty today.

No, I don’t mean the software – that’s a different topic.

I mean the actual physical hardware.  Let’s take mobile phones, or more specifically, smartphones with increasingly large touch-based displays.  They’ve got a lot of glass (the display), and more glass (the touch panel).  Here’s an idea – in addition to all that glass on the front, let’s add glass to the back as well (Yes, I mean you, iPhone 4; check out the preliminary iPhone 4 failure rate).  Don’t you dare drop that thing onto something like a concrete sidewalk!  Apparently, when you buy a new phone, you’re supposed to immediately bury it in rubber bumpers, skins, and covers, thereby destroying the original aesthetics and design intent of the phone.

This phenomenon is even worse with iPads.  They’re unrecognizable by the time they’re festooned with covers, folios, and other protective contraptions to make them usable in the wild.

Even larger form factors like laptops?  Well, they just break.  Drop the average laptop and it will likely suffer some serious damage.

Hey, manufacturers!  Devices that people hold, carry around, use on the move, and put in their pockets, bags, purses will get dropped, crushed, scraped, and bumped.  I’m old enough to remember when phones were leased, not purchased, and breaking Ma Bell’s equipment was virtually impossible.  I also remember that early generations of mobile phones were built like tanks.  Were they as thin as crackers?  No.  But dropping them was not a big deal.

A few companies have specialized in building products engineered for real-world environments, perhaps most notably Panasonic.  We recently bought our second Toughbook, model S9.  Our first Toughbook was the R5.  The Japanese-only model R5 outlived a number of other laptops from large manufacturers that I won’t mention, and it wasn’t because it was cared for gently.  It was shoved into cramped bags, dropped, bumped, and used heavily.  It just kept working and working.  In addition, both laptops are very light (the S9 is 3.2 pounds), and both have great battery life that’s enough for all-day use (the S9 is rated for 11 hours).

Panasonic S9

Downsides of the Toughbook?  It’s very expensive, and you won’t find it at any mainstream retailer.  It is the embodiment of a niche product.  Wait, a laptop with great battery life, decent performance, light weight, and robust design intended to withstand real-world use is a niche product?  Yes.  Heavy, fragile laptops that are addicted to wall outlets are the norm.  This is completely backwards, and contrary to what real people need.

If form really does follow function, then mainstream products have gone seriously off track.