So clear! So easy to read! So obvious what things you can click and what things you can't. Compared to this Windows Modern crap where it's not clear what's a button and what's a label, and every app (or part thereof) has its own idea about where things go.
Actually, it's not that consistent. Some clickable things have a 3D-shape, some are just text (top menu), and some are a picture of a folder with some text.
Frankly, I don't think modern UIs are much worse than old ones. Yes, you could add visual indicators to what is clickable and what is not (3D-effect), but it's not like that was a universal rule back then either.
Perhaps it's just that people find most intuitive what they learned first and don't like adopting new paradigms.
The menus are grey to indicate that they're button-like. In particular, there's nothing you can do as a user to edit them. Admittedly they're not that different from static labels, which would also be grey but not have the button shadow.
The middle area is white and depressed because that's the content area. It's a bit like a text document or text edit control: it's normal to be able to click in it and select stuff and edit it. It's a bit unusual in that double clicking opens a new window, but that's also true if you double click a linked embedded image in a Word document.
Also, the menu is in the same place in every app and does more or less the same thing. And it’s all text and visible at all times.
Think how far we’ve fallen. There was a time when as a result of human interface research, Apple put the menu bar across the top of the screen because it created an infinitely tall click target. Today, even in apps not designed primarily for touch use (e.g. Slack), we’ve buried the menu behind a randomly placed inscrutable hamburger icon.
> Today, even in apps not designed primarily for touch use (e.g. Slack), we’ve buried the menu behind a randomly placed inscrutable hamburger icon.
Windows 95 buried everything under the Start menu, which was hardly a paragon of usability. There was a "Documents" menu that was pretty useless, a "Programs" menu that required several small click targets to launch anything (and regularly overflowed with applications), a "Run" button that popped up a completely inscrutable dialog box, and a "Shut Down" button that made users decide whether to restart normally or to restart in DOS mode.
I think you're giving the '90s too much of a pass. Modern mobile OS's in particular are much more usable than anything back then.
> I think you're giving the '90s too much of a pass. Modern mobile OS's in particular are much more usable than anything back then
I disagree. At least Win 95 was largely discoverable to people who can read. Today’s mobile OSes are completely undiscoverable, with functionality hidden behind inconsistent gestures and iconography.
E.g. Apple Maps. How do I get directions from point A to point B? No obvious widget. Maybe search for point A. Ah, “directions.” But it looks like it is directions from wherever Apple thinks I am to point A. Not what I wanted. Maybe search for point B. Still directions from wherever Apple thinks I am. Maybe click it anyway. Okay, now routes. But no obvious button or text field to change the starting and ending point. Oh, I see, the “here” is blue. Maybe I can click on it. It’s hot garbage. Would it kill you to have a menu across the top with a comprehensive enumeration of functions your app supports?
And apps like Edge and Chrome are even worse for importing this lunacy into the desktop, where the screen space limitations of mobile don’t apply.
Research has consistently shown that the traditional menu bar is actually not very usable for most people. Office ran into this problem fairly quickly, even back in 1997. The menu bars quickly became overwhelming.
I will try to read those posts soon. I've only just started using the new Office (since I work for a Windows shop now, instead of Linux).
It's extremely frustrating for me - for instance, I always have to search and scan for the "copy" button only to discover it's not on the current tab, and then move to another tab which - if I'm lucky - will not suddenly close my document and replace it with a crazy menu, but contain the tool I want to use. And it's crazy - if you want to save or print your document, you have to use the fake-tab that closes your document! (At the very least, they could just shrink the window and put it on the side so you know where you are.)
At least the old system had my frequently used tools easily accessible, but my rarely used tools hard to find. The current setup might have rarely used tools easy to find and frequently used tools hard to find - it all depends on what I was doing three hours and two tasks ago.
The menus buried so many features I never used or knew about. It's quite reasonable for them to replace it with a new system. But hiding the tools I use all the time annoys me, and the silly fake-tab that hides my work when I'm trying to view it upsets me. I would really be excited to know their logic. I hope it's explained in the blogs you linked to.
Better discoverability was one of the primary intentions. Shame it led to less usability for everyone else (ie all non-beginners), with an arbitrary mix of text, text+icons, and mystery icons whose purpose you only find out on rollover.
There was much to be said for hierarchical menus, now it's partially guesswork to even decide which tab the required choice is on.
Anecdotal for sure, but the old interface seemed to be missed most by the very users that the ribbon was intended for - those needing to discover, rather than already knowing, which feature to use. There was a lot more piracy of Office 2003 after 2007 came out.
Maybe it wasn't "very usable for most people", but it certainly was becoming usable for an increasing number of them when MS decided to change the UI completely, making it not very usable for everyone. Learning takes time, and they just threw away countless man-hours of it.
Research on non-professional users run by mediocre UI "visionaries" trying to justify change for the sake of change (I am looking at you Jensen Harris).
The justification for the ribbon UI was making things easy for people with tablets, people with jittery hands unable to hit a menu item, and newbies -- and making it mandatory degraded the experience of the 95% of regular users who are none of the above.
There is a good reason why my private computers still get Office 2003 installed by default, despite all of its limitations at age 15...
> a "Run" button that popped up a completely inscrutable dialog box
I've relied on that dialog box throughout the years, as Microsoft keeps moving things around and hiding functionality.
Hitting Win+R and typing "control userpasswords2" works identically in Windows XP, 7, 8 and 10 and is far less inscrutable than divining where they've decided to hide the settings in any particular Windows version.
The menu text also has underlining. I think this was a learnable indicator of clickability, since you'd often see clickable things with an underline (e.g. form buttons), especially once the old web took off with underlined links.
Compared to now, when you literally just guess what's clickable. "Oh, is this thing where it says 'on: What Was the Microsoft Network' a link back to the main article? No idea, but I'll hover over it and see." Thankfully at least desktop web browsers change the pointer to indicate clickability - on my phone or in desktop apps I just have to try to activate everything to find out.
Not to say there have been no improvements in design - there's now a consistent symbol to say "there's a menu behind this", which is not exactly learnable but at least it's redeployable on multiple systems. And toolbar buttons have got more pixels in them now than before (are they larger? I don't think so, but maybe), and usually a bit more spacing, so misclicks are less likely.
You could probably recreate that with a web scraper and browser plugin:
Use the bloated webpage several times while a learning algorithm watches which links you click. Once it's trained, it then renders the document in an invisible frame buffer, extracts the clickable links, and presents them to you on a clean UI.
If privacy weren't a concern, the plugin could aggregate clicks to the same site among different users and learn much faster. So it watches 500 users visit yelp.com and notices none of them click on the ad or the privacy policy, so the ad link is stripped from the simplified view.
Something similar could be done with news sites. Render that bloat off screen, extract the article, render it in a 1990's style page (or just a json file that can be rendered however you like, say in lynx)
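As a rough illustration of the click-learning idea above (everything here is invented for the sketch - there's no real plugin API): record which links a user actually clicks, then filter a page down to just the well-used ones.

```python
from collections import Counter
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect every (href, link text) pair from a page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

clicks = Counter()  # href -> times the user has clicked it

def record_click(href):
    clicks[href] += 1

def simplified_view(html, min_clicks=1):
    """Return only the links users actually use, as plain text lines."""
    parser = LinkExtractor()
    parser.feed(html)
    return [f"{text}: {href}" for href, text in parser.links
            if clicks[href] >= min_clicks]
```

A real version would need the "learning algorithm watching" part to be smarter than a raw counter (and would have to handle dynamic hrefs), but the shape of the thing is roughly this.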
A modern, clean theme, basically the epitome of modern design (I could have chosen a better color scheme but trust me, it's the same whatever colors you pick) with a monospaced font.
The damn font doesn't look out of place at all.
It looks straight out of the 1980s, complete with non-descript icons because of the low-res screens and the whopping 16 colors available for everything. We've got super-sharp high-resolution screens where you can barely make out the pixels with a magnifying glass and which can display bazillions of colours. And yet the dominating design paradigm looks like it's dragged out of an era when 640x480 was high-resolution.
I initially wrote that it has regressed back to an era when 640x480 was high-resolution, but I suppose that's a little opinionated. To each his own; I've mostly given up trying to get modern systems to look "good" - non-flat, with detailed, suggestive and beautiful icons - it's a lot of hassle, especially when some UI toolkits (e.g. GTK) don't really cater to that sort of UI anymore. But I really hope this fashion will eventually die out and we can once again have computers whose UIs don't look like cartoons.
The actual 3D shading and such looks dated by now, but I certainly agree with you in principle. I actually kinda liked WinXP's shiny happy colored interface, especially the Win32 Common Controls v6 appearance, and Win7 looked pretty good.
You'd have had to live through the nineties to understand it. That was cool at the time; 90% of web pages had animated images like that (the other 10% had animated "under construction" gifs; and 100% had visible visitor counters and a link to a guestbook, before "web 2.0" comment threads was a thing).
By the way, the puppy and the cop Clippy were remnants of the failed experiment "Microsoft Bob", which attempted to create a user-friendly environment by imitating a whole house, instead of just a "desktop with windows". Those were wild times, full of possibilities, when (almost) no one had much idea of how personal computers ought to be used.
Cool at the time? A comment often heard when XP came out was that it looked like a joint venture with Fisher-Price. Win 98 style UI was getting tired for sure, but there was plenty of laughter for XP's UI.
I think 90% of those under construction icons were on geocities weren't they? :)
Oh I lived through the 90s. Just used Amigas then Macs.
It was surprising that something meant for business could look so childish. Whereas the closest the Mac, supposedly a ‘toy’ OS that couldn’t be used for Real Work, ever got was Clarus the Dogcow.
I was thinking of classic Mac OS, in all its Platinum glory. Hyper-skeuomorphic interfaces were an iOS 6-and-below thing.
At that time I think Calendar was still called iCal on OS X and rejoiced in a brushed-metal finish. I agree about the spacey time machine though - the animation used to judder like crazy on my anglepoise iMac.
I miss the XP interface as well. Its cartoonish-looking buttons, animations, and sounds brought a certain sense of play or joy to working on those computers that I don't get now with OSX or Win 7.
Or maybe I was a wide-eyed teenager first exploring software development, Java 5-6 and NetBeans!
Ehh, Deepin is way too trendy-looking and visually borrows a lot from Win10 and macOS. Not to mention being ridiculously heavy. Give me KDE Plasma any day.
I do like LXQt, MATE, and Xfce as well. Cinnamon's okay. Xfce does hit a sweet spot of usability and is quite a nice environment in general, but it does seem to be falling behind in development. Last time I checked, most of it is still in GTK2.
I wanted to find something debunking the notion that userids at CompuServe were based on the PDP-10s that ran the service, because it seemed improbable that anyone was running a major online service on PDP-10s by the time the Microsoft Chicago betas were circulating, but, nope: that's real.
They were still running PDP-10-compatible systems from SC up until the original Compuserve service ended in 2009.
You can still buy a brand-new PDP-10 compatible in a certain sense... XKL, another SC-like company, still exists, and they make specialized network gear. As far as I know, all the control planes still run a TOPS-20 derivative on a PDP-10 compatible architecture, even today.
The Digital Antiquarian has a wonderfully in-depth series on CompuServe and the surrounding ecosystem of early internet firms, which includes this detail:
Very interesting but the story in that post is dated 1988, long before Windows 95. I'm calling it a "post" because I'm not sure what it is: Email? Message? File? The header:
File name: compus.txt
Date: 31-Aug-88 15:44 EDT
From: Sandy Trevor [70000,130]
Subj: PDP-10 History
TO: Joe Dempster
The date is repeated below, so that's not a typo. He then talks about how Compuserve has been using PDP-10s for the previous 17 years. Compuserve was around in 1971??
Edit: Holy shit, Compuserve was founded in 1969(!) as a subsidiary of an insurance company.
I think I read on HN recently that McDonald's restaurants' accounting ran on PDPs (10?) until the 90s. Can't find any info online to back that up though.
Wouldn't surprise me. Code that works has a tendency to stick around longer than people think. There's a lot of institutional knowledge built into that codebase. It would be like firing an employee of 20 years who works for nothing and hiring someone for millions of dollars.
Inertia is vast for the core systems that power large organizations. But these legacy systems do eventually yield as new requirements render change non-negotiable. Those ancient, messy, arcane, yet somehow supremely reliable systems tend to become frozen reflections of the organization they serve. It does eventually make sense to replace them when the organization has changed sufficiently that the reflection is no longer accurate.
New regulations, new products, new markets, new acquisitions, divestment, outsourcing... these are the real drivers of change that move an organization’s technology forward - not the new tech itself.
Because maintenance and replacement (in some way) is good practice. Quit thinking of tech as some magic thing that will never go bad and start thinking of it like any other piece of equipment. Regular maintenance is required. Replacement may be required.
Fun fact: SCO Unix started out as Microsoft Xenix, which started out as a license of Unix:
"Microsoft, which expected that Unix would be its operating system of the future when personal computers became powerful enough, purchased a license for Version 7 Unix from AT&T in 1978"
I remember watching a video about Sierra OnLine's network (later The Imagination Network which was sold to AT&T) where when they needed to expand, they just added another computer full of modems.
> You see, back in 1994 Facebook hadn't yet invented the Internet
Made me laugh. We are in such a different era. If we look at our tools from back then, we would find them savage, and our past self looking at us would see tech that is indistinguishable from magic .. although predictable .. and bloated .. and filled with ads. ... Not much has really changed has it?
>If we look at our tools from back then, we would find them savage
I don’t know how true that is... you have tools today with the UX of then (vim, web 1.0 websites) and they’re functional, useful, and not really that different from their modern equivalents. Software-wise, it's hard to say that much has changed.
The main difference in usage between the early web and the modern one is web apps (not significantly different from desktop apps) and the sheer amount of content to be found. Perhaps the centralized use of social media too, but its usage is probably unsurprising (more surprising is how much personal info people feel fine giving away).
Hardware has become radically different, but somewhat in an expected way: everything got more dense. The only major shifts in thinking might be touchscreens, and google/gps available anytime, anywhere, but neither is particularly surprising.
After GUIs became a major thing, I would argue most of our UX went stagnant. You could pull out a 1994 pc today and have no issue using it, and someone from 1994 would likely have little trouble with windows 10, or even an iphone (after getting past the touchscreen).
Things got faster and smaller, but not different. Certainly not the difference of something actually “savage” (like a stone tied to a stick) to something truly “modern” (like a powerdrill), that would require a totally different mindset to use
> and someone from 1994 would likely have little trouble with windows 10, or even an iphone (after getting past the touchscreen).
The touch screen is a huge thing, that is like saying "after getting past the mouse" going from the 80s to the 90s.
Can you imagine trying to teach someone from 1994 about an iPhone? "So you need to tap and hold on some UI elements. No there isn't an indication which ones, you just sorta guess and get used to it. Yeah like right clicking, except slower, and the menu options aren't consistent. Oh some other elements you tap, but you tap HARD on. Some elements you tap and hold hard on, and here is the manual for all the ways you can tap, double tap, tap and hold, and some other gestures, for that one single button you have. If you get a newer iPhone, the button is gone and instead there are just magic swipes you do from off screen in different directions and distances to get different behaviors."
And then try explaining how the file system thing is all sorts of weird.
PalmPilots were stupid simple to use though. IMHO they were about the height of portable device usability, just enough external buttons that it was possible to pull it out of your pocket and push the desired hotkey all with one smooth motion.
I miss having my todo list on a PalmPilot. Talk about being super organized. When it comes to being organized, today's cell phones are a joke in comparison.
Samsung ALMOST got it correct with the s-pen on the note, but they unpin notes from the lock screen after 10 minutes, and you have to tap on the note a few times to edit it; you can't just write on the lock screen note with the s-pen.
As an aside, I am incredibly irritated at how Samsung went 90% of the way to making an amazing productivity tool and then dropped the bloody ball. :/
IIRC, the creator of the Palm Pilot carried a block of wood around in his back pocket, pulling it out and imagining going through steps with it to work out the form factor and design of the applications.
Also, I knew someone who continued to use Palm's Windows software as her primary calendar and organization software years after Palm had begun fading away, and despite having never owned a Palm device! It was hard to fault her for it, since it was fast and robust.
tbh it can't be that difficult, no more so than teaching a video game without a tutorial. You have three primary inputs on an older iphone: tap, tap and hold, and home button. Additionally, zooming can be done with a "pinch" motion.
Tell them that much and toss them the phone; they'll probably figure out 80% of its usage within 10 minutes. The hard part was done with the advent of the GUI: mapping physical input to digital movement on a 2D screen. The major change is that the mouse has been replaced with your finger, but UX-wise it's mostly the same. The main difficulty with a game is getting used to the controller, but even then, conceptually it's not that big of a leap from a mouse.
Given that there is a _lot_ of room for mistakes, the fastest teaching is to just let them fuck around with it. The same as with a video game: hand them a controller and they'll figure out (most of) the rest. There's not that many things you can do with it. They'll mostly miss out on the non-essentials, as they reach a point where exploration is no longer explicitly necessary.
I can't comment on the newer iphone, as I never used it, but the general rules are the same. If I can trust someone to figure out most of a game with a controller of 16 inputs, I can probably trust someone to figure out most of an iphone with some 8. Maybe not your grandmother, but otherwise.
In other words
Once you understand the mouse, you understand everything (up to today)
The gestures in modern iOS really are a bizarre hodgepodge, a "mystery meat menu" if you will, and this is from someone who usually loves and defends Apple's UX decisions.
on the iPad:
- Notification Center: Swipe down from the center of the top edge.
- Search: Swipe down anywhere on the Home Screen, away from the top edge.
- Control Center: Swipe down from the top-right edge, but in that region a difference of a few pixels will either get you the Notification Center, Search, or pull out a secondary app from the side in Slide-Over Multitasking!
- Multitasking: Slide-Over apps can be moved to the left side, but they cannot be swiped off the screen when they're on the left side; you have to throw them to the right side then swipe them off the right edge to dismiss them.
This kind of shit is what people made (make) fun of Microsoft for. There's no way to look up the information for those controls except searching on the internet or when it [randomly] pops up in the Tips app.
However, the claim that the "file system is all sorts of weird" couldn't be further from the truth, and is only based on comparisons with document-centric systems.
Everything is generally organized by apps, which can actually be more intuitive in many ways, especially to people who have no prior experience with computers, while iCloud Drive and the Files app still let you use document-centric workflows if you want to.
I do remember taking a while to work out which things needed single clicks and which needed double clicks when I was first introduced to Windows 95. It was basically undiscoverable for me and I needed someone to tell me a) that double clicking existed and b) to start a program from the desktop I needed to double-click on the icons.
I think until then I'd only ever used a Mac desktop, which didn't do the click-to-select thing.
It might be similar to having someone from today work on an Apple Newton. Touchscreen and buttons are pretty familiar but the cursive recognition and things like the modem interface would be really weird.
Wait, does the iPhone actually have a manual that tells you those things? Android and Windows Phone basically don't have manuals, so I assumed iPhones don't either.
Apple's ad campaigns serve as their tutorials. Rather effective ones at that.
Their ads always show off the newest interface changes, from the first iPhone ad showing off tapping and pinch to zoom to newer ads showing off force touch and swipe gestures.
win95 is still peak average GUI to me. not the full os, just the ux idioms, even if they are limited (unlike the ultra dynamic material design); for graphical ergonomy, they're the 80%. xerox may have had the 90% gui but history couldn't take that road.
I think Classic MacOS/System 7 (from the same era) holds up pretty well too. It's been a long, slow decline ever since. Hopefully the trends will come back around soon.
EDIT: I just remembered a video I found a month ago with a mock-up of a "Windows 95 Mobile" OS: https://www.youtube.com/watch?v=D0DDQumaaCg The screens that look like they were mocked up specifically for the video (as opposed to the ones that are just screenshots of Windows 9x-era desktop applications) look like they would make for quite a nice UX.
There was a lot I liked about System 7, but I always hated how Mac OS handled multitasking. And Mac OS 8 IMO was a massive improvement aesthetically. Back in the day when OS 8 was stuck in development hell, I was a diehard Aaron user just because I liked the look and feel so much.
Windows 95's taskbar was a godsend. It was IMO the first GUI approach to multitasking that worked, and on top of that you had Alt-Tab which was the icing on the cake. Mac OS tried to emulate that with Command-Tab, but it never worked as well as on Windows (IMO the only desktop to ever improve on Windows Alt-Tab was KDE, especially when they switched to vertical layout in 3.3 or so).
Now, if you started talking about merging the Win95 multitasking UI with the rest of the System 7 GUI and Mac OS 8's theme, you've got my ear.
I'd give time to a tiny bsd/linux/rtos with a minuscule gui layer borrowing a bit from macos and windows. A secure, somewhat modern reincarnation of the old T. rexes.
That does look nice! The scrollbars are missing, though. I know they're ugly, but the classic scrollbar provides all kinds of useful functionality that's lacking in modern UIs.
(Also the UI elements are way too small to actually use on a touch screen without a stylus. But I wish we could get them back on our non-touchscreen PCs.)
yeah quite probably, I never owned a classic mac (only a mac mini, but it ran headless)
As a teen I despised this gui paradigm, I found it too limited, I actually wanted something like a smalltalk image or an html/js thing where you could extend all the things at will. But with age, I learned that it's rarely necessary.
> or even an iphone (after getting past the touchscreen).
Indeed. The iPhone's home screen is basically a nicer looking Program Manager. Mac fans laughed at Windows users during the 90s for having only Program and File Manager instead of a human-friendly spatial desktop.
Microsoft Network was the first online service my family subscribed to in fall 1995. I recall our problem being that you had to dial in to different phone numbers for the walled garden and for the web. And we only had local phone numbers for the walled garden. So while the UI was pretty sweet, there wasn't much there and I recall being angry that I couldn't use Internet Explorer and access espnet.sportszone.com without dialing long distance. When we switched to AOL in 96 they had far more stuff and it was easier to get on the web.
Although in a bit of a haze by now, I have such fond memories of the MSN. In my early twenties, I had gotten a beta CD of Win95 from dad who ran an IT shop. I stumbled upon the MSN, which had an access point in Norway, and before I knew it I was on the Frog Pond chatting away with really cool people - probably quite a few from Microsoft. Can only remember one name of the group: Pam!
Win95 was also a giant step up from 3.1, and this being a year or two before I got on the Internet, it was indeed very exciting stuff. I had left my Amiga a couple of years before, and finally the PC was getting to a similar level of good! I somehow started receiving interim builds by FedEx as well, so I must have been providing some feedback that was useful to them.
Back then, at my age, that felt like a huge deal, like I was really part of an insider group. :-)
I've always liked the everything-is-an-object paradigm. This makes me realize how close we are to a universal standard for them.
Imagine an executable zip file with a few default files that define things like metadata, or actual data. Then imagine another zip file that can inherit this first zip file as defaults, and override them. Imagine the second one can be included in the first. Now imagine that changes to any of these files creates a new zip with just the deltas. And finally, an API wraps the zip files so that when you operate on the last one, it transparently maps the previous changes, copy-on-write style.
I think this should be an operating system extension, but the great part is, you can implement it all as libraries and ship it to any OS. You then have all the features of zip (archive, compression, encryption, etc), the simplicity of simple key-value data objects in JSON, a copy-on-write mechanism, and since it's both an executable and an archive, it can provide all its own dependencies.
This would effectively solve the need for Docker, and not impose the problems of statically compiled single-platform binaries, because everything is in a ZIP file, so you can provide all dependencies (including for multiple platforms), version them, and write changes to new versioned objects. And self-extracting ZIP files already exist.
Tie this to HTTP so you can pull objects from a remote store and you've mostly solved application deployment.
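A minimal sketch of the layering part of this idea, assuming each object is just a zip whose files can be overridden by later delta zips (all class and file names here are invented for illustration):

```python
import zipfile

class LayeredObject:
    """An 'object' made of stacked zip layers; later layers win."""
    def __init__(self, *zips):
        # Each element may be a path or a file-like object.
        self.layers = [zipfile.ZipFile(z) for z in zips]

    def read(self, name):
        # Copy-on-write style lookup: check the newest delta first,
        # falling back through older layers to the base object.
        for layer in reversed(self.layers):
            if name in layer.namelist():
                return layer.read(name)
        raise FileNotFoundError(name)
```

A real version would add the executable wrapper, the inheritance metadata, and the HTTP-backed remote store, but the core copy-on-write lookup really is about this simple.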
That zip system you describe isn't far from how most of the App Stores work today. Android's APK and Windows' MSIX (fka APPX) are "just" zip files with particular metadata. Updates are often delivered as delta zip files. MSIX even uses a copy-on-write methodology to support in-place updates, modifications, and extensions. (Yes, there are tools to modify MSIX installs to extend apps or customize them. Most of them are targeted to corporate needs, but some of them are already used for games, too, thanks to the Xbox.)
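Since these package formats are plain zip archives under the hood, the standard library can peek inside one (the "app.apk" path in the comment is a placeholder, not a real file):

```python
import zipfile

def list_package(path_or_file):
    """List the contents of an APK/MSIX-style package (it's a zip)."""
    with zipfile.ZipFile(path_or_file) as z:
        return z.namelist()

# e.g. list_package("app.apk") on a real APK would show entries like
# AndroidManifest.xml, classes.dex, resources.arsc, ...
```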
The Amiga had datatypes, which allowed an application program to install a reader and writer for its peculiar file format, OS wide. So say you had Photoshop for the Amiga. Not sure if that was a thing, but bear with. It would install a datatype for .psd files that would allow you to view such files in your favorite image viewer, or even a web browser.
For the most part it was done via datatypes library, so had that survived into an era of internet security most of the fixes would have been in the shared libraries.
Same went for a lot of shared common features. Update the OS and every program would update look and file requesters etc. None of the strange Windows mix of software that clearly looks of the previous release, or sometimes of the next. Or half a dozen legacy file requester dialogues.
Java JARs are zips that contain files defining code and data. Other JARs can sub-class that code and change how the data is processed or returned.
Maven is (amongst other things) a protocol for exporting versioned sets of JARs that can express dependencies on each other via a filesystem accessible using HTTP. You can specify a versioned zip with a simple coordinate like "com.example:zipname:LATEST" and it'll download and cache the zips with all their dependencies. And it's all cross platform and standardised, with multiple implementations of all these components.
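The coordinate-to-path mapping Maven uses is deterministic enough to sketch in a few lines (this is the standard Maven repository layout; the coordinate is the example one from above):

```python
def maven_path(group, artifact, version, ext="jar"):
    """Map a Maven coordinate to its relative repository path:
    the groupId's dots become directory separators."""
    return "{}/{}/{}/{}-{}.{}".format(
        group.replace(".", "/"), artifact, version,
        artifact, version, ext)

# maven_path("com.example", "zipname", "1.0")
#   -> "com/example/zipname/1.0/zipname-1.0.jar"
```

Prefix that with any repository base URL (e.g. Maven Central's) and you have the download location, which is why mirroring and caching Maven repositories is so easy.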
It's an extreme case of inventing a technology before the support infrastructure is ready for it, like building a car before refined fuel existed. In the case of COM and OLE, it meant building an inter-application object system without a viable object-oriented programming language to do it in. So developers had to build the machinery by hand in C in each COM hosting application.
Before going to Microsoft, I had used COM and OLE via VB, FoxPro, and probably a few other higher-level abstractions. Then I went to Microsoft and worked on OLE DB, using C. O...M...G. CreateObject() suddenly turned into (what seemed like) a 1000 lines of almost-boilerplate. That ADO AddRecord() that was so easy in classic VB? Ugh, if only you knew what was behind that. COM was a clever idea to solve a hard problem, but if one had to work under the covers it was some sausage you were definitely better off not knowing the ingredients of. I mean, COM mostly worked and wasn't bad once you got the hang of it, just like pig snout in your sausage isn't bad for you. It's just kind of unappealing at first.
Backward compatibility is easier if the OS APIs are as terse and featureless as possible. Higher-level frameworks (VB, .NET, VCL, etc.) can then be distributed independent of OS releases with multiple versions installed simultaneously.
UWP is the future; Win32's long-term roadmap is to join Win16.
In any case, one of the points of the April 2018 release and the upcoming Redstone 5 one is that those models are being merged.
And yes, the next Office version for Windows 10 is Store-only and, as is Office tradition, is the one responsible for many of the new UI controls (UWP Fluent Design) in the upcoming Redstone 5 version.
Ironically, we've gone backwards from COM/OLE. After all the object/component hype of the 1990s, that seems to have completely died out. I recently discovered that not only does Microsoft Edge bundle an incredibly shitty PDF viewer with broken text search (and a broken UI for it), but it has no component model and can't embed a proper PDF viewer either! This is technology that worked fine for almost two decades!
And now you can't even embed Edge into your own applications.
Microsoft recently released a new web-view control for desktop programs which actually works by hosting Edge in its own process and blitting the rendered page back into your application.
> building an inter-application object system without a viable object oriented programming language to do it in.
Objective-C + Cocoa and their predecessors on NeXT seem like they were pretty ahead of their time too, except they actually worked well and still often feel more intuitive than Microsoft's labyrinthine APIs (though I'm told MS has been improving in that area lately, usually by pjmlp :)
I still get surprised by how modular, extensible and hackable Cocoa/Obj-C is.
Check out [0], not the most upstanding example, but something fun nonetheless that I discovered a few days ago.
Well, I have been doing Windows stuff since 3.0. Not that I have always been happy with how MS has gone about it. :)
Yes, NeXT/OS X was/is the only UNIX that doesn't suck as a desktop developer experience.
And I had the pleasure of getting to know the NeXT Cube when doing my final assignment, rescuing my supervisor's graphics work from NeXT into Windows 95, before the acquisition came to be.
However, many here don't know Objective-C + Cocoa, as they use it as a pretty Linux instead, running Electron apps.
The COM side rant is still relevant today. All of the UWP framework for "modern Windows Store apps" is built on top of COM. It's like a time warp. The JavaScript and .NET SDKs for UWP apps are just wrappers around what is really a COM API. Often you are dealing with low level `COMException` exceptions (in .NET land) for things that the .NET SDK hasn't got a contingency for.
For those who missed it: "Microsoft promotional video, where Chandler and Rachel from Friends learn how to look at cat pictures on The Microsoft Network"
I'd never seen this video before. It made me realise how terrible hierarchical things are, especially with a mouse. Click, click, click, click, click, click.
Give me "locate", or the Mac's Spotlight, any day of the week.
Hierarchies are great if you get to manage them and stay on top of them. The Windows Start Menu search is frustrating, since you're at the mercy of an algorithm. The other day I hit the Windows key, typed the first few letters of an application I regularly use, hit the enter key, and it launched the "Uninstall X" item. It just decided to sort that one higher that day. Shrug.
Good article, but the critique of the shell is misplaced. The shell provides a universal and extensible way to browse anything that makes sense to browse as a file system, without resorting to drivers. Yes, there's overhead that comes with that, but it allows you to plug new things into Explorer without waiting for Microsoft to do anything about it.
Also, FWIW, GetOpenFileName [1] turns the spaghetti from the article into just a few lines of code.
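For reference, those "few lines" look roughly like this. This is a Win32-only sketch (it won't build outside Windows, and the filter string and flags here are just illustrative choices), using the documented fields of `OPENFILENAMEA` from `<commdlg.h>`:

```c
/* Hedged sketch of the classic common dialog: fill in an OPENFILENAME
 * struct, call GetOpenFileName, and read the chosen path back out. */
#include <windows.h>
#include <commdlg.h>

BOOL pick_file(HWND owner, char *path, DWORD cch) {
    OPENFILENAMEA ofn;
    ZeroMemory(&ofn, sizeof ofn);
    path[0] = '\0';                       /* no initial file name */
    ofn.lStructSize = sizeof ofn;
    ofn.hwndOwner   = owner;
    ofn.lpstrFile   = path;               /* receives the selected path */
    ofn.nMaxFile    = cch;
    ofn.lpstrFilter = "All Files\0*.*\0"; /* illustrative filter */
    ofn.Flags       = OFN_FILEMUSTEXIST | OFN_PATHMUSTEXIST;
    return GetOpenFileNameA(&ofn);        /* FALSE on cancel */
}
```

Compare that with the COM `IFileOpenDialog` "Basic Usage" sample the article screenshots, which needs CoInitialize, CoCreateInstance, and a cascade of HRESULT checks to do the same job.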
GetOpenFileName is mentioned in the article as being the deprecated API that opens the old common dialog boxes. The first sentence in the documentation you linked to says to use the new COM API instead.
Yes, and upon clicking to the new COM API page, you are presented with the 11-deep nested if() "Basic Usage" monstrosity whose screenshot is also in the article.
I bet a lot of programmers just said "lol no. 'deprecated' my arse." and stayed with GetOpenFileName, as I did. MS isn't going to break countless apps by removing it.
Nah, the Chicago shell was a rubbish implementation of a common (for the time) concept. I tried to use it and extend it back in the 90s. What a disaster.
Firstly, it used binary paths with a complex and opaque structure. In fact there was no way to build a parser for such paths because each component was effectively a serialised object provided by each shell extension.
Now, I'm partial to binary structures myself. But as the article notes, this meant you couldn't do basic things like access it from a command line, or copy and paste a path into a text document (like, say, a SCRIPT!). And of course no existing software that used normal file APIs could understand such paths because they were all built on the expectation that paths are strings. So basically your shell extension was browseable by nothing except Explorer.
In theory you could write COM objects that would embed into Explorer and make your custom shell folders browseable. But there was virtually no documentation on how to do this, and it was extremely easy to completely screw up your entire Windows install this way because isolation in Windows 9x wasn't that great. I managed to hose my Explorer on more than one occasion trying to make this work.
Even if you got it working, what it let you do was replace the entire file viewer screen. But what if you wanted to express your plugin as a set of more files and folders? Oh, well then you needed to do even more work to reuse the standard shell views. You didn't get it by default. And of course, again, no useful examples or docs of any sort... all in a pre-StackOverflow era.
I was actually quite paranoid in those days that Microsoft was trying to take over computing in its entirety. We were looking at a future where every computer would run Windows and only be able to connect to a network owned by Microsoft. All the protocols and document formats would be proprietary.
But was it really paranoia if it's actually what Microsoft was trying to achieve? I was quite surprised when they were defeated by the Internet and had to scramble to buy a web browser. That led to the "Best viewed with Internet Explorer" days, but that's another story.
All I remember about The Microsoft Network is that it would kill my CompuServe setup by putting a new winsock.dll file in place. It was the only file that Windows 95 would detect as "corrupt" when I reinstalled the CompuServe one.
The conclusion of the article is spot-on. Our current large platforms are only marginally better than the walled gardens of the online systems of the 1990s.
Okay, I really enjoyed this. Well written and entertaining. I was at the time on the other end of this, with OS/2, where everything really was an object you could do stuff with. Need a printer? Open Templates and drag off a printer one, fill in some info. I really liked the WPS in OS/2. Some of that stuff you cannot do even today. Moved on to Solaris and IRIX (I know, I was doing that internet thing at an ISP) and then NeXTStep. Hey Apple, I want the damn NeXT shelf back!
Very interesting article, didn't even know this existed.
My first "proper" computer that my parents bought was running Windows 98, so will have missed this, but I remember using Windows 95 at school.
We also missed the whole walled garden thing (I'm from the UK, I think AOL tried to break into the market here but not sure if it succeeded) and made our first forays onto the internet using dialup via an ISP.
Interesting to see what a few years before my first time on the web was like.
The "everything is an object" thing sort of reminds me of Plan9 with a bit of BeOS too.
I'm from the UK too: we had a Dell XPS Dimension back in 1996. It came with the OSR-1 release, which still had the original shell-based MSN client, so we signed up for the free 30-day trial included in the box so we could get on the Internet as well as check out the MSN stuff. I have memories of clicking around the original Explorer-shell-based Internet Gaming Zone (zone.com would launch later); then the free trial expired and we switched to CompuServe and Netscape.
Only AOL was able to eke out a market in the UK; often the exclusive content it had was too American-centric to be of interest to Europeans, though I remember CompuServe was popular in academia. When we were on CompuServe we only used it to get on the Internet, so after subscription-free ISPs like FreeServe and LibertySurf launched we instantly ditched CompuServe.
“This is what makes the whole COM system really really stupid. It's like they don't want you to be able to work with it. They want to make it hard for you to access and share data, because code that were to do something wreckless like call fopen is code that could run on anyone's operating system, not just theirs.”
Well doh !
Bill Gates 1987: “OS/2 is the platform for the nineties”
I had the chance to be part of a beta tester group for The Microsoft Network in my country for roughly two years. While the network's own content wasn't that attractive, it could also be used to access other Internet services, similar to what AOL provided.
Beta testers were given access through a toll-free number. Exciting times for teenage me, after years of fights over BBS-inflated monthly family phone bills.
Interesting piece up until the COM rant tangent. COM was brilliant. It was complex because what it did was complex. It provided a binary contract for unrelated software packages to communicate. It was an absolute breeze to work with in VB, something I didn't appreciate in my C++ snob days.
The problem with COM wasn't COM, it was the dominant 'serious' programming language of the day.
Hackers can't deal with words like Hell. Should have been Heck. In a couple of decades, on a wholly new interconnected network, we'll see an article titled, "What the Hell Was a Hacker?"
I think it’s less about being a “bad” word and more about not injecting a bit of bias in before the reader even gets to the article. HN is big on “sterile” titles. I believe this falls under “gratuitous adjectives” in the guidelines.
Active Desktop - not sure what Windows version that shipped with, but the long-term strategies and ideas were kinda cool. Oh, nostalgia. I believe Microsoft, soon after shipping 95 and threatened by Netscape, realized that the Internet was the future, and the browser the OS. The rest is an antitrust lawsuit settled more than a decade later.
Shipped with 98, although you could add it to Windows 95 by installing IE4.
This also gave 95 the Quick Launch bar with "Show desktop" next to Start, if I remember correctly.
"You see, back in 1994 Facebook hadn't yet invented the Internet, so if you wanted to go online and let your computer transfer viruses to and from other computers, you had to use one of the many private networks."
Wrong; the Internet was alive and well, and anybody who had a clue was looking at it. Microsoft, with its typical arrogance, thought it could build its own and supersede the Internet. It was a monumental failure, and The Microsoft Network became a joke and an embarrassment.
This view is not even from the US, but from the outside, a place where the whole country's internet connection went via a single link.
Active Desktop came with IE4, worked just as badly on Win95.
As a concept it wasn't terrible, but on the machines of the day it was massively unstable and a resource hog, and almost every home user was on dialup, which meant the desktop content couldn't update unless you were actively using the internet.