How Microsoft turns an obsession with detail into micron-optimized keyboards

Source: Microsoft

Nestled among the many indistinguishable buildings of Microsoft’s Redmond campus, a multi-disciplinary team sharing an attention to detail that borders on fanatical is designing a keyboard… again and again and again. And one more time for good measure. Their dogged and ever-evolving dedication to “human factors” shows the amount of work that goes into making any piece of hardware truly ergonomic.

Microsoft may be known primarily for its software and services, but cast your mind back a bit and you’ll find a series of hardware advances that have redefined their respective categories:

The original Natural Keyboard was the first split-key, ergonomic keyboard, the fundamentals of which have only ever been slightly improved upon.

The IntelliMouse Optical not only made the first truly popular leap away from ball-based mice, but did so in such a way that its shape and buttons still make its descendants among the best all-purpose mice on the market.

Remember me?

Although the Zune is remembered more for being a colossal boondoggle than a great music player, it was very much the latter, and I still use and marvel at the usability of my Zune HD. Yes, seriously. (Microsoft, open source the software!)

More recently, the Surface series of convertible notebooks have made bold and welcome changes to a form factor that had stagnated in the wake of Apple’s influential mid-2000s MacBook Pro designs.

Microsoft is still making hardware, of course, and in fact it has doubled down on its ability to do so with a revamped hardware lab filled with dedicated, extremely detail-oriented people who are given the tools they need to get as weird as they want — as long as it makes something better.

You don’t get something like this by aping the competition.

First, a disclosure: I may as well say at the outset that this piece was done essentially at the invitation (but not direction) of Microsoft, which offered the opportunity to visit their hardware labs in Building 87 and meet the team. I’d actually been there before a few times, but it had always been off-record and rather sanitized.

Knowing how interesting I’d found the place before, I decided I wanted to take part and share it at the risk of seeming promotional. They call this sort of thing “access journalism,” but the second part is kind of a stretch. I just think this stuff is really cool, and companies seldom put their design processes on display like this. Microsoft obviously isn’t the only company to have hardware labs and facilities like this, but they’ve been in the game for a long time and have an interesting and almost too detailed process they’ve decided to be open about.

Although I spoke with perhaps a dozen Microsoft Devices people during the tour (which was still rigidly structured), only two were permitted to be on record: Edie Adams, Chief Ergonomist, and Yi-Min Huang, Principal Design and Experience Lead. But the other folks in the labs were very obliging in answering questions and happy to talk about their work. I was genuinely surprised and pleased to find people occupying niches so suited to their specialities and inclinations.

Generally speaking, the work I got to see fell into three spaces: the Human Factors Lab, focused on very exacting measurements of people themselves and how they interact with a piece of hardware; the anechoic chamber, where the sound of devices is obsessively analyzed and adjusted; and the Advanced Prototype Center, where devices and materials can go from idea to reality in minutes or hours.

The science of anthropometry

Inside the Human Factors lab, human thumbs litter the table. No, it isn’t a torture chamber — not for humans, anyway. Here the company puts its hardware to the test by measuring how human beings use it, recording not just simple metrics like words per minute on a keyboard, but high-speed stereo footage that analyzes how the skin of the hand stretches when it reaches for a mouse button down to a fraction of a millimeter.

The trend here, as elsewhere in the design process and labs, is that you can’t count anything out as a factor that increases or decreases comfort; the little things really do make a difference, and sometimes the microscopic ones.

“Feats of engineering heroics are great,” said Adams, “but they have to meet a human need. We try to cover the physical, cognitive, and emotional interactions with our products.”

(Perhaps you take this, as I did, as — in addition to a statement of purpose — a veiled reference to a certain other company whose keyboards have been in the news for other reasons. More on this later.)

The lab is a space perhaps comparable to a medium-sized restaurant, with enough room for a dozen or so people to work in the various sub-spaces set aside for different highly specific measurements. Various models of body parts have been set out on work surfaces, I suspect for my benefit.

Among them are that set of thumbs, in little cases looking like oversized lipsticks, each with a disturbing surprise inside. These are all cast from real people, ranging from the small thumb of a child to a monster to which, had it declared war on mine, I would have surrendered unconditionally.

Next door is a collection of ears, not only rendered in extreme detail but with different materials simulating a variety of rigidities. Some people have soft ears, you know. And next door to those is a variety of noses, eyes, and temples, each representing a different facial structure or interpupillary distance.

This menagerie of parts represents not just a continuum of sizes but a variety of backgrounds and ages. All of them come into play when creating and testing a new piece of hardware.

“We want to make sure that we have a diverse population we can draw on when we develop our products,” said Adams. When you distribute globally it is embarrassing to find that some group or another, with wider-set eyes or smaller hands, finds your product difficult to use. Inclusivity is a many-faceted gem; indeed it has as many facets as you are willing to cut. (The Xbox Adaptive Controller, for instance, is a new and welcome one.)

In one corner stands an enormous pod that looks like Darth Vader should emerge from it. This chamber, equipped with 36 DSLR cameras, produces an unforgivingly exact reproduction of one’s head. I didn’t do it myself, but many on the team had; in fact, one eyes-and-nose combo belonged to Adams. The fellow you see pictured there also works in the lab; that was the first such 3D portrait they took with the rig.

With this they can quickly and easily scan in dozens or hundreds of heads, collecting metrics on all manner of physiognomical features and creating an enviable database of both average and outlier heads. My head is big, if you want to know, and my hand was in the upper range too. But well within a couple standard deviations.

So much for static study — getting reads on the landscape of humanity, as it were. Anthropometry, they call it. But there are dynamic elements as well, some of which they collect in the lab, some elsewhere.

“When we’re evaluating keyboards, we have people come into the lab. We try to put them in the most neutral position possible,” explained Adams.

It should be explained that by neutral, she means specifically with regard to the neutral positions of the joints in the body, which have certain minima and maxima it is well to observe. How can you get a good read on how easy it is to type on a given keyboard if the chair and desk the tester is sitting at are uncomfortable?

Here as elsewhere the team strives to collect both objective data and subjective data; people will say they think a keyboard, or mouse, or headset is too this or too that, but not knowing the jargon they can’t get more specific. By listening to subjective evaluations and simultaneously looking at objective measurements, you can align the two and discover practical measures to take.

One such objective measure involves motion capture beads attached to the hand while an electromyographic bracelet tracks the activation of muscles in the arm. Imagine if you will a person whose typing appears normal and of uniform speed — but in reality they are putting more force on their middle fingers than the others because of the shape of the keys or rest. They might not be able to tell you they’re doing so, though it will lead to uneven hand fatigue, but this combo of tools could reveal the fact.

“We also look at a range of locations,” added Huang. “Typing on a couch is very different from typing on a desk.”

One case, such as a wireless Surface keyboard, might require more of what Huang called “lapability,” while the other perhaps needs to accommodate a different posture and can abandon lapability altogether.

A final measurement technique that is quite new to my knowledge involves a pair of high-resolution, high-speed black and white cameras that can be focused narrowly on a region of the body. They’re on the right, below, with colors and arrows representing motion vectors.


A display showing various anthropometric measurements.

These produce a very detailed depth map by closely tracking the features of the skin; one little patch might move further than the other when a person puts on a headset, suggesting it’s stretching the skin on the temple more than it is on the forehead. The team said they can see movements as small as ten microns, or micrometers (therefore you see that my headline was only light hyperbole).

You might be thinking that this is overkill. And in a way it most certainly is. But it is also true that by looking closer they can make the small changes that cause a keyboard to be comfortable for five hours rather than four, or to reduce error rates or wrist pain by noticeable amounts — features you can’t really even put on the box, but which make a difference in the long run. The returns may diminish, but we’re not so far along the asymptote approaching perfection that there’s no point to making further improvements.

The quietest place in the world

Down the hall from the Human Factors lab is the quietest place in the world. That’s not a colloquial exaggeration — the main anechoic chamber in Building 87 at Microsoft is in the record books as the quietest place on Earth, with an official ambient noise rating of negative 20.3 decibels.
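If a negative decibel rating sounds impossible, remember that dB SPL is a logarithmic ratio against a reference pressure of 20 micropascals, roughly the quietest fluctuation a human ear can detect; negative values just mean pressures below that reference. A quick sketch of the arithmetic (the standard acoustics formula, nothing from Microsoft's lab):

```python
# dB SPL is defined as L = 20 * log10(p / p_ref), where p_ref = 20 micropascals
# is approximately the threshold of human hearing.
P_REF = 20e-6  # pascals

def pressure_from_db_spl(level_db):
    """Invert the dB SPL formula to recover sound pressure in pascals."""
    return P_REF * 10 ** (level_db / 20)

# 0 dB SPL is the reference itself; the chamber's -20.3 dB rating works out
# to about two millionths of a pascal of ambient pressure fluctuation.
chamber_pressure = pressure_from_db_spl(-20.3)
```

At that level there is almost nothing left to measure; the practical floor is reportedly the Brownian motion of air molecules itself, at roughly -23 dB.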

You enter the room through a series of heavy doors and the quietness, though a void, feels like a physical medium that you pass into. And so it is, in fact — a near-total lack of vibrations in the air that feels as solid as the nested concrete boxes inside which the chamber rests.

I’ve been in here a couple times before, and Hundraj Gopal, the jovial and highly expert proprietor of quietude, skips the usual tales of Guinness coming to test it and so on. Instead we talk about the value of sound to the consumer, though they may not even realize they value it.

Naturally if you’re going to make a keyboard, you’re going to want to control how it sounds. But this is a surprisingly complex process, especially if, like the team at Microsoft, you’re really going to town on the details.

The sounds of consumer products are very deliberately designed, they explained. The sound your car door makes when it shuts gives a sense of security — being sealed in when you’re entering, and being securely shut out when you’re leaving it. It’s the same for a laptop — you don’t want to hear a clank when you close it, or a scraping noise when you open it. These are the kinds of things that set apart “premium” devices (and cars, and controllers, and furniture, etc) and they do not come about by accident.

Keyboards are no exception. And part of designing the sound is understanding that there’s more to it than loudness or even tone. Some sounds just sound louder, though they may not register as high in decibels. And some sounds are just more annoying, though they might be quiet. The study and understanding of this is what’s known as psychoacoustics.

There are known patterns to pursue, certain combinations of sounds that are near-universally liked or disliked, but you can’t rely on that kind of thing when you’re, say, building a new keyboard from the ground up. And obviously when you create a new machine like the Surface and its family they need new keyboards, not something off the shelf. So this is a process that has to be done from scratch over and over.

As part of designing the keyboard — and keep in mind, this is in tandem with the human factors mentioned above and the rapid prototyping we’ll touch on below — the device has to come into the anechoic chamber and have a variety of tests performed.


A standard head model used to simulate how humans might hear certain sounds. The team gave it a bit of a makeover.

These tests can be painstakingly objective, like a robotic arm pressing each key one by one while a high-end microphone records the sound in perfect fidelity and analysts pore over the spectrogram. But they can also be highly subjective: They bring in trained listeners — “golden ears” — to give their expert opinions, but also have “gen pop” everyday users try the keyboards while experiencing calibrated ambient noise recorded in coffee shops and offices. One click sound may be lost in the broad-spectrum hubbub of a crowded cafe but annoying when it comes from just across the desk.

This feedback goes both directions, to human factors and prototyping, and they iterate and bring it back for more. This progresses sometimes through multiple phases of hardware, such as the keyswitch assembly alone; the keys built into their metal enclosure; the keys in the final near-shipping product before they finalize the keytop material, and so on.

Indeed, it seems like the process really could go on forever if someone didn’t stop them from refining the design further.

“It’s amazing that we ever ship a product,” quipped Adams. They can probably thank the Advanced Prototype Center for that.

Rapid turnaround is fair play

If you’re going to be obsessive about the details of the devices you’re designing, it doesn’t make a lot of sense to have to send off a CAD file to some factory somewhere, wait a few days for it to come back, then inspect for quality, send a revised file, and so on. So Microsoft (and of course other hardware makers of any size) now use rapid prototyping to turn designs around in hours rather than days or weeks.

This wasn’t always possible even with the best equipment. 3D printing has come a long way over the last decade, and continues to advance, but not long ago there was a huge difference between a printed prototype and the hardware that a user would actually hold.

Multi-axis CNC mills have been around for longer, but they’re slower and more difficult to operate. And subtractive manufacturing (i.e. taking a block and whittling it down to a mouse) is inefficient and has certain limitations as far as the structures it can create.

Of course you could carve it yourself out of wood or soap, but that’s a bit old-fashioned.

So when Building 87 was redesigned from the ground up some years back, it was loaded with the latest and greatest of both additive and subtractive rapid manufacturing methods, and the state of the art has been continually rolling through ever since. Even as I passed through they were installing some new machines (desk-sized things that had slots for both extrusion materials and ordinary printer ink cartridges, a fact that for some reason I found hilarious).

The additive machines are in constant use as designers and engineers propose new device shapes and styles that sound great in theory but must be tested in person. Having a bunch of these things, each able to produce multiple items per print, lets you for instance test out a thumb scoop on a mouse with 16 slightly different widths. Maybe you take those over to Human Factors and see which can be eliminated for over-stressing a joint, then compare comfort on the surviving 6 and move on to a new iteration. That could all take place over a day or two.


Ever wonder what an Xbox controller feels like to a child? Just print a giant one in the lab.

Softer materials have become increasingly important as designers have found that they can be integrated into products from the start. For instance, a wrist rest for a new keyboard might have foam padding built in.

But how much foam is too much, or too little? As with the 3D printers, flat materials like foam and cloth can be customized and systematically tested as well. Using a machine called a skiver, foam can be split into thicknesses only half a millimeter apart. It doesn’t sound like much — and it isn’t — but when you’re creating an object that will be handled for hours at a time by the sensitive hands of humans, the difference can be subtle but substantial.

For more heavy-duty prototyping of things that need to be made out of metal — hinges, laptop frames, and so on — there is bank after bank of 5-axis CNC machines, lathes, and more exotic tools, like a system that performs extremely precise cuts using a charged wire.

The engineers operating these things work collaboratively with the designers and researchers, and it was important to the people I talked to that this wasn’t a “here, print this” situation. A true collaboration has input from both sides, and that is what seems to be happening here. Someone inspecting a 3D model for printability before popping it into the 5-axis might say to the designer, you know, these pieces could fit together more closely if we did so-and-so, and it would actually add strength to the assembly. (Can you tell I’m not an engineer?) Making stuff, and making stuff better, is a passion among the crew and that’s a fundamentally creative drive.

Making fresh hells for keyboards

If any keyboard has dominated the headlines for the last year or so, it’s been Apple’s ill-fated butterfly switch keyboard on the latest MacBook Pros. While being in my opinion quite unpleasant to type on, they appeared to fail at an astonishing rate judging by the proportion of users I saw personally reporting problems, and are quite expensive to replace. How, I wondered, did a company with Apple’s design resources create such a dog?


Here’s a piece of hardware you won’t break any time soon.

I mentioned the subject to the group towards the end of the tour but, predictably and understandably, it wasn’t really something they wanted to talk about. But a short time later I spoke with one of the people in charge of Microsoft’s reliability testing. They too demurred on the topic of Apple’s failures, opting instead to describe at length the measures Microsoft takes to ensure that their own keyboards don’t suffer a similar fate.

The philosophy is essentially to simulate everything about the expected 3-5 year life of the keyboard. There are “torture chambers” where devices are beaten on by robots (I’ve seen these personally, years ago — they’re brutal), but there’s more to it than that. Keyboards are everyday objects, and they face everyday threats; so that’s what the team tests, with things falling into three general categories:

Environmental: This includes cycling the temperature from very low to very high and exposing the keyboard to dust and UV. This differs for each product, since some will obviously be used outside more than others. Does it break? Does it discolor? Where does the dust go?

Mechanical: Every keyboard undergoes key tests to make sure that keys can withstand however many million presses without failing. But that’s not the only thing keyboards undergo. They get dropped and things get dropped on them, of course, or left upside-down, or have their keys pressed and held at weird angles. All these things are tested, and when a keyboard fails in the field in some way they don’t yet test for, they add a test for it.

Chemical: I found this very interesting. The team now has more than 30 chemicals that it exposes its hardware to, including: lotion, Coke, coffee, chips, mustard, ketchup, and Clorox. The team is constantly adding to the list as new chemicals enter frequent usage or new markets open up. Hospitals, for instance, need to test a variety of harsh disinfectants that an ordinary home wouldn’t have. (Note: Burt’s Bees is apparently bad news for keyboards.)

Testing is ongoing, with new batches being evaluated continuously as time allows.

To be honest it’s hard to imagine that Apple’s disappointing keyboard actually underwent this kind of testing, or if it did, that it was modified to survive it. The number and severity of problems I’ve heard of with them suggest the “feats of engineering heroics” of which Adams spoke, but directed single-mindedly in the direction of compactness. Perhaps more torture chambers are required at Apple HQ.

7 factors and the unfactorable

All the above are tools for executing a design, not for creating one to begin with. That’s a whole other kettle of fish, and one not so easily described.

Adams told me: “When computers were on every desk the same way, it was okay to only have one or two kinds of keyboard. But now that there are so many kinds of computing, it’s okay to have a choice. What kind of work do you do? Where do you do it? I mean, what do we all type on now? Phones. So it’s entirely context dependent.”


Is this the right curve? Or should it be six millimeters higher? Let’s try both.

Yet even in the great variety of all possible keyboards there are metrics that must be considered if that keyboard is to succeed in its role. The team boiled it down to seven critical points:

  • Key travel: How far a key goes until it bottoms out. Neither shallow nor deep travel is necessarily good; they serve different purposes.
  • Key spacing: Distance between the center of one key and the next. How far can you differ from “full-size” before it becomes uncomfortable?
  • Key pitch: On many keyboards the keys do not all “face” the same direction, but are subtly pointed towards the home row, because that’s the direction your fingers hit them from. How much is too much? How little is too little?
  • Key dish: The shape of the keytop limits your fingers’ motion, captures them when they travel or return, and provides a comfortable home — if it’s done right.
  • Key texture: Too slick and fingers will slide off. Too rough and it’ll be uncomfortable. Can it be fabric? Textured plastic? Metal?
  • Key sound: As described above, the sound indicates a number of things and has to be carefully engineered.
  • Force to fire: How much actual force does it take to drive a given key to its actuation point? Keep in mind this can and perhaps should differ from key to key.

In addition to these core concepts there are many secondary ones that pop up for consideration: wobble, or the amount a key moves laterally (yes, this is deliberate); snap ratio, involving the feedback from actuation; drop angle; off-axis actuation; key gap for chiclet boards… and of course the inevitable switch debate.

Keyboard switches, the actual mechanism under the key, have become a major sub-industry as many companies started making their own at the expiration of a few important patents. Hence there’s been a proliferation of new key switches with a variety of aspects, especially on the mechanical side. Microsoft does make mechanical keyboards, and scissor-switch keyboards, and membrane as well, and perhaps even some more exotic ones (though the original touch-sensitive Surface cover keyboard was a bit of a flop).

“When we look at switches, whether it’s for a mouse, QWERTY, or other keys, we think about what they’re for,” said Adams. “We’re not going to say we’re scissor switch all the time or something — we have all kinds. It’s about durability, reliability, cost, supply, and so on. And the sound and tactile experience is so important.”

As for the shape itself, there is generally the divided Natural style, the flat full style, and the flat chiclet style. But with design trends, new materials, new devices, and changes to people and desk styles (you better believe a standing desk needs a different keyboard than a sitting one), it’s a new challenge every time.

They collected a menagerie of keyboards and prototypes in various stages of experimentation. Some were obviously never meant for real use — one had the keys pitched so far that it was like a little cave for the home row. Another was an experiment in how much a design could be shrunk until it was no longer usable. A handful showed different curves a la Natural — which is the right one? Although you can theorize, the only way to be sure is to lay hands on it. So tell rapid prototyping to make variants 1-10, then send them over to Human Factors and test the stress and posture resulting from each one.

“Sure, we know the gable slope should be between 10-15 degrees and blah blah blah,” said Adams, who is actually on the patent for the original Natural Keyboard, and so is about as familiar as you can get with the design. “But what else? What is it we’re trying to do, and how are we achieving that through engineering? It’s super fun bringing all we know about the human body and bringing that into the industrial design.”

Although the comparison is rather grandiose, I was reminded of an orchestra — but not in full swing. Rather, in the minutes before a symphony begins, and all the players are tuning their instruments. It’s a cacophony in a way, but they are all tuning towards a certain key, and the din gradually makes its way to a pleasant sort of hum. So it is that a group of specialists all tending their sciences and creeping towards greater precision seem to cohere a product out of the ether that is human-centric in all its parts.



Agolo attracts Microsoft and Google funding with AI-powered summarization tools

Source: Microsoft

As the ways we consume content multiply and change, media creators are hard pressed to adapt their methods to take advantage. Short-form audio and video news is one growing but labor-intensive niche — and Agolo aims to help automate the process, pulling in the AP as a client and Microsoft, Google, and Tensility as investors.

Agolo is an AI startup focused on natural language processing, and specifically how to take a long article, like this one, and boil it down to its most important parts (assuming there are any). Summarization is the name of the process, as it is when you or I do it, and other bots and services do it as well. Agolo’s claim is to be able to summarize quickly and accurately, producing something of a quality worthy of broadcast or official documentation. Its deal with the AP provides an interesting example of how this works, and why it isn’t as simple as picking a few representative sentences.
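To see why that's harder than it sounds, consider the crude baseline: score each sentence by word frequency and keep the top few. Something like this toy sketch (a generic extractive approach for illustration only; it has nothing to do with Agolo's actual models):

```python
import re
from collections import Counter

def naive_summary(text, n_sentences=2):
    """Score each sentence by how frequent its words are across the whole
    text, then keep the top scorers in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude stopword filter

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return [s for s in sentences if s in top]
```

The limitation is visible immediately: this can only include or exclude whole sentences. It can't shorten or reword one without risking its meaning, which is exactly where a system like Agolo's has to earn its keep.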

The AP is, of course, a huge news organization and a fast-moving one. But its stories, while spare as a rule, are rarely concise enough to be read aloud by a virtual assistant when its user asks “what’s the big news this morning?” As a result, AP editors and writers manually put together scores or hundreds of short versions of stories every day specifically for audio consumption and other short-form contexts.

Since this isn’t a situation where creative input is necessarily required, and it must be done quickly and systematically, it’s a good fit for an AI agent trained in natural language. Even so it isn’t as easy as it sounds, explained Agolo co-founder and CEO Sage Wohns.

“The way that we have things read to us is different from the way we read them. So the algorithm understanding that and reproducing it is important,” he said. And that’s without reckoning with the AP’s famous style guide.

“This is one of the most important points that we worked on with them,” Wohns said. “The AP has their style bible, and it’s a brick. We have a hybrid model that has algorithms pointed at each of those rules. We never want to change the language, but we can shorten the sentence.”


That’s a risk with algorithmic summarizing, of course: that in “summarizing” a sentence you change its meaning. That’s incredibly important in the news, where the difference between a simple statement of fact and an egregious error can easily be in a single word or phrase. So the system is careful to preserve meaning if not necessarily the exact wording.

While the AP may not be given, as I am, to circumlocutions, it may still be beneficial to shift things a bit. Agolo worked closely with the news organization to figure out what’s acceptable and what’s not. A simple example would be changing something like “Statement,” said the source to The source said “Statement.” That doesn’t save any space, but you get the idea: essentially lossless compression of language.
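As a toy illustration of that kind of meaning-preserving rewrite (my own sketch, not Agolo's code), a single pattern rule might look like:

```python
import re

# Matches sentences of the exact form: "Quote," said the source.
ATTRIBUTION = re.compile(r'"(?P<quote>[^"]+)," said (?P<source>[^.]+)\.')

def reorder_attribution(sentence):
    """Rewrite '"Quote," said the source.' as 'The source said "Quote."'
    Anything that doesn't match the pattern passes through untouched."""
    m = ATTRIBUTION.fullmatch(sentence)
    if not m:
        return sentence
    quote, source = m.group("quote"), m.group("source")
    return f'{source[0].upper()}{source[1:]} said "{quote}."'
```

A real system needs a great many such rules, plus a model that knows when applying one would distort meaning; presumably that is where the hybrid model Wohns describes comes in.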

If the AP team can trust the algorithm to produce a well-worded summary that follows their rules and only takes a quick polish by an editor, they could serve and even grow the demand for short-form content. “The goal is to enable them to create more content than was humanly possible before,” said Wohns.

The investment from and collaboration with Google falls along these lines as well, though not as laser-focused on turning news stories into sound bites.

“What we’re working on with them is making the web listenable,” said Wohns. “Right now you can ask Google a question but it often doesn’t have an answer it can read back to you.”

It’s primarily a bid to extend the company’s Assistant product as it continues its combat with Alexa and Siri, but may also have the extremely desirable side effect of making the data Google indexes more accessible to blind users.

The scope of Google’s data (Agolo is probably now getting the full firehose of Google News, among other things) means that the AI model being used has to be lightweight and quick. Even if it takes only ten seconds to summarize every article, that gets multiplied thousands of times in the complex workings of sorting and displaying news all over the world. So Agolo has been very focused on improving the performance of its models until they are able to turn things around very quickly and enable an essentially real-time summary service.


This has a secondary application in large enterprises and companies with large backlogs of data like documentation and analysis. Microsoft is a good example of this: After decades of running an immense software and services empire, the number of support docs, studies, how-tos, and so on is likely choking its intranet, and search may or may not be effective on such a corpus.

NLP-based agents are useful for summarizing, but part of that process is, in a way, understanding the content. So the agent should be able to produce a shorter version of something, but also tell you that it’s by this person (useful for attribution); it’s about this topic; it’s from this date range; it applies to these version numbers; its main findings are these; and so on and so forth.

Not all this information is useful in all cases, of course, but it sure is if you want to digest 30 years of internal documentation and be able to search and sort it efficiently. This is what Microsoft is using it for internally, and no doubt what it intends to apply it to as part of future product offerings or partnerships. (Semantic Scholar has applied a similar approach to journals and academic papers.)

It would also be helpful for, say, an investment bank analyst or other researcher, who can use Agolo’s timeline to assemble all the relevant documents in order, grouped by author or topic, with the salient information surfaced and glanceable. One pictures this as useful for Google News as well in browsing coverage of a specific event or developing story.

The new (undisclosed amount of) funding has Microsoft (M12 specifically) returning, with Google (Assistant Investment Group specifically) and Tensility Venture Partners joining for the first time. The cash will be used in the expected fashion of a growing startup: chasing sales and a few key hires.

“It’s about building out the go-to-market side, and the core NLP abilities of the team, specifically in New York and Cairo,” said Wohns. “Right now we’re about a 90 percent technical team, so we need to build out the sales side.”

Agolo’s service seems like a useful tool for many an application — anywhere you have to reduce a large amount of written content to a smaller amount. Certainly that’s common enough — but Agolo will need to prove that it can do so as non-destructively and accurately as it claims with a wide variety of datasets, and that this process contributes to the bottom line more than the time-tested method of hiring another intern or grad student to perform the drudgery.


Agolo attracts Microsoft and Google funding with AI-powered summarization tools

Destiny 2 goes free to play and gains cross-saving on all platforms

Source: Microsoft

Bungie aims to fortify the popular but flagging Destiny 2 with an expanded free-to-play plan and universal cross-platform saving, the company announced today. It’s an interesting and player-friendly evolution of the “games as a service” model, and other companies should take note.

The base game, which is to say the original campaign and the first year of updates, will be available on PC, Xbox One, PlayStation 4, and Google Stadia. You can play as much as you want, and your progress will be synced to your account, so you can do some easy patrols on console and then switch to your PC’s mouse and keyboard for the more difficult raids.

The PS4 cross-save ability is a surprise, since Sony has resisted this sort of thing in the past, and rumors before the announcement had it that PlayStation players would be left out of the bargain. It’s heartening to see this level of cooperation, if that’s what it is, in the new gaming economy.

As part of Bungie’s separation from Activision, which published Destiny 2 to begin with, the game is now switching over to Steam on the PC. That’s probably a good thing for most, and you won’t lose any progress. It’s also being renamed “Destiny: New Light,” because why not?

Importantly, no platform will have any content advantage over another — no Xbox-specific guns or PC-specific levels. At a time when consoles are fighting one another on the basis of exclusives, this is a breath of fresh air.

The news was announced in a stream this morning, though players got a sneak peek when a publication I shall not name posted it slightly early. We also learned a bit ahead of Bungie’s announcement when Google’s Stadia event showed the game coming to the streaming service for free.

Destiny 2 came out two years ago and has had a number of expansions; it has also been free for limited periods or on certain platforms a handful of times. The base game was honestly a bit threadbare and may not convince new players that it’s worth paying for more. But the price is right, and if you like the basic gameplay, the expansions, which improved considerably on the game and added a lot of content, can be bought year by year.

The move is obviously meant to help Destiny 2 compete with other games-as-services, such as the constantly improving Warframe and youth-devouring Fortnite. And it’s a good test bed for the new cross-platform economy that gamers are beginning to demand. You’ll be able to test it out for yourself on September 17, when the switchover is set to take effect — more details should be available well ahead of the relaunch.



Minecraft Earth makes the whole real world your very own blocky realm

Source: Microsoft

When your game tops a hundred million players, your thoughts naturally turn to doubling that number. That’s the case with the creators, or rather stewards, of Minecraft at Microsoft, where the game has become a product category unto itself. And now it is making its biggest leap yet — to a real-world augmented reality game in the vein of Pokemon GO, called Minecraft Earth.

Announced today but not playable until summer (on iOS and Android) or later, MCE (as I’ll call it) is full-on Minecraft, reimagined to be mobile and AR-first. So what is it? As executive producer Jesse Merriam put it succinctly: “Everywhere you go, you see Minecraft. And everywhere you go, you can play Minecraft.”

Yes, yes — but what is it? Less succinctly put, MCE is like other real-world based AR games in that it lets you travel around a virtual version of your area, collecting items and participating in mini-games. Where it’s unlike other such games is that it’s built on top of Minecraft: Bedrock Edition, meaning it’s not some offshoot or mobile cash-in; this is straight-up Minecraft, with all the blocks, monsters, and redstone switches you desire, but in AR format. You collect stuff so you can build with it and share your tiny, blocky worlds with friends.

That introduces some fun opportunities and a few non-trivial limitations. Let’s run down what MCE looks like — verbally, at least, since Microsoft is being exceedingly stingy with real in-game assets.

There’s a map, of course

Because it’s Minecraft Earth, you’ll inhabit a special Minecraftified version of the real world, just as Pokemon GO and Harry Potter: Wizards Unite put a layer atop existing streets and landmarks.

The look is blocky to be sure, but not so far off the normal look that you won’t recognize it. It uses OpenStreetMap data, including annotated and inferred information about districts, private property, safe and unsafe places, and so on, which will be important later.

The fantasy map is filled with things to tap on, unsurprisingly called tappables. These come in a few varieties: resource-filled treasure chests, mobs, and adventures.

Chests are filled with blocks, naturally, adding to your reserves of cobblestone, brick, and so on, all the different varieties appearing with appropriate rarity.

A pig from Minecraft showing in the real world via augmented reality.

Mobs are animals like those you might normally run across in the Minecraft wilderness: pigs, chickens, squid, and so on. You snag them like items, and they too have rarities, and not just cosmetic ones. The team highlighted a couple of favorites: the muddy pig, which when placed down will stop at nothing to get to mud and never wants to leave, and the cave chicken, which lays mushrooms instead of eggs. Yes, you can breed them.

Last are adventures, which are tiny AR instances that let you collect a resource, fight some monsters, and so on. For example you might find a crack in the ground that, when mined, vomits forth a volume of lava you’ll have to get away from, and then inside the resulting cave are some skeletons guarding a treasure chest. The team said they’re designing a huge number of these encounters.

Importantly, all these things, chests, mobs, and encounters, are shared between friends. If I see a chest, you see a chest — and the chest will have the same items. And in an AR encounter, all nearby players are brought in, and can contribute and collect the reward in shared fashion.

And it’s in these AR experiences and the “build plates” you’re doing it all for that the game really shines.

The AR part

“If you want to play Minecraft Earth without AR, you have to turn it off,” said Torfi Olafsson, the game’s director. This is not AR-optional, as with Niantic’s games. This is AR-native, and for good and ill the only way you can really play is by using your phone as a window into another world. Fortunately it works really well.

First, though, let me explain the whole build plate thing. You may have been wondering how these collectibles and mini-games amount to Minecraft. They don’t — they’re just the raw materials for it.

Whenever you feel like it, you can bring out what the team calls a build plate, which is a special item, a flat square that you virtually put down somewhere in the real world — on a surface like the table or floor, for instance — and it transforms into a small, but totally functional, Minecraft world.

In this little world you can build whatever you want, or dig into the ground, build an inverted palace for your cave chickens or create a paradise for your mud-loving pigs — whatever you want. Like Minecraft itself, each build plate is completely open-ended. Well, perhaps that’s the wrong phrase — they’re actually quite closely bounded, since the world only exists out to the edge of the plate. But they’re certainly yours to play with however you want.

Notably all the usual Minecraft rules are present — this isn’t Minecraft Lite, just a small game world. Water and lava flow how they should, blocks have all the qualities they should, and mobs all act as they normally would.

The magic part comes when you find that you can instantly convert your build plate from miniature to life-size. Now the castle you’ve been building on the table is three stories tall in the park. Your pigs regard you silently as you walk through the halls and admire the care and attention to detail with which you no doubt assembled them. It really is a trip.

It doesn’t really look like this but you get the idea.

In the demo, which I played with a few other members of the press, we got to experience a couple of build plates and adventures at life size (technically 3/4 life size; the one-block-to-one-meter scale turned out to be a little daunting in testing). It was absolute chaos, really, everyone placing blocks and destroying them and flooding the area and putting down chickens. But it totally worked.

The system uses Microsoft’s new Azure Spatial Anchors service, which quickly and continuously fixed our locations in virtual space. It updated remarkably quickly, with no lag, showing the location and orientation of the other players in real time. Meanwhile the game world itself was rock-solid in space and smooth to enter and explore, and it rarely bugged out (and then only in understandable circumstances). That’s great news considering how heavily the game leans on the multiplayer experience.

The team said they’d tested up to 10 players at once in an AR instance, and while there’s technically no limit, there’s sort of a physical limit in how many people can fit in the small space allocated to an adventure or around a tabletop. Don’t expect any giant 64-player raids, but do expect to take down hordes of spiders with three or four friends.

Pick(ax)ing their battles

In choosing to make the game the way they’ve made it, the team naturally created certain limitations and risks. You wouldn’t want, for example, an adventure icon to pop up in the middle of the highway.

For exactly that reason the team put a lot of work into making the map metadata extremely robust. Adventures won’t spawn in areas like private residences or yards, though of course simple collectibles might. But because you’re able to reach things up to 70 meters away, it’s unlikely you’ll have to knock on someone’s door and say there’s a cave chicken in their pool and you’d like to touch it, please.

Furthermore, adventures won’t spawn in streets or other hard-to-reach spots. The team said they worked very hard to make the engine recognize places that are not only publicly accessible, but safe and easy to access. Think sidewalks and parks.

Another limitation is that, as an AR game, you move around the real world. But in Minecraft, verticality is an important part of the gameplay, and the simple truth is that in the real world you can’t climb virtual stairs or descend into a virtual cave. You as a player exist on a 2D plane, and can interact with, but not visit, places above and below that plane. (The exception, of course, is a build plate in miniature, which you can fly around freely by moving your phone.)

That’s a shame for people who can’t move around easily, though you can pick up and rotate the build plate to access different sides. Weapons and tools also have infinite range, eliminating a potential barrier to fun and accessibility.

What will keep people playing?

In Pokemon GO, there’s the drive to catch ’em all. In Wizards Unite, you’ll want to advance the story and your skills. What’s the draw with Minecraft Earth? Well, what’s the draw in Minecraft? You can build stuff. And now you can build stuff in AR on your phone.

The game isn’t narrative-driven, and although there is some (unspecified) character progression, for the most part the focus is on just having fun doing and making stuff in Minecraft. Like a set of LEGO blocks, a build plate and your persistent inventory simply make for a lively sandbox.

Admittedly that doesn’t sound like it carries the same addictive draw of Pokemon, but the truth is Minecraft kind of breaks the rules like that. Millions of people play this game all the time just to make stuff and show that stuff to other people. Although you’ll be limited in how you can share to start, there will surely be ways to explore popular builds in the future.

And how will it make money? The team basically punted on that question — they’re fortunately in a position where they don’t have to worry about that yet. Minecraft is one of the biggest games of all time and a big money-maker — it’s probably worth the cost just to keep people engaged with the world and community.

MCE seems to me like a delightful thing but one that must be appreciated on its own merits. A lack of screenshots and gameplay video isn’t doing a lot to help you here, I admit. Trust me when I say it looks great, plays well, and seems fundamentally like a good time for all ages.

A few other stray facts I picked up:

  • Regions will roll out gradually, but it will be available in all the same languages as vanilla Minecraft at launch
  • Yes, there will be skins (and they’ll carry over from your existing account)
  • There will be different sizes and types of build plates
  • There’s crafting, but no 3×3 crafting grid (?!)
  • You can report griefers and so on, but the way the game is structured, it shouldn’t be an issue
  • The AR engine creates and uses a point cloud, but doesn’t, like, take pictures of your bedroom
  • Content is added to the map dynamically, and there will be hot spots but emptier areas will fill up if you’re there
  • It leverages ARCore and ARKit, naturally
  • The HoloLens version of Minecraft we saw a while back is a predecessor “more spiritually than technically”
  • Adventures that could be scary to kids have a special sign
  • “Friends” can steal blocks from your build plate if you’re playing together (or donate them)

Sound fun? Sign up for the beta here.



Rivals in gaming, Microsoft and Sony team up on cloud services

Source: Microsoft

For the last two decades, Sony and Microsoft’s gaming divisions have been locked in all-out war against one another: on price, on hardware, on franchises, on exclusives… you name it. But it seems they’ve set their enmity aside temporarily that they might better prevent that filthy casual, Google, from joining the fray.

The official team-up, documented in a memorandum of understanding, was announced today, though details are few. But this is clear enough:

The two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services. In addition, the two companies will explore the use of current Microsoft Azure datacenter-based solutions for Sony’s game and content-streaming services.

Of course, Sony could have gone with any number of other cloud providers for its on-demand gaming offerings. It already runs one such service, PlayStation Now, but the market is expected to expand over the next few years, much as cord-cutting has driven traditional TV and movie watchers to Netflix and other streaming services. Expansion would surely prove expensive and complicated.

The most salient challenger is likely Google and its new Stadia game streaming service, which of course has a huge advantage in its global presence, brand recognition, and unique entry points: search and YouTube. The possibility of searching for a game and playing it literally five seconds later is an amazing one, and really only something Google can pull off right now.

That makes Google a threat. And Microsoft and Sony have enough threats already, what with the two of them making every exclusive and chip partnership count, the resurgence of Nintendo with the immensely popular Switch, and the complex new PC-and-mobile-focused gaming market making consoles look outdated. Apple Arcade exists, too, but I don’t know that anyone is worried about it, exactly.

Perhaps there was a call made on the special direct line each has to the other, where they just said “truce… until we reduce Google Stadia to rubble and salt the earth. Also Nvidia maybe.”

We don’t actually have to imagine, though. As Sony President and CEO Kenichiro Yoshida noted in the announcement: “For many years, Microsoft has been a key business partner for us, though of course the two companies have also been competing in some areas. I believe that our joint development of future cloud solutions will contribute greatly to the advancement of interactive content.”

Sony doesn’t lack technical chops, or the software necessary to pull off a streaming service — but it may simply make more sense to deploy via Microsoft’s Azure than bring its own distribution systems up to par. No doubt Microsoft is happy to welcome a customer as large as Sony to its stable, and any awkwardness from the two competing elsewhere is secondary to that. Google is a more existential competitor in many ways, so it makes sense that Microsoft would favor partnering with a partial rival against it.

Sony has long been in this boat itself. Its image sensors and camera technology can be found in phones and DSLRs that compete with its own products — but the revenue and feedback it has built up as a result have let it maintain its dominance.

Speaking of which, the two companies also plan to collaborate on imaging, combining Sony’s sensor tech with Microsoft’s AI work. This is bound to find its way to applications in robotics and autonomous vehicles, though competition is fierce there and neither company has a real branded presence. Perhaps they aim to change that… together.



7 accessibility-focused startups snag grants from Microsoft

Source: Microsoft

Microsoft has selected seven lucky startups to receive grants from its AI for Accessibility program. The growing companies aim to empower people with disabilities to take part in tech and the internet economy, from improving job searches to predicting seizures.

Each of the seven companies receives professional-level Azure AI resources and support, cash to cover the cost of data collection and handling, and access to Microsoft’s experts in AI, project management, and accessibility.

Companies apply online, and a team of accessibility and market experts at Microsoft evaluates them on their potential impact, data policies, feasibility, and so on. The five-year, $25 million program started in 2018, and evaluation is a rolling process, with grants coming out multiple times per year. This round happens to land on Global Accessibility Awareness Day. So be aware!

Among this round’s grantees is Our Ability, a company started by John Robinson, who was born without complete limbs and has faced serious challenges getting and keeping a job all his life. The unemployment rate for people with disabilities is twice that of people without, and some disabilities nearly preclude full-time employment altogether.

Yet there are still opportunities for such people, who are just as likely to have a head for project management or a knack for coding as anyone else, but those opportunities can be difficult to find. Robinson is working on a site that connects companies offering jobs suited to disabled applicants with likely candidates.

“Our goal is to empower employers to understand and leverage the increasingly valuable employment population of people with disabilities, proven to lower job turnover rates and boost morale and productivity – because a commitment to an inclusive workplace culture begins within,” Robinson wrote in an email to TechCrunch. “Employers have previously been at a disadvantage to accelerate in this regard, because many job-seeking tools are not designed with people with disabilities in mind.”

CEO of Our Ability John Robinson sitting, smiling and in front of his laptop

John Robinson of Our Ability.

The plan that attracted Microsoft is Robinson’s idea to make a chatbot to help collect critical data from disabled applicants. And before you say “chatbot? What year is this?” remember that while chatbots may be passé for those of us able to navigate forms and websites easily, that’s not the case with people who can’t do so. A chat-based interface is simple and accessible, requiring little on the user’s end except basic text input.

Pison is another grantee whose technology would come in handy here. People with physical disabilities often can’t use a mouse or trackpad the way others do. Founder Dexter Ang saw this happen in person as his late mother’s physical abilities deteriorated under the effects of ALS.

His solution is to use an electromyography armband (you might be familiar with the Myo) to detect the limited movements of someone with that type of affliction and convert them into mouse movements. The company was started a couple of years ago and has been developing and testing the tech with ALS patients, who have shown that they can use it successfully after just a few minutes.

Voiceitt is a speech recognition engine that focuses on people with nonstandard voices. Disabilities and events like strokes can make a person’s voice difficult for friends and family to understand, let alone the comparatively picky processes of speech recognition.

Google recently took on this same problem with Project Euphonia, which along with the company’s other efforts towards accessibility was given an impressive amount of stage time at I/O last week.

Here are the remainder of the grantees (descriptions are from Microsoft):

 

  • University of Sydney (Australia) researchers are developing a wearable sensory warning system to help the 75 million people living with epilepsy better predict and manage seizures to live more independently.
  • Birmingham City University (United Kingdom) researchers are developing a system that enables people with limited mobility to control digital platforms using voice commands and the movement of their eyes.
  • Massachusetts Eye and Ear (Boston, MA) researchers are working on a vision assistance mobile app that offers enhanced location and navigation services for people who are blind or have low vision.
  • University of California, Berkeley (Berkeley, CA) researchers are creating a mobile app for individuals who are blind or have low vision to provide captions and audio descriptions of their surroundings.

The image up top, by the way, is from iTherapy’s InnerVoice, an app that provides AI-powered descriptions of images taken by kids who have trouble communicating. It’s a great example of cutting-edge tech being applied in a niche that helps a small population a lot rather than a large population a little.

Microsoft has been a good steward of accessibility for years and it seems to be leaning into that, as well it should. President Brad Smith had a lot to say about it in a blog post last year, and the commitment seems strong going into this one.

 

