IBM is moving OpenPower Foundation to The Linux Foundation

Source: Tech News – Enterprise

IBM makes the Power Series chips, and as part of that has open sourced some of the underlying technologies to encourage wider use of these chips. The open source pieces have been part of the OpenPower Foundation. Today, the company announced it was moving the foundation under The Linux Foundation, and while it was at it, announced it was open sourcing several other important bits.

Ken King, general manager for OpenPower at IBM, says that at this point in his organization’s evolution, they wanted to move it under the auspices of the Linux Foundation. “We are taking the OpenPower Foundation, and we are putting it as an entity or project underneath The Linux Foundation with the mindset that we are now bringing more of an open governance approach and open governance principles to the foundation,” King told TechCrunch.

But IBM didn’t stop there. It also announced that it was open sourcing some of the technical underpinnings of the Power Series chip to make it easier for developers and engineers to build on top of the technology. Perhaps most importantly, the company is open sourcing the Power Instruction Set Architecture (ISA). These are “the definitions developers use for ensuring hardware and software work together on Power,” the company explained.

King sees open sourcing this technology as an important step for a number of reasons around licensing and governance. “The first thing is that we are taking the ability to be able to implement what we’re licensing, the ISA instruction set architecture, for others to be able to implement on top of that instruction set royalty free with patent rights,” he explained.

The company is also putting this under an open governance workgroup at the OpenPower Foundation. This matters to open source community members because it provides a layer of transparency that might otherwise be lacking. What that means in practice is that any changes will be subject to a majority vote, so long as the changes meet compatibility requirements, King said.

Jim Zemlin, executive director at the Linux Foundation, says that making all of this part of the Linux Foundation open source community could drive more innovation. “Instead of a very, very long cycle of building an application and working separately with hardware and chip designers, because all of this is open, you’re able to quickly build your application, prototype it with hardware folks, and then work with a service provider or a company like IBM to take it to market. So there’s not tons of layers in between the actual innovation and value captured by industry in that cycle,” Zemlin explained.

In addition, IBM made several other announcements around open sourcing other Power Chip technologies designed to help developers and engineers customize and control their implementations of Power chip technology. “IBM will also contribute multiple other technologies including a softcore implementation of the Power ISA, as well as reference designs for the architecture-agnostic Open Coherent Accelerator Processor Interface (OpenCAPI) and the Open Memory Interface (OMI). The OpenCAPI and OMI technologies help maximize memory bandwidth between processors and attached devices, critical to overcoming performance bottlenecks for emerging workloads like AI,” the company said in a statement.

The softcore implementation of the Power ISA, in particular, should give developers more control and even enable them to build their own instruction sets, Hugh Blemings, executive director of the OpenPower Foundation, explained. “They can now actually try crafting their own instruction sets, and try out new ways of accelerated data processing and so forth at a lower level than previously possible,” he said.

The company is announcing all of this today at The Linux Foundation Open Source Summit and OpenPower Summit in San Diego.



The five technical challenges Cerebras overcame in building the first trillion transistor chip

Source: Tech News – Enterprise

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The “Wafer Scale Engine” is 1.2 trillion transistors (the most ever), 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).


Cerebras’ Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems)

It’s made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry’s big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees. You can read more about the chip from Tiernan Ray at Fortune and read the white paper from Cerebras itself.

Superlatives aside though, the technical challenges that Cerebras had to overcome to reach this milestone are, I think, the more interesting story here. I sat down with founder and CEO Andrew Feldman this afternoon to discuss what his 173 engineers have been building quietly just down the street here these past few years with $112 million in venture capital funding from Benchmark and others.

Going big means nothing but challenges

First, a quick background on how the chips that power your phones and computers get made. Fabs like TSMC take standard-sized silicon wafers and divide them into individual chips by using light to etch the transistors into the chip. Wafers are circles and chips are squares, and so there is some basic geometry involved in subdividing that circle into a clear array of individual chips.

One big challenge in this lithography process is that errors can creep into the manufacturing process, requiring extensive testing to verify quality and forcing fabs to throw away poorly performing chips. The smaller and more compact the chip, the less likely any individual chip will be inoperative, and the higher the yield for the fab. Higher yield equals higher profits.
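For a rough sense of why size is so punishing, here is a back-of-envelope sketch using the textbook Poisson yield model, in which the probability that a die escapes fatal defects decays exponentially with its area. The defect density below is an assumed illustrative figure, not a number from any fab:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Textbook Poisson yield model: P(a die has zero fatal defects)."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.001  # assumed illustrative defect density (defects per mm^2)

print(f"100 mm^2 die:    {poisson_yield(100, D0):.1%}")    # ~90.5% of dies survive
print(f"46,225 mm^2 die: {poisson_yield(46_225, D0):.1e}") # ~8.5e-21 -- effectively zero
```

At the same assumed defect density, a conventional die yields comfortably while a wafer-sized one essentially never does — which is exactly the problem Cerebras had to engineer around, as described below.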

Cerebras throws out the idea of etching a bunch of individual chips onto a single wafer in lieu of just using the whole wafer itself as one gigantic chip. That allows all of those individual cores to connect with one another directly — vastly speeding up the critical feedback loops used in deep learning algorithms — but comes at the cost of huge manufacturing and design challenges to create and manage these chips.


Cerebras’ technical architecture and design was led by co-founder Sean Lie. Feldman and Lie worked together on a previous startup called SeaMicro, which sold to AMD in 2012 for $334 million. (Via Cerebras Systems)

The first challenge the team ran into, according to Feldman, was handling communication across the “scribe lines.” While the Cerebras chip encompasses a full wafer, today’s lithography equipment still has to act as if there are individual chips being etched into the silicon wafer. So the company had to invent new techniques to allow each of those individual chips to communicate with each other across the whole wafer. Working with TSMC, they not only invented new channels for communication, but also had to write new software to handle a chip with a trillion-plus transistors.

The second challenge was yield. With a chip covering an entire silicon wafer, a single imperfection in the etching of that wafer could render the entire chip inoperative. This is what has blocked whole-wafer technology for decades: due to the laws of physics, it is essentially impossible to etch a trillion transistors with perfect accuracy repeatedly.

Cerebras approached the problem using redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer. “You have to hold only 1%, 1.5% of these guys aside,” Feldman explained to me. Leaving extra cores allows the chip to essentially self-heal, routing around the lithography error and making a whole wafer silicon chip viable.
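The same toy model suggests why a small pool of spares suffices. If fatal defects arrive roughly as a Poisson process, the wafer is viable whenever the number of defective cores stays at or below the spare count — and with 1.5% of 400,000 cores held aside, that is all but guaranteed. A minimal sketch, with illustrative numbers that are not Cerebras’ own:

```python
import math

def p_viable(lam: float, spares: int) -> float:
    """P(defective cores <= spares) when the fatal-defect count is Poisson(lam)."""
    term, total = math.exp(-lam), 0.0
    for k in range(spares + 1):
        total += term
        term *= lam / (k + 1)  # next Poisson pmf term, computed incrementally
    return total

# Illustrative: ~46 expected fatal defects (46,225 mm^2 at 0.001 defects/mm^2)
# versus 1.5% of 400,000 cores = 6,000 spares.
print(p_viable(46.0, 6000))  # ~1.0 -- the spares absorb the expected defects easily
print(p_viable(46.0, 30))    # ~0.01 -- with too few spares, most wafers are scrap
```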

Entering uncharted territory in chip design

Those first two challenges — communicating across the scribe lines between chips and handling yield — have flummoxed chip designers studying whole-wafer chips for decades. But they were known problems, and Feldman said that they were actually easier to solve than expected by re-approaching them using modern tools.

He likens the challenge though to climbing Mount Everest. “It’s like the first set of guys failed to climb Mount Everest, they said, ‘Shit, that first part is really hard.’ And then the next set came along and said ‘That shit was nothing. That last hundred yards, that’s a problem.’”

And indeed, the toughest challenges according to Feldman for Cerebras were the next three, since no other chip designer had gotten past the scribe line communication and yield challenges to actually find what happened next.

The third challenge Cerebras confronted was handling thermal expansion. Chips get extremely hot in operation, but different materials expand at different rates. That means the connectors tethering a chip to its motherboard also need to thermally expand at precisely the same rate lest cracks develop between the two.

Feldman said that “How do you get a connector that can withstand [that]? Nobody had ever done that before, [and so] we had to invent a material. So we have PhDs in material science, [and] we had to invent a material that could absorb some of that difference.”

Once a chip is manufactured, it needs to be tested and packaged for shipment to original equipment manufacturers (OEMs) who add the chips into the products used by end customers (whether data centers or consumer laptops). There is a challenge though: absolutely nothing on the market is designed to handle a whole-wafer chip.


Cerebras designed its own testing and packaging system to handle its chip (Via Cerebras Systems)

“How on earth do you package it? Well, the answer is you invent a lot of shit. That is the truth. Nobody had a printed circuit board this size. Nobody had connectors. Nobody had a cold plate. Nobody had tools. Nobody had tools to align them. Nobody had tools to handle them. Nobody had any software to test,” Feldman explained. “And so we have designed this whole manufacturing flow, because nobody has ever done it.” Cerebras’ technology is much more than just the chip it sells — it also includes all of the associated machinery required to actually manufacture and package those chips.

Finally, all that processing power in one chip requires immense power and cooling. Cerebras’ chip uses 15 kilowatts of power to operate — a prodigious amount of power for an individual chip, although relatively comparable to a modern-sized AI cluster. All that power also needs to be cooled, and Cerebras had to design a new way to deliver both for such a large chip.

It essentially approached the problem by turning the chip on its side, in what Feldman called “using the Z-dimension.” The idea was that rather than trying to move power and cooling horizontally across the chip as is traditional, power and cooling are delivered vertically at all points across the chip, ensuring even and consistent access to both.

And so, those were the next three challenges — thermal expansion, packaging, and power/cooling — that the company has worked around-the-clock to deliver these past few years.

From theory to reality

Cerebras has a demo chip (I saw one, and yes, it is roughly the size of my head), and it has started to deliver prototypes to customers according to reports. The big challenge though as with all new chips is scaling production to meet customer demand.

For Cerebras, the situation is a bit unusual. Since it places so much computing power on one wafer, customers don’t necessarily need to buy dozens or hundreds of chips and stitch them together to create a compute cluster. Instead, they may only need a handful of Cerebras chips for their deep-learning needs. The company’s next major phase is to reach scale and ensure a steady delivery of its chips, which it packages as a whole system “appliance” that also includes its proprietary cooling technology.

Expect to hear more details of Cerebras technology in the coming months, particularly as the fight over the future of deep learning processing workflows continues to heat up.



Why chipmaker Broadcom is spending big bucks for aging enterprise software companies

Source: Tech News – Enterprise

Last year Broadcom, a chipmaker, raised eyebrows when it acquired CA Technologies, an enterprise software company with a broad portfolio of products, including a sizable mainframe software tools business. It paid close to $19 billion for the privilege.

Then last week, the company opened up its wallet again and forked over $10.7 billion for Symantec’s enterprise security business. That’s almost $30 billion for two aging enterprise software companies. There has to be some sound strategy behind these purchases, right? Maybe.

Here’s the thing about older software companies. They may not out-innovate the competition anymore, but what they have going for them is a backlog of licensing revenue that appears to have value.

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Source: Tech News – Enterprise

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity; launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.



How Microsoft turns an obsession with detail into micron-optimized keyboards

Source: Microsoft more

Nestled among the many indistinguishable buildings of Microsoft’s Redmond campus, a multi-disciplinary team sharing an attention to detail that borders on fanatical is designing a keyboard… again and again and again. And one more time for good measure. Their dogged and ever-evolving dedication to “human factors” shows the amount of work that goes into making any piece of hardware truly ergonomic.

Microsoft may be known primarily for its software and services, but cast your mind back a bit and you’ll find a series of hardware advances that have redefined their respective categories:

The original Natural Keyboard was the first split-key, ergonomic keyboard, the fundamentals of which have only ever been slightly improved upon.

The Intellimouse Optical not only made the first truly popular leap away from ball-based mice, but did so in such a way that its shape and buttons still make its descendants among the best all-purpose mice on the market.

Remember me?

Although the Zune is remembered more for being a colossal boondoggle than a great music player, it was very much the latter, and I still use and marvel at the usability of my Zune HD. Yes, seriously. (Microsoft, open source the software!)

More recently, the Surface series of convertible notebooks have made bold and welcome changes to a form factor that had stagnated in the wake of Apple’s influential mid-2000s MacBook Pro designs.

Microsoft is still making hardware, of course, and in fact it has doubled down on its ability to do so with a revamped hardware lab filled with dedicated, extremely detail-oriented people who are given the tools they need to get as weird as they want — as long as it makes something better.

You don’t get something like this by aping the competition.

First, a disclosure: I may as well say at the outset that this piece was done essentially at the invitation (but not direction) of Microsoft, which offered the opportunity to visit their hardware labs in Building 87 and meet the team. I’d actually been there before a few times, but it had always been off-record and rather sanitized.

Knowing how interesting I’d found the place before, I decided I wanted to take part and share it at the risk of seeming promotional. They call this sort of thing “access journalism,” but the second part is kind of a stretch. I really just think this stuff is really cool, and companies seldom expose their design processes in the open like this. Microsoft obviously isn’t the only company to have hardware labs and facilities like this, but they’ve been in the game for a long time and have an interesting and almost too detailed process they’ve decided to be open about.

Although I spoke with perhaps a dozen Microsoft Devices people during the tour (which was still rigidly structured), only two were permitted to be on record: Edie Adams, Chief Ergonomist, and Yi-Min Huang, Principal Design and Experience Lead. But the other folks in the labs were very obliging in answering questions and happy to talk about their work. I was genuinely surprised and pleased to find people occupying niches so suited to their specialities and inclinations.

Generally speaking, the work I got to see fell into three spaces: the Human Factors Lab, focused on very exacting measurements of people themselves and how they interact with a piece of hardware; the anechoic chamber, where the sound of devices is obsessively analyzed and adjusted; and the Advanced Prototype Center, where devices and materials can go from idea to reality in minutes or hours.

The science of anthropometry

Inside the Human Factors lab, human thumbs litter the table. No, it isn’t a torture chamber — not for humans, anyway. Here the company puts its hardware to the test by measuring how human beings use it, recording not just simple metrics like words per minute on a keyboard, but high-speed stereo footage that analyzes how the skin of the hand stretches when it reaches for a mouse button down to a fraction of a millimeter.

The trend here, as elsewhere in the design process and labs, is that you can’t count anything out as a factor that increases or decreases comfort; the little things really do make a difference, and sometimes the microscopic ones.

“Feats of engineering heroics are great,” said Adams, “but they have to meet a human need. We try to cover the physical, cognitive, and emotional interactions with our products.”

(Perhaps you take this, as I did, as — in addition to a statement of purpose — a veiled reference to a certain other company whose keyboards have been in the news for other reasons. Of this, more later.)

The lab is a space perhaps comparable to a medium-sized restaurant, with enough room for a dozen or so people to work in the various sub-spaces set aside for different highly specific measurements. Various models of body parts have been set out on work surfaces, I suspect for my benefit.

Among them are that set of thumbs, in little cases looking like oversized lipsticks, each with a disturbing surprise inside. These are all cast from real people, ranging from the small thumb of a child to a monster that, should it have started a war with mine, I would surrender unconditionally.

Next door is a collection of ears, not only rendered in extreme detail but with different materials simulating a variety of rigidities. Some people have soft ears, you know. And next door to those is a variety of noses, eyes, and temples, each representing a different facial structure or interpupillary distance.

This menagerie of parts represents not just a continuum of sizes but a variety of backgrounds and ages. All of them come into play when creating and testing a new piece of hardware.

“We want to make sure that we have a diverse population we can draw on when we develop our products,” said Adams. When you distribute globally it is embarrassing to find that some group or another, with wider-set eyes or smaller hands, finds your product difficult to use. Inclusivity is a many-faceted gem, indeed it has as many facets as you are willing to cut. (The Xbox Adaptive Controller, for instance, is a new and welcome one.)

In one corner stands an enormous pod that looks like Darth Vader should emerge from it. This chamber, equipped with 36 DSLR cameras, produces an unforgivingly exact reproduction of one’s head. I didn’t do it myself, but many on the team had; in fact, one eyes-and-nose combo belonged to Adams. The fellow you see pictured there also works in the lab; that was the first such 3D portrait they took with the rig.

With this they can quickly and easily scan in dozens or hundreds of heads, collecting metrics on all manner of physiognomical features and creating an enviable database of both average and outlier heads. My head is big, if you want to know, and my hand was on the upper range too. But well within a couple standard deviations.

So much for static study — getting reads on the landscape of humanity, as it were. Anthropometry, they call it. But there are dynamic elements as well, some of which they collect in the lab, some elsewhere.

“When we’re evaluating keyboards, we have people come into the lab. We try to put them in the most neutral position possible,” explained Adams.

It should be explained that by neutral, she means specifically with regard to the neutral positions of the joints in the body, which have certain minima and maxima it is well to observe. How can you get a good read on how easy it is to type on a given keyboard if the chair and desk the tester is sitting at are uncomfortable?

Here as elsewhere the team strives to collect both objective data and subjective data; people will say they think a keyboard, or mouse, or headset is too this or too that, but not knowing the jargon they can’t get more specific. By listening to subjective evaluations and simultaneously looking at objective measurements, you can align the two and discover practical measures to take.

One such objective measure involved motion capture beads attached to the hand while an electromyographic bracelet tracks the activation of muscles in the arm. Imagine if you will a person whose typing appears normal and of uniform speed — but in reality they are putting more force on their middle fingers than the others because of the shape of the keys or rest. They might not be able to tell you they’re doing so, though it will lead to uneven hand fatigue, but this combo of tools could reveal the fact.
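As a toy illustration of how the objective data can surface what testers can’t articulate, here is a sketch that flags fingers carrying disproportionate load from per-keystroke force readings. The data and the 20% threshold are invented for the example, not Microsoft’s:

```python
import numpy as np

# Hypothetical force readings (arbitrary units): one row per keystroke, one
# column per finger, with the middle finger (index 2) secretly working harder.
rng = np.random.default_rng(0)
forces = rng.normal(loc=[1.0, 1.0, 1.4, 1.0, 1.0], scale=0.1, size=(500, 5))

mean_force = forces.mean(axis=0)
relative_load = mean_force / mean_force.mean()   # load per finger vs. the average
flagged = np.where(relative_load > 1.2)[0]       # fingers carrying >20% extra load

print("relative load:", np.round(relative_load, 2))
print("over-stressed fingers:", flagged)         # [2] -- the middle finger
```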

“We also look at a range of locations,” added Huang. “Typing on a couch is very different from typing on a desk.”

One case, such as a wireless Surface keyboard, might require more of what Huang called “lapability,” while the other perhaps needs to accommodate a different posture and can abandon lapability altogether.

A final measurement technique that is quite new to my knowledge involves a pair of high-resolution, high-speed black and white cameras that can be focused narrowly on a region of the body. They’re on the right, below, with colors and arrows representing motion vectors.


A display showing various anthropometric measurements.

These produce a very detailed depth map by closely tracking the features of the skin; one little patch might move further than the other when a person puts on a headset, suggesting it’s stretching the skin on the temple more than it is on the forehead. The team said they can see movements as small as ten microns, or micrometers (therefore you see that my headline was only light hyperbole).
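That kind of measurement is closely related to sub-pixel image registration. As a rough sketch of the principle — certainly not Microsoft’s actual pipeline — phase correlation can recover the shift between two frames to a fraction of a pixel, and a calibration factor (assumed below) converts that to microns:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Two toy "frames" of a skin patch: the second is the first shifted by one pixel,
# standing in for skin stretch between two high-speed exposures.
rng = np.random.default_rng(1)
frame_a = rng.random((256, 256))
frame_b = np.roll(frame_a, shift=1, axis=0)

# upsample_factor=100 resolves shifts down to 1/100 of a pixel
shift, error, _ = phase_cross_correlation(frame_a, frame_b, upsample_factor=100)

mm_per_pixel = 0.01  # hypothetical calibration: one pixel spans 10 microns of skin
print("displacement (microns):", np.abs(shift) * mm_per_pixel * 1000)
```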

You might be thinking that this is overkill. And in a way it most certainly is. But it is also true that by looking closer they can make the small changes that cause a keyboard to be comfortable for five hours rather than four, or to reduce error rates or wrist pain by noticeable amounts — features you can’t really even put on the box, but which make a difference in the long run. The returns may diminish, but we’re not so far along the asymptote approaching perfection that there’s no point to making further improvements.

The quietest place in the world

Down the hall from the Human Factors lab is the quietest place in the world. That’s not a colloquial exaggeration — the main anechoic chamber in Building 87 at Microsoft is in the record books as the quietest place on Earth, with an official ambient noise rating of negative 20.3 decibels.

You enter the room through a series of heavy doors and the quietness, though a void, feels like a physical medium that you pass into. And so it is, in fact — a near-total lack of vibrations in the air that feels as solid as the nested concrete boxes inside which the chamber rests.
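Negative decibels sound paradoxical, but they fall straight out of the definition: sound pressure level is measured relative to the 20-micropascal threshold of human hearing, so any pressure below that reference comes out negative. A quick check of the chamber’s −20.3 dB figure:

```python
import math

P0 = 20e-6  # reference sound pressure in pascals (threshold of human hearing)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level: 20*log10(p/p0); below-threshold pressure is negative."""
    return 20 * math.log10(pressure_pa / P0)

# -20.3 dB corresponds to roughly a tenth of the quietest pressure humans can hear:
p = P0 * 10 ** (-20.3 / 20)
print(f"{p * 1e6:.2f} micropascals -> {spl_db(p):.1f} dB")  # 1.93 µPa -> -20.3 dB
```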

I’ve been in here a couple times before, and Hundraj Gopal, the jovial and highly expert proprietor of quietude here, skips the usual tales of Guinness coming to test it and so on. Instead we talk about the value of sound to the consumer, though they may not even realize they do value it.

Naturally if you’re going to make a keyboard, you’re going to want to control how it sounds. But this is a surprisingly complex process, especially if, like the team at Microsoft, you’re really going to town on the details.

The sounds of consumer products are very deliberately designed, they explained. The sound your car door makes when it shuts gives a sense of security — being sealed in when you’re entering, and being securely shut out when you’re leaving it. It’s the same for a laptop — you don’t want to hear a clank when you close it, or a scraping noise when you open it. These are the kinds of things that set apart “premium” devices (and cars, and controllers, and furniture, etc) and they do not come about by accident.

Keyboards are no exception. And part of designing the sound is understanding that there’s more to it than loudness or even tone. Some sounds just sound louder, though they may not register as high in decibels. And some sounds are just more annoying, though they might be quiet. The study and understanding of this is what’s known as psychoacoustics.

There are known patterns to pursue, certain combinations of sounds that are near-universally liked or disliked, but you can’t rely on that kind of thing when you’re, say, building a new keyboard from the ground up. And obviously when you create a new machine like the Surface and its family they need new keyboards, not something off the shelf. So this is a process that has to be done from scratch over and over.

As part of designing the keyboard — and keep in mind, this is in tandem with the human factors mentioned above and the rapid prototyping we’ll touch on below — the device has to come into the anechoic chamber and have a variety of tests performed.


A standard head model used to simulate how humans might hear certain sounds. The team gave it a bit of a makeover.

These tests can be painstakingly objective, like a robotic arm pressing each key one by one while a high-end microphone records the sound in perfect fidelity and analysts pore over the spectrogram. But they can also be highly subjective: They bring in trained listeners — “golden ears” — to give their expert opinions, but also have the “gen pop” everyday users try the keyboards while experiencing calibrated ambient noise recorded in coffee shops and offices. One click sound may be lost in the broad-spectrum hubbub in a crowded cafe but annoying when it’s across the desk from you.
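For a flavor of the objective side, here is a minimal sketch of spectrogram analysis on a synthetic key click — the signal parameters are made up for the example rather than taken from Microsoft’s lab:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic key click: a short damped 3 kHz burst in light noise, standing in
# for the robot-arm recording described above.
fs = 48_000
t = np.arange(0, 0.1, 1 / fs)
click = np.exp(-200 * t) * np.sin(2 * np.pi * 3_000 * t)
signal = click + 0.01 * np.random.default_rng(2).standard_normal(t.size)

f, times, Sxx = spectrogram(signal, fs=fs, nperseg=256)
peak_band = f[Sxx.max(axis=1).argmax()]
print(f"dominant frequency band: ~{peak_band:.0f} Hz")  # near 3,000 Hz here
```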

This feedback goes both directions, to human factors and prototyping, and they iterate and bring it back for more. This progresses sometimes through multiple phases of hardware, such as the keyswitch assembly alone; the keys built into their metal enclosure; the keys in the final near-shipping product before they finalize the keytop material, and so on.

Indeed, it seems like the process really could go on forever if someone didn’t stop them from refining the design further.

“It’s amazing that we ever ship a product,” quipped Adams. They can probably thank the Advanced Prototype Center for that.

Rapid turnaround is fair play

If you’re going to be obsessive about the details of the devices you’re designing, it doesn’t make a lot of sense to have to send off a CAD file to some factory somewhere, wait a few days for it to come back, then inspect for quality, send a revised file, and so on. So Microsoft (and of course other hardware makers of any size) now use rapid prototyping to turn designs around in hours rather than days or weeks.

This wasn’t always possible even with the best equipment. 3D printing has come a long way over the last decade, and continues to advance, but not long ago there was a huge difference between a printed prototype and the hardware that a user would actually hold.

Multi-axis CNC mills have been around for longer, but they’re slower and more difficult to operate. And subtractive manufacturing (i.e. taking a block and whittling it down to a mouse) is inefficient and has certain limitations as far as the structures it can create.

Of course you could carve it yourself out of wood or soap, but that’s a bit old-fashioned.

So when Building 87 was redesigned from the ground up some years back, it was loaded with the latest and greatest of both additive and subtractive rapid manufacturing methods, and the state of the art has been continually rolling through ever since. Even as I passed through they were installing some new machines (desk-sized things that had slots for both extrusion materials and ordinary printer ink cartridges, a fact that for some reason I found hilarious).

The additive machines are in constant use as designers and engineers propose new device shapes and styles that sound great in theory but must be tested in person. Having a bunch of these things, each able to produce multiple items per print, lets you for instance test out a thumb scoop on a mouse with 16 slightly different widths. Maybe you take those over to Human Factors and see which can be eliminated for over-stressing a joint, then compare comfort on the surviving 6 and move on to a new iteration. That could all take place over a day or two.


Ever wonder what an Xbox controller feels like to a child? Just print a giant one in the lab.

Softer materials have become increasingly important as designers have found that they can be integrated into products from the start. For instance, a wrist rest for a new keyboard might have foam padding built in.

But how much foam is too much, or too little? As with the 3D printers, flat materials like foam and cloth can be customized and systematically tested as well. Using a machine called a skiver, foam can be split into thicknesses only half a millimeter apart. It doesn’t sound like much — and it isn’t — but when you’re creating an object that will be handled for hours at a time by the sensitive hands of humans, the difference can be subtle but substantial.

For more heavy-duty prototyping of things that need to be made out of metal — hinges, laptop frames, and so on — there is bank after bank of 5-axis CNC machines, lathes, and more exotic tools, like a system that performs extremely precise cuts using a charged wire.

The engineers operating these things work collaboratively with the designers and researchers, and it was important to the people I talked to that this wasn’t a “here, print this” situation. A true collaboration has input from both sides, and that is what seems to be happening here. Someone inspecting a 3D model for printability before popping it into the 5-axis might say to the designer, you know, these pieces could fit together more closely if we did so-and-so, and it would actually add strength to the assembly. (Can you tell I’m not an engineer?) Making stuff, and making stuff better, is a passion among the crew and that’s a fundamentally creative drive.

Making fresh hells for keyboards

If any keyboard has dominated the headlines for the last year or so, it’s been Apple’s ill-fated butterfly switch keyboard on the latest MacBook Pros. While being in my opinion quite unpleasant to type on, they appeared to fail at an astonishing rate judging by the proportion of users I saw personally reporting problems, and are quite expensive to replace. How, I wondered, did a company with Apple’s design resources create such a dog?


Here’s a piece of hardware you won’t break any time soon.

I mentioned the subject to the group towards the end of the tour but, predictably and understandably, it wasn’t really something they wanted to talk about. But a short time later I spoke with one of the people in charge of Microsoft’s reliability testing. They too demurred on the topic of Apple’s failures, opting instead to describe at length the measures Microsoft takes to ensure that their own keyboards don’t suffer a similar fate.

The philosophy is essentially to simulate everything about the expected 3-5 year life of the keyboard. There are the “torture chambers” where devices are beaten on by robots (I’ve seen these personally, years ago — they’re brutal), but there’s more to it than that. Keyboards are everyday objects, and they face everyday threats; so that’s what the team tests, with things falling into three general categories:

Environmental: This includes cycling the temperature from very low to very high, exposing the keyboard to dust and UV. This differs for each product, since some will obviously be used outside more than others. Does it break? Does it discolor? Where does the dust go?

Mechanical: Every keyboard undergoes key tests to make sure that keys can withstand however many million presses without failing. But that’s not the only thing that keyboards undergo. They get dropped and things get dropped on them, of course, or left upside-down, or have their keys pressed and held at weird angles. All these things are tested, and when a keyboard fails because of a test they don’t have, they add it.

Chemical: I found this very interesting. The team now has more than 30 chemicals that it exposes its hardware to, including: lotion, Coke, coffee, chips, mustard, ketchup, and Clorox. The team is constantly adding to the list as new chemicals enter frequent usage or new markets open up. Hospitals, for instance, need to test a variety of harsh disinfectants that an ordinary home wouldn’t have. (Note: Burt’s Bees is apparently bad news for keyboards.)

Testing is ongoing, with new batches being evaluated continuously as time allows.

To be honest it’s hard to imagine that Apple’s disappointing keyboard actually underwent this kind of testing, or if it did, that it was modified to survive it. The number and severity of problems I’ve heard of with them suggest the “feats of engineering heroics” of which Adams spoke, but directed singlemindedly in the direction of compactness. Perhaps more torture chambers are required at Apple HQ.

7 factors and the unfactorable

All the above are more tools for executing a design, not for creating one to begin with. That’s a whole other kettle of fish, and one not so easily described.

Adams told me: “When computers were on every desk the same way, it was okay to only have one or two kinds of keyboard. But now that there are so many kinds of computing, it’s okay to have a choice. What kind of work do you do? Where do you do it? I mean, what do we all type on now? Phones. So it’s entirely context dependent.”


Is this the right curve? Or should it be six millimeters higher? Let’s try both.

Yet even in the great variety of all possible keyboards there are metrics that must be considered if that keyboard is to succeed in its role. The team boiled it down to seven critical points:

  • Key travel: How far a key goes until it bottoms out. Neither shallow nor deep is necessarily good; each serves different purposes.
  • Key spacing: Distance between the center of one key and the next. How far can you differ from “full-size” before it becomes uncomfortable?
  • Key pitch: On many keyboards the keys do not all “face” the same direction, but are subtly pointed towards the home row, because that’s the direction your fingers hit them from. How much is too much? How little is too little?
  • Key dish: The shape of the keytop limits your fingers’ motion, captures them when they travel or return, and provides a comfortable home — if it’s done right.
  • Key texture: Too slick and fingers will slide off. Too rough and it’ll be uncomfortable. Can it be fabric? Textured plastic? Metal?
  • Key sound: As described above, the sound indicates a number of things and has to be carefully engineered.
  • Force to fire: How much actual force does it take to drive a given key to its actuation point? Keep in mind this can and perhaps should differ from key to key.
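To see how those seven metrics might travel together through a design pipeline, here is one hypothetical way to capture them as a single record — the field names, units, and sample values are mine, not Microsoft’s:

```python
from dataclasses import dataclass

@dataclass
class KeyboardSpec:
    """The seven critical points above, as one record (illustrative units)."""
    key_travel_mm: float     # distance until the key bottoms out
    key_spacing_mm: float    # center-to-center distance between adjacent keys
    key_pitch_deg: float     # tilt of keytops toward the home row
    key_dish_mm: float       # depth of the keytop's concave shape
    key_texture: str         # e.g. "textured plastic", "fabric", "metal"
    key_sound: str           # target acoustic character, e.g. "muted click"
    force_to_fire_gf: float  # actuation force in gram-force (may differ per key)

# Roughly a full-size desktop board; 19 mm spacing is the conventional "full size".
desktop = KeyboardSpec(3.8, 19.0, 3.0, 0.5, "textured plastic", "muted click", 60.0)
```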

In addition to these core concepts there are many secondary ones that pop up for consideration: wobble, or the amount a key moves laterally (yes, this is deliberate); snap ratio, involving the feedback from actuation; drop angle; off-axis actuation; key gap for chiclet boards… and of course the inevitable switch debate.

Keyboard switches, the actual mechanism under the key, have become a major sub-industry as many companies started making their own at the expiration of a few important patents. Hence there’s been a proliferation of new key switches with a variety of aspects, especially on the mechanical side. Microsoft does make mechanical keyboards, and scissor-switch keyboards, and membrane as well, and perhaps even some more exotic ones (though the original touch-sensitive Surface cover keyboard was a bit of a flop).

“When we look at switches, whether it’s for a mouse, QWERTY, or other keys, we think about what they’re for,” said Adams. “We’re not going to say we’re scissor switch all the time or something — we have all kinds. It’s about durability, reliability, cost, supply, and so on. And the sound and tactile experience is so important.”

As for the shape itself, there is generally the divided Natural style, the flat full style, and the flat chiclet style. But with design trends, new materials, new devices, and changes to people and desk styles (you better believe a standing desk needs a different keyboard than a sitting one), it’s a new challenge every time.

They collected a menagerie of keyboards and prototypes in various stages of experimentation. Some were obviously never meant for real use — one had the keys pitched so far that it was like a little cave for the home row. Another was an experiment in how much a design could be shrunk until it was no longer usable. A handful showed different curves a la Natural — which is the right one? Although you can theorize, the only way to be sure is to lay hands on it. So tell rapid prototyping to make variants 1-10, then send them over to Human Factors and test the stress and posture resulting from each one.

“Sure, we know the gable slope should be between 10-15 degrees and blah blah blah,” said Adams, who is actually on the patent for the original Natural Keyboard, and so is about as familiar as you can get with the design. “But what else? What is it we’re trying to do, and how are we achieving that through engineering? It’s super fun bringing all we know about the human body and bringing that into the industrial design.”

Although the comparison is rather grandiose, I was reminded of an orchestra — but not in full swing. Rather, in the minutes before a symphony begins, and all the players are tuning their instruments. It’s a cacophony in a way, but they are all tuning towards a certain key, and the din gradually makes its way to a pleasant sort of hum. So it is that a group of specialists all tending their sciences and creeping towards greater precision seem to cohere a product out of the ether that is human-centric in all its parts.



Arrcus snags $30M Series B as it tries to disrupt networking biz

Source: Tech News – Enterprise

Arrcus has a bold notion to try and take on the biggest names in networking by building a better networking management system. Today it was rewarded with a $30 million Series B investment led by Lightspeed Venture Partners.

Existing investors General Catalyst and Clear Ventures also participated. The company previously raised a seed and Series A totaling $19 million, bringing the total raised to date to $49 million, according to numbers provided by the company.

Founder and CEO Devesh Garg says the company wanted to create a product that would transform the networking industry, which has traditionally been controlled by a few companies. “The idea basically is to give you the best-in-class [networking] software with the most flexible consumption model at the lowest overall total cost of ownership. So you really as an end customer have the choice to choose best-in-class solutions,” Garg told TechCrunch.

This involves building a networking operating system called ArcOS to run the networking environment. For now, that means working with manufacturers of white box solutions and offering some combination of hardware and software, depending on what the customer requires. Garg says that players at the top of the market like Cisco, Arista and Juniper tend to keep their technical specifications to themselves, making it impossible to integrate ArcOS with those companies at this time, but he sees room for a company like Arrcus.

“Fundamentally, this is a very large marketplace that’s controlled by two or three incumbents. And when you have lack of competition you get all of the traditional bad behavior that comes along with that, including muted innovation, rigidity in terms of the solutions that are provided, and these legacy procurement models, where there’s not much flexibility, with artificially high pricing,” he explained.

The company hopes to fundamentally change the current system with its solutions, taking advantage of unbranded hardware which offers a similar experience, but can run the Arrcus software. “Think of them as white box manufacturers of switches and routers. Oftentimes, they come from Taiwan, where they’re unbranded, but it’s effectively the same components that are used in the same systems that are used by the [incumbents],” he said.

The approach seems to be working, as the company has grown to 50 employees since it launched in 2016. Garg says he expects to double that number in the next 6-9 months with the new funding. Currently the company has double-digit paying customers and over 20 in various stages of proofs of concept, he said.



Microsoft’s $399 Azure Kinect AI camera is now shipping in the U.S. and China

Source: Microsoft more

Earlier this year, at MWC, Microsoft announced the return of its Kinect sensor in the form of an AI developer kit. The $399 Azure Kinect DK camera system includes a 1MP depth camera, 360-degree microphone, 12MP RGB camera and an orientation sensor, all in a relatively small package. The kit has been available for pre-order for a few months now, but as the company announced today, it’s now generally available and shipping to pre-order customers in the U.S. and China.

Unlike the original Kinect, which launched as an Xbox gaming accessory that never quite caught on, the Azure Kinect is all business. It’s meant to give developers a platform to experiment with AI tools and plug into Azure’s ecosystem of machine learning services (though using Azure is not mandatory).

To help developers get started, the company already launched a number of SDKs, including a preview of a body-tracking SDK that is close to what you may remember from the Kinect’s Xbox days.


The core of the camera has more to do with Microsoft’s HoloLens than the original Kinect. As the company notes in its press materials, the Azure Kinect DK uses the same time-of-flight sensor the company developed for the second generation of its HoloLens AR visor. And while the focus here is clearly on using the camera, Microsoft also notes that the microphone array allows developers to build sophisticated speech solutions.
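The official SDKs are C-based, but for a sense of how little code it takes to pull a depth frame, here is a minimal sketch using pyk4a, a community Python binding over the Azure Kinect Sensor SDK — it assumes the native SDK is installed and a device is attached:

```python
from pyk4a import PyK4A  # community bindings; not Microsoft's official SDK surface

k4a = PyK4A()  # default configuration
k4a.start()
capture = k4a.get_capture()
if capture.depth is not None:
    # the depth frame is an HxW uint16 array of distances in millimeters
    print("depth frame:", capture.depth.shape, capture.depth.dtype)
k4a.stop()
```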

The company is positioning the device as an easy gateway for its users in health and life sciences, retail, logistics and robotics to start experimenting with using depth sensing and machine learning. We’ve seen somewhat similar dev kits from others, including Microsoft partner Qualcomm, though these devices don’t usually have the depth camera that makes the Kinect DK a Kinect.



Three years after moving off AWS, Dropbox infrastructure continues to evolve

Source: Tech News – Enterprise

Conventional wisdom would suggest that you close your data centers and move to the cloud, not the other way around, but in 2016 Dropbox undertook the opposite journey. It (mostly) ended its long-time relationship with AWS and built its own data centers.

Of course, that same conventional wisdom would say, it’s going to get prohibitively expensive and more complicated to keep this up. But Dropbox still believes it made the right decision and has found innovative ways to keep costs down.

Akhil Gupta, VP of Engineering at Dropbox, says that when Dropbox decided to build its own data centers, it realized that as a massive file storage service, it needed control over certain aspects of the underlying hardware that was difficult for AWS to provide, especially in 2016 when Dropbox began making the transition.

“Public cloud by design is trying to work with multiple workloads, customers and use cases and it has to optimize for the lowest common denominator. When you have the scale of Dropbox, it was entirely possible to do what we did,” Gupta explained.

Alone again, naturally

One of the key challenges of trying to manage your own data centers, or build a private cloud where you still act like a cloud company in a private context, is that it’s difficult to innovate and scale the way the public cloud companies do, especially AWS. Dropbox looked at the landscape and decided it would be better off doing just that, and Gupta says even with a small team — the original team was just 30 people — it’s been able to keep innovating.

Kano, the kids-focused coding and hardware startup, inks deal with Microsoft, launches $300 Kano PC

Source: Microsoft more

Kano, the London-based startup that builds hardware designed to teach younger people about computing and coding, is taking a significant step forward in its growth strategy today. The startup has inked a partnership with Microsoft that sees Kano launching the Kano PC, a new 11.6-inch touch-enabled, Intel Atom-powered computer, its first to run Windows — Windows 10 S specifically. As part of the deal, Microsoft is also making an investment of an undisclosed amount in Kano.

The Kano PC is up for pre-order now at $299.99 and £299.99 on Kano.me and the Microsoft Store, to ship in October. It will also go on sale at selected retailers in the US, Canada, and the UK starting October 21, 2019.

The shift to building a Windows-powered device is a significant one for Kano.

The startup first made its name with a popular Kickstarter campaign based around a device built using Raspberry Pi, speaking to the DIY ethos that has shaped it over the last several years.

Alex Klein, Kano’s founder and CEO, said in an interview that while Kano will continue to support its Raspberry Pi-powered devices, it has yet to determine what its roadmap will be in terms of launching new hardware on this processor:

“The Raspberry Pi devices remain in the portfolio at good price points, but this machine is designed for a much broader age set. It’s a proper Windows PC” — Klein said, pointing to the Intel Atom x5-Z8350 Quad core 1.44 GHz processor, the 4GB of memory, and 64GB of storage — “and a powerful machine for the price point.”

While the Kano line up to now has largely been used and tracked by 6-13 year-olds, Klein describes the Kano PC as a “K-12 device,” acknowledging that “branding might take more time to unfold” to connect with the younger and older ends of that range.

It will be doing so with an army of software now supplied by way of its Microsoft partnership:

  • Make Art – Learn to code high-quality images in CoffeeScript
  • Kano App – Make almost anything, including magic effects and adventurous worlds, with simple steps and programming fundamentals
  • Paint 3D – Make and share 3D models and send them out for printing
  • Minecraft: Education Edition – The award-winning creative game-based learning platform
  • Microsoft Teams – To get new projects and content, and share your work (Yes, Slack, now kids will be using Teams)
  • Live Tiles – Personalized projects on coding and creativity delivered directly to your dashboard

Up to now, Kano’s traction with a core group of users — younger kids who are interested in computers and coding, as well as parents who want to encourage their kids to be interested in these — has led to it launching a number of other accessories to work with its basic computers. It’s also launched a clever tech toy that plays to its demographic: last year, it launched a Harry Potter magic wand that you could build yourself, program and use. Klein hinted in the interview that we’re going to see more products of this kind coming soon from Kano.

The Microsoft deal will bring it a higher profile among a wider set of consumers beyond early adopters, and likely a new entry point into selling into educational environments, where Microsoft has been making a big push.

This is the second side of the deal that’s also interesting: Microsoft has a long history of selling software and hardware into educational environments and this — a different brand from the rest of the pack — will bring more diversity into the mix, with a brand designed specifically for younger people, rather than adult-focused brands that have been downsized in functionality (but possibly not in price) for children.

“We’re very excited to partner with Kano for the launch of the Kano PC. We align with Kano’s goal of making classroom experiences more inclusive for teachers and students, empowering them to build the future, not just imagine it,” said Anthony Salcito, VP of Education at Microsoft, in a statement.

Kano’s scrappy success up to now has also led to it raising some $50 million in funding from a list of backers that include Saul and Robin Klein (relatives of the founder Alex Klein), as well as Marc Benioff, Index Ventures, Breyer Capital, Troy Carter and a number of other investors. Klein said that it’s likely to be looking for another equity round in the near future, but declined to comment further on that.



Helium launches $51M-funded “LongFi” IoT alternative to cellular

Source: Tech News – Enterprise

With 200X the range of WiFi at 1/1000th of the cost of a cellular modem, Helium’s “LongFi” wireless network debuts today. Its transmitters can help track stolen scooters, find missing dogs via IoT collars, and collect data from infrastructure sensors. The catch is that Helium’s tiny, extremely low-power, low-data transmission chips rely on connecting to P2P Helium Hotspots people can now buy for $495. Operating those hotspots earns owners a cryptocurrency token Helium promises will be valuable in the future…

The potential of a new wireless standard has allowed Helium to raise $51 million over the past few years from GV, Khosla Ventures, and Marc Benioff, including a new $15 million round co-led by Union Square Ventures and Multicoin Capital. That’s in part because one of Helium’s co-founders is Napster inventor Shawn Fanning. Investors are betting that he can change the tech world again, this time with a wireless protocol that, like WiFi and Bluetooth before it, could unlock unique business opportunities.

Helium already has some big partners lined up, including Lime, which will test it for tracking its lost and stolen scooters and bikes when they’re brought indoors (obscuring other connectivity) or their battery is pulled out (deactivating GPS). “It’s an ultra low-cost version of a LoJack,” Helium CEO Amir Haleem says.

InvisiLeash will partner with it to build more trackable pet collars. Agulus will pull data from irrigation valves and pumps for its agriculture tech business, Nestle will track when it’s time to refill water in its ReadyRefresh coolers at offices, and Stay Alfred will use it to track occupancy status and air quality in buildings. Haleem also imagines the tech being useful for tracking wildfires or radiation.

Haleem met Fanning playing video games in the 2000s. The two teamed up with Sproutling baby monitor (sold to Mattel) founder Chris Bruce in 2013 to start work on Helium. They foresaw a version of Tile’s trackers that could function anywhere while replacing expensive cell connections for devices that don’t need high bandwidth. Helium will compete with SigFox, another low-power IoT protocol, though Haleem claims SigFox’s more centralized infrastructure makes its costs prohibitive. Lucky for Helium, on-demand rental bikes and scooters that are perfect for its network have reached mainstream popularity just as Helium launches, six years after its start.

Helium says it’s already pre-sold 80% of its Helium Hotspots for its first market in Austin, Texas. People connect them to their WiFi and place them in a window so the devices can pull in data from Helium’s IoT sensors over its open-source LongFi protocol. The hotspots then encrypt and send the data to the company’s cloud, which clients can plug into to track and collect info from their devices. The Helium Hotspots only require as much energy as a 12-watt LED lightbulb to run, but that $495 price tag is steep. The lack of a concrete return on investment could deter later adopters from buying the expensive device.

Only 150-200 hotspots are necessary to blanket a city in connectivity, Haleem tells me. But since they need to be distributed across the landscape — a client can’t just fill a warehouse with hotspots — and the upfront price is steep for individuals, Helium might need to sign up some retail chains as partners for deployment. Haleem admits, “The hard part is the education.” Making hotspot buyers understand the potential (and risks) while demonstrating the opportunities for clients will require a ton of outreach and slick marketing.
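Those two figures make the deployment math easy to sanity-check:

```python
hotspot_price = 495          # Helium Hotspot launch price, per unit
city_coverage = (150, 200)   # hotspots needed to blanket a city, per Haleem

low, high = (n * hotspot_price for n in city_coverage)
print(f"city-wide deployment: ${low:,} to ${high:,}")  # $74,250 to $99,000
```

At under $100,000 per city — if those units actually get bought and placed — the network’s build-out cost is tiny by telecom standards, which is the crowd-built economics Helium is betting on.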

Without enough Helium Hotspots, the Helium network won’t function. That means this startup will have to simultaneously win at telecom technology, enterprise sales, and cryptocurrency for the network to pan out. As if one of those wasn’t hard enough.

