Microsoft open-sources a crucial algorithm behind its Bing Search services

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and the AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
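The flow described above — encode items as vectors, build an index over them, then return the vectors nearest to an encoded query — can be sketched in a few lines. To be clear, this is a brute-force cosine-similarity illustration of the concept, not SPTAG's actual API (SPTAG combines space-partition trees with a neighborhood graph to make the search approximate and fast at billion-item scale), and the toy vectors here stand in for embeddings a deep learning model would produce.

```python
import numpy as np

def build_index(vectors):
    """Normalize the item vectors so cosine similarity reduces to a dot product."""
    v = np.asarray(vectors, dtype=np.float64)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def search(index, query, k=3):
    """Return the indices of the k vectors most similar to the query."""
    q = np.asarray(query, dtype=np.float64)
    q = q / np.linalg.norm(q)
    scores = index @ q                 # cosine similarity against every item
    return np.argsort(-scores)[:k]     # best matches first

# Toy "embeddings"; in Bing's case these come from a pre-trained model.
items = build_index([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(search(items, [1.0, 0.05], k=2))  # the two vectors nearest the query
```

The exhaustive scan above is O(n) per query; the point of an approximate structure like SPTAG's is to prune that scan down to a small neighborhood of candidates.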

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.

AWS launches WorkLink to make accessing mobile intranet sites and web apps easier

If your company uses a VPN and/or a mobile device management service to give you access to its intranet and internal web apps, then you know how annoying those are. AWS today launched a new product, Amazon WorkLink, that promises to make this process significantly easier.

WorkLink is a fully managed service that, for $5 per user per month, allows IT admins to give employees one-click access to internal sites, whether they run on AWS or not.

After installing WorkLink on their phones, employees can simply use their favorite browser to surf to an internal website (other solutions often force users into a sub-par proprietary browser). WorkLink then goes to work and securely requests that site — and here’s the smart part — a secure WorkLink container converts the site into an interactive vector graphic and sends that back to the phone. Nothing is stored or cached on the phone, and AWS says WorkLink knows nothing about personal device activity either. That also means that when a device is lost or stolen, there’s no need to wipe it remotely, because there’s simply no company data on it.

IT can either use a VPN to connect from an AWS Virtual Private Cloud to on-premises servers or use AWS Direct Connect to bypass a VPN solution. The service works with all SAML 2.0 identity providers (the majority of identity services used in the enterprise, including the likes of Okta and Ping Identity), and as a fully managed service, it handles scaling and updates in the background.

“When talking with customers, all of them expressed frustration that their workers don’t have an easy and secure way to access internal content, which means that their employees either waste time or don’t bother trying to access content that would make them more productive,” says Peter Hill, Vice President of Productivity Applications at AWS, in today’s announcement. “With Amazon WorkLink, we’re enabling greater workplace productivity for those outside the corporate firewall in a way that IT administrators and security teams are happy with and employees are willing to use.”

WorkLink will work with both Android and iOS, but for the time being, only the iOS app (iOS 12+) is available. For now, it also only works with Safari, with Chrome support coming in the next few weeks. The service is also only available in Europe and North America for now, with additional regions coming later this year.

For the time being, AWS’s cloud archrivals Google and Microsoft don’t offer any services that are quite comparable with WorkLink. Google offers its Cloud Identity-Aware Proxy as a VPN alternative and as part of its BeyondCorp program, though that has a very different focus, while Microsoft offers a number of more traditional mobile device management solutions.

Google’s Flutter toolkit goes beyond mobile with Project Hummingbird

Flutter, Google’s toolkit for building cross-platform applications, hit version 1.0 today. To date, the project has focused on iOS and Android apps, but as the company announced today, it’s now looking at bringing Flutter to the web, too. That project, currently called Hummingbird, is essentially an experimental web-based implementation of the Flutter runtime.

“From the beginning, we designed Flutter to be a portable UI toolkit, not just a mobile UI toolkit,” Google’s group product manager for Flutter, Tim Sneath, told me. “And so we’ve been experimenting with how we can bring Flutter to different places.”

Hummingbird takes the Dart code that all Flutter applications are written in and compiles it to JavaScript, which in turn allows the code to run in any modern browser. Developers have always been able to compile Dart to JavaScript, so this part isn’t new, but ensuring that the Flutter engine would work and bringing all the relevant Flutter features to the web was a major engineering effort. Indeed, Google built three prototypes to see how this could work. Just bringing the widgets over wasn’t enough, and a combination of the Flutter widgets and its layout system was also discarded; in the end, the team decided to build a full Flutter web engine that retains all of the layers that sit above the dart:ui library.

“One of the great things about Flutter itself is that it compiles to machine code, to Arm code. But Hummingbird extends that further and says, okay, we’ll also compile to JavaScript and we’ll replace the Flutter engine on the web with the Hummingbird engine which then enables Flutter code to run without changes in web browsers. And that, of course, extends Flutter’s perspective to a whole new ecosystem.”

With tools like Electron, it’s easy enough to bring a web app to the desktop, too, so there’s now also a pathway for bringing Flutter apps to Windows and macOS that way, though there is already another project in progress to embed Flutter into native desktop apps, too.

It’s worth noting that Google always touted the fact that Flutter compiled to native code — and the speed gains it got from that. Compiling to the web is a bit of a tradeoff, though. Sneath acknowledged as much and stressed that Hummingbird is an experimental project and that Google isn’t releasing any code today. Right now, it’s a technical demonstration.

“If you can go native, you should go native,” he said. “Think of it as an extension of Flutter’s reach rather than a solution to the problem that Flutter itself is solving.”

In its current iteration, the Flutter web engine can handle most apps, but there’s still a lot of work to do to ensure that all widgets run correctly, for example. The team is also looking at building a plugin system and ways to embed Flutter into existing web apps — and existing web apps into Flutter web apps.

 

Google’s cross-platform Flutter UI toolkit hits version 1.0

Flutter, Google’s UI toolkit for building mobile Android and iOS applications, hit its version 1.0 release today. In addition, Google also today announced a set of new third-party integrations with the likes of Square and others, as well as a couple of new features that make it easier to integrate Flutter with existing applications.

The open source Flutter project made its debut at Google’s 2017 I/O developer conference. Since then, it’s quickly grown in popularity and companies like Groupon, Philips Hue, Tencent, Alibaba, Capital One and others have already built applications with it, despite the fact that it had not hit version 1.0 yet and that developers have to write their apps in the Dart language, which is an additional barrier to entry.

In total, Google says, developers have already published “thousands” of Flutter apps to the Apple and Google app stores.

“Flutter is our portable UI toolkit for creating a beautiful native experience for iOS and Android out of just a single code base,” Tim Sneath, Google’s group product manager for Dart, explained. “The problem we’re solving is the problem that most mobile developers face today. As a developer, you’re kind of forced to choose: either you build apps natively using the platform SDK, whether you’re building an iOS app or an Android app, and then you have to build them twice.”

Sneath was also part of the Silverlight team at Microsoft before he joined Google in 2017, so he’s got a bit of experience in learning what doesn’t work in this space of cross-platform development. It’s no secret, though, that Facebook is trying to solve a very similar problem with React Native, which is also quite popular.

“I mean, React Native is obviously a technology that’s proven quite popular,” Sneath said. “One of the challenges that React Native developers face, or have reported in the past — one challenge is that React Native code is written in JavaScript, which means that it’s run using the browser’s JavaScript engine, which immediately kind of moves this a little bit away from the native model of the platform. The bit where they are very native is that they use the operating system’s own controls. And while on the surface that seems like a good thing, in practice that has had quite a few challenges for developers around compatibility.”

Google, obviously, believes that its ability to compile to native code — and the speed gains that come with that — sets its platform apart from the competition. In part, it does this by using a hardware-accelerated 2D engine and, of course, by compiling the Dart code to native ARM code for iOS and Android. The company also stresses that developers get full control over every pixel on the screen.

With today’s launch, Google is also announcing new third-party integrations for Flutter. The first is with Square, which announced two new Flutter SDKs for building payment flows, both for in-app experiences and for in-person terminals using a Square reader. Others are 2Dimensions, for building vector animations and embedding them right into Flutter, and Nevercode, which announced a tool for automating the build and packaging process for Flutter apps.

As for new Flutter features, Google today announced ‘Add to App,’ a new feature that makes it easier for developers to slowly add Flutter code to existing apps. In its early days, Flutter’s focus was squarely on building new apps from scratch, but as it has grown in popularity, developers now want to use it for parts of their existing applications as they modernize them.

The other new feature is ‘Platform Views,’ which is essentially the opposite of ‘Add to App’ in that it allows developers to embed Android and iOS controls in their Flutter apps.

Why Oath keeps Tumblring

I dig on my employer Oath, and then Tencent Music notes and a major loss for the NYC ecosystem and what it means for open source.

TechCrunch is experimenting with new content forms. This is a rough draft of something new – provide your feedback directly to the author (Danny at danny@techcrunch.com) if you like or hate something here.

My three word Oath? I’m with stupid

It goes without saying that this piece about my employer is my work alone, doesn’t reflect management’s views, and is done under the auspices of TechCrunch’s independent editorial voice. No usage of internal information is assumed or implied.

This is a piece about TechCrunch’s parent company, formerly known as “Oath:” (okay just Oath, but who am I to flout a mandatory colon?) and now ReBranded™ as Verizon Media Group / Oath (See what they did there? They literally slashed Oath. Poetic).

Oath is essentially the creature of Frankenstein, a middle-school corporate alchemy experiment to fuse the properties of the companies formerly known as AOL and Yahoo into the larger behemoth known as Verizon. You can feel the terrible synergy emanating from the multiple firewalls it takes to get to our corporate resources.

Oath has a problem:* it needs to grow for Wall Street to be happy and for Verizon not to neuter it, but it has an incredible penchant for making product decisions that basically tell users to fuck off. Oath’s year over year revenues last quarter were down 6.9%, driven by extreme competition from digital ad leaders Google and Facebook.

The solution apparently? Drive page views down. If that logic doesn’t make sense, well then, maybe you should fill out a job application.

The kerfuffle is over Tumblr, which is among Oath’s most important brands, in that people actually know what it is and kind of still like it. Tumblr, which Yahoo notably acquired under Marissa Mayer back in 2013, has been something of a product orphan — one of the few true software platforms left in a world filled with editorial content like TechCrunch and HuffPost (Oath sold off Flickr earlier this year to SmugMug — which also seems to be going through its own boneheaded product decision phase).

All was well and good — well, at least quiet — in the Tumblr world until Apple pulled the plug on Tumblr’s app in the App Store a few weeks ago over claims of child porn. Now let’s be absolutely clear: child porn is abhorrent, and filtering it out of online photo sharing sites is a prime directive (and legally mandated).

But Oath has decided to do something equally obnoxious: it intends to ban anything that might be considered “adult content” starting December 17th, just in time for the holidays when purity around family gatherings is key.

In Tumblr’s policy, “Adult content primarily includes photos, videos, or GIFs that show real-life human genitals or female-presenting nipples, and any content—including photos, videos, GIFs and illustrations—that depicts sex acts.” You’ll notice the written legerdemain — “primarily” doesn’t exclude the wider world of adult-oriented content that almost invariably is going to be subsumed under this policy.

Obviously, adults (and presumably teens as well) are pissed. As users are starting to see what photos are getting flagged (hint: not the ones with porn in them), that’s only making them more angry.

Oath is attempting to compress the content moderation engineering and testing of Facebook down to a span of a few weeks. And Facebook hasn’t even figured this one out yet, which is why people are still being murdered across the world from viral messages and memes it hosts that incite ethnic hatred and genocide.

I get the pressure from Apple. I get the safety of saying “just ban all the images” à la Renaissance pope. I get the business decision of trying to maintain Tumblr’s clean image. These points are all reasonable, but they all are just useless without Tumblr’s core and long-time users.

What flummoxes me from a product perspective is that it’s not as if banning all adult content is the singular solution to the problem. There is an entire spectrum of product, policy, legal, and cultural ingredients that could be drawn upon. There could be more age verification, better separation of “safe for children” and “meant for adults” content, and more focus on messaging to users that moderation was meant to help the product and focus audiences rather than to puritanically filter.

Or you can just kill the photos, the somehow still loyal core user base, a safe space for expression via nudity and sexuality and, well, traffic along with it. And then you look at -6.9% growth and think: huh, I wonder if there is a connection.

*Mandatory colon

Tencent Music reintroduces its IPO

Tencent Music. Photo by Zhan Min/VCG via Getty Images

Maybe the IPO markets are thawing a bit after the crash of the last few weeks and…tariffs. From my colleague Catherine Shu:

Tencent Music Entertainment’s initial public offering is back in motion, two months after the company reportedly postponed it amid a global selloff. In a regulatory filing today, the company, China’s largest streaming music service, said it plans to offer 82 million American depositary shares (ADS), representing 164 million Class A ordinary shares, for between $13 and $15 each. That means the IPO will potentially raise up to $1.23 billion.
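The filing's headline number checks out with some quick arithmetic: 82 million ADSs priced across the $13–$15 range brackets the potential raise.

```python
# Potential proceeds from the Tencent Music offering described in the filing.
ads = 82_000_000       # American depositary shares on offer
low, high = 13, 15     # price range per ADS, in dollars

print(ads * low)       # proceeds at the bottom of the range
print(ads * high)      # proceeds at the top of the range: $1.23 billion, the figure cited
```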

My colleague Eric Peckham wrote a deeper dive behind the lessons of Tencent Music for the broader music industry:

At its heart, Tencent Music is an interactive media company. Its business isn’t merely providing music, it’s getting people to engage around music. Given its parent company Tencent has become the leading force in global gaming—with control of League of Legends maker Riot Games and Clash of Clans maker Supercell, plus a 40 percent stake in Fortnite creator Epic Games, and role as the top mobile games publisher in China—its team is well-versed in the dynamics of in-game purchasing.

Tencent Music has staked out a very differentiated business model from Spotify, Pandora, Apple Music, etc. It has used an engagement-based product model to make live-streaming and virtual gifts huge business lines, without dealing with the product marketing logistics of subscription. Where the West always asks you to pay for access, Tencent is asking you essentially to pay to have fun and be part of an experience.

Eric asks what I think is a deep question: why hasn’t this model (which seems particularly obvious in music, given the overall events component of that business) been back-ported from China to the Western world? He sees a world where Facebook buys Spotify (I don’t), but I think there is absolutely a gap in the market for a music platform to really own this model.

NYC loses an open-source superstar

Photo: Amanda Hall / robertharding / Getty Images

Wes McKinney is a major open-source star and the engineer behind pandas, which is one of the fundamental Python data libraries, as well as a founding engineer of Apache Arrow, which is an in-memory data structure specification.

So it is big news that he has decided to decamp from New York City, where he has lived for ten years, to Nashville. Writing on his personal blog:

I’ve increasingly felt that open source development is at odds with the values that are driving a large portion of the corporate world, particularly in the United States. Many companies won’t fund open source work because there is no “return on investment”. This is deeply frustrating, and being surrounded by people whose actions align with profit-motive can be pretty discouraging. It’s not necessarily that people who work in NYC or SF are greedy or amorally concerned with making money. In many cases they are just responding to incentives coming from pretty low on the hierarchy of needs.

And

Full-time open source developers in many cases will make less money than their peers who work at Google, Facebook, Microsoft, Apple, or another major tech company. If we are to enable more people to do open source development as a full-time vocation, we need to grow supportive tech communities in places that are more affordable. (emphasis his).

I think this is a very interesting trend to watch in the coming years. It’s not just the small business and art types who want to move to lower cost locales to match their lifestyle spending to the (economic) value of their work. Software developers who want to work on more meaningful projects outside of advertising and finance will also increasingly need to consider these sorts of geographical adjustments.

As I wrote a few months ago about digital nomads:

From cryptocurrency millionaires in Puerto Rico to digital nomads in hotspots like Thailand, Indonesia, and Colombia, there is increasingly a view that there is a marketplace for governance, and we hold the power as consumers. Much like choosing a cereal from the breakfast department of a supermarket, highly-skilled professionals are now comparing governments online — and making clear-headed choices based on which ones are most convenient and have the greatest amenities available.

Economic migration — whether from cost-of-living, ecosystem or governance culture, or just for new horizons — is the watchword of this century. It’s a huge loss for NYC that people like McKinney can no longer find their work compatible with the city.

What’s next

I am still obsessing about next-gen semiconductors. If you have thoughts there, give me a ring: danny@techcrunch.com.

Thoughts on Articles

Imagined Communities – a major classic of social science thought; it’s amazing how well it has held up, and the lessons it holds for us in the cyber age. I intend to write a review of it for this weekend, so expect more notes later.

Quietly, Japan has established itself as a power in the aerospace industry – I love industrial policy and national economic development, and Eric Berger has done a great job on both fronts with his dispatch in Ars Technica. Japan is roaring back into space, increasing its launch capabilities and also preparing to deploy its own GPS infrastructure. An important contextual read for those who follow SpaceX.

Why we stopped trusting elites — a compelling deep dive by William Davies in The Guardian into how populism is animated by the failures of elites. Couldn’t agree more that elites have lost significant trust over the last few decades, mostly from hubris, corruption, and outright fraud (the financial crisis being just the largest). Elites need to hold themselves to much higher standards if we want to ask our fellow citizens for their support.

Reading docket

What I’m reading (or at least, trying to read)

  • Huge long list of articles on next-gen semiconductors. More to come shortly.

Adobe XD now lets you prototype voice apps 

Adobe XD, the company’s platform for designing and prototyping user interfaces and experiences, is adding support for a different kind of application to its lineup: voice apps. Those could be applications that are purely voice-based — maybe for Alexa or Google Home — or mobile apps that also take voice input.

The voice experience is powered by Sayspring, which Adobe acquired earlier this year. As Sayspring’s founder and former CEO Mark Webster told me, the team has been working on integrating these features into XD since he joined the company.

To support designers who are building these apps, XD now includes voice triggers and speech playback. That user experience is tightly integrated with the rest of XD and in a demo I saw ahead of today’s reveal, building voice apps didn’t look all that different from prototyping any other kind of app in XD.

To make the user experience realistic, XD can now trigger speech playback when it hears a specific word or phrase. This isn’t a fully featured natural language understanding system, of course, since the idea here is only to mock up what the user experience would look and sound like.
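The prototyping behavior described here — a designer maps trigger phrases to canned speech responses — can be sketched simply. This mirrors the mock-up flow, not XD's internal implementation; the function names and phrases below are purely illustrative.

```python
def make_prototype(triggers):
    """Build a mock voice prototype: triggers maps a phrase to the speech it plays."""
    normalized = {phrase.lower(): reply for phrase, reply in triggers.items()}

    def respond(utterance):
        # Match any designer-defined trigger phrase inside the utterance.
        for phrase, reply in normalized.items():
            if phrase in utterance.lower():
                return reply
        return None  # no trigger matched; a real tool might prompt the tester

    return respond

demo = make_prototype({"what's the weather": "It's sunny and 72 degrees."})
print(demo("Alexa, what's the weather today?"))  # plays back the canned reply
```

Substring matching is all a design mock-up needs; the production assistant platforms (Alexa, Google Assistant) handle the actual natural language understanding.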

“Voice is weird,” Webster told me. “It’s both a platform like Amazon Alexa and the Google Assistant, but also a form of interaction […] Our starting point has been to treat it as a form of interaction — and how do we give designers access to the medium of voice and speech in order to create all kinds of experiences. A huge use case for that would be designing for platforms like Amazon Alexa, Google Assistant and Microsoft Cortana.”

And these days, with the advent of smart displays from Google and its partners, as well as the Amazon Echo Show, these platforms are also becoming increasingly visual. As Webster noted, the combination of screen design and voice is becoming more and more important now, so adding voice technology to XD seemed like a no-brainer.

Andrew Shorten, Adobe’s product management lead for XD, stressed that before Adobe acquired Sayspring and integrated it into XD, users had a hard time building voice experiences. “We started to have interactions with customers who were beginning to experiment with creating experiences for voice,” he said. “And then they were describing the pain and the frustration — all the tools that they’d use to be able to prototype didn’t help them in this regard. And so they had to pull back to working with developers and bringing people in to help with making prototypes.”

XD is getting a few other new features, too. It now supports a full range of plugins, for example, that are meant to automate some tasks and integrate it with third-party tools.

Also new is auto-animate, which brings relatively complex animations to XD that appear when you are transitioning between screens in your prototype app. The interesting part here, of course, is that this is automated. To see it in action, all you have to do is duplicate an existing artboard, modify some of the elements on the new artboard and tell XD to handle the animations for you.

The release also features a number of other new tools. Drag Gestures now allows you to re-create the standard drag gestures in mobile apps, maybe for building an image carousel, for example, while linked symbols make it easier to apply changes across artboards. There is also now a deeper integration with Adobe Illustrator and you can export XD designs to After Effects, Adobe’s animation tool for those cases where you need full control over animations inside your applications.

Chef launches deeper integration with Microsoft Azure

DevOps automation service Chef today announced a number of new integrations with Microsoft Azure. The news, which was announced at the Microsoft Ignite conference in Orlando, Florida, focuses on helping enterprises bring their legacy applications to Azure and ranges from the public preview of Chef Automate Managed Service for Azure to the integration of Chef’s InSpec compliance product with Microsoft’s cloud platform.

With Chef Automate as a managed service on Azure, which provides ops teams with a single tool for managing and monitoring their compliance and infrastructure configurations, developers can now easily deploy and manage Chef Automate and the Chef Server from the Azure Portal. It’s a fully managed service and the company promises that businesses can get started with using it in as little as thirty minutes (though I’d take those numbers with a grain of salt).

When those configurations need to change, Chef users on Azure can also now use the Chef Workstation with Azure Cloud Shell, Azure’s command line interface. Workstation is one of Chef’s newest products and focuses on making ad-hoc configuration changes, no matter whether the node is managed by Chef or not.

And to remain in compliance, Chef is also launching an integration of its InSpec security and compliance tools with Azure. InSpec works hand in hand with Microsoft’s new Azure Policy Guest Configuration (who comes up with these names?) and allows users to automatically audit all of their applications on Azure.

“Chef gives companies the tools they need to confidently migrate to Microsoft Azure so users don’t just move their problems when migrating to the cloud, but have an understanding of the state of their assets before the migration occurs,” said Corey Scobie, the senior vice president of products and engineering at Chef, in today’s announcement. “Being able to detect and correct configuration and security issues to ensure success after migrations gives our customers the power to migrate at the right pace for their organization.”


TomTom launches a free mobile maps SDK for developers

TomTom, the mapping and navigation company you probably still remember from its heyday as a leader in the stand-alone in-car GPS space, is launching a free mobile maps SDK for developers at TechCrunch Disrupt today. This move is part of the company’s overall transformation from a consumer device manufacturer to a software company.

The new SDK will feature free maps and traffic tiles for all Android and iOS users. As TomTom VP of business development and product marketing Leandro Margulis told me, free in this case really means free. While the SDK doesn’t offer routing and some other advanced features, there’s no limit to how developers use its mapping and traffic tiles.

“This is about putting the developer at the center of everything that we do,” Margulis told me. “If you look at any kind of partnership that you do, either big or small, at some point you tell an engineer or a developer to go and try the API. We want to make sure that everybody can see the beauty of what we can do.” He noted that other players in this space also give away a lot of different things, but that TomTom decided to give away what it does best — and to do so without any restrictions in terms of API calls. And developers won’t even have to give TomTom a credit card number to do so.

As Margulis also stressed, developers can mix and match geolocation services from multiple vendors without breaking any of TomTom’s rules.

In addition to the new SDK, TomTom is also making a number of other announcements today. STMicroelectronics, for example, is connecting its development tools directly to TomTom’s Maps API to help IoT companies locate their devices. RideOS, an autonomous driving startup, will use TomTom’s real-time and historical traffic data and maps for its platform while Zenly will use the company’s routing and parts of its search APIs to power its social maps.

“People may say, ‘TomTom, are you guys still around?’ But yeah, we’re thriving,” Margulis said. “And we’re not your dad’s TomTom, we’re not your mom’s TomTom. We’re your TomTom. We are a technology company, we are location experts and we are here to enable the next generation of location-based use cases.”