DefinedCrowd offers mobile apps to empower its AI-annotating masses

DefinedCrowd, the Startup Battlefield alumnus that produces and refines data for AI-training purposes, has just debuted iOS and Android apps for its army of human annotators. The apps should help speed up a process that the company already touts as one of the fastest in the industry.

It’s no secret that AI relies almost totally on data that has been hand-annotated by humans, who point out objects in photos, analyze the meaning of sentences or expressions, and so on. Doing this work has become a sort of cottage industry, with many annotators doing it part time or between other jobs.

There’s a limit, however, to what you can do if the interface you must use to do it is only available on certain platforms. Just as others occasionally answer an email or look over a presentation while riding the bus or getting lunch, it’s nice to be able to do work on mobile — essential, really, at this point.

To that end, DefinedCrowd has made its own app, which shares the Neevo branding of the company’s annotation community and lets annotators work whenever they want, tackling image or speech annotation tasks on the go. It’s available on iOS and Android starting today.

It’s a natural evolution of the market, CEO Daniela Braga told me. There’s a huge demand for this kind of annotation work, and it makes no sense to restrict the schedules or platforms of the people doing it. She suggested everyone in the annotation space would have apps soon, just as every productivity or messaging service does. And why not?


The company is growing quickly, going from a handful of employees to over a hundred, spread over its offices in Lisbon, Porto, Seattle, and Tokyo. The market, likewise, is exploding as more and more companies find that AI is not only applicable to what they do, but also within their reach.

Microsoft open-sources a crucial algorithm behind its Bing Search services

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return results to users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where people search through vast data troves, including retail. And in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
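The core lookup can be illustrated with a toy sketch (plain NumPy, not the SPTAG API): encode items as vectors, then return the indices of the vectors most similar to the query. SPTAG’s contribution is doing this at billions-of-vectors scale using space partition trees and a neighborhood graph, rather than the brute-force scan shown here.

```python
import numpy as np

def build_index(vectors):
    # Toy "index": just stack the vectors. SPTAG instead builds tree and
    # graph structures so a query doesn't have to scan every vector.
    return np.asarray(vectors, dtype=float)

def nearest(index, query, k=2):
    # Cosine similarity between the query and every indexed vector.
    q = np.asarray(query, dtype=float)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return list(np.argsort(-sims)[:k])  # indices of the k best matches

# e.g. three items embedded in 2-D; a real system uses hundreds of dimensions
idx = build_index([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
```

Querying `idx` with the vector `[1, 0]` returns items 0 and 2 first, since their directions are closest to the query’s.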

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.

Factmata gets backed by eyeo, maker of Adblock Plus, and takes over its Trusted News app

“Fake news” — news content that either misleads people with half-truths, or outright lies — has become a permanent fixture of the internet. Now, as tech and media platforms continue to search for the best way to fight it, Factmata — a London startup backed by Biz Stone, Craig Newmark, Mark Cuban, Mark Pincus, and more to build a platform to detect when false information is shared online — is announcing a new investor and partnership that will see it expanding its scope.

The company is picking up an investment from eyeo, the company behind Adblock Plus, and as part of it, Factmata is taking on the running of Trusted News, the Chrome extension that eyeo launched last year to give a nudge to those browsing content on the web to indicate whether a story is legit or shit.

Dhruv Ghulati, the CEO of Factmata — who co-founded the company with Sebastian Riedel and Andreas Vlachos (Riedel’s other fake-news-fighting startup, Bloomsbury AI, was acquired by Facebook last year) — said that the financial terms of the deal were not being disclosed. He added that “eyeo invested both cash and the asset” and that “it’s a significant amount that strategically helps us accelerate development.” He points out that Factmata has yet to raise money from any VCs.

Trusted News today — an example of how it looks is in the screenshot above — has “tens of thousands” of users, Ghulati said, and the aim will be to keep developing the product and take those numbers to the next level: hundreds of thousands of users. The plan is to build extensions for other browsers — “You can imagine a number of platforms across browsers (eg Brave) search engines (eg Mozilla), hosting companies (eg Cloudflare) could be interested but we haven’t engaged in discussions yet,” he said — as well as to expand what Trusted News itself provides.

“The goal… is to make it a lot more interactive where users can get involved in the process of rating articles,” he said. “We found that young people especially surprisingly really want to get involved in debating how an article is written with others and engaging in rating systems, rather than just being handed a rating to trust.”

Ghulati said that eyeo’s decision to hand off running Trusted News to Factmata was a case of horses for courses.

“They are giving it to us in return for a stake because we are the best placed and most focused natural language understanding company to make use of it, and progress it forward fast,” he said. “For Factmata, we partner with a company that has proven ability to generate large, engaged community growth.”

“Just as eyeo and Adblock Plus are protecting users from harmful, annoying ads, the partnership between Factmata and Trusted News gets us one step closer to a safer, more transparent internet. Content that is harmful gets flagged automatically, giving users more control over what kind of content they trust and want to read,” said Till Faida, CEO & Co-Founder, eyeo, in a statement.

Factmata has already started thinking about how it can put some of its own technology into the product, for example by adding in the eight detection algorithms it has built (detailed in the screenshot above, covering clickbait, hate speech, racism and more). Ghulati added that Factmata will be swapping out the way Trusted News looks up information. Up to now, the app has been powered by a tool from MetaCert, a database of information used to provide a steer on bias.

“We will replace MetaCert and make the system work at the content level rather than a list lookup, using machine learning,” he said, also noting that Factmata plans to add other signals “beyond just if the content is politically hyperpartisan or hate speech,” such as whether it is opinionated, one-sided or could be deemed controversial. “We won’t deploy anything into the app until it reaches 90% accuracy,” Ghulati said. “Hopefully from there, humans get it more accurate, per a public testing set we will make available for all signals.”
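The shift from list lookup to content-level scoring can be caricatured in a few lines. This is a deliberately crude keyword-ratio sketch with made-up cue words and a made-up threshold, nothing like Factmata’s trained models, but it shows the shape of the idea: score the text itself, then only flag it above a confidence threshold.

```python
# Hypothetical cue words, purely for illustration; a real system learns
# such signals from expert-labeled training data instead.
HYPERPARTISAN_CUES = {"traitor", "shill", "corrupt", "destroy"}

def hyperpartisan_score(text):
    # Fraction of words matching the cue list: a stand-in for a model score.
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in HYPERPARTISAN_CUES for w in words) / len(words)

def flag(text, threshold=0.05):
    # Only surface a warning when the score clears the threshold,
    # mirroring the caution about not deploying below 90% accuracy.
    return hyperpartisan_score(text) >= threshold
```

With this toy scorer, an invective-heavy sentence gets flagged while neutral reporting does not, which is the behavior a content-level classifier aims for.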

Ghulati himself is a machine learning specialist, and while we haven’t heard a lot from Factmata in the last year, part of that is likely because building a platform from scratch to detect a problem that seems to have endless tentacles (like the web itself) can be a challenge (just ask Facebook, which is heavily resourced and still seems to let things slip through).

He said that the eight algorithms it’s built “work well” — more specifically, rating at more than 90 percent accuracy on Factmata’s evaluation sets of US English-language news articles. Meanwhile, it has been refining the algorithms on short-form content such as YouTube video transcripts, tweets and blog posts, and adding more languages, starting with French.

“The results are promising on the expanded types of content because we have been developing proprietary techniques to allow the models to generalise across domains,” he said.

Factmata has also been working with ad exchanges — as we noted back when Factmata first raised $1 million, this was one of the big frontiers it wanted to tackle, since ad networks are so often used to disseminate false information. It’s now completed case studies with 14 major ad exchanges, SSPs and DSPs and found that up to 4.92 percent of a sample of pages served in some ad exchanges contain high levels of hate speech or hyperpartisan language, “despite them thinking they were clean and them using a number of sophisticated tools with bigger teams than us.”

“This for us showed us there is a lot of this type of language out there that is being inadvertently funded by brands,” he noted.

It’s also been gathering more training data to help classify content, working with people who are “experts in the fields of hate speech or journalistic bias.” He said that Factmata has “proven our hypothesis of using ‘expert driven AI’ makes sense for classifying things that are inherently subjective.” The human element matters: using experts leads to inter-annotator agreement rates above 68 percent, whereas with non-experts, agreement on what is or is not a claim, or what is or is not bias, is lower than 50 percent.

“The eyeo deal along with other commercial partnerships we’re working on are a sign: though the system is not 100 percent accurate yet, within a year of building and testing our tech is ready to start commercialisation,” Ghulati added.

Singapore’s SalesWhale raises $5.3M to bring AI to sales and marketing teams

SalesWhale, a Singapore-based startup that uses AI to help marketers and salespeople generate leads, has announced a Series A round worth $5.3 million.

The investment is led by Monk’s Hill Ventures — the Southeast Asia-focused firm that led SalesWhale’s seed round in 2017 — with participation from existing backers GREE Ventures, Wavemaker Partners and Y Combinator. That’s right, SalesWhale is one of a select few Southeast Asian startups to have been through YC; it graduated back in summer 2016.

SalesWhale — which calls itself “a conversational email marketing platform” — uses AI-powered “bots” to handle email. In this case, its digital workforce is trained for sales leads. That means both covering the menial parts of arranging meetings and coordination, and the more proactive side of engaging old and new leads.

Back when we last wrote about the startup in 2017, it had just half a dozen staff. Fast-forward two years and that number has grown to 28, CEO Gabriel Lim explained in an interview. The company is going after more growth with this Series A money, and Lim expects headcount to jump past 70; SalesWhale is also considering opening an office in California. That office would primarily serve to encourage new business and improve communication and support for existing clients, most of whom are located in the U.S., according to Lim. Other hires will be tasked with deepening integration with third-party platforms, particularly sales and enterprise services.

The past two years have also seen SalesWhale switch gears, going from targeting startups as customers to working with mid-market and enterprise firms. SalesWhale’s “hundreds” of customers include recruiter Randstad, educational company General Assembly and enterprise service business Unit4. As the service has grown more sophisticated, pricing has jumped from an initial $39-$99 per seat to more than $1,000 per month for enterprise customers.

SalesWhale’s founding team (left to right): Venus Wong, Ethan Lee and Gabriel Lim

While AI is a (genuine) threat to many human jobs, SalesWhale sits on the opposite side of that problem: it actually helps human employees get more work done. Its service can get stuck into a pile (or spreadsheet) of leads that human staff don’t have time for, begin reaching out, qualify leads and send them on to living and breathing colleagues to take forward.

“A lot of potential leads aren’t touched” by existing human teams, Lim reflected.

But when SalesWhale reps do get involved, they are often not recognized as the bots they are.

“Customers are often so convinced they are chatting with a human — who is sending collateral, PDFs and arranging meetings — that they’ll say things like ‘I’d love to come by and visit someday,’ ” Lim joked in an interview.

“Indeed, a lot of times, sales teams refer to [the SalesWhale-powered] sales assistant like they are a real human colleague,” he added.

Let’s save the bees with machine learning

Machine learning and all its related forms of “AI” are being used on just about every problem under the sun, but even so, stemming the alarming decline of the bee population still seems out of left field. In fact, it’s a great application for the technology, and it may help both bees and beekeepers keep hives healthy.

The latest threat to our precious honeybees is the Varroa mite, a parasite that infests hives and sucks the blood from both bees and their young. While it rarely kills a bee outright, it can weaken it and cause young to be born similarly weak or deformed. Over time this can lead to colony collapse.

The worst part is that unless you’re looking closely, you might not even see the mites — being mites, they’re tiny: a millimeter or so across. So infestations often go on for some time without being discovered.

Beekeepers, caring folk at heart obviously, want to avoid this. But the solution has been to put a flat surface beneath a hive and pull it out every few days, inspecting all the waste, dirt and other hive junk for the tiny bodies of the mites. It’s painstaking and time-consuming work, and of course if you miss a few, you might think the infestation is getting better instead of worse.

Machine learning to the rescue!

As I’ve had occasion to mention about a billion times before this, one of the things machine learning models are really good at is sorting through noisy data, like a surface covered in random tiny shapes, and finding targets, like the shape of a dead Varroa mite.

Students at the École Polytechnique Fédérale de Lausanne in Switzerland created an image recognition system called ApiZoom, trained on images of mites, that can sort through a photo and identify any visible mite bodies in seconds. All the beekeeper needs to do is take a regular smartphone photo and upload it to the EPFL system.
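Conceptually, a detector like this scans the photo in patches and counts the patches a classifier flags. In the sketch below (my illustration, not ApiZoom’s code), the `looks_like_mite` predicate stands in for what is really a trained image-recognition model:

```python
def count_detections(image, patch_size, looks_like_mite):
    # Slide a fixed-size window over the image (a 2-D grid of pixel
    # values) and count windows the classifier flags as containing a mite.
    height, width = len(image), len(image[0])
    hits = 0
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            window = [row[x:x + patch_size] for row in image[y:y + patch_size]]
            if looks_like_mite(window):
                hits += 1
    return hits
```

With a stand-in classifier that flags dark 2x2 blobs, a 4x4 test image containing one blob yields a single detection; the count over a whole hive-board photo is what gives the infestation estimate.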

The project started back in 2017, and since then the model has been trained with tens of thousands of images and achieved a success rate of detection of about 90 percent, which the project’s Alain Bugnon told me is about at parity with humans. The plan now is to distribute the app as widely as possible.

“We envisage two phases: a web solution, then a smartphone solution. These two solutions allow [beekeepers] to estimate the rate of infestation of a hive, but if the application is used on a large scale, of a [whole] region,” Bugnon said. “By collecting automatic and comprehensive data, it is not impossible to make new findings about a region or atypical practices of a beekeeper, and also possible mutations of the Varroa mites.”

That kind of systematic data collection would be a major help for coordinating infestation response at a national level. ApiZoom is being spun out as a separate company by Bugnon; hopefully this will help get the software to beekeepers as soon as possible. The bees will thank them later.

Autonomous subs spend a year cruising under Antarctic ice

The freezing waters underneath Antarctic ice shelves and the underside of the ice itself are of great interest to scientists… but who wants to go down there? Leave it to the robots. They won’t complain! And indeed, a pair of autonomous subs have been nosing around the ice for a full year now, producing data unlike any other expedition ever has.

The mission began way back in 2017, with a grant from the late Paul Allen. With climate change affecting sea ice around the world, precise measurements and study of these frozen climes is more important than ever. And fortunately, robotic exploration technology had reached a point where long-term missions under and around ice shelves were possible.

The project would use a proven autonomous seagoing vehicle called the Seaglider, which has been around for some time but was redesigned to perform long-term operations in these dark, sealed-over environments. One of the craft’s co-creators, UW’s Chris Lee, said of the mission at the time: “This is a high-risk, proof-of-concept test of using robotic technology in a very risky marine environment.”

The risks seem to have paid off, as an update on the project shows. The modified craft have traveled hundreds of miles during a year straight of autonomous operation.

It’s not easy to stick around for a long time on the Antarctic coast for a lot of reasons. But leaving robots behind to work while you go relax elsewhere for a month or two is definitely doable.

“This is the first time we’ve been able to maintain a persistent presence over the span of an entire year,” Lee said in a UW news release today. “Gliders were able to navigate at will to survey the cavity interior… This is the first time any of the modern, long-endurance platforms have made sustained measurements under an ice shelf.”

You can see the paths of the robotic platforms below as they scout around near the edge of the ice and then dive under in trips of increasing length and complexity:

They navigate in the dark by monitoring their position with regard to a pair of underwater acoustic beacons fixed in place by cables. The blue dots are floats that go along with the natural currents to travel long distances on little or no power. Both are equipped with sensors to monitor the shape of the ice above, the temperature of the water, and other interesting data points.
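Fixing a position from ranges to two fixed beacons is classic two-circle intersection. A minimal 2-D sketch (my illustration, not the mission’s actual navigation code): the two ranges narrow the glider’s position to at most two candidate points, and another reading such as depth or heading breaks the tie.

```python
import math

def locate(beacon1, beacon2, r1, r2):
    # Intersect the two range circles centered on the beacons; returns the
    # two candidate positions (they coincide when the circles are tangent).
    (x1, y1), (x2, y2) = beacon1, beacon2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # offset perpendicular to it
    px = x1 + a * (x2 - x1) / d
    py = y1 + a * (y2 - y1) / d
    ox = h * (y2 - y1) / d
    oy = h * (x2 - x1) / d
    return (px + ox, py - oy), (px - ox, py + oy)
```

For beacons at (0, 0) and (10, 0) with ranges 5 and sqrt(65), the two candidates are (3, -4) and (3, 4); a vehicle that knows it is north of the baseline keeps the second.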

It isn’t the first robotic expedition under the ice shelves by a long shot, but it’s definitely the longest term and potentially the most fruitful. The Seagliders are smaller, lighter, and better equipped for long-term missions. One went 87 miles in a single trip!

The mission continues, and two of the three initial Seagliders are still operational and ready to continue their work.

Sophia Genetics bags $77M Series E, with 850+ hospitals signed up to its “data-driven medicine”

Another sizeable cash injection for big data biotech: Sophia Genetics has announced a $77M Series E funding round, bringing its total raised to $140M since the business was founded back in 2011.

The company, which applies AI to DNA sequencing to enable what it dubs “data-driven medicine”, last closed a $30M Series D in fall 2017.

The Series E was led by Generation Investment Management. Also investing: European private equity firm Idinvest Partners. Existing investors, including Balderton Capital and Alychlo, also participated in the round.

When we last spoke to Sophia Genetics it had around 350 hospitals linked via its SaaS platform, and was then adding around 10 new hospitals per month.

Now it says its Sophia AI platform is being used by more than 850 hospitals across 77 countries, and it claims to have supported the diagnosis of more than 300,000 patients.

The basic idea is to improve diagnoses by enabling closer collaboration and knowledge sharing between hospitals via the Sophia AI platform, with an initial focus on oncology, hereditary cancer, metabolic disorders, pediatrics and cardiology. 

Expert (human) insights across the network of hospital users are used to collectively enhance genomic diagnostics, and push towards predictive analysis, by feeding and training AI algorithms intended to enhance the reading and analysis of DNA sequencing data.

Sophia Genetics describes its approach as the “democratization” of DNA sequencing expertise.

Commenting on the Series E in a statement, Lilly Wollman, co-head of Generation’s growth equity team said: “We believe that leveraging genetic sequencing and advanced digital analysis will enable a more sustainable healthcare system. Sophia Genetics is a leader in the preventive and personalized medicine revolution, enabling the development of targeted therapeutics, thereby vastly improving health outcomes. We admire Sophia Genetics not just for its differentiated analytics capability across genomic and radiomic data, but also for its exceptional team and culture”.

The new funding will be put towards further expanding the number of hospitals using Sophia Genetics’ technology, and also on growing its headcount with a plan to ramp up hiring in the US especially.

The Swiss-founded firm is now co-based in Lausanne and Boston, US.

In another recent development the company added radiomics capabilities to its platform last year, allowing for what it describes as “a prediction of the evolution of a tumour”, which it suggests can help inform a physician’s choice of treatment for the patient.

China’s WeChat is the latest to get Snap-like ‘Stories’

WeChat, the Chinese messaging giant with more than 1 billion monthly active users around the world, just added a Snap-like ephemeral video feature as part of its biggest overhaul since 2014.

The revamp comes as Tencent, which owns a stake in Snap, sees increasing rivalry from up-and-comers like video app TikTok and news app Jinri Toutiao. WeChat has, over the years, morphed beyond a straight-up messenger to serve many utility purposes. With more than 1 million lightweight apps up and running, users can accomplish a long list of tasks, ranging from shopping to ride-hailing, without ever having to leave WeChat.

Meanwhile, some have expressed frustration over WeChat’s core as a social app. Moments, a feature akin to Facebook’s News Feed, was once a haven for close friends to share articles, photos and videos. But newsfeed content became blander over time as people’s contact lists grew to include their bosses and the local fruit seller who needs to be added as a friend to process WeChat payments.

WeChat founder Allen Zhang is known for his obsession with user experience and has been cautious with tweaks, so a major redesign to the super app is effectively a guidebook for where WeChat is headed for the next few years.

The new off-the-cuff video feature, aptly named “Time Capsule,” is one of WeChat’s more noticeable updates. In the past, users shared videos to three main destinations: A friend, a group chat or Moments. This route remains unchanged, but with Time Capsule, users also can upload videos of up to 15 seconds that disappear after 24 hours, similar to how Snap Stories and its slew of clones, including Instagram Stories, work. Meanwhile, Snap also has drawn inspiration from Chinese apps in a recent redesign.

A blue ring will appear near the profile of those who have recorded an instant story. Screenshot by TechCrunch

Unlike Instagram, which recently started allowing users to share Stories with close friends, WeChat doesn’t let users share Time Capsule videos to specific friends yet. And instead of lining up all the instant videos at the top of the app as Instagram does, WeChat asks users to find them in less conspicuous ways: on Moments, in a group chat or in one’s starred friend list, a blue ring will appear near the profile of those who have recorded instant stories.

These secret entry points mean users are prompted to watch videos of those they know well, as they rarely click on the profile of, say, a fruit vendor.

Time Capsule is also a step up from WeChat’s old video sharing tool, with additional features such as locations and music, functions that are ubiquitous in TikTok and other short-form video apps. Users also can react to Time Capsule videos by blowing virtual “bubbles,” whereas the old video format doesn’t allow such interaction.


While Time Capsule is not necessarily a direct challenger to TikTok — a product of the world’s most valuable startup ByteDance — it enriches the video experience for users who want to give close friends a window into their life. TikTok, by comparison, delivers content by relying on artificial intelligence to read users’ past habits rather than studying their social graphs.

That said, Tencent has shown signs of trying to catch up with TikTok, rolling out a dozen video apps this year. While Tencent blocks TikTok videos from being shared to WeChat, its own proprietary video app Weishi gets preferential treatment: when users choose to record a video on WeChat, there’s an option to record it via Weishi. But Tencent’s short-video fleet has a long way to go before it reaches TikTok’s global dominance of 500 million monthly active users.

Another WeChat update also appears as a response to a popular ByteDance app. While WeChat users could show appreciation for an article by clicking on a “like” button, there was no effective way in the past to know what their friends enjoyed. The revamped WeChat now lets people see all the articles their friends have liked under one single stream called “Wow.”

That’s a feature that ByteDance’s Jinri Toutiao news app cannot rival, as Wow is built on billions of users who know each other, unlike Jinri Toutiao, which relies on AI personalization like its sibling TikTok. WeChat is already colossal and can never please every user, but its new move shows that it’s paying close attention to anyone that might steal its users’ eyeball time.

Facebook open sources PyText NLP framework

Facebook AI Research is open sourcing some of the conversational AI tech it is using to power its Portal video chat display and M suggestions on Facebook Messenger.

The company announced today that its PyTorch-based PyText NLP framework is now available to developers.

Natural language processing deals with how systems parse human language and are able to make decisions and derive insights from it. The PyText framework, which the company sees as a conduit for AI researchers to move more quickly between experimentation and deployment, will be particularly useful for tasks like document classification, sequence tagging, semantic parsing and multitask modeling, among others, Facebook says.

The company has built the framework to fit pretty seamlessly into research and production workflows with an emphasis on robustness and low-latency to meet the company’s real-time NLP needs. The product is responsible for models powering more than a billion daily predictions at Facebook.

Another big highlight is the framework’s modularity, which allows developers not only to create new pipelines from scratch but also to modify existing workflows. PyText connects to the ONNX and Caffe2 frameworks. It also supports training multiple models at once, in addition to distributed training for training models over several runs.
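As an illustration of that modular style (plain Python, not PyText’s actual API), the appeal is that each stage (tokenizer, featurizer, model) is a swappable component composed into one pipeline:

```python
def tokenize(text):
    # Stage 1: split raw text into lowercase tokens.
    return text.lower().split()

def featurize(tokens, vocab):
    # Stage 2: bag-of-words counts over a fixed vocabulary.
    return [tokens.count(word) for word in vocab]

def make_pipeline(*stages):
    # Compose stages into one callable; swapping a stage (say, a better
    # tokenizer) leaves the rest of the workflow untouched.
    def run(value):
        for stage in stages:
            value = stage(value)
        return value
    return run

# Illustrative vocabulary and wiring
VOCAB = ["ship", "goal", "score"]
pipeline = make_pipeline(tokenize, lambda toks: featurize(toks, VOCAB))
```

Here `pipeline("Score the goal score")` yields the feature vector `[0, 1, 2]`, and a classifier stage could simply be appended as a third component.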

The company obviously isn’t done making improvements to its NLP frameworks. Facebook says that going forward, it’s paying particular attention to building an end-to-end workflow for models running on mobile devices.

PyText is available on GitHub.

Tencent is launching its own version of Snap Spectacles

Some were surprised to see Snap release a second version of its “face-camera” Spectacles gadget, since the original version failed to convert hype into sales.

But those lackluster sales — which dropped to as low as 42,000 units per quarter — didn’t dissuade the U.S. social firm from making more specs, and they haven’t dissuaded imitators either: now Tencent, the Chinese internet giant and Snap investor, has launched its own take on the genre.

Tencent this week unveiled its answer to the video-recording sunglasses, which, you’ll notice, bear a striking resemblance to Snap’s Spectacles.

Called the Weishi smartglasses, Tencent’s wearable camera sports a lens in the front corner that allows users to film from a first-person perspective. Thankfully, the Chinese gaming and social giant has not made the mistake of Snap’s first-generation Spectacles, which highlighted the camera with a conspicuous yellow ring.

Tencent, which is best known for operating China’s massively popular WeChat messenger, has been an investor in Snap for some time, having backed it long before it went public. And when others criticized the company and its share price struggled, Tencent doubled down: it snapped up an additional 12 percent stake one year ago, and it is said to have offered counsel to Snap CEO Evan Spiegel on product strategy. We don’t know, however, whether the two sides’ discussions ever covered Spectacles and thus inspired this new Tencent take on them.

The purpose behind Tencent’s new gadget is implicit in its name. Weishi, which means “micro videos” in Chinese, is also the name of the short-video sharing app that Tencent has been aggressively promoting in recent months to catch up with market dominators TikTok and Kuaishou.

TikTok, known as Douyin in China, is part of the entertainment ecosystem that Beijing-based ByteDance is building. ByteDance also runs the popular Chinese news aggregator Toutiao and is poised to overtake Uber as the world’s most-valued tech startup when it closes its mega $3 billion funding round.

Weishi’s other potential rival Kuaishou is, interestingly, backed by Tencent. Kuaishou launched its own video-taking sunglasses in July.

Alongside the smart sunglasses, Tencent has also rolled out a GoPro-like action camera that links to the Weishi app. Time will tell whether the gadgets will catch on and get more people to post on Weishi.

Snap Spectacles V1 (top) and V2

The spectacles will go on sale November 11, a date that coincides with Singles Day, the annual shopping spree run by Tencent’s close rival Alibaba. Tencent does not make the gadget itself and instead has teamed up with Shenzhen-based Tonot, a manufacturer that claims to make “trendy” video-taking glasses. Tonot has also worked with Japan’s Line chat app on camera glasses.

“There isn’t really a demand for video-recording glasses,” says Mi Zou, a Beijing-based entrepreneur working on an AI selfie app. That’s because smartglasses are “not offering that much more to consumers than smartphones do,” she argues. Plus, a lot of people on apps like Douyin and Kuaishou love to take selfies, a need that smartglasses fail to fulfill.

“Tencent will have to work on its marketing. It could perhaps learn a few things from the Apple Watch, which successfully touts a geeky product as a fashionable accessory,” suggests Mi, who points out Snap Spectacles’ so-far dim reception.

Weishi had not responded to TechCrunch’s request for comment at the time of writing, but we’ll update this story with any additional information should the company provide it.