Using Trainable Classifiers to Assign Office 365 Retention Labels

The content below is taken from the original ( Using Trainable Classifiers to Assign Office 365 Retention Labels), to continue reading please visit the site. Remember to respect the Author & Copyright.

Training a classifier

The Challenge of Retention Processing

Retention labels control how long items remain in an Office 365 workload and what happens once the retention period expires. Labels can be assigned manually, but the success of manual labeling depends on users understanding how to make the best choice from the available retention labels. Sometimes the choice is clear, as in a document which obviously contains information that should be kept, and sometimes it’s not.

Auto-label policies try to solve the problem by looking for documents and messages which match patterns. For example, if a document holds four instances of a credit card number, it should be assigned the Financial Data label. On the other hand, if a document holds personal information like a social security number, it should get the PII Data label.

Auto-label policies work well when items hold content that is identifiable by matching against the 100-plus sensitive data types defined by Microsoft or a keyword search for a specific phrase (like “project Contoso”). They are especially valuable when organizations have large numbers of existing documents to be labeled. Computers are better at repetitive tasks than humans, and it makes sense to deploy intelligent technology to find and label documents at scale.

That is, if you can be sure that the documents you want to label can be accurately located. Sensitive data types and keyword searches do work, but there’s always likely to be some form of highly specific information in an organization that searching by data type or keyword doesn’t quite suit. Using a trainable classifier might help in these situations.

Standard Classifiers and Licensing

A trainable classifier is a digital map of a type of document (Office 365 has supported digital fingerprints extracted from template documents for several years). The classifier is trainable because it learns by observing samples of the documents you want to process plus some examples of non-matching items until the predictions made by the classifier are accurate enough for it to be used.

Microsoft has a set of classifiers for use in compliance features, like the Profanity or Threat classifiers used in communication compliance policies. As the names suggest, these classifiers identify items containing profane or threatening text. Microsoft created the classifiers by training them with large numbers of text examples for the classifiers to learn the essential signs of what might constitute profane or threatening language.

A preview allowing tenants to create and use trainable classifiers in Office 365 is available in the data classification section of the Microsoft 365 compliance center. Like all auto-label functionality, when trainable classifiers are generally available, they’ll need Office 365 or Microsoft 365 E5 compliance licenses.

Creating a Trainable Classifier

To create a trainable classifier, you’ll need at least 250 samples of the type of document you eventually want to use the classifier to locate (more is better). The documents can’t be encrypted, must be in English (for now), and be stored in SharePoint Online folders that only hold items to be used for training.

To test things out, I created a classifier for Customer Invoices using ten years’ worth of the Excel worksheets I use to generate invoices. The steps I took were:

  • Create a folder in a SharePoint Online site and copy customer invoices to the folder. The training model is built from these documents.
  • Create the new trainable classifier in the Microsoft 365 compliance center by giving the classifier a name and pointing it to the folder holding the seed documents.
  • Wait for the seed documents to be processed to create the training model (this can take 24 to 48 hours). After indexing the folder, the new classifier examines the seed documents to understand their characteristics. In my case, what makes an invoice? For instance, the classifier will learn that invoices have a customer name, the name of my company, a date, some lines of billing information, and instructions for how to pay. Although the seed documents contain different information, the essential structure of the documents is the same, and this is what helps the classifier learn how to recognize future documents of the same type (a conceptual sketch of this kind of learning appears after Figure 1).
  • Go through a review process (batches of 30 items) to check the predictions made by the classifier. A human review tells the classifier when it is right or wrong (Figure 1). The training model is updated after you complete a batch and applied to the next batch of reviews.
Figure 1: Training a classifier (image credit: Tony Redmond)
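Microsoft doesn’t document the internals of trainable classifiers, so the following is only a conceptual sketch of the idea: a text classifier learns from positive seed documents plus negative examples, then its predictions on a new batch are reviewed. It uses scikit-learn purely as an illustration (not anything Microsoft ships), and the sample strings are hypothetical.

```python
# Conceptual sketch only: trainable classifiers are a Microsoft-internal model,
# but the underlying idea (learn from positive seeds plus negative examples,
# then predict on new items) looks roughly like this scikit-learn pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text extracted from seed documents (invoices)
# and from non-matching documents.
seed_texts = ["Invoice 2021-042 Contoso Ltd amount due 30 days", "Invoice 2021-043 Fabrikam amount due on receipt"]
other_texts = ["Meeting notes for the project kickoff", "Holiday rota for the support team"]

texts = seed_texts + other_texts
labels = [1] * len(seed_texts) + [0] * len(other_texts)  # 1 = looks like an invoice

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Review step: a human confirms or corrects predictions on a new batch,
# and the corrected batch feeds back into the next round of training.
new_batch = ["Invoice 2022-007 Northwind amount due", "Agenda for the weekly sync"]
for doc, prediction in zip(new_batch, model.predict(new_batch)):
    print("match" if prediction == 1 else "no match", "->", doc)
```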

Publish the Classifier

As testing proceeds, the accuracy of the classifier should improve as it processes more seed documents. Eventually the accuracy will be good enough (Figure 2) and you’ll be able to publish the classifier to make it available to auto-label policies. Microsoft says that they have seen successful classifiers at 88% accuracy, and provided that the classifier is stable and predictable at that point, it’s good to go. It’s important that you don’t rush to publish until the classifier is thoroughly trained because you can’t force the classifier to go through extra training after publication.

Figure 2: Ready to publish a trainable classifier

Two steps remain before you can use the trainable classifier. First, you create a suitable retention label for the classifier. This can be an existing label, but you might want to create a new label for exclusive use with the classifier.

Second, you create an auto-label policy to apply the chosen label when the trainable classifier matches an item. The policy is built from the label, the classifier (Figure 3), and the locations where you want auto-labeling to happen. This can be all SharePoint sites and mailboxes in the tenant or just a selected few. My recommendation is to start with one or two sites and monitor progress until you’re happy to use the classifier everywhere.

Figure 3: Choosing a trainable classifier in an auto-label policy (image credit: Tony Redmond)

Differentiating Between SharePoint Sites and SharePoint Sites

For some reason, auto-label policies differentiate between “regular” SharePoint sites and those connected to Microsoft 365 groups. Make sure that you select the right category: I spent a week or so wondering why a policy wasn’t working only to discover that it was because I had input the URL of the site belonging to a group (under SharePoint sites) instead of the group name (under Microsoft 365 groups). I don’t understand why Microsoft differentiates regular and group-connected sites.

It’s possible that you might want to apply labels only to documents in a site belonging to a group and not to messages in the group mailbox, but there doesn’t seem to be a good way to do this in the current setup.

Checking Classifier Effectiveness

As noted above, once published you can’t retrain a classifier, but you can check what it’s doing by monitoring items labeled by the auto-label policy. Remember that auto-label policies will not process items that have already been assigned a label.

The simplest test is to examine the retention labels on documents which you expect to be auto-labeled. If the expected labels are present, there’s a reasonable chance that the classifier is working as expected. To confirm that this is true, the activity explorer in the data classification section of the Microsoft 365 compliance center (Figure 4) gives an insight into the application of retention labels and sensitivity labels.

Figure 4: Documents show up as being auto-labeled (image credit: Tony Redmond)

You can also check by looking for audit records in the Office 365 audit log. Records for ComplianceSettingChanged operations are generated when retention labels are applied, but only for SharePoint Online and OneDrive for Business documents.
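If you want to sift the exported audit records for these events, a short script can do it. This is a minimal sketch that assumes you have exported the audit log search results to CSV; the column names used here (CreationDate, UserIds, Operations, AuditData) are typical of that export but should be verified against your own file.

```python
# Minimal sketch: filter an exported audit log CSV for retention-label events.
# Column names are assumptions based on a typical audit log export - adjust
# them to match the headers in your own file.
import csv
import json

with open("AuditLogSearch.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        if row.get("Operations") != "ComplianceSettingChanged":
            continue
        details = json.loads(row.get("AuditData", "{}"))
        # ObjectId normally holds the URL of the labeled document.
        print(row.get("CreationDate"), row.get("UserIds"), details.get("ObjectId"))
```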

Black Box Processing

Checking outputs from a process is a good way of knowing if the process works, but it’s not as satisfactory as it would be if greater visibility existed into aspects of auto-label policies such as:

  • When the auto-label policy is processed against the selected locations.
  • What documents are classified (and documents that match but are not labeled because they already have a label).
  • Any errors which occur.

Ideally, an administrator should be able to view an auto-label policy and see details of recent runs. It would also be good if the administrator could force the policy to run against one or more selected locations, much in the same way that a site owner can force SharePoint Online to reindex a site.

Good Application of Machine Learning

Even though the implementation of trainable classifiers in auto-label policies has some rough edges, I like the general thrust of what Microsoft is trying to do. Being able to build tenant-specific classifiers based on real-life information is goodness. Casting more light into how classifiers work when used in auto-label policies would make these policies so much sweeter.

The post Using Trainable Classifiers to Assign Office 365 Retention Labels appeared first on Petri.

Should You Be Using Cloud-Based Digital CAD Platforms?

The content below is taken from the original ( Should You Be Using Cloud-Based Digital CAD Platforms?), to continue reading please visit the site. Remember to respect the Author & Copyright.

For years, desktop-based, on-site CAD tools have been the standard for designers and engineers working in various industries. These tools are… Read more at VMblog.com.

Global heatmap of cheater density says Brazil is the worst at video games, but there’s no data on China

The content below is taken from the original ( Global heatmap of cheater density says Brazil is the worst at video games, but there’s no data on China), to continue reading please visit the site. Remember to respect the Author & Copyright.

Script kiddies run rampant in Minecraft

Ever torn your keyboard from the desk and flung it across the room, vowing to find the “scrub cheater” who ended your run of video-gaming success? Uh, yeah, us neither, but a study into the crooked practice might help narrow down the hypothetical search.…

Hidden Windows Terminal goodies to check out: Retro mode that emulates blurry CRT display – and more

The content below is taken from the original ( Hidden Windows Terminal goodies to check out: Retro mode that emulates blurry CRT display – and more), to continue reading please visit the site. Remember to respect the Author & Copyright.

Don’t worry, there are some useful features in the update too

Microsoft has bequeathed new capabilities to both the released and the preview versions of Windows Terminal, a feature-laden alternative to the command prompt.…

How to Identify Unsupported Teams Devices using Endpoint Manager

The content below is taken from the original ( How to Identify Unsupported Teams Devices using Endpoint Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.


At the end of June Microsoft announced that they would retire Teams mobile support for Android 4.4 (KitKat) by September this year, which is just around the corner.

This in general is good, because if you use Intune to manage your devices today, then you should be planning to move to Android Enterprise management features that require Android 5.0 or higher – or ideally Android 6.0 and above.

However, moving to a newer version of Android isn’t straightforward because unlike Apple’s iOS, which has a clear-cut set of definitions for which devices will get OS updates, the decision to update an Android OS rests with various device manufacturers, and in many cases wireless carriers.

Devices affected by this change are smartphones and tablets that are typically at least five years old and include older devices such as the Samsung Galaxy S3. Most devices from around 2015 onward received updates to Android 5 (Lollipop) and Android 6 (Marshmallow); dedicated Teams phones are not affected by this change.

Finding older devices enrolled with Intune

If you enroll devices with Intune, then these will be straightforward to identify. The Intune Company Portal app has only supported Android 5.0 and higher since January 2020, so it’s unlikely you will have received any new enrollments since then – but devices with the app already installed can continue to enroll today.

To find these devices and export a list, visit the Microsoft Endpoint Manager Admin Center and navigate to Devices>Android Devices. You will see a list of all Android devices currently enrolled:

Figure 1: Viewing Android 4.4 devices in the MEM portal (image credit: Steve Goodman)

If you need to filter the list of enrolled Android devices running the affected version, enter 4.4 into the Search by.. box in the UI. You can then use Export to obtain a list of these devices.
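If you prefer to work on the exported file rather than in the portal, a few lines of Python can pull out the Android 4.4 rows. This is only a sketch: the column headers ("OS", "OS version", "Device name", "Primary user UPN") are assumptions that depend on which columns your Endpoint Manager export includes, so adjust them to match your CSV.

```python
# Sketch: list enrolled devices still on Android 4.4 from an Endpoint Manager
# export. Column names are assumptions - match them to your CSV headers.
import csv

with open("Devices.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        if row.get("OS", "").lower() == "android" and row.get("OS version", "").startswith("4.4"):
            print(row.get("Device name"), row.get("Primary user UPN"), row.get("OS version"))
```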

Finding all older devices using Azure AD Sign-In logs

If you allow mobile devices to use Teams without Intune enrollment, or do not use Intune, then you will need to use a different method to discover devices running older versions of Android.

The Azure AD admin center provides the ability to review, filter and export sign-ins for the last month, which should give you a good indication of all older Android devices connected to your environment.

To find these devices, visit the Azure AD admin center and navigate to Sign-ins. Choose the time period you wish to filter by, then filter by Operating system starts with using the value “Android 4”, combined with Application starts with using the value “Microsoft Teams”.

This will show a comprehensive list of all sign-ins to the Microsoft Teams application from Android 4.4 devices:

Figure 2: Using Azure AD Sign-In logs to find Android 4.4 devices using Teams (image credit: Steve Goodman)

Because these are sign-in logs, you will see an element of duplication, which will make it harder to identify individual devices. Therefore, use the Download option to export a CSV report of the filtered sign-ins. You can then open this in Excel and use Remove Duplicates, with Username as the key, to reduce the list to one line per user:

Figure 3: Using Excel to provide a consolidated list of Android 4.4 Teams clients (image credit: Steve Goodman)
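Excel’s Remove Duplicates works well, but the same de-duplication can be scripted if you run this check regularly. A minimal sketch, assuming the downloaded sign-ins CSV has a "User name" column (check the actual header in your export and adjust):

```python
# Sketch: reduce the exported Azure AD sign-ins CSV to one row per user.
# "User name" is an assumed column header - check your download and adjust.
import csv

seen = set()
with open("SignIns.csv", newline="", encoding="utf-8-sig") as f, \
     open("SignIns_unique.csv", "w", newline="", encoding="utf-8") as out:
    reader = csv.DictReader(f)
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        user = row.get("User name", "").lower()
        if user and user not in seen:
            seen.add(user)
            writer.writerow(row)
```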

After identifying the affected users, you will have several options. For BYOD (bring your own device) scenarios, it is unlikely you will need to provide a replacement, but you will need to inform users that Teams is expected to cease working on their device.

For corporate-owned devices running older versions that you replace, it is worth ensuring any replacement devices you issue will continue to receive security updates as well as Android OS updates.

The post How to Identify Unsupported Teams Devices using Endpoint Manager appeared first on Petri.

Setting Up Virtual Lab Environments Using Azure Lab Services

The content below is taken from the original ( in /r/ AZURE), to continue reading please visit the site. Remember to respect the Author & Copyright.

https://ift.tt/32ENDzA

Support to assess physical, AWS, GCP servers now generally available

The content below is taken from the original ( Support to assess physical, AWS, GCP servers now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

The assessments offer Azure suitability analysis, migration cost planning and performance-based rightsizing.

Immersive Reader is now generally available

The content below is taken from the original ( Immersive Reader is now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

Immersive Reader is an Azure Cognitive Service for developers who want to embed inclusive capabilities into their apps for enhancing text reading and comprehension for users regardless of age or ability.

How Parcel Shuttle makes last-mile delivery more eco-friendly and drivers happier

The content below is taken from the original ( How Parcel Shuttle makes last-mile delivery more eco-friendly and drivers happier), to continue reading please visit the site. Remember to respect the Author & Copyright.

Editor’s note: Today’s post comes from Simon Seeger, founder of Parcel Shuttle, a GLS Group backed Berlin parcel delivery solution that’s rethinking the sector with a “smart microgrid” system which enables it to reduce the carbon footprint of delivery runs while offering an opportunity to provide a flexible income for drivers. 

When we launched Parcel Shuttle last year, I was confident that Google Maps Platform would help us bring eco-friendly parcel delivery to Berlin through precise navigation. Never did I imagine that it would also lend a helping hand in the most precious delivery of all: a baby.

I’ll never forget the day a driver called our dispatch office saying he couldn’t continue his shift because his wife had gone into labor. The team scrambled to remotely adjust his navigation, redirecting his route to his home, then the hospital. Next, they used route optimization and live traffic prediction to get the couple to the delivery room as fast as possible.

The day little Vivien was born safely, weighing 7.16 lbs, brought great joy to all of us at Parcel Shuttle. It also reminded us why we’re in this business in the first place. Delivering essentials in the greenest possible way, while bringing happiness to our drivers through fairness and work flexibility.

A green and fair vision for parcel delivery

We’re disrupting the parcel sector through a “smart microgrid” concept that drastically reduces the distance required to deliver parcels, and brings drivers more flexibility by keeping delivery runs within their own neighborhood. All of our drivers decide exactly when they want to work, enabling them, for example, to attend dance classes in the morning and work a two-hour shift for us in the afternoon.

Parcel Shuttle Smart Microgrid

In parcel delivery, fleets of vans normally fan out from warehouses outside the city to drop off parcels in town before returning to the depot. 60 percent of last mile delivery kilometers are made up of additional mileage and man-hours spent making the back-and-forth depot journey, not to mention criss-crossing town for deliveries. We decided to turn the process on its head.

In our model, one large truck leaves the warehouse on a “milk run” to drop parcels into delivery cars parked within each Berlin micro-grid, walking distance from drivers’ homes. Then, the driver just walks out the door, finds the car loaded with parcels, and delivers the goods in an efficient loop around their neighborhood, reducing the burden on drivers, roads, and air quality.

Our innovation may sound simple, but execution is a huge challenge. To make it work, we need cutting-edge Google Maps Platform navigation, route sequencing, and geocoding solutions to quickly guide freelance drivers to delivery points. 

Delivering goods to the right address as efficiently as possible

Imagine having to deliver a parcel with invalid address inputs for just about everything: company name, street number, postal code, and more. In a town like Berlin, it’s like trying to find a needle in a haystack. And we can’t blame the client for giving us the wrong address. We just have to deliver. Period. If not, we lose that business.

In order to crunch reams of location data and fix incorrect address inputs, enabling our drivers to find the right delivery point with minimum fuss, we use the Geocoding API. With the Geocoding API, a completely garbled address can come in and, amazingly, the correct address pops out. It’s a real life-saver.
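The Geocoding API behind this is a plain HTTPS endpoint. The snippet below is an illustrative sketch, not Parcel Shuttle’s code: it sends a messy address string to the API and reads back the normalized result, assuming you supply your own API key, and the example address is made up.

```python
# Illustrative sketch (not Parcel Shuttle's code): send a messy address string
# to the Google Maps Geocoding API and read back the normalized result.
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode(address: str, api_key: str):
    resp = requests.get(GEOCODE_URL, params={"address": address, "key": api_key}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data["status"] != "OK":
        return None
    best = data["results"][0]
    return best["formatted_address"], best["geometry"]["location"]  # lat/lng dict

# Example: a garbled input can still resolve to a usable address.
# print(geocode("Alexanderpl 7 Berln 10178", "YOUR_API_KEY"))
```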

The right address is critical, but drivers also need to determine how to get there. With the Maps JavaScript API, we are able to lead our novice drivers to each destination as if they’ve been doing this job for decades.

Parcel Shuttle Mobile Navigation

Parcel Shuttle Navigation

We quickly learned that calculating distances is only one part of the route optimization solution. We need to analyze live traffic conditions, such as bottlenecks, roadworks, and red lights, to deliver the best possible route at any given moment and to tell drivers the most efficient order in which to make multiple stops. This is where using the Maps JavaScript API and the API’s Traffic Layer helps us out.
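Parcel Shuttle drives this through the Maps JavaScript API inside its own Android app, but the same ideas of stop-order optimization and live-traffic estimates are also exposed by the Directions web service. The sketch below is an illustration under that assumption; the depot address, stops, and API key are placeholders.

```python
# Rough sketch of multi-stop route optimization with live traffic via the
# Google Maps Directions web service. Addresses and the API key are placeholders.
import requests

DIRECTIONS_URL = "https://maps.googleapis.com/maps/api/directions/json"

def optimized_route(origin, stops, api_key):
    params = {
        "origin": origin,
        "destination": origin,                       # loop back to the start
        "waypoints": "optimize:true|" + "|".join(stops),
        "departure_time": "now",                     # enables live-traffic estimates
        "key": api_key,
    }
    data = requests.get(DIRECTIONS_URL, params=params, timeout=10).json()
    if data["status"] != "OK":
        raise RuntimeError(data["status"])
    route = data["routes"][0]
    order = route["waypoint_order"]                  # best order to visit the stops
    # duration_in_traffic is only present when traffic data is available;
    # fall back to the plain duration otherwise.
    minutes = sum(leg.get("duration_in_traffic", leg["duration"])["value"] for leg in route["legs"]) / 60
    return [stops[i] for i in order], round(minutes)

# stops = ["Hypothetical Str. 1, Berlin", "Hypothetical Str. 2, Berlin"]
# print(optimized_route("Depot address, Berlin", stops, "YOUR_API_KEY"))
```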

The results have been encouraging. We’ve achieved more than 60 percent reduction in road covered in the city center, thanks to a combination of our own proprietary Android application and Google Maps Platform tools. We’ve also gained 25 percent time savings in last-mile delivery via Google Maps Platform guided solutions.

Serving Berlin amid the COVID-19 storm

The COVID-19 situation tested our business model due to a huge spike in demand. The lockdown not only led to families in Berlin ordering most of their everyday needs online, it also meant that small businesses needed to source supplies and spare parts through parcel deliveries instead of buying them onsite from wholesalers. 

During lockdown, we experienced a 25 percent rise in delivery points. Increasing stops by that amount greatly complicates routing calculations for each run. Basically, we have a quarter more places to visit in the same time window as before. It could have been a nightmare without state-of-the-art route optimization.

We’re relieved that our delivery model has held strong during COVID-19, and proud that we’ve been able to serve the Berlin community. We experienced almost no stress to our eco-friendly model and commitment to timely delivery. This just wouldn’t have been possible without the powerful navigation and geocoding solutions of Google Maps Platform.

Global expansion with Google solutions

We see our growth potential as unlimited due to the combination of a simple yet unique business model, and Google Maps Platform tools to help make our model work. Our next steps will be bringing our platform to more German cities, then to the world.

We’re excited about greening parcel delivery around the world. And how’s baby Vivien doing closer to home? Just as fine as can be. We keep her picture up on our office wall as a happy reminder that our mission is to bring joy, in all its forms, to people’s doorsteps.

For more information on Google Maps Platform, visit our website.

LG’s transparent OLED displays are on subway windows in China

The content below is taken from the original ( LG’s transparent OLED displays are on subway windows in China), to continue reading please visit the site. Remember to respect the Author & Copyright.

LG is bringing transparent OLED displays to subways in Beijing and Shenzhen. The 55-inch, see-through displays show real-time info about subway schedules, locations and transfers on train windows. They also provide info on flights, weather and the ne…

Windows 10 can run apps from your Samsung phone

The content below is taken from the original ( Windows 10 can run apps from your Samsung phone), to continue reading please visit the site. Remember to respect the Author & Copyright.

You’ll soon have access to your phone’s apps on your PC — if you have the right phone, at least. Microsoft is rolling out a Windows 10 Your Phone update with support for running mobile apps on your desktop, as promised when Samsung revealed the Galax…

I figured out how to log into an Azure VM using Azure AD credentials. This is not well documented.

The content below is taken from the original ( in /r/ AZURE), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft uses AI to boost its reuse, recycling of server parts

The content below is taken from the original ( Microsoft uses AI to boost its reuse, recycling of server parts), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is bringing artificial intelligence to the task of sorting through millions of servers to determine what can be recycled and where.

The new initiative calls for the building of so-called Circular Centers at Microsoft data centers around the world, where AI algorithms will be used to sort through parts from decommissioned servers or other hardware and figure out which parts can be reused on the campus.

Microsoft says it has more than three million servers and related hardware in its data centers, and that a server’s average lifespan is about five years. Plus, Microsoft is expanding globally, so its server numbers should increase.

To read this article in full, please click here

Cloud management is changing rapidly with the times: Here’s what you need to know

The content below is taken from the original ( Cloud management is changing rapidly with the times: Here’s what you need to know), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Jesse Stockall, Chief Architect of Cloud Management at Snow It’s a fast-changing world. But at the same time, we’re also falling back on the tried Read more at VMblog.com.

IBM details next-gen POWER10 processor

The content below is taken from the original ( IBM details next-gen POWER10 processor), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM on Monday took the wraps off its latest POWER RISC CPU family, optimized for enterprise hybrid-cloud computing and artificial intelligence (AI) inferencing, along with a number of other improvements.

POWER is the last of the Unix processors from the 1990s, when Sun Microsystems, HP, SGI, and IBM all had competing Unixes and RISC processors to go with them. Unix gave way to Linux and RISC gave way to x86, but IBM holds on.

This is IBM’s first 7-nanometer processor, and IBM claims it will deliver an up-to-three-times improvement in capacity and processor energy efficiency within the same power envelope as its POWER9 predecessor. The processor comes in a 15-core design (actually 16 cores, but one is not used) and allows for single or dual chip models, so IBM can put two processors in the same form factor. Each core can have up to eight threads, and each socket supports up to 4TB of memory.

To read this article in full, please click here

Samsung Pay Card launches in UK, powered by fintech Curve

The content below is taken from the original ( Samsung Pay Card launches in UK, powered by fintech Curve), to continue reading please visit the site. Remember to respect the Author & Copyright.

Samsung Pay Card, a new Mastercard debit card from the mobile handset giant, has launched in the U.K. today.

Powered by London-based fintech Curve, it lets you consolidate all of your other existing bank cards into a single card and digital wallet, making it easy to manage your money and, of course, use Samsung Pay more universally.

Unsurprisingly, Samsung Pay Card users will also get access to other Curve features. They include a single view of your card spending that is entirely agnostic to where your money is stored, as well as instant spend notifications, cheaper FX fees than your bank typically charges, peer-to-peer payments from any linked bank account and the ability to switch payment sources retroactively.

The latter — dubbed “Go Back in Time” — lets you move transactions from one card to another after they’ve been made, meaning that you have more flexibility and control of your spending. For example, perhaps you made a large purchase from one of your linked debit cards but for cash flow purposes decide it would be better charged to your credit card. That’s possible to do using Curve and now Samsung Pay Card.

In addition, as an introductory offer, Samsung Pay Card users get 1% cashback at selected merchants, and exclusive to the Samsung Pay Card, can also earn 5% on all purchases at Samsung.com.

Conor Pierce, Corporate Vice-President of Samsung UK & Ireland, comments: “At Samsung we believe in the power of innovation and, through our partnership with Curve, the Samsung Pay Card brings a series of pioneering features that will change the way that our customers manage their spending, with their Samsung smartphone and smartwatch at the heart of it. This is the future of banking and we look forward to continuing this journey with our customers.”

Pass that Brit guy with the right-hand drive: UK looking into legalising automated lane-keeping systems by 2021

The content below is taken from the original ( Pass that Brit guy with the right-hand drive: UK looking into legalising automated lane-keeping systems by 2021), to continue reading please visit the site. Remember to respect the Author & Copyright.

First step to self-driving vehicles on British roads

RoTM Self-driving vehicles have taken a modest step forward towards legality, with the UK’s Department for Transport (DfT) launching a Call for Evidence that will determine the safety and efficacy of Automated Lane Keeping Systems (ALKS) with an aim to legalise the technology by spring 2021.…

Tech At Home Winners Who Made the Best of their Quarantine

The content below is taken from the original ( Tech At Home Winners Who Made the Best of their Quarantine), to continue reading please visit the site. Remember to respect the Author & Copyright.

Back in April we challenged hackers to make the best of a tough situation by spending their time in isolation building with what they had laying around the shop. The pandemic might have forced us to stay in our homes and brought global shipping to a near standstill, but judging by the nearly 300 projects that were ultimately entered into the Making Tech At Home Contest, it certainly didn’t stifle the creativity of the incredible Hackaday community.

While it’s never easy selecting the winners, we think you’ll agree that the Inverse Thermal Camera is really something special. Combining a surplus thermal printer, STM32F103 Blue Pill, and OV7670 camera module inside an enclosure made from scraps of copper clad PCB, the gadget prints out the captured images on a roll of receipt paper like some kind of post-apocalyptic lo-fi Polaroid.


The HexMatrix Clock also exemplified the theme of working with what you have, as the electronics were nothing more exotic than a string of WS2811 LEDs and either an Arduino or ESP8266 to drive them. With the LEDs mounted into a 3D printed frame and diffuser, this unique display has an almost alien beauty about it. If you like that concept and have a few more RGB LEDs laying around, then you’ll love the Hive Lamp which took a very similar idea and stretched it out into the third dimension to create a standing technicolor light source that wouldn’t be out of place on a starship.

Each of these three top projects will receive a collection of parts and tools courtesy of Digi-Key valued at $500.

Runners Up

Our friends at Digi-Key were also kind enough to provide smaller grab bags of electronic goodies to the creators of the following 30 projects to help them keep hacking in these trying times:

The Making Tech At Home Contest might be over, but unfortunately, it looks like COVID-19 will be hanging around for a bit. Hopefully some of these incredible projects will inspire you to make the most out of your longer than expected downtime.

Advancing the outage experience—automation, communication, and transparency

The content below is taken from the original ( Advancing the outage experience—automation, communication, and transparency), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Service incidents like outages are an unfortunate inevitability of the technology industry. Of course, we are constantly improving the reliability of the Microsoft Azure cloud platform. We meet and exceed our Service Level Agreements (SLAs) for the vast majority of customers and continue to invest in evolving tools and training that make it easy for you to design and operate mission-critical systems with confidence.

In spite of these efforts, we acknowledge the unfortunate reality that—given the scale of our operations and the pace of change—we will never be able to avoid outages entirely. During these times we endeavor to be as open and transparent as possible to ensure that all impacted customers and partners understand what’s happening. As part of our Advancing Reliability blog series, I asked Sami Kubba, Principal Program Manager overseeing our outage communications process, to outline the investments we’re making to continue improving this experience.”—Mark Russinovich, CTO, Azure


In the cloud industry, we have a commitment to bring our customers the latest technology at scale, keeping customers and our platform secure, and ensuring that our customer experience is always optimal. For this to happen Azure is subject to a significant amount of change—and in rare circumstances, it is this change that can bring about unintended impact for our customers. As previously mentioned in this series of blog posts we take change very seriously and ensure that we have a systematic and phased approach to implementing changes as carefully as possible.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors can align to cause service incidents—also known as outages. The reality of our industry is that impact caused by change is an intrinsic problem. When we think about outage communications we tend not to think of our competition as being other cloud providers, but rather the on-premises environment. On-premises change windows are controlled by administrators. They choose the best time to invoke any change, manage and monitor the risks, and roll it back if failures are observed.

Similarly, when an outage occurs in an on-premises environment, customers and users feel that they are more ‘in the know.’ Leadership is promptly made fully aware of the outage, gets access to support for troubleshooting, and expects that their team or partner company will be in a position to provide a full Post Incident Report (PIR)—previously called Root Cause Analysis (RCA)—once the issue is understood. Although our data analysis supports the hypothesis that time to mitigate an incident is shorter in the cloud than on-premises, cloud outages can feel more stressful for customers when it comes to understanding the issue and what they can do about it.

Introducing our communications principles

During cloud outages, some customers have historically reported feeling as though they’re not promptly informed, or that they miss necessary updates and therefore lack a full understanding of what happened and what is being done to prevent future issues occurring. Based on these perceptions, we now operate by five pillars that guide our communications strategy—all of which have influenced our Azure Service Health experience in the Azure portal and include:

  1. Speed
  2. Granularity
  3. Discoverability
  4. Parity
  5. Transparency

Speed

We must notify impacted customers as quickly as possible. This is our key objective around outage communications. Our goal is to notify all impacted Azure subscriptions within 15 minutes of an outage. We know that we can’t achieve this with human beings alone. By the time an engineer is engaged to investigate a monitoring alert to confirm impact (let alone engaging the right engineers to mitigate it, in what can be a complicated array of interconnectivities including third-party dependencies) too much time has passed. Any delay in communications leaves customers asking, “Is it me or is it Azure?” Customers can then spend needless time troubleshooting their own environments. Conversely, if we decide to err on the side of caution and communicate every time we suspect any potential customer impact, our customers could receive too many false positives. More importantly, if they are having an issue with their own environment, they could easily attribute these unrelated issues to a false alarm being sent by the platform. It is critical that we make investments that enable our communications to be both fast and accurate.

Last month, we outlined our continued investment in advancing Azure service quality with artificial intelligence: AIOps. This includes working towards improving automatic detection, engagement, and mitigation of cloud outages. Elements of this broader AIOps program are already being used in production to notify customers of outages that may be impacting their resources. These automatic notifications represented more than half of our outage communications in the last quarter. For many Azure services, automatic notifications are being sent in less than 10 minutes to impacted customers via Service Health—to be accessed in the Azure portal, or to trigger Service Health alerts that have been configured, more on this below.

With our investment in this area already improving the customer experience, we will continue to expand the scenarios in which we can notify customers in less than 15 minutes from the impact start time, all without the need for humans to confirm customer impact. We are also in the early stages of expanding our use of AI-based operations to identify related impacted services automatically and, upon mitigation, send resolution communications (for supported scenarios) as quickly as possible.

Granularity

We understand that when an outage causes impact, customers need to understand exactly which of their resources are impacted. One of the key building blocks for getting the health of specific resources is the Resource Health signal. The Resource Health signal will check if a resource, such as a virtual machine (VM), SQL database, or storage account, is in a healthy state. Customers can also create Resource Health alerts, which leverage Azure Monitor, to let the right people know if a particular resource is having issues, regardless of whether it is a platform-wide issue or not. This is important to note: a Resource Health alert can be triggered due to a resource becoming unhealthy (for example, if the VM is rebooted from within the guest) which is not necessarily related to a platform event, like an outage. Customers can see the associated Resource Health checks, arranged by resource type.

We are building on this technology to augment and correlate each customer resource(s) that has moved into an unhealthy state with platform outages, all within Service Health. We are also investigating how we can include the impacted resources in our communication payloads, so that customers won’t necessarily need to sign in to Service Health to understand the impacted resources—of course, everyone should be able to consume this programmatically.

All of this will allow customers with large numbers of resources to know more precisely which of their services are impacted due to an outage, without having to conduct an investigation on their side. More importantly, customers can build alerts and trigger responses to these resource health alerts using native integrations to Logic Apps and Azure Functions.

Discoverability

Although we support both ‘push’ and ‘pull’ approaches for outage communications, we encourage customers to configure relevant alerts, so the right information is automatically pushed out to the right people and systems. Our customers and partners should not have to go searching to see if the resources they care about are impacted by an outage—they should be able to consume the notifications we send (in the medium of their choice) and react to them as appropriate. Despite this, we constantly find that customers visit the Azure Status page to determine the health of services on Azure.

Before the introduction of the authenticated in-portal Service Health experience, the Status page was the only way to discover known platform issues. These days, this public Status page is only used to communicate widespread outages (for example, impacting multiple regions and/or multiple services) so customers looking for potential issues impacting them don’t see the full story here. Since we roll out platform changes as safely as possible, the vast majority of issues like outages only impact a very small ‘blast radius’ of customer subscriptions. For these incidents, which make up more than 95 percent of our incidents, we communicate directly to impacted customers in-portal via Service Health.

We also recently integrated the ‘Emerging Issues’ feature into Service Health. This means that if we have an incident on the public Status page, and we have yet to identify and communicate to impacted customers, users can see this same information in-portal through Service Health, thereby receiving all relevant information without having to visit the Status page. We are encouraging all Azure users to make Service Health their ‘one stop shop’ for information related to service incidents, so they can see issues impacting them, understand which of their subscriptions and resources are impacted, and avoid the risk of making a false correlation, such as when an incident is posted on the Status page, but is not impacting them.

Most importantly, since we’re talking about the discoverability principle, from within Service Health customers can create Service Health alerts, which are push notifications leveraging the integration with Azure Monitor. This way, customers and partners can configure relevant notifications based on who needs to receive them and how they would best be notified—including by email, SMS, LogicApp, and/or through a webhook that can be integrated into service management tools like ServiceNow, PagerDuty, or Ops Genie.

To get started with simple alerts, consider routing all notifications to email a single distribution list. To take it to the next level, consider configuring different service health alerts for different use cases—maybe all production issues notify ServiceNow, maybe dev and test or pre-production issues might just email the relevant developer team, maybe any issue with a certain subscription also sends a text message to key people. All of this is completely customizable, to ensure that the right people are notified in the right way.
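As a concrete illustration of the webhook option, here is a minimal sketch of a receiver that routes Service Health alerts. The field paths follow the Azure Monitor common alert schema (data.essentials, data.alertContext), but the exact location of incidentType and the routing targets here are assumptions; inspect a real test alert before relying on this sketch.

```python
# Minimal sketch of a webhook receiver for Service Health alerts.
# Field paths follow the Azure Monitor common alert schema (data.essentials,
# data.alertContext); the exact location of incidentType and the routing
# targets are assumptions - inspect a real test alert before relying on this.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def route_alert(payload: dict) -> str:
    data = payload.get("data", {})
    rule = data.get("essentials", {}).get("alertRule", "unknown rule")
    # Assumed path for Service Health alerts; verify against a real payload.
    incident_type = data.get("alertContext", {}).get("properties", {}).get("incidentType")
    print(f"Alert from rule '{rule}', incidentType={incident_type}")
    # Hypothetical routing rule: page on-call for live incidents, email ops otherwise.
    return "pagerduty" if incident_type == "Incident" else "ops-mailing-list"

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        target = route_alert(json.loads(body or b"{}"))
        print("Routing Service Health alert to:", target)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```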

Parity

All Azure users should know that Service Health is the one place to go, for all service impacting events. First, we ensure that this experience is consistent across all our different Azure Services, each using Service Health to communicate any issues. As simple as this sounds, we are still navigating through some unique scenarios that make this complex. For example, most people using Azure DevOps don’t interact with the Azure portal. Since DevOps does not have its own authenticated Service Health experience, we can’t communicate updates directly to impacted customers for small DevOps outages that don’t justify going to the public Status page. To support scenarios like this, we have stood up the Azure DevOps status page where smaller scale DevOps outages can be communicated directly to the DevOps community.

Second, the Service Health experience is designed to communicate all impacting events across Azure—this includes maintenance events as well as service or feature retirements, and includes both widespread outages and isolated hiccups that only impact a single subscription. It is imperative that for any impact (whether it is potential, actual or upcoming) customers can expect the same experience and put in place a predictable action plan across all of their services on Azure.

Lastly, we are working towards expanding our philosophy of this pillar to extend to other Microsoft cloud products. We acknowledge that, at times, navigating through our different cloud products such as Azure, Microsoft 365, and Power Platform can sometimes feel like navigating technologies from three different companies. As we look to the future, we are invested in harmonizing across these products to bring about a more consistent, best-in-class experience.

Transparency

As we have mentioned many times in the Advancing Reliability blog series, we know that trust is earned and needs to be maintained. When it comes to outages, we know that being transparent about what is happening, what we know, and what we don’t know is critically important. The cloud shouldn’t feel like a black box. During service issues, we provide regular communications to all impacted customers and partners. Often, in the early stages of investigating an issue, these updates might not seem detailed until we learn more about what’s happening. Even though we are committed to sharing tangible updates, we generally try to avoid sharing speculation, since we know customers make business decisions based on these updates during outages.

In addition, an outage is not over once customer impact is mitigated. We could still be learning about the complexities of what led to the issue, so sometimes the message sent at or after mitigation is a fairly rudimentary summation of what happened. For major incidents, we follow this up with a PIR generally within three days, once the contributing factors are better understood.

For incidents that may have impacted fewer subscriptions, our customers and partners can request more information from within Service Health by requesting a PIR for the incident. We have heard feedback in the past that PIRs should be even more transparent, so we continue to encourage our incident managers and communications managers to provide as much detail as possible—including information about the issue impact, and our next steps to mitigate future risk. Ideally to ensure that this class of issue is less likely and/or less impactful moving forward.

While our industry will never be completely immune to service outages, we do take every opportunity to look at what happened from a holistic perspective and share our learnings. One of the future areas of investment we are looking at closely is how best to keep customers updated on the progress we are making against the commitments outlined in our PIR next steps. By linking our internal repair items to our external commitments in our next steps, customers and partners will be able to track the progress that our engineering teams are making to ensure that corrective actions are completed.

Our communications across all of these scenarios (outages, maintenance, service retirements, and health advisories) will continue to evolve, as we learn more and continue investing in programs that support these five pillars.

Reliability is a shared responsibility

While Microsoft is responsible for the reliability of the Azure platform itself, our customers and partners are responsible for the reliability of their cloud applications—including using architectural best practices based on the requirements of each workload. Building a reliable application in the cloud is different from traditional application development. Historically, customers may have purchased levels of redundant higher-end hardware to minimize the chance of an entire application platform failing. In the cloud, we acknowledge up front that failures will happen. As outlined several times above, we will never be able to prevent all outages. In addition to Microsoft trying to prevent failures, when building reliable applications in the cloud your goal should be to minimize the effects of any single failing component.

To that end, we recently launched the Microsoft Azure Well-Architected Framework—a set of guiding tenets that can be used to improve the quality of a workload. Reliability is one of the five pillars of architectural excellence alongside Cost Optimization, Operational Excellence, Performance Efficiency, and Security. If you already have a workload running in Azure and would like to assess your alignment to best practices in one or more of these areas, try the Microsoft Azure Well-Architected Review.

Specifically, the Reliability pillar describes six steps for building a reliable Azure application:

  1. Define availability and recovery requirements based on decomposed workloads and business needs.
  2. Use architectural best practices to identify possible failure points in your proposed or existing architecture and determine how the application will respond to failure.
  3. Test with simulations and forced failovers to verify both detection of and recovery from various failures.
  4. Deploy the application consistently using reliable and repeatable processes.
  5. Monitor application health to detect failures, watch for indicators of potential failures, and gauge the overall health of your applications.
  6. Respond to failures and disasters by determining how best to address them based on established strategies.

Returning to our core topic of outage communications, we are working to incorporate relevant Well-Architected guidance into our PIRs in the aftermath of each service incident. Customers running critical workloads will be able to learn about specific steps to improve reliability that would have helped to avoid and lessen impact from that particular outage. For example, if an outage only impacted resources within a single Availability Zone, we will call this out as part of the PIRs and encourage impacted customers to consider zonal redundancies for their critical workloads.

Going forward

We outlined how Azure approaches communications during and after service incidents like outages. We want to be transparent about our five communication pillars, to explain both our progress to date and the areas in which we’re continuing to invest. Just as our engineering teams endeavor to learn from each incident to improve the reliability of the platform, our communications teams endeavor to learn from each incident to be more transparent, to get customers and partners the right details to make informed decisions, and to support customers and partners as best as possible during each of these difficult situations.

We are confident that we are making the right investments to continue improving in this space, but we are increasingly looking for feedback on whether our communications are hitting the mark. We include an Azure post-incident survey at the end of each PIR we publish. We strive to review every response to learn from our customers and partners, validate whether we are focusing on the right areas, and keep improving the experience.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors align to cause outages. Since trust is earned and needs to be maintained, we are committed to being as transparent as possible—especially during these infrequent but inevitable service issues.

Apple will give third-party Mac repair shops its stamp of approval

The content below is taken from the original ( Apple will give third-party Mac repair shops its stamp of approval), to continue reading please visit the site. Remember to respect the Author & Copyright.

Getting your Mac fixed could soon be much easier. Apple says it will now verify third-party Mac repair shops, Reuters reports. The program will provide parts and training to qualifying repair stores. Apple began verifying third-party iPhone repair sho…

Google Cloud VMware Engine explained: Integrated networking and connectivity

The content below is taken from the original ( Google Cloud VMware Engine explained: Integrated networking and connectivity), to continue reading please visit the site. Remember to respect the Author & Copyright.

Editor’s note: This the first installment in a new blog series that dives deep into our Google Cloud VMware Engine managed service. Stay tuned for other entries on migration, integration, running stateful database workloads, and enabling remote workers, to name a few.

We recently announced the general availability of Google Cloud VMware Engine, a managed VMware platform service that enables enterprises to lift and shift their VMware-based applications to Google Cloud without changes to application architectures, tools or processes. With VMware Engine, you can deploy a private cloud—an isolated VMware stack—that consists of three or more nodes, enabling you to run VMware Cloud Foundation platform natively. This approach lets you retire or extend your data center to the cloud, use the cloud as a disaster recovery target, or migrate and modernize workloads by integrating with cloud-native services such as BigQuery, Cloud AI, etc.

But before you can do that, you need easy-to-provision, high-performance, highly available networking to connect between:

  • On-premises data centers and the cloud

  • VMware workloads and cloud-native services

  • VMware private clouds in single or multi-region deployments.

Google Cloud VMware Engine networking leverages existing connectivity services for on-premises connections and provides seamless connectivity to other Google Cloud services. Furthermore, the service is built on high-performance, reliable and high-capacity infrastructure, giving you a fast and highly available VMware experience, at a low cost.

Let’s take a closer look at some of the networking features you’ll find on VMware Engine. 

High Availability and 100G throughput

Google Cloud VMware Engine private clouds are deployed on enterprise-grade infrastructure with redundant and dedicated 100Gbps networking that provides 99.99% availability, low latency and high throughput.

Integrated networking and on-prem connectivity 

Subnets associated with private clouds are allocated in Google Cloud VPCs and delegated to VMware Engine. As a result, Compute Engine instances in the VPC communicate with VMware workloads using RFC 1918 private addresses, with no need for External IP-based addressing. 

Private clouds can be accessed from on-prem using existing Cloud VPN or Cloud Interconnect-based connections to Google Cloud VPCs without additional VPN or Interconnect attachments to VMware Engine private clouds. You can also stretch your on-prem networks to VMware Engine to facilitate workload migration.

Furthermore, for internet access, you can choose to use VMware Engine’s internet access service or route internet-bound traffic from on-prem to meet your security or regulatory needs.

Access to Google Cloud services from VMware Engine private clouds

VMware Engine workloads can access other Google Cloud services such as Cloud SQL, Cloud Storage, etc., using options such as Private Google Access and Private Service Access. Just like a Compute Engine instance in a VPC, a VMware workload can use private access options to communicate with Google Cloud services while staying within a secure and trusted Google Cloud network boundary. As such, you don’t need to exit out to the public internet to access Google Cloud services from VMware Engine, regardless of whether internet access is enabled or disabled. This provides for low-latency and secure communication between VMware Engine and other Google Cloud services.

Multi-region connectivity between VMware private clouds 

VMware workloads in private clouds in the same region can talk to one another directly—without needing to “trombone” or “hairpin” across the Google Cloud VPCs. In the case where VMware workloads need to communicate with one another across regions, they can do so using VMware Engine’s global routing service. This approach to multi-region connectivity doesn’t require a VPN, or any other latency-inducing connectivity options. 

Access to full NSX-T functionality

VMware Engine supports full NSX-T functionality for VMware workloads. With this, you can use VMware’s NSX-T policy-based UI or API to create network segments, gateway firewall policies or distributed/east-west firewall policies. In addition, you can also leverage NSX-T’s load balancer, NAT and service insertion functionality. 

Networking is critical to any Enterprise’s cloud transformation journey—even more so for VMware-related use cases. The networking functionality in VMware Engine makes it easy for you to take advantage of the scale, flexibility and agility that Google Cloud provides without compromising on functionality.

What’s next

In the coming weeks, we’ll share more about VMware Engine and migration, building business resiliency, enabling work from anywhere, and your enterprise database options. To learn more or to get started, visit the VMware Engine website where you’ll find detailed information on key features, use cases, product documentation, and pricing.

Meet the startup that helped Microsoft build the world of Flight Simulator

The content below is taken from the original ( Meet the startup that helped Microsoft build the world of Flight Simulator), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s new Flight Simulator is a technological marvel that sets a new standard for the genre. But to recreate a world that feels real and alive and contains billions of buildings all in the right spots, Microsoft and Asobo Studios relied on the work of multiple partners.

One of those is the small Austrian startup Blackshark.ai from Graz that, with a team of only about 50 people, recreated every city and town around the world with the help of AI and massive computing resources in the cloud.

Ahead of the launch of the new Flight Simulator, we sat down with Blackshark co-founder and CEO Michael Putz to talk about working with Microsoft and the company’s broader vision.

Image Credits: Microsoft

Blackshark is actually a spin-off of game studio Bongfish, the maker of World of Tanks: Frontline, Motocross Madness and the Stoked snowboarding game series. As Putz told me, it was actually Stoked that set the company on the way to what would become Blackshark.

“One of the first games we did in 2007 was a snowboarding game called Stoked and Stoked: Big Air Edition, which was one of the first games having a full 360-degree mountain where you could use a helicopter to fly around and drop out, land everywhere and go down,” he explained. “The mountain itself was procedurally constructed and described — and also the placement of obstacles, of vegetation, of other snowboarders and small animals had been done procedurally. Then we went more into the racing, shooting, driving genre, but we still had this idea of positional placement and descriptions in the back of our minds.”

Bongfish returned to this idea when it worked on World of Tanks, simply because of how time-consuming it is to build such a huge map where every rock is placed by hand.

Based on this experience, Bongfish started building an in-house AI team. That team used a number of machine-learning techniques to build a system that could learn from how designers build maps and then, at some point, build its own AI-created maps. The team actually ended up using this for some of its projects before Microsoft came into the picture.

“By random chance, I met someone from Microsoft who was looking for a studio to help them out on the new Flight Simulator. The core idea of the new Flight Simulator was to use Bing Maps as a playing field, as a map, as a background,” Putz explained.

But Bing Maps’ photogrammetry data only yielded exact 1:1 replicas of 400 cities — for the vast majority of the planet, though, that data doesn’t exist. Microsoft and Asobo Studios needed a system for building the rest.

This is where Blackshark comes in. For Flight Simulator, the studio reconstructed 1.5 billion buildings from 2D satellite images.

Now, while Putz says he met the Microsoft team by chance, there’s a bit more to this. Back in the day, there was a Bing Maps team in Graz, which developed the first cameras and 3D versions of Bing Maps. And while Google Maps won the market, Bing Maps actually beat Google with its 3D maps. Microsoft then launched a research center in Graz and when that closed, Amazon and others came in to snap up the local talent.

“So it was easy for us to fill positions like a Ph.D. in rooftop reconstruction,” Putz said. “I didn’t even know this existed, but this was exactly what we needed — and we found two of them.”

It’s easy to see why reconstructing a 3D building from a 2D map would be hard. Even figuring out a building’s exact outline isn’t easy.

Image Credits: Blackshark.ai

“What we do basically in Flight Simulator is we look at areas, 2D areas, and then find out the footprints of buildings, which is actually a computer vision task,” said Putz. “But if a building is obstructed by the shadow of a tree, we actually need machine learning, because then it’s not clear anymore what is part of the building and what is not because of the overlap of the shadow — but then machine learning completes the remaining part of the building. That’s a super simple example.”
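Blackshark hasn’t published its pipeline, but the footprint-extraction step Putz describes maps naturally onto binary semantic segmentation: a model looks at a satellite tile and predicts, pixel by pixel, whether it belongs to a building. The toy sketch below, in plain PyTorch with random tensors standing in for real labeled imagery, is only meant to show the shape of that approach, not the studio’s actual system.

```python
# Generic illustration (not Blackshark's actual pipeline): treat building-footprint
# extraction as binary semantic segmentation over satellite tiles. A tiny fully
# convolutional network is trained on (image, mask) pairs; random tensors stand in
# for a real labeled dataset of imagery and hand-drawn footprints.
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Minimal fully convolutional net: 3-channel tile in, one building logit per pixel out.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),            # 1 logit per pixel: building vs. background
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: 4 RGB tiles (128x128) and their binary footprint masks.
images = torch.rand(4, 3, 128, 128, device=device)
masks = torch.randint(0, 2, (4, 1, 128, 128), device=device).float()

model.train()
for step in range(5):                     # a real run would loop over a labeled dataset
    logits = model(images)
    loss = loss_fn(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: threshold the sigmoid output to get a per-pixel footprint mask.
model.eval()
with torch.no_grad():
    pred_mask = model(images).sigmoid() > 0.5
```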

While Blackshark was able to rely on some other data, too, including photos, sensor data and existing map data, it has to make a determination about the height of the building and some of its characteristics based on very little information.

The obvious next problem is figuring out the height of a building. If there is existing GIS data, that problem is easy to solve, but for most areas of the world, that data simply doesn’t exist or isn’t readily available. For those areas, the team takes the 2D image and looks for hints in the image, like shadows. To determine the height of a building from its shadow, though, you need the time of day, and the Bing Maps images aren’t actually timestamped. For other use cases the company is working on, Blackshark does have that data, which makes things a lot easier. And that’s where machine learning comes in again.
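For imagery that does carry a timestamp, and therefore a known sun position, the shadow trick is straightforward trigonometry: the building’s height is the shadow’s length multiplied by the tangent of the sun’s elevation angle. A minimal sketch with made-up numbers:

```python
# Back-of-the-envelope sketch of shadow-based height estimation for imagery that
# *does* carry a timestamp/solar angle (which, as noted above, the Bing Maps tiles lack).
# Geometry: an object of height h under a sun at elevation angle e casts a shadow of
# length L = h / tan(e), so h = L * tan(e).
import math

def building_height(shadow_length_m: float, sun_elevation_deg: float) -> float:
    """Estimate building height (meters) from its shadow length (meters)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# Example: a 40 m shadow with the sun 35 degrees above the horizon
# implies a building roughly 28 m tall.
print(round(building_height(40.0, 35.0), 1))   # ~28.0
```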

Image Credits: Blackshark.ai

“Machine learning takes a slightly different road,” noted Putz. “It also looks at the shadow, we think — because it’s a black box, we don’t really know what it’s doing. But also, if you look at a flat rooftop, like a skyscraper versus a shopping mall. Both have mostly flat rooftops, but the rooftop furniture is different on a skyscraper than on a shopping mall. This helps the AI to learn when you label it the right way.”

And then, if the system knows that the average height of a shopping mall in a given area is usually three floors, it can work with that.

One thing Blackshark is very open about is that its system will make mistakes — and if you buy Flight Simulator, you will see that there are obvious mistakes in how some of the buildings are placed. Indeed, Putz told me that he believes one of the hardest challenges in the project was to convince the company’s development partners and Microsoft to let them use this approach.

“You’re talking 1.5 billion buildings. At these numbers, you cannot do traditional QA anymore. And the traditional finger-pointing in like a level of Halo or something where you say ‘this pixel is not good, fix it,’ does not really work if you develop on a statistical basis like you do with AI. So it might be that 20% of the buildings are off — and it actually is the case, I guess, in the Flight Simulator — but there’s no other way to tackle this challenge because outsourcing to hand-model 1.5 billion buildings is, just from a logistical level and also budget level, not doable.”

Over time, that system will also improve, and since Microsoft streams a lot of the data to the game from Azure, users will surely see changes over time.

Image Credits: Blackshark.ai

Labeling, though, is still something the team has to do simply to train the model, and that’s actually an area where Blackshark has made a lot of progress, though Putz wouldn’t say too much about it because it’s part of the company’s secret sauce and one of the main reasons why it can do all of this with just about 50 people.

“Data labels had not been a priority for our partners,” he said. “And so we used our own live labeling to basically label the entire planet by two or three guys […] It puts a very powerful tool and user interface in the hands of the data analysts. And basically, if the data analyst wants to detect a ship, he tells the learning algorithm what the ship is and then he gets immediate output of detected ships in a sample image.”

From there, the analyst can then train the algorithm to get even better at detecting a specific object like a ship, in this example, or a mall in Flight Simulator. Other geospatial analysis companies tend to focus on specific niches, Putz also noted, while the company’s tools are agnostic to the type of content being analyzed.
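Blackshark’s labeling tool is part of its secret sauce, so the details aren’t public, but the interaction Putz describes (label one example, see candidate detections right away) can be crudely approximated with something as simple as template matching. The sketch below uses scikit-image on a synthetic image purely to illustrate that feedback loop; a production system would retrain or fine-tune a model instead.

```python
# Crude stand-in for the "label it once, see detections immediately" workflow
# (Blackshark's actual tooling is proprietary). An analyst crops one labeled example,
# and normalized cross-correlation immediately surfaces similar-looking spots in the
# rest of the image. The scene here is synthetic noise, purely for illustration.
import numpy as np
from skimage.feature import match_template, peak_local_max

rng = np.random.default_rng(0)
scene = rng.random((512, 512))              # placeholder for a grayscale satellite image
template = scene[100:132, 200:232].copy()   # the analyst's single labeled crop

# Correlation map, same size as the scene thanks to pad_input=True.
correlation = match_template(scene, template, pad_input=True)

# Candidate detections: local correlation peaks above a similarity threshold.
detections = peak_local_max(correlation, min_distance=16, threshold_abs=0.8)
print(f"{len(detections)} candidate matches at (row, col):")
print(detections[:10])
```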

Image Credits: Blackshark.ai

And that’s where Blackshark’s bigger vision comes in. Because while the company is now getting acclaim for its work with Microsoft, Blackshark also works with other companies around reconstructing city scenes for autonomous driving simulations, for example.

“Our bigger vision is a near-real-time digital twin of our planet, particularly the planet’s surface, which opens up a trillion use cases where traditional photogrammetry like a Google Earth or what Apple Maps is doing is not helping, because those are just simplified photos glued onto simple geometrical structures. For this we have our cycle where we have been extracting intelligence from aerial data, which might be 2D images, but it also could be 3D point clouds, which we are already doing in another project. And then we are visualizing the semantics.”

Those semantics, which describe a building in very precise detail, have one major advantage over photogrammetry: with photogrammetry, shadow and light information is essentially baked into the images, making it hard to relight a scene realistically. Since Blackshark knows everything about the buildings it is constructing, it can also place windows and lights in those buildings, which creates the surprisingly realistic night scenes in Flight Simulator.

Point clouds, which aren’t being used in Flight Simulator, are another area Blackshark is focusing on right now. Point clouds are very hard to read for humans, especially once you get very close. Blackshark uses its AI systems to analyze point clouds to find out how many stories a building has.
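For a flavor of what reading a point cloud programmatically can look like, the naive sketch below estimates a building’s story count from the vertical spread of its points and an assumed 3 m floor height. It is a deliberately simplified stand-in for Blackshark’s far more involved analysis and uses synthetic points in place of real LiDAR or photogrammetry data.

```python
# Naive sketch of pulling "number of stories" out of a building's point cloud:
# take the vertical extent of the points above ground and divide by a typical
# floor height. Real pipelines would segment facades, detect floor slabs, etc.;
# the 3 m floor height and the synthetic points here are assumptions.
import numpy as np

FLOOR_HEIGHT_M = 3.0   # assumed average storey height

def estimate_stories(points_xyz: np.ndarray, ground_z: float) -> int:
    """points_xyz: (N, 3) array of x, y, z coordinates for one building."""
    roof_z = np.percentile(points_xyz[:, 2], 99)   # robust "top" of the building
    return max(1, int(round((roof_z - ground_z) / FLOOR_HEIGHT_M)))

# Synthetic example: points scattered over a roughly 21 m tall building on flat ground.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 20, 5000),
                       rng.uniform(0, 30, 5000),
                       rng.uniform(0, 21, 5000)])
print(estimate_stories(pts, ground_z=0.0))   # ~7 stories
```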

“The whole company was founded on the idea that we need to have a huge advantage in technology in order to get there, and especially coming from video games, where huge productions like in Assassin’s Creed or GTA are now hitting capacity limits by having thousands of people working on it, which is very hard to scale, very hard to manage over continents and into a timely delivered product. For us, it was clear that there need to be more automated or semi-automated steps in order to do that.”

And though Blackshark found its start in the gaming field — and while it is working on this with Microsoft and Asobo Studios — it’s actually not focused on gaming but instead on things like autonomous driving and geographical analysis. Putz noted that another good example of this is Unreal Engine, which started as a game engine and is now everywhere.

“For me, having been in the games industry for a long time, it’s so encouraging to see, because when you develop games, you know how groundbreaking the technology is compared to other industries,” said Putz. “And when you look at simulators, from military simulators or industrial simulators, they always kind of look like shit compared to what we have in driving games. And the time has come that the game technologies are spreading out of the game stack and helping all those other industries. I think Blackshark is one of those examples for making this possible.”

IBM takes Power10 processors down to 7nm with Samsung, due to ship by end of 2021

The content below is taken from the original ( IBM takes Power10 processors down to 7nm with Samsung, due to ship by end of 2021), to continue reading please visit the site. Remember to respect the Author & Copyright.

Up to 15 SMT8 CPU cores per chip, Power ISA 3.1 support, and other bits and bytes

Hot Chips Big Blue hopes to roll out its first commercial 7nm data-center-grade processors – the IBM Power10 series – by the end of 2021.…

How to repair Microsoft 365 using Command Prompt in Windows 10

The content below is taken from the original ( How to repair Microsoft 365 using Command Prompt in Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’re unable to repair Office 365 (now renamed as Microsoft 365) because the entire system is completely restricted and unable to reach the Programs […]

This article How to repair Microsoft 365 using Command Prompt in Windows 10 first appeared on TheWindowsClub.com.

AWS Announces General Availability of Amazon Braket

The content below is taken from the original ( AWS Announces General Availability of Amazon Braket), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, Amazon Web Services, Inc. (AWS) announced the general availability of Amazon Braket, a fully managed AWS service that provides a development… Read more at VMblog.com.