
10 Things Data Center Operators Can Do to Prepare for GDPR



As we explained in an article earlier this week, the new European General Data Protection Regulation, which goes into effect next May, has wide-reaching implications for data center operators in and outside of Europe. We asked experts what steps they would recommend operators take to prepare. Here’s what they said:

Ojas Rege, chief marketing and strategy officer at MobileIron, a mobile and cloud security company based in Mountain View, California:

Every corporate data center holds an enormous amount of personal data about employees and customers. GDPR compliance will require that only the essential personal data is held and that it is effectively protected from breach and loss. Each company should consider a five-step process:

  • Do an end-to-end mapping of the data stored in its data center to identify personal data (a rough discovery sketch follows this list).
  • Ensure that the way this personal data is used is consistent with GDPR guidelines.
  • Fortify its protections for that personal data, since the penalties for GDPR non-compliance are so severe.
  • Proactively establish a notification and forensics plan in the case of breach.
  • Extensively document its data flows, policies, protections, and remediation methods for potential GDPR review.
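
As a loose illustration of the first step, here is a minimal Python sketch of automated personal-data discovery: it walks a directory tree and flags files whose contents match simple personal-data patterns. The patterns, the `.txt` filter, and the `/var/data` path are illustrative assumptions, not a production-grade discovery tool.

```python
import re
from pathlib import Path

# Simple indicators of personal data; a real data mapping needs far richer
# rules (names, IDs, addresses) and coverage of databases, not just files.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d{2}[\s()-]?\d{3}[\s-]?\d{3,7}"),
}

def scan_tree(root: str) -> dict:
    """Return {file path: [matched categories]} for text files under root."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; a real tool would log this
        hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, categories in scan_tree("/var/data").items():
        print(f"{file}: possible personal data ({', '.join(categories)})")
```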

Neil Thacker, deputy CISO at Forcepoint, a cybersecurity company based in Austin, Texas:

Data centers preparing for GDPR must be in a position to identify, protect, detect, respond, and recover in the event of a data breach. Some of the key actions they should take include:

  • Perform a complete analysis of all data flows from the European Economic Area and establish in which non-EEA countries processing will be undertaken.
  • Review cloud service agreements for location of data storage and any data transfer mechanism, as relevant.
  • Implement cybersecurity practices and technologies that provide deep visibility into how critical data is processed across their infrastructure, whether on-premises, in the cloud, or in use by a remote workforce.
  • Monitor, manage, and control data — at rest, in use, and in motion.
  • Utilize behavioral analytics and machine learning to discover broken business processes and identify employees who elevate risk to critical data (a toy example follows this list).
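
To make the last point concrete, here is a toy Python sketch of one behavioral-analytics technique: flagging users whose daily record-access volume jumps far above their own historical baseline. Real products model many more signals; the three-sigma threshold and the sample data are assumptions for illustration.

```python
from statistics import mean, stdev

def flag_risky_users(history: dict, today: dict, threshold: float = 3.0) -> list:
    """Return users whose access count today exceeds mean + threshold*stdev."""
    risky = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(counts), stdev(counts)
        if sigma and today.get(user, 0) > mu + threshold * sigma:
            risky.append(user)
    return risky

history = {"alice": [10, 12, 9, 11], "bob": [100, 95, 110, 105]}
print(flag_risky_users(history, {"alice": 250, "bob": 108}))  # ['alice']
```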

See also: What Europe’s New Data Protection Law Means for Data Center Operators


Online training for Azure Data Lake


We are pleased to announce the availability of new, free online training for Azure Data Lake. We’ve designed this training to get developers ramped up fast. It covers the topics a developer needs to know to become productive with big data, and shows how to address the challenges of authoring, debugging, and optimizing at scale.

Explore the training

Click on the link below to start!

Microsoft Virtual Academy: Introduction to Azure Data Lake

Looking for more?

You can find this training, along with many more developer resources, on the Microsoft Virtual Academy site.

Course outline

1 | Introduction to Azure Data Lake

Get an overview of the entire Azure Data Lake set of services, including HDInsight (HDI), Azure Data Lake (ADL) Store, and ADL Analytics.

2 | Introduction to Azure Data Lake Tools for Visual Studio

ADL developers of all skill levels use Azure Data Lake Tools for Visual Studio, so review the basic set of capabilities it offers.

3 | U-SQL Programming

Explore the fundamentals of the U-SQL language, and learn to perform the most common U-SQL data transformations.

4 | Introduction to Azure Data Lake U-SQL Batch Job

Find out what’s happening behind the scenes when running a batch U-SQL script in Azure.

5 | Advanced U-SQL

Learn about the more sophisticated features of the U-SQL language to calculate more useful statistics and learn how to extend U-SQL to meet many diverse needs.

6 | Debugging U-SQL Job Failures

All developers encounter a failed job at some point, so get familiar with the causes of failure and how they manifest themselves.

7 | Introduction to Performance and Optimization

Review the basic concepts that drive performance in a batch U-SQL job, and examine the strategies and tools available to address performance issues when they arise.

8 | ADLS Access Control Model

Explore how Azure Data Lake Store uses the POSIX access control model, which will feel very different to users coming from a Windows background (a simplified sketch follows the course outline).

9 | Azure Data Lake Outro and Resources

Learn about course resources.
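
For readers from a Windows background, here is a simplified Python sketch of the POSIX-style ACL evaluation order that module 8 covers: owner first, then named users, then the owning group, then everyone else. It deliberately omits the ACL mask and default (inherited) ACLs that the real ADLS model also includes.

```python
R, W, X = 4, 2, 1  # POSIX-style read/write/execute permission bits

def allowed(acl: dict, user: str, groups: set, want: int) -> bool:
    """Check permissions in POSIX order: owner, named users, group, other."""
    if user == acl["owner"][0]:
        return acl["owner"][1] & want == want
    if user in acl["named_users"]:
        return acl["named_users"][user] & want == want
    if acl["group"][0] in groups:
        return acl["group"][1] & want == want
    return acl["other"] & want == want

acl = {
    "owner": ("alice", R | W | X),
    "named_users": {"bob": R},        # bob may only read
    "group": ("engineering", R | X),
    "other": 0,                       # everyone else gets nothing
}
print(allowed(acl, "bob", set(), R))                  # True
print(allowed(acl, "bob", set(), W))                  # False
print(allowed(acl, "carol", {"engineering"}, R | X))  # True
```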


OpenStack Developer Mailing List Digest July 22-28


Summaries

Project Team Gathering Planning

Oslo DB Network Database Base namespace throughout OpenStack Projects



HP made a VR backpack for on-the-job training


To date, VR backpack PCs have been aimed at gamers who just don’t want to trip over cords while they’re fending off baddies. But what about pros who want to collaborate, or soldiers who want to train on a virtual battlefield? HP thinks it has a fix. It’s launching the Z VR Backpack, a spin on the Omen backpack concept that targets the pro crowd. It’s not as ostentatious as the Omen, for a start, but the big deal is its suitability for the rigors of work. The backpack is rugged enough to meet military-grade drop, dust, and water resistance standards, and it uses business-class hardware that includes a vPro-enabled quad-core Core i7 and Quadro P5200 graphics with a hefty 16GB of video memory.

The wearable computer has tight integration with the HTC Vive Business Edition, but HP stresses that you’re not obligated to use it — it’ll work just fine with an Oculus Rift or whatever else your company prefers. The pro parts do hike the price, though, as you’ll be spending at least $3,299 on the Z VR Backpack when it arrives in September. Not that cost is necessarily as much of an issue here — that money might be trivial compared to the cost of a design studio or a training environment.

There’s even a project in the works to showcase what’s possible. HP is partnering with a slew of companies (Autodesk, Epic Games, Fusion, HTC, Launch Forth and Technicolor) on a Mars Home Planet project that uses VR for around-the-world collaboration. Teams will use Autodesk tools to create infrastructure for a million-strong simulated Mars colony, ranging from whole buildings to pieces of clothing. The hope is that VR will give you a better sense of what it’d be like to live on Mars, and help test concepts more effectively than you would staring at a screen. You can sign up for the first phase of the project today.

Source: HP (1), (2)


Google just made scheduling work meetings a little easier


There’s a little bit of good news for people juggling both Google G Suite tools and Microsoft Exchange for their schedule management at work. Google has released an update that will allow G Suite users to access coworkers’ real-time free/busy information through both Google Calendar’s Find a Time feature and Microsoft Outlook’s Scheduling Assistant interchangeably.
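
Under the hood, a Find a Time style lookup boils down to a free/busy query. As a rough sketch (assuming the google-api-python-client library, OAuth credentials in `creds` with a Calendar scope, and a placeholder address), fetching a coworker’s busy blocks through the Google Calendar API might look like this; the interop feature itself is configured by admins, so this only illustrates the kind of lookup involved.

```python
from googleapiclient.discovery import build

def get_busy_blocks(creds, coworker: str, time_min: str, time_max: str):
    """Query the Calendar API for a coworker's busy intervals."""
    service = build("calendar", "v3", credentials=creds)
    response = service.freebusy().query(body={
        "timeMin": time_min,   # RFC3339, e.g. "2017-08-01T09:00:00Z"
        "timeMax": time_max,
        "items": [{"id": coworker}],
    }).execute()
    return response["calendars"][coworker]["busy"]

# Example (placeholder address):
# busy = get_busy_blocks(creds, "colleague@example.com",
#                        "2017-08-01T09:00:00Z", "2017-08-01T17:00:00Z")
```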

G Suite admins can enable the new Calendar Interop management feature through the Settings for Calendar option in the admin console. Admins can also pinpoint setup issues via a troubleshooting tool that suggests fixes, and can track interoperability successes and failures for each user through logs Google has made available.

The new feature is available on Android, iOS and web versions of Google Calendar as well as desktop, mobile and web clients for Outlook 2010+, for admins who choose to enable it. Google says the full rollout should be completed within three days.

Via: TechCrunch

Source: Google (1), (2)


Microsoft Teams – explainer video


Understand the multicloud management trade-off


One of the trends I’ve been seeing for a while is the use of multiple clouds, or multicloud. This typically means leveraging two or three public clouds at the same time, sometimes with private clouds and traditional systems in the mix as well.

In some cases, applications and data even span two or more public clouds to mix and match cloud services. Why? Enterprises are seeking to leverage the best and most cost-effective cloud services, and sometimes that means picking and choosing from different cloud providers.


To make multicloud work best for an enterprise, you need to place a multicloud management tool, such as a CMP (cloud management platform) or a CSB (cloud services broker), between you and the multiple clouds. This spares you from having to deal with the complexities of the native cloud services from each cloud provider.

Instead, you deal with an abstraction layer, sometimes called a “single pane of glass,” where you can leverage a single user interface, and sometimes a single set of APIs, to perform common tasks among the cloud providers you’re using. Tasks may include provisioning storage or compute, auto-scaling, and data movement.

While many consider this a needed approach when dealing with complex multicloud solutions, there are some looming issues. The abstraction layers come with a trade-off in cloud service utilization: by not using the native interfaces from each cloud provider, you’re in essence not accessing the true power of that provider, just a subset of its services.

Case in point: cloud storage. Say you’re provisioning storage through a CMP or CSB, and thus leveraging an abstraction layer that has to take a least-common-denominator approach to managing the back-end storage services. This means you’re taking advantage of some storage services but not all. Although you gain access to the storage services the clouds have in common, you may miss out on services that are specific to one cloud, such as advanced caching or systemic encryption.
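
Here is a toy Python sketch of that problem, with two hypothetical providers behind a common interface: the abstraction layer can only expose the operations both providers share, so a provider-specific feature like advanced caching is unreachable through it.

```python
from abc import ABC, abstractmethod

class CloudStorage(ABC):
    """Least common denominator: only what every provider supports."""
    @abstractmethod
    def provision_bucket(self, name: str) -> None: ...
    @abstractmethod
    def put(self, bucket: str, key: str, data: bytes) -> None: ...

class ProviderA(CloudStorage):
    def provision_bucket(self, name): print(f"A: created bucket {name}")
    def put(self, bucket, key, data): print(f"A: stored {key}")
    # Provider-specific feature, invisible through the CloudStorage interface:
    def enable_advanced_caching(self, bucket): print(f"A: caching on {bucket}")

class ProviderB(CloudStorage):
    def provision_bucket(self, name): print(f"B: created bucket {name}")
    def put(self, bucket, key, data): print(f"B: stored {key}")

def provision_everywhere(clouds, name):
    # The "single pane of glass": one call fans out to every provider,
    # but it can only rely on the shared subset of operations.
    for cloud in clouds:
        cloud.provision_bucket(name)

provision_everywhere([ProviderA(), ProviderB()], "analytics-data")
```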

The point here is that there is a trade-off. You can’t gain simplicity without sacrificing power. This may leave you with a much weaker solution than one that leverages all cloud-native features. No easy choices here.



New smaller Windows Server IaaS Image


We continue to find ways to make Azure a better value for our customers. Azure Managed Disks, a new disk service launched in February 2017, simplifies the management and scaling of virtual machines (VMs). You can create an empty Managed Disk, create one from a VHD in a storage account, or create one from an image as part of VM creation.

The pricing of Managed Disks, both Premium and Standard, is based on the provisioned disk size, which is different from the pricing of Standard Unmanaged Disks. To keep costs lower, we created lower-priced options with smaller 32GB and 64GB Standard Managed Disk sizes. Building on that foundation, we have also added a second set of Windows Server offerings with 30GB OS disks for Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, and Windows Server 2016 in the Azure Marketplace. These smaller images have “[smalldisk]” prepended to the image title in the Azure Portal; for PowerShell, the CLI, and ARM templates, “-smalldisk” is appended to the image SKU. If your application does not require a large amount of OS disk space, you would save $2.18 per VM by deploying with a 32GB Standard Managed OS disk instead of a 127GB one. For large-scale deployments, these savings accumulate and may be significant.
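
As a hedged sketch (the subscription ID, region, and exact SKU string below are assumptions; check the Azure Marketplace for the SKUs available to you), finding these images from Python with the azure-mgmt-compute SDK might look like this:

```python
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

# Placeholder credentials; use your own service principal and subscription.
credentials = ServicePrincipalCredentials(client_id="<app-id>",
                                          secret="<password>",
                                          tenant="<tenant-id>")
client = ComputeManagementClient(credentials, "<subscription-id>")

# List available versions of a small-disk Windows Server 2016 image.
images = client.virtual_machine_images.list(
    location="westus2",
    publisher_name="MicrosoftWindowsServer",
    offer="WindowsServer",
    skus="2016-Datacenter-smalldisk",  # note the "-smalldisk" suffix
)
for image in images:
    print(image.name)  # prints each available image version
```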

You also have the flexibility to expand the OS disk by following the existing guide for expanding an OS disk.

If you have expanded the OS disk, log into your Windows VM and use the Disk Management tool to extend the OS partition to match the OS disk size.