Posted on in category News

10 Things Data Center Operators Can Do to Prepare for GDPR

The content below is taken from the original (10 Things Data Center Operators Can Do to Prepare for GDPR), to continue reading please visit the site. Remember to respect the Author & Copyright.


As we explained in an article earlier this week, the new European General Data Protection Regulation, which goes into effect next May, has wide-reaching implications for data center operators in and outside of Europe. We asked experts what steps they would recommend operators take to prepare. Here’s what they said:

Ojas Rege, chief marketing and strategy officer at MobileIron, a mobile and cloud security company based in Mountain View, California:

Every corporate data center holds an enormous amount of personal data about employees and customers. GDPR compliance will require that only the essential personal data is held and that it is effectively protected from breach and loss. Each company should consider a five-step process:

  • Perform an end-to-end mapping of the data stored in its data center to identify personal data.
  • Ensure that the way this personal data is used is consistent with GDPR guidelines.
  • Fortify its protections for that personal data, since the penalties for GDPR non-compliance are so severe.
  • Proactively establish a notification and forensics plan in the case of breach.
  • Extensively document its data flows, policies, protections, and remediation methods for potential GDPR review.
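The first step, data mapping, can be sketched as a toy scan for fields that look like personal data. Everything below (the field list, the pattern, and the record shape) is an illustrative assumption, not a compliance tool:

```python
import re

# Illustrative only: the field names and the email pattern are assumptions,
# not a GDPR-compliance tool. Real data mapping must cover every store and flow.
PERSONAL_DATA_FIELDS = {"name", "email", "phone", "ip_address", "address"}
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def find_personal_data(record: dict) -> set:
    """Return the keys in a record that likely hold personal data."""
    hits = set()
    for key, value in record.items():
        if key.lower() in PERSONAL_DATA_FIELDS:
            hits.add(key)
        elif isinstance(value, str) and EMAIL_RE.search(value):
            hits.add(key)
    return hits

record = {"id": 17, "email": "user@example.com", "notes": "contact: a@b.co"}
print(sorted(find_personal_data(record)))  # ['email', 'notes']
```

A real pass would run a scanner like this over every table and object store, then feed the inventory into the usage review in the next step.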

Neil Thacker, deputy CISO at Forcepoint, a cybersecurity company based in Austin, Texas:

Data centers preparing for GDPR must be in position to identify, protect, detect, respond, and recover in case of a data breach. Some of the key actions they should take include:

  • Perform a complete analysis of all data flows from the European Economic Area and establish in which non-EEA countries processing will be undertaken.
  • Review cloud service agreements for location of data storage and any data transfer mechanism, as relevant.
  • Implement cybersecurity practices and technologies that provide deep visibility into how critical data is processed across their infrastructure, whether on-premises, in the cloud, or in use by a remote workforce.
  • Monitor, manage, and control data — at rest, in use, and in motion.
  • Utilize behavioral analytics and machine learning to discover broken business processes and identify employees who elevate risk to critical data.
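As a toy illustration of that last point, behavioral analytics at its simplest flags users whose activity deviates sharply from a baseline. The z-score rule and the data below are assumptions for illustration, not any vendor's actual method:

```python
from statistics import mean, stdev

# Minimal sketch: flag users whose daily record-access volume sits far above
# the baseline. The threshold and the sample data are illustrative only.
def flag_outliers(access_counts: dict, threshold: float = 2.0) -> list:
    counts = list(access_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [user for user, n in access_counts.items()
            if (n - mu) / sigma > threshold]

daily_accesses = {
    "alice": 120, "bob": 135, "carol": 110, "dan": 125, "erin": 130,
    "frank": 115, "grace": 140, "heidi": 105, "mallory": 5200,
}
print(flag_outliers(daily_accesses))  # ['mallory']
```

Production systems model many more signals (time of day, destination, data sensitivity), but the shape of the problem is the same: establish a baseline, then surface deviations for review.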

See also: What Europe’s New Data Protection Law Means for Data Center Operators


Online training for Azure Data Lake

The content below is taken from the original (Online training for Azure Data Lake), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are pleased to announce the availability of new, free online training for Azure Data Lake. We’ve designed this training to get developers ramped up fast. It covers all the topics a developer needs to know to start being productive with big data and how to address the challenges of authoring, debugging, and optimizing at scale.

Explore the training

Click on the link below to start!

Microsoft Virtual Academy: Introduction to Azure Data Lake

Looking for more?

You can find this training, along with many more resources for developers, on Microsoft Virtual Academy.

Course outline

1 | Introduction to Azure Data Lake

Get an overview of the entire Azure Data Lake set of services including HDI, ADL Store, and ADL Analytics.

2 | Introduction to Azure Data Lake Tools for Visual Studio

Since ADL developers of all skill levels use Azure Data Lake Tools for Visual Studio, review the basic set of capabilities offered in Visual Studio.

3 | U-SQL Programming

Explore the fundamentals of the U-SQL language, and learn to perform the most common U-SQL data transformations.

4 | Introduction to Azure Data Lake U-SQL Batch Job

Find out what’s happening behind the scenes when running a batch U-SQL script in Azure.

5 | Advanced U-SQL

Learn about the more sophisticated features of the U-SQL language to calculate more useful statistics and learn how to extend U-SQL to meet many diverse needs.

6 | Debugging U-SQL Job Failures

Since all developers encounter a failed job at some point, get familiar with the causes of failure and how they manifest.

7 | Introduction to Performance and Optimization

Review the basic concepts that drive performance in a batch U-SQL job, and examine strategies available to address those issues when they come up, along with the tools that are available to help.

8 | ADLS Access Control Model

Explore how Azure Data Lake Store uses the POSIX Access Control model, which can be unfamiliar to users coming from a Windows background.
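For a feel of the underlying model, the POSIX permission check at the heart of module 8 can be sketched in a few lines (a simplification: real ADLS ACLs also support named users, named groups, and a mask):

```python
# Minimal sketch of POSIX-style permission bits, the model ADLS builds on.
R, W, X = 4, 2, 1  # read, write, execute bits

def allowed(mode: int, user_class: str, wanted: int) -> bool:
    """Check whether a permission class of a POSIX mode grants the wanted bits."""
    shift = {"owner": 6, "group": 3, "other": 0}[user_class]
    return (mode >> shift) & wanted == wanted

# mode 0o750: owner rwx, group r-x, other ---
print(allowed(0o750, "owner", R | W | X))  # True
print(allowed(0o750, "other", R))          # False
```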

9 | Azure Data Lake Outro and Resources

Learn about course resources.


OpenStack Developer Mailing List Digest July 22-28

The content below is taken from the original (OpenStack Developer Mailing List Digest July 22-28), to continue reading please visit the site. Remember to respect the Author & Copyright.

Summaries

Project Team Gathering Planning

Oslo DB Network Database Base namespace throughout OpenStack Projects



HP made a VR backpack for on-the-job training

The content below is taken from the original (HP made a VR backpack for on-the-job training), to continue reading please visit the site. Remember to respect the Author & Copyright.

To date, VR backpack PCs have been aimed at gamers who just don’t want to trip over cords while they’re fending off baddies. But what about pros who want to collaborate, or soldiers who want to train on a virtual battlefield? HP thinks it has a fix. It’s launching the Z VR Backpack, a spin on the Omen backpack concept that targets the pro crowd. It’s not as ostentatious as the Omen, for a start, but the big deal is its suitability to the rigors of work. The backpack is rugged enough to meet military-grade drop, dust, and water-resistance standards, and it uses business-class hardware that includes a vPro-enabled quad-core Core i7 and Quadro P5200 graphics with a hefty 16GB of video memory.

The wearable computer has tight integration with the HTC Vive Business Edition, but HP stresses that you’re not obligated to use it — it’ll work just fine with an Oculus Rift or whatever else your company prefers. The pro parts do hike the price, though, as you’ll be spending at least $3,299 on the Z VR Backpack when it arrives in September. Not that cost is necessarily as much of an issue here — that money might be trivial compared to the cost of a design studio or a training environment.

There’s even a project in the works to showcase what’s possible. HP is partnering with a slew of companies (Autodesk, Epic Games, Fusion, HTC, Launch Forth and Technicolor) on a Mars Home Planet project that uses VR for around-the-world collaboration. Teams will use Autodesk tools to create infrastructure for a million-strong simulated Mars colony, ranging from whole buildings to pieces of clothing. The hope is that VR will give you a better sense of what it’d be like to live on Mars, and help test concepts more effectively than you would staring at a screen. You can sign up for the first phase of the project today.

Source: HP (1), (2)


Google just made scheduling work meetings a little easier

The content below is taken from the original (Google just made scheduling work meetings a little easier), to continue reading please visit the site. Remember to respect the Author & Copyright.

There’s a little bit of good news for people juggling both Google G Suite tools and Microsoft Exchange for their schedule management at work. Google has released an update that will allow G Suite users to access coworkers’ real-time free/busy information through both Google Calendar’s Find a Time feature and Microsoft Outlook’s Scheduling Assistant interchangeably.

G Suite admins can enable the new Calendar Interop management feature through the Settings for Calendar option in the admin console. Admins will also be able to easily pinpoint issues with the setup via a troubleshooting tool, which will also provide suggestions for resolving those issues, and can track interoperability successes and failures for each user through logs Google has made available.

The new feature is available on Android, iOS and web versions of Google Calendar as well as desktop, mobile and web clients for Outlook 2010+, for admins who choose to enable it. Google says the full rollout should be completed within three days.

Via: TechCrunch

Source: Google (1), (2)


Microsoft Teams – explainer video


Understand the multicloud management trade-off

The content below is taken from the original (Understand the multicloud management trade-off), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the trends I’ve been seeing for a while is the use of multiple clouds or multicloud. This typically means having two or three public clouds in the mix that are leveraged at the same time. Sometimes you’re mixing private clouds and traditional systems as well.

In some cases, even applications and data span two or more public clouds, as enterprises look to mix and match cloud services. Why? They are seeking to leverage the best and most cost-effective cloud services, and sometimes that means picking and choosing from different cloud providers.


To make multicloud work best for an enterprise, you need to place a multicloud management tool, such as a CMP (cloud management platform) or a CSB (cloud services broker), between you and the various clouds. This spares you from having to deal with the complexities of the native cloud services from each cloud provider.

Instead, you deal with an abstraction layer, sometimes called a “single pane of glass,” where you can use a single user interface, and sometimes a single set of APIs, to perform common tasks across the cloud providers you’re leveraging. Tasks may include provisioning storage or compute, auto-scaling, and data movement.

While many consider this a needed approach when dealing with complex multicloud solutions, there are some looming issues. The abstraction layers seem to have a trade-off when it comes to cloud service utilization. By not utilizing the native interfaces from each cloud provider you’re in essence not accessing the true power of the cloud provider, but instead just leveraging a subset of the services. 

Case in point: cloud storage. Say you’re provisioning storage through a CMP or CSB, and thus you’re leveraging an abstraction layer that has to use a least-common-denominator approach when managing the back-end cloud computing storage services. This means that you’re taking advantage of some storage services but not all. Although you do gain access to storage services that each cloud has in common, you may miss out on storage services that are specific to a cloud, such as advanced caching or systemic encryption.
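That least-common-denominator effect can be sketched with hypothetical class and method names (no real CMP or cloud SDK API is implied):

```python
from abc import ABC, abstractmethod

# A sketch of the least-common-denominator trade-off. The classes and method
# names below are hypothetical, not any real CMP's or cloud provider's API.
class StorageProvider(ABC):
    """What the abstraction layer exposes: only operations every cloud shares."""
    @abstractmethod
    def put(self, bucket: str, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, bucket: str, key: str) -> bytes: ...

class CloudA(StorageProvider):
    def __init__(self):
        self._objects = {}
    def put(self, bucket, key, data):
        self._objects[(bucket, key)] = data
    def get(self, bucket, key):
        return self._objects[(bucket, key)]
    def enable_systemic_encryption(self, bucket):
        """Cloud-specific feature the common interface cannot expose."""

def copy_object(src: StorageProvider, dst: StorageProvider, bucket, key):
    # Portable across providers, but only through the shared subset;
    # CloudA's encryption feature is invisible at this layer.
    dst.put(bucket, key, src.get(bucket, key))

a, b = CloudA(), CloudA()
a.put("logs", "day1", b"...")
copy_object(a, b, "logs", "day1")
```

Code written against `StorageProvider` runs against any cloud, but nothing at that layer can call `enable_systemic_encryption`, which is exactly the trade-off described above.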

The point here is that there is a trade-off. You can’t gain simplicity without sacrificing power. This may leave you with a much weaker solution than one that leverages all cloud-native features. No easy choices here.



Partner Interconnect now generally available

The content below is taken from the original ( Partner Interconnect now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are happy to announce that Partner Interconnect, launched in beta in April, is now generally available. Partner Interconnect lets you connect your on-premises resources to Google Cloud Platform (GCP) from the partner location of your choice, at a data rate that meets your needs.

With general availability, you can now receive an SLA for Partner Interconnect connections if you use one of the recommended topologies. If you were a beta user with one of those topologies, you will automatically be covered by the SLA. Charges for the service start with GA (see pricing).

Partner Interconnect is ideal if you want physical connectivity to your GCP resources but cannot connect at one of Google’s peering locations, or if you want to connect with an existing service provider. If you need help understanding the connection options, the information here can help.

In this blog we will walk through how you can start using Partner Interconnect, from choosing a partner that works best for you all the way through how you can deploy and start using your interconnect.

Choosing a partner

If you already have a service provider partner for network connectivity, you can check the list of supported service providers to see if they offer Partner Interconnect service. If not, you can select a partner from the list based on your data center location.

Some critical factors to consider are:

  • Make sure the partner can offer the availability and latency you need between your on-premises network and their network.
  • Check whether the partner offers Layer 2 connectivity, Layer 3 connectivity, or both. If you choose a Layer 2 Partner, you have to configure and establish a BGP session between your Cloud Routers and on-premises routers for each VLAN attachment that you create. If you choose a Layer 3 partner, they will take care of the BGP configuration.
  • Review the recommended topologies for production-level and non-critical applications. Note that Google’s availability SLA of 99.99% (with Global Routing) or 99.9% applies only to the connectivity between your VPC network and the partner’s network.

Bandwidth options and pricing

Partner Interconnect provides flexible options for bandwidth between 50 Mbps and 10 Gbps. Google charges on a monthly basis for VLAN attachments depending on capacity and egress traffic (see options and pricing).

Setting up Partner Interconnect VLAN attachments

Once you’ve established network connectivity with a partner, and they have set up interconnects with Google, you can set up and activate VLAN attachments using these steps:

  1. Create VLAN attachments.
  2. Request provisioning from the partner.
  3. If you have a Layer 2 partner, complete the BGP configuration and then activate the attachments for traffic to start. If you have a Layer 3 partner, simply activate the attachments, or use the pre-activation option.

With Partner Interconnect, you can connect to GCP where and how you want to. Follow these steps to easily access your GCP compute resources from your on-premises network.

Related content