Posted on in category News

The silver lining to this floating cloud is that it’s a Bluetooth speaker

The content below is taken from the original (The silver lining to this floating cloud is that it’s a Bluetooth speaker), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are well over 9000 Bluetooth speakers for you to choose from to satisfy your wireless listening needs. Not many of them are as cool as this one designed by Richard Clarkson. As […]

Posted on in category News

Set expiration date for VMs in Azure DevTest Labs

The content below is taken from the original (Set expiration date for VMs in Azure DevTest Labs), to continue reading please visit the site. Remember to respect the Author & Copyright.

In scenarios such as training, demos and trials, you may want to create virtual machines and delete them automatically after a fixed duration so that you don’t incur unnecessary costs. We recently announced a feature which allows you to do just that: set an expiration date for a lab VM.

This feature is currently only available through our APIs, which you can use via an Azure Resource Manager (ARM) template, the Azure PowerShell SDK, or the Azure CLI.

You can create a lab VM with an expiration date using an ARM template by specifying the expirationDate property for the VM. You can check out a sample Resource Manager template in our public GitHub repository. You can also modify any of the existing sample Resource Manager templates for the VM creation (name starting with 101-dtl-create-vm) by adding the expirationDate property.

For more details on this feature and what’s coming next, please check out the post on our team blog.

Please try this feature and let us know how we can make it better by sharing your ideas and suggestions at the DevTest Labs feedback forum. Note that this feature will be available soon in the Azure portal as well.

If you run into any problems with this feature or have any questions, we are always ready to help you at our MSDN forum.

Posted on in category News

How to keep IT security together in a company that’s gone bankrupt

The content below is taken from the original (How to keep IT security together in a company that’s gone bankrupt), to continue reading please visit the site. Remember to respect the Author & Copyright.

Even in a best-case scenario, in a bankruptcy “there’s going to be oversight by a trustee of the court if not a judge. The bar for justifying current spend during bankruptcy will be set high, and new spend is also going to be difficult,” says Cisco’s Alterson.

Ron Schlecht, Jr., managing partner of BTB Security, recalls a case where he helped a company in what he called “selloff mode.” “They realized they needed some control over what IT assets and what information was leaving, so they retained us to baseline where their information was, and document and implement procedures for decommissioning equipment,” he said. “While there wasn’t any pushback from creditors, the company made it very clear that they had to be cost-conscious with spending when we proposed work.”

Posted on in category News

Enable Multi-Factor Authentication for Office 365 Users

The content below is taken from the original (Enable Multi-Factor Authentication for Office 365 Users), to continue reading please visit the site. Remember to respect the Author & Copyright.


In today’s Ask the Admin, I’ll show you how to enable two-factor authentication for Office 365 accounts with the help of Microsoft’s Authenticator mobile app.

In What Is Multifactor Authentication and How Does It Work? on the Petri IT Knowledgebase, I explained the concept of two-factor and multi-factor authentication, and why you should enable it for sensitive accounts. Two-factor authentication (2FA) has been available in Office 365 for a couple of years, but you need to manually enable it for your users. Microsoft’s Authenticator app for iOS, Android, and Windows Phone makes it easier than ever to implement 2FA by using push notifications for verification rather than requiring users to type in access codes.

 

 

Before setting up 2FA for Office 365 users, make sure you enable Modern Authentication (MA) for Exchange Online if users are accessing Exchange using Outlook 2016 or 2013. Versions of Outlook prior to 2013 don’t support Modern Authentication. For details on how to enable MA for Exchange Online tenants, see Enable Modern Authentication in Exchange Online.

I’ve tested 2FA with Microsoft’s mobile Office apps, Outlook Groups, Office 2016 desktop apps, and OneDrive for Business in Windows 10, and found no problems. But there may be apps that are incompatible or require an app password, so make sure you test all the apps in your organization before enabling 2FA.

If you intend to set up 2FA for tenant administrator accounts, you should note that those accounts won’t be able to sign in to Office 365 using PowerShell. Microsoft recommends creating a specialized account for each administrative user for the purposes of accessing Office 365 via PowerShell. These accounts should be disabled when not in use.

Enable 2FA in the Admin Portal

The first step is to enable 2FA for one or more users in the Office 365 admin portal. 2FA can be enabled for individual users or in bulk. In this demo, because I’m testing 2FA before I roll out the feature to all users, I’ll enable 2FA for one account only. Before continuing, I recommend installing Microsoft Authenticator on your mobile device, not to be confused with Authenticator, a similar app from Microsoft but without support for push notifications.

Enable multi-factor authentication for an Office 365 user (Image Credit: Russell Smith)

The MULTI-FACTOR AUTH STATUS column for the user will change to Enabled. Close the browser window and sign out of the admin portal.

Enroll Account for 2FA

Once an administrator has enabled the feature for their account, the user is required to enroll for 2FA. The user should sign in to Office 365 as usual with their username and password, then click Set it up now on the sign-in screen and follow the instructions below:

Set up multi-factor authentication in Office 365 (Image Credit: Russell Smith)

Web-based and mobile apps can use Microsoft Authenticator app verifications for 2FA logins, but Office desktop apps require an app password. The final step provides you with an app password for these apps.

You’ll be prompted to sign in again, this time by verifying the login using the Microsoft Authenticator app.

In this article, I showed you how to enable 2FA for Office 365 user accounts and how users should enroll once 2FA has been enabled for their accounts, using the Microsoft Authenticator app for verification.

The post Enable Multi-Factor Authentication for Office 365 Users appeared first on Petri.

Posted on in category News

Devs! Here’s how to secure your IoT network, in, uh, 75 easy pages

The content below is taken from the original (Devs! Here’s how to secure your IoT network, in, uh, 75 easy pages), to continue reading please visit the site. Remember to respect the Author & Copyright.

An in-depth security guidance report aimed at Internet of Things developers has been released by the Cloud Security Alliance.

Titled Future-proofing the Connected World: 13 steps to developing secure IoT products, the report offers practical and technical guidance to devs trying to secure networks of IoT devices.

“An IoT system is only as secure as its weakest link,” wrote Brian Russell, chair of the CSA’s IoT working group. “This document is our attempt at providing actionable and useful guidance for securing the individual products that make up an IoT system.”

Split into comprehensive sections, the guide covers the need for IoT security, why dev organisations should care about securing IoT networks, and device security challenges, and it provides detailed guidance for secure IoT development – everything from tips on implementing a secure dev environment to designing in hardware security controls and securing your firmware updates. A “detailed checklist for security engineers to follow during the development process” is also included.

“The CSA looks to provide much needed education and direction to product developers who know their products are at risk of compromise, but may lack the understanding as to where to start the process for mitigating that risk,” said the organisation in a canned quote.

As a sample of what’s on offer, the guide has this to say about the use of cryptographic modules:

The National Institute of Standards and Technology (NIST) provides valuable documentation and tools for secure cryptographic modules. The Federal Information Processing Standard (FIPS) 140-2 should be followed whenever implementing cryptographic protections within an IoT device. IoT developers can procure FIPS 140-2 validated modules or create their own to be certified modules. The FIPS 140-2 Security Requirements for Cryptographic Modules document provides a valuable summary of the requirements that span the most lenient (Level 1) to the most stringent (Level 4) design requirements for cryptographic modules.
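The broader point of that passage is that cryptographic operations should sit behind a clean module boundary so a FIPS 140-2 validated implementation can be slotted in. Purely as a hedged illustration (this sketch is not from the CSA guide, and the provider system property is an assumption), standard Java lets the JCE provider be named explicitly, so substituting a FIPS-validated provider becomes a configuration change rather than a code rewrite:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class DeviceCrypto {
    // Provider name is configurable so a FIPS 140-2 validated JCE provider can be substituted.
    private static final String PROVIDER = System.getProperty("crypto.provider", "SunJCE");

    public static byte[] encrypt(byte[] plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[12];                               // fresh 96-bit IV per message
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding", PROVIDER);
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);                       // the IV must be stored or sent alongside this
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES", PROVIDER);
        kg.init(256);
        byte[] ciphertext = encrypt("telemetry sample".getBytes(), kg.generateKey());
        System.out.println(ciphertext.length + " bytes of ciphertext");
    }
}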

It also includes a brief and lucid guide to classifying IoT devices, given the excessively broad meaning of the term Internet of Things.

Professor William Webb, CEO of IoT connectivity talking-shop the Weightless SIG, told The Register: “I liked this. I thought it was clear, pitched at the right level and comprehensive without being over the top. It has lots of good references and picks up on everything I’d recommend to developers.”

He added: “Reading it all and then acting on all the recommendations is a huge task. But equally, missing out any might leave serious security loopholes in products.”

The Register asked the Cloud Security Alliance whether SMEs looking at the document would be overwhelmed by its length and detail.

“That’s always a challenging problem when putting out guidance like this,” Russell told us. “How much is too much? My advice to smaller firms is that they need to start somewhere. Two of the 13 steps that we mention stand out in that regard. First – for IoT developers that are already far along on the product lifecycle, look at recommendation #13 ‘Perform Security Reviews’. If you can at least have an independent organization test your product for security weaknesses, you will gain a lot of value.”

And what should firms that have already rolled out insecure IoT kit do in light of this guidance?

“It’s never too late to start,” advised Russell. “If you think of these controls as somewhat cyclical, you can figure out where to jump in and start applying the concepts based on where you are in the product lifecycle.”

He added: “Secure update should be a fundamental capability in all IoT devices and vendors need to think through the entire process – end-to-end.”

El Reg also asked him: is the whole IoT shebang worth companies getting into as more and more security howlers come to light? Russell said yes, at length:

Right now we’re seeing IoT products that provide lots of consumer benefits – enabling smart lighting in the home for example. The big payoff though will come when the IoT starts enabling trusted autonomous transactions to provide enhanced capabilities for businesses, streamlined services for municipalities, reduced casualties for drivers, quicker responses for emergency responders, etc. I think we need to go through these growing pains unfortunately in order to get to the other side where we can have some semblance of confidence that connected physical devices are locked down sufficiently to be safe.

The report can be downloaded in full from the CSA website. ®

Posted on in category News

IPv6 Support Update – CloudFront, WAF, and S3 Transfer Acceleration

The content below is taken from the original (IPv6 Support Update – CloudFront, WAF, and S3 Transfer Acceleration), to continue reading please visit the site. Remember to respect the Author & Copyright.

As a follow-up to our recent announcement of IPv6 support for Amazon S3, I am happy to be able to tell you that IPv6 support is now available for Amazon CloudFront, Amazon S3 Transfer Acceleration, and AWS WAF and that all 50+ CloudFront edge locations now support IPv6. We are enabling IPv6 across all of our Autonomous System Networks (ASNs) in a phased rollout that starts today and will extend across all of the networks over the next few weeks.

CloudFront IPv6 Support
You can now enable IPv6 support for individual Amazon CloudFront distributions. Viewers and networks that connect to a CloudFront edge location over IPv6 will automatically be served content over IPv6. Those that connect over IPv4 will continue to work as before. Connections to your origin servers will be made using IPv4.

Newly created distributions are automatically enabled for IPv6; you can modify an existing distribution by checking Enable IPv6 in the console or setting it via the CloudFront API:

Here are a couple of important things to know about this new feature:

  • Alias Records – After you enable IPv6 support for a distribution, the DNS entry for the distribution will be updated to include an AAAA record. If you are using Amazon Route 53 and an alias record to map all or part of your domain to the distribution, you will need to add an AAAA alias to the domain.
  • Log Files – If you have enabled CloudFront Access Logs, IPv6 addresses will start to show up in the c-ip field; make sure that your log processing system knows what to do with them.
  • Trusted Signers – If you make use of Trusted Signers in conjunction with an IP address whitelist, we strongly recommend the use of an IPv4-only distribution for Trusted Signer URLs that have an IP whitelist and a separate, IPv4/IPv6 distribution for the actual content. This model sidesteps an issue that would arise if the signing request arrived over an IPv4 address and was signed as such, only to have the request for the content arrive via a different, IPv6 address that is not on the whitelist.
  • CloudFormation – CloudFormation support is in the works. With today’s launch, distributions that are created from a CloudFormation template will not be enabled for IPv6. If you update an existing stack, the setting will remain as-is for any distributions referenced in the stack.
  • AWS WAF – If you use AWS WAF in conjunction with CloudFront, be sure to update your WebACLs and your IP rulesets as appropriate in order to whitelist or blacklist IPv6 addresses.
  • Forwarded Headers – When you enable IPv6 for a distribution, the X-Forwarded-For header that is presented to the origin will contain an IPv6 address. You need to make sure that the origin is able to process headers of this form (a short parsing sketch follows this list).
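To make the last bullet concrete, here is a minimal Java sketch (not from the AWS post; the sample header value is hypothetical) that splits an X-Forwarded-For value and flags which entries are IPv6 literals, so origin-side logging or allow-listing can handle both address families:

import java.util.Arrays;
import java.util.List;

public class ForwardedForParser {
    // Splits a raw X-Forwarded-For header value into its individual addresses.
    static List<String> parse(String headerValue) {
        return Arrays.asList(headerValue.split("\\s*,\\s*"));
    }

    public static void main(String[] args) {
        // Example value mixing an IPv6 client address with an IPv4 proxy hop.
        for (String addr : parse("2001:db8::1, 203.0.113.10")) {
            boolean isIPv6 = addr.contains(":");   // IPv6 literals always contain a colon
            System.out.println(addr + " -> " + (isIPv6 ? "IPv6" : "IPv4"));
        }
    }
}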

To learn more, read IPv6 Support for Amazon CloudFront.

AWS WAF IPv6 Support
AWS WAF helps you to protect your applications from application-layer attacks (read New – AWS WAF to learn more).

AWS WAF can now inspect requests that arrive via IPv4 or IPv6 addresses. You can create web ACLs that match IPv6 addresses, as described in Working with IP Match Conditions:

All existing WAF features will work with IPv6 and there will be no visible change in performance. IPv6 addresses will appear in the Sampled Requests collected and displayed by WAF.
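As a concrete (but hedged) illustration of adding an IPv6 range to an IP match condition with the AWS SDK for Java, the sketch below inserts an IPv6 CIDR into an existing WAF IP set; the IP set ID and the CIDR are placeholders, and the exact builder methods should be checked against the SDK version you use:

import com.amazonaws.services.waf.AWSWAF;
import com.amazonaws.services.waf.AWSWAFClientBuilder;
import com.amazonaws.services.waf.model.*;

public class AddIPv6ToIPSet {
    public static void main(String[] args) {
        AWSWAF waf = AWSWAFClientBuilder.defaultClient();

        // WAF requires a fresh change token for every mutating call.
        String token = waf.getChangeToken(new GetChangeTokenRequest()).getChangeToken();

        // Placeholder IP set ID and IPv6 CIDR range.
        IPSetDescriptor descriptor = new IPSetDescriptor()
                .withType(IPSetDescriptorType.IPV6)
                .withValue("2001:db8::/64");

        waf.updateIPSet(new UpdateIPSetRequest()
                .withIPSetId("example-ipset-id")
                .withChangeToken(token)
                .withUpdates(new IPSetUpdate()
                        .withAction(ChangeAction.INSERT)
                        .withIPSetDescriptor(descriptor)));
    }
}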

S3 Transfer Acceleration IPv6 Support
This important new S3 feature (read AWS Storage Update – Amazon S3 Transfer Acceleration + Larger Snowballs in More Regions for more info) now has IPv6 support. To use it, simply switch to the new dual-stack endpoint for your uploads. Change:

http://bit.ly/2dxc78M

to

http://bit.ly/2dPdieJ

Here’s some code that uses the AWS SDK for Java to create a client object and enable dual-stack transfer:

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
AmazonS3Client s3 = new AmazonS3Client();
s3.setS3ClientOptions(S3ClientOptions.builder().enableDualstack().setAccelerateModeEnabled(true).build());
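Once the client is configured this way, ordinary S3 calls go through the accelerated dual-stack endpoint. A minimal usage sketch continuing from the snippet above (the bucket and key names are placeholders, and the bucket must already have Transfer Acceleration enabled):

// Upload a local file through the accelerated dual-stack endpoint.
s3.putObject("example-bucket", "uploads/report.csv", new java.io.File("report.csv"));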

Most applications and network stacks will prefer IPv6 automatically, and no further configuration should be required. You should plan to take a look at the IAM policies for your buckets in order to make sure that they will work as expected in conjunction with IPv6 addresses.

To learn more, read about Making Requests to Amazon S3 over IPv6.

Don’t Forget to Test
As a reminder, if IPv6 connectivity to any AWS region is limited or non-existent, IPv4 will be used instead. Also, as I noted in my earlier post, the client system can be configured to support IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Therefore, we recommend some application-level testing of end-to-end connectivity before you switch to IPv6.
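One simple application-level check along those lines is to resolve the endpoint, see whether any IPv6 addresses come back, and attempt a TCP connection to each of them. A hedged Java sketch (the hostname is a placeholder for your own distribution or endpoint):

import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class IPv6ReachabilityCheck {
    public static void main(String[] args) throws Exception {
        String host = "example.cloudfront.net";   // placeholder endpoint
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            if (addr instanceof Inet6Address) {
                try (Socket socket = new Socket()) {
                    // Connect to port 443 with a short timeout to confirm end-to-end IPv6 routing.
                    socket.connect(new InetSocketAddress(addr, 443), 3000);
                    System.out.println("IPv6 reachable: " + addr.getHostAddress());
                } catch (Exception e) {
                    System.out.println("IPv6 resolved but not reachable: " + addr.getHostAddress());
                }
            }
        }
    }
}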

Jeff;

 

Posted on in category News

Why being a data scientist ‘feels like being a magician’

The content below is taken from the original (Why being a data scientist ‘feels like being a magician’), to continue reading please visit the site. Remember to respect the Author & Copyright.

The data scientist role was thrust into the limelight early this year when it was named 2016’s “hottest job,” and there’s been considerable interest in the position ever since. Just recently, the White House singled data scientists out with a special appeal for help.

Those in the job can expect to earn a median base salary of roughly $116,840 — if they have what it takes. But what is it like to be a data scientist? Read on to hear what three people currently on the front lines had to say.

How the day breaks down

That data scientists spend a lot of time working with data goes without saying. What may be less obvious is that meetings and face-to-face time are also a big part of the picture.

“Typically, the day starts with meetings,” said Tanu George, an account manager and data scientist with LatentView Analytics. Those meetings can serve all kinds of purposes, she said, including identifying a client’s business problem, tracking progress or discussing reports.

Tanu George is a data scientist with LatentView Analytics.

By midmorning the meetings die down, she said. “This is when we start doing the number crunching,” typically focused on trying to answer the questions asked in meetings earlier.

Afternoon is often spent on collaborative meetings aimed at interpreting the numbers, followed by sharing analyses and results via email at the end of the day.

Roughly 50 percent of George’s time is taken up in meetings, she estimates, with another 20 percent in computation work and 30 percent in interpretation, including visualizing and putting data into actionable form.

Meetings with clients also represent a significant part of the day for Ryan Rosario, an independent data scientist and mentor at online education site Springboard. “Clients explain the problem and what they’d like to see for an outcome,” he said.  

Next comes a discussion of what kinds of data are needed. “More times than not, the client actually doesn’t have the data or know where to get it,” Rosario said. “I help develop a plan for how to get it.”

Ryan Rosario is an independent data scientist and engineer.

A lot of data science is not working with the data per se but more trying to understand the big picture of “what does this mean for a company or client,” said Virginia Long, a predictive analytics scientist at healthcare-focused MedeAnalytics. “The first step is understanding the area — I’ll spend a lot of time searching the literature, reading, and trying to understand the problem.”

Figuring out who has what kind of data comes next, Long said. “Sometimes that’s a challenge,” she said. “People really like the idea of using data to inform their decisions, but sometimes they just don’t have the right data to do that. Figuring out ways we can collect the right data is sometimes part of my job.”

Once that data is in hand, “digging in” and understanding it comes next. “This is the flip side of the basic background research,” Long said. “You’re really finding out what’s actually in the data. It can be tedious, but sometimes you’ll find things you might not have noticed otherwise.”

Virginia Long is a predictive analytics scientist at MedeAnalytics.

Long also spends some of her time creating educational materials for both internal and external use, generally explaining how various data science techniques work.

“Especially with all the hype, people will see something like machine learning and see just the shiny outside. They’ll say, ‘oh we need to do it,'” she explained. “Part of every day is at least some explaining of what’s possible and how it works.”

Best and worst parts of the job

Meetings are George’s favorite part of her day: “They make me love my job,” she said.

For Rosario, whose past roles have included a stint as a machine learning engineer at Facebook, the best parts of the job have shifted over time.

“When I worked in Silicon Valley, my favorite part was massaging the data,” he said. “Data often comes to us in a messy format, or understandable only by a particular piece of software. I’d move it into a format to make it digestible.”

As a consultant, he loves showing people what data can do.

“A lot of people know they need help with data, but they don’t know what they can do with it,” he said. “It feels like being a magician, opening their minds to the possibilities. That kind of exploration and geeking out is now my favorite part.”

Long’s favorites are many, including the initial phases of researching the context of the problem to be solved as well as figuring out ways to get the necessary data and then diving into it headfirst.

Though some reports have suggested that data scientists still spend an inordinate amount of their time on “janitorial” tasks, “I don’t think of it as janitorial,” Long said. “I think of it as part of digging in and understanding it.”

As for the less exciting bits, “I prefer not to have to manage projects,” Long said. Doing so means “I often have to spend time managing everyone else’s priorities while trying to get my own things done.”

As for Rosario, who was trained in statistics and data science, systems building and software engineering are the parts he prefers to de-emphasize.

Preparing for the role

It’s no secret that data science requires considerable education, and these three professionals are no exception. LatentView Analytics’ George holds a bachelor’s degree in electrical and electronics engineering along with an MBA, she said.

Rosario holds a BS in statistics and math of computation as well as an MS in statistics and an MS in computer science from UCLA; he’s currently finishing his PhD in statistics there.

As for MedeAnalytics’ Long, she holds a PhD in behavioral neuroscience, with a focus on learning, memory and motivation.

“I got tired of running after the data,” Long quipped, referring to the experiments conducted in the scientific world. “Half of your job as a scientist is doing the data analysis, and I really liked that aspect. I also was interested in making a practical difference.”

The next frontier

And where will things go from here?

“I think the future has a lot more data coming,” said George, citing developments such as the internet of things (IoT). “Going forward, all senior and mid-management roles will incorporate some aspect of data management.”

The growing focus on streaming data means that “a lot more work needs to be done,” Rosario agreed. “We’ll see a lot more emphasis on developing algorithms and systems that can merge together streams of data. I see things like the IoT and streaming data being the next frontier.”

Security and privacy will be major issues to tackle along the way, he added.

Data scientists are still often expected to be “unicorns,” Long said, meaning that they’re asked to do everything single-handedly, including all the coding, data manipulation, data analysis and more.

“It’s hard to have one person responsible for everything,” she said. “Hopefully, different types of people with different skill sets will be the future.”

Words of advice

For those considering a career in data science, Rosario advocates pursuing at least a master’s degree. He also suggests trying to think in terms of data.

“We all have problems around us, whether it’s managing our finances or planning a vacation,” he said. “Try to think about how you could solve those problems using data. Ask if the data exists, and try to find it.”

For early portfolio-building experience, common advice suggests finding a data set from a site such as Kaggle and then figuring out a problem that can be solved using it.

“I suggest the inverse,” Rosario said. “Pick a problem and then find the data you’d need to solve it.”

“I feel like the best preparation is some sense of the scientific method, or how you approach a problem,” said MedeAnalytics’ Long. “It will determine how you deal with the data and decide to use it.”

Tools can be mastered, but “the sensibility of how to solve the problem is what you need to get good at,” she added.

Of course, ultimately, the last mile for data scientists is presenting their results, George pointed out.

“It’s a lot of detail,” she said. “If you’re a good storyteller, and if you can weave a story out of it, then there’s nothing like it.”

Posted on in category News

How does a hybrid infrastructure fit my accreditations?

The content below is taken from the original (How does a hybrid infrastructure fit my accreditations?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Security-related certifications such as ISO 27001 and, more particularly, the Payment Card Industry Data Security Standard (PCI-DSS), have stringent requirements regarding the controls on infrastructure, how data is routed and stored around it, and so on.

Particularly in the cloud components of a hybrid setup, the control you have over the lower-level elements of the infrastructure is limited. So can a hybrid setup and your required accreditations live in harmony?

Incidentally: I know I should be saying “ISO/IEC 27001:2013”, but life’s too short and you’d get fed up with me. Hence I’ll refer to it as “ISO 27001”.

The purpose of the accreditations

The first thing to remember is that the likes of ISO 27001 don’t exist to tell you how to implement your hybrid infrastructure. They don’t insist that you use, say, Windows rather than Linux, so why would they dictate whether you should have on-premise systems rather than cloud systems (or, for that matter, vice versa)?

Neither does any of them claim to be the be all and end all of systems and information security: for example ISO notes that: “Using this family of standards [the ISO 27000 range] will help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties”. Note use of the word “help” there – it’s just one tool you can use as a framework for the security of your organisation.

This said, though, many companies go to the trouble of obtaining certifications because it’s demanded of them by either their suppliers or their customers. So it’d be no surprise to have your bank on your back until you obtain PCI-DSS accreditation, for instance, and there’s an increasing number of customers who insist on ISO 27001 certification before they’ll buy services from you. And in both cases you’ll find that cyber security insurance premiums will reduce if you’re able to wave some certificates under the nose of the broker.

Do I actually need the accreditations?

Continuing this train of thought, then, none of these certifications is actually obligatory in the legal sense (unlike stuff like Data Protection registration) or the regulatory sense (eg, the rules imposed by the regulator of your industry if you’re a telco, a power company or some such).

In many cases when your suppliers or customers ask for a particular certification, they’re actually saying: “We want to see that your systems are configured and managed to sufficiently rigorous standards, and that you have strong policies and procedures in place, such that we can have a good degree of confidence that our data is safe with you.”

In many cases it’s enough to show yourself to be “compliant” with the requirements of the standards – basically to self-certify that you conform without an external auditor stating that you do so. To quote ISO again: “Some organizations choose to implement the standard in order to benefit from the best practice it contains while others decide they also want to get certified to reassure customers and clients that its recommendations have been followed.”

Personally I don’t really see a great deal of point in being compliant without going the extra step of having the audit. Unless you’re a very small company, getting yourselves to the stage where you’re genuinely aligned with the requirements of the standard is a huge task, and if you’re confident that you’ve got that far then the cost of the audit (a few days out of the schedule of a lump of your team, plus a few thousand quid) is a modest one.

Either way, we’ll assume if you’re reading this that you’ve decided that yes, you do need the accreditations, whether self-cert or officially audited.

Public versus private components

Moving on then, let’s look at the difference between the worlds on each side of the hybrid infrastructure.

Looking at the on-premise aspects for a start, we have a collection of systems that are connected to a LAN, are almost certainly connected to the Internet, and which have people sitting at computers accessing them.

There’s some kind of user database (probably Active Directory in a Windows world), and perhaps a few systems that have their own authentication mechanisms because they can’t integrate with the central directory service for one reason or another.

Technical staff, either in-house or outsourced, manage the systems and underlying networks, and there will be some kind of regime, either ad-hoc or rigorous, for applying updates to systems – both trivial (eg, standard Windows Update functionality) and complex (eg, major version upgrades for your core database or finance system).

What do we have when we look at the public cloud aspects? Well, pretty much the same actually. There’s nothing fundamental in the public cloud that doesn’t exist in some form in the private cloud: really the only difference is the level to which they apply in each world.

Internet connectivity’s the first obvious one: the nature of the public cloud is that unless you’re loaded and can afford direct private circuit connectivity into the back end, you’re going to be managing it over the internet.

Hence the default level of Internet access into the public cloud installation is much greater than that into the on-premise components, which might not allow any inbound management connectivity at all. Turning this on its head, though, the reason you don’t have inbound connectivity to the on-premise components is that you put a firewall in the way to prevent illicit access – which of course is precisely what you’ll do on your public cloud too. Yes, there’ll be a little more access in the latter case but this will be controlled by strong encryption and certificate-based authentication and you’ll only permit it from your own on-premise IP ranges (you will, won’t you?).

Then there’s the geographical aspect: you have absolute control over the geography (and hence the applicable data protection laws and other associated legislation) of your on-premise installation. But realistically the same applies to the public cloud too: in the multi-region providers you have control over which region your data sits in, and you can be sure it remains there unless you move it. So geography’s not a problem either.

The other difference is the level of control you have over the underlying infrastructure in each side of your hybrid cloud. In your private setup you have access to everything from the bare metal upwards, which gives you absolute control over its operation.

In the cloud you don’t have this: you have to rely on the service level agreement from the cloud provider, and if something goes pear-shaped then you depend on the provider’s techies to fix the problem. But there’s a flip-side: a cloud provider’s infrastructure is likely to be much better protected against disaster than the average on-premise installation, and they’ve probably got far more access to spares and engineers than you have internally. So what you lose in absolute control you probably make up for in speed to fix in the event of a problem.


Posted on in category News

NAS vendor Synology taking on Microsoft, Dell, Slack and AWS

The content below is taken from the original (NAS vendor Synology taking on Microsoft, Dell, Slack and AWS), to continue reading please visit the site. Remember to respect the Author & Copyright.

Taiwanese network-attached storage (NAS) vendor Synology is about to pick fights with several far larger competitors.

The company offered its first English-language preview of two new rack-mount NAS devices. One, the FlashStation FS3017, is a 24-drive, all-flash NAS with a pair of 10GbE/40GbE ports and a claimed 200,000 IOPS when performing random 4K writes. There’s also a new 16-disk RS4017xs+ that, when filled with 10TB disks and left un-RAIDed, can store 160 terabytes.

The new boxes take Synology a decent way out of its nano-NAS ghetto and into competition with low-end kit from the likes of NetApp, Dell and Dell EMC.

Synology is also pitching itself against Nimble, which puts its cloudy performance analytics service front and centre.

The new NAS also come with what Synology is calling “C2”, a pair of cloud services. “C2 backup to cloud” does what it says on the can – lift your files into the cloud. “C2 site recovery” snapshots virtual machines and, if the NAS goes down, runs the applications in the cloud.

Those services mean Synology competes with Microsoft for cloud backup and site recovery.

They also take Synology into battle with other cloud providers, because the company told The Register it plans to build its own data centre in Europe. We asked if it meant colocation – The Register’s Mandarin is infinitely worse than Synology folks’ English – and were assured the company intends to break ground for its own bit barn somewhere in Europe. Making it a competitor of sorts for AWS, Azure, Google and IBM. While we’re talking IBM, the new NASes also boast a fast file transfer feature that might just overlap with Big Blue’s Aspera file transfer product.

Just where and when the data centre will be built and will commence operations hasn’t been decided, nor has the cost of the C2 services. But we’re assured they’re coming.

Also inside the new NAS is an update to Synology’s productivity suite, which has grown a chat client that in a demo beheld by The Register looks like a pretty close Slack clone.

That’s a lot of fights to pick for a privately-held company thought to have annual revenue around US$100m and operating in a shrinking market for small NAS. Synology reckons it can pick them because it focuses on small to medium businesses. So much so that when your correspondent asked if it supports VVOLs we got a blank stare in reply. Or perhaps that was another language-fail moment. ®


Posted on in category News

AWS has a lousy hybrid cloud story. VMware might fix that soon

The content below is taken from the original (AWS has a lousy hybrid cloud story. VMware might fix that soon), to continue reading please visit the site. Remember to respect the Author & Copyright.

VMware and Amazon Web Services are reportedly about to stage a public display of affection.

Fortune reckons the two have been having intimate chats about the same kind of relationship Virtzilla has with IBM. That cloudy tryst sees IBM offer VMware’s service-provider-grade vSphere-as-a-service, the better to help VMware users build public clouds.

VMware did that deal after deciding its own cloud-building ambitions were a bit far-fetched. It therefore stopped building new data centres and positioned its own vCloud Air as the virtualisation connoisseur’s cloud complete with tales of DOS applications humming along inside for very odd clients. IBM got the job of giving VMware world-girdling scale and also the chance to be the one holding vSphere users hands and re-assuring them the cloud won’t be scary.

VMware’s also, of late, come to realise it can’t keep its users on-premises forever and has admitted its flagship vSphere private cloud product is in long-term decline. It’s therefore on the hunt for ways to keep vSphere users happy for as long as possible, often with hybrid cloud but also with new lines of business.

Amazon offers plenty of the things VMware needs. Its cloud is bigger than IBM’s and also rather better-patronised, which helps VMware’s cause by giving its users an option they almost certainly want.

Amazon also has a lousy hybrid cloud story. While it has plug-ins for vCenter and System Center and isn’t opposed to hybrid cloud, it hardly ever mentions it in polite company.

Replicating the IBM/VMware relationship would therefore improve AWS’ hybrid cloud story at the stroke of a pen. VMware would gain yet more scale.

Virtzilla would also add some needed credibility to its Cross Cloud architecture vision, articulated at VMworld, which promises abstraction of resources across clouds. When launching Cross Cloud VMware said it would be doable with nothing more than public APIs, a proposition El Reg‘s virtualisation desk found a little flimsy. A formal relationship would surely be a better launchpad for the new Cross Cloud business.

Fortune says the deal will be announced early next week. Suffice to say we’re keeping an eye on this. ®


Posted on in category News

It’s time for Microsoft to revisit dated defaults

The content below is taken from the original (It’s time for Microsoft to revisit dated defaults), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sysadmin blog What works for 100 users frequently doesn’t work for 10,000. The same is true in reverse; however, far fewer vendors worry about tailoring software designed for the enterprise to the needs of the SMB. True mass-market software needs to walk the tightrope between both worlds, and very little of it succeeds.

Let’s consider Active Directory (AD) replication times as an example. By default, AD is scheduled to do inter-site replication every 180 minutes (three hours). This makes sense if your AD is enormous and one or more of your sites happens to live on the other end of connectivity from the past. An ISDN line, for example. Or perhaps a telegraph.

This value can be changed from the default to occur as frequently as once every 15 minutes. Here again, 15 minutes might be rational if your AD is some multi-tentacle hydra with fleventy-five domains in multiple forests bound together into a complicated and incomprehensible circus of insanity. Fortunately, most of us don’t have to play that game, and thus 15 minutes represents quite a conservative minimum replication interval.

Indeed, for many multi-site businesses, 15-minute replication windows are increasingly an unacceptably long choice. The default 180-minute delays are simply absurd. Technology is in constant motion, and recent changes mean it’s time for Microsoft to revisit many of its choices.

DNS is everything

The biggest issue with AD replication times is that AD integrated DNS zones have to wait on the rest of the AD in order to replicate. This was fine back in 2000, when dynamicity of networks was an extreme novelty, but today’s new technologies are all designed to be highly dynamic and are absolutely reliant on DNS.

The problem with DNS and AD is twofold. First: Microsoft’s DNS servers are extremely user friendly, battle tested and reliable. More importantly, they integrate with other key infrastructure elements, most notably DHCP. Levy whatever venom against Microsoft you wish, its DNS servers are an excellent choice for the role, the result of which is a lot of critical infrastructure relies on them.

The second part of this is that increased adoption of IPv6, microservices, load balancers and so forth is driving a DNS-dependent dynamic infrastructure. Sysadmins aren’t the only ones creating workloads these days. Developers and end users might be doing it, and even the machines are spinning up workloads on their own!

The velocity of change

Putting aside the DNS issues – as one can always use non-AD-integrated DNS solutions if needed – there are other reasons 15-minute AD replication times are an annoyance. The first is simply the time it takes for changes made by systems administrators to show up. Yes, we can go into AD Sites and Services and manually trigger replication, but that’s a pain.

Additionally, the combination of self-service interfaces and hybrid cloud solutions means that systems administrators increasingly have no idea what’s occurring on their networks. We design the networks, but we don’t necessarily know every time a user has added a new device, or from where.

One particularly bothersome example is that of a marketing executive who purchased a new notebook and registered it against the company’s network using the provided cloud-based mobile device management service, as she had been instructed. The device was added, it began to receive emails, but her attempts to log in to the network failed.

The reason was that she had activated the notebook from one of the company’s smaller sites. The cloud service synchronized against the head office’s AD, but didn’t replicate the changes out to the site the marketing head was physically located in until 15 minutes later.

The result? Some frantic phone calls to a help desk professional who had no idea what could cause this (devices are typically activated at head office), which resulted in the marketing exec’s user account getting locked out. The marketing exec’s shiny new notebook was thus not working in time for a major customer presentation and words were exchanged with IT. Loudly.

The solution

Fortunately, despite the GUI, the PowerShell commands and the official guidance all saying replication can only be set as frequently as every 15 minutes, there is a workaround. The trick is to enable Inter-site Change Notification for the relevant links.

Microsoft’s Chad Duffey has an excellent blog post discussing the issue. There is a slightly quicker TechNet article if you don’t want the whys and wherefores.

Inter-site Change Notification essentially causes Active Directory to treat replication between AD servers located across site links as though they were in the same site. When a change is made it is immediately pushed across. (Actually, it takes between three and five seconds, which can make a difference for some high-churn applications, so be warned!)

The problem with the solution

The problem with the solution is that in solving some very real problems (like DNS), we create others. The prominent example is: as goes AD replication, so goes GPO dissemination. Even experienced systems administrators have been known to kill entire networks with a bad GPO. The idea of every GPO change made instantly replicating throughout the AD fabric is pretty scary.

This issue isn’t going to go away. It’s only going to get more pressing. What’s needed isn’t simply a GUI toggle to enable Inter-site Change Notification, but a fundamental change in how AD behaves.

Today, AD is (mostly) an all-or-nothing affair. When AD replicates, it all replicates. (There are some exceptions, such as lockouts.) This needs to change.

Overall, individual applications or groups of applications need to be able to set different replication times. DNS using the AD infrastructure is grand, but it would be groovy if it could replicate asynchronously of GPOs, for example. It would also be useful to specify the order in which services replicate, if they replicate together. DNS replicating ahead of GPOs, for example, helps to solve a lot of problems.

Imagine if your hybrid cloud infrastructure could say “replicate this information throughout the AD fabric immediately, because it is a user/device registration” without triggering a fabric-wide reconvergence.

This is a major undertaking, and it’s an open question whether or not Microsoft is interested in solving the problem. Traditionally, Microsoft only seems to engage with its customers when absolutely required, and even then only if a large enough customer makes a big enough noise. I don’t yet see any large customers hollering about this.

Windows Server is oddly like looking at any jurisdiction’s laws. It’s a curious combination of product rigidity and seemingly bizarre default values that make no sense for the majority of individuals or businesses in the present day. Sadly, getting things changed may be just as hard as getting politicians to crack open the books and put out-of-date laws to bed. ®


Posted on in category News

Edinburgh University to flog its supercomputer for £0.0369 per core hour

The content below is taken from the original (Edinburgh University to flog its supercomputer for £0.0369 per core hour), to continue reading please visit the site. Remember to respect the Author & Copyright.

The University of Edinburgh’s supercomputer, Cirrus, is now being rented to businesses for their mega-performance computing needs.

Cirrus is housed at the University’s Advanced Computing Facility at Easter Bush, which also hosts the UK’s national supercomputing service, ARCHER, although it doesn’t really compare to ARCHER’s 118,080 processing cores.

Time on Cirrus is charged at £0.0369 per core hour (exclusive of VAT), although to celebrate Cirrus’ launch the University is offering 1,000 core hours of free use to the first 20 companies that apply to use it.

Customers will be able to use the machine (worth £1,000,000) to tackle whatever challenges are being thrown up by their research or design efforts. The service is fully supported and customers will also have access to Edinburgh Parallel Computing Centre’s (EPCC) consulting expertise in high performance computing and data analytics.

Cirrus is advertised as a “mid-range, industry standard Linux cluster” on which customers can run their own codes or access standard commercial software tools.

In detail, it is an SGI ICE XA cluster with 56 compute nodes which utilises a superfast InfiniBand interconnect. Each of its compute nodes contains 36 cores, providing 2,016 cores in total, and each also has 256GB RAM. Hyperthreading is also enabled on each node, providing a total of 72 threads per node.
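For a rough sense of scale, the advertised rate works out as follows (back-of-the-envelope arithmetic, not a figure quoted by the University): 2,016 cores × £0.0369 per core hour ≈ £74.39 per hour to run the whole machine, exclusive of VAT, while the 1,000 free core hours on offer are worth about £36.90.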

There are three login nodes, with identical hardware to the compute nodes, which are provided for general use. Local Lustre storage is provided by a single Lustre filesystem, with 200TB of disk space, while users will also have access to EPCC’s data storage and archiving services.

George Graham, Commercial Manager of EPCC, said: “This newly installed computing power – in tandem with EPCC’s in-house expertise – means we are well placed to help businesses meet many of the computational challenges associated with developing new products and services.” ®


Posted on in category News

Built.io launches an IFTTT for business users

The content below is taken from the original (Built.io launches an IFTTT for business users), to continue reading please visit the site. Remember to respect the Author & Copyright.

With Flow, Built.io has long offered an integration tool that allowed technical users (think IT admins and developers) to create complex, multi-step integrations with the help of an easy to use drag-and-drop interface. Now, however, the company has also launched a more basic version of Flow that is aimed at business users who want to create IFTTT-like integrations between applications like Cisco Spark, Slack, Gmail, Marketo and Salesforce. To clarify the difference between the two services, the old version of Flow is now Flow Enterprise, while this new one is branded as Flow Express.

“Integration and automation add value to business and technical users alike, but there are a range of different needs and technical skillsets out there,” Built.io COO Matthew Baier told me. “After serving an audience comprised of power users, IT admins, and generally more technical users, Built.io saw the opportunity to bring integration to a less technical audience – one without a programming background. Built.io Flow Express helps people without coding skills connect the apps, services, and devices they use every day to automate powerful workflows.”

All of that does sound a lot like existing services like IFTTT, Zapier and Microsoft Flow (yep, same name), but Built.io argues that business processes don’t always lend themselves to the basic “if this then that” model.

Baier says that many of Built.io Enterprise’s users came to the service because they had outgrown what its competitors could offer. “Built.io Flow Express now provides the on-ramp we’d been missing,” he said. “Now with the end-to-end integration capabilities provided by the combination of Built.io Flow Express and Built.io Flow Enterprise, we’re providing an unlimited runway to start small and simple – just like you would with IFTTT – but keep on growing and adding to your integrations, without hitting a wall.”

Flow Express does away with the original drag-and-drop interface. Instead, it uses a basic step-by-step wizard which still allows you to create multi-step flows. The company notes that users who outgrow the Express product, though, will be able to export their integrations and take them to Flow Enterprise. Flow Express currently lets you connect 42 different services. Its competitors generally offer support for a wider range of tools, but all the standard services like Slack, Microsoft Dynamics, Salesforce, Trello, Box, Dropbox, etc. are here.

Built.io offers a free trial for Flow Express. Paid plans with support for up to five workflows start at $9/month. The company also offers a premium plan at $29/month with support for up to 25 workflows (and it’s offering a couple of bonus workflows for users who sign up to either plan before October 24).

Posted on in category News

This Infographic Shows the Common Ways Scammers Try to Phish Your Account

The content below is taken from the original (This Infographic Shows the Common Ways Scammers Try to Phish Your Account), to continue reading please visit the site. Remember to respect the Author & Copyright.

Chances are if your email or social media account has ever been compromised, you accidentally gave your credentials to the scammers yourself. The most common way to infiltrate an account is called phishing, in which people trick you into handing over your login info to false websites that look legitimate.

Phishing attacks aren’t new, of course, and there’s likely a deluge of such emails in your spam folder, but it’s still the leading cause of compromised accounts. This graphic from Digital Guardian highlights how you can spot phishing attempts in your inbox and how to avoid them. Whether it’s weird attachments that prey on your curiosity or spoofed links that take you to a false login page that imitates a familiar brand, there are a variety of techniques that scammers use to engineer their way into your account (often just to proliferate more spam). And it’s not just email; beware of shady text messages from unknown numbers or people posing as IRS agents requesting your private info.

Have a look at the graphic below for a thorough look at common phishing methods.

Don’t Get Hooked: How to Recognize and Avoid Phishing Attacks (Infographic) | Digital Guardian

Posted on in category News

CloudFlare shows Tor users the way out of CAPTCHA hell

The content below is taken from the original (CloudFlare shows Tor users the way out of CAPTCHA hell), to continue reading please visit the site. Remember to respect the Author & Copyright.

CloudFlare has backed up its promise to get rid of the CAPTCHAs that Tor users complain discriminate against them.

The content distribution network’s (CDN’s) hated CAPTCHAs make browsing an unhappy experience for Tor users by offering rather too many challenges. Worse yet, they drop a cookie on validated users’ browsers and thereby create a re-identification risk.

Surfers using Tor have complained for some time that CDNs like CloudFlare discriminate against them. CloudFlare assigns a reputation to a user’s IP address, which means that an innocent Tor user unfairly inherits the reputation of an exit node that might also be serving spam or malware.

Back in February, CEO Matthew Prince told The Register the company was working on ways to get rid of the CAPTCHA. At the time, a couple of CloudFlare engineers had already dropped the first draft-of-the-draft at GitHub.

CloudFlare’s architecture

The pre-Internet Draft draft is here.

At the moment, CAPTCHAs are presented to the user by JavaScript supplied by the CDN – and that can’t be reliably audited, because the code can change at any time.

Putting the challenge in a plugin makes it auditable, CloudFlare notes.

Next is the problem of the cookie, which the document highlights as a risk: “the challenge page sets a unique cookie to indicate that the user has been verified. Since Cloudflare controls the domains for all of the protected origins, it can potentially link CAPTCHA users across all >2 million Cloudflare sites without violating same-origin policy.”

Instead of the cookie, the plugin would use a blind signature scheme. Here’s how CloudFlare thinks it could work:

“The protocol allows a user to solve a single CAPTCHA and in return learn a specified number of tokens that are blindly signed that can be used for redemption instead of witnessing CAPTCHA challenges in the future. For each request a client makes to a Cloudflare host that would otherwise demand a CAPTCHA solution, a browser plugin will automatically supply a bypass token.

“By issuing a number of tokens per CAPTCHA solution that is suitable for ordinary browsing but too low for attacks, we maintain similar protective guarantees to those of Cloudflare’s current system.”

The blind signature scheme is described at Wikipedia. In CloudFlare’s implementation, the tokens carrying the signatures will be JSON objects in the plugin: “tokens will be a JSON object comprising a single ‘nonce’ field. The ‘nonce’ field will be made up of 30 cryptographically random bytes”.
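To make the token format concrete, here is a minimal sketch in PowerShell, purely for illustration: it builds a JSON object with a single ‘nonce’ field of 30 cryptographically random bytes. The Base64 encoding of the nonce is an assumption on my part; the draft only specifies the number of random bytes.

# Illustrative sketch only: a token-shaped JSON object with a 30-byte random 'nonce'
$rng   = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$bytes = New-Object byte[] 30
$rng.GetBytes($bytes)
$token = @{ nonce = [System.Convert]::ToBase64String($bytes) } | ConvertTo-Json
$token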

The plugin will also contain a CA-issued certificate to validate keys, and certificates will be checked against certificate transparency logs. ®


Posted on in category News

HPE, Samsung take clouds to carriers

The content below is taken from the original (HPE, Samsung take clouds to carriers), to continue reading please visit the site. Remember to respect the Author & Copyright.

HPE and Samsung are getting together to give carriers a shove towards a more cloudy future.

The two companies have announced a tie-up with a focus on network function virtualisation (NFV) and virtual network functions (VNF).

The two are part of a whole, but subtly different: NFV refers to taking carrier-grade applications (voice switching, video serving, network firewalls and so on) and turning them into software; VNF refers to the individual functions.

They’ve been on the RSN (real soon now) list for some time, but it’s been difficult to get telcos to shift them out of the laboratory.

HPE and Samsung are hoping that network operators would rather not download a free platform and start cutting their own code. HPE’s announcement pitches the ETSI-compliance of its OpenNFV Infrastructure platform, for example.

ETSI, the European Telecommunications Standards Institute, is building standard use-cases for NFV/VNF.

Samsung will now join HPE’s OpenNFV Partner Program, making sure its VNFs will run on HPE kit.

Those VNFs include a virtual evolved packet core (EPC) function for LTE Advanced networks, an IP media subsystem, and a VNF manager; HPE’s contribution is to tip in its OpenNFV platform and its management and orchestration system.

The two will offer integration services and third-party solutions. ®


Posted on in category News

Kaleao’s KMAX ARM-based server has legs. How fast can it run?

The content below is taken from the original (Kaleao’s KMAX ARM-based server has legs. How fast can it run?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Kaleao is a startup developing ARM-based servers and hyper-converged appliances under the KMAX brand. Its marketing-speak says it has a “true convergence” approach involving “physicalization” and a “microvisor” – oh dear, what does this mean?

The KMAX product comes in server and appliance forms.

The servers use 64-bit ARMv8-compatible CPUs and employ big.LITTLE architecture – ARM’s form of CPU tiering, with one or more beefy cores spinning up to handle heavy workloads and smaller, lightweight cores (which don’t need quite so much power) taking on other work.

FPGAs – reprogrammable logic chips – are employed and Kaleao says that one “can create the virtual function NIC as an actual PCI device. Each VM can directly map a unique virtual function as a PCI device, so each VM has real hardware resources that can change dynamically like virtual ones.”

What is “true convergence?” Kaleao states: “Traditional converged systems are pre-integrated assembly of storage, servers and network equipment, which normally are separated devices interconnected and provided. True convergence is the technology that allows a native, board-level convergence. With true convergence, any device can be a compute, storage and network server or any of these functions.”

The net net is that Kaleao offers deeper convergence, not true or even untrue convergence.

Physicalization involves software-defined, hardware-accelerated resources, like the virtual NIC above. Here’s a schematic diagram contrasting legacy server architecture with KMAX:

Kaleao schematic with global compute, network and storage pools

There are three server capabilities – compute, storage and networking.

A compute unit has:

  • 4 x ARM Cortex-A57 cores
  • 4 x ARM Cortex-A53 cores (that’s eight cores in total)
  • 4GB LPDDR4 25GB/sec DRAM
  • 128GB NV cache
  • 20Gbit/s IO bandwidth
  • OpenStack NOVA compute
  • Resource manager
  • Microvisor

It draws less than 15 watts of power and Kaleao is constantly singing a power-saving refrain.

A storage unit is made from a compute unit plus a storage resource manager, an NVMe SSD (PCIe gen 2 x 4) with 500GB, 1TB, 2TB, 4TB or 7.68TB of capacity, and OpenStack CINDER.

The network unit is also based on a compute unit with a network resource manager, 2 x 10GbitE channels and OpenStack NEUTRON. Each blade carries a single embedded 10/40Gbit switch and 2 x QSFP 40/10Gbit ports.

There is a deployment hierarchy, from unit through node, blade and chassis to a rack, which gets to some big numbers. Compute, storage and networking units are deployed in nodes, each with 3-4 servers (compute units), up to 7.7TB of NVMe SSD (storage unit), and 2 x 10GbitE (network unit).

A blade can have 4 x nodes. A 3U chassis can have 12 blades, with Kaleao saying this amounts to 192 servers, 1,536 cores, up to 370TB of NVMe SSD, 48 x 40GbitE (960 Gbit/s), and drawing less than 3kW from an external 48V supply. These chassis can fit in a rack to deliver more than 21,000 cores for compute, in excess of 5PB of flash storage, and more than 13,000 Gbit/s of network bandwidth. Having racks liquid-cooled is an option.
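Those headline figures are straightforward multiplication. Here is a back-of-the-envelope check, a sketch only, which assumes four servers per node and fourteen of the 3U chassis in a standard 42U rack (the per-rack chassis count is my assumption, not Kaleao’s):

# Rough arithmetic behind the quoted figures (assumptions: 4 servers/node, 14 chassis per 42U rack)
$serversPerChassis = 12 * 4 * 4                           # 12 blades x 4 nodes x 4 servers = 192
$coresPerChassis   = $serversPerChassis * 8               # 8 cores per server = 1,536
$chassisPerRack    = [math]::Floor(42 / 3)                # 14 x 3U chassis in a 42U rack
$coresPerRack      = $coresPerChassis * $chassisPerRack   # 21,504, i.e. "more than 21,000 cores"
"$serversPerChassis servers and $coresPerChassis cores per chassis; $coresPerRack cores per rack"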

A KMAX server edition is basically a 3U chassis with IPMI 2.0, web and CLI interfaces and software virtualized resources. The Appliance Edition is a server plus hyper-converged software:

  • Unlimited “physicalized” resources
  • Template-based app and service deployment
  • Software-defined network functions
  • Software-defined distributed storage
  • Embedded OpenStack controller or APIs
  • Orchestration and management tools
  • Multi-tenancy support
  • Centralized management

Kaleao says it has a fabless business model, full control over its hardware and software production, and retention of R&D control. It claims its KMAX products enable up to 10x more performance than x86 rack, blade and hyper-converged products in the same rackspace, with 3-5x lower CAPEX and 4x better energy efficiency.

It reckons KMAX should be interesting for content delivery and storage, web hosting, data analytics, IoT fabrics and communications infrastructure plus enterprise IT infrastructure for those enterprises that can contemplate moving from their existing infrastructure to an ARM-based, OpenStack one.

This is all very worthy but will it fly? We think it will need a long runway before any flight takes place. What do you think? ®


Posted on in category News

Custom Keyboard Makes the Case for Concrete

The content below is taken from the original (Custom Keyboard Makes the Case for Concrete), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the worst things about your average modern keyboard is that it has a tendency to slide around on the desk. And why wouldn’t it? It’s just a membrane keyboard encased in cheap, thin plastic. Good for portability, bad for actually typing once you get wherever you’re going.

When [ipee9932cd] last built a keyboard, finding the right case was crucial. And it never happened. [ipee9932cd] did what any of us would do and made a custom case out of the heaviest, most widely available casting material: concrete.

To start, [ipee9932cd] made a form out of melamine and poured 12 pounds of concrete over a foam rectangle that represents the keyboard. The edges of the form were caulked so that the case edges would come out round. Here’s the super clever part: adding a couple of LEGO blocks to make space for the USB cable and reset switch. After the concrete cured, it was sanded up to 20,000 grit and sealed to keep out sweat and Mountain Dew Code Red. We can’t imagine that it’s very comfortable to use, but it does look to be cool on the wrists. Check out the gallery after the break.

Concrete is quite the versatile building material. We’ve seen many applications for it from the turntable to the coffee table to the lathe.

Thanks for the tip, [Itay]. Via r/mechanicalkeyboards.

Posted on in category News

Panasonic’s new prototype TV can hide in plain sight

The content below is taken from the original (Panasonic’s new prototype TV can hide in plain sight), to continue reading please visit the site. Remember to respect the Author & Copyright.

Panasonic has shown off a transparent TV before, but the company has since improved the image quality to the extent that the idea of a television built into your furniture’s glass panes is not only possible — it’s right here. The OLED screen is made from a fine mesh, embedded into the glass sliding door. While the TV image is visible even with the backlighting on, once it’s dimmed the image is clear and bright enough to be almost indistinguishable from existing televisions. (The last model was a bit too dim, and required under-shelf lighting to boost the image.) Turn the TV panel off, however, and it’s hard to tell it was ever there to begin with. Want one? Panasonic’s spokesperson says the television is likely to stay in development for a while longer: at least three more years.

Posted on in category News

Samsung’s Touchable Ink prints braille using any laser printer

The content below is taken from the original (Samsung’s Touchable Ink prints braille using any laser printer), to continue reading please visit the site. Remember to respect the Author & Copyright.

Braille printers can cost thousands of dollars, whereas laser printers can be picked up for under $100. So wouldn’t it be great if laser printers could somehow print out braille and completely […]

Posted on in category News

Transforming Exchange Distribution Groups to Office 365 Groups

The content below is taken from the original (Transforming Exchange Distribution Groups to Office 365 Groups), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft says that it doesn’t recommend traditional email distribution groups (within Office 365) anymore. Although the company admits that Office 365 Groups can’t handle all the scenarios that distribution groups deal with today, pressure is building to move to Office 365 Groups whenever possible. Among the tools that exist to help make the change is a one-click option in the Exchange Online Administration Center (EAC). However, the option only works for distribution groups that consist of Exchange Online mailboxes. If a group contains any other mail-enabled object such as a public folder or external contact, the EAC option doesn’t work. That’s a problem because many distribution groups contain other types of email-enabled objects, such as other distribution groups, public folders, and mail contacts — or even Office 365 Groups.

When the time came to upgrade a particularly important distribution group, there was nothing to do but perform a manual conversion. And that’s when some problems arose, mostly caused by synchronization delays between Exchange Online and Azure Active Directory.

Using Modern Collaboration

Email is good at transmitting attachments, but it fails at maintaining an archive of those documents or indeed a record of the discussions that occur inside distribution groups. Since the dawn of Exchange 4.0 in 1996, the problem of needing an archive for discussions inside distribution groups has often been dealt with by adding a mail-enabled public folder to the membership of groups. But public folders are old-time technology now and something modern should be used instead, especially when collaborating in the cloud.

Some of the Exchange MVPs (hereafter known as the GOMs, or “grumpy old men”) use a private distribution group to communicate news, views, and complaints among the team. The distribution group is hosted in my Office 365 tenant and the MVPs are defined as mail contacts to allow them to participate in the distribution group.

We decided to exploit the new guest user access feature for Office 365 Groups and replace the old distribution group with a new Office 365 Group. In addition to email conversations that function very much like distribution groups, using an Office 365 Group allows members to share information in a SharePoint document library and OneNote notebook. There’s also the prospect of being able to tie in Microsoft Planner in the future, if the GOMs ever felt the need to assign tasks to each other.

Converting to an Office 365 Group

The strategy looked good on paper. Implementing it was another matter. The key step is to transform the set of existing mail contacts that form the membership of the distribution group into guest users and add them as members of the new Office 365 Group. There’s no automatic method to do so, and as there were only 70-odd members to process, I didn’t feel like figuring out all the complexities involved in scripting the task with PowerShell. I figured that I could move the mail contacts over using the GUI as quickly as I could write and test a script, including the need to debug the inevitable errors introduced by my inability to code. The script remains a project for another day.

No magic is needed to transform mail contacts into guest users. The process is as follows:

  1. Note the SMTP email address of the mail contact.
  2. Use EAC or PowerShell to delete the mail contact.
  3. Add the email address of the deleted mail contact as a new guest user for the Office 365 Group. I used OWA for this purpose (as shown in Figure 1).
  4. Save and iterate until the full set of mail contacts are processed.
  5. Note the primary SMTP address of the old distribution group and then assign a new primary SMTP email address to the old distribution group.
  6. Remove the previous primary address from the distribution group and assign it as the primary address of the new Office 365 Group. Switching the addresses makes sure that replies to the old distribution group sent from outside the tenant will be redirected to the Office 365 Group.
Figure 1: Adding an external user as a guest member of an Office 365 Group (image credit: Tony Redmond)

The notion that the plan could be executed quickly was derailed by a couple of small glitches that made the process a little frustrating at times.

The Need for Synchronization

Azure Active Directory (AAD) is the master directory for Office 365. Different workloads have their own directories to hold information about objects that are specific to the workload. Thus, when you remove a mail contact, the delete operation has to be synchronized from EXODS, the directory used by Exchange Online, to AAD.

Synchronization between directories can take a little time to happen. Until the mail contact is purged from AAD, you won’t be able to use the SMTP address for the mail contact to create a new guest user for the Office 365 Group. The error message (shown in Figure 2) is a little misleading, as you’re not trying to add a contact created by your admin. Exchange Online doesn’t allow multiple objects to exist that use the same email address (of any type). In this case, you need to wait for synchronization to settle down and “free” the SMTP address for reuse.

Figure 2: Whoops! The mail contact still exists in AAD (image credit: Tony Redmond)

Waiting for a minute or so should be sufficient to convince AAD that the SMTP address is available to be assigned to a new guest user account. Sometimes it took longer — it all depends on the current load that exists within Office 365.

As I had 70 new guest users to add to the group, I processed the guest users in batches while watching TV (to fill the gaps when synchronization occurred). After a set of guest users were added, I’d attempt to update the group and sometimes encounter the issue shown in Figure 3.

Figure 3: Some new guest members can be added to the Office 365 Group. Some can’t. Why? (image credit: Tony Redmond)

Once again, I concluded that the need for synchronization to finish was the reason this problem happened. Two steps are needed to add a guest user to a group. First, the guest user account is created in AAD. Second, the linked list of group members has to be updated with the new guest user accounts. However, those new accounts have to be synchronized from AAD to EXODS before they can be added to the membership list. Waiting for a moment or so resolved the problem.

The Coding Approach

Given the need to wait for synchronization to happen, I was happy with my decision not to attempt to script the transformation. The code required is more complex than it appears on the surface. It’s not just a matter of assembling a few cmdlets as follows:
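Here is a rough sketch of the kind of cmdlet sequence involved. It is an illustration only, not the original script: the use of New-AzureADMSInvitation for the guest invitation and Add-UnifiedGroupLinks for the membership update is one possible combination, and it assumes the Exchange Online and AzureAD PowerShell modules are already connected.

# Sketch only: convert mail-contact members of the old list into guest members of the Office 365 Group
$Members = Get-DistributionGroupMember -Identity "Exchange MVPs"
ForEach ($Member in $Members) {
    If ($Member.RecipientType -eq "MailContact") {
        $Smtp = $Member.PrimarySmtpAddress.ToString()
        Remove-MailContact -Identity $Member.Identity -Confirm:$False
        # Invite the same address back as an AAD guest user (assumed approach)
        New-AzureADMSInvitation -InvitedUserEmailAddress $Smtp `
            -InviteRedirectUrl "https://www.office.com" -SendInvitationMessage $False | Out-Null
        Start-Sleep -Seconds 60   # crude pause for AAD/EXODS synchronization
        Add-UnifiedGroupLinks -Identity ExchangeGOMs -LinkType Members -Links $Smtp
    }
}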

To accommodate synchronization, some waits must be incorporated in the code to ensure that objects are available before they are processed. Some error checking is needed, too. In short, writing and testing the PowerShell code required more effort than I cared to expend to get the group up and running. If anyone else wants to take on the challenge, they are more than welcome!

Keeping Mail Flowing

While I was busy moving objects around, I wanted to allow the Office 365 Group to receive email from external users. Because they are part of the group membership, guest users can send messages to the group like any other member, so nothing needs to be done to allow guest users to contribute to conversations via email. However, I wanted to keep conversations flowing while some people were members of the old distribution group and some had been moved across to the Office 365 Group. For this reason, I opened the Office 365 Group up so that any external user could contribute to conversations via email. Here’s the command I used:

[PS] C:\> Set-UnifiedGroup -Identity ExchangeGOMs -RequireSenderAuthenticationEnabled $False

Once all the group members are added, you can reverse course and set RequireSenderAuthenticationEnabled to $True to secure the group.
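Reversing it is simply the same cmdlet with the flag flipped:

[PS] C:\> Set-UnifiedGroup -Identity ExchangeGOMs -RequireSenderAuthenticationEnabled $True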

The Final Step

After all the members were moved across to the Office 365 Group, I added it as a member of the old distribution list to ensure that any reply sent to the old list would reach the new group. You can’t do this through any GUI, so PowerShell has to be used:

[PS] C:\> Add-DistributionGroupMember -Identity "Exchange MVPs" -Member ExchangeGOMs

A Happy Group Is a Good Group

The outcome is that the move succeeded and we have a perfectly-functioning Office 365 Group. Email conversations continued to flow without missing a heartbeat (or more importantly, a message). The new group is ready to share documents once the GOMs decide what they’d like to share.

We know that some limitations exist. For instance, guest users can’t access the group with the Outlook Groups mobile app, yet. Nor can they use the OneDrive for Business sync client to synchronize the group document library to their PC. However, these are the finer points that Microsoft will get to in time. Until then, the grumpies are happy with their new home.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros,” the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Transforming Exchange Distribution Groups to Office 365 Groups appeared first on Petri.

Posted on in category News

Microsoft Authenticator now lets you generate strong passwords

The content below is taken from the original ( Microsoft Authenticator now lets you generate strong passwords), to continue reading please visit the site. Remember to respect the Author & Copyright.