Project ‘Sender ID’

The content below is taken from the original (Project ‘Sender ID’), to continue reading please visit the site. Remember to respect the Author & Copyright.

Over 100 billion SMS messages are sent per year in the U.K. While this figure continues to fall as mobile device users opt for alternative communication mediums such as WhatsApp, Facebook Messenger and Signal, we are still a nation dependent on this older form of messaging. If you think it’s going to disappear anytime soon you should… Read more →

Portable machine turns salt water into drinking water using solar power

The content below is taken from the original (Portable machine turns salt water into drinking water using solar power), to continue reading please visit the site. Remember to respect the Author & Copyright.

Fresh drinking water is a real problem in some parts of the world, where it is very hard to come by. This clever machine, designed by scientists at MIT, will turn salty water into drinking water…

Another year!

So with another year almost over, before this year’s last sunset, before you start enjoying the next year and before the networks get busy, I wish you a fantastic year ahead. HAPPY NEW YEAR…

9 technologies that IT needed but didn’t get in 2016

The content below is taken from the original (9 technologies that IT needed but didn’t get in 2016), to continue reading please visit the site. Remember to respect the Author & Copyright.

Despite some significant arrivals, 2016 also failed to deliver some long-awaited technologies. And some of what we eagerly ripped the wrapping paper off proved to be a letdown.

Here’s a rundown of the gifts IT didn’t get in 2016.

Professional-grade 3D printing

If you want to print out a stand for your phone or a model for a new product, you can easily find a 3D printer for the office that can do that — as long as you want to print them out in plastic. You can spend more and get a 3D printer that can UV cure resin and make small objects like custom-fit earplugs in about 10 minutes (I watched my ACS Custom in-ear monitor headphones get printed from digital scans of my ear canals earlier this year). Even HP’s $140,000 Multi Jet Fusion printers — promised for this year and offering multi-color printing — only just went on sale, and they still only print nylon. You can prototype a (plastic) circuit board with conductive ink circuits with the Voxel8 Developer Kit, as long as you pause the printing and add the chips by hand.

True, on-demand additive manufacturing using a wider range of materials like metal, carbon fiber and ceramic, or using conductive ink to print circuits embedded inside the parts you print, is still out of most companies’ reach. Want to print your own racks with built-in stress sensors, a custom internet of things (IoT) device for your business or spare parts for the office printer? Maybe next year — as long as you have a big budget and a large industrial space in which to put the kit.

Touchscreen Macs

Four years after Windows 8 made touchscreen notebook PCs a real thing, the latest Mac notebooks still don’t have touchscreens — and they’re not going to get them. It took a lot of work by Microsoft to make the interface in traditional desktop applications like Office and Explorer work as well with a finger as with the precise point of a mouse cursor; there’s even machine learning to guess what you’re trying to touch.

It’s not that Apple couldn’t do the same work or push third-party developers into doing it. It’s that it doesn’t fit with the way the company sees computers developing. You just don’t want a big screen on your desk with fingerprints all over it, says Apple (and it’s likely correct there). If you want a portable device with touch and a keyboard, Apple wants to sell you the iPad Pro.

A post-PC world

While sales of pure tablets seem to have peaked (mainly because we’re mostly happy with the ones we have and aren’t ready to buy new ones yet), what is growing is sales of two-in-one, convertible devices that give you the advantages of both laptops and tablets. Even the new Windows on ARM PCs coming in 2017 will run x86 applications and join the domain for PC-style management rather than get mobile device management — that seems to be what users and IT admins alike want.

And in business, we’re not even done with desktops yet: They’re cheaper, longer lasting and harder to lose or break than laptops. According to Spiceworks’ annual State of IT report, while EMEA businesses will spend less on desktops than laptops in 2017 (for the first time), in North America the budget for desktops is still a little higher than for laptops.

Proper password replacements

The desire to keep joining devices to the domain might reflect a lack of confidence in alternative security options. Multi-factor authentication, biometrics, document classification, information rights management and other security options that apply to people and information rather than devices still aren’t universally deployed. Windows Server 2016 and Windows 10 include strong protections against pass-the-hash and pass-the-ticket attacks moving laterally through your network and that’s one reason many businesses are moving to them at speed. But we still haven’t got away from the password, for everything from web sites to cloud storage services to banking.

The FIDO 2 standards, currently in development, are being built into Windows, adopted by financial organizations like MasterCard, and supported on multiple devices. There’s a good chance that they can replace passwords with tokens, biometrics and context-sensitive authentication, but it might take a while. After all, few businesses have stopped expiring passwords regularly, even though official NIST guidance changed this year to point out that this makes them less secure: if you keep making people learn new passwords, they either pick less complex ones or write them down.

Universal docking technology and USB-C

When Apple introduced a MacBook with only one USB-C port, it looked like a bold move that would kickstart the arrival of a universal, high-speed interconnect: One cable could connect a laptop to a docking station delivering high-speed video, storage and network connections, with multiple protocols going over what looked like a standard USB cable. Instead, it seems to have brought us the year of the dongle, because so many devices still plug into the standard USB port.

As usual, it comes down to cost: USB-C ports are more expensive (and so are the cables). Plus, we still have lots of peripherals that connect to standard USB ports where a USB-C cable won’t fix the problem (like Logitech’s wireless receiver for mice and keyboards). The cost of cables and peripherals that connect to Macs has been an issue for enterprise adoption, and adding USB-C dongles to the shopping list only makes that more of an issue.

Going wireless might be a better solution: The 802.11ad standard has the bandwidth over short distances to support universal docking stations, and it should show up in notebooks in 2017. But for now, you still need to budget for a range of cables and adapters and think about what peripherals will still work with any new laptops you order.

Private app stores

Remember when every business was going to have its own app store (or its own custom version of the big-name app stores) with line of business apps showing up alongside the latest mobile hits? According to the latest CCS Insight decision maker survey, only 11 percent of companies have a private app store. Even for large firms, it’s only 29 percent. And only 28 percent of companies have made any custom apps. That leaves most employees going to public app stores and a good deal of confusion about what they’re allowed to do there. While 69 percent of decision-makers say employees are supposed to get IT approval for buying apps, only 31 percent of employees agree.

Given that the most popular apps are Microsoft Office, Skype, Google Apps, Dropbox and Facebook (if you ask IT teams), and Office, Adobe for PDFs, Skype, LinkedIn and WhatsApp (if you ask employees), private app stores may not be that important, as long as your line of business apps work in mobile web browsers.

Portable, hybrid cloud

OpenStack was going to be a way to have compatible private and public cloud services so you could migrate a workload from your own data center to a cloud service, move it to another cloud service or bring it back in house whenever the costs looked better. With HP and Cisco closing down their OpenStack services, that’s looking less like the direction CIOs want from public cloud. Indeed, it was customer feedback that had Microsoft switch Azure Stack to focus on bringing the higher-level, higher-value PaaS services to hybrid cloud instead of the commodity IaaS services you can switch between.

Those commodity IaaS options aren’t as interchangeable as they could be: You can use a cloud broker to move VMs from cloud to cloud, but you still need different VPNs for each cloud and every cloud has its own resource management interfaces for controlling resources like storage and databases and its own key management services for encryption.

Investing in the complex orchestration platform you’d need to have in order to achieve frictionless cloud mobility would likely cost more than you could save by migration. This is because of how closely the big cloud services price match each other, especially when the real value is leveraging those higher-level PaaS services. If you’re worried about cloud lock-in, think about flexible, decoupled design patterns that you could re-implement on another service, not the intercloud equivalent of lift and shift.

General AI

AI chatbots, business analytics and machine learning matured this year to the point that most businesses can usefully start to take advantage of them for areas like security, CRM and support. Conversational interfaces in chat clients or on devices are becoming useful, at least for quick queries. But what we don’t have, and might never get, is general purpose ‘artificial intelligence’ that can do any job the way a human can. If your Facebook chatbot stops getting calls from customers, you can’t put it to work analyzing phishing emails. You’d have to create a whole new data model and build a whole new system to tackle another information domain.

Plus, the predictions and analyses from machine learning systems are only going to be as good as the data and the data models you feed them. Even then there will be false positives and spurious correlations, as well as deliberate attacks on those systems like the internet trolls who had Microsoft’s Tay bot spewing hate speech. And what you get out of AI-based systems is a prediction with a certainty level, which means we’re going to have to get comfortable dealing with uncertainty, and we need to be prepared for the machine learning failures that are in our future.
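
To make that last point concrete, here is a minimal sketch (not from the article) of what dealing with uncertainty can look like in practice: the application only acts automatically when the model’s reported confidence clears a threshold, and routes everything else to a human. The classifier, threshold and labels are hypothetical stand-ins.

    from typing import Tuple

    CONFIDENCE_THRESHOLD = 0.90  # assumption: tune per use case and risk appetite

    def classify_email(text: str) -> Tuple[str, float]:
        """Stand-in for a real model; returns (label, confidence)."""
        # A real system would compute features and call something like
        # model.predict_proba(features) here.
        return ("phishing", 0.72)

    def handle_email(text: str) -> str:
        label, confidence = classify_email(text)
        if confidence >= CONFIDENCE_THRESHOLD:
            return "auto-handled as " + label      # confident enough to act
        return "queued for human review"           # live with the uncertainty

    print(handle_email("Your account has been suspended, click here..."))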

Getting off server products before they’re out of support

This one is a gift IT could give itself and rarely does. Support lifecycles are public. It wasn’t a surprise that Microsoft SQL Server 2005 stopped getting support and security updates in April 2016, any more than it would have been a surprise that the Windows Server 2003 you were probably running that database on stopped getting updates in 2015.

Having noticed how bad companies are at this, Microsoft did at least put a price on procrastination. If you have Software Assurance, you can buy Premium Assurance to get critical security fixes for an extra six years (for 5-12 percent of the license cost). Or you can use that cost as an argument to move to the newer version of the software that you’re already paying for with Software Assurance.

Give yourself a gift for next year and start planning what to do about Windows Server 2008 and SQL Server 2008 reaching end-of-life in 2020.
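
As a rough planning aid, the sketch below shows how that countdown might be tracked. The end-of-support dates are indicative assumptions only and should be confirmed against Microsoft’s published support lifecycle pages before you plan around them.

    from datetime import date

    # Assumed extended-support end dates; verify against Microsoft's lifecycle pages.
    END_OF_SUPPORT = {
        "SQL Server 2008 / 2008 R2": date(2019, 7, 9),
        "Windows Server 2008 / 2008 R2": date(2020, 1, 14),
    }

    def days_remaining(as_of: date) -> None:
        for product, eol in sorted(END_OF_SUPPORT.items(), key=lambda kv: kv[1]):
            delta = (eol - as_of).days
            status = f"{delta} days of support left" if delta > 0 else "already out of support"
            print(f"{product}: {status} (end of support {eol})")

    days_remaining(date(2017, 1, 1))  # viewed from the article's timeframe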

This story, “9 technologies that IT needed but didn’t get in 2016” was originally published by CIO.

Microsoft To Provide Additional Windows Update Installation Options With Creators Update

The content below is taken from the original (Microsoft To Provide Additional Windows Update Installation Options With Creators Update), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the Spring of 2017, Microsoft will be releasing a new update for Windows 10 that the company is calling the Creators update. While many of the features the company talked about back in October were for the consumer, there are several features targeted specifically at enterprise customers.

In addition to those that have already been announced, a leaked build of Windows 10 reveals that Microsoft will soon let updates be deferred for up to 35 days. Microsoft calls this feature “Pause Updates”, and it gives a small amount of control over the update process back to the user.

This delaying of installation will be helpful for reviewing updates before they are installed to make sure that there will be no compatibility issues. But, honestly, the bigger benefit here is being able to stop updates from installing when it is uncovered that a patch is breaking features of Windows 10. While Microsoft hopes that this scenario never happens, seeing as it has already occurred several times with Windows 10, it’s inevitable it will happen again.

While 35 days may not be enough for some, it should suffice for most to defer upgrades for a short period, and if there are any major issues with a patch, they should be uncovered in that window of time.
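
For admins who want to see how the pause state is surfaced on their own machines once the Creators Update arrives, a small inspection script is sketched below. The registry path is an assumption based on where current Windows 10 builds keep Windows Update client settings, not a documented interface, so treat it purely as a way to look around.

    import winreg

    # Assumed location of Windows Update client settings; not a documented API.
    SETTINGS_KEY = r"SOFTWARE\Microsoft\WindowsUpdate\UX\Settings"

    def dump_update_settings() -> None:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SETTINGS_KEY) as key:
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:  # raised when there are no more values
                    break
                print(f"{name} = {value!r}")
                index += 1

    dump_update_settings()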

The build of Windows 10 that leaked is the Enterprise version of the software, so it’s safe to say that this feature is coming to the corporate world and is not only for consumers. We don’t have an exact date on when the Creators update will arrive, but feature-completion will occur in January and will be followed by several weeks of bug smashing and refinement.

The post Microsoft To Provide Additional Windows Update Installation Options With Creators Update appeared first on Petri.

Will you still be using a RISC PC in 2017?

The content below is taken from the original (Will you still be using a RISC PC in 2017?), to continue reading please visit the site. Remember to respect the Author & Copyright.

The RISC PC was released in the mid 1990s while the Iyonix came out in the early 2000s and was available until the end of that decade. So if you are using a RISC PC, it could well be 20 years old and even your Iyonix is likely to be at least 8 years old.

This equipment is now obsolete in computing terms, if it works at all (you did change the batteries before they leaked…).

There are FOUR reasons why you might be using a RISC PC (or Iyonix) in 2017.

Retro
This is (IMHO) a really good reason for using a RISC PC in 2017. There is nothing like the original kit to get the true feel for days gone by, and there is a lively discussion on the Stardot forums about keeping vintage computers like the BBC and RISC PC going. But this is not the same as having a modern, general-purpose system.

Nostalgia/Attachment
Many people get very attached to items. In this case the question is whether your real attachment is to the RISC PC itself (which has not developed) or to running RISC OS on a powerful machine (which has).

Backwards support
It may be that you cannot live without a specific piece of software or hardware which only runs on these old machines. If so, we would love to hear what it is. Maybe there are alternatives, or interest in providing a more modern replacement?

Inertia
It has always worked, so there is no need to change. This is true, but computing moves on and you can now get faster machines with more modern versions of RISC OS and get more done on your favourite platform. Ironically, most modern televisions have HDMI inputs, so we can now go back to the 80s with our new Raspberry Pi plugged into the TV!

So what computer will you be using in 2017?

No comments in forum

Handbrake’s video conversion app update was 13 years in the making

The content below is taken from the original (Handbrake’s video conversion app update was 13 years in the making), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the fast-moving world of modern app development, users can often wait days or a small number of weeks for an update. However, if you’re the team behind Handbrake — one of the world’s most popular video conversion apps — years can pass before you’re ready to show off what you’ve been working on. Well, 13 years to be exact. After more than a decade of development as a series of beta releases, the Handbrake team has released version 1.0.0 of its transcoding software, which delivers a much-needed set of new features.

One of the most welcome additions is the availability of new video presets, which now include settings for the latest smartphones, tablets, consoles and streaming devices. If you’re looking to (legally) back up your movie collection, the app also includes new Matroska presets, offering support for Google’s VP9 video codec and Opus audio. Other noteworthy inclusions are the ability to select specific DVD titles and chapters to rip, improved subtitle support and the option to queue up multiple encodes.

Although Handbrake is easy to use, it does take time to find which settings work for you. With this in mind, the team has released new online documentation for its Windows, Mac and Linux app, which takes you through the best ways to convert video, create advanced workflows and troubleshoot any problems you might come up against.
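
As a quick illustration of the queueing workflow, the sketch below drives HandBrake’s command-line companion, HandBrakeCLI, over a folder of source files. The preset name (“Fast 1080p30”) and the folder paths are assumptions; substitute whichever of the 1.0.0 presets suits your devices.

    import pathlib
    import subprocess

    PRESET = "Fast 1080p30"                # assumed built-in preset name
    SOURCE_DIR = pathlib.Path("rips")      # hypothetical input folder
    OUTPUT_DIR = pathlib.Path("encoded")   # hypothetical output folder

    def encode_queue() -> None:
        OUTPUT_DIR.mkdir(exist_ok=True)
        for source in sorted(SOURCE_DIR.glob("*.mkv")):
            target = OUTPUT_DIR / source.with_suffix(".mp4").name
            subprocess.run(
                ["HandBrakeCLI", "-i", str(source), "-o", str(target), "--preset", PRESET],
                check=True,  # stop the queue if an encode fails
            )

    encode_queue()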

Via: iClarified

Source: Handbrake

The Tiny SCSI Emulator

The content below is taken from the original (The Tiny SCSI Emulator), to continue reading please visit the site. Remember to respect the Author & Copyright.

For fans of vintage computers of the 80s and 90s, SCSI can be a real thorn in the side. The stock of functioning hard drives is dwindling, and mysterious termination issues are sure to have you cursing the SCSI voodoo before long. Over the years, this has led to various projects that aim to create new SCSI hardware to fill in where the original equipment is too broken to use, or too rare to find.

[David Kuder]’s tiny SCSI emulator is designed for just this purpose. [David] has combined a Teensy 3.5 with a NCR5380 SCSI interface chip to build his device. With a 120MHz clock and 192K of RAM, the Teensy provides plenty of horsepower to keep up with the SCSI signals, and its DMA features don’t hurt either.

Now, many earlier SCSI emulation or conversion projects have purely focused on storage – such as the SCSI2SD, which emulates a SCSI hard drive using a microSD card for storage. [David]’s pulled that off, maxing out the NCR5380’s throughput with plenty to spare on the SD card end of things. Future work looks to gain more speed through a SCSI controller upgrade.

But that’s not all SCSI’s good for. Back in the wild times that were the 80s, many computers, and particularly the early Macintosh line, were short on expansion options. This led to the development of SCSI Ethernet adapters, which [David] is also trying to emulate by adding a W5100 Ethernet shield to his project. So far the Cabletron EA412 driver [David] is using is causing the Macintosh SE test system to crash after initial setup, but debugging continues.

It’s always great to see projects that aim to keep vintage hardware alive — like this mass repair of six Commodore 64s.

Filed under: computer hacks

New MusicBrainz Virtual Machine released

The content below is taken from the original (New MusicBrainz Virtual Machine released), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’ve finally released a new MusicBrainz virtual machine! This new version has become a lot more automated and is much easier to create and deploy. Hopefully we will be doing monthly releases of the VM from here on out.

A lot of things have changed for this new release. If you have used the VM before, you MUST read the instructions again. Before filing a bug or asking us a question, please re-read the documentation first!

Ready to jump in? Read the instructions.

Filed under: musicbrainz

OpenStack Developer Mailing List Digest December 17-23

The content below is taken from the original (OpenStack Developer Mailing List Digest December 17-23), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • AJaeger: We’ve now got the first Deployment guide published for Newton, see http://bit.ly/2i31smp . Congrats to OpenStack Ansible team!
  • clarkb: OpenStack CI has moved off of Ubuntu Trusty and onto Ubuntu Xenial for testing Newton and master.
  • ihrachys: first oslo.privsep patch landed in Neutron.
  • dulek: Cinder now supports ZeroMQ messaging!
  • All

Release Countdown for Week R-8, 26-30 December

  • Feature work and major refactoring should be well under way as we pass the second milestone.
  • Focus:
    • Deadline for non-client library releases is R-5 (19 Jan).
      • Feature freeze exceptions are not granted for libraries.
  • General Notes:
    • Project teams should identify contributors that have had a significant impact this cycle but who do not otherwise qualify for ATC status.
    • Those names should be added to the governance repository for consideration as ATC.
    • The list needs to be approved by the TC by 20 January to qualify for contributor discount codes for the event.
    • Submit these by 5 January
  • Important Dates:
    • Extra ATCs deadline: 5 January
    • Final release of non-client libraries: 19 January
    • Ocata 3 Milestone, with Feature and Requirements freezes: 26 January
  • Ocata release schedule [1]
  • Full thread

Storyboard Lives

  • There is still movement towards adopting Storyboard as our task tracker.
  • To spread awareness, some blog posts have been made about it and its capabilities:
    • General overview and the decision to move from Launchpad [2].
    • Next post will focus on compare and contrast of Launchpad and Storyboard.
  • If you want to hear about something in particular in the blog posts, let the team know on the #storyboard IRC channel on Freenode.
  • Attend their weekly meeting [3].
  • Try out Storyboard in the sandbox [4].
  • Storyboard documentation [5]
  • Full thread

 

[1] – http://bit.ly/2fo2lCK

[2] – http://bit.ly/2i32GOl

[3] – http://bit.ly/2iaxkCX

[4] – http://bit.ly/2i2Svt4

[5] – http://bit.ly/2iaquxf

Microsoft Intune: Windows 10 Device Enrollment

The content below is taken from the original (Microsoft Intune: Windows 10 Device Enrollment), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s Ask the Admin, I’ll show you how to enable device enrollment in Microsoft Intune and enroll a Windows 10 PC.

Microsoft Intune is a lightweight cloud-based PC and mobile device management product that uses Mobile Device Management (MDM), a set of standards for managing mobile devices, instead of Active Directory (AD) Group Policy, which is a Windows-only technology. For more information about Intune, see Introduction to Microsoft Intune on the Petri IT Knowledgebase.

 

 

Windows 10 PCs connect with Azure Active Directory and are then automatically enrolled in Intune. Before you can complete the instructions below, you will need both a trial Intune account and an Azure Active Directory Premium subscription. Although the accounts are free for the trial period, credit card details are required to sign up for Azure AD Premium. I recommend creating an Intune account first, and then using the same account details to create an Azure AD Premium subscription. This will ensure that the Azure AD directory is associated with your Intune subscription.

Assign User Licenses

The first step is to assign at least one user an Intune license. Licensing is managed from the Office 365 management portal.

  • Log in to the Office 365 management portal here with the admin account for your Intune subscription.
  • In the options on the right of the portal, click Users, and then Active users.
Assign an Intune license to a user (Image Credit: Russell Smith)

In the list of users, make sure that one of them has Intune A Direct listed in the status column. This might be the admin user for your Intune subscription or another user.

  • To enable an Intune license for a user, click the user in the list of Active Users, and then Edit to the right of Product licenses in the user’s dialog box.
  • Under Product licenses, switch Intune A Direct to On using the slider, and click Save.
  • Close the user’s dialog box.
Assign an Intune license to a user (Image Credit: Russell Smith)

Configure MDM Auto-Enrollment in Azure AD

To ensure that devices are automatically enrolled with Intune when they join Azure AD, you must configure MDM auto-enrollment for the directory.

Configure MDM Auto-Enrollment in Azure AD (Image Credit: Russell Smith)

  • Log in to the Azure management portal here.
  • Expand on the options on the left of the portal, and click ACTIVE DIRECTORY.
  • Click the directory you see in the list on the right.
  • Switch to the APPLICATIONS tab.
  • In the list of applications, click Microsoft Intune.
  • Click Configure below Assign users to mobile device management application.
  • On the microsoft intune screen, scroll down to manage devices for these users and click ALL. Click Save in the bar at the bottom of the portal window.
Configure MDM Auto-Enrollment in Azure AD (Image Credit: Russell Smith)

In a production environment, you’re more likely to want to control which devices are managed by Intune using Azure AD groups.

Enable Windows 10 Device Enrollment

The next step is to enable specific device platforms that can enroll in Intune. This is done from the Intune management portal.

Enable Windows 10 Device Enrollment (Image Credit: Russell Smith)

  • Open Internet Explorer and go to the Intune management portal here. Note that the portal isn’t currently compatible with Microsoft Edge.
  • Click ADMIN at the bottom of the list of options on the left of the portal.
  • Click Set Mobile Device Management Authority on the Mobile Device Management screen.

Enroll a Windows 10 Device

Now that MDM is set up for Windows devices in Intune, you can connect a Windows 10 device to Azure AD and it will automatically be enrolled to Intune.

Enroll a Windows 10 Device (Image Credit: Russell Smith)

  • Log in to Windows 10 as a local administrator.
  • Click the Settings icon on the Start menu.
  • In the Settings app, click Accounts.
  • Click Access work or school on the left.
  • Click + Connect on the right.
  • In the Set up a work or school account dialog box, type the email address of a licensed Intune user, and click Next.
  • In the Let’s get you signed in dialog box, type the password for the account, and click Sign in.
  • On the You’re all set! screen, click Done.
  • The new account will appear on the Connect to work or school screen in the Settings app. Click it, and if the device successfully enrolled with Intune, you’ll see the Info button. Click Info.
  • You’ll see the address of the management server and information about the last attempted sync. You can force a sync operation with the management server by pressing Sync.
Enroll a Windows 10 Device (Image Credit: Russell Smith)
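
If you want to verify the result of the steps above from the command line rather than the Settings app, a small sketch follows. It shells out to the built-in dsregcmd tool; the field names parsed here (“AzureAdJoined”, “MdmUrl”) are what current Windows 10 builds print, but treat them as assumptions and check the raw output on your own machine before relying on them.

    import subprocess

    def device_registration_status() -> dict:
        output = subprocess.run(
            ["dsregcmd", "/status"], capture_output=True, text=True, check=True
        ).stdout
        status = {}
        for line in output.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                status[key.strip()] = value.strip()
        return status

    info = device_registration_status()
    print("Azure AD joined:", info.get("AzureAdJoined", "unknown"))
    print("MDM URL:", info.get("MdmUrl", "not reported"))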

In this article, I showed you how to set up automatic device enrollment in Microsoft Intune, and how to enroll a Windows 10 device.

The post Microsoft Intune: Windows 10 Device Enrollment appeared first on Petri.

Rackspace 2017 Predictions: VMware in 2017 – Multiple Clouds Will Need More Experts

The content below is taken from the original (Rackspace 2017 Predictions: VMware in 2017 – Multiple Clouds Will Need More Experts), to continue reading please visit the site. Remember to respect the Author & Copyright.

Virtualization and Cloud executives share their predictions for 2017. Read them in this 9th annual VMblog.com series exclusive. Contributed by… Read more at VMblog.com.

Self-Serve Cloud Tools for Beginners Hit the Market

The content below is taken from the original (Self-Serve Cloud Tools for Beginners Hit the Market), to continue reading please visit the site. Remember to respect the Author & Copyright.

Brought to you by MSPmentor

A pair of newly released products aim to allow users with no technical knowledge to quickly spin up virtual servers and leverage public cloud services with the simplicity of using a smartphone.

Amazon Web Services’ “Lightsail,” and “Cloud With Me,” a tool developed by a Dublin, Ireland-based AWS partner, suggest that available technology has reached a point where average consumers can now access public cloud directly from vendors.

Lightsail launched on Nov. 30 in northern Virginia and will be rolled out gradually to other regions across the country and worldwide. No firm dates have been announced.

Cloud With Me hit general availability today.

“Our solution allows you to adopt AWS in minutes with zero resources or tech knowledge,” said an entry on the features page of the Cloud With Me website. “And for those who want to connect to AWS directly, our Self Hosting option provides a quick and simple step-by-step guide to help you launch your AWS server in minutes.”

The Lightsail product is touted as a way to leverage the power, reliability and security of AWS public cloud, with the simplicity of a virtual private server.

“As your needs grow, you will have the ability to smoothly step outside of the initial boundaries and connect to additional AWS database, messaging, and content distribution services,” AWS Chief Evangelist Jeff Barr wrote in a blog post. “All in all, Lightsail is the easiest way for you to get started on AWS and jumpstart your cloud projects, while giving you a smooth, clear path into the future.”

A webinar on Lightsail is scheduled for Jan. 17, where the public can receive more information, the blog states.

Both products offer a handful of pre-configured server packages at a flat monthly rate, including DNS management, access to the AWS console, multiple installations and free or premium add-ons.
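
For readers curious what spinning up one of those flat-rate Lightsail packages looks like programmatically, here is a minimal boto3 sketch. The instance name is hypothetical, and the blueprint and bundle identifiers are assumptions; list the valid ones with get_blueprints() and get_bundles() before using them.

    import boto3

    # Lightsail launched first in the us-east-1 (Northern Virginia) region.
    lightsail = boto3.client("lightsail", region_name="us-east-1")

    response = lightsail.create_instances(
        instanceNames=["demo-blog-server"],   # hypothetical instance name
        availabilityZone="us-east-1a",
        blueprintId="wordpress_4_6_1",        # assumed blueprint id; check get_blueprints()
        bundleId="nano_1_0",                  # assumed entry-level bundle id; check get_bundles()
    )

    for operation in response["operations"]:
        print(operation["resourceName"], operation["status"])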

In addition to being widely available immediately, Cloud With Me officials boast other advantages over the competing product, including out-of-the-box business email, FTP functionality, built-in support for MySQL and intuitive integration with Google Analytics.

Cloud With Me says it plans to expand the tool to integrate with other cloud service providers.

Managed services providers (MSPs) and other channel firms are increasingly tackling the business challenges posed by the explosion of public cloud.

On one hand, migrating and managing cloud workloads and offering strategic IT advice presents potential new revenue opportunities.

At the same time, intense competition by some of tech’s biggest players is flooding the market with cheap cloud computing and innovative self-serve apps and tools.

A recent CompTIA study found that managing the competitive implications of “cloud computing” was the number one concern keeping MSPs up at night.

In another potential threat to the cloud revenue of MSPs, Amazon last week launched AWS Managed Services, which provides a full suite of IT services to large enterprises. Some industry experts have speculated it’s just a matter of time before AWS Managed Services begins to target mid-sized and small organizations.

This article first appeared here, on MSPmentor.

Google Cloud Platform icons and sample architectural diagrams, for your designing pleasure

The content below is taken from the original (Google Cloud Platform icons and sample architectural diagrams, for your designing pleasure), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Miles Ward, Global Head of Solutions, Google Cloud Platform

Technology makes more sense when you map it out. That’s why we now have icons and sample architectural diagrams for Google Cloud Platform (GCP) available to download. Using these icons, developers, architects and partners can represent complex cloud designs in white papers, datasheets, presentations and other technical content.

The icons are available in a wide variety of formats, and can be mixed and matched with icons from other cloud and infrastructure providers, to accurately represent hybrid- and multi-cloud configurations. There are icons representing GCP products, diagram elements, services, etc. View them below and at http://bit.ly/2hIP3n8.

We’ll update these files as we launch more products, so please check back.

To give you a flavor, below is one of more than 50 sample diagrams in Slides and Powerpoint. No need to start each diagram from scratch!

Happy diagramming!

OpenStack Developer Mailing List Digest December 10- 16

The content below is taken from the original (OpenStack Developer Mailing List Digest December 10- 16), to continue reading please visit the site. Remember to respect the Author & Copyright.

Updates

  • Release schedule clarification after Ocata [5]
  • Nova placement/resource providers [6][12]
  • Stuart McLaren stepping down from glance core [8]

Allowing Teams Based on Vendor-specific Drivers (cont) [1]

  • Narrowed down options at last TC meeting to following [2]:
    • Soft black (option 2): default option, had no negative feedback, represents the current status quo
    • Soft white (option 4): had some positive feedback, folks liked its simple solution
    • Grey (option 5): had the most positive feedback, but also the least amount of detail
  • Other options’ patches are being abandoned
  • Leaning towards an amended version of the ‘Grey’ proposal [10]

Community Goals for Pike (cont.) [3]

  • Need feedback [4]
  • Keep using openstack/governance for documenting goals
    • Make sure to include guides
    • Consider prioritization as it may not be possible to complete all the goals in the release
    • Think about splitting larger goals to things that can be accomplished in a single release
  • Involving users/operators through the Product WG and start face to face discussions on the Forums

Python changes in OpenStack CI [7]

  • Python3.4 on a Trusty VM for older branches: stable/liberty and stable/mitaka
  • Python3.5 on a Xenial VM for newer branches: stable/newton and master
    • Python3.4 testing is disabled for these
    • ACTION:
      • Projects should enable voting for Python3.5 jobs or add them if they don’t exist yet
      • Projects should remove Python3.4 jobs if they run only on master

Golang Technical Requirements [15]

  • Activities to adopt Go into OpenStack are ongoing
  • Areas need more discussion
    • Common Libraries
    • Dependency Management
      • Candidates are govendor, glide and godep
    • Release Deliverables
      • Tags and/or build artifacts?
      • AUTHORS and ChangeLog files can be autogenerated
  • Oaktree has golang bindings and contains generated files

Upgrade readiness check in Nova [11]

  • New, separate service
  • Checks the system state and indicates how ready it is to start the Ocata upgrade (success, warning, error)

Self-service branch management [13]

  • Through openstack/releases repo
  • Specify your needs in a patch [14] and the rest is automated after it’s merged
  • New stable branch creation is best done close to the end of the cycle, when bug fixing and stabilization activities are slowing down

Architectural discussion about nova-compute interactions [16]

  • How do Nova, Neutron and Cinder interact with nova-compute
  • Should nova-compute become a standalone shared service? [9]

 

[1] http://bit.ly/2h7g2oW

[2] http://bit.ly/2h68irL

[3] http://bit.ly/2h7dpn5

[4] http://bit.ly/2gZunHo

[5] http://bit.ly/2hlkRLB

[6] http://bit.ly/2h63l26

[7] http://bit.ly/2hl9T8M

[8] http://bit.ly/2h69BGU

[9] http://bit.ly/2hlg2Ss

[10] http://bit.ly/2h7iPyi

[11] http://bit.ly/2hlfDzu

[12] http://bit.ly/2h636Ux

[13] http://bit.ly/2hl4b7b

[14] http://bit.ly/2h6iZu6

[15] http://bit.ly/2hl7Qlg

[16] http://bit.ly/2h6dPhZ

Video: Installing a Flush AV Backbox with Syncbox

The content below is taken from the original (Video: Installing a Flush AV Backbox with Syncbox), to continue reading please visit the site. Remember to respect the Author & Copyright.

Syncbox

We noticed a new product making a bit of a stir in the custom install world, so we asked the makers to tell us a bit about it – Syncbox…



When companies are installing Syncbox, we make sure that they take pride in doing so: making something look beautiful and elegant requires care and precision, which is what we pride ourselves on.

A Syncbox was recently installed in a stunning property in London that is currently being refurbished. The slimline box adds to the property’s fresh new look, which is what the homeowner wanted. The homeowner was very impressed with the range of cover plate styles and finishes, whether it be “Exclusive Metal” or “Personal Plastic”. The metal cover plates are produced by Focus SB, who are well known for the high-quality covers used in luxury hotels and high-end residential properties. We were able to produce the “Antique Bronze” cover for the owner, which looks really nice in the living room.

Syncbox

The work was for a company called AV Innovation, who are extremely efficient when it comes to installations. The number of AV and electrical installers using Syncbox is growing continuously. They can clearly see that it beats the traditional system on time and cost, with the added value of its good looks.

Syncbox

Syncbox is now a complete range of products which can be turned into an advanced wiring build consisting of the four main elements: TV, Media, Audio & Power. Customers are able to adapt the Syncbox to their own unique specification, and for large projects we can help with the technical drawings.

Syncbox is now being used by the UK’s largest and most prestigious house developers, like Crest Nicholson, Redrow & Berkeley Homes, as well as being backed by successful business owner Deborah Meaden for its unique style and endless potential.

Syncbox

Syncbox is the only choice for a professional TV installation: television manufacturers do not supply a recessed power outlet for plugs, or even a recessed power outlet to enable their beautiful screens to fit flush to the wall. With the connection cables plugged in, the cables protrude anywhere between 35mm and 40mm from the wall, so we created the first recessed power point. With Syncbox, all your cable connections are recessed, so your flat screen doesn’t have to sit away from the wall.

Syncbox is in such demand because it offers so many benefits compared to the traditional system:

  • Recesses all power & TV Mounting
  • Bespoke cover plates
  • Tidy & protected cabling
  • Ultra-flush TV mounting
  • Easy installation
  • Simple one box system
  • World-wide adaptable
  • Save time & money

Prices start from around £55.

sync-box.com

Grab an ARMful: OpenIO’s Scale-out storage with disk drive feel

The content below is taken from the original (Grab an ARMful: OpenIO’s Scale-out storage with disk drive feel), to continue reading please visit the site. Remember to respect the Author & Copyright.

Object storage networked drive JBOD with direct access

Disk drive as server; OpenIO SLS-4U96 disk drive nano-node

OpenIO has launched its SLS-4U96 product, a box of 96 directly addressed drives offering an object-storage interface and per-drive scale-out granularity.

The SLS-4U96 is a 4U box holding up to 96 vertically mounted 3.5-inch disk drives, providing up to 960TB of raw storage with 10TB drives and 1,152TB with 12TB drives. The disk drives are actually nano-servers, nano-nodes in OpenIO terms, as they each have on-drive data processing capability.

A nano-node contains:

  • ARM CPU – Marvell Armada 3700 dual-core Cortex-A53 ARMv8 @ 1.2GHz
  • Hot-swappable 10TB or 12TB 3.5-inch SATA nearline disk drive
  • Dual 2.5Gb/s SGMII (Serial Gigabit Media Independent Interface) ports

The amount of DRAM is not known.

The SLS-4U96 product has no controllers or motherboard in its enclosure, featuring dual 6-port 40Gbit/s Ethernet backend Marvell Prestera switches for access. These are for both client connectivity and direct chassis interconnect, which can scale up to more than 10PB per rack. The chassis has four x N+1 power supplies and 5 removable fan modules, and OpenIO says it has no single point of failure.

OpenIO SLS-4U96 disk drive with ARM board

The failure domain is a single drive which, with in-chassis erasure coding, is survivable. New or replaced nano-nodes join the SLS resource pool without needing a rebalancing operation and the process takes less than 60 seconds and doesn’t impact performance.

SLS stands for server-less storage by the way.

OpenIO CEO Laurent Denel said: “The SLS-4U96 hardware appliance revolutionises the storage landscape by providing network HDDs as an industrial reality.” He claims that, with a cost as low as $0.008/GB/month over 36 months for a fully populated configuration and the ease of use of the company’s software, a single sysadmin can easily manage a large multi-petabyte environment at the lowest TCO.

Partially populated SLS-4U96

The open source SDS software used by SLS has these features:

  • Automatic nano-node discovery, setup and load balancing
  • Easy to use management via a web GUI, CLI and API
  • Local and geo-distributed object replica or erasure coding
  • Quick fault detection and recovery
  • Call-home support notifications
  • S3, Swift and Native object APIs
  • Multiple file sharing access methods: NFS, SMB, FTP, FUSE

It is said to be fully compatible with existing x86-based SDS installations, and a 3-node cluster can be deployed ready for use in five minutes. It can be a mixed hardware cluster as well.
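
Since one of the listed access methods is an S3-compatible API, a first functional test could look like the sketch below: a standard S3 client pointed at the cluster’s gateway. The endpoint URL, credentials and bucket name are placeholders for your own deployment.

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://openio.example.local:6007",  # hypothetical OpenIO S3 gateway
        aws_access_key_id="demo:account",                 # placeholder credentials
        aws_secret_access_key="DEMO_SECRET",
    )

    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello from a nano-node cluster")
    print(s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read())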

OpenIO SLS-4u96 nano-nodes ready to be stuffed into the chassis

Okay, very good, but what is this storage for? OpenIO suggests email, video storage and processing in the media and entertainment industry, and enterprise file services. It says there are “dedicated application connectors, such as an Email connector for Cyrus, Zimbra and Dovecot, and a Video connector to sustain highly demanding content services around adaptive streaming and event-based transcoding.”

There’s no performance data available yet. Hopefully that will come out in the next few months.

Grab yourself a data sheet here; it’s more of a brochure actually. Get more technical white papers here. ®

AWS Webinars – January 2017 (Bonus: December Recap)

The content below is taken from the original (AWS Webinars – January 2017 (Bonus: December Recap)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Have you had time to digest all of the announcements that we made at AWS re:Invent? Are you ready to debug with AWS X-Ray, analyze with Amazon QuickSight, or build conversational interfaces using Amazon Lex? Do you want to learn more about AWS Lambda, set up CI/CD with AWS CodeBuild, or use Polly to give your applications a voice?

January Webinars
In our continued quest to provide you with training and education resources, I am pleased to share the webinars that we have set up for January. These are free, but they do fill up and you should definitely register ahead of time. All times are PT and each webinar runs for one hour:

January 16:

January 17:

January 18:

January 19:

January 20:

December Webinar Recap
The December webinar series is already complete; here’s a quick recap with links to the recordings:

December 12:

December 13:

December 14:

December 15:

Jeff;

PS – If you want to get a jump start on your 2017 learning objectives, the re:Invent 2016 Presentations and re:Invent 2016 Videos are just a click or two away.

Easy Ways to Motivate Yourself to Work When You’re Really Not Feeling It

The content below is taken from the original (Easy Ways to Motivate Yourself to Work When You’re Really Not Feeling It), to continue reading please visit the site. Remember to respect the Author & Copyright.

We all have those days where we’re really not feeling it but we have to get some work done anyway. Whether that’s today or every day, this graphic offers a few tips to help you get energized and tackle your to-do list, project, or drudgework with vigor.

Read more…

Let Your Whole Family Watch This Internet Security Basics Course

The content below is taken from the original (Let Your Whole Family Watch This Internet Security Basics Course), to continue reading please visit the site. Remember to respect the Author & Copyright.

As the holidays get closer, you’re probably going to spend a lot of time with your family, many of whom will get shiny new devices. If they want your help setting up their new toys, consider making this internet security course required viewing.

Read more…

Experts Expose Myths, Offer Best Practices for Office 365 Data Protection

The content below is taken from the original (Experts Expose Myths, Offer Best Practices for Office 365 Data Protection), to continue reading please visit the site. Remember to respect the Author & Copyright.

Eran Farajun

Asigra

Eran Farajun is the Executive Vice President for Asigra.

For many organizations, Microsoft Office 365 has become the essential cloud-based productivity platform. According to Microsoft public filings, it’s used by four out of five Fortune 500 companies, and at the other end of the scale, more than 50,000 small and medium sized companies sign up for the service every month. Its subscriber base grew nearly 80 percent in a 12-month period ending Q3 2016.

However, for many corporate subscribers, Office 365’s popularity and convenience may obscure a critical data retention and compliance requirement: the need for users to take responsibility for protecting their own data in cloud-based platforms such as Microsoft Office 365. While it is a highly secure platform, there is a lot more to comprehensive data protection than encryption and hard passwords.

To learn more about the importance of protecting data in cloud-based platforms, I asked three data protection professionals to join me for a discussion exploring why protection of Office 365 data is mission critical. Accompanying me on the panel were Chad Whaley, CEO of Echopath, an IT services and data backup company based in Indiana; James Chillman, managing director of UK Backup, a provider of cloud backup and disaster recovery services in England; and Jesse Maldonado, director of project services at Centre Technologies, an IT solutions provider out of Texas.

I began by asking the panel to identify the top myths about data protection they encounter when talking to customers about Microsoft Office 365.

Chillman: The top misunderstanding we encounter is the assumption that, by signing up for Office 365, customers have handed responsibility for their data over to Microsoft. That’s not true. Microsoft is responsible for running the service and keeping it secure; it does a great job and isn’t going to destroy your data. However, users are still responsible for managing their data and protecting it from threats such as accidents, malicious behavior and ransomware attacks.

Maldonado: We often run into the perception that Office 365 data is not mission critical, and that only data from enterprise resource planning (ERP) solutions or other line-of-business applications need to be protected. That’s simply not the case. Office 365 is at the heart of business communication, and particularly for organizations with compliance requirements, the data created and stored in Office 365 is vital and must be protected.

Whaley: Many customers are drawn to Office 365 by the potential cost savings, but are surprised to find that there are still costs associated with storing data in the cloud. It’s still your data, whether it’s in your data center or Microsoft’s cloud, and if you want to ensure it’s protected, you will need to have a data protection plan. The fact that you have to manage your data doesn’t change.

Farajun: What consequences have your customers experienced due to insufficient protection of Office 365 data?

Chillman: We’re seeing a huge increase in the number of restores due to ransomware attacks—it’s our main area of focus when it comes to retrieving client data. The consequences of ransomware are very serious, including the cost of downtime, loss of earnings and potential fines from breaking data protection laws. We’ve had customers who believe moving data to Office 365 protects their data from ransomware. But that’s not true. If ransomware has infected your data center and you sync to Office 365, then the ransomware can spread to your cloud-based data too. Microsoft does its best to protect against malware but ransomware is becoming much more advanced and it changes every day. It’s a huge problem.

Whaley: I was looking at a study of unscheduled downtime, and found that two factors – human error and software malfunction—accounted for 40 percent of all downtime. Moving your data to Office 365 doesn’t do anything to change these threats. Human error is still very prevalent, like the proverbial Bob in Accounting who deletes all of his data and doesn’t notice for 45 days, at which point it’s gone. The largest restore we’ve ever done was due to an admin who didn’t use Office 365 properly and ended up purging a massive amount of data. Human error is still very much at the forefront of downtime risks and you have to protect against it. As for software, whether it’s on premises or in the cloud, it’s still Microsoft Office and it’s susceptible to the same glitches in either location.

Maldonado: Without comprehensive data protection, data can be lost or destroyed just as easily in the cloud as in the data center. If a Word document disappears and has to be recreated from the ground up, a company will lose productivity. We’ve seen instances where data loss events have led to organizations going out of business—they were never able to recover from the data loss.

Farajun: What considerations and best practices do you recommend to your customers when discussing Office 365 data protection?

Chillman: We make sure that our customers understand the core data protection capabilities built into Office 365. Then we look at how to address the gaps. We work with customers to define service-level agreements to determine what data retention policies they need for their particular business requirements. We also make sure customers understand that they are still ultimately responsible for their data in the cloud. You need to make sure your data protection solution gives you the power and flexibility to manage it effectively.

Maldonado: We find that a lot of customers haven’t defined the Recovery Time Objective (RTO) or Recovery Point Objective (RPO) for their business, so we help them determine their tolerance for data loss. We also help them understand what data retention requirements they must comply with due to regulation. For instance, healthcare and financial organizations have strict guidelines about what data must be stored and for how long.

Whaley: For Office 365 data protection, the best practice we recommend is to plan your solution before you move your data there. For many businesses, data protection is an afterthought. We recommend that our customers get to know their data, understand what’s critical and what’s not, and make sure they realize, whether it’s in the cloud or on premises, that they are ultimately responsible for it.

Farajun: In conclusion, I would add that Microsoft Office 365 offers great simplicity and cost savings for businesses seeking to place their productivity tools in the cloud. However, email and document retention requirements still apply and must be followed regardless of where your data is stored. Microsoft Office 365 provides basic data recovery and archiving capabilities, but this elemental level of protection may not satisfy your compliance obligations. To mitigate your risk and meet compliance mandates, protect your Office 365 data the same way you would protect your on-premise data to avoid data loss as a result of intentional or accidental user error, ransomware attacks, unplanned data overwrites or other breaches. This requires a comprehensive approach to data protection that protects all enterprise data from any source, including Office 365, with a single, easily managed solution.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Disaster Recovery using Amazon Web Services (AWS)

The content below is taken from the original (Disaster Recovery using Amazon Web Services (AWS)), to continue reading please visit the site. Remember to respect the Author & Copyright.

“You can’t predict a disaster, but you can be prepared for one!” Disaster recovery is one of the biggest challenges for infrastructure. Amazon Web Services allows us to easily tackle this challenge and ensure business continuity. In this post, we’ll take a look at what disaster recovery means, compare traditional disaster recovery versus that in the cloud, and explore essential AWS services for your disaster recovery plan.

What is Disaster Recovery?

There are several disaster scenarios that can impact your infrastructure. These include natural disasters such as earthquakes and fires, as well as human-induced events such as unauthorized access to data or malicious attacks.

“Any event that has a negative impact on a company’s business continuity or finances could be termed a disaster.”

In any case, it is crucial to have a tested disaster recovery plan ready. A disaster recovery plan will ensure that our application stays online no matter the circumstances. Ideally, it ensures that users will experience zero, or at worst, minimal issues while using your application.

For on-premise data centers, a disaster recovery plan is expensive to implement and maintain. Often, such plans are insufficiently tested or poorly documented, and as a result they are inadequate for protecting resources. More often than not, companies with a disaster recovery plan on paper can’t execute it when it matters because it was never tested in a real environment. As a result, users cannot access the application and the company suffers significant losses.

Let’s take a closer look at some of the important terminology associated with disaster recovery:

Business Continuity. All of our applications require Business Continuity. Business Continuity ensures that an organization’s critical business functions continue to operate or recover quickly despite serious incidents.

Disaster Recovery. Disaster Recovery (DR) enables recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.

RPO and RTO. Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are the two most important parameters of a good DR plan. Recovery Point Objective (RPO) is the maximum targeted period in which data might be lost from an IT service due to a major incident; for example, if backups run every four hours, the RPO is at most four hours of lost data. Recovery Time Objective (RTO) is the targeted time period within which a business process must be restored after a disaster or disruption to service.

[Figure: Recovery Point Objective (RPO) and Recovery Time Objective (RTO)]

Traditional Disaster Recovery plan (on-premise)

A traditional on-premise disaster recovery plan often includes a fully duplicated infrastructure that is physically separate from the production infrastructure. This requires additional financial investment in hardware as well as ongoing maintenance and testing. Physical security of that secondary site is a further requirement that is often overlooked.

These are the key requirements for an on-premise disaster recovery infrastructure:

  • Facilities to house the infrastructure, including power and cooling.
  • Security to ensure the physical protection of assets.
  • Suitable capacity to scale the environment.
  • Support for repairing, replacing, and refreshing the infrastructure.
  • Contractual agreements with an internet service provider (ISP) to provide internet connectivity that can sustain bandwidth utilization for the environment under a full load.
  • Network infrastructure such as firewalls, routers, switches, and load balancers.
  • Enough server capacity to run all mission-critical services. This includes storage appliances for the supporting data, and servers to run applications and backend services such as user authentication, Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), monitoring, and alerting.

Obviously, this kind of disaster recovery plan requires large investments in building disaster recovery sites or data centers (CAPEX). In addition, storage, backup, archival and retrieval tools, and processes (OPEX) are also expensive. And, all of these processes, especially installing new equipment, take time.

An on-premise disaster recovery plan can be challenging to document, test, and verify, especially if you have multiple clients on a single infrastructure. In this scenario, all clients on this infrastructure will experience problems with performance even if only one client’s data is corrupted.

Disaster Recovery plan on AWS

There are many advantages of implementing a disaster recovery plan on AWS.

Financially, we only need to invest a small amount up front (CAPEX), and we don’t have to worry about the physical expenses for resources (for example, hardware delivery) that we would have in an “on-premise” data center.

AWS enables high flexibility, as we don’t need to perform a failover of the entire site in case only one part of our application isn’t working properly. Scaling is fast and easy. Most importantly, AWS allows a “pay as you use” (OPEX) model, so we don’t have to spend a lot in advance.

Also, AWS services allow us to fully automate our disaster recovery plan. This results in much easier testing, maintenance, and documentation of the DR plan itself.

This table shows the AWS service equivalents to an infrastructure inside an on-premise data center.

On-premise data center infrastructure    AWS infrastructure
DNS                                      Route 53
Load balancers                           ELB/appliance
Web/app servers                          EC2/Auto Scaling
Database servers                         RDS
AD/authentication                        AD failover nodes
Data centers                             Availability Zones
Disaster recovery                        Multi-region

Essential AWS Services for Disaster Recovery

While preparing a DR plan, we need to think about which AWS services we can use and understand how those services support data migration and durable storage. These are some of the key services and features that you should consider when creating your disaster recovery plan:

AWS Regions and Availability Zones  –  The AWS Cloud infrastructure is built around Regions and Availability Zones (“AZs”). A Region is a physical location in the world that has multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity housed in separate facilities. These AZs allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.
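
As a quick illustration, the AZs a Region exposes to your account can be listed with a single API call. This is a minimal boto3 sketch; the Region name is just an example:

    import boto3

    # List the Availability Zones visible to this account in one example Region.
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    response = ec2.describe_availability_zones()
    for zone in response["AvailabilityZones"]:
        print(zone["ZoneName"], zone["State"])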

Amazon S3 – Provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities within a region and are designed to provide a durability of 99.999999999% (11 9s).
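
As a sketch of how backup data lands in S3, the following minimal boto3 call copies a local archive into a bucket; the bucket name, key, and file name are hypothetical placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Copy a local backup archive into an S3 bucket (names are placeholders).
    s3.upload_file(
        Filename="/backups/db-backup-2016-12-31.tar.gz",
        Bucket="example-dr-backups",
        Key="db/db-backup-2016-12-31.tar.gz",
    )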

Amazon Glacier – Provides extremely low-cost storage for data archiving and backup. Objects are optimized for infrequent access, for which retrieval times of several hours are adequate.

Amazon EBS –  Provides the ability to create point-in-time snapshots of data volumes. You can use the snapshots as the starting point for new Amazon EBS volumes. And, you can protect your data for long-term durability because snapshots are stored within Amazon S3.
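
For example, a point-in-time snapshot of a data volume can be taken with a single API call. This is a minimal boto3 sketch; the volume ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Create a point-in-time snapshot of an existing data volume (ID is a placeholder).
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly DR snapshot of the application data volume",
    )
    print(snapshot["SnapshotId"])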

AWS Import/Export – Accelerates moving large amounts of data into and out of AWS by using portable storage devices for transport. The AWS Import/Export service bypasses the internet and transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network.

AWS Storage Gateway is a service that connects an on-premise software appliance with cloud-based storage. This provides seamless, highly secure integration between your on-premise IT environment and the AWS storage infrastructure.

Amazon EC2 – Provides resizable compute capacity in the cloud. In the context of DR, the ability to rapidly create virtual machines that you can control is critical.

Amazon EC2 VM Import Connector enables you to import virtual machine images from your existing environment to Amazon EC2 instances.

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances.

Amazon VPC allows you to provision a private, isolated section of the AWS cloud. Here,  you can launch AWS resources in a virtual network that you define.

Amazon Direct Connect makes it easy to set up a dedicated network connection from your premises to AWS.

Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud.

AWS CloudFormation gives developers and systems administrators an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion. You can create templates for your environments and deploy associated collections of resources (called a stack) as needed.
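
As a sketch of how a pre-written DR stack could be provisioned on demand, the stack name, template URL, and parameter below are hypothetical; the call itself is the standard boto3 CloudFormation API:

    import boto3

    cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

    # Launch a pre-written DR template as a stack (name, URL, and parameters are placeholders).
    cloudformation.create_stack(
        StackName="dr-web-tier",
        TemplateURL="https://s3.amazonaws.com/example-dr-templates/web-tier.yaml",
        Parameters=[
            {"ParameterKey": "InstanceType", "ParameterValue": "t2.medium"},
        ],
    )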

Disaster Recovery Scenarios with AWS

There are several strategies that we can use for disaster recovery of our on-premise data center using AWS infrastructure:

  • Backup and Restore
  • Pilot Light
  • Warm Standby
  • Multi-Site

Backup and Restore

The Backup and Restore scenario is an entry-level form of disaster recovery on AWS. It is the most suitable starting point if you don’t yet have a DR plan.

In on-premise data centers, backups are typically stored on tape, and recovering data from tape takes time in the event of a disaster. For a Backup and Restore scenario using AWS services, we can store our data on Amazon S3, making it immediately available if a disaster occurs. If a large amount of data needs to be moved to Amazon S3, we would ideally use AWS Import/Export or AWS Snowball to get it there as quickly as possible.

AWS Storage Gateway enables snapshots of your on-premise data volumes to be transparently copied into Amazon S3 for backup. You can subsequently create local volumes or Amazon EBS volumes from these snapshots.

[Figure: Backup and Restore scenario]

The Backup and Restore plan is suitable for lower level business-critical applications. This is also an extremely cost-effective scenario and one that is most often used when we need backup storage. If we use a compression and de-duplication tool, we can further decrease our expenses here. For this scenario, RTO will be as long as it takes to bring up infrastructure and restore the system from backups. RPO will be the time since the last backup.
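
To make the restore path concrete, a minimal boto3 sketch might pull a backup archive back down from S3 and rebuild a data volume from a previously taken snapshot; the bucket, key, snapshot ID, and AZ are placeholders:

    import boto3

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Retrieve a backup archive from S3 (bucket, key, and file name are placeholders).
    s3.download_file(
        Bucket="example-dr-backups",
        Key="db/db-backup-2016-12-31.tar.gz",
        Filename="/restore/db-backup-2016-12-31.tar.gz",
    )

    # Rebuild a data volume from a previously taken snapshot (IDs are placeholders).
    volume = ec2.create_volume(
        SnapshotId="snap-0123456789abcdef0",
        AvailabilityZone="eu-west-1a",
    )
    print(volume["VolumeId"])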

Pilot Light

The term “Pilot Light” is often used to describe a DR scenario where a minimal version of an environment is always running in the cloud. This scenario is similar to a Backup and Restore scenario. For example, with AWS you can maintain a Pilot Light by configuring and running the most critical core elements of your system in AWS. When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core.

[Figure: Pilot Light scenario]

A Pilot Light scenario is suitable for solutions that require a lower RTO and RPO. This scenario is a mid-range cost DR solution.
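
A pilot-light recovery typically means launching the missing tiers from pre-built AMIs around the always-on core. A minimal boto3 sketch of that step (AMI, subnet, and security group IDs are placeholders) could look like this:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Launch web/app servers from a pre-built AMI around the always-on core
    # (AMI, subnet, and security group IDs are placeholders).
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m4.large",
        MinCount=2,
        MaxCount=2,
        SubnetId="subnet-0123456789abcdef0",
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )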

Warm Standby

A Warm Standby scenario is an extension of the Pilot Light scenario in which some services are always up and running. When building the DR plan, we need to identify the crucial components of our on-premise infrastructure and duplicate them inside AWS; in most cases this means web and app servers running on a minimum-sized fleet. Once a disaster occurs, the infrastructure on AWS takes over the traffic and scales out into a fully functional production environment, achieving minimal RPO and RTO.

[Figure: Warm Standby scenario]

The Warm Standby scenario is more expensive than Backup and Restore and Pilot Light because in this case, our infrastructure is up and running on AWS. This is a suitable solution for core business-critical functions and in cases where RTO and RPO need to be measured in minutes.
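
When a warm standby has to take over production traffic, the minimum-sized fleet is typically scaled out by raising the Auto Scaling group limits. A minimal boto3 sketch (the group name and sizes are placeholders) might be:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

    # Grow the standby fleet to production size (group name and sizes are placeholders).
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="dr-web-asg",
        MinSize=4,
        MaxSize=12,
        DesiredCapacity=8,
    )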

Multi-Site

The Multi-Site scenario runs the full infrastructure simultaneously on AWS and in the “on-premise” data center. Using a weighted routing policy on Amazon Route 53 DNS, part of the traffic is directed to the AWS infrastructure while the rest is directed to the on-premise infrastructure.

Data is replicated or mirrored to the AWS infrastructure.

[Figure: Multi-Site scenario]

In a disaster event, all traffic is redirected to the AWS infrastructure. This is also the most expensive option, and it represents the last step toward full migration to AWS. Here, RTO and RPO are very low, and this scenario is intended for critical applications that demand minimal or no downtime.
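
Redirecting all traffic to AWS amounts to updating the weighted record sets so the AWS endpoint receives 100 percent of requests. A minimal boto3 sketch (hosted zone ID, record name, and addresses are placeholders) could look like this:

    import boto3

    route53 = boto3.client("route53")

    # Shift all traffic to the AWS endpoint by adjusting the weighted records
    # (hosted zone ID, record name, and IP addresses are placeholders).
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "aws",
                        "Weight": 100,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "203.0.113.20"}],
                    },
                },
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "on-premise",
                        "Weight": 0,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "198.51.100.10"}],
                    },
                },
            ]
        },
    )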

Wrap up

There are many options and scenarios for Disaster Recovery planning on AWS.

The scope of possibilities has been expanded further with AWS’ announcement of its strategic partnership with VMware. Thanks to this partnership, users can expand their on-premise infrastructure (virtualized using VMware tools) to AWS, and create a DR plan via resources provided by AWS using VMware tools that they are already accustomed to using.

Don’t allow any kind of disaster to take you by surprise. Be proactive and create the DR plan that best suits your needs.

Currently working as an AWS Consultant at CLOUDWEBOPS OÜ, as well as Data Architect/DBA and DevOps at WizardHealth. Proud holder of AWS Solutions and DevOps Associate certificates. Besides databases and cloud computing, interested in automation and security. Founder and co-organizer of the first AWS User Group in Bosnia and Herzegovina.

More Posts