New wearable tracker can transmit vital signs from a soft, tiny package

Body sensors have long been bulky, hard to wear, and obtrusive. Now they can be as thin as a Band-Aid and about as big as a coin. The new sensor, created by Kyung-In Jang, professor of robotics engineering at South Korea’s Daegu Gyeongbuk Institute of Science and Technology, and John A. Rogers of Northwestern University, consists of a silicone case that contains “50 components connected by a network of 250 tiny wire coils.” The silicone conforms to the body and transmits data on “movement and respiration, as well as electrical activity in the heart, muscles, eyes and brain.”

This tiny package replaces many bulky sensor systems, and because the wires are suspended in the silicone, the electronics can be packed more densely. From the release:

Unlike flat sensors, the tiny wire coils in this device are three-dimensional, which maximizes flexibility. The coils can stretch and contract like a spring without breaking. The coils and sensor components are also configured in an unusual spider web pattern that ensures “uniform and extreme levels of stretchability and bendability in any direction.” It also enables tighter packing of components, minimizing size. The researchers liken the design to a winding, curling vine, connecting sensors, circuits and radios like individual leaves on the vine.

The researchers can power the device wirelessly, which means it can sit almost anywhere on the body. Further, the team expects to be able to use this system inside robots, where a softer, squishier connector is needed.

“Combining big data and artificial intelligence technologies, the wireless biosensors can be developed into an entire medical system which allows portable access to collection, storage, and analysis of health signals and information,” said Jang. “We will continue further studies to develop electronic skins which can support interactive telemedicine and treatment systems for patients in blind areas for medical services, such as rural houses in mountain villages.”

Affordable Raspberry Pi 3D Body Scanner

With a £1000 grant from Santander, Poppy Mosbacher set out to build a full-body 3D body scanner with the intention of creating an affordable setup for makespaces and similar community groups.

First Scan from DIY Raspberry Pi Scanner

Head and Shoulders Scan with 29 Raspberry Pi Cameras

Uses for full-body 3D scanning

Poppy herself wanted to use the scanner in her work as a fashion designer. With the help of 3D scans of her models, she would be able to create custom cardboard dressmaker’s dummies to ensure her designs fit perfectly. This is a brilliant way of incorporating digital tech into another industry – and it’s not the only application for this sort of build. Growing numbers of businesses use 3D body scanning, for example the stores around the world where customers can have themselves 3D scanned and printed as action-figure-sized replicas.

Print your own family right on the high street!
image c/o Tom’s Guide and Shapify

We’ve also seen the same technology used in video games for more immersive virtual reality. Moreover, there are various uses for it in healthcare and fitness, such as monitoring the effect of exercise regimes or physiotherapy on body shape or posture.

Within a makespace environment, a 3D body scanner opens the door to including new groups of people in community make projects: imagine 3D printing miniatures of a theatrical cast to allow more realistic blocking of stage productions and better set design, or annually sending grandparents a print of their grandchild so they can compare the child’s year-on-year growth in a hands-on way.

The Germany-based clothing business Outfittery uses full-body scanners to take the stress out of finding clothes that fit well.
image c/o Outfittery

As cheesy as it sounds, the only limit for the use of 3D scanning is your imagination…and maybe storage space for miniature prints.

Poppy’s Raspberry Pi 3D Body Scanner

For her build, Poppy acquired 27 Raspberry Pi Zeros and 27 Raspberry Pi Camera Modules. With various other components, some 3D-printed or made of cardboard, Poppy got to work. She was helped by members of Build Brighton and by her friend Arthur Guy, who also wrote the code for the scanner.

The Pi Zeros run Raspbian Lite, and are connected to a main server running a node application. Each is fitted into its own laser-cut cardboard case, and secured to a structure of cardboard tubing and 3D-printed connectors.

In the finished build, the person being scanned stands in the centre of the structure, and the press of a button signals all the Pis to take a photo. The images are sent back to the server and processed in Autodesk ReMake, freemium software available for PC (Poppy discovered part-way through the project that the Mac version had recently lost support).
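
Poppy’s own control code is a Node application (the full code is on her Instructables page, linked below), but the trigger step is easy to sketch. Here is a minimal Python illustration of fanning a capture request out to every Pi at once; the hostnames and the /capture HTTP endpoint are invented for the example, not part of Poppy’s build:

    import concurrent.futures
    import urllib.request

    # One hostname per Pi Zero in the rig (placeholder names).
    PI_HOSTS = ["scanpi%02d.local" % n for n in range(1, 28)]

    def capture(host):
        # Ask one Pi's (assumed) HTTP service to take a photo; return JPEG bytes.
        with urllib.request.urlopen("http://%s:8080/capture" % host, timeout=10) as resp:
            return resp.read()

    def capture_all():
        # Fire every request at (nearly) the same instant, so all angles
        # catch the subject in the same pose.
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(PI_HOSTS)) as pool:
            futures = {pool.submit(capture, h): h for h in PI_HOSTS}
            return {futures[f]: f.result() for f in concurrent.futures.as_completed(futures)}

    if __name__ == "__main__":
        for host, jpeg in capture_all().items():
            with open(host.split(".")[0] + ".jpg", "wb") as out:
                out.write(jpeg)

The collected images then go to the photogrammetry software as a single batch.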

Build your own

Obviously there’s a lot more to the process of building this full-body 3D scanner than what I’ve reported in these few paragraphs. And since it was Poppy’s goal to make a readily available and affordable scanner that anyone can recreate, she’s provided all the instructions and code for it on her Instructables page.

Projects like this, in which people use the Raspberry Pi to create affordable and interesting tech for communities, are exactly the type of thing we love to see. Always make sure to share your Pi-based projects with us on social media, so we can boost their visibility!

If you’re a member of a makespace, run a workshop in a school or club, or simply love to tinker and create, this build could be the perfect addition to your workshop. And if you recreate Poppy’s scanner, or build something similar, we’d love to see the results in the comments below.

The post Affordable Raspberry Pi 3D Body Scanner appeared first on Raspberry Pi.

Amazon UK now lets you easily convert cash into online credit

The beauty of shopping online is that you can browse and buy without ever leaving the comfort of your sofa. Amazon accepts all major debit and credit cards online, but what if you’ve got a bundle of cash lying around you’d rather use instead? Enter "Amazon Top Up – In Store," a mouthful of a new service that lets you convert cash into online credit.

First, you’ll need to find a local shop, petrol station or what have you with a PayPoint register. These are the ones through which you can pay bills, renew your TV licence and add funds to your pay-as-you-go mobile or prepaid card then and there. Head to Amazon’s top-up site on your phone, or track down the equivalent page in the retailer’s mobile app, and grab yourself a unique barcode.

Get the shopkeeper to scan that barcode, hand over between £5 and £250, and it’ll immediately show up in your Amazon gift card balance. It’s tantamount to using the money to buy a gift card, but with fewer steps and thus less hassle — though beware: deposits are non-refundable. You may remember Amazon launched a similar service in the US earlier this year, albeit with the catchier title of "Amazon Cash."

It may seem strange to go through this whole process when it’s infinitely more convenient to make card payments, but not everyone has a bank account and some people would still just rather not hand over financial details to a website. Amazon is in the business of selling stuff, so if there’s a demographic that can or will only use cash, you bet Amazon is gonna make sure they can spend it online.

Source: Amazon, PayPoint

Microsoft rolls its own hyperconverged appliance program

Microsoft’s revealed it’s signed up several server vendors to make hyperconverged appliances running Windows Server natively.

Hyperconverged appliances have, to date, nearly always been about giving vSphere a nice place to live. But plenty of those who use vSphere in any environment do it to run Windows Server. Which is a little odd, given that Redmond’s server platform includes Hyper-V for compute virtualization, VXLAN for network virtualization, plus Storage Spaces for software-defined storage.

Those three ingredients are the basis of any software-defined data centre and indeed also any hyperconverged appliance. That’s not passed Microsoft by, but the company has been rather busy getting Azure Stack out the door.

Now the company has revealed an effort called “Windows Server Software-Defined” that sees approved hardware partners offer “validated solutions” wrapping Windows Server into three packages, namely:

  • Hyper-Converged Infrastructure Standard: Compute and storage in one cluster;
  • Hyper-Converged Infrastructure Premium: Billed as a “software-defined data center in a box” because it adds software-defined networking and Security Assurance features to HCI Standard;
  • Software-Defined Storage: Software-defined storage built on commodity servers, billed as a replacement for standalone arrays, with support for all-flash NVMe configurations and scale-out.

HPE, Lenovo, Fujitsu, Supermicro and QCT have signed up as partners, as has Windows-centric software-defined storage concern DataON.

Will anyone care? The Register understands that Microsoft people and Redmond’s partners will all emerge with fatter pay packets if they sell Azure capacity rather than anything licensed to run on-premises. Microsoft’s also making rather more noise about Azure Stack than Windows Server these days. And let’s not forget that Hyper-V has been more-or-less free for years, yet VMware kept the majority of the virtualization and hyperconverged infrastructure markets.

The Register’s virtualization desk therefore expects that Windows Server Software-Defined will be appreciated by some buyers, but won’t markedly change the hyperconverged infrastructure market. That’s Azure Stack’s job, and when it lands in September it looks like doing it well. ®

Sofa-jockeys given crack at virtual Formula 1 world championship

This whole ‘Esport’ thing looks serious – for games developers looking to boost sales

Screen shot from Formula 1 2017, the game/Esport arena

Formula 1 has announced it’s getting into “e-sports”, the preferred phrase for competitive computer gaming, with a new “Formula 1 Esports Series” that will see a virtual F1 champion crowned later this year.

Many gamers think their pastime’s enormous following and facilitation of networked competitions make it a legitimate sport that can stand beside other competitions that test speed, strength and skill. That argument was accepted by the Asian Games, which has made gaming a medal sport at its 2022 event, with a little marketing and promotional help from Alibaba’s gaming tentacle. Even the Summer Olympics is keen on the idea: Paris’ bid for the 2024 games reportedly proposed including e-sports.

Formula 1’s announcement details a virtual race to be run from September, which is coincidentally when developer Codemasters will release the 2017 edition of the F1 game on PlayStation, Xbox and PC. And guess what? You can pre-order it now.

Players rated among the world’s 40 fastest racers will be summoned to London in October. The top 20 will then score another trip, this time to Abu Dhabi, where the crowning of the first virtual F1 champ will take place at the same time as the winner of that nation’s physical race is revealed.

We’re not told if the virtual champ gets to spray champagne around or drink it from their shoe, but F1 will make the winner a “character in the F1 2018 game” as “a timeless reminder of his or her accomplishment.”

F1 says one reason it’s doing this is a “continued ambition to build a greater connection with wider audiences, especially younger fans.” It’s hard to fault that logic: kids these days, raised with on-demand-everything, are less likely to tune in to F1’s travelling circus at all hours, potentially shrinking TV audiences and in turn likely lessening the fees F1 can charge broadcasters. All of which makes tapping into a cash stream from games sensible business, even if the timing of the launch of this virtual race series looks like a tawdry tie-in. ®

Microsoft Has Problems as They Work to Improve Office 365 Support

The Challenge of Office 365 Support

A vast variety of organizations use Office 365, from five-person businesses to the largest multinationals. The experience of the people working in those tenants ranges from zero IT background to some of the most seasoned technologists. The upshot is that Microsoft receives a massive stream of support requests daily, in over 100 languages, seeking help with anything from very basic questions to very deep and specific problems that might take weeks to solve. Microsoft needs to record, work on, and solve all those requests. It is truly a Herculean task.

A team within the Office 365 product organization, called “Satisfy”, is responsible for making the support experience as good as possible for customers. Some of those changes are flowing out to tenants now, but it has been a bumpy ride.

Flawed Switchover for Office 365 Support UI

Over the past few months, several posts appeared in the Microsoft Technical Community describing problems that people had with Office 365 support. To be fair to Microsoft, any system generates a certain volume of ongoing support problems, and given the sheer size of Office 365, some support snafus are likely to be in progress at any time.

An April 19 post describes how Microsoft had changed the “support experience”, the UI in the Office 365 Admin Center through which tenant administrators file support requests. For some tenants, the change restricted administrators to a single open support request at a time and removed their ability to view open support requests and historical cases.

Anyone who has ever run a moderately large Office 365 tenant knows that it is normal to have multiple open support incidents, some of which stay open for many weeks or even months. Removing this capability, even for a brief time, was not an improvement. Thankfully, you can now log multiple incidents again.

Two weeks after the original post (May 3), Microsoft replied to say that they were updating the support experience (UI) inside Office 365 and that: “For a short time, some customers will not be able to open multiple support tickets or see their support ticket history.”

Poor Support Experiences

Roll forward to July 17 and a response from another user to the original post said that: “I’m sorry but this is by far the biggest fiasco I have ever seen in IT Service Management — ever.” The author also posted a comprehensive note pointing out the flaws in the current Office 365 support experience and recommending that Microsoft use the same support approach as for Azure.

That post focuses on the lack of knowledge displayed by some first-line Office 365 support engineers: “Finally, while Microsoft justifies that working with one engineer is better because this one engineer (or concierge) ‘have breadth of knowledge in all services’ is wishful thinking to say the least. My experience has proven otherwise. They seem more like 1st level support reps who are often forwarding our queries to the more capable engineers in the background. The resulting experience, at least for me, has been: a) those concierges failing to understand some fundamental technical aspects of the issue at hand; b) confusing issues; c) providing incorrect or insufficient resolutions; and d) taking a significant amount of time to come back with a solution.”

I think there is some truth here. Office 365 is now so large and diverse that it is very difficult for a support engineer to have more than a passing acquaintance with more than one basic workload (Exchange or SharePoint) and some of the applications. As someone who has run support organizations in my time, I know the problems that exist in hiring, training, and keeping good support personnel, especially those who can cope with ever-changing environments. And Office 365 changes faster and in more ways than any on-premises environment.

A Cloud of Darkness Descends

Clearly, Office 365 support ran into stormy waters, both with the introduction of the new support experience and with the quality of the support some tenants received.

Compounding the problem, apart from replies in the Technical Community, no one from Microsoft cared to explain what the change meant and how the updated support processes change (and improve) what tenants used before. Nothing appeared in the Office Blog, and there is no mention of any change to support in the Office 365 roadmap.

How Microsoft Wants to Improve Support

I spoke about the issues with the “Satisfy” team, a very committed group within Microsoft dedicated to improving all aspects of support within Office 365. They acknowledge that they have not communicated the change well and that some parts of the experience do not work as well as they should.

Microsoft says they have four goals in changing the way the support UI works inside Office 365:

  1. Reducing the friction to open a support ticket.
  2. Reducing the time for initial contact from support.
  3. Reducing overall time to resolve the issue.
  4. Providing self-help that is personalized using telemetry.

The old UI asks administrators to enter some information about the problem (Figure 1) before they can create a service request (support ticket). The UI is straightforward and clear and I do not see much friction. Some administrators might not know what feature is affected or how best to describe a problem, but apart from that, completing the form should not be a problem.

Figure 1: Creating an Office 365 support request the old way (image credit: Tony Redmond)

The new UI (Figure 2) has two tabs: one to create a new service request, and the other to look through open service requests or check the historical record of requests for the tenant.

Searching for Solutions

The immediate thing that strikes you about the new (reduced friction) interface is its demand that the administrator search for a solution before clicking Get Help to file a service request. This is a perfectly acceptable step for a part-time administrator, who might not know where to look for help with a problem. However, asking experienced Office 365 administrators to search for solutions before filing a support request is treating professionals like children, especially as the search only covers solutions found on the support.office.com site. To be fair to Microsoft, the search is not just a document search, as it also takes in telemetry that Microsoft has about a tenant.

Figure 2: Creating an Office 365 support request – the new way (image credit: Tony Redmond)

I can appreciate that some people need or want to search for solutions, but when I need help with Office 365, you can bet that I consult the search oracle multiple times to look for a solution and gather the necessary evidence to prove that the problem is real before I go near Microsoft. All I want to do is get to the point where I can log the problem with Microsoft, which is what happens in the old UI. The new UI might be prettier, but it is less effective for experts.

Another annoying feature of the new UI is that it does not allow administrators to upload data relating to a problem to Microsoft when they create a support request. You cannot attach logs, screen shots, documents, or anything else. Thankfully, Microsoft says that they are adding this functionality back.

Greater Use of Tenant Telemetry

Microsoft says that a big advantage of the new UI is the way it incorporates background checking against telemetry data for the tenant to help detect problems and offer solutions. In the example given by Microsoft (Figure 3), an administrator reports that email does not work. A quick check shows that the MX records for the tenant are not configured correctly.

Figure 3: Telemetry finds a problem (image credit: Microsoft)
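
As a toy illustration of the kind of check involved (this is not Microsoft’s telemetry pipeline), a short script can test whether a domain’s MX records point at Exchange Online. It uses the third-party dnspython package, and the domain is a placeholder:

    import dns.resolver  # third-party package: dnspython

    def mx_points_at_office365(domain):
        # Office 365 tenants normally route mail via *.mail.protection.outlook.com
        answers = dns.resolver.resolve(domain, "MX")
        hosts = [str(rec.exchange).rstrip(".").lower() for rec in answers]
        print("MX records for %s: %s" % (domain, hosts))
        return any(h.endswith("mail.protection.outlook.com") for h in hosts)

    if __name__ == "__main__":
        if not mx_points_at_office365("example.com"):
            print("Candidate root cause: MX records do not point at Office 365.")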

I see value in trying to resolve inbound service requests quickly and efficiently by using the telemetry and information available to Microsoft about tenant usage and configurations. If this approach stops 5% of potential requests turning into service requests, it will be good for those administrators.

However, I think this approach only works for part-time or not-very-experienced administrators. I also wonder how effective it can be when dealing with problems in integrated applications like Teams, which relies on components drawn from other parts of Office 365.

Apart from not asking administrators to search for solutions, the UI for support requests in the Office 365 Admin mobile app is consistent with the new browser UI.

The Key to Better Support

Microsoft is aware of the need to continue to improve the quality and effectiveness of Office 365 support. Reducing the number of clicks needed to create a support request is good, insofar as it goes, and using machine learning to make sense of tenant telemetry to find problems is intelligent.

I think Microsoft also needs to allow some way for expert administrators to get right to the point when they create a service request to accelerate the problem resolution process. It would also be good to use the telemetry gathered by Microsoft to give administrators regular health reports about their tenants to proactively identify lurking issues.

In terms of a really great support experience for Office 365 tenants, the real impact comes in the back-end where support engineers pick up the problems submitted by customers and work through to resolution. Unless those engineers have the right knowledge, experience, background, and systems, the quality of support will always be inconsistent and unsatisfying.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Microsoft Has Problems as They Work to Improve Office 365 Support appeared first on Petri.

HyperGrid Expands Customer Base, Partners and VMware Support in Advance of VMworld

HyperGrid, the Enterprise Cloud-as-a-Service leader, announced the expansion of its partnerships, customer base, and hypervisor support. HyperGrid revealed its partnership with Westin Building Exchange (WBX), one of the leading colocation, hosting and telecommunications interconnection facilities in the United States, as well as news about Tearfund, a UK-based Christian relief and development agency that recently consolidated its IT infrastructure to a single private cloud platform with HyperCloud, realizing $1.3 million in savings for the agency. HyperCloud will be on display at VMworld 2017, showcasing its support for VMware vSphere.

CloudWest Powered by HyperGrid

HyperGrid partner WBX provides neutral interconnection points to Asian, Canadian, European and American network service providers, carriers, and internet service providers. HyperCloud, the industry’s only fully on-premises, “pay-as-you-use” Infrastructure-as-a-Service platform, will help deploy WBX’s CloudWest offering, delivering a true unified cloud service to its customers.

The service empowers WBX to provide the resources developers need on demand, while remaining in control and avoiding any interruption in innovation. With the underlying infrastructure managed by experts, IT can focus on what matters to the business: applications and workloads, optimizing performance to accelerate revenue-generating activities.

“Our digitally connected world is currently undergoing a massive sea-change of content, connectivity, capacity and consumption. In order to better address this transformation, the Westin Building Exchange (WBX) is partnering with HyperGrid to deploy our CloudWest service offering,” said Michael Boyle, Strategic Planning Director at WBX. “HyperGrid has been a real partner in this process helping us utilize the carrier neutral interconnection points to Asian, European and North American network service providers, carriers, and internet service providers that are moving workloads to the cloud.”

Tearfund Realizes Almost $1.3 Million in Savings

Tearfund was looking to solve several IT issues, including slow processing speeds, high electricity consumption and stretched IT staff resources, when it began its search for an on-premises, cloud-based solution. The company then found HyperGrid. The HyperCloud platform reduced processing time from 12 hours to just 20 minutes and delivered $1.3 million in savings for the agency.

“As a charity, we often do not get the opportunity to adopt truly world-leading technology, but HyperGrid has given us that opportunity,” said Stuart Hall, Infrastructure Lead at Tearfund. “I have worked in the IT sector for 25 years and this has been the easiest, smoothest project that I have ever managed. We have had fantastic support from HyperGrid throughout the process, from the reassurance and the rigorous testing that we completed in the pre-sales period to first-class support throughout the project. HyperGrid has met absolutely every requirement we set and we couldn’t be happier with the solution.”

HyperCloud Supports VMware vSphere

VMworld is VMware’s premier thought leadership and education destination for cloud infrastructure and digital workspace technology professionals. At the conference, HyperGrid will be hosting technical demos showcasing the simplicity and ease of use of HyperCloud with VMware, along with theater presentations running throughout the day providing overviews of use cases, customer case studies, and joint solutions with key partners.                                                                         

HyperGrid is an exhibitor level sponsor of VMworld and will be located at booth 218. Meetings with the HyperGrid team may be scheduled here: http://bit.ly/2wvBqjf

Google launches Chrome Enterprise subscription service for Chrome OS

Google is launching a new enterprise service for large businesses that want to adopt Chrome OS devices. The new Chrome Enterprise subscription, which will cost $50 per device per year, is essentially a rebrand of Chromebooks for Work, but with a number of additional capabilities. Even though the name would make you think this is about the Chrome browser, this program is actually all about Chrome OS. For Chrome users in the enterprise, Google already offers the Chrome Enterprise Bundle for IT, after all.

For enterprises, the main highlight here is that Chrome Enterprise is fully compatible with their existing on-premises Microsoft Active Directory infrastructure. Google senior director of product management for Android and Chrome for Business and Education Rajen Sheth told me that this has long been a stumbling block for many enterprises that were looking at adopting Chrome OS devices. With this update, enterprise users will be able to use their existing credentials to log into their Chrome OS devices and access their Google Cloud services, and IT admins will be able to manage their access to these devices and services.

It’s worth noting that Chrome OS admins could already use other services that support the SAML standard to enable single sign-on for Chrome devices.

In addition, businesses will now also be able to manage their Chrome OS devices from the same enterprise mobility management solutions they already use, starting with VMware’s AirWatch. Support for similar services will launch in the future.

With this new license, IT admins will also be able to set up a managed enterprise app store for their users. This feature is currently in beta and focuses on Chrome OS’s ability to run Android apps, which is currently available on many of the most popular Chrome devices in the enterprise.

Other benefits of the Chrome Enterprise subscription include 24/7 enterprise support, managed OS updates and printer management (you may laugh about this last one, but that’s something that still matters in many offices).

It’s no secret that Google is working hard to get more enterprises to adopt its various cloud-based services. Chromebooks have already found a lucrative niche in verticals like retail and education. To expand its market share, though, features like this new integration with AirWatch were sorely needed.

World’s Largest Open Source Cloud Computing Summit to be Hosted in Sydney

Sydney, Australia, will host thousands of cloud computing experts from more than 50 nations for the OpenStack Summit, November 6-8, at the Sydney Convention and Exhibition Center. This will be the first time the must-attend open infrastructure event has been held in Australia, where users like American Airlines, China Railway, Saudi Telecom, Commonwealth Bank, the Defense Advanced Research Projects Agency (DARPA), Sprint and Tencent/WeChat will talk about multi-cloud strategies, cost savings and increasing agility with OpenStack.

The biannual OpenStack Summit, held previously in major cities like Paris, Tokyo, Vancouver, San Francisco and Barcelona, will draw thousands of developers, operators, cloud architects, business unit leaders and CIOs from the world’s centers of IT innovation.

Headline sponsors of the Summit include Sydney-based Aptira, global networking leader Cisco, and WeChat provider Tencent, as well as premier sponsors Huawei, IBM, Intel, Mirantis, Red Hat and VMware.

Speakers will report on strong OpenStack adoption among financial services, telecoms and research organizations, including AT&T, Monash University, The Garvan Institute, DragonFly Data Science, Paddy Power Betfair, Verizon, Workday, University of Melbourne, PayPal, Catalyst IT, GoDaddy and Overstock.com.

Discounted Registration

Discounted Early Bird Registration is available until September 8. Sponsorship packages are available until September 27.

“Enterprises and service providers in Australia were among the earliest adopters of OpenStack, so it’s exciting to bring the Summit and thousands of community members globally to Sydney,” said Jonathan Bryce, executive director of the OpenStack Foundation. “The rapid growth of OpenStack, in excess of 40 percent year over year, means there’s lots to talk about at the Summit in terms of compelling new use cases, instructive new user stories and proven best practices for both developers and cloud operators.”

Summit Themes for Sydney

Themes at the Sydney OpenStack Summit include current topics for enterprises, like how to build effective multi-cloud strategies. Attendees from carriers and service providers will learn more about how OpenStack is leading in edge computing. Other themes include innovation in open and composable infrastructure, collaboration with adjacent open source communities, digital transformation and infrastructure control. The CFO perspective on business growth and infrastructure cost reduction will feature prominently in many sessions.

OpenStack Summits help IT leaders plan their cloud strategies while sharing real-world experiences operating or consuming OpenStack clouds. During the three-day event, attendees will have the opportunity to hear keynote presentations featuring innovative use cases and live technology demonstrations. They can choose from hundreds of sessions and hands-on workshops for every experience level and organizational role. The Summit offers networking opportunities during the Monday evening Marketplace booth crawl happy hour and the Tuesday afternoon Melbourne Cup viewing party. Learn more about OpenStack Summits and users at the official OpenStack publication, Superuser.

Cloud Application Hackathon to Kick off the Week

The weekend prior to the Summit, a cloud application hackathon hosted by the Sydney OpenStack User Group will kick off the event. “Hacking Up the Stack” will be held November 3-5 at the Doltone House in the Australian Technology Park. OpenStack application hackathons are intended to educate developers on how to build and migrate applications to distributed cloud environments, as well as showcase the diverse use cases for OpenStack. Recent application hackathons have been held in Mexico and Taiwan. More information coming soon: http://bit.ly/2v1rvl3.

AWS Cost Explorer Update – Better Filtering & Grouping, Report Management, RI Reports

Our customers use Cost Explorer to better understand and manage their AWS spending, making heavy use of the reporting, analytics, and visualization tools that it provides. We launched Cost Explorer in 2014 with a focus on simplicity – single click signup, preconfigured default views, and a clean user interface (take a look back at The New AWS Cost Explorer to see where we started). The Cost Explorer has been very popular and we’ve received a lot of great feedback from our customers.

Last week we launched a major upgrade to Cost Explorer. We’ve redesigned the user interface to optimize many common workflows including filtering, report management, selection of date ranges, and grouping of data. We have also included some default reports to make it easier for you to explore the costs related to your use of Reserved Instances.

Looking at Cost Explorer
Since pictures are reportedly worth 1000 words, let’s take a closer look! Cost Explorer is part of the Billing Dashboard so I can start there:

Here’s the Billing Dashboard. I click on Cost Explorer to move ahead:

I can open up Cost Explorer or access one of three preconfigured views. I’ll go for the first option:

The default report shows my EC2 costs and usage (running hours) for the past 3 months:

I can use the Group By menu to break the costs down by EC2 instance type:

I have many other grouping options:

The filtering options are now easier to access and to edit. Here’s the full set:

I can explore my EC2 costs in any set of desired regions:

I can filter and then group by instance type to see how my spending breaks down:

I can click on Download CSV and then process the data locally:

I can also exclude certain instance types from the report. Here’s how I exclude my m4.xlarge, t2.micro, and t2.nano usage:
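
Incidentally, the same exclude-and-group query can be expressed programmatically. Here is a minimal sketch against the Cost Explorer API using boto3’s “ce” client; the dates are examples, and the API must be enabled for your account:

    import boto3

    ce = boto3.client("ce")  # Cost Explorer API client

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2017-06-01", "End": "2017-09-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        # Break costs down by EC2 instance type...
        GroupBy=[{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
        # ...excluding the same three instance types as above.
        Filter={"Not": {"Dimensions": {
            "Key": "INSTANCE_TYPE",
            "Values": ["m4.xlarge", "t2.micro", "t2.nano"],
        }}},
    )

    for group in response["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])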

Report Management
Cost Explorer allows me to customize my existing reports and to create new reports from scratch. I can click on Save As to save my customized report with a new name:

I can see and manage all of my reports on the Saved Reports page (The padlock denotes a default report that cannot be edited and then overwritten):

When I click on New report I can start from a template:

After I click on Create Report, I set up my date range and filters as desired, and click on Save As. I created a report that displays my year-to-date usage of several AWS database services (Amazon Redshift, DynamoDB Accelerator (DAX), Amazon Relational Database Service (RDS), and AWS Database Migration Service):

All of my reports are accessible from the Reports menu so I can check on my costs with a click:

We also simplified the process of selecting a range of dates for a report, including options to select common date ranges:

Reserved Instance Reports
Cost Explorer also includes a pair of reports that will help you to understand and optimize your usage of Reserved Instances. I don’t own any RIs, so I used screenshots supplied by the team.

The RI Utilization report allows you to see how much of your purchased RI capacity is being put to use (the dashed red line represents a utilization target that you can specify):

The RI Coverage report tells you how much of your EC2 usage is being handled by Reserved Instances (this time, the dashed red line represents the desired amount of coverage):

I hope you have enjoyed this tour of the updated Cost Explorer. It is available now and you can start using it today!

Jeff;

Peak Design has Updated and Perfected Their Camera Straps

Even knowing Peak Design makes the best camera bags, backpacks, and totes, we’re surprised by how much we love their latest camera strap and cuff.

I’ve never liked a thin camera strap, or one that connected to a camera’s built-in eyelets, so I was justified but very wrong in not expecting much from Peak Design’s updated Leash.

The $40 Leash is the first camera strap that’s not a mission to cinch and loosen. The built-in adjusters are a revelation in ease of use, and the low-profile Anchor Mount, which is included, negates my second concern above. The variety of configurations covers every preference and goes far beyond what other straps can offer, and Peak Design’s famous anchors mean compatibility and easy switching with the rest of their line.

Murdered out is the photographer default, but think twice, because the “ash” colorway is gorgeous, while still not drawing too much attention.


While the Leash is a welcome evolution of a well-supported category, Peak’s updated $30 Cuff may as well be a new invention, because this is leagues beyond the garbage that kept you from putting your Wiimote through a TV screen.

There’s so little joy in the world right now, so take this opportunity to enjoy small moments of brilliance, like the ability to wear your camera’s wrist strap as a bracelet (one that actually looks good) when you seamlessly switch over to your Leash.

The Cuff can be used loose or taut, and will tighten up during a drop, possibly saving your camera in the process.


How To Create Progress Bars in PowerShell

http://bit.ly/2icWO6s

Self-driving truck that’s built to crash comes to Colorado

Tech and automotive companies have quietly been trialing autonomous trucks since 2015. Thus far, these tests (from the likes of Daimler and Uber) have been inconspicuous enough to go unnoticed by other drivers. But, a new kind of driverless truck is designed to stick out like a sore thumb. And, if by chance it ends up serving its purpose, it could make an almighty racket. While you read this, an autonomous impact protection vehicle is making its way around Colorado. You know the type: They’re big, yellow, and tend to be deployed behind road workers to prevent you from crashing into them. One more thing we should mention: They’re designed to take the full brunt of a collision. They do this via a massive metal bumper on the back.

Like other self-driving trucks, this modern safety vehicle takes a traditional body (with subtle mods) and adds autonomous tech to the mix. The software comes from Kratos Defense and Security, a company that specializes in military drones and missile-targeting systems. On the road, the truck crawls behind a regular car hooked up to precise GPS. The lead vehicle emits a signal that the robotruck uses to maintain its speed, position, and heading. On top of that, it uses its own radar to avoid obstacles.
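
Kratos has not published its control software, but the leader-follower idea is easy to sketch: the lead car drops GPS “breadcrumbs” and the truck replays them at a fixed gap, tracing the leader’s exact path instead of cutting corners. A toy Python illustration, with all numbers invented:

    import math
    from collections import deque

    FOLLOW_GAP_M = 20.0  # desired gap behind the leader (invented value)

    def dist(a, b):
        # Straight-line distance between two (x, y) positions in metres.
        return math.hypot(b[0] - a[0], b[1] - a[1])

    class FollowerController:
        def __init__(self):
            self.breadcrumbs = deque()  # leader GPS fixes, oldest first

        def on_leader_fix(self, pos):
            # Called each time the lead vehicle broadcasts its position.
            self.breadcrumbs.append(pos)

        def next_command(self, own_pos):
            # Drop breadcrumbs we have already reached.
            while self.breadcrumbs and dist(own_pos, self.breadcrumbs[0]) < 1.0:
                self.breadcrumbs.popleft()
            if not self.breadcrumbs:
                return None, 0.0  # no trail yet: hold position
            target = self.breadcrumbs[0]  # steer toward the oldest fix
            gap = dist(own_pos, self.breadcrumbs[-1])
            # Crawl faster as the gap opens; stop once it closes to target.
            speed = max(0.0, min(2.0, 0.2 * (gap - FOLLOW_GAP_M)))
            return target, speed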

Its goal is to save lives and — by going driverless — it also ends up being even safer. How so? Well, by removing its own driver from the equation. Driving a car that’s designed to crash sounds like an insanely dangerous occupation. Yet, to this day, humans are the ones carrying out this task around the country. That’s something the Colorado Department of Transportation (DOT) is intent on changing. With this trial, the state is now the first to test a connected impact protection vehicle without a support driver at the wheel.

"People often talk about the coming job displacement of automated vehicles…well this is actually one job I want to get people out of," Shailen Bhatt, Colorado DOT’s executive director, told Wired. "The idea that we have a truck thats job is to get hit, with someone sitting in it, well that doesn’t make a lot of sense."

Next up, the aim is to get the vehicle performing other tasks. One day, similar robotrucks could lay down road stripes. If Kratos has its way, they’ll also be utilized as tugs in shipping docks, as garbage trucks, and as road sweepers. Not quite as fancy as the concept cars showcased at conventions, but a (quiet) revolution in their own lane.

Source: Wired

Why your team needs an Azure Stack Operator

Azure Stack is an extension of Azure, bringing the agility and fast-paced innovation of cloud computing to on-premises environments. With the great power of Azure in your own datacenter comes the responsibility of operating the cloud: Azure Stack.

At the Microsoft 2016 Ignite conference, we announced a set of Modern IT Pro job roles for the cloud era and resources to help organizations transition to the cloud. This year, with a more focused effort in accelerating customers’ readiness for Azure, we’ve published a set of Azure Learning Paths for Azure Administrator, Azure Solution Architect, Node.js Developer on Azure, and .NET Developer on Azure. Associated with each learning path, there is also a set of free online, self-paced courses to help you quickly pick up the skills you need to make an impact in the chosen job function.

With the introduction of Azure Stack, we’re adding a new Azure job role: Azure Stack Operator. The person in this role will manage the physical infrastructure of Azure Stack environments. Unlike Azure, where the operators of the cloud environment are Microsoft employees, in Azure Stack, organizations will need people with the right skills to run and operate their cloud environment. If you haven’t yet, read the Operating Azure Stack blog post to see what tasks this new role will need to master.

The following four modern IT Pro job roles are most relevant to the success of managing and operating an Azure Stack environment:

  • Azure Stack Operator: Responsible for operating Azure Stack infrastructure end-to-end – planning, deployment and integration, packaging and offering cloud resources and requested services on the infrastructure.
  • Azure Solution Architect: Oversees the cloud computing strategy, including adoption plans, multi-cloud and hybrid cloud strategy, application design, and management and monitoring.
  • Azure Administrator: Responsible for managing the tenant segment of the cloud (whether public, hosted, or hybrid) and providing resources and tools to meet their customers’ requirements.
  • DevOps: Responsible for operationalizing the development of line-of-business apps leveraging cloud resources, cloud platforms, and DevOps practices – infrastructure as code, continuous integration, continuous development, information management, etc.

[Diagram: cloud operating model and job roles]

In the above graph, the light-brown colored role names (Azure Solution Architect, Azure Administrator, and DevOps) are applicable to both Azure and Azure Stack environments. The role in the blue box, Azure Stack Operator, is specific to Azure Stack. “Your Customers” encompasses two groups of Azure Stack users: one group is the Azure Admins, who manage subscriptions, plans, offers, etc. in your Azure Stack environment; the other group is the tenant users of the cloud resources presented by Azure Stack. The tenant users can be DevOps users who either develop or operate the line-of-business applications hosted on an Azure Stack cloud environment. They can also be the tenant users of a service provider or an enterprise, accessing the customer applications hosted on Azure Stack.

As you may have realized, running an instance of a cloud platform requires a set of new skills. To help you speed up knowledge acquisition and skill development as an Azure Stack Operator, we are working to enable multiple learning venues:

  1. We are in the process of developing a 5-day in-classroom training course – “Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack”. This course is currently scheduled to be published in September 2017.
  2. We also plan to release a set of free online courses in the next few months:
  • Azure Stack Fundamentals
  • Azure Stack Planning, Deployment and Configuration
  • Azure Stack Operations

If you want to know more about this exciting new job role, Azure Stack Operator, along with other Azure Stack related roles and their corresponding learning programs, come to Ignite 2017 and attend the theater session “THR2017 – Azure Stack Role Guide and Certifications”.

More information:

At Microsoft Ignite this year in Orlando we will have a series of sessions that will educate you on all aspects of Azure Stack. Be sure to review the planned sessions and register your spot today.

The Azure Stack team is extremely customer focused and we are always looking for new customers to talk to. If you are passionate about Hybrid Cloud and want to talk with the team building Azure Stack at Ignite, please sign up for our customer meetup.

If you have already registered for Microsoft Ignite but haven’t yet registered for the Azure Stack pre-day, you can add the pre-day to your activity list. And if you are still planning to register for Microsoft Ignite, now is the time to do so; the conference is filling up fast!

How Azure Security Center aids in detecting good applications being used maliciously

We’ve written in the past about how Azure Security Center helps detect malicious activity on compromised VMs, including a post detailing a Bitcoin mining attack and one on an outbound DDoS attack. In many cases, attackers use a set of malicious tools to carry out these and other actions on a compromised machine. However, our team of security researchers has identified a new trend: attackers using good applications to carry out malicious actions. This blog will discuss the use of known hacker tools, and of tools that are not nefarious in nature but are being used maliciously, and how Azure Security Center aids in detecting their use.

Hacker tools aid in exploitation

Generally, the first category of tools we see after a brute-force attack is port and IP address scanning tools. Most of these tools were not written maliciously, but because of their ease of use, an attacker can use them to scan IP ranges and ports to find vulnerable machines to target.

One of the more frequent port scanning tools that we come across is KportScan 3.1, which has the ability to scan for open ports as well as local ports. It has a wide range of uses, including working with any port as well as individual addresses and IP ranges. It is multithreaded (1,200 flows), consuming very few resources on compromised machines, and, from the attacker’s point of view, the best part is that the tool is free. After running a scan, results are stored by default in a file called “results.txt”. In the example below, KportScan is configured to return all IPs within the specified ranges that have port 3389 open to the internet.

[Screenshot: KportScan 3.1 configured to scan IP ranges for open port 3389]
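
Part of the reason such scanners are so plentiful is that the core of a TCP connect scan is only a few lines. A minimal sketch, checking a handful of ports on a single host you are authorized to test (the address is a documentation placeholder); tools like KportScan simply wrap this primitive in multithreading and range parsing:

    import socket

    def open_ports(host, ports):
        found = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            # connect_ex returns 0 when the TCP handshake succeeds.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
            s.close()
        return found

    if __name__ == "__main__":
        print(open_ports("192.0.2.10", [22, 80, 443, 3389]))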

Other scanners that we see dropped on machines after they have been compromised include Masscan, xDedicIPScanner, and D3vSpider.  These tend to be less frequent, but are notable.

Masscan claims to be one of the fastest internet port scanners out there. It purports to scan the entire internet in under 6 minutes, with your own network bandwidth being the only gating factor. While Linux is its primary platform, it does run on many other operating systems, including Windows and Mac OS X. The command below scans for open port 3389 across the subnet range 104.208.0.0 to 104.215.255.255, which is 512k worth of addresses. The results will be stored in an XML file called good.xml.

[Screenshot: Masscan command scanning 104.208.0.0-104.215.255.255 for port 3389, with output to good.xml]

xDedicIPScanner is another port scanner, which is based on Masscan. It has many of the same capabilities as Masscan, but does not require a user to learn Linux, as it is GUI-based. Its features include scanning of CIDR blocks, automatic loading of country ranges, a small footprint, and the ability to scan multiple ports or a range of ports. It also has some application dependencies: WinPcap and Microsoft Visual C++ 2010, and it requires Windows 7 or higher. From our observations, xDedicIPScanner appears to be used primarily maliciously. The below example shows a country range being loaded into the tool for scanning.

[Screenshot: xDedicIPScanner loading a country IP range for scanning]

Finally, Pastebin D3vSpider is also a scanner we’ve seen often. D3vSpider is not a port scanner; instead, it scans Pastebin repositories, which are popular for storing and sharing text (including stolen passwords, usernames, and network data). The tool’s output is based on the user’s search criteria, and provides information including user names and their passwords. The example below shows a scan resulting in 745 user names and passwords for a single month; these can then be exported to a txt file for future use with other tools. For example, it could be used with NLBrute, which is a known RDP brute-force tool.

[Screenshot: D3vSpider results listing harvested user names and passwords]

 

Recently, we have begun to see messaging applications being used to drop other malicious and non-malicious tools. The messaging applications are widely used and not malicious in nature. They tend to be cloud-based messaging services, with support for a broad base of devices including both desktop systems and mobile devices. Users of the product can exchange messages and files of any type, with or without end-to-end encryption, as well as sync the messages across all of the user’s devices. Some of these messaging services even allow messages to be “self-destructed” after delivery, so they are no longer seen on any device. One of the features of these messaging applications is known as “secret” chat: the users exchange encryption keys, and after the exchange is verified, they can communicate freely without the possibility of being tracked. These features, and many more, have made these messaging services a favored addition to some attackers’ tool boxes.

Their ability to easily drop files onto other machines appears to be one of the main reasons attackers use these programs. In fact, we started to see the presence of these tools on compromised machines as early as December 2016. At first, we dismissed this as coincidence, but after further investigation we started seeing known hacker tools (NLBrute, Dubrute, and D3vSpider) show up on compromised machines after the installation of these messaging applications.

[Screenshot: messaging application installed on a compromised machine]

Since these tools synchronize messages across all of the user’s devices, anyone who is part of the conversation can revisit a message at a later time to download a file or a picture.

[Screenshot: files shared through a messaging conversation]

 

Another method we have seen is messaging Channels created for the primary purpose of broadcasting messages to an unlimited number of subscribers. Channels can be either publicly available or private. Public Channels can be joined by anyone; for private Channels, you need to be added or receive an invite to participate. Due to the encryption that these applications deploy, we see very little activity other than the machine joining a chat Channel. An example of what is seen is below:

[Screenshot: activity showing a machine joining a chat Channel]

While the joining of a Channel is of interest, the files that appear on the machine afterwards are what is most interesting. These tools range from crack tools and RDP brute-force tools to encryption tools that allow attackers to hide their traffic and obscure the source IP addresses of their activity. Below is an example of what we saw on a host directly after it connected to a private Channel.

[Screenshot: hacker tools found on a host shortly after joining a private Channel]

What’s next:

We’ve presented some of the more frequently seen tools favored by attackers and used on virtual machines in Azure. These tools were, for the most part, created for legitimate usage without malicious intent; however, because of their functionality and ease of use, they are now being used maliciously.

While the presence of any one of these tools may not be reason for alarm, a closer look into other factors will help to determine if they are being used maliciously or not. For example, if we see more than one of them on an Azure virtual machine, the likelihood that the machine is compromised is much greater, and further investigation may be required. Seeing the tool’s usage in the context of other activity on the Azure machine is also very important in determining if these tools are being used maliciously. Ian Hellen’s blog on Azure Security Center Context Alerts describes how much of the tedious security investigation work is automated by Security Center and relevant context is provided about what else was happening on the Azure machine during and immediately before suspicious activity is detected. Tools like KportScan, Masscan, XDedicIPScanner, D3vSpider and malicious messaging services will be detected and alerted on by Azure Security Center Context Alerts.

From the number of incidents investigated, the usage of legitimate tools for malicious purposes appears to be an upward trend. In response, the Azure Security Center team of analysts, investigators, and developers is continuing to actively hunt and watch for these types of indicators of compromise (many of which are simply not detected by some AV signatures). We currently detect all of the tools discussed in this blog, and as we find more we are adding them to our Azure Security Center detections.

Recommended remediation and mitigation steps

Microsoft recommends investigating the attack campaign via a review of available log sources, host-based analysis, and if needed, forensic analysis to help build a picture of the compromise. In the case of Azure ‘Infrastructure as a Service’ (IaaS) virtual machines (VMs), several features are present to facilitate the collection of data including the ability to attach data drives to a running machine and disk imaging capabilities.

In cases where the victim machine cannot be confirmed clean, or a root cause of the compromise cannot be identified, Microsoft recommends backing up critical data and migrating to a new virtual machine. It is also recommended that virtual machines be hardened before bringing them online to prevent compromise or re-infection. With the understanding that this sometimes cannot be done immediately, we recommend implementing the following remediation steps:

  • Review Applications: In cases where individual programs may or may not be used maliciously, it is good practice to review the applications found on the host with the administrators and users. If it is determined that an application was not installed by a known user, the recommendation is to take appropriate action, as determined by your administrators.
  • Review Azure Security Center Recommendations: Review and address any security vulnerabilities identified by Security Center, including OS configurations that do not align with the recommended rules for the most hardened version of the OS (for example, do not allow passwords to be saved), machines with missing security updates or without antimalware protection, exposed endpoints, and more.
  • Defender Scan: Run a full antimalware scan using Microsoft Antimalware or another solution, which can flag potential malware.
  • Avoid Use of Cracked Software: Using cracked software introduces the unwanted risk of malware and other threats associated with pirated software. Microsoft highly recommends not using cracked software and following the legal software policy of your organization.

To learn more about Azure Security Center, see the following:

Get the latest Azure security news and information by reading the Azure Security blog.

What’s driving cloud adoption in 2017?

The content below is taken from the original (What’s driving cloud adoption in 2017?), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’ve moved your enterprise infrastructure into the cloud, to say you’re not alone is an understatement. In researching our latest infographic, it became clear that more and more enterprises are adopting cloud models and increasing their spending on cloud technologies. It’s also clear that, as cloud options and models multiply, building in-house cloud skills and expertise will be necessary to keep pace with advancing cloud technologies and to optimize ROI. So what’s driving cloud adoption in 2017? In this post, we will address this question and explore how enterprises are adopting cloud frameworks.

Enterprise cloud adoption continues to rise

“The No. 1 trend is ‘here come the enterprises’,” according to Forrester analyst Dave Bartoletti. 
Forrester reported that, of 1,000 technology decision makers for North American and European enterprise infrastructure, 38% were building private clouds, 32% were building public clouds, and 30% would build some form of cloud in the next 12 months.
Last July, Gartner predicted that $111 billion in IT spending would be redirected away from traditional on-premises technology and invested in the cloud (source: Gartner). This amount is expected to increase to $216 billion by 2020. Gartner’s Ed Anderson, research vice president, said:

“Cloud-first strategies are the foundation for staying relevant in a fast-paced world. The market for cloud services has grown to such an extent that it is now a notable percentage of total IT spending, helping to create a new generation of start-ups and ‘born in the cloud’ providers.”

This shift is also seen in the move to put more IT workloads into the cloud. According to McKinsey & Co., enterprises will make a significant shift from building IT infrastructure to consuming IT. Its “IT as a Service Cloud Survey” reported:

“More large enterprises are likely to move workloads away from traditional and virtualized environments toward the cloud—at a rate and pace that is expected to be far quicker than in the past.”

As more data is produced by a growing number of people, machines, and things, more of it will be moving to the cloud. Cisco thinks that more than 83% of data center traffic will be cloud-based by 2019, and most of this will be in the public cloud (source: Cisco). Cisco also reported that enterprises using the public or private cloud have increased 61% over 2015, with 68% of enterprises using cloud infrastructure for one to two small applications (source: Cisco).

ROI in the enterprise

Among organizations with mature cloud strategies, Cisco reported that “the most ‘cloud advanced’ organizations see an annual benefit per cloud-based application of $3 million in additional revenues and $1 million in cost savings. These revenue increases have been largely the result of sales of new products and services, gaining new customers faster, or the accelerated ability to sell into new markets.”
In Interop ITX’s 2017 State of the Cloud report, respondents to its cloud computing survey reported that they had “benefitted greatly” from cloud computing in the following areas:

Greater scalability, better and/or faster access to tech resources, improved performance, expanded geographic reach, and cost savings.

Building in-house cloud expertise is key

Digital technologies are at the heart of some of the most disruptive changes facing companies today, according to reports by both Deloitte and Microsoft. Deloitte notes that while 90% of CEOs believe that digital technologies are driving disruptive change, 70% report that their organization lacks the skills to adapt. This is compounded by the fact that…

skills are becoming obsolete at an accelerating rate and roles such as software engineers “must now redevelop skills every 12-18 months.”

In its survey of 250 UK tech leaders in medium- to large-sized organizations, the Microsoft Cloud Skills Report found that 83% of businesses ranked cloud skills as critical for digital transformation.
A study by IDG pointed out that “enterprise organizations have added new roles and functions to their business including cloud architects/engineers, cloud systems administrators and security architects/engineers” to ensure return on their cloud investments (source: IDG).
The 2017 Dice Salary Survey also noted the importance of the cloud on the need for skills:

“The migration from hardware-based storage to cloud storage and the explosion of IoT technologies connecting billions of devices are creating a demand for skills to support these transitions and growth. When industries experience transformation at this level, it creates skills demand and increased salaries.”

Solving the skills gap with training

The Deloitte 2017 Global Human Capital Trends report notes that software engineers typically have to upgrade their skills every 12 to 18 months. The report saw “the issue of improving employee careers and transforming corporate learning” move up from the fifth to the second most important trend.
The research indicates that acquiring new skills and upskilling existing staff has never been more important.
The Microsoft report noted that 60% of companies are choosing to train existing employees, while 53% anticipate using external training providers and 46% will look to hire outside professionals with cloud skills. Workforce solutions provider ManpowerGroup’s U.S. Talent Shortage Survey 2016/2017 found that nearly half of employers are using training programs to develop existing personnel for open positions.
Check out the full Cloud Computing in 2017 infographic in this post.

Multiple Monitors With Multiple Pis

The content below is taken from the original (Multiple Monitors With Multiple Pis), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the most popular uses for the Raspberry Pi in a commercial setting is video walls, digital signage, and media players. Chances are you’ve seen a display or other glowing rectangle showing an advertisement or tweets, powered by a Raspberry Pi. [Florian] has been working on a project called info-beamer for just this use case, and now he has something spectacular. He can display a video on multiple monitors using multiple Pis, and the configuration is as simple as taking a picture with your phone.

[Florian] created the info-beamer package for the Pi for video playback (including multiple videos at the same time), displaying public transit information, a twitter wall, or a conference information system. A while back, [Florian] was showing off his work on reddit when he got a suggestion for auto-configuration of multiple screens. A few days later, everything worked.

Right now, the process of configuring screens involves displaying fiducials on each display, taking a picture with your phone through the web interface, and letting the server do a little number crunching. Less than a minute after [Florian] took a picture of all the screens, a movie was playing across three weirdly oriented displays.

Below, you can check out the video of [Florian] configuring three Pis and displays to show a single video, followed by a German language presentation going over the highlights of info-beamer.

Filed under: Raspberry Pi

What GDPR means to Office 365

The content below is taken from the original (What GDPR means to Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

GDPR from Microsoft

GDPR Affects All European Businesses

From May 25, 2018, companies with business operations inside the European Union must follow the General Data Protection Regulation (GDPR) to safeguard how they process personal data “wholly or partly by automated means and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system.” The penalties for breaches of GDPR can be up to 4% of a company’s annual global turnover. For companies like Microsoft that have operations within the EU, making sure that IT systems do not contravene GDPR is critical. And as we saw on August 3, even the largest software operations like Office 365 can have a data breach.

Because many applications can store data that might come under the scope of GDPR, the regulation has a considerable influence over how tenants deal with personal data. The definition of personal data is “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

GDPR goes on to define processing of personal data to be “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.”

In effect, individuals have the right to ask companies to tell them what of their personal data a company holds, to correct errors in their personal data, or to erase that data completely. Companies need to know what personal data they hold, make sure that they obtain consents from people to store that data, protect the data, and notify authorities if data breaches occur.

On first reading, this might sound like what companies do – or at least try to do – today. The difference lies in the strength of the regulation and the weight of the penalties should anything go wrong. In other words, GDPR deserves your attention.

Putting GDPR into Context

The definitions used by GDPR are quite broad. To move from theory to practice, an organization needs to understand what personal data it holds for its business operations and where that data is used within software applications. It is easy to imagine examples of where personal information might be inside Office 365 applications, including:

  • Annual reviews written about employees stored in a SharePoint or OneDrive for Business site.
  • A list of applicants for a position in an Excel worksheet attached to an email message.
  • Tables holding data (names, employee numbers, hire dates, salaries) about employees in SharePoint sites.

Other examples might include contract documentation, project files that include someone’s personal information, and so on.

Data Governance Helps

Fortunately, the work done inside Office 365 in the areas of data governance and compliance help tenants to satisfy the requirements of GDPR. These features include:

  • Classification labels and policies to mark content that holds personal data.
  • Auto-label policies to find and classify personal data as defined by GDPR. Retention processing can then remove items stamped with the GDPR label from mailboxes and sites after a defined period, perhaps after going through a manual disposition process.
  • Content searches to find personal data marked as coming under the scope of GDPR.
  • Alert policies to detect actions that might be violations of the GDPR such as someone downloading multiple documents over a brief period from a SharePoint site that holds confidential documentation.
  • Searches of the Office 365 audit log to discover and report potential GDPR issues.
  • Azure Information Protection labels to encrypt documents and spreadsheets holding personal data by applying RMS templates so that unauthorized parties cannot read the documents even if they leak outside the organization.

Let’s explore some of the technology that exists today within Office 365 that can help with GDPR.

Using Classification Labels

As mentioned above, you can create a classification label to mark personal data coming under the scope of GDPR and then apply that label to relevant content. If you have Office 365 E5 licenses, you can create an auto-label policy to stamp the label on content in Exchange, SharePoint, and OneDrive for Business that is found to hold sensitive data types known to Office 365.

Figure 1 shows how to select from the set of sensitive data types available in Office 365. The set is growing steadily as Microsoft adds new definitions. At the time of writing, 82 types were available, 31 of which are obvious candidates to use in a policy because they are for sensitive data types such as country-specific identity cards or passports.


Figure 1: Selecting personal data types for an auto-label policy (image credit: Tony Redmond)

Figure 2 shows the full set of sensitive data types that I selected for the policy. You can also see that the policy applies a label called “GDPR personal data” to any content found in the selected locations that matches any of the 31 data types. Auto-apply policies can cover all Exchange mailboxes and SharePoint and OneDrive for Business sites in a tenant – or a selected sub-set of these locations.


Figure 2: The full set of personal data types for a GDPR policy (image credit: Tony Redmond)

Using classification labels to mark GDPR content has another benefit in that you can search for this content using the ComplianceTag keyword (for instance, ComplianceTag:”GDPR personal data”).
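
For example, you can create and run such a search from the Security & Compliance Center PowerShell. Here is a minimal sketch, assuming an already-connected session; the search name is illustrative, and the label name matches the policy described above:

# Create and run a content search for items stamped with the GDPR label
New-ComplianceSearch -Name "GDPR Personal Data" -ExchangeLocation All -SharePointLocation All -ContentMatchQuery 'ComplianceTag:"GDPR personal data"'
Start-ComplianceSearch -Identity "GDPR Personal Data"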

It can take up to a full week before auto-label policies apply to all locations. In addition, an auto-label policy will not overwrite a label that already exists on an item. These are small issues. The big problem here is that classification labels only cover some of Office 365. Some examples of popular applications where you cannot use labels are:

  • Teams.
  • Planner.
  • Yammer.

Microsoft has plans to expand the Office 365 data governance framework to other locations (applications) over time. Given the interest in GDPR, hopefully some or all of the locations mentioned above will support data governance by May 2018.

Right of Erasure

Finding GDPR data solves one problem. A further challenge is posed by article 17 of GDPR (the “right of erasure”), which says: “The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay.” In other words, someone has the right to demand that an organization should erase any of their personal data that exists within the company’s records. Content searches can find information about someone using their name, employee number, or other identifiers as search keywords, but erasing the information is something that probably needs manual processing to ensure that the tenant removes the right data and just that data.

You can find and then remove documents and other items holding someone’s name or another identifier belonging to them using tools such as Exchange’s venerable Search-Mailbox cmdlet or Office 365 content searches. However, if the data is on hold because the company needs to keep items for regulatory or legal purposes, can you then go ahead and remove the items? Remember that the purpose of placing content on hold is to ensure that no one, including administrators, can remove that information from Exchange or SharePoint.
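
As a sketch of the kind of manual processing involved, Search-Mailbox can first report on and then purge items matching an identifier. The mailbox, search terms, and target mailbox below are hypothetical, and -DeleteContent is irreversible, so run the -LogOnly pass first:

# Dry run: log what would be removed for a (hypothetical) data subject
Search-Mailbox -Identity "jsmith" -SearchQuery '"John Smith" OR "E-12345"' -TargetMailbox "DiscoveryHolding" -TargetFolder "GDPR" -LogOnly
# Then remove the items for real
Search-Mailbox -Identity "jsmith" -SearchQuery '"John Smith" OR "E-12345"' -DeleteContent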

The GDPR requirement to erase data on request means that administrators might have to release holds placed on Exchange, SharePoint, and OneDrive for Business locations to remove the specified data. But once you release a hold, you weaken the argument that held data is immutable. The danger exists that background processes or users can then remove or edit previously-held data and so undermine a company’s data governance strategy.

The strict reading of GDPR appears to leave no doubt that organizations must process requests to erase personal data upon request. But what if the company needs to keep some of the data to satisfy regulations governing financial transactions or other interactions? This is not something that IT can solve. Lawyers will have to interpret requests and understand the consequences before making decisions and it is likely that judges will have to decide some test cases in different jurisdictions before full clarity exists.

Hybrid is More Difficult

No doubt exists that Microsoft is working to help Office 365 tenants with GDPR. However, not quite the same effort is going to help on-premises customers. Some documentation exists to deal with certain circumstances (like how to remove messages held in Recoverable Items), but the feeling I have picked up is that on-premises customers feel they have to figure things out for themselves.

In some respects, this is understandable. After all, every on-premises deployment is slightly different and exists inside specific IT environments. Compared to the certainty of Office 365, developing software for on-premises deployment must take the foibles of individual customers into account.

On-premises software is more flexible, but it is also more complicated. Developing solutions to help on-premises customers deal with GDPR might be more of a challenge than Microsoft wants to take on now, especially given their focus of moving everything to the cloud.

Solutions like auto-label policies are unavailable for on-premises servers. Those running on-premises SharePoint and Exchange systems must therefore come up with their own ways to help the businesses they serve deal with personal data in a manner that respects GDPR.

SharePoint Online GitHub Hub

If you work with SharePoint Online, you might be interested in the SharePoint GDPR Activity Hub. At present, work is only starting, but it is a nice way to share information and code with like-minded people.

ISV Initiatives

Every week I seem to receive an announcement about an ISV-sponsored white paper on GDPR and how their technology can help companies cope with the new regulations (here is an example from Forcepoint). There is no doubt that these white papers are valuable, if only for the introduction and commentary by experts that the papers usually feature. But before you resort to an expensive investment, ask yourself whether the functionality available in Office 365 is enough. If not, then look at the ISV offerings more closely.


Technology Only Part of the Solution

GDPR will affect Office 365 because it will make any organization operating in the European Union aware of new responsibilities to protect personal data. However, technology seldom solves problems on its own. The nature of regulations like GDPR is that training and preparation are as important, if not more important, than technology to ensure that users recognize and properly deal with personal data in their day-to-day activities. You can deploy Office 365 features to support users in their work, but do not expect Office 365 to be a silver bullet for GDPR.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post What GDPR means to Office 365 appeared first on Petri.

World’s Best Teens Compete in Microsoft Office World Championship

The content below is taken from the original (World’s Best Teens Compete in Microsoft Office World Championship), to continue reading please visit the site. Remember to respect the Author & Copyright.

Photo by Jessica Chou/The Verge

This July, we asked for software tips from the 2017 Microsoft Office National Champions, a set of charming teens who are officially the best at using PowerPoint, Word, and Excel. The Verge recently followed these teens to the World Championship in California, where they tested their Office skills in a contest that out-nerds the spelling bee.

“It was as if the Olympics opening ceremony was replaced by a networking event,” says the Verge. The event demonstrates Microsoft’s global dominance, as forty-nine countries competed, though the Verge found that many international competitors chose to compete in English over their native language, as the company’s localized software is poorly translated.


Retired teacher Mary Roettgen, who coached the frequent national champions from Green Hope, NC, is proud of her high-achieving team, but she gives the Verge a hack for getting her A students to help teach her D students:

[Roettgen] sets goals for her students — like getting 28 out of 30 kids in the class to pass a certification test — and then gives them a bonus if they hit their marks. “I’d chip in 20 bucks and we’d have a celebration in class.” That had the effect of motivating the A students to help their struggling classmates. “And what’s even neater, is that kid who was the 28th kid — they’ve never been celebrated for anything in the classroom — but they’re the ones who got the whole class the party.”

Roettgen, who’s just trying to make us cry now, goes on: “You take that D or F student, and get them to pass? It can change their life. Somebody’s just got to tell them they’re worth something.”

The Docx Games: Three Days at the Microsoft Office World Championship | The Verge

User Group Newsletter August 2017

The content below is taken from the original (User Group Newsletter August 2017), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sydney Summit

Don’t miss out on your early bird tickets; sales end September 1. Find all you need to know in this Summit guide.

It includes information about where to stay, featured speakers, a Summit timeline and much more.

An important note regarding travel: all non-Australian residents will need a visa to travel to Australia (including United States citizens). Click here for more information.

Final chance to complete the User Survey!

If you haven’t had a chance to complete the OpenStack User Survey, the deadline has been extended to this Friday, August 18. If you’re running OpenStack, this is your chance to anonymously share feedback.

Fill out the survey here. 

OpenDev

Interested in Edge Computing? Join us for OpenDev, a new collaborative working event at DogPatch Studios in San Francisco, September 7-8. Tickets are just $100 for the two-day event, so get yourself registered here!

The event will feature real-world Edge use case demos from Verizon, AT&T and eBay, in addition to updates from industry pioneers including Satya from Carnegie Mellon, who’s doing cutting “edge” research. The program committee includes Beth Cohen (Verizon), Kandan Kathirvel (AT&T), Chris Price (Ericsson), Andrew Mitry (Walmart) and Gregory Katsaros (Inmarsat). Participating open tech groups include OpenStack, Open Edge Computing, ETSI MEC, OPNFV, Kubernetes and others.

As cloud architectures continue to evolve, compute is being pushed to the Edge in telecom, retail, research and other industries. This represents a huge opportunity for open infrastructure, so we’re meeting to talk about how to assemble and integrate the components needed to meet Edge requirements. The event is designed to facilitate collaboration at a technical level, with productive working group discussions in addition to lightning talk presentations on Edge use cases. There will be a range of collaborative sessions led by moderators who dive into specific topics. View the schedule line-up here.

Questions? Contact events [at] openstack.org.

OpenStack Days

Check out the upcoming OpenStack Days around the globe! See the full calendar here.

Great news! The OpenStack Upstream Institute will be running at the OpenStack Days UK and OpenStack Nordic events.

CFP closing soon!

  • CloudNativeCon/KubeCon
  • OpenStack Days Canada

Marketing Portal Content

There is some fantastic OpenStack Foundation content available on the Marketing Portal.

This includes materials like:

  • OpenStack 101 slide deck
  • 2017 OpenStack Highlights & Stats presentation
  • Collateral for events (sticker and T-shirt designs)

Latest from Superuser

Some fantastic articles have been published in the last few weeks, with some featuring you, the user groups! Check them out below:

  • OpenStack’s 7th Birthday
  • OpenStack User Group spotlight: Phoenix
  • OpenStack User Group spotlight: Canada
  • How to navigate new cultures in an open, diverse community
  • OpenStack User spotlight: CERN
  • Learn OpenStack: A new community site

Contributing to the User Group Newsletter

If you’d like to contribute a news item for next edition, please submit to this etherpad.

Items submitted may be edited down for length, style and suitability.

Build a Raspberry Pi Scanner that Tracks the Devices Connected to Your Local Network

The content below is taken from the original (Build a Raspberry Pi Scanner that Tracks the Devices Connected to Your Local Network), to continue reading please visit the site. Remember to respect the Author & Copyright.

Make use of a Raspberry Pi to build a network scanner that will keep track of the hosts connecting to your local network. It’s actually pretty easy to do.

Read more on MAKE

The post Build a Raspberry Pi Scanner that Tracks the Devices Connected to Your Local Network appeared first on Make: DIY Projects and Ideas for Makers.

Quick Reference to Common AWS CLI Commands

The content below is taken from the original (Quick Reference to Common AWS CLI Commands), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post provides an extremely basic “quick reference” to some commonly-used AWS CLI commands. It’s not intended to be a deep dive, nor is it intended to serve as any sort of comprehensive reference (the AWS CLI docs nicely fill that need).

This post does make a couple of important assumptions:

  1. This post assumes you already have a basic understanding of the key AWS concepts and terminology, and therefore doesn’t provide any definitions or explanations of these concepts.

  2. This post assumes the AWS CLI is configured to output in JSON. (If you’re not familiar with JSON, see this introductory article.) If you’ve configured your AWS CLI installation to output in plain text, then you’ll need to adjust these commands accordingly.

I’ll update this post over time to add more “commonly-used” commands, since each reader’s definition of “commonly used” may be different based on the AWS services consumed.

To list SSH keypairs in your default region:

aws ec2 describe-key-pairs

To use jq to grab the name of the first SSH keypair returned:

aws ec2 describe-key-pairs | jq -r '.KeyPairs[0].KeyName'

To store the name of the first SSH keypair returned in a variable for use in later commands:

KEY_NAME=$(aws ec2 describe-key-pairs | jq -r '.KeyPairs[0].KeyName')

More information on the use of jq can be found in this article or via the jq homepage, so this post doesn’t go into any detail on the use of jq in conjunction with the AWS CLI. Additionally, for all remaining command examples, I’ll leave the assignment of output values into a variable as an exercise for the reader.

To grab a list of instance types in your region, I recommend referring to Rodney “Rodos” Haywood’s post on determining which instances are available in your region.

To list security groups in your default region:

aws ec2 describe-security-groups

To retrieve security groups from a specific region (us-west-1, for example):

aws --region us-west-1 ec2 describe-security-groups

To use the jp tool to grab the security group ID of the group named “internal-only”:

aws ec2 describe-security-groups | jp "SecurityGroups[?GroupName == 'internal-only'].GroupId"

The jp command was created by the same folks who work on the AWS CLI; see this site for more information.

To list the subnets in your default region:

aws ec2 describe-subnets

To grab the subnet ID for a subnet in a particular Availability Zone (AZ) in a region using jp:

aws ec2 describe-subnets | jp "Subnets[?AvailabilityZone == 'us-west-2b'].SubnetId"

To describe the Amazon Machine Images (AMIs) you could use to launch an instance:

aws ec2 describe-images

This command alone isn’t all that helpful; it returns too much information. Filtering the information is pretty much required in order to make it useful.

To filter the list of images by owner:

aws ec2 describe-images --owners 099720109477

To use server-side filters to further restrict the information returned by the aws ec2 describe-images command (this example finds Ubuntu 14.04 “Trusty Tahr” AMIs in your default region):

aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-trusty-14.04*' Name=virtualization-type,Values=hvm

To combine server-side filters and a JMESPath query to further refine the information returned by the aws ec2 describe-images command (this example returns the latest Ubuntu 14.04 “Trusty Tahr” AMI in your default region):

aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-trusty-14.04*' Name=virtualization-type,Values=hvm --query 'sort_by(Images, &Name)[-1].ImageId'

Naturally, you can manipulate the filter values to find other types of AMIs. To find the latest CentOS Atomic Host AMI in your default region:

aws ec2 describe-images --owners 410186602215 --filters Name=name,Values="*CentOS Atomic*" --query 'sort_by(Images,&CreationDate)[-1].ImageId'

To find the latest CoreOS Container Linux AMI from the Stable channel (in your default region):

aws ec2 describe-images --filters Name=name,Values="*CoreOS-stable*" Name=virtualization-type,Values=hvm --query 'sort_by(Images,&CreationDate)[-1].ImageId'

Further variations on these commands for other AMIs are left as an exercise for the reader.

To launch an instance in your default region (assumes you’ve populated the necessary variables using other AWS CLI commands):

aws ec2 run-instances --image-id $IMAGE_ID --count 1 --instance-type t2.micro --key-name $KEY_NAME --security-group-ids $SEC_GRP_ID --subnet-id $SUBNET_ID

To list the instances in your default region:

aws ec2 describe-instances

To retrieve information about instances in your default region and use jq to return only the Instance ID and public IP address:

aws ec2 describe-instances | jq '.Reservations[].Instances[] | {instance: .InstanceId, publicip: .PublicIpAddress}'

To terminate one or more instances:

aws ec2 terminate-instances --instance-ids $INSTANCE_IDS

To remove a rule from a security group:

aws ec2 revoke-security-group-ingress --group-id $SEC_GROUP_ID --protocol <tcp|udp|icmp> --port <value> --cidr <value>

To add a rule to a security group:

aws ec2 authorize-security-group-ingress --group-id $SEC_GROUP_ID --protocol <tcp|udp|icmp> --port <value> --cidr <value>
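
For instance, to allow inbound SSH from a specific network (the CIDR block here is illustrative, and $SEC_GROUP_ID is assumed to be populated as shown earlier):

aws ec2 authorize-security-group-ingress --group-id $SEC_GROUP_ID --protocol tcp --port 22 --cidr 203.0.113.0/24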

To create an Elastic Container Service (ECS) cluster:

aws ecs create-cluster [--cluster-name <value>]

If you omit --cluster-name, the cluster is simply named “default”. If you use a name other than “default”, you’ll need to be sure to add the --cluster <value> parameter to all other ECS commands. This guide assumes a name other than “default”.

To delete an ECS cluster:

aws ecs delete-cluster --cluster <value>

To add container instances (instances running ECS Agent) to a cluster:

aws ec2 run-instances --image-id $IMAGE_ID --count 3 --instance-type t2.medium --key-name $KEY_NAME --subnet-id $SUBNET_ID --security-group-ids $SEC_GROUP_ID --user-data file://user-data --iam-instance-profile ecsInstanceRole

The example above assumes you’ve already created the necessary IAM instance profile, and that the file user-data contains the necessary instructions to prep the instance to join the ECS cluster. Refer to the Amazon ECS documentation for more details.
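
As a minimal sketch, the user-data file can do little more than point the ECS Agent at your cluster (substitute the cluster name you chose above for <value>):

#!/bin/bash
# Tell the ECS Agent which cluster this instance should join
echo ECS_CLUSTER=<value> >> /etc/ecs/ecs.config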

To register a task definition (assumes the JSON filename referenced contains a valid ECS task definition):

aws ecs register-task-definition --cli-input-json file://<filename>.json

To create a service:

aws ecs create-service --cluster <value> --service-name <value> --task-definition <family:task:revision> --desired-count 2

To scale down ECS services:

aws ecs update-service --cluster <value> --service <value> --desired-count 0

To delete a service:

aws ecs delete-service --cluster <value> --service <value>

To de-register container instances (instances running ECS Agent):

aws ecs deregister-container-instance --cluster <value> --container-instance $INSTANCE_IDS --force

If there are additional AWS CLI commands you think I should add here, feel free to hit me up on Twitter. Try to keep them as broadly applicable as possible, so that I can maximize the benefit to readers. Thanks!

Supermicro A2SDi-H-TP4F Review 16 Core SoC With Power Consumption

The content below is taken from the original (Supermicro A2SDi-H-TP4F Review 16 Core SoC With Power Consumption), to continue reading please visit the site. Remember to respect the Author & Copyright.

Armed with the Intel Atom C3955 top of the line 16 core SoC the Supermicro A2SDi-H-TP4F has tons of features including 2x 10Gbase-T, 2x SFP+ and 12x SATA

The post Supermicro A2SDi-H-TP4F Review 16 Core SoC With Power Consumption appeared first on ServeTheHome.

Tip: Free Level 200 Azure Stack Training from Opsgility

The content below is taken from the original (Tip: Free Level 200 Azure Stack Training from Opsgility), to continue reading please visit the site. Remember to respect the Author & Copyright.

Training is great, free training is even better…

With the final month-ish countdown to Azure Stack multi-node systems being delivered on-premises underway, training materials and courses have begun to pop up online to help get people up to speed. One of the first is from Opsgility, offering a ten module Level 200 course in Implementing Azure Stack solutions.

The course is available here: http://bit.ly/2vHvWi6

Opsgility is of course a paid-for service; however, if you sign up for a free Visual Studio Dev Essentials account, three months of Opsgility access are included for free, as well as the ever-useful $25 of free Azure credits every month for a year.

Enjoy!

The post Tip: Free Level 200 Azure Stack Training from Opsgility appeared first on Cloud and Datacenter Thoughts and Tips.

When is a Barracuda not a Barracuda? When it’s really AWS S3

The content below is taken from the original (When is a Barracuda not a Barracuda? When it’s really AWS S3), to continue reading please visit the site. Remember to respect the Author & Copyright.

Barracuda’s backup appliances can now replicate data to Amazon’s S3 cloud silos.

According to the California-based outfit, its backup appliance is now available in three flavors:

  • On-premises physical server
  • On-premises virtual server
  • In-cloud virtual server

Barracuda backup schematic

Data can be backed up from Office 365, physical machines, and virtual machines running in Hyper-V or VMware systems, to an appliance. This box can then replicate its backups to a second appliance, typically at a remote site, providing a form of disaster recovery, or send the data to S3 buckets in AWS. For small and medium businesses with no second data centre, replicating to Amazon’s silos provides an off-site protection resource.

Barracuda’s customers, resellers and managed services providers can add this replicate-to-Amazon option to their Barracuda product set. Users don’t have to deploy compute processes inside AWS or become AWS infrastructure experts; Barracuda takes care of that side of things, apparently. The biz provides the backend service, which piggybacks on AWS, and your boxes connect to it.

Barracuda Backup replication to AWS is available now in North America, with worldwide deployment expected to be rolled out in the coming months. ®
