Real World Troubleshooting Tips for OpenStack Operations

In this session we will present troubleshooting tips – such as debugging with clients and reading debug logs – along with a walkthrough, with examples, of the steps behind a “nova boot” command, from a team that uses these tips daily with its storage and operations teams.

Machine Learning, Recommendation Systems, and Data Analysis at Cloud Academy

The content below is taken from the original (Machine Learning, Recommendation Systems, and Data Analysis at Cloud Academy), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s guest post, Alex Casalboni and Giacomo Marinangeli of Cloud Academy discuss the design and development of their new Inspire system.


Jeff;


Our Challenge
Mixing technology and content has been our mission at Cloud Academy since the very early days. We are builders and we love technology, but we also know content is king. Serving our members with the best content and creating smart technology to automate it is what kept us up at night for a long time.

Companies are always fighting for people’s time and attention, and at Cloud Academy we face those same challenges. Our goal is to empower people and help them learn new cloud skills every month, but we kept asking ourselves: “How much content is enough? How can we understand our customers’ goals and help them select the best learning paths?”

With this vision in mind, about six months ago we created a project called Inspire, which focuses on machine learning, recommendation systems and data analysis. Inspire solves our problem on two fronts. First, we see an incredible opportunity in improving the way we serve our content to our customers. It will allow us to provide better suggestions and create dedicated learning paths based on an individual’s skills, objectives and industries. Second, Inspire represents an incredible opportunity to improve our operations. We manage content that requires constant updates across multiple platforms, with a continuously growing library of new technologies.

For instance, getting a notification to train on a new EC2 scenario that you’re using in your project can really make a difference in the way you learn new skills. By collecting data across our entire product – such as when you watch a video or complete an AWS quiz – we can feed that information to Inspire. Day by day, it keeps personalizing your experience through different channels inside our product. The end result is a unique learning experience that will follow you throughout your entire journey and enable a customized continuous training approach based on your skills, job and goals.

Inspire: Powered by AWS
Inspire is heavily based on machine learning and AI technologies, enabled by our internal team of data scientists and engineers. Technically, this involves several machine learning models, which are trained on the huge amount of collected data. Once the Inspire models are fully trained, they need to be deployed in order to serve new predictions, at scale.

Here the challenge has been designing, deploying and managing a multi-model architecture, capable of storing our datasets, automatically training, updating and A/B testing our machine learning models, and ultimately offering a user-friendly and uniform interface to our website and mobile apps (available for iPhone and Android).

From the very beginning, we decided to focus on high availability and scalability. With this in mind, we designed an (almost) serverless architecture based on AWS Lambda. Every machine learning model we build is trained offline and then deployed as an independent Lambda function.

Given the current maximum execution time of 5 minutes, we still run the training phase on a separate EC2 Spot instance, which reads the dataset from our data warehouse (hosted on Amazon RDS), but we are looking forward to migrating this step to a Lambda function as well.

We are using Amazon API Gateway to manage RESTful resources and API credentials, by mapping each resource to a specific Lambda function.

The overall architecture is logically represented in the diagram below:

Both our website and mobile app can invoke Inspire with simple HTTPS calls through API Gateway. Each Lambda function logically represents a single model and aims at solving a specific problem. More in detail, each Lambda function loads its configuration by downloading the corresponding machine learning model from Amazon S3 (i.e. a serialized representation of it).
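
To make the pattern concrete, here is a minimal sketch of what such a prediction function could look like. The bucket name, object key, request shape and scikit-learn-style model interface are all assumptions for illustration, not Cloud Academy’s actual code.

```python
import json
import pickle

import boto3

S3_BUCKET = "inspire-models"        # assumed bucket name
S3_KEY = "recommender/latest.pkl"   # assumed key of the serialized model

s3 = boto3.client("s3")
_model = None  # cached across warm invocations of the same Lambda container


def _load_model():
    """Download and deserialize the model from S3 on the first (cold) invocation."""
    global _model
    if _model is None:
        body = s3.get_object(Bucket=S3_BUCKET, Key=S3_KEY)["Body"].read()
        _model = pickle.loads(body)
    return _model


def lambda_handler(event, context):
    """Serve a prediction for a request forwarded by API Gateway."""
    features = json.loads(event["body"])["features"]    # assumed request shape
    prediction = _load_model().predict([features])      # assumed scikit-learn-style interface
    return {"statusCode": 200,
            "body": json.dumps({"prediction": [float(p) for p in prediction]})}
```

Because the deserialized model is cached at module level, warm invocations skip the S3 download entirely, which is what makes the very low response times mentioned later in the post plausible.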

Behind the scenes, and without any impact on scalability or availability, an EC2 instance takes care of periodically updating these S3 objects as the outcome of the offline training phase.

Moreover, we want to A/B test and optimize our machine learning models: this is transparently handled in the Lambda function itself by means of SixPack, an open-source A/B testing framework which uses Redis.

Data Collection Pipeline
As far as data collection is concerned, we use Segment.com as a data hub: with a single API call, it allows us to log events into multiple external integrations, such as Google Analytics, Mixpanel, etc. We also developed our own custom integration (via webhook) in order to persistently store the same data in our AWS-powered data warehouse, based on Amazon RDS.

Every event we send to Segment.com is forwarded to a Lambda function – passing through API Gateway – which takes care of storing real-time data into an SQS queue. We use this queue as a temporary buffer in order to avoid scalability and persistency problems, even during downtime or scheduled maintenance. The Lambda function also verifies the authenticity of the received data using a signature uniquely provided by Segment.com.
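
A rough sketch of that webhook function is below. The header name, HMAC-SHA1 scheme and environment variables are assumptions for illustration rather than Segment.com’s documented contract.

```python
import hashlib
import hmac
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["EVENT_QUEUE_URL"]             # assumed environment variable
SHARED_SECRET = os.environ["SEGMENT_SHARED_SECRET"]   # assumed shared secret with Segment.com


def lambda_handler(event, context):
    """Accept a Segment.com webhook call (via API Gateway) and buffer the event in SQS."""
    body = event["body"]
    signature = event["headers"].get("x-signature", "")  # assumed header name

    # Reject events whose signature does not match the shared secret.
    expected = hmac.new(SHARED_SECRET.encode(), body.encode(), hashlib.sha1).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return {"statusCode": 403, "body": "invalid signature"}

    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
    return {"statusCode": 200, "body": json.dumps({"buffered": True})}
```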

Once raw data has been written onto the SQS queue, an elastic fleet of EC2 instances reads each individual event – hence removing it from the queue without conflicts – and writes it into our RDS data warehouse, after performing the required data transformations.
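
The consumer side of the queue might look roughly like the sketch below; the queue URL, the transformation and the warehouse insert are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inspire-events"  # placeholder


def transform(raw_event):
    """Apply the required data transformations (application-specific)."""
    return raw_event


def write_to_warehouse(row):
    """Insert the transformed row into the RDS data warehouse (via a DB driver or ORM)."""
    print(row)  # placeholder for the actual INSERT


def consume_forever():
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)  # long polling
        for msg in resp.get("Messages", []):
            write_to_warehouse(transform(json.loads(msg["Body"])))
            # Delete only after a successful write so an event is never lost; the queue's
            # visibility timeout keeps other workers from processing it at the same time.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```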

The serverless architecture we have chosen drastically reduces the costs and problems of our internal operations, besides providing high availability and scalability by default.

Our Lambda functions have a pretty constant average response time – even during load peaks – and the SQS temporary buffer gives us an effectively unlimited time and storage cushion before any data gets lost.

At the same time, our machine learning models won’t need to scale up in a vertical or distributed fashion since Lambda takes care of horizontal scaling. Currently, they have an incredibly low average response time of 1ms (or less).

We consider Inspire an enabler for everything we do from a product and content perspective, both for our customers and our operations. We’ve worked to make this the core of our technology, so that its contributions can quickly be adapted and integrated by everyone internally. In the near future, it will be able to independently make decisions for our content team while focusing on our customers’ needs. At the end of the day, Inspire really answers our team’s doubts about which content we should prioritize, what works better and exactly how much of it we need. Our ultimate goal is to improve our customers’ learning experience by building real intelligence into Cloud Academy.

Join our Webinar
If you would like to learn more about Inspire, please join our April 27th webinar – How we Use AWS for Machine Learning and Data Collection.

Alex Casalboni, Senior Software Engineer, Cloud Academy
Giacomo Marinangeli, CTO, Cloud Academy

PS – Cloud Academy is hiring – check out our open positions!

Migrating to the cloud: A practical, how-to guide

The content below is taken from the original (Migrating to the cloud: A practical, how-to guide), to continue reading please visit the site. Remember to respect the Author & Copyright.

For many companies, the cloud’s economies of scale, flexibility and predictable payment structures are becoming too attractive to ignore. Looking to avoid costly capital outlays for new servers and the high overhead of managing on-premises architecture, many companies have started moving to the cloud as a cost effective option to develop, deploy and manage their IT portfolio.

To be sure, the benefits of cloud computing go well beyond economies of scale and efficiency. Consider that the vast number of servers that help to drive down costs are also on tap to provide virtually inexhaustible levels of compute power, which can redefine the possibilities for virtually every aspect of your business.

But regardless of a company’s size, migrating to the cloud is certainly no small task. When done right, the process will cause a company to reconsider its culture, its processes, individual roles and governance—not to mention engineering.

Download the free Enterprise Cloud Strategy e-book for a road map to navigate your way.

For the past several years, Barry Briggs and I have been on the front lines in helping companies, including Microsoft, navigate these challenges of migrating to the cloud. We’ve seen firsthand how companies are using the cloud’s potential to transform and reinvent themselves.

For example, the sales team for Minneapolis-based 3M Parking Systems needed better insight into thousands of new technology installations of which the company had recently taken ownership following its acquisition of parking, tolling, and automatic license plate reader businesses.

In just two days’ time, 3M created a tracking solution that connects multiple types of mobile devices, thousands of machines and data sources, and a cloud platform (using Xamarin Studio, Visual Studio and Azure Mobile Services). Now the 3M sales team can immediately see where equipment is installed, allowing them to work more autonomously and productively while out in the field.

Another example is work done with a London-based financial services firm, Aviva, that wanted to create a first-of-its-kind pricing model that would provide personalized prices, reducing insurance premiums for appropriate customers. Historically, this would have required installing black boxes in vehicles to collect and transmit telemetry data back to the company’s data center, which also would have required increased storage and compute capacity.

A solution like this would not have penciled out, but with the help of a handful of Microsoft tools and technologies, Aviva was able to design, develop and release the Aviva Drive app in just over a year’s time. The result was a pricing model that gave customers as much as a 20 percent discount on their premiums, and provided Aviva with a significant competitive advantage.

What 3M and Aviva (and many others) have since discovered is the shifting balance between maintenance and innovation: The automation of many day-to-day responsibilities, made possible by the technologies underpinning their cloud computing platform, has freed up IT to devote more time toward creating and administering applications and services that will move the bar for the business.

Based on these findings, and those of many colleagues, Barry and I have written this e-book, Enterprise Cloud Strategy. What you’ll find is an in-depth guide to help you start your own migration, providing practical suggestions for getting started with experimentation, assembling a team to drive the process, and making the most of game-changing technologies such as advanced analytics and machine learning.

Happy moving!

Read James Staten’s Azure blog post on how the Enterprise Cloud Strategy e-book can help you prioritize for a successful hybrid cloud strategy.

Join me on Wednesday, May 11 at 10:00 am PST for the Roadmap to Build your Enterprise Cloud Strategy webinar.

Azure Web Apps Gallery available only in new Azure portal

The content below is taken from the original (Azure Web Apps Gallery available only in new Azure portal), to continue reading please visit the site. Remember to respect the Author & Copyright.

Web App Marketplace allows you to quickly deploy dynamic blogs, CMS platforms, e-commerce sites, and more, with ready-to-use Azure Web Apps and templates including hundreds of popular open source solutions by Microsoft, partners, and the community.

Azure users can create an open source solution by clicking New -> Web -> From Gallery in the old Azure Management portal, or by using the Web Marketplace in the new Azure portal.

In an effort to improve the web apps user experience, we will no longer support the creation of Gallery applications from the old Azure Management portal, starting in June 2016.

When you click New -> Web -> From Gallery in the old Azure Management portal, you will be directed to use the new Azure portal, where you can always access the Web Marketplace.

Please share your feedback on how we can improve the Web Apps Marketplace and any new apps you would like to see in the Azure Marketplace on UserVoice.

Hybrid cloud: How you can take advantage of the best of both worlds

The content below is taken from the original (Hybrid cloud: How you can take advantage of the best of both worlds), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s a fact: the hybrid cloud has emerged as a “dominant deployment model.” Indeed, the appeal of hybrid cloud among IT professionals is now “universal.”

Why?

Cloud is here to stay, but CIOs know that the transition to cloud computing won’t happen overnight. Responsible CIOs and IT managers will want to ensure that their applications and data in the cloud are secure, that migrations will generate cost savings, and above all that business operations will continue uninterrupted during the migration. Your application users should not know or care whether their application is hosted locally or in the cloud.

To achieve this, you should look at the cloud as a model – not a place – where, with thoughtful planning, you have the power to ensure the right balance of agility and control for each application. This means you’ll need an action plan for application deployment across a mixed, hybrid environment: a plan that ensures application usage is seamless for users and that you, as an IT administrator, retain control (as covered in my last post).

A well-designed hybrid cloud provides your users with seamless, more secure access to applications – no matter where those applications are.

Getting started with your hybrid cloud

Enterprise Applications

How do you accomplish that? To start with, you need a more secure, high-speed interconnect between your data center and the applications you host in the cloud. Microsoft provides several solutions, including virtual private network (VPN) solutions as well as a dedicated line solution (Microsoft ExpressRoute). With ExpressRoute, your data center is linked to Azure via a private, low-latency connection that bypasses the public internet.

Both of these technologies enable IT to set up their DNS addressing so that applications in the cloud continue to appear as part of your local IT data center.

What about identity? You’ll want your users to access applications without having to re-enter credentials – of course. Single sign-on (SSO), a capability provided by Azure Active Directory, is the final piece in your virtual data center. Azure AD allows you to synchronize identities with your on-premises Active Directory, and thus your users log on to the (virtual) network once and are transparently granted access to corporate applications without regard to their hosting location.

Even before you begin migrating applications, you can take advantage of the hybrid cloud. A cloud-based marketing application can easily and securely send leads back to an on-premises database – while fully taking advantage of the scale, mobile access, and global reach of the cloud.

Think as well about using the agility that public cloud offers in hybrid management with Azure’s integrated Operations Management Suite, including inexpensive data center-to-cloud backup, and cloud-based disaster recovery. You can also take advantage of cloud services to quickly connect your applications to external commerce systems via the industry standard X.12 Electronic Data Interchange protocol and others.

Extending your hybrid cloud to the future

Emerging technologies provide new and exciting capabilities to the hybrid cloud. Azure Stack, currently in Technical Preview, brings many of the capabilities of the public cloud to your data center. Enterprise IT will then be able to adopt a “write-once, deploy-anywhere” approach to their applications, selecting the public, private, or hosted cloud that makes sense for each application or service based on business rules and technical requirements.

In addition, new application packaging technologies – called containers – make it possible to easily burst from the on-premises data center by adding new instances in the public cloud when additional capacity is necessary.

The best of both worlds

Microsoft deeply believes in the importance of hybrid cloud, and in fact, as a hybrid cloud user itself, utilizes all of the technologies and approaches we’ve mentioned above in our own IT environment. With a sound hybrid cloud strategy, you can take advantage of all the exciting cloud technologies now available while preserving your investment in your data centers – and you can migrate applications and data at your own pace.

Interested in hybrid cloud? Check out Enterprise Cloud Strategy, by former Microsoft IT CTO and technology thought leader Barry Briggs and Eduardo Kassner, the executive in charge of Microsoft’s field Cloud Solutions Architects. It’s free!

Four stops on your journey to the cloud: Experiment, migrate, transform, repeat

The content below is taken from the original (Four stops on your journey to the cloud: Experiment, migrate, transform, repeat), to continue reading please visit the site. Remember to respect the Author & Copyright.

There’s a period of anticipation at the start of any journey when you’re researching your intended destination to find the right spots to stay, places to eat and things to do. For organizations migrating to the cloud, that phase is known in the industry as experimentation.

The experimentation phase is important because cloud computing is an unknown for many businesses. As I explained in my last post, developing an enterprise cloud strategy isn’t simply about finding a more affordable way to manage IT. It’s about finding ways to get more out of many facets of your business. Experimentation is useful in helping choose where to get started on the road to the cloud, and envisioning your ultimate destination.

When Microsoft was moving to the cloud, we did a little “tire-kicking” of our own. We picked a project that was non-essential to the company — a cloud-based version of an app that we built every year for our month-long auction to raise money for charities. It was a great opportunity to see the scalability of the application over time as the end of the auction drew near and usage of the app increased.

Concurrent with engineers experimenting with how to build, test and deploy apps in the cloud, business and IT departments need to envision how they can help their company leap ahead through services or applications that are broader in scope, that create agility, and that take advantage of cloud services such as machine learning, big data and streaming analytics. This exercise helps to crystallize where moving to the cloud could take the company. It can also help mentally prepare for the next phase: migration.

Migration is when the bulk of the IT portfolio is moved to the cloud. It’s also during migration that technical staff, operations, the executive team, business sponsors, security professionals, regulatory compliance staff, legal and HR must all cooperate and collaborate.

Almost simultaneous with the migration, the transformation process will begin. Some of your apps will move to the cloud as virtual machines, leaving them more or less intact. In other cases, you might opt for a PaaS deployment model, in which you build a new application from the ground up to take better advantage of capabilities such as data replication, scalability and cloud services like Microsoft Azure Active Directory, which provides robust identity management.

It won’t be long before you’re back for more.

One thing that we learned very early in Microsoft’s own migration to the cloud is the value of a culture of continuous experimentation. In the cloud era, success requires that businesses experiment, fail fast and learn fast. The lessons learned from your failures and successes will help your company gain greater value and create disruptive innovation.

Read more on moving to the cloud and transforming your business in Enterprise Cloud Strategy, an e-book that I co-published with Barry Briggs. You’ll find a wealth of knowledge from Microsoft and some of its customers, all intended to help you succeed in your migration to the cloud.

Consider it the travel guide for your journey to the cloud, without the fancy pictures.

Join me on Wednesday, May 11 at 10:00 am PST for the Roadmap to Build your Enterprise Cloud Strategy webinar.

3 options for applications that don’t fit the cloud

The content below is taken from the original (3 options for applications that don’t fit the cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Not every application and data set is right for private or public cloud computing. Indeed, as much as 50 percent of today’s workloads are not a good fit for the cloud.

However, all is not lost. You have alternatives that can drive as much value as cloud computing from legacy workloads. As we continue to move the low-hanging fruit to the cloud, we need to find new and more efficient homes for the misfits. Here are three options.

1. Find a SaaS alternative

More than 2,000 SaaS companies provide everything from HR automation to automotive garage management applications. If your legacy application is not right for the cloud, consider a SaaS replacement that not only fits but exceeds expectations.

2. Use a managed service provider

If the application and data can’t move to the cloud, managed services providers (MSPs) may provide a better home, hosted on someone else’s hardware in someone else’s data center. Many of these MSPs are tightly integrated with public clouds, such as Amazon Web Services, and can provide a happy, cost-effective home for your workload, plus have those workloads work and play well with workloads in the public clouds.

3. Consider refactoring

Although refactoring means recoding, which means money and time, it sometimes makes sense to redo an application so that it’s cloud-compatible.

Do You Live in The Countryside? You Might Have to Ask the Government to Give You Broadband

The content below is taken from the original (Do You Live in The Countryside? You Might Have to Ask the Government to Give You Broadband), to continue reading please visit the site. Remember to respect the Author & Copyright.

And you could actually be waiting years for it ever to come to your doorstep.

How to Reuse Waste Heat from Data Centers Intelligently

The content below is taken from the original (How to Reuse Waste Heat from Data Centers Intelligently), to continue reading please visit the site. Remember to respect the Author & Copyright.

Data centers worldwide are energy transformation devices. They draw in raw electric power on one side, spin a few electrons around, spit out a bit of useful work, and then shed more than 98 percent of the electricity as not-so-useful low-grade heat energy. They are almost the opposite of hydroelectric dams and wind turbines, which transform kinetic energy of moving fluids into clean, cheap, highly transportable electricity to be consumed tens or hundreds of miles away.

Image: Energetic Consulting

But maybe data centers don’t have to be the complete opposite of generation facilities. Energy transformation is not inherently a bad thing. Cradle-to-Cradle author and thought leader William McDonough teaches companies how to think differently, so that process waste isn’t just reduced, but actively reused. This same thinking can be applied to data center design so that heat-creating operations like data centers might be paired with heat-consuming operations like district energy systems, creating a closed-loop system that has no waste.

It’s not a new idea for data centers. There are dozens of examples around the globe of data centers cooperating with businesses in the area to turn waste heat into great heat. Lots of people know about IBM in Switzerland reusing data center heat to warm a local swimming pool. In Finland, data centers by Yandex and Academica share heat with local residents, replacing the heat energy used by 500-1000 homes with data center energy that would have been vented to the atmosphere. There are heat-reuse data centers in Canada, England, even the US. Cloud computing giant Amazon has gotten great visibility from reuse of a nearby data center’s heat at the biosphere project in downtown Seattle.

Rendering of an Amazon campus currently under construction in Seattle’s Denny Triangle neighborhood

Crank Up the Temperature

There are two big issues with data center waste heat reuse: the relatively low temperatures involved and the difficulty of transporting heat. Many of the reuse applications to date have used the low-grade server exhaust heat in an application physically adjacent to the data center, such as a greenhouse or swimming pool in the building next door. This is reasonable given the relatively low temperatures of data center return air, usually between 28°C and 35°C (80-95°F), and the difficulty in moving heat around. Moving heat energy frequently requires insulated ducting or plumbing instead of cheap, convenient electrical cables. Trenching and installation to run a hot water pipe from a data center to a heat user may cost as much as $600 per linear foot. Just the piping to share heat with a facility one-quarter mile away might add $750,000 or more to a data center construction project. There’s currently not much that can be done to reduce this cost.
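
That piping estimate checks out with simple arithmetic, assuming a straight quarter-mile run at the quoted per-foot cost:

```python
# Quick check of the piping estimate quoted above, assuming a straight run.
cost_per_foot = 600          # USD per linear foot of trenching and installation
distance_feet = 0.25 * 5280  # one-quarter mile in feet

print(cost_per_foot * distance_feet)  # 792,000 USD, i.e. "$750,000 or more"
```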

To address the low-temperature issue, some data center operators have started using heat pumps to increase the temperature of waste heat, making the thermal energy much more valuable, and marketable. Waste heat coming out of heat pumps at temperatures in the range of 55°C to 70°C (130-160°F) can be transferred to a liquid medium for easier transport and can be used in district heating, commercial laundry, industrial process heat, and many other applications. There are even High Temperature (HT) and Very High Temperature (VHT) heat pumps capable of moving low-grade data center heat up to 140°C.

Image: Energetic Consulting

The heat pumps appropriate for this type of work are highly efficient, with a Coefficient of Performance (COP) of 3.0 to 6.0, and the energy used by the heat pumps gets added to the stream of energy moving to the heat user, as shown in the diagram below. If a data center is using heat pumps with a COP of 5.0, running on electricity that costs $0.10 per kWh, the energy can be moved up to higher temperatures for as little as $0.0083 per kWh.

Waste heat could be a source of income for the data center. New York’s Con Edison produces steam heat at $0.07 per kWh (€0.06 per kWh), and there have been examples of heat-and-power systems selling waste heat to district heating systems for €0.1-€0.3 per kWh. For a 1.2MW data center that sells all of its waste heat, that could translate into more than $350,000 (€300,000) per year. That may be as much as 14% of the annual gross rental income from a data center that size, with very high profit margins.

Closing the Loop

There’s also the possibility of combining data centers with power plants for increased efficiency and reuse of waste heat. Not just in the CHP-data center sense described by Christian Mueller in this publication in February, or the purpose-built complex like The Data Centers LLC proposed in Delaware. Building data centers in close proximity to existing power plants could be beneficial in several ways. In the US, transmission losses of 8-10% are typical across the grid. Co-locating data centers right next to power plants would eliminate this loss and the capital expense of transporting large amounts of power.

Second, power plants make “dumb” electrons, general-purpose packets of energy that need to be processed by data centers to turn into “smart” electrons that are part of someone’s Facebook update screen, a weather model graphic output, or digital music streaming across the internet. Why transport the dumb electrons all the way to the data center to be converted?

Third, a co-located data center could transfer heat pump-boosted thermal energy back to the power plant for use in the feed water heater or low-pressure turbine stages, creating a neat closed-loop system.

There are important carbon footprint benefits in addition to the financial perks. Using the US national average of 1.23 lb CO2 per kWh, a 1.2MW data center could save nearly 6,000 metric tons of CO2 per year by recycling the waste heat.
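
As a quick back-of-the-envelope check of that figure, assuming the 1.2MW IT load runs around the clock and all of its waste heat is recycled:

```python
# Back-of-the-envelope check of the carbon figure quoted above.
power_mw = 1.2
hours_per_year = 8760
co2_lb_per_kwh = 1.23        # US national average cited in the article
lb_per_metric_ton = 2204.62

energy_kwh = power_mw * 1000 * hours_per_year                  # ~10.5 million kWh
print(round(energy_kwh * co2_lb_per_kwh / lb_per_metric_ton))  # ~5,865 metric tons of CO2
```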

These applications are starting to appear in small and large projects around the world. The key is to find an application that needs waste heat year round, use efficient, high-temperature heat pumps, and find a way to actively convert this wasted resource into revenue and carbon savings.

About the author: Mark Monroe is president at Energetic Consulting. His past endeavors include executive director of The Green Grid and CTO of DLB Associates. His 30 years’ experience in the IT industry includes data center design and operations, software development, professional services, sales, program management, and outsourcing management. He works on sustainability advisory boards with the University of Colorado and local Colorado governments, and is a Six Sigma Master Black Belt.

RightScale Cuts Own Cloud Costs by Switching to Docker

The content below is taken from the original (RightScale Cuts Own Cloud Costs by Switching to Docker), to continue reading please visit the site. Remember to respect the Author & Copyright.

Less than two months ago, the engineering team behind the cloud management platform RightScale kicked off a project to rethink the entire infrastructure its services run on. They decided to package as much of the platform’s backend as possible in Docker containers, the method of deploying software whose popularity has spiked over the last couple of years, becoming one of the most talked-about technology shifts in IT.

It took the team seven weeks to complete most of the project, and Tom Miller, RightScale’s VP of engineering, declared the project a success in a blog post Tuesday, saying they achieved both goals they had set out to achieve: reduced cost and accelerated development.

There are two Dockers. There is the Docker container, which is a standard, open source way to package a piece of software in a filesystem with everything that piece of software needs to run: code, runtime, system tools, system libraries, etc. There is also Docker Inc., the company that created the open source technology and that has built a substantial set of tools for developers and IT teams to build, test, and deploy applications using Docker containers.

In the sense that a container can contain an application that can be moved from one host to another, Docker containers are similar to VMs. Docker argues that they are a more efficient, lighter-weight way to package software than VMs, since each VM has its own OS instance, while Docker runs on top of a single OS, and countless individual containers can be spun up in that single environment.

Another big advantage of containers is portability. Because containers are standardized and contain everything the application needs to run, they can reportedly be easily moved from server to server, VM to VM (they can and do run in VMs), cloud to cloud, server to laptop, etc.

Google uses a technology similar to Docker containers to power its services, and many of the world’s largest enterprises have been evaluating and adopting containers since Docker came on the scene about two years ago.

Read more: Docker CEO: Docker’s Impact on Data Center Industry Will Be Huge

RightScale offers a Software-as-a-Service application that helps users manage their cloud resources. It supports all major cloud providers, including Amazon, Microsoft, Google, Rackspace, and IBM SoftLayer, and key private cloud platforms, such as VMware vSphere, OpenStack, and Apache CloudStack.

Its entire platform consists of 52 services that used to run on 1,028 cloud instances. Over the past seven weeks, the engineering team containerized 48 of those services in an initiative they dubbed “Project Sherpa.”

They only migrated 670 cloud instances to Docker containers. That’s how many instances ran dynamic apps. Static apps – things like SQL databases, Cassandra rings, MongoDB clusters, Redis, Memcached, etc. – wouldn’t benefit much from switching to containers, Miller wrote.

The instances running static apps now support containers running dynamic apps in a hybrid environment. “We believe that this will be a common model for many companies that are using Docker because some components (such as storage systems) may not always benefit from containerization and may even incur a performance or maintenance penalty if containerized,” he wrote.

As a result, the number of cloud instances running dynamic apps was reduced by 55 percent, and the cloud infrastructure costs of running those apps came down by 53 percent on average.

RightScale has also already noticed an improvement in development speed. The standardization and portability that containers offer help developers with debugging, with working on applications they have no experience with, and with flexibility in accessing integration systems. Product managers can check out features that are being developed without getting developers involved.

“There are certainly more improvements that we will make in our use of Docker, but we would definitely consider Project Sherpa a success based on the early results,” Miller wrote.

Cisco suggests network administration is fun and games

The content below is taken from the original (Cisco suggests network administration is fun and games), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cisco’s trying to get into the games business.

A recent trademark application for “Cisco Geek Factor” suggests the Borg wants to brand its own “Computer game programs; computer game software for use on mobile and cellular phones.”

United States Patent and Trademark Office filings are very brief: the only other information we have to go on is that the Trademark will cover the following:

“Entertainment services, namely, providing online computer games; providing an on-line computer game in the field of information technology and computer networking.”

That hardly sounds like something kids are going to avoid homework to play.

Cisco is, however, nearly always looking for a way to get more folks learning how to wrangle its routers and sling its switches. Young people, your forty-something correspondent understands, quite like games. Might The Borg be cooking up some kind of edu-tainment that millennials can experience on their smartphones? Or is Cisco gamifying the experience of network management, turning PING PING PING into PEW PEW PEW? ®

Portal router aims to deliver us from congested WiFi

The content below is taken from the original (Portal router aims to deliver us from congested WiFi), to continue reading please visit the site. Remember to respect the Author & Copyright.

What happens when former Qualcomm engineers decide to build a router of their own? You get something like Portal, an innocuous-looking device that aims to speed up WiFi networks using technology never before seen in consumer routers. It supports 802.11ac WiFi, but it works on all six channels of the 5GHz spectrum, whereas today’s routers only work on two channels. That’s a big deal — it means Portal is well-suited to delivering fast WiFi in places like dense apartment buildings.

It used to be that simply hopping onto a 5GHz network was enough to avoid the overcrowding in the 2.4GHz spectrum. But with more people upgrading their routers, even speedy 5GHz spectrum is getting filled up today.

"The fundamental problem is that as WiFi becomes more popular and applications becomes more demanding, your problem is not going to be ‘how fast does my router go?’," said Terry Ngo, CEO and co-founder of Ignition Design Labs, the company behind Portal. Instead, the real issue will become, "How does it survive in an increasingly connected environment?"

Portal uses a combination of features to deal with that dilemma. For one, it packs nine antennas inside its sleek, curved case, as well as 10 "advanced" radios. Ngo points out that there’s no need for the giant bug-like antennas we’re seeing on consumer routers today, like Netgear’s massive Nitehawk line. (Most of those long antennas are usually just empty plastic.) The Portal is also smart enough to hop between different 5GHz channels if it detects things are getting crowded. Most routers today pick a channel when they boot up and never move off of it.
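
As a toy illustration of that kind of congestion-aware decision – not Portal’s actual algorithm – a router could simply prefer the 5GHz channel on which a scan sees the fewest competing networks:

```python
# Hypothetical scan results: 5GHz channel -> number of competing networks observed.
scan_results = {36: 7, 52: 2, 100: 0, 116: 1, 132: 4, 149: 9}


def least_congested(channels):
    """Return the channel with the fewest competing networks."""
    return min(channels, key=channels.get)


print(least_congested(scan_results))  # -> 100 for this made-up scan
```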

In a brief demonstration, Ngo and his crew showed off just how capable the Portal is. While standing around 50 feet away from the router, with a few walls between us, the Portal clocked in at 25 Mbps download speeds and 5 Mbps uploads, with a latency of around 3ms. In comparison, a Netgear Nitehawk router saw download speeds of 2Mbps and upload speeds of 5Mbps from the same location, with 30ms of latency.

You’d still be able to stream 4K video from the Portal in that spot, whereas the Netgear might even give you trouble with an HD stream, depending on how congested the reception is. Portal was also able to stream three separate 4K videos at once, and, surprisingly, they didn’t even skip when the router changed wireless channels.

One of Portal’s features is particularly surprising: radar detection. That’s necessary to let it use a part of the 5GHz spectrum typically reserved for weather systems in the US. Most devices just avoid that spectrum entirely to avoid the ire of the FCC. By implementing continuous radar detection, Portal is able to turn off access to that spectrum for the few people who live near weather radars (usually, it’s only near airports and certain coastal areas). But even if you’re locked out from that bit of spectrum, Portal still gives you three more 5GHz channels than other consumer routers.

Just like Google’s OnHub router, which is also trying to solve our WiFi woes, Portal also relies on the cloud to optimize your network. For example, Portal will be able to know in advance if certain locations won’t have access to the 5GHz spectrum reserved for radar. It’ll also be able to keep track of how crowded WiFi channels get in your neighborhood, and it could optimize which channels are being used at different times of the day. There’s a bit of a privacy concern there, for sure, but using the cloud also lets Ignition Design Labs bring new wireless features to Portal without the need for expensive hardware.

Portal also includes five gigabit Ethernet ports, as well as two USB ports for streaming your content. That’s a notable difference from OnHub, which limited Ethernet ports in favor of a simpler design. Ignition Design Labs has also developed a mobile app for setting up and managing Portal, but you can also log onto its setup page just like a typical router.

While new routers like the Eero and Luma are great WiFi solutions for large homes, where reception range is a bigger issue, the Portal makes more sense for people living in apartments and other dense areas. But Portal also has an extended range solution, if you need it: You can just connect two units together in a mesh network (the company claims it also does this more efficiently than Eero and Luma).

Portal is launching on Kickstarter today, with the hopes of raising $160,000 over the next 60 days. You can snag one for yourself starting at $139, but, as usual, expect the final retail price to be higher. While I’m not very confident about gadget Kickstarters these days, the fact that the Ignition Design Labs folks have many years of experience dealing with wireless hardware gives me hope for the Portal. We’ll be getting a unit to test out soon, so stay tuned for updates.

Designing a DMZ for Azure Virtual Machines

The content below is taken from the original (Designing a DMZ for Azure Virtual Machines), to continue reading please visit the site. Remember to respect the Author & Copyright.

This article will show you three designs, each building on the other, for a demilitarized zone (DMZ) or perimeter network for Internet-facing n-tier applications based on Azure virtual machines and networking.

The DMZ

The concept of a DMZ or perimeter network is not new; it’s a classic design that uses a layered network security approach to minimize the attack footprint of an application.

In a simple design:

  1. Web servers are placed in one VLAN, with just TCP 80 and TCP 443 accessible from the Internet.
  2. Application servers are in another VLAN. Web servers can communicate with the application servers using just the application protocol. The Internet has no access to this VLAN.
  3. Database servers are in a third VLAN. Application servers can communicate with the database servers using only the database communications protocol. Web servers and the Internet have no access to this VLAN.

You can modify this design in many ways, including:

  • Adding additional application layer security.
  • Including reverse proxies.
  • Using logical implementations of multiple VLANs by using other methods of network isolation, such as network security groups (NSGs) in Azure.

The concept of a DMZ with n-tier applications (Image Credit: Aidan Finn)

So how do you recreate this concept in Azure for virtual machines? I’ll present you with three designs from Microsoft, each of which builds on the concepts of the previous ones.

Network Security Groups

The first and simplest way to build a DMZ in Azure is to use network security groups (NSGs). An NSG is a five-tuple rule that will allow or block TCP or UDP traffic between designated addresses on a virtual network.
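
To make “five-tuple” concrete, the conceptual sketch below evaluates a rule against source address, source port, destination address, destination port and protocol. It is a teaching aid only, not how Azure implements NSGs, and the subnet prefixes are made up.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network


@dataclass
class NsgRule:
    """A conceptual five-tuple rule plus an allow/deny action."""
    source_prefix: str
    source_ports: range
    dest_prefix: str
    dest_ports: range
    protocol: str
    allow: bool


def matches(rule, src_ip, src_port, dst_ip, dst_port, protocol):
    """True if the given flow is described by the rule's five-tuple."""
    return (ip_address(src_ip) in ip_network(rule.source_prefix)
            and src_port in rule.source_ports
            and ip_address(dst_ip) in ip_network(rule.dest_prefix)
            and dst_port in rule.dest_ports
            and protocol == rule.protocol)


# Example: allow HTTPS from anywhere into a hypothetical FrontEnd subnet (10.0.1.0/24).
allow_https = NsgRule("0.0.0.0/0", range(0, 65536), "10.0.1.0/24", range(443, 444), "TCP", True)
print(matches(allow_https, "203.0.113.7", 51000, "10.0.1.4", 443, "TCP"))  # True
```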

You can deploy an n-tier solution into a single virtual network that is split into two or more subnets; each subnet plays the role of a VLAN, as shown above. NSG rules are then created to restrict network traffic. In the below diagram, NSGs will:

  • Allow web traffic into the FrontEnd subnet.
  • Allow application traffic to flow from the FrontEnd subnet to the BackEnd subnet.
  • Block all other traffic.

A DMZ using Azure network security groups (Image Credit: Microsoft)

The benefit of this design is that it is very simple. The drawback of this design is that it assumes that your potential hackers are stuck in the 1990s; a modern attack tries to compromise the application layer. A port scan of the above from an external point will reveal that TCP 80/443 are open, so an attacker will try to attack those ports. A simple five-tuple rule will not block that traffic, so the hacker can either flood the target with a DDoS attack or exploit application vulnerabilities.

NSGs and a Firewall

Modern edge network devices can protect and enhance hosted applications with application-layer scanning and/or reverse proxy services. The Azure Marketplace allows you to deploy these kinds of devices from multiple vendors into your Azure virtual networks.

The design below uses a virtual network appliance to protect an application from threats; this offers more than just simple protocol filtering because the appliance understands the allowed traffic and can identify encapsulated risks.

Using a firewall virtual appliance with NSGs to create a DMZ (Image Credit: Microsoft)

NSGs are deployed to ensure that all communications from the Internet flow through the virtual appliance. NSGs also control the protocols and ports that are allowed for internal communications between the subnets.

Ideally, we’d like all communications inside the virtual network to flow through the virtual appliance, but the default routing rules of the network will prevent this from happening.

User Defined Routes, NSGs, and a Firewall

We can override the default routes of a virtual network using user-defined routes (UDRs). The following design uses one subnet in a single virtual network for each layer of the n-tier application. An additional subnet is created just for the virtual firewall appliance, which will secure the application.

UDRs are created to override the default routes between the subnets, forcing all traffic between subnets to route via the virtual firewall appliance. NSGs are created to enforce this routing and block traffic via the default routes.

An Azure DMZ made from user-defined routes, a virtual appliance firewall and NSGs (Image Credit: Microsoft)

The result is a DMZ where the virtual appliance controls all traffic to/from the Internet and between the subnets.

Tip: Try to use a next-generation firewall and complement this defense with additional security products that work with Azure Security Center, so that you have a single view of all trends and risks.

The post Designing a DMZ for Azure Virtual Machines appeared first on Petri.

DARPA is building acoustic GPS for submarines and UUVs

The content below is taken from the original (DARPA is building acoustic GPS for submarines and UUVs), to continue reading please visit the site. Remember to respect the Author & Copyright.

For all the benefits that the Global Positioning System provides to landlubbers and surface ships, GPS signals can’t penetrate seawater and therefore can’t be used by oceangoing vehicles like submarines or UUVs. That’s why DARPA is creating an acoustic navigation system, dubbed POSYDON (Positioning System for Deep Ocean Navigation), and has awarded the Draper group its development contract.

The space-based GPS system relies on a constellation of satellites that remain in a fixed position relative to the surface of the Earth. The GPS receiver in your phone or car’s navigation system triangulates the signals it receives from those satellites to determine your position. The POSYDON system will perform the same basic function, just with sound instead. The plan is to set up a small number of long-range acoustic sources that a submarine or UUV could use to similarly triangulate its position without having to surface.
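
As a rough illustration of the principle – not DARPA’s actual design – the sketch below estimates a 2-D position from acoustic travel times to three beacons at known locations, just as a GPS receiver does with satellite signals. All positions, times and the sound speed are made-up values.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 1500.0  # m/s, a rough average for seawater

beacons = np.array([[0.0, 0.0], [10000.0, 0.0], [0.0, 10000.0]])  # known source positions (m)
travel_times = np.array([3.333, 5.375, 4.472])                    # measured one-way times (s)
ranges = travel_times * SPEED_OF_SOUND                            # convert times to distances


def residuals(pos):
    """Difference between measured ranges and distances from a candidate position."""
    return np.linalg.norm(beacons - pos, axis=1) - ranges


estimate = least_squares(residuals, x0=np.array([1.0, 1.0])).x
print(estimate)  # roughly [3000, 4000] for these made-up measurements
```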

The system should be ready for sea trials by 2018. It will initially be utilized exclusively for military and government operations but, like conventional GPS before it, will eventually be opened up to civilians as well.

Cujo is a firewall for the connected smart home network

The content below is taken from the original (Cujo is a firewall for the connected smart home network), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Cujo protects everything on your network,” the company’s CEO, Einaras Gravrock says, describing his product in the simplest terms possible ahead of its Disrupt NY launch this week. “Think of it as an immunity system for your network.”

The Cujo is surprisingly unassuming, a small plastic stump with light up eyes that stands in adorable contrast to its mad dog name and home security mission statement.

The product is designed to bring enterprise-level security to the home network, helping to protect against attacks on the increasingly vulnerable world of networked devices, from laptops to smart light bulbs.

“Cujo is, for all intents and purposes, a smart firewall,” explains Gravrock. “It’s very seamless. It’s made for an average user to understand easily. You see every single thing on your network through your app. If you go to bad places or bad things come to you, we will block bad behavior and we will send you a friendly notification that someone tried to access your camera.”

The company demoed the product at Disrupt today by hacking a baby camera. On a page displaying all of the devices connected to the network, a warning popped up: “We blocked an unauthorized attempt to access device ‘IP camera’ from [IP number].” From there access to the feed can be cut off – or not, if there is no actual threat.

The $99 device (plus an $8.99 monthly service fee for unlimited devices) serves as a peer to a home router, monitoring all network connected devices for malicious activity and sending notifications when something happens, like suspicious file transfers or communications with far away IP addresses. It’s a bit like the Nest app, only for networked security, rather than fire alarms.

Gravrock stresses that exploits are less about individual devices than they are about opening up the entire network through a small and seemingly harmless smart gadget. “You may think, ‘so what, my light bulb is going to get hacked,’ ” the executive explains. “The challenge is what happens next. Once they’re in the network, they can get to the other devices. They can get to your camera, they can get to your PC and extract files, they can film you. The FBI director is on record as taping over his webcam when he goes home. That tells you that we’re very exposed.”

Part of the company’s current mission is highlighting those exploits for consumers who are likely versed in the threat of PC malware but may be unaware of the growing threat posed by the vulnerability of the Internet of Things.

Gravrock adds that in the beta testing the company has been conducting since August, consumer interest and concern have increased notably.

“We’ve sold about 5,000 units directly already,” he explains. “The biggest surprise for me has been that it’s your average user who no longer feels private at home, may put the duct tape over his webcam and just wants something that works — doesn’t want to spend days and months changing things.”

HPE Cloud Optimizer Overview

Wouldn’t you like to optimize capacity and troubleshoot performance issues in any virtual and cloud environment? HPE Cloud Optimizer enables comprehensive capacity planning and management with quick and efficient performance monitoring, troubleshooting and optimization of physical, virtual and cloud environments. Start a free trial of HPE Cloud Optimizer: http://bit.ly/1ZzPix3

What is Hybrid Infrastructure? Glad you asked…

The content below is taken from the original (What is Hybrid Infrastructure? Glad you asked…), to continue reading please visit the site. Remember to respect the Author & Copyright.

As part of its recent split, Hewlett Packard Enterprise announced “four areas of transformation”, among them the buzzword-heavy “hybrid infrastructure”.

But what exactly is hybrid infrastructure? Each company seems to have a different idea of what it could mean. What does HPE mean when they say “hybrid infrastructure”? How does this differ from other definitions?

The official blurb from HPE’s website says “Today’s broad variety of apps and data require different delivery models for each business outcome. A hybrid infrastructure combines traditional IT, private, managed and public clouds, so you can enable your right mix to power 100% of the workloads that drive your enterprise.”

Let us decode. “A hybrid infrastructure combines traditional IT, private, managed and public clouds” is straightforward. HPE is clearly separating traditional IT from private clouds. It also separates service provider (managed) clouds from public clouds. This is the split that I personally use and advocate, so I think we’re off to a grand start.

A little bit of digging shows that HPE agrees with my nomenclature – mostly. To HPE, a cloud is emphatically not “just virtualization”. A public cloud is emphatically not “just someone else’s computer”. The trite sayings of the disaffected sysadmin ancien d’hier are neatly rejected. This leaves us with an important definition for cloud.

A cloud – public, private or managed – is a pool of IT resources – virtualized, containerized and/or physical – that can be provisioned by resource consumers using a self-service portal and/or an API. As I have advocated for some time, you don’t get to say you have a cloud just because you managed to install ESXi on a couple of hosts and run the vSphere virtual appliance.

HPE also says on its ‘hybrid infrastructure’ website: “Today’s broad variety of apps and data require different delivery models for each business outcome.” This is the slightly more dog-whistle part of the marketing, and what it means is that HPE wants to sell you services to help you pick the right infrastructure to run your applications on, get your applications onto that infrastructure and then teach you how to keep it all going.

Hybrid infrastructure in practice

For HPE watchers, none of this should be particularly surprising. HPE has come out with an excellent Azure-in-a-can offering for the midmarket and is also working on its new Synergy strategy, aimed at larger organizations. It’s not hard to see why: cloud expenditure has reached $110 billion a year and doesn’t look set to slow down any time soon.

Both technology pushes can credibly be called separate and distinct hybrid infrastructure plays. HPE has also made some solid bets on its Helion private cloud technology and services, including one with Deutsche Bank where HPE loses money if the cloud provided doesn’t meet expectations.

This is a pretty big change for HPE, which for decades was predominantly a tin-shifter that dabbled in software and services. HPE’s experiment in running its own public cloud was not a success. It never became a software, services or Google-esque data-hoovering powerhouse. And tin-shifting margins everywhere are evaporating.

Everyone knows how to do traditional IT, so there’s no real money to be made in helping people do it. Not so with things cloudy: few organisations have the skills to build, manage and maintain a cloud.

Fewer still have made the cultural and business practice changes necessary to take advantage of cloud computing’s self-service, automation and orchestration so that they can see the very real benefits it has to offer over traditional IT. Only a handful of organisations can then make a private cloud work with managed clouds and public clouds, finally reaching that promised utopia of true utility computing.

Sounds like a business opportunity to me.

Of course, it’s easy to mock and jeer. So many vendors tout “hybrid cloud” this or “hybrid cloud” that and ultimately deliver nothing of the sort.

Of all companies, will it really be HPE that manages to deliver private clouds that connect up to public and managed clouds, move workloads back and forth and do so without breaking the bank? As astonishing as it sounds, yes.

HPE has been doing this for some time now. I am unaware if they can claim a huge number of wins, but I’ve taken the time to talk to several Helion private cloud customers and they have done nothing but gush contentment about it. HPE has been enabling Helion private cloud customers to move workloads off of the private cloud in several instances and the results also seem to be quite promising.

In short, HPE has done this hybrid cloud infrastructure in practice already. It has done it successfully and is now ready to train its workforce more broadly in the tools, techniques and lessons learned. As I see it, HPE has settled on the right strategy. It has built up a portfolio of technologies and experience that support this strategy and has a nice list of customer wins to go to market with.

So far, it’s the only major vendor muttering “hybrid” that actually seems to know what it is doing. Let’s see how that translates into execution. ®


Arduino Srl adds wireless-ready “Arduino Uno WiFi”

The content below is taken from the original (Arduino Srl adds wireless-ready “Arduino Uno WiFi”), to continue reading please visit the site. Remember to respect the Author & Copyright.

Arduino Srl unveiled a version of the Arduino Uno that adds onboard WiFi via an ESP8266 module, but otherwise appears to be identical. Not much has been heard from Arduino Srl, located at Arduino.org, since it forked off from the main Arduino group over a year ago. In April, the rival Arduino LLC released an […]

OpenStack Developer Mailing List Digest April 23 – May 6

The content below is taken from the original (OpenStack Developer Mailing List Digest April 23 – May 6), to continue reading please visit the site. Remember to respect the Author & Copyright.

Success Bot Says

  • Sdague: nova-network is deprecated [1]
  • Ajaeger: OpenStack content on Transifex has been removed; Zanata on translate.openstack.org has proven to be a stable platform for all translators, so Transifex is not needed anymore.

Backwards Compatibility Follow-up

  • Agreements from recent backwards compatibility for clients and libraries session:
    • Clients need to talk to all versions of OpenStack clouds.
    • Oslo libraries already do need to do backwards compatibility.
    • Some fraction of our deploys, somewhere between 1% and 50%, are trying to do in-place upgrades where, for example, Nova is upgraded first and Neutron later. But now Neutron has to work with the upgraded libraries from the Nova upgrade.
  • Should we support in-place upgrades? If we do, we need at least one version of compatibility, so that Mitaka Nova can run with Newton Oslo and client libraries.
    • If we don’t support in-place upgrades, deployment methods must be architected so we never encounter a situation where a client or one of N services is upgraded independently in a single Python environment. All clients and services in a single Python environment must be upgraded together, or not at all.
  • If we decide to support in-place upgrades, we need to figure out how to test that effectively; it’s linear growth with the number of stable releases we choose to support.
  • If we decide not to, we have no further requirement to have any cross-over compatibility between OpenStack releases.
  • We still have to be backwards compatible on individual changes.
  • Full thread

Installation Guide Plans for Newton

  • Continuing from a previous Dev Digest [2], the big tent is growing and our documentation team would like projects to maintain their own installation documentation. This should be done while still providing the valid, working installation information and the consistency the team strives for.
  • The installation guide team held a session at the summit that was packed and walked away with some solid goals to achieve for Newton.
  • Two issues being discussed:
    • What to do with the existing install guide.
    • Create a way for projects to write installation documentation in their own repository.
  • All guides will be rendered from individual repositories and appear in docs.openstack.org.
  • The Documentation team has recommendations for projects writing their install guides:
    • Build on existing install guide architecture, so there is no reinventing the wheel.
    • Follow documentation conventions [3].
    • Use the same theme, called openstackdocstheme (a hedged conf.py sketch follows this digest item).
    • Use the same distributions as the install guide does. Installation from source is an alternative.
    • Guides should be versioned.
    • RST is the preferred documentation format. RST is also easy for translations.
    • Common naming scheme: “X Service Install Guide” – where X is your service name.
  • The chosen URL format is http://bit.ly/1ZywxtK.
  • Plenty of work items to follow [4] and volunteers are welcome!
  • Full thread
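
To make those recommendations concrete, here is a minimal Sphinx conf.py sketch for a hypothetical per-project install guide. It assumes the openstackdocstheme package is installed and that the theme it registers is named openstackdocs; option names vary between openstackdocstheme releases, so treat this as a starting point rather than the documentation team’s prescribed configuration.

```python
# Minimal, illustrative Sphinx conf.py for a per-project install guide.
# Assumes the openstackdocstheme package is installed; options beyond the
# basics below differ between openstackdocstheme releases.

extensions = [
    'openstackdocstheme',        # registers the shared OpenStack docs theme
]

# Placeholder metadata following the "X Service Install Guide" naming scheme.
project = 'Example Service Install Guide'
copyright = '2016, OpenStack contributors'

# RST is the preferred documentation format.
source_suffix = '.rst'
master_doc = 'index'

# Use the shared theme so every project's guide renders consistently
# on docs.openstack.org.
html_theme = 'openstackdocs'
```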

Proposed Revision To Magnum’s Mission

  • From a summit discussion, there was a proposed revision to Magnum’s mission statement [5].
  • The idea is to narrow the scope of Magnum to allow the team to focus on making popular container orchestration engines (COEs) work great with OpenStack, allowing users to set up fleets of cloud capacity managed by COEs such as Swarm, Kubernetes and Mesos.
  • Deprecate the /containers resource from Magnum’s API. Any new project may take on the goal of creating an API service that abstracts one or more COEs.
  • Full thread

Supporting the Go Programming Language

  • The Swift community has a git branch feature/hummingbird that contains some parts of Swift reimplemented in Go. [6]
  • The goal is to have a reasonably ready-to-merge feature branch by the Barcelona summit. Shortly after the summit, the plan is to merge the Go code into master.
  • An amended Technical Committee resolution will follow to suggest Go as a supported language in OpenStack projects [7].
  • Some Technical Committee members have expressed wanting to see technical benefits that outweigh the community fragmentation and increase in infrastructure tasks that result from adding that language.
  • Some open questions:
    • How do we run unit tests?
    • How do we provide code coverage?
    • How do we manage dependencies?
    • How do we build source packages?
    • Should we build binary packages in some format?
    • How to manage in tree documentation?
    • How do we handle log and message string translations?
    • How will DevStack install the project as part of a gate job?
  • Designate is also looking into moving a single component into Go.
    • It would be good to have two cases to help avoid baking any project specific assumptions into testing and building interfaces.
  • Full thread

Release Countdown for Week R-21, May 9-13

  • Focus
    • Teams should be focusing on wrapping up incomplete work left over from the end of the Mitaka cycle.
    • Announce plans from the summit.
    • Complete specs and blueprints.
  • General Notes
    • Project teams that want to change their release model tag should do so before the Newton-1 milestone. This can be done by submitting a patch to the governance repository’s projects.yaml file.
    • Release announcement emails are being proposed to have their tag switched from “release” to “newrel” [8].
  • Release Actions
    • Release liaisons should add their name and contact information to this list [9].
    • Release liaisons should have their IRC clients join #openstack-release.
  • Important Dates
    • Newton 1 Milestone: R-18 June 2nd
    • Newton release schedule [10]
  • Full thread

Discussion of Image Building in Trove

  • A common question the Trove team receives from new users is how and where to get guest images to experiment with Trove.
    • Documentation exists in multiple places for this today [11][12], but things can still be improved.
  • Trove has a spec proposal [13] for using libguestfs approach to building images instead of using the current diskimage-builder (DIB).
    • All alternatives should be equivalent and interchangeable.
    • Trove already has elements for all supported databases using DIB, but these elements are not packaged for customer use. Doing this would be a small effort of providing an element to install the guest agent software from a fixed location.
    • We should understand the deficiencies, if any, in DIB before switching tool chains. This can be based on Trove and Sahara’s experiences.
  • The OpenStack Infrastructure team has been using DIB successfully for a while as it is a flexible tool.
    • By default Nova disables file injection [14]
    • DevStack doesn’t allow you to enable Nova file injection, and hard sets it off [15].
    • Allows bootstrapping with yum or debootstrap.
    • Pick the filesystem for an existing image.
  • Let’s fix the problems with DIB that Trove is having and avoid reinventing the wheel.
  • What are the problems with DIB, and how do they prevent Trove/Sahara users from building images today?
    • Libguestfs manipulates images in a clean helper VM created by libguestfs in a predictable way.
      • Isolation is something DIB gives up in order to provide speed/lower resource usage.
    • In-place image manipulation can occur (package installs, configuration declarations) without uncompressing or recompressing an entire image (a hedged libguestfs sketch follows this digest item).
      • It’s trivial to make a DIB element which modifies an existing image in place.
    • DIB scripts’ configuration settings, passed in freeform environment variables, can be difficult to understand and document for new users. Libguestfs demands more formal parameter passing.
    • Ease of “just give me an image. I don’t care about twiddling knobs”.
      • OpenStack Infra team already has a wrapper for this [16].
  • Sahara has support for several image generation-related cases:
    • Packing an image pre-cluster spawn in Nova.
    • Building clusters from a “clean” operating system image post-Nova spawn.
    • Validating images after Nova spawn.
  • In a Sahara summit session, a plan was discussed to use libguestfs rather than DIB, with the intent to define a linear, idempotent set of steps to package images for any plugin.
  • Having two sets of image building code to maintain would be a huge downside.
  • What’s stopping us, a few releases down the line, from deciding that libguestfs doesn’t perform well and picking yet another new tool? Since DIB is an OpenStack project, Trove should consider supporting a standard way of building images.
  • Trove summit discussion resulted in agreement of advancing the image builder by making it easier to build guest images leveraging DIB.
    • Project repository proposals have been made [17][18]
  • Full thread
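
For context on the libguestfs approach being debated, the following is a hedged sketch of in-place image customisation with the libguestfs Python bindings. The image path, mount layout and the guest-agent placeholder files are illustrative assumptions, not the contents of Trove’s proposed elements; the point of contrast with DIB is that the edits happen inside libguestfs’s disposable helper appliance.

```python
# Hedged sketch: customise an existing cloud image in place with libguestfs,
# rather than rebuilding it from scratch. Paths and file contents are
# illustrative placeholders only.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("ubuntu-guest.qcow2", format="qcow2", readonly=0)
g.launch()  # boots the small helper appliance that isolates the edits

# Find and mount the guest's root filesystem.
root = g.inspect_os()[0]
g.mount(root, "/")

# Example in-place customisation: drop in a config file and a first-boot hook
# that would fetch and install the guest agent from a fixed location.
g.write("/etc/trove-guestagent.conf",
        b"[DEFAULT]\nlog_dir = /var/log/trove\n")
g.write("/usr/local/bin/install-guest-agent.sh",
        b"#!/bin/sh\n# placeholder: fetch and install the guest agent here\n")
g.chmod(0o755, "/usr/local/bin/install-guest-agent.sh")

g.shutdown()
g.close()
```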

 

OpenStack VDI: The What, the Why, and the How

The content below is taken from the original (OpenStack VDI: The What, the Why, and the How), to continue reading please visit the site. Remember to respect the Author & Copyright.

Karen Gondoly is CEO of Leostream.

Moving desktops out from under the users’ desks and into the data center is no longer a groundbreaking concept. Virtual Desktop Infrastructure (VDI) and its cousin, Desktops-as-a-Service (DaaS), have been around for quite some time and are employed to enable mobility, centralize resources, and secure data.

For as long as VDI has been around, so have industry old-timers VMware and Citrix — the two big players in the virtual desktop space. But, as Bob Dylan would say, the times, they are a-changing.

OpenStack has been climbing up through the ranks, and this newcomer is poised for a slice of the VDI pie. If you’re looking for an alternative to running desktops on dedicated hardware in the data center, open source software may be the name of the game.

What is OpenStack?

OpenStack, an open source cloud operating system and community founded by Rackspace and NASA, has graduated from a platform used solely by DevOps to an important solution for managing entire enterprise-grade data centers. By moving your virtual desktop infrastructure (VDI) workloads into your OpenStack cloud, you can eliminate expensive, legacy VDI stacks and provide cloud-based, on-demand desktops to users across your organization. Consisting of over ten different projects, OpenStack hits on several of the major must-haves to deliver VDI and/or Desktops-as-a-Service (DaaS), including networking, storage, compute, multi-tenancy, and cost control.

Why VDI and Why OpenStack?

Generally speaking, the benefits of moving users’ desktops into the data center as part of a virtual desktop infrastructure are well documented: your IT staff can patch and manage desktops more efficiently; your data is secure in the data center, instead of on the users’ clients; and your users can access their desktop from anywhere and from any device, supporting a bring-your-own-device initiative.

Many organizations considered moving their workforce to VDI, only to find that the hurdles of doing so outweighed the benefits. The existing, legacy VDI stacks are expensive and complicated, placing VDI out of reach for all but the largest, most tech-savvy companies.

By leveraging an OpenStack cloud for VDI, an organization reaps the benefits of VDI at a much lower cost. And, by wrapping VDI into the organization’s complete cloud strategy, IT manages a single OpenStack environment across the entire data center, instead of maintaining separate stacks and working with multiple vendors.

How to Leverage OpenStack Clouds for Virtual Desktops

To be clear, “simplicity” is not one of the benefits of building OpenStack VDI and DaaS. If you’re not an OpenStack expert, then you may want to partner with someone who is. Companies like SUSE, Mirantis, Canonical, and Cisco Metapod can help ease your migration to the cloud. Keep in mind that your hosted desktop environment will need to be resistant to failure and flexible enough to meet individual user needs.

So, if you’re really serious about VDI/DaaS, then you’ll need to leverage a hypervisor, display protocol, and a connection broker. A recent blueprint dives into the details of the solution components and several important usability factors.

Here’s the Reader’s Digest version:

  • Hypervisor: A hypervisor allows you to host several different virtual machines on a single piece of hardware. KVM is noted in the OpenStack documentation as being the most highly tested and supported hypervisor for OpenStack. To successfully manage VDI or DaaS, the feature sets provided by any of the hypervisors are adequate.
  • Display Protocol: A display protocol provides end users with a graphical interface to view a desktop that resides in the data center or cloud. Some of the popular options include Teradici PCoIP, HP RGS, or Microsoft RDP.
  • Connection Broker: A connection broker focuses on desktop provisioning and connection management. It also provides the interface that your end users will use to log in. The key in choosing a connection broker is to ensure that it integrates with the OpenStack API. That API allows you to inventory instances in OpenStack (these instances are your desktops), makes it easy to provision new instances from existing images, and assigns correct IP addresses to instances (a hedged openstacksdk sketch follows this list).
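
As a rough illustration of what that OpenStack API integration involves, here is a minimal sketch using the openstacksdk Python library. The cloud name, image, flavor and network names are hypothetical placeholders; a real connection broker layers policy, pooling and user assignment on top of calls like these.

```python
# Minimal sketch of what a connection broker does against the OpenStack API,
# using openstacksdk. "vdi-cloud", "win10-master", "m1.desktop" and
# "finance-net" are placeholder names, not values from the article.
import openstack

conn = openstack.connect(cloud="vdi-cloud")

# Inventory existing instances -- these are the desktops the broker manages.
for server in conn.compute.servers():
    print(server.name, server.status)

# Provision a new desktop from a master image.
image = conn.compute.find_image("win10-master")
flavor = conn.compute.find_flavor("m1.desktop")
network = conn.network.find_network("finance-net")

desktop = conn.compute.create_server(
    name="desktop-jsmith",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
desktop = conn.compute.wait_for_server(desktop)
print("Desktop ready with addresses:", desktop.addresses)
```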

How do you bring everything together? The process can be summarized into four basic steps.

  1. First, you’ll want to determine the architecture for your OpenStack Cloud. As mentioned, there are a number of solid experts that can help you with this step, if you’re not an expert yourself.
  2. Then, as you onboard new groups of users, make sure to place each in their own OpenStack project, which means defining the project and the network (see the sketch after these steps).
  3. Next, you’ll want to build a master desktop and image, which can be used to streamline the provisioning of desktops to users. At this stage, you’ll want to explore display protocols and select a solution(s) that delivers the performance that your end-users need.
  4. The final step is to configure your connection broker to manage the day-to-day activities.
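
A hedged sketch of step 2, again using openstacksdk with admin credentials; the project name, network name and CIDR below are illustrative only.

```python
# Illustrative onboarding of one user group: its own project, network and
# subnet. Names and the CIDR are placeholders.
import openstack

conn = openstack.connect(cloud="vdi-cloud")

# Each onboarded group of users gets its own project (tenant)...
project = conn.identity.create_project(
    name="finance-desktops",
    description="VDI desktops for the finance group",
)

# ...and its own network and subnet inside that project.
network = conn.network.create_network(
    name="finance-net",
    project_id=project.id,
)
conn.network.create_subnet(
    network_id=network.id,
    project_id=project.id,
    ip_version=4,
    cidr="10.20.30.0/24",
)
print("Project", project.id, "ready with network", network.id)
```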

Conclusion and Takeaways

When it comes to leveraging OpenStack clouds to host desktops, there’s a lot to think about and several moving parts. For those looking outside the box of traditional virtualization platforms, OpenStack may be your golden ticket. Key to delivering desktops is choosing an adequate display protocol and connection broker.


Kaseya Announces Traverse 9.3 Enabling MSPs to Manage Complex Public and Hybrid Cloud-Based Apps Running on Amazon Cloud

The content below is taken from the original (Kaseya Announces Traverse 9.3 Enabling MSPs to Manage Complex Public and Hybrid Cloud-Based Apps Running on Amazon Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Kaseya, the leading provider of complete IT management solutions for Managed Service Providers (MSPs) and small to midsized businesses, today… Read more at VMblog.com.

What’s New in Windows Server 2016 Technical Preview 5: Hyper-V Features

The content below is taken from the original (What’s New in Windows Server 2016 Technical Preview 5: Hyper-V Features), to continue reading please visit the site. Remember to respect the Author & Copyright.


This article will discuss new Hyper-V features and their impact in the Windows Server 2016 (WS2016) Technical Preview 5 (TP5), which is available to download now.

Windows Server 2016 Technical Preview 5 New Features

Microsoft recently released the latest public preview of Windows Server 2016 and Hyper-V Server 2016 (the free version). There are lots of new Hyper-V features to evaluate, learn, and use.

If I had to give you a theme to this release, it would be cloud. Much of what is in the 2016 release is geared toward building private, hosted, or public cloud, with a lot of the management being offered either by Azure or Microsoft Azure Stack. When evaluating WS2016, you’ll need to consider:

  • Nano Server, with administration via Remote Server Management Tools
  • Hyper-V
  • Failover Clustering
  • Storage
  • Networking, particularly the Network Controller
  • Microsoft Azure Stack
  • Containers

In this post, I’m going to focus on the improvements to Hyper-V.

Connected Standby

This is a feature for Windows 10 users and the few presenters that run Windows Server Hyper-V on their laptop or hybrid device. You might even say that this new feature was dedicated to Paul Thurrott when it was announced at TechEd Europe 2014, mainly because Paul was one of the more vocal sufferers of the lack of compatibility between Hyper-V and this heralded Windows 8/hardware feature. Hyper-V had issues with Connected Standby, and these issues have been solved in Windows 10 and Windows Server 2016.

Discrete Device Assignment (DDA)

Discrete Device Assignment is a relatively new feature in the list of improvements. DDA allows you to give a virtual machine direct and exclusive access to some PCI devices in the host. The concept is that a virtual machine can talk directly to a graphics card. My suspicion is that the primary customers for this feature are those using Microsoft Azure with an N-Series virtual machine, but anyone looking for something better than RemoteFX, or running compute-intensive workloads, should be interested in this feature, too.

Host Resource Protection

This is a security feature to protect a host and other virtual machines from resource abuse by another virtual machine. Imagine that a virtual machine goes awry or is compromised; in the latter case, the attacker will want to attack the hypervisor, either looking for a vulnerability or to launch a DoS attack. Hyper-V can be configured to starve the virtual machine of resources when unusual behaviour starts, thus limiting the damage and giving you a chance to intervene.

Hot-Add/Remove of Virtual Memory and Network Adapters

Is that applause and cheering that I hear? This heavily demanded feature is coming to Hyper-V; you can add and remove memory and virtual NICs to and from running virtual machines running Windows 10 or WS2016.

Hyper-V Manager

The old tool is looking long in the tooth, but improvements are coming. You can launch Hyper-V Manager with alternate credentials, supporting the secure dual-identity approach that some companies enforce. You can also manage older versions of Hyper-V from WS2016 (WS2012, WS2012 R2, Windows 8.1, and Windows 8). The management protocol has been updated to use WS-MAN (TCP 80) with remote hosts. A benefit is that you can connect to a remote host with CredSSP and perform a live migration without enabling messy Active Directory constrained delegation.


Integration Services via Windows Update

Another cheering moment here; the Hyper-V integration components in the guest OS will be updated via Windows Update; Linux uses different methods. Note that the VMGUEST.ISO is no longer required, so it is not included in the installation.

Linux Secure Boot

You can protect Linux guests from root kits by enabling secure boot on Generation 2 virtual machines. You must use a compliant version of Linux and enable the use of the Microsoft UEFI Certificate Authority.

Nested Virtualization

Here’s a very big cheering moment because we finally get a feature we’ve been asking for since Windows Server 2008. Those with constrained hardware can finally run Hyper-V virtual machines inside of Hyper-V virtual machines — there are blog posts on how to get vSphere running inside of Hyper-V!

You can even run Hyper-V virtual machines inside of Hyper-V virtual machines, inside of Hyper-V virtual machines, inside of Hyper-V virtual machines, and on and on. This is great for training classes and presenters (failover clustering and Live Migration on a single physical machine), but the real winner here is Windows Server Containers, which gets the more secure Hyper-V Container.

The architecture of nested Hyper-V (Image Credit: Microsoft)

I can’t wait until the day that I can run Hyper-V inside of an Azure virtual machine! The performance of nested Hyper-V should be pretty good, thanks to Microsoft’s micro-kernelized architecture, as opposed to the alternative monolithic approach.

Networking

This is a huge area that deserves its own article, so watch out for a post that’s coming soon.

Production Checkpoints

Checkpoints, what some of you still call Hyper-V snapshots, were improved to the point where Microsoft can say “yes, use them with production workloads.” A production checkpoint uses the backup infrastructure of Hyper-V to create a checkpoint of a running virtual machine. When you restore the checkpoint, it’s as if the virtual machine is being restored from a backup. You can use legacy standard checkpoints, but production checkpoints are on by default. This improvement is a recognition that:

  • People were using checkpoints in production, even though it wasn’t recommended.
  • The nature of systems management has changed, and self-service administrators are going to use checkpoints.

Shielded Virtual Machines

This is actually a huge topic; Microsoft’s core hypervisor team did a lot of work to:

  • Harden the hypervisor.
  • Create a new model and architecture (Host Guardian Server) for creating trusted virtual machines with limited or no access for hypervisor administrators.
  • Enable tenant-managed BitLocker with a virtual TPM chip.

Yes, you can replicate these virtual machines to hosts that are similarly secured in the secondary location using Hyper-V Replica.

Virtual Machine Configuration

The old XML files of the past that were used to describe virtual machine metadata are gone. WS2016 will use binary files that you cannot edit:

  • .VMCX: The virtual machine configuration file.
  • .VMRS: The virtual machine runtime state data file.

The benefits of this change are:

  • Improved performance on larger and denser hosts.
  • Reduced risk of data corruption caused by storage failure.

This change has proven to be controversial. I don’t get why; it was completely unsupported and unsafe to directly edit the XML files in the past.

Virtual Machine Configuration Version

Most of us did not know that virtual machines have always had versions, based on the Hyper-V version that they were created and running on. You will be able to update the version of a virtual machine in WS2016, which will be useful thanks to some of the improvements in failover clustering.

PowerShell Direct

I’m not a day-to-day administrator, but I use PowerShell a lot when doing presentations and demos of Hyper-V. I’ve done lots of PowerShell remoting, which can get quite complicated. The same scenario exists for a virtualization admin; they have lots of machines they want to work with, but they want PowerShell admin inside of the machines to be easier.


PowerShell Direct allows you to do work inside of a virtual machine, from the host, without any network or firewall access to the guest OS. For example, if a virtual machine loses an IP configuration, you can reconfigure the network stack using PowerShell Direct to get the machine back to an operational status.

 

The post What’s New in Windows Server 2016 Technical Preview 5: Hyper-V Features appeared first on Petri.

ICANN knifes Africa’s internet: New top-level domains terminated

The content below is taken from the original (ICANN knifes Africa’s internet: New top-level domains terminated), to continue reading please visit the site. Remember to respect the Author & Copyright.

Domain-name overseer ICANN has killed off the majority of Africa’s new internet.

California-based ICANN, which has faced repeated criticism for its failure to reach beyond a North American audience, saw just 17 applications from Africa out of just over 1,900 applications for new dot-word domains back in 2012.

Of those 17, nine this week received termination notices from ICANN, despite them having paid $185,000 for the right to their dot-names and having run through an evaluation and contract process.

The applicants for .naspers, .supersport, .mzansimagic, .mnet, .kyknet, .africamagic, .multichoice, .dstv and .gotv – all based in South Africa – signed contracts with ICANN in 2015 but failed to put their generic top-level domains live within a 12-month window. As a result, ICANN is now rescinding their rights.

The only company from Africa that has successfully put new top-level domains online is the registry operator for South Africa’s .za, ZACR, which applied for and runs three city TLDs: .capetown, .durban and .joburg.

Meanwhile, ICANN is in active dispute with the biggest application from Africa: .africa. One of the two applicants for the name, DCA Trust, is suing ICANN in a Los Angeles court for running a sham process in which it attempted to hand over the name to its preferred bidder, ZACR.

Trust issues

DCA Trust has already won an independent appeal against ICANN in which the organization was found to have broken its own bylaws. Documents from that case revealed ICANN’s staff had actively interfered with the process to ensure that their preferred bidder won, then sought to cover up their involvement and then knowingly misled people about the coverup.

Last month, an injunction was filed against ICANN after it attempted for a second time to hand the .africa domain to ZACR despite the ongoing legal fight. ICANN has until today to file a response to DCA Trust’s allegations.

Of the remaining three applications from Africa, one has been withdrawn and the other two have until November to put their names live or they face the same fate.

Dot-brand to dot-banned

Last month, ICANN warned more than 200 companies that it will kill off their rights to new top-level domains if they didn’t put them live on the internet.

The majority were big brands, including Intel, Netflix, Lego and Nike which paid for the rights to their dot-brand name but have no current plans to use them. Most don’t wish to cover the cost of putting the names live or to start paying ICANN’s annual fees.

It was perhaps unfortunate that of the first 10 termination notices that ICANN has sent out, nine went to Africa.

The self-absorbed DNS overseer was heavily criticized when it rolled out its “new gTLD” program for failing to communicate its plans beyond the mostly North American audience that attends its conferences. Of the 1,930 applications, 844 came from the United States. A further 150 or so came from US organizations based in offshore tax havens such as the British Virgin Islands and the Cayman Islands.

In response, the organization launched a belated “applicant support program” and set aside $2m to fund it. But again, it failed to communicate the program’s existence to people outside its own halls. As a result, the program received just three applications, two of which were rejected [PDF] for not meeting the criteria. ®

Sponsored:
Middleware for the modern age

Software Defined Radio App Store

The content below is taken from the original (Software Defined Radio App Store), to continue reading please visit the site. Remember to respect the Author & Copyright.

Software defined radios (SDRs) can–in theory–do almost anything you need a radio to do. Voice? Data? Frequency hopping? Trunking? No problem, you just write the correct software, and you are in.

That’s the problem, though. You need to know how to write the software. LimeSDR is an open source SDR with a crowdfunding campaign. By itself, that’s not anything special. There are plenty of SDR devices available. What makes LimeSDR interesting is that it is using Snappy Ubuntu Core as a sort of app store. Developers can make code available, and end-users can easily download and install that code.

Of course, the real value will be if people actually fill the store with meaningful applications. It certainly worked for smartphones. How many people would need a smartphone if they had to write their own code? Even finding software scattered around the Internet and installing it is beyond some users.

On the other hand, we couldn’t help but think that for radios, apps alone only get you so far. What you really need is components that you can easily integrate. This is the idea behind GNU Radio (we’ve covered GNU Radio before). Granted, LimeSDR supports GNU Radio, too. However, an app store that can bundle GNU Radio applications and also allow easy installation of modules would be widely applicable and useful.
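
To make the ‘components you can easily integrate’ point concrete, here is a minimal GNU Radio flowgraph sketch in Python. It assumes the GNU Radio 3.8-era Python API and only plays a sine tone through the sound card; a real SDR application would swap the audio sink for a hardware source or sink block such as the one LimeSDR provides.

```python
# Minimal GNU Radio flowgraph: a sine-wave source wired to the sound card.
# Assumes GNU Radio's Python bindings (gnuradio 3.8-era API).
from gnuradio import gr, analog, audio


class ToneFlowgraph(gr.top_block):
    def __init__(self, samp_rate=48000, freq=440, amplitude=0.3):
        gr.top_block.__init__(self, "tone")
        # Signal source block: generates a sine wave.
        src = analog.sig_source_f(samp_rate, analog.GR_SIN_WAVE, freq, amplitude)
        # Audio sink block: sends samples to the default sound device.
        snk = audio.sink(samp_rate)
        # "Integration" is just connecting block outputs to block inputs.
        self.connect(src, snk)


if __name__ == "__main__":
    tb = ToneFlowgraph()
    tb.start()
    input("Playing tone; press Enter to stop...")
    tb.stop()
    tb.wait()
```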

On the other hand, maybe people who will use GNU Radio won’t have a problem just downloading stuff themselves. That raises the question of how many consumers need an SDR that allows them to download many applications. Time will tell.

You can see a video about LimeSDR below. We have talked a lot about SDRs over the last few years, especially the RTL-SDR dongles. LimeSDR is a big step up in price and performance from an RTL-SDR dongle, though.

5 Free VMware Flings for your Virtualization Toolbox

The content below is taken from the original (5 Free VMware Flings for your Virtualization Toolbox), to continue reading please visit the site. Remember to respect the Author & Copyright.




For the last few years, VMware engineers have been turning out a number of free experimental tools that operate within the company’s server virtualization platform. Dubbed Flings, these VMware Lab creations are intended to be a “short-term thing.” While these interesting freebie tools are not part of any official product offering, they have been well received over the years within VMware’s community of virtualization users.

There is one important caveat to these Flings, useful though they may be, that needs to be mentioned over and over again: VMware clearly states that these tools are intended to be played with and explored, but they do not come with VMware support and therefore shouldn’t be used in production environments. But then again, when has that really stopped anyone? 🙂 Honestly, these Flings have really helped fill in the gaps of missing functionality across numerous products put out by VMware. In fact, VMware has already amassed a list of 107 different Flings currently made available online for download.

VMblog recently talked about one of the newest Flings, a useful addition to make backups of your VMware App Volumes possible. App Volumes users have been asking for a way to back up their AppStacks and writable volumes. VMware knows that normal virtual-machine backup tools cannot back up App Volumes AppStacks and writable volumes because the AppStacks and writable volumes are not part of the vCenter inventory unless they are connected to a user’s virtual machine (VM). Enter the App Volumes Backup Utility Fling.

But there are so many more!  Here are just 5 of the recent Flings that really stand out and should be considered worthy of being added to your virtualization toolbox.

  1. VMware vSphere HTML5 Client – The old vSphere Web Client is based on Flash, which, as you know, is no longer the technology of choice. The new Web client is written using HTML5 and Javascript and is designed to work with your existing vSphere 6.0 environments (sorry to those still running 5.x, but it isn’t “supported”). By removing the dependency on Flash, VMware hopes to improve performance, stability, and security. The Fling isn’t feature complete yet, but it does have what VMware believes to be the most commonly used actions/views ready to go.
  2. VMware OS Optimization Tool – If you are running VMware Horizon View, this will be a very popular addition to your toolbox. The VMware OS Optimization Tool helps optimize Windows 7/8/2008/2012/10 systems for use with VMware Horizon View. The optimization tool includes customizable templates to enable or disable Windows system services and features, per VMware recommendations and best practices, across multiple systems. Since most Windows system services are enabled by default, the optimization tool can be used to easily disable unnecessary services and features to improve performance.
  3. Horizon View Configuration Tool – Here’s another handy addition to VMware Horizon View. The Horizon View Configuration Tool automates Horizon View 6.2 installations and deployments. It removes the complexities and manual steps required for setting up a basic Horizon View deployment. So if you need a little assistance with the setup, grab this Fling.
  4. Horizon Toolbox 2 – Once more, let’s circle back and add some assistance to your Horizon View environment. This time, the Toolbox 2 Fling adds a Web portal that acts as an extension to View Administrator in VMware Horizon 6 or above. The tool assists with monitoring, managing and administering Horizon and is a must-add to fully engage with a Horizon environment.
  5. VSAN Hardware Compatibility List Checker – In an effort to make life easier, this VSAN Hardware Compatibility List Checker Fling helps verify the installed storage adapters against the VSAN supported storage controller list. The tool will verify whether the model and firmware version of the storage adapter are supported. For firmware version validation, the VSAN Hardware Compatibility List Checker supports LSI and HP storage adapters, and their OEM variants.

In the end, Flings remain a community fan favorite. They remain extremely useful and, perhaps best of all, they remain free without any nagging registration to go along with it. And judging by its latest track record, it certainly looks like VMware will keep them around for the foreseeable future.

So if you’re a VMware vSphere or View admin, remember to check out the full list of Flings if you’re searching for a freebie tool that helps make your life easier or performs a missing function that VMware hasn’t yet implemented in the product.