Decide who lives and who dies. The Moral Machine

The content below is taken from the original (Decide who lives and who dies. The Moral Machine), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2cNWWpW

Roll over Beethoven: HPE Synergy compositions oughta get Meg singing for joy

The content below is taken from the original (Roll over Beethoven: HPE Synergy compositions oughta get Meg singing for joy), to continue reading please visit the site. Remember to respect the Author & Copyright.

Comment HPE’s Synergy is, the company thinks, the next great advance in servers and far more capable than hyper-converged infrastructure systems, being able to provision bare metal as well as servers for virtual and containerised workloads.

Getting a grip on this beast is tricky. Is it a form of dynamically reconfigurable hyperconverged infrastructure (HCI) with separately scalable compute, storage and networking? Yes, in a way it is, but HPE would position it apart from HCI, as we shall see later.

The basic Synergy element is a 10U rackmount chassis, somewhat similar to its c-Class chassis but taking Synergy-specific servers, not standard ones. The chassis can take a mix of compute and storage modules, which slot in from the front, and has a set of network nodes at the back, with a single master node. These nodes obviate the need for a separate top-of-rack networking switch.

The chassis also has a front-mounted Composer, a dedicated server with its own storage and networking. This runs a version of Linux, auto-discovers Synergy compute, storage and networking (fabric) elements, and provides the OneView command facility for managing Synergy.

Front and rear views of Synergy rack. At the rear the blue cables interconnect Composer modules in each frame

Synergy frames can scale out to four per rack, if your floors can stand the weight, and then by adding further racks. The Composers in each Synergy chassis hook up with each other over a dedicated management network, providing a control plane for the system that is separate from the operational compute and data plane.

HPE UK’s FSI Strategic Accounts Presales Consultant Joe Hardy with Synergy Composer module.

Composer module with lid removed.

Users set up named server profiles, defining, for example, compute, firmware, OS, storage and networking resources; these profiles can then simply be provisioned to workloads – a database workload template, say. Synergy can also be managed through RESTful APIs.
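
For a rough sense of what that RESTful route looks like, the sketch below logs in to OneView and lists server profiles with curl. It is based on HPE’s published OneView REST documentation rather than on anything in this article; the hostname and credentials are placeholders, and the endpoint paths and X-API-Version value should be checked against your OneView release.

# Log in to OneView; the response contains a sessionID token
curl -k -H "Content-Type: application/json" -H "X-API-Version: 300" \
     -d '{"userName": "administrator", "password": "secret"}' \
     https://oneview.example.com/rest/login-sessions

# List existing server profiles, passing the sessionID in the Auth header
curl -k -H "X-API-Version: 300" -H "Auth: <sessionID>" \
     https://oneview.example.com/rest/server-profiles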

Synergy OneView 3 screenshot. Note server profile area middle left.

Server nodes can have direct-attached on-blade storage. This can be, for example, SAS 2.5-inch disk or NVMe SSDs.

As well as the in-frame storage nodes, Synergy chassis can use external SAN or NAS storage, such as 3PAR StoreServ arrays connected by Fibre Channel or Ethernet. Part or all of that capacity gets allocated to the Synergy storage resource pool, with volumes then allocated to workloads running on Synergy.

Both Synergy storage and networking are virtualised into Synergy-wide pools, and compute blades can be aggregated for a workload. But individual CPUs on a server blade can’t be separately dedicated to workloads running on other server nodes; the server blade is as granular as compute gets for now. Similarly, memory can’t be virtualised into a Synergy-wide pool. When Synergy adopts Intel’s Silicon Photonics in the future, this might become possible, with operating system help.

Joe Hardy showing Synergy server node with lid removed.

As servers adopt non-volatile storage-class memory, Synergy will adopt it too, via server blades using Memristor, XPoint or some other non-volatile media technology. It will also adopt HPE’s developing Machine concept, Synergy being, in one sense, a Machine precursor.

Server inventiveness

HPE says it has some 100 beta customers testing Synergy. As you look at the hardware, the 10U chassis and its components, the Composer module and the OneView v3 software, you realise this is a substantial and distinctive effort. A lot of development money is being spent here. No other server vendor has anything like Synergy; not Cisco, not Dell, not Fujitsu, Hitachi, Huawei, Lenovo nor any other mainstream server vendor you can think of.

Paul Miller, HPE’s marketing VP for Converged Data Centre Infrastructure, bangs the HPE server inventiveness drum, saying HPE was first with x86 servers, first with a 1U pizza box-format server, first to launch blade servers, and is now the first to launch composable infrastructure servers. Everyone else, he says, including Dell, follows HPE.

Synergy storage module.

Synergy and HCI

Hyper-converged infrastructure systems converge and integrate server, storage and networking with server virtualisation software, delivering apps in virtual machines in a scale-out environment built on virtual SAN software such as VMware’s VSAN. HPE has its own hyper-converged HC 380 product, which uses StoreVirtual virtual SAN software and ProLiant DL380 servers.

Synergy opens up and expands on that idea, making a Synergy frame the basic element, within which you can mix and match variable compute, storage and network elements. The HCI single-SKU buying simplicity goes away, but the operational flexibility and simplicity gained instead could be huge, quite apart from Synergy providing bare-metal servers as well as support for containerisation.

Synergy is also built as a platform with a 10-year-plus life. Different compute, storage and networking elements and technologies can come along and be adopted by the Synergy chassis and software; that’s the theory. So, for example, Intel’s Silicon Photonics can be adopted inside Synergy, and also outside. Once the 3PAR StoreServ arrays get Silicon Photonics support, Synergy links to external storage will run at 100Gbit/s.

How this would work alongside or with NVMe over Fabrics is unclear. Silicon Photonics could simply be an alternative to 100Gbit/s Ethernet cables, still carrying Ethernet protocol signals and data, in which case running NVMe over Fabrics across Silicon Photonics links should, in theory, be fine.

Innovation buzzwordery? Yes – and no

Synergy is a multi-million dollar investment by HPE involving server, storage and networking hardware, an adapted c-Class chassis, development of the Composer module, and extension of the OneView management software. Looking at the racks of Synergy hardware and the nodes within the frames, it’s obvious that this is a big-iron effort. The thing looks like a software-defined private cloud: mini data centres in 10U cans with dynamically re-usable elements.

Marketing buzzwordery surrounds it: infrastructure as code, the ideas economy, innovating fearlessly in an age of business disruption, IT must build innovation engines, IT must become a value creator and deliver business outcomes. Synergy is about accelerating the digital enterprise to innovate fearlessly. There’s even a Composable Infrastructure for Dummies book – no, really – which tells you how to turn your data centre into a revenue generator with composable infrastructure.

But step aside from this slick deluge of wordsmithing and Synergy looks pretty damn good. Will it take off? That’s the question, and HPE is giving it its best shot, pouring EVPs into marketing events and demoing the hardware.

If the operational flexibility, CAPEX and OPEX benefits are real and replicable from customer to customer then HPE will have a good and distinctive story to tell and pass through its channel. And Meg Whitman will have product technology to smile about. ®


Windows Server 2016: What’s in It for Small Businesses?

The content below is taken from the original (Windows Server 2016: What’s in It for Small Businesses?), to continue reading please visit the site. Remember to respect the Author & Copyright.

I examine whether Windows Server 2016 has any features that might make it worth upgrading for SMEs.

In September 2015, I wrote in What Does Windows Server 2016 Mean for Small Businesses? on the Petri IT Knowledgebase that if your small or medium business is already running Windows Server 2012 R2, Windows Server 2016 would be unlikely to be an enticing upgrade, with some exceptions – for instance, businesses requiring virtualization with failover capability. But now that Microsoft has announced general availability of Windows Server 2016 at Ignite, let’s take a more detailed look at what it means for SMEs.

Much of the new functionality in Windows Server 2016 is aimed at large enterprises that need to support complex hybrid private/public cloud infrastructures and the container technology popularized by Docker. But some features might be of interest to SMEs, and it’s not just about what’s new: Windows Server 2016 is also a more robust OS with improved security out-of-the-box.

Windows Server Essentials

Just like previous versions of Windows Server, there are several editions of Windows Server 2016, including Standard, Datacenter, and Essentials. Foundation edition has been knocked on the head for this release. Windows Server Essentials was introduced after the demise of Small Business Server (SBS), the stalwart solution that SMEs relied on for the better part of a decade to get discounted access to Exchange and SQL Server, along with some specific server features designed to make Windows Server deployment easier.

Unlike SBS 2011, which supported up to 75 users, Windows Server Essentials (WSE) supports 25 users and 50 devices, and includes features such as Remote Web Access and remote client deployment, which aren’t in other editions of server unless you install the WSE Experience server role.

WSE is a bridge to the cloud, providing SMEs with the best of an on-premises server and Office 365. Users can access on-site storage and applications running on Essentials, but it also integrates with Office 365, including hosted Exchange, and Azure for secure off-site backup.

Despite all of this, I don’t see any significant differences in core functionality between Essentials in Windows Server 2012 R2 and in 2016. Microsoft hasn’t released any documentation on Windows Server 2016 Essentials yet; when it does, if I find any important details, I’ll update this article.

Security Improvements

Not a reason to upgrade in itself but not to be sniffed at, either: Windows Defender now comes enabled out-of-the-box in Windows Server, and according to Microsoft, it’s optimized for Windows Server, so should behave itself no matter what server roles you have installed.

Other security improvements include Credential Guard, which isolates privileged domain credentials using hardware virtualization, and Control Flow Guard, which helps to prevent the exploitation of memory corruption vulnerabilities. For more information on Credential Guard, see Windows 10 Enterprise Feature: Credential Guard on Petri.

Windows Containers (Docker)

This feature might be handy for SMEs with in-house developers, because Windows Server Containers can be used with Docker. Depending on your goals, note that the Windows 10 Anniversary Update also includes support for Hyper-V containers. For more information on Docker and Windows containers, see Deploy and Manage Windows Server Containers using Docker and What is Docker? on Petri IT Knowledgebase.
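
As a minimal sketch of what that looks like in practice – assuming a Windows Server 2016 host with the Containers feature and the Docker engine installed, and using the microsoft/* image names that were current at launch (they have since moved to Microsoft’s container registry) – running a Windows container is a one-liner:

# Pull and run a Windows Server Core container interactively
docker run -it microsoft/windowsservercore cmd

# Hyper-V containers use the same tooling; the isolation mode is just a flag
docker run -it --isolation=hyperv microsoft/nanoserver cmd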

Azure Inspired Upgrade

Windows Server 2016 is, as Microsoft puts it, an “Azure inspired upgrade,” and unless you have specific requirements in app development, security, or virtualization, there’s little need to rush into upgrading to this release of Windows Server.

The post Windows Server 2016: What’s in It for Small Businesses? appeared first on Petri.

Solar road tiles get their first public test

The content below is taken from the original (Solar road tiles get their first public test), to continue reading please visit the site. Remember to respect the Author & Copyright.

No, that’s not an elaborate new Lite-Brite kit – that’s the possible future of energy. After years of work (and some last-minute delays), Solar Roadways has installed its first public energy tiles in Sandpoint, Idaho as part of a test. On top of producing a light show, the panels will generate power for the fountain and restrooms in a public square. They have heating elements, too, so they should keep running even in the heart of winter. And if you’re not sure how well they’ll work in practice, you can check on them yourself: Sandpoint has a live webcam pointed at the tiles.

It’s a modest dry run with just 30 panels, and it’ll be a long while before you see them on the streets they were designed for. However, it shows that they’re more than just theoretical exercises. And if a small number of tiles can power a town square by themselves, it’s easy to imagine full-fledged solar roads shouldering a significant amount of the energy demand for whole cities.

Via: KREM2

Source: Solar Roadways (Twitter), Sandpoint

OpenStack Developer Mailing List Digest September 24-30

The content below is taken from the original (OpenStack Developer Mailing List Digest September 24-30), to continue reading please visit the site. Remember to respect the Author & Copyright.

Candidate Proposals for TC are now open

  • Candidate proposals for the Technical Committee (6 positions) are open and will remain open until 2016-10-01, 23:45 UTC.
  • Candidates must submit a text file to the openstack/election repository [1].
  • Candidates for the Technical Committee can be any foundation individual member, except the seven TC members who were elected for a one year seat in April [2].
  • The election will be held from October 3rd through to 23:45 October 8th.
  • The electorate are foundation individual members that are committers to one of the official project teams [3] over the Mitaka-Newton timeframe (September 5, 2015 00:00 UTC to September 4, 2016 23:59 UTC).
  • Current accepted candidates [4]
  • Full thread

Release countdown for week R-0, 3-7 October

  • Focus: Final release week. Most project teams should be preparing for the summit in Barcelona.
  • General notes:
    • Release management team will tag the final Newton release on October 6th.
      • Project teams don’t have to do anything. The release management team will re-tag the commit used in the most recent release candidate listed in openstack/releases.
    • Projects not following the milestone model will not be re-tagged.
    • Cycle-trailing projects will be skipped until the trailing deadline.
  • Release actions
    • Projects not following the milestone-based release model that want stable/newton branches created should talk to the release team about their needs. Unbranched projects include:
      • cloudkitty
      • fuel
      • monasca
      • openstackansible
      • senlin
      • solum
      • tripleo
  • Important dates:
    • Newton final release: October 6th
    • Newton cycle-trailing deadline: October 20th
    • Ocata Design Summit: October 24-28
  • Full thread

Removal of Security and OpenStackSalt Project Teams From the Big Tent (cont.)

  • The change to remove Astara from the big tent was approved by the TC [4].
  • The TC has appointed Piet Kruithof as PTL of the UX team [5].
  • Based on the thread discussion [6] and the engagement of the team, the Security project team will be kept as is, with Rob Clark continuing as PTL [7].
  • The OpenStackSalt team did not produce any deliverable within the Newton cycle. The removal was approved by the current Salt team PTL and the TC [8].
  • Full thread

 

[1] – http://bit.ly/2cjtN3W

[2] – http://bit.ly/2ddBxos

[3] – http://bit.ly/2dC4jBJ

[4] – http://bit.ly/2dC3nx8

[5] – http://bit.ly/2ddBadA

[6] – http://bit.ly/2d7xOdc

[7] – http://bit.ly/2ddBadA

[8] – http://bit.ly/2ddBitw

Introducing Google Container-VM Image

The content below is taken from the original (Introducing Google Container-VM Image), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Aditya Kali and Amey Deshpande, Software Engineers

This spring, we announced Container-VM Image as a beta product under Google Cloud Platform (GCP). If you’re a developer interested in deploying your application or a service provider on Google Compute Engine, we recommend taking a few moments to understand how it can help you.

Linux containers help developers to focus on their application without worrying about the underlying infrastructure. A secure and up-to-date base image is a critical building block of any container-based infrastructure. Container-VM Image represents the best practices we here at Google have learned over the past decade running containers at scale.

Container-VM Image design philosophy

Container-VM Image is designed from the ground up to be a modern operating system for running containers on GCP. Read on for more information about the design choices behind Container-VM Image and its attributes.

Build environment

Container-VM Image is based on the open-source Chromium OS project. Chromium OS is a reliable and vetted source code base for this new operating system. In addition, it allows us to use the powerful build and test infrastructure built by the ChromeOS team.


Designed for containers

The Docker container runtime is pre-installed on Container-VM Image. A key feature of containers is that the software dependencies can be packaged in the container image along with the application. With this in mind, Container-VM Image’s root file system is kept to a minimum by only including the software that’s necessary to run containers.

More secure by design

Container-VM Image is designed with security in mind, rather than as an afterthought. The minimal root file system keeps the attack surface small. The root file system is mounted as read-only, and its integrity is verified by the kernel during boot up. Such hardening features make it difficult for attackers to permanently exploit the system.

Software updates

Having full control over the build infrastructure combined with a minimal root file system allows us to patch vulnerabilities and ship updated software versions very quickly. Container-VM Image also ships with an optional “in-place update” feature that allows users to stay up-to-date with minimal manual intervention.

Getting started

The Container-VM Images are available in the “google-containers” GCP project. Here are a few commands to get you started:

Here’s how to list currently available images:

$ gcloud compute images list --project google-containers --no-standard-images

Note: All new Container-VM Images have a “gci-” prefix in their names.

Here’s how to start a new instance:

$ gcloud compute instances create 

Once the instance is ready, you can ssh into it:

$ gcloud compute ssh 

You can also start an instance using Cloud-Config, the primary API for configuring an instance running Container-VM Image. You can create users, configure firewalls, start Docker containers and even run arbitrary commands required to configure your instance from the Cloud-Config file.

You can specify Cloud-Config as Compute Engine metadata at the time of instance creation with the special `user-data` key:

$ gcloud compute instances create 
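
The create commands above lost their argument placeholders somewhere along the way. As a sketch only – the instance name, zone and cloud-config contents are illustrative, and the image family and project values should be checked against the image list returned above – a complete invocation might look like this:

$ cat cloud-config.yaml
#cloud-config
runcmd:
- docker run -d -p 80:80 nginx

$ gcloud compute instances create my-gci-instance \
    --zone us-central1-a \
    --image-family gci-stable \
    --image-project google-containers \
    --metadata-from-file user-data=cloud-config.yaml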

What’s next

We’re working hard on improving and adding new features to Container-VM Image to make it the best way to run containers on GCP.  Stay tuned for future blogs and announcements. In the meantime, you can find more documentation and examples at the Container-VM Image homepage, and send us your feedback at [email protected] .

LILEE Systems’ new fog computing platform is well suited to distributed enterprises  

The content below is taken from the original (LILEE Systems’ new fog computing platform is well suited to distributed enterprises  ), to continue reading please visit the site. Remember to respect the Author & Copyright.


Location, location, location! It turns out that mantra is not just for the real estate market. Location is a critical aspect of fog computing as well.

Cisco introduced the notion of fog computing about two and a half years ago. (See Cisco unveils ‘fog computing’ to bridge clouds and the Internet of Things.) This distributed computing architecture addresses the challenge of backhauling a lot of raw data generated in the field –say from thousands or millions of IoT devices – to the cloud for analysis.

Fog computing, also called edge computing, has some of the data processing and analysis take place at the edge location, close to the devices that are generating the data. The edge nodes have sufficient processing capabilities to capture, distil and analyze data and send only the most relevant information up to the cloud where further action can take place; for example, sending an alert that a piece of equipment in the field is about to fail.

It’s easy to conceive how this works when the edge node is in a permanent location, say on a factory floor, and it can communicate with the corresponding cloud component over Wi-Fi with relative stability. But what if the location of the edge node is always changing because it is, quite literally, in motion? For example, consider the edge node being on a bus, on a megaship full of shipping containers, or on a first responder vehicle like a police car or ambulance.

Unlike in real estate, a questionable location doesn’t have to be a show stopper. LILEE Systems has recently announced a fog computing platform to address the broad needs of mobile deployments in distributed enterprises.

Since its founding in 2009, LILEE has focused on the challenges of enterprise organizations deploying and managing equipment and keeping people, machines and IoT devices connected to the corporate network as they became more mobile. The company honed its fog computing solutions on the railroad industry, where trains and working crews present a mobile component but there is also equipment on the wayside that is doing sensor monitoring of the track conditions. LILEE has put network equipment on trains and on the wayside, and has software management tools to manage M2M communications and look for alerts and conduct analytics.

The company says it does business with five of the seven Class I railroads in the U.S., and is now branching out to other markets that can benefit from having fog nodes at the edge. Specifically, the company is targeting markets such as freight and supply chain; industrial, with remote facilities and monitoring needs; smart cities, including distributed traffic lights and video surveillance; first responders such as police, ambulance and fire trucks; retail where there is a need for point-of-sale backup, digital displays and other in-store services; commercial fleets with a vast mobile workforce; and education, both at schools and in buses.

In mid September, LILEE announced a series of hardened fog computing gateway devices and a cloud management platform designed to make it easier for distributed enterprises to deploy and manage their nodes at the edge.

The LILEE TransAir STS product series is a “5 in 1” gateway – basically an industrial PC with routing capabilities. The platform comprises cloud management, communication, and fog computing capabilities, along with application interfaces and sensors. In designing this product, LILEE says it considered the IT group’s need for the gateway to be as easy as possible to set up and configure, so that remote deployment doesn’t require significant IT skills.

LILEE provides cellular and Wi-Fi router capability in the box that enables connection to the cloud. As the device is powered up, it connects to LILEE’s backend cloud-based management offering, T-cloud, to get its configuration and provisioning information. Once the device identifies itself, the applications it needs can be downloaded. For example, if this edge device is going into a coffee shop, it might need applications for point-of-sale, digital signage and customer Wi-Fi. All of that can come straight from the cloud platform, run on the local platform, and be managed from the cloud.

The gateway supports a variety of local interfaces, including Ethernet, serial port, USB port, HDMI for display, and more. It also supports sensors such as OBD for vehicle analytics, gyroscope and accelerometer for anything that is changing location, and digital I/O for any sort of binary emergency or panic switches.

LILEE 5 in 1 Gateway

Software running on the fog platform’s industrial PC lets applications interact through customized interfaces as well as through the various sensors, and all of that information can be analyzed locally on the fog computing engine or up in the cloud. The real benefit is being able to distribute that analysis between the IT department, which looks at it from the cloud, and the branch side, which manages its own environment locally.

LILEE Platform

The value of this solution can be better understood through a couple of use cases.

A long-haul bus company has a video system installed to monitor the passengers and make sure everyone is sitting calmly. The bus is moving so the only way to get that information to the cloud is over LTE. It’s too expensive to funnel all that video straight to the cloud for analysis, so an on-board fog platform does the video analysis locally. Most of the data is going to be benign as people sit quietly, but if some passengers get into an argument and cause a stir that might give the driver some concern, the analytics software on the fog node sends up an alert and starts sending the video stream back to the bus company’s cloud instance. There it can be permanently recorded in the event that an incident ensues.

In another bussing scenario, there’s a company that provides private bussing services to corporate clients who have employees that commute long distances. The passengers are all picked up in one location and may spend two or more hours on the bus daily to get to and from a work location. The bus company has a fog computing node on the vehicle that enables the corporate client’s business applications for these workers, so they can use a VPN to get to email, videoconferences, etc. Again, bandwidth matters, so some of the processing is offloaded to the local node before being passed off to the cloud.

In a first responder situation, there is a mission critical aspect of video surveillance, tracking license plates, sending a patient’s vital signs to a hospital, or detecting where all the firefighters are in a fire. These kinds of things can be core applications that are deployed locally to the police, fire or EMT vehicles. In addition, the vehicles themselves may need to be monitored to make sure they are running well. LILEE’s solution provides connectivity to the OBD2 port which monitors and manages all the sensors across the vehicle. Relevant data can be sent to the cloud as needed, say to notify a dispatch group of dangerous situations in the field.

As mobility increases and IoT proliferates, distributed enterprises will be challenged to support applications in their remote locations. LILEE’s fog computing platform looks like a good way to support them.

Lock Up Your Raspberry Pi with Google Authenticator

The content below is taken from the original (Lock Up Your Raspberry Pi with Google Authenticator), to continue reading please visit the site. Remember to respect the Author & Copyright.

Raspberry Pi boards (or any of the many similar boards) are handy to leave at odd places to talk to the network and collect data, control things, or do whatever other tasks you need a tiny fanless computer to do. Of course, any time you have a computer on a network, you are inviting hackers (and not our kind of hackers) to break in.

We recently looked at how to tunnel ssh using a reverse proxy via Pagekite so you can connect to a Pi even through firewalls and at dynamic IP addresses. How do you stop a bad guy from trying to log in repeatedly until they have access? This can work on any Linux machine, but for this tutorial I’ll use Raspberry Pi as the example device. In all cases, knowing how to set up adequate ssh security is paramount for anything you drop onto a network.

Better than Password Security

Experts tell you to use a good password. However, with ssh, the best method is to disallow passwords completely. To make this work, you need to create a private and public certificate on the machine you want to use to connect. Then you copy the public key over to the Raspberry Pi. Once it is set up, your ssh client will automatically authenticate to the server. That’s great if you always log in using the same machine and you never lose your keys.

You need to create a personal key pair if you haven’t already. You can use the ssh-keygen command to do that on Linux. You can require a passphrase to unlock the key or, if you are sure only you have access to your machine, you can leave it blank.
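
For example, one sensible invocation (the key type, size and comment below are just a reasonable default, not anything the Pi requires):

ssh-keygen -t rsa -b 4096 -C "laptop key for the Pi"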

Once you have the key it is easy to send the public key over to the server with the ssh-copy-id command. For example:

ssh-copy-id [email protected]

You log in with your password one last time and the command copies your public key to the server. From then on, when you use ssh on that host, you’ll be automatically authenticated. Very handy.

Once you have keys set up, you can disable using regular passwords by editing /etc/ssh/sshd_config. You need the following settings:

ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no

That prevents anyone from breaking in by brute force guessing of passwords. It also makes it harder to set up new users or log in from a new computer.

Save the Passwords; Use Two Types

For those reasons, it is not always a good idea to turn off passwords. A better idea is to use two-factor authentication. That requires you to enter a password and also a “one time” verification code. Where do you get that code?

There are several options, but the one I’ll use is from the Google Authenticator application. You can get the application for Apple devices, Blackberries, and–of course–Android devices. You install it in the usual way for your device. The trick is how to make the ssh server on the Pi use it.

Luckily the Raspbian repos have a package called libpam-google-authenticator that will do the trick. Installing it with apt-get is only part of the trick, though. You need two things. First, you need to set up your account.

Set Up a Google Authenticator Account

To set up your account, you need to log into your Pi and issue the command google-authenticator. The program will ask you a few questions and then generate a URL that will show you a QR code. It will also provide you a numeric code. You can use either of these to set up your phone. The command will also provide you a few one-time scratch codes you need to save in case you lose your authenticator device. You need to do this for any user ID that can log in via ssh (even ones where you normally use a certificate).
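
In other words, on the Pi, using the Raspbian package mentioned above:

sudo apt-get install libpam-google-authenticator   # install the PAM module
google-authenticator                               # run as each user who will log in over ssh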

Tell Your Pi to Require Two-Factor Login

The other part of the puzzle requires you to make changes to /etc/pam.d/sshd and /etc/ssh/sshd_config. The first line of /etc/pam.d/sshd should be:

auth required pam_google_authenticator.so

In /etc/ssh/sshd_config you need to make sure passwords are on:

ChallengeResponseAuthentication yes
PasswordAuthentication yes
UsePAM yes

Just make sure you don’t mess anything up. Losing the ssh server could stop you from being able to access the machine. I haven’t messed one up yet, but the advice I hear is to keep an ssh session open while you restart the ssh server (sudo /etc/init.d/ssh restart on Raspbian) so if something goes wrong, you’ll still have a shell prompt open. You might also consider running:

/usr/sbin/sshd -t

This will verify your configuration before you pull the trigger.

By the way, if you already use certificates to log in, this won’t change anything for you. The certificate authentication takes priority over passwords. That does make it tricky to test your setup. You can force ssh not to use your certificate like this:

ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no [email protected]

Even for accounts where you use certificates, adding the two-factor login will prevent brute-force attacks on your password, so be sure to set it up for every account you can use over ssh.

Other Protections

There are several other things you can do to help secure your ssh connection (a consolidated sshd_config sketch follows the list):

  • Disallow root logins (edit the PermitRootLogin line in /etc/ssh/sshd_config; you can use sudo from a normal account if you want to become root)
  • Use a non-standard ssh port (edit port in sshd_config)
  • Consider installing fail2ban which will block IP addresses that exhibit suspicious behavior
  • Disallow any users that don’t need ssh access (use AllowUsers or DenyUsers in the sshd_config file)
  • Set PermitEmptyPasswords in sshd_config to ‘no’
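
Pulling those suggestions together, the relevant sshd_config lines might look like this; the port number and user name are placeholders only:

Port 2222                 # any unused, non-standard port
PermitRootLogin no
PermitEmptyPasswords no
AllowUsers WetSnoopy      # list only the accounts that actually need ssh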

Not everyone will agree with disallowing root logins. However, empirically, a lot of attacks will try logging in as root since virtually every Linux system has that account. It is harder for a random attacker to know that my user ID is WetSnoopy.

There are many other techniques ranging from port knocking to locking users to their home directories. You can rate limit connection attempts on the ssh port. Only you can decide how much security is enough. After all, you lock up your cash box better than you lock up your supply closet. However, convenient and free two-factor authentication can add a high level of security to your Raspberry Pi or other Linux-based projects. If you are really concerned, by the way, you can also force two-factor for accessing sudo, as well.

Which is cheaper: Containers or virtual machines?

The content below is taken from the original (Which is cheaper: Containers or virtual machines?), to continue reading please visit the site. Remember to respect the Author & Copyright.

The emergence of application containers has come with questions about where this technology fits in the enterprise technology landscape, and more specifically how it compares to virtual machines.


A new report from 451 Research has some provocative findings on just how advantageous containers could be, not just for developers and operators, but for the finance team too.

“451 Research believes containers are better placed, at least theoretically, to achieve lower TCO (total cost of ownership) than traditional hardware virtualization,” 451 Researchers Owen Rogers and Jay Lyman write. “In fact, we have found that double-digit resource savings are achievable even with relatively simple implementations.”


The key to Rogers and Lyman’s research is the notion that containers are an operating system-level virtualization. Hardware virtualization uses a hypervisor to create virtual machines and each of those VMs has its own operating system. Containers, on the other hand, virtualize the operating system so multiple containers share the OS. This creates efficiencies.

“Containers have the same consolidation benefits of any virtualization technology, but with one major benefit – there is less need to reproduce operating system code,” Rogers and Lyman write in the report. You can pack more onto a server running containers and one version of an OS than onto a server running a hypervisor with multiple copies of an OS.

For example, a server that is hosting 10 applications in 10 virtual machines has 10 copies of the OS running, one in each VM. In a containerized model without virtual machines, those 10 applications could – theoretically – all share a single copy of the OS. Rogers and Lyman say this creates double-digit percentage savings in server processing resources for containers versus VMs. “Operating system virtualization does a better job of consolidation than hardware virtualization, simply because there is less duplication and therefore less resource consumption,” the researchers found. Having fewer operating systems yields other benefits. For example, management is less cumbersome because there are fewer OS security updates.
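
A quick way to see that sharing for yourself on any Linux Docker host (the image choice is arbitrary):

uname -r                           # kernel release on the host
docker run --rm alpine uname -r    # a container reports the same kernel release
# Both commands print the same version, because containers share the host OS kernel
# rather than booting their own – which is where the consolidation saving comes from.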

So should containers replace all VMs?

VMs aren’t going away

Virtual machines still have an important place in enterprise technology, Rogers and Lyman say. “Anytime new technology comes along, it’s not as simple as flipping a switch and start seeing savings,” says Sushil Kumar, CMO at container-based software defined networking startup Robin Systems. Kumar notes that when virtualization was first gaining traction in the early 2000s, it was first implemented in test and development environments. Even today, Kumar says some applications like virtual desktops and enterprise applications like SharePoint run just fine in virtual machines and benefit from having hypervisor isolation of VMs.

There are other factors to consider beyond cost. Using containers outside of VMs requires the ability to provision and manage bare metal servers, for instance, which could involve some additional cost.

“There are benefits to VMs in more mature security, tooling, management and process, which is why we expect – just as there are still physical machines alongside today’s VMs – there will likely be VMs alongside containers,” Rogers and Lyman write. “Nevertheless, efficiency, scalability and performance factors (including the cost advantage) will undoubtedly mean more disruption to VMs from the advent of containers.”

What to do when hackers break into your cloud

The content below is taken from the original (What to do when hackers break into your cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are two major types of public cloud computing attacks: single-tenant and cross-tenant. A cross-tenant attack is the stuff of IT nightmares, but it has not yet occurred. (In a cross-tenant attack, the hackers gain root-level access to the cloud and thus access to most or all of the tenants — including you.)

Single-tenant breaches are more likely to occur. In these attacks, the hacker has compromised one or more machine instances, but can’t go beyond that. The most likely cause of a single-tenant breach is that user IDs and passwords have been compromised, typically due to malware or phishing attacks on client devices. In this case, it’s all on you; the cloud provider has done its job, but you haven’t done yours.

When such breaches occur, hopefully you’ll figure it out quickly. When you recognize the breach, the best response is to invoke a prebuilt set of processes that can do the following:

  1. Shut down the instances — computer, storage, or both — that have been compromised. That prevents any activity, whether good or bad, until the problem has been corrected.
  2. Audit the security system to determine how the attackers gained access and what they did while in the system. Isolate the hackers and remove their access from the system.
  3. Resecure the system and make users change their passwords before they are granted renewed access.

Of course, this does not address the core problem — it only fixes a single intrusion. To address the core vulnerabilities of single-tenant attacks:

  1. Establish proactive monitoring mechanisms to ensure that odd activity is spotted quickly and the relevant cloud instances are defended. For example, monitor for access from a foreign IP address and for multiple login failures (a quick log-scanning sketch follows this list).
  2. Consider using encryption, at least with your data at rest. That way, even if hackers gain access, your data remains protected.
  3. Implement identity and access management.
  4. Consider using multifactor authentication and other types of access mechanisms that provide better protection at the user-access level.
  5. Review the security services that your cloud provider offers, and consider using any that apply. It can be better to use the native security capabilities than to bolt on your own or those of third parties.
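
As a trivial sketch of that first point for a Linux instance – the log path and field positions vary by distribution and sshd version, so treat it as illustrative:

# Top source IPs for failed ssh logins – a crude brute-force signal
grep "Failed password" /var/log/auth.log \
  | awk '{print $(NF-3)}' \
  | sort | uniq -c | sort -rn | head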

As more workloads move into the public cloud, we’ll see more attacks. That’s what happens when any platform, cloud or not, gains popularity. But if you’re proactive and invest in modern security mechanisms, you’ll discover that the cloud is a more secure place for your applications and data than your datacenter has been.

Google announces eight new cloud regions, new support model

The content below is taken from the original (Google announces eight new cloud regions, new support model), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google has announced a big expansion of its cloud, with new regions planned for Mumbai, Singapore, Sydney, Northern Virginia, São Paulo, London, Finland and Frankfurt … and those are just the ones it plans to turn on during 2017. The company’s also planning to announce more regions in the future.

Perhaps more importantly, Google is also putting some consulting muscle behind its cloud, announcing it’s created new “Customer Reliability Engineering” roles “designed to deepen our partnership with customers. CRE is comprised of Google engineers who integrate with a customer’s operations teams to share the reliability responsibilities for critical cloud applications.”

“This integration represents a new model in which we share and apply our nearly two decades of expertise in cloud computing as an embedded part of a customer’s organization.”

Google’s cloud veep Brian Stevens says, “We’ll have more to share about this soon.”

But back to those new bit barns. Google says they’re needed because a billion people rely on services delivered from its cloud, and more capacity will mean lower latency. The company looks to have put as much effort into connectivity as it has into the data centres themselves: its shiny new map (see top of story or here for those of you on our mobile site) shows network links between bit barns around the globe.

The company is also starting to emphasise the role it thinks Kubernetes can play in the hybrid cloud, pointing out that the new version 1.4 now allows clustered implementations that span “multiple clouds, whether they are on-prem, on a different public cloud vendor, or a hybrid of both.”

Google’s gradually turning up the heat on Kubernetes, emphasising its origins at Google and therefore its cloud’s capabilities as a fine place to run the container orchestration code. Throw in the new support offering and its emphasis on using Google’s expertise to run your cloud, and Google’s position as a cloud for cloud-native and containerised apps is becoming clearer.

But the main thing that changes after this announcement is that Google now has reach to match Amazon Web Services, Azure and SoftLayer. But it’s not a killer blow: AWS on Wednesday announced a new region in Paris, France. ®


LXLE: A Linux distro to give new life to old hardware

The content below is taken from the original (LXLE: A Linux distro to give new life to old hardware), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’ll bet that somewhere, perhaps at home and most likely at work, you’ve got some old hardware lying around. What to do with it? It still works but what’s it running? Windows XP? Vista? Windows 7 Starter or Home Basic?

Yep, you’re stuck on some old version of Windows, but moving that machine up to a newer version of Windows could be tricky ’cause one or more of those old graphics card and printer drivers have probably fallen out of the update cycle.

Even if those subsystems are still available, you’ll still have a problem, as the newer OSes are pretty much guaranteed to suck the life out of old processors, with the result that performance, and therefore usability, will be marginal at best.

So, what to do? Before you start looking for a deal on a new machine and an e-waste disposal site, consider moving to Linux and, most specifically, consider migrating to LXLE, the LXDE eXtra Luxury Edition (though some people also claim it stands for Lubuntu Extra Life Extension).

Based on Lubuntu and using the Lightweight X11 Desktop Environment (LXDE) graphical user interface, LXLE is small, fast, and

… designed to be a drop-in and go OS, primarily for aging computers. Its intention is to be able to install it on any computer and be relatively done after install. At times removing unwanted programs or features is easier than configuring for a day. Our distro follows the same LTS schedule as Ubuntu. In short, LXLE is an eclectic respin* of Lubuntu with its own user support.

While the good people behind LXLE tout the distribution as a replacement for old versions of Windows, the reality is that non-Linux users are going to have to learn a few new concepts. That said, with other distros, one of the pain points would be maintaining the system, which usually involves updating the package list, then updating the packages, then deleting unwanted files, and … it’s all a tedious and rather techie process.

LXLE has included a package, uCareSystem Core, that takes a lot of the pain out of keeping everything up to date and “cruft free”. To be honest, uCareSystem could be better integrated by, for example, having a GUI entry to launch it (you currently have to invoke it from a terminal session) as well as being run automatically on a schedule. Even so, including uCareSystem is a step in the right direction.     

The latest release of LXLE, version 16.04, is a 1.29GB download. I first installed it in a virtual machine under Parallels 12 with Parallels Tools, and then on an old laptop, and wow! LXLE really is fast. You can also find both 32- and 64-bit LXLE 14.04 and 32-bit LXLE 12.04 virtual machines for both VirtualBox and VMware on OSboxes.org (none of the VMs come with the guest additions installed; also note that, contrary to what OSboxes says, the usernames for the VMs are “osboxes.org”, not “osboxes”).

So, if you’re looking for a lightweight Linux-based operating system to extend the life of old hardware, want to bring old equipment back from retirement, or want a low-overhead, high-performance i386 platform, check out LXLE.

Comments? Thoughts? Drop me a line or comment below then follow me on Twitter and Facebook.

* “Respin,” a remastered version of a major Linux distro tailored for a specific purpose or group.

NComputing launches vSpace Pro 10, its next generation desktop virtualization platform

The content below is taken from the original (NComputing launches vSpace Pro 10, its next generation desktop virtualization platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

We haven’t heard much from NComputing for a while now. The company has had its own trials and tribulations, and they’ve spent the better part of a… Read more at VMblog.com.

Microsoft adds SharePoint support to OneDrive

The content below is taken from the original (Microsoft adds SharePoint support to OneDrive), to continue reading please visit the site. Remember to respect the Author & Copyright.

Quite a lot of news came out of the Microsoft Ignite 2016 conference, most of which we have already covered. But there’s more, as Microsoft announced a big upgrade to its OneDrive cloud storage service.

Microsoft wants to provide a single sync client for all of its cloud storage services, whether it’s OneDrive, OneDrive for Business or SharePoint, and has been working toward that for some time. To that end, it has added the ability to sync SharePoint Online document libraries with OneDrive folders, and added an “activity center” to the OneDrive sync client so you can view synchronization and file activity at a glance.

You won’t need OneDrive to see shared folders from others, either. The new update adds the ability to get to folders from someone else’s OneDrive that are shared with you through the Windows File Explorer.

Microsoft also updated the OneDrive browser to add rich thumbnails and previews for over 20 new file types. This feature will be rolled out before the end of 2016. You will also have the ability to access and edit all of your files in OneDrive and SharePoint Online from the OneDrive browser client and be able to download multiple files as a single .zip file. These two features will also be rolling out before the end of the year. 

On the mobile side of things, Android and iOS devices will now receive a notification when someone shares a file with you. Android users will be able to access SharePoint Online files in the Android OneDrive app, and the app has been given multi-page scan enhancements. For iOS, you will have the ability to see over time how many people have discovered and viewed your files in OneDrive.

Microsoft promises more Office 365 integration as well. For example, you’ll see a notification in the ribbon if someone is editing a file at the same time as you, along with who it is, and you can even launch Skype for Business to talk to that person about it. Also, when you open the File menu, you’ll now see files that have recently been shared with you, in addition to your last-used local files.

For IT pros, Microsoft will be adding per-user controls to the Office 365 User Management console. Admins will be able to sign a user out of their account if the employee leaves or is fired, or if the account is somehow compromised, for example because a device is lost or stolen.

A Commodore 64 has helped run an auto shop for 25 years

The content below is taken from the original (A Commodore 64 has helped run an auto shop for 25 years), to continue reading please visit the site. Remember to respect the Author & Copyright.

Apple’s Phil Schiller thinks it’s sad that people use 5-year-old computers. Well, Phil, there’s an auto repair shop in Poland that’s going to send you spiraling into a long depression. Why? Because one […]

The New Raspberry Pi OS Is Here, and It Looks Great

The content below is taken from the original (The New Raspberry Pi OS Is Here, and It Looks Great), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Raspberry Pi’s main operating system, Raspbian, just got a brand new look from the Raspberry Pi Foundation. Dubbed PIXEL, it’s a skin for Raspbian that modernizes the interface, adds some new programs, and makes it much more pleasant to use. Let’s take a closer look at your Pi’s new appearance.

The New Splash Screen Replaces the Old Cryptic Boot Messages

The first big change you’ll see is the lack of long strings of text when you boot up your Raspberry Pi. In their place is a splash screen that shows the operating system and version number, just like you’d find on any other modern computer. Otherwise, the overall boot time and process remains the same.

PIXEL Comes Preloaded with RealVNC, Chromium, and More

PIXEL also adds a few notable new default programs. The biggest new app is Chromium, which replaces the aging Epiphany web browser. This is the first version of Chromium built specifically for the Pi and uses the Pi’s hardware to accelerate video playback. Chromium comes with a couple extensions installed, including uBlock Origin for blocking ads as well as the h264ify extension for improving YouTube video quality on the Pi. Chromium is best suited for the Raspberry Pi 2 and 3, but still works on the Pi 1 and the Pi Zero.

RealVNC is included so you can easily use the Raspberry Pi from a remote desktop right out of the box. If you’ve never used RealVNC on your Pi before, setup is very simple. RealVNC is a nice way to access your Pi if you only own a laptop and don’t feel like buying a keyboard, mouse, and monitor.

There’s also a new Sense HAT emulator that lets you test ideas for the Sense HAT peripheral. The emulator allows you to adjust the gyroscope, temperature, screen, and tons more.

PIXEL Comes with a Bunch of Good-Looking Wallpapers

This might not sound like much, but considering that Raspbian’s default background has always been either blank or the Raspberry Pi logo, it’s really nice that PIXEL comes with a bunch of wallpapers. Included are 16 photos from one of the Raspberry Pi developers, Greg Annandale. You can get to them by clicking the Pi Logo > Preferences > Appearance Settings.

Of course, you’ve always been able to use your own wallpapers, but it’s much nicer when you’re greeted by a photograph on your first boot.

PIXEL Features All New Application Icons, New Temperature and Voltage Icons

You likely don’t think about the quality of an icon very often, but the icons in Raspbian were always a bit lacking. They were drab, sometimes pixelated and blurry, and looked a bit muddy. Now, they’re much more vibrant and easier to see at a glance.

Also gone is the cryptic rainbow display that warned if your Pi was under voltage or over temperature. In its place is a lightning bolt for voltage and a thermometer for temperature, which should make troubleshooting a ton easier.

Each Window Sports a Cleaner, Rounded Title Bar

In previous iterations of Raspbian, the windows were blocky squares that always made the system look outdated. Now, it’s much more modern looking with rounded corners, a new title bar, and new close/minimize/maximize buttons. It’s a minor change but looks a lot better overall.

You Can Easily Disable Wi-Fi or Bluetooth

If you don’t need Wi-Fi or Bluetooth, having them on can drain power quickly, which is a problem if you’re working on a project that uses a battery pack. PIXEL adds in a new menu for both Wi-Fi and Bluetooth that makes it a lot easier to turn either off. Just click the icon, click the off button, and you’re all set.

How to Update Your Current Version of Raspbian

PIXEL will ship as the Raspberry Pi Foundation’s main operating system from here on out. If you already have a copy of Raspbian up and running, you can update it to this version by loading up the command line and typing the following commands:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install -y rpi-chromium-mods
sudo apt-get install -y python-sense-emu python3-sense-emu
sudo apt-get install -y python-sense-emu-doc realvnc-vnc-viewer

If you prefer to start from scratch and burn a new image, you can get PIXEL from the Raspberry Pi Foundation’s downloads page.

Nano-scale mirror could be a breakthrough for optical computing

The content below is taken from the original (Nano-scale mirror could be a breakthrough for optical computing), to continue reading please visit the site. Remember to respect the Author & Copyright.

Using a mere 2,000 atoms of cesium, Professor Julien Laurat and his team at the Pierre and Marie Curie University in Paris have created the world’s smallest mirror. According to postdoctoral fellow Neil Corzo, who is also lead author on the team’s research paper published in the Physical Review Letters journal this week, the nano-mirror has the same level of reflectance as materials that require tens of millions of atoms and could one day lead to new advances in optical computing.

The mirror uses a nanoscale optical fiber only 400 nm in diameter to place the chain of cesium atoms in just the right alignment to reflect the light. (For reference, a human hair is roughly 80,000-100,000 nm thick.) Because of the extremely tiny scale, the atom chains had to be precisely spaced at half the wavelength of the light beam — which also means the color of light had to be specifically chosen.
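
For a rough sense of the numbers, assuming the light sits near the cesium D2 line at roughly 852 nm (the article doesn’t name the transition), the required spacing works out to d = λ/2 ≈ 852 nm / 2 ≈ 426 nm, comparable to the 400 nm diameter of the fiber itself; in practice the guided wavelength inside the nanofiber, rather than the free-space value, sets the exact figure.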

As Popular Mechanics notes, the team was able to use the mirror to temporarily trap the light beam, essentially creating a sort of optical diode that can store and retrieve light pulses. As Corzo concludes in the paper, this may eventually prove useful in building light-based photonic circuits that will vastly increase computing speeds.

Via: Popular Mechanics

Source: Physical Review Letters, Science News Journal

Crusty Cat 5e/6 cables just magically sped up to 2.5 Gbps and 5 Gbps

The content below is taken from the original (Crusty Cat 5e/6 cables just magically sped up to 2.5 Gbps and 5 Gbps), to continue reading please visit the site. Remember to respect the Author & Copyright.

The IEEE has approved the specification covering 2.5 Gbps and 5 Gbps Ethernet, 802.3bz.

In particular, the approval signifies that the work item still incomplete at the end of 2015, the interface between the Media Access Control (MAC) and physical (PHY) layers, has now been completed.

Last December, NBase-T Alliance leader Peter Jones explained to El Reg that the group wanted 802.3bz to provide connectivity to the latest-generation 802.11ac Wi-Fi kit without replacing existing Cat 5e/6 cabling.

As Jones blogs here:

With the astonishingly huge installed base of Cat5e/6 unshielded twisted pair (UTP) copper cable—70 billion meters and 1.3 billion outlets at last count—organisations across industries were facing an impending crisis. A new generation of 802.11ac Wave 2 devices was on the way and looking directly at a 1Gb/s roadblock.

The IEEE’s approval means vendors can accelerate the rollout of products (which had already begun, in the time-honoured practice of shipping pre-standard product).

Work on 802.3bz began in 2014. The NBase-T Alliance board consists of Aquantia, Cisco, CME Consulting, Intel, Marvell, NXP and Xilinx. ®


10 tips for a successful cloud plan

The content below is taken from the original (10 tips for a successful cloud plan), to continue reading please visit the site. Remember to respect the Author & Copyright.

How do you get started using the cloud?

For some organizations, someone in the company has already begun using the cloud – whether the rest of the organization knows it or not. But to have a successful cloud deployment, it helps to have a plan.

Consultancy Cloud Technology Partners is one of many companies that help customers adopt public IaaS cloud computing resources. CTP says the following 10 tips are key for a successful cloud rollout.



Law firm cyclists create 500km Strava art in aid of London Air Ambulance

The content below is taken from the original (Law firm cyclists create 500km Strava art in aid of London Air Ambulance), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cyclists from law firm Leigh Day have raised over £6,500 for the London Air Ambulance by riding over 500km to create a giant Strava helipad


Using Google’s cloud networking products: a guide to all the guides

The content below is taken from the original (Using Google’s cloud networking products: a guide to all the guides), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Mike Truty, Cloud Solutions Architect

I’m a relative newcomer to Google Cloud Platform. After nine years working in Technical Infrastructure, I recently joined the team to work hand-in-hand with customers building out next-generation applications and services on the platform. In this role, I realized that my privileged understanding of how we build our systems can be hard to come by from outside the organization. That is, unless you know where to look.

I recently spent a bunch of time hopping around the Google Cloud Networking pages under the main GCP site, looking for materials that could help a customer better understand our approach.

What follows is a series of links for anyone who may want an introduction to Google Cloud Networking, presented in digestible pieces and ordered to build on previous content.

Getting started

First, for some quick 15-minute background, I recommend this Google Cloud Platform Overview. It’s a one-page survey of all the concepts you need to work in Cloud Platform. Then, you may want to scan the related Cloud Platform Services doc, another one-pager that introduces the primary customer-facing services (including networking services) that you might need. It’s not obvious, but Cloud Platform networking also lays the foundation for the newer managed services mentioned there, including Google Container Engine (Kubernetes) and Cloud Dataflow. After all that, you’ll have a good idea of the landscape and be ready to actually do something in GCP!


Networking Codelabs

Google has an entire site devoted to Codelabs — my favorite way to learn nontrivial technical concepts. Within the Cloud Codelabs there are two really excellent networking Codelabs: Networking 101 and Networking 102. I recommend them highly for a few reasons. Each one only takes about 90 minutes end-to-end; each is a quick survey of a few of the most commonly used features in cloud networking; both include really helpful hints about performance and, most importantly, after completing these Codelabs, you’ll have a really good sandbox for experimenting in cloud networking on Google Cloud Platform.

Google Cloud Networking references

Another question you may have is what are the best Google Cloud Networking reference docs? The Google Cloud Networking feature docs are split between two main landing pages: the Cloud Networking Products page and the Compute Engine networking page. The products page introduces the main product feature areas: Cloud Virtual Network, Autoscaling and Load Balancing, Global DNS, Cloud Interconnect and Cloud CDN. Be sure to scroll down to the end, because there are some really valuable links to guides and resources at the very bottom of each page that a lot of people miss out on.

The Compute Engine networking page is a treasure trove of all kinds of interesting details that you won’t find anywhere else. It includes the picture I hold in my mind for how networks and subnetworks are related to regions and zones, details about quotas, default IP ranges, default routes, firewall rules, details about internal DNS, and some simple command line examples using gcloud.
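
To give you a taste of the gcloud command-line examples mentioned above, here is a minimal sketch of creating your own network, subnet and firewall rule. The names and IP range are invented for illustration, and flag names have shifted between gcloud releases (older versions used --mode rather than --subnet-mode), so check the current reference rather than treating this as copy-paste gospel.

# Names and ranges below are illustrative only
gcloud compute networks create my-net --subnet-mode=custom
gcloud compute networks subnets create my-subnet --network=my-net --range=10.0.1.0/24 --region=us-central1
gcloud compute firewall-rules create allow-ssh --network=my-net --allow=tcp:22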

An example of the kind of gem you’ll find on this page is a little blurb on measuring network throughput that links to the PerfKitBenchMarker tool, an open-source benchmark tool for comparing cloud providers (more on that below). I return to this page frequently and find things explained that previously confused me.

For future reference, the Google Cloud Platform documentation also maintains a list of networking tutorials and solutions documents with some really interesting integration topics. And you should definitely check out Google Cloud Platform for AWS Professionals: Networking, an excellent, comprehensive digest of networking features.

Price and performance

Before you do too much, you might want to get a sense for how much of your free quota it will cost you to run through more networking experiments. Get yourself acquainted with the Cloud Platform Pricing page as a reference (notice the “Free credits” link at the bottom of the page). Then, you can find the rest of what you need under Compute Engine Pricing. There, you can see rates for the standard machine types used in the Codelabs, and also a link to General network pricing. A little further down, you’ll find the IP address pricing numbers. Finally, you may find it useful to click through the link at the very bottom to the estimated billing charges invoice page for a summary of what you spent on the codelabs.

Once you’ve done that, you can start thinking about the simple performance and latency tests you completed in the Codelabs. There’s a very helpful discussion of egress throughput caps buried in the Networking and Firewalls doc, and you can run your own throughput experiments with PerfKitBenchmarker (sources). This tool does all the heavy lifting with respect to spinning up instances, and understands how different cloud providers define regions, making for relevant comparisons. Also, with PerfKitBenchmarker, someone else has already done the hard work of identifying the accepted benchmarks in various areas.
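
If you want to try a throughput run yourself, PerfKitBenchmarker is driven from a single pkb.py entry point. A rough sketch, assuming you have cloned the repo, installed its requirements and authenticated gcloud (the machine type and zone are just examples):

# Run from a clone of the PerfKitBenchmarker repository
./pkb.py --cloud=GCP --benchmarks=iperf --machine_type=n1-standard-4 --zones=us-central1-b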

Real world use cases

Now that you understand the main concepts and features behind Google Cloud Networking, you might want to see how others put them all together. A common first question is how to set things up securely. Securely Connecting to VM Instances is a really good walkthrough that includes more overviews of key topics (firewalls, HTTPS/SSL, VPN, NAT, serial console), some useful gcloud examples and a nice picture that reflects the jumphost setup in the codelab.

Next you should watch two excellent videos from GCP Next 2016: Seamlessly Migrating your Networks to GCP and Load Balancing, Autoscaling & Optimizing Your App Around the Globe. What I like about these videos is that they hit all the high points for how people talk about public cloud virtual networking, and offer examples of common approaches used by large early adopters.

A common question about cloud networking technologies is how to distribute your services around the globe. The Regions and Zones document explains specifically where GCP resources reside, and Google’s research paper Software Defined Networking at Scale (more below) has pretty map-based pictures of Google’s Global CDN and inter-Datacenter WAN that I really like. This Google infrastructure page has zoomable maps with Google’s data centers around the world marked and you can read how Google uses its four undersea cables, with more ‘under’ the horizon, to connect them here.

Finally, you may want to check out this sneaky-useful collection of articles discussing approaches to geographic management of data. I plan to go through the solutions referenced at the bottom of this page to get more good ideas on how to use multiple regions effectively.

Another thing that resonated with me from both GCP Next 2016 videos was the discussion of how easy it is to set up and manage services in GCP that serve from the closest, low-latency instances using a single global Anycast VIP. For more on this, the Load Balancing and Scaling concept doc offers a really nice overview of the topic. Then, for some initial exploration of load balancing, check out Setting Up Network Load Balancing.
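
To give a sense of what that network load balancing setup involves, the basic ingredients are a target pool of instances and a forwarding rule that spreads traffic across them. A rough sketch, with the pool, rule and instance names invented for illustration:

# Pool, rule and instance names are hypothetical
gcloud compute target-pools create www-pool --region=us-central1
gcloud compute target-pools add-instances www-pool --instances=www-1,www-2 --instances-zone=us-central1-b
gcloud compute forwarding-rules create www-rule --region=us-central1 --ports=80 --target-pool=www-pool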

And in case you were wondering from exactly where Google peers and serves CDN content, visit the Google Edge Network/Peering site and PeeringDB for more details. The peering infrastructure page has zoomable maps where you can see Google’s Edge PoPs and nodes.

Best practices

There’s also a wealth of documentation about best practices for Google Cloud Networking. I really like the Best Practices for Networking and Security section within the Best Practices for Enterprise Organizations document, and the DDoS Best Practices doc provides more useful ways to think about building a global service.

Another key concept to wrap your head around is Cloud Identity & Access Management (IAM). In particular, check out the Understanding Roles doc for its introduction to network- and security-specific roles. Service accounts play a key role here. Understanding Service Accounts walks you through the considerations, and Using IAM Securely offers some best practices checklists. Also, for some insight into where this all leads, check out Access Control for Organizations using IAM [Beta].
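
For a concrete feel for those network- and security-specific roles, granting one is a single gcloud call. The project ID and user below are made-up placeholders; roles/compute.networkAdmin is one of Compute Engine’s predefined networking roles.

# Hypothetical project and user, shown for illustration
gcloud projects add-iam-policy-binding my-project --member=user:alice@example.com --role=roles/compute.networkAdmin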

A little history of Google Cloud Networking

All this research about Google Cloud Networking may leave you wanting to know more about its history. I checked out the research papers referenced in the previously mentioned video Seamlessly Migrating your Networks to GCP and — warning — they’re deep, but they’ll help you understand the fundamentals of how Google Cloud Networking has evolved over the past decade, and how its highly distributed services deliver the performance and competitive pricing for which it’s known.

Google’s network-related research papers fall into two categories:

Cloud Networking fundamentals

Networking background

The Andromeda network architecture (source)

I hope this post is useful, and that these resources help you better understand the ins and outs of Google Cloud Networking. If you have any other good resources, be sure to share them in the comments.

Why bots are poised to disrupt the enterprise

The content below is taken from the original (Why bots are poised to disrupt the enterprise), to continue reading please visit the site. Remember to respect the Author & Copyright.

The proliferation of robots completing manual tasks traditionally done by humans suggests we have entered the machine automation age. And while nothing captures the imagination like self-directing machines shuttling merchandise around warehouses, most automation today comes courtesy of software bots that perform clerical tasks such as data entry.

Here’s the good news: far from a frontal assault on cubicle inhabitants, these software agents may eventually net more jobs than they consume, as they pave the way for companies to create new knowledge-domain and customer-facing positions for employees, analysts say.

The approach, known as robotic process automation (RPA), automates tasks that office workers would normally conduct with the assistance of a computer, says Deloitte LLP Managing Director David Schatsky, who recently published research on the topic. RPA’s potential will grow as it is combined with cognitive technologies to make bots more intelligent, ideally increasing their value to businesses. Globally, the RPA market will grow to $5 billion by 2020 from just $183 million in 2013, predicts Transparency Market Research.


Announcing the public preview of Azure Monitor

The content below is taken from the original (Announcing the public preview of Azure Monitor), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we are excited to announce the public preview of Azure Monitor, a new service making inbuilt monitoring available to all Azure users. This preview release builds on some of the monitoring capabilities that already exist for Azure resources. With Azure Monitor, you can consume metrics and logs within the portal and via APIs to gain more visibility into the state and performance of your resources. Azure Monitor provides you the ability to configure alert rules to get notified or to take automated actions on issues impacting your resources. Azure Monitor enables analytics, troubleshooting, and a unified dashboarding experience within the portal, in addition to enabling a wide range of product integrations via APIs and data export options. In this blog post, we will take a quick tour of Azure Monitor and discuss some of the product integrations.

Quick access to all monitoring tasks

With Azure Monitor, you can explore and manage all your common monitoring tasks from a single place in the portal. To access Azure Monitor, click on the Monitor tab in the Azure portal. You can find Activity logs, metrics, diagnostics logs, and alert rules as well as quick links to the advanced monitoring and analytics tools. Azure Monitor provides these three types of data – Activity Log, Metrics, and Diagnostics Logs.

Activity Log

Operational issues are often caused by a change in the underlying resource. The Activity Log keeps track of all the operations performed on your Azure resources, and you can use the Activity Log section in the portal to quickly search for and identify operations that may impact your application. Another valuable feature of the portal is the ability to pin Activity Log queries to your dashboard to keep tabs on the operations you are interested in; for example, you can pin a query that filters Error-level events and track their count from the dashboard. You can also perform instant analytics on the Activity Log via Log Analytics, part of Microsoft Operations Management Suite (OMS).

Metrics

With the new Metrics tab, you can browse all the available metrics for any resource and plot them on charts. When you find a metric that you are interested in, creating an alert rule is just a single click away. Most Azure services now provide out-of-the-box, platform-level metrics at 1-minute granularity and 30-day data retention, without the need for any diagnostics setup. The list of supported resources and metrics is available here. These metrics can be accessed via the new REST API for direct integration with 3rd party monitoring tools.

Blog_image1_metrics
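
If you want to pull those platform metrics programmatically rather than through the portal, the REST endpoint hangs off the target resource’s ID under the microsoft.insights provider. A hedged sketch using curl: the subscription, resource group and VM name are placeholders, you need a valid Azure AD bearer token in $TOKEN, and you should confirm the current api-version against the Azure Monitor REST reference.

# Placeholders throughout: substitute your own IDs and verify the api-version
RESOURCE="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
curl -s -H "Authorization: Bearer $TOKEN" "https://management.azure.com$RESOURCE/providers/microsoft.insights/metrics?api-version=2016-09-01"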

Diagnostics logs

Many Azure services provide diagnostics logs, which contain rich information about operations and errors that are important for auditing as well as troubleshooting purposes. In the new Diagnostic logs tab, you can manage diagnostics configuration for your resources and select your preferred method of consuming this data.

Blog_image2_diaglogs

Alerts & automated actions

Azure Monitor provides the data you need to troubleshoot issues quickly, but you also want to be proactive and fix issues before they impact your customers. With alert rules, you can get notified whenever a metric crosses a threshold, receiving email notifications or kicking off an Automation runbook script or webhook to fix the issue automatically. You can also send your own data to the Azure Monitor pipeline using the custom metrics and events APIs and create alert rules on it. With the ability to create alert rules on platform, custom, and app-level metrics, you now have more control over your resources. You can learn more about alert rules here.

Single monitoring dashboard

Azure provides you a unique single dashboard experience to visualize all your platform telemetry, application telemetry, analytics charts and security monitoring. You can share these dashboards with others on your team or clone a dashboard to build new ones.

Extensibility

The portal is a convenient way to get started with Azure Monitor. However, if you have a lot of Azure resources and want to automate the Azure Monitor setup, you may want to use a Resource Manager template, PowerShell, the CLI, or the REST API. Also, if you want to manage access permissions to your monitoring settings and data, look at the monitoring roles.

Product integrations

You may need to consume Azure Monitor data but want to analyze it in your favorite monitoring tool. This is where the product integrations come into play – you can route the Azure Monitor data to the tool of your choice in near real-time. Azure Monitor enables you to easily stream metrics and diagnostic logs to OMS Log Analytics to perform custom log search and advanced alerting on the data across resources and subscriptions. Azure Monitor metrics and logs for Web Sites and VMs can be easily routed to Visual Studio Application Insights, unlocking deep application performance management within the Azure portal.

The product integrations go beyond what you see in the portal. Our partners bring additional monitoring experiences, which you may wish to take advantage of. We are excited to share that there is a growing list of partner services available on Azure to best serve your needs. Please visit the supported product integrations list and give us feedback.

To wrap up, Azure Monitor helps you bring together the monitoring data from all your Azure resources and combine it with the monitoring tool of your choice to get a holistic view of your application. Here is a snapshot of a sample dashboard that we use to monitor one of our applications running on Azure. We are excited to launch Azure Monitor and are looking forward to the dashboards that you build. Review the Azure Monitor documentation to get started, and please keep the feedback coming.

Blog_image3_dashboard

Best Notepad++ Tips and Tricks you should use

The content below is taken from the original (Best Notepad++ Tips and Tricks you should use), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are plenty of text editors available for programmers, but many people choose Notepad++ as an alternative to Notepad since it is free, user-friendly and feature-rich. If you are not familiar with Notepad++, you should know that it can handle many languages and file types, including .html, .css, .php, .asp, .bash, .js, and more. Here are a few Notepad++ tips and tricks to get you started.

Best Notepad++ Tips and Tricks

1] Perform certain things automatically

This is probably Notepad++’s biggest time-saver, since it lets you repeat a task as many times as you like without doing it manually each time. You record a macro once and then play it back automatically. Let’s assume that you want to replace certain text in different files and save them in a particular format: you just record the whole process and replay it later whenever you need to perform that task. You can save as many macros as you want. To record a macro, just head over to the Macro section in the navigation menu of Notepad++.

2] Launch code in a particular browser

Let’s assume that you have written a few lines of code in HTML and CSS. Now, you want to check the look of that page without applying it on a live website. You have two options. First, you can save that code with the respective extension (here it is .html), and open the file in any web browser. Or, you can just launch the code in a particular browser without doing any of this. Just write down your code, select Run > Launch in Firefox/IE/Chrome/Safari. Now, the page will open directly in your desired web browser.

3] Change preferences

If you think the default interface of Notepad++ is boring and needs some customization, you can do that without any third-party software or plugins. It is possible to change the theme, font family, font size, font style, font weight, font color, background color, and more. If you have installed a font from a third-party source, you can still use it as your default font in Notepad++. To change the preferences, just click on Settings > Style Configurator. You will see a screen where you can choose everything mentioned earlier. Select your preference and place a tick-mark in the checkbox on the same page; otherwise, the change will not take effect.

4] Create and set own Notepad++ theme

If you do not like the default themes of Notepad++, you can make one according to your wish and set it as your default theme. The primary requirement is that you have to save the theme file with a .xml extension, and place it inside the following folder:

C:\Users\user_name\AppData\Roaming\Notepad++\themes

Don’t forget to replace user_name with your actual username. Having done this, go to Settings > Style Configurator. You will see the theme inside the Select Theme drop-down menu.

5] Open recently opened files quickly and change the number

Suppose you have a folder full of code files and you need to open a particular one. Navigating a long path every time takes a while. Instead, you can simply click on File and check your recently opened files; the list shows up to 15 files with their full paths. If you want to increase or decrease the number of recently opened files shown, open Settings > Preferences and, under Recent Files History, change the number.

6] Open file in tree view

open-file-in-tree-view-in-notepad



If you are developing a theme, you will obviously be working with more than one file, and it is tedious to open and close the different files in a folder one by one. To solve this problem, Notepad++ has an awesome feature called Folder as Workspace, which shows all the files and folders in a tree view. A sidebar on the left-hand side lets you open a particular folder and file. To use it, click on File > Open Folder as Workspace and choose the folder that you want to show in the tree view.

7] Open all files in a folder at once

If you want to open all the files in a folder at once in Notepad++, you can do two things. You can simply open a folder, select all the files and hit Enter. Or you can click on File > Open Containing Folder > Explorer, select the files and hit Enter. Both actions will perform the same task.

8] Find word or text in multiple files


Suppose you have made a mistake in writing a particular word; for instance, you have written ABC instead of XYZ. To find all the wrongly written words, you do not have to open and check one file at a time. Instead, open all the files at once using the guide mentioned above, press Ctrl + F, and go to the Find tab. Type what you want to find and hit the Find All in All Opened Documents button. The results appear at the bottom of your Notepad++ window, and from there you can jump to the particular file and fix the error.

9] Replace word or text in multiple files

replace-word-or-text-in-multiple-files-in-notepad

If you want to replace a particular word or piece of text with another in multiple files, open all the files in Notepad++. Press Ctrl + H, enter the word you want to replace and its replacement in the given fields, and click Replace All in All Opened Documents. To save all the files at once, press Ctrl + Shift + S.

10] Find changes side by side


Let’s assume that you have made a few changes in a particular file, or that you want two instances of a single file open side by side. To do this, open or create the file you want to compare or duplicate, then right-click on its tab and select Clone to Other View.

11] Make a file edit-proof

If you often press keys by mistake, here is a solution that lets you edit one file while making the other edit-proof when you have placed two files side by side. Right-click on the tab of the file you want to protect and select Read Only.

Bonus Tip: You can also access an FTP server from Notepad++ using the NppFTP plugin.

Hope you find these Notepad++ tips useful.