The MagPi 55 is out, with plenty about the Pi Zero W

The content below is taken from the original (The MagPi 55 is out, with plenty about the Pi Zero W), to continue reading please visit the site. Remember to respect the Author & Copyright.

Rob from The MagPi here! We’re still incredibly excited about the brand-new, wireless-enabled Raspberry Pi Zero W, and it’s in our latest issue, out now. Here’s a video of me talking about it.

Introducing Raspberry Pi Zero W

The Raspberry Pi Zero W, the new wireless-enabled Raspberry Pi, is out now! Rob from The MagPi, the official Raspberry Pi magazine, reveals the specifications, price, and more. Get a free Pi Zero W with a twelve-month print sub to The MagPi – http://magpi.cc/SubsNew. The subscription offer includes a free Raspberry Pi Zero W, an official case with three covers, and a cable bundle.

We have not just one, but two, big articles about the Raspberry Pi Zero W in issue 55 of The MagPi. Our Big Build feature teaches you how to make a modified PiGRRL handheld retro console, and you’ll also find a full ten-page breakdown of everything that’s cool and new with the Raspberry Pi Zero W.

As usual we have loads of other excellent articles in the magazine, from tutorials on how to create an Amazon Alexa-powered robot to reviews of the brand new version of Kodi.

Pi Zero W, back-ups, advanced GPIO, 3D modelling, and more. We think issue 55 is fabulous!

Get your copy
You can grab a copy of The MagPi 55 in stores now at WHSmith, Tesco, Sainsbury’s, and Asda. Alternatively you can order your copy online, or get it digitally via our app on Android and iOS. There’s even a free PDF of it as well.

We also have a new subscription offer to celebrate the new Raspberry Pi Zero W: grab a twelve-month subscription and you’ll get a Raspberry Pi Zero W absolutely free, along with a free official case and a bundle of adapter cables. Get yours online right now!


Free Creative Commons download
As always, you can download your copy of The MagPi completely free. Grab it straight from the issue page for The MagPi 55.

Don’t forget, though, that as with sales of the Raspberry Pi itself, all proceeds from the print and digital editions of the magazine go to help the Raspberry Pi Foundation achieve its charitable goals. Help us democratise computing!

Lastly, here’s a full zip of the code from this issue, to help you get off to a flying start with your projects. We hope you enjoy it!

The post The MagPi 55 is out, with plenty about the Pi Zero W appeared first on Raspberry Pi.

Linux hacker board has WiFi, BT, GbE, 8GB eMMC, and $30 price

The content below is taken from the original (Linux hacker board has WiFi, BT, GbE, 8GB eMMC, and $30 price), to continue reading please visit the site. Remember to respect the Author & Copyright.

FriendlyElec has launched a $30, open spec “NanoPi M1 Plus” SBC with a quad-A7 SoC, onboard wireless, 8GB eMMC, a 40-pin RPi interface, and a GbE port. FriendlyELEC (AKA FriendlyARM) has released a more feature-rich version of its community-backed, $15 NanoPi M1 SBC. The $30 NanoPi M1 Plus retains the 1.2GHz quad-core, Cortex-A7 Allwinner H3 […]

Announcing the Public Preview of the Azure Site Recovery Deployment Planner

The content below is taken from the original (Announcing the Public Preview of the Azure Site Recovery Deployment Planner), to continue reading please visit the site. Remember to respect the Author & Copyright.

With large enterprises deploying Azure Site Recovery (ASR) as their trusted Disaster Recovery solution to protect hundreds of virtual machines to Microsoft Azure, proper deployment planning before production rollout is critical. Today, we are excited to announce the Public Preview of the Azure Site Recovery Deployment Planner. This tool helps enterprise customers understand their on-premises networking requirements, the Microsoft Azure compute and storage requirements for successful ASR replication, and the requirements for test failover or failover of their applications. In the current public preview, the tool is available only for the VMware to Azure scenario.

  • The Deployment Planner can be run without having to install any ASR components in your on-premises environment.
  • The tool does not impact the performance of the production servers, as no direct connection is made to them. All performance data is collected from the VMware vCenter Server/VMware vSphere ESXi Server which hosts the production virtual machines.

What aspects does the ASR Deployment Planner cover?

As you move from a proof of concept to a production rollout of ASR, we strongly recommend running the Deployment Planner. The tool will help you answer the following questions:

Compatibility assessment

  • Which on-premises servers cannot be protected to Azure with ASR, and why?

Network bandwidth need vs. RPO assessment

  • How much network bandwidth is required to replicate the servers to meet the desired RPO?
  • How many virtual machines can be replicated to Azure in parallel to complete initial replication in a given time with available bandwidth?
  • What is the throughput that ASR will achieve on my provisioned network?
  • What RPO can be achieved for available bandwidth?
  • What is the impact on the desired RPO if lower bandwidth is provisioned?
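
To put rough numbers on the bandwidth and RPO questions above, here is a back-of-the-envelope sketch. It is purely illustrative and is not the calculation the Deployment Planner itself performs; the churn and link figures are made-up inputs.

```python
# Back-of-the-envelope replication planning. Illustrative only -- not the
# algorithm the ASR Deployment Planner uses; all figures below are invented.

def mbps_needed_for_churn(avg_churn_gb_per_hour: float, headroom: float = 0.3) -> float:
    """Bandwidth needed just to keep up with steady data churn, plus headroom.
    Falling below this means the replication backlog (and the RPO) grows."""
    churn_mbps = avg_churn_gb_per_hour * 8 * 1024 / 3600.0  # GB/h -> Mbit/s
    return churn_mbps * (1 + headroom)

def initial_replication_hours(total_disk_gb: float, bandwidth_mbps: float) -> float:
    """How long the first full copy of the protected disks takes on a given link."""
    return total_disk_gb * 8 * 1024 / bandwidth_mbps / 3600.0

# Example: 2 TB of used disk in total, churning 50 GB/hour, on a 200 Mbps link.
print(f"Keep-up bandwidth: ~{mbps_needed_for_churn(50):.0f} Mbps")
print(f"Initial replication: ~{initial_replication_hours(2048, 200):.1f} hours")
```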

Microsoft Azure infrastructure requirements

  • How many storage accounts need to be provisioned in Microsoft Azure?
  • Which type of Azure Storage account (standard or premium) should each protected virtual machine be placed in for the best application performance?
  • Which virtual machines can be replicated to a single storage account?
  • How many cores need to be provisioned in the Microsoft Azure subscription for successful test failover/failover?
  • What Microsoft Azure virtual machine size should be used for each of the on-premises servers to get optimal application performance during a test failover/failover?

On-premises infrastructure requirements

  • How many on-premises ASR Configuration Servers and Process Servers are needed?

Factoring future growth

  • How are all of the above factors affected when you account for possible future growth of the on-premises workloads as usage increases?

How does the ASR Deployment Planner work?

The ASR Deployment Planner has three main modes of operation:

  • Profiling
  • Report generation
  • Throughput calculation

Profiling

In this mode, you profile all the on-premises servers that you want to protect over a period of time, e.g. 30 days. The tool stores various performance counters like R/W IOPS, write IOPS, and data churn, and other virtual machine characteristics like number of cores, number/size of disks, number of NICs, etc., by connecting to the VMware vCenter Server/VMware vSphere ESXi Server where the virtual machines are hosted. Learn more about profiling.

Report Generation

In this mode, the tool uses the profiled data to generate a deployment planning report in Microsoft Excel format. The report has five sheets:

  • Input
  • Recommendations
  • Virtual machine to storage placement
  • Compatible VMs
  • Incompatible VMs

By default, the tool takes the 95th percentile of all the profiled performance metrics and includes a growth factor of 30%. Both of these parameters, the percentile calculation and the growth factor, are configurable. Learn more about report generation.
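
As a quick illustration of how those two defaults interact, the sketch below applies a 95th-percentile cut and a 30% growth factor to a set of invented write-IOPS samples; the real tool of course works on the counters it profiled.

```python
# Illustration of "take the 95th percentile, then add 30% growth headroom".
# The IOPS samples are invented; the tool uses the profiled counters instead.
import statistics

def recommended_capacity(samples, percentile=95, growth_factor=0.30):
    """Nth percentile of the profiled samples, plus growth headroom."""
    cuts = statistics.quantiles(samples, n=100, method="inclusive")  # 1st..99th
    return cuts[percentile - 1] * (1 + growth_factor)

write_iops_samples = [120, 135, 150, 180, 90, 200, 175, 160, 140, 155]
print(f"Size for ~{recommended_capacity(write_iops_samples):.0f} write IOPS")
```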


Throughput Calculation

In this mode, the tool finds the network throughput that can be achieved from your on-premises environment to Microsoft Azure for ASR replication. This will help you determine what additional bandwidth you need to provision for ASR replication. Learn more about throughput calculation.

With ASR’s promise of full application recovery on Microsoft Azure, thorough deployment planning is critical for both disaster recovery and migration scenarios where ASR is used. With the new ASR Deployment Planner, both brand-new deployments and existing deployments that are growing to protect or migrate more servers can get the best ASR replication experience and application performance when running on Microsoft Azure.

You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the ASR UserVoice to let us know what features you want us to enable next.

 

5 lessons from Amazon’s S3 cloud blunder – and how to prepare for the next one

The content below is taken from the original (5 lessons from Amazon’s S3 cloud blunder – and how to prepare for the next one), to continue reading please visit the site. Remember to respect the Author & Copyright.

According to internet monitoring platform Catchpoint, Amazon Web Service’s Simple Storage Service (S3) experienced a three hour and 39 minute disruption on Tuesday that had cascading effects across other Amazon cloud services and many internet sites that rely on the popular cloud platform.

“S3 is like air in the cloud,” says Forrester analyst Dave Bartoletti; when it goes down, many websites can’t breathe. But disruptions, errors and outages are a fact of life in the cloud. Bartoletti says there’s no reason to panic: “This is not a trend,” he notes. “S3 has been so reliable, so secure, it’s been the sort of crown jewel of Amazon’s cloud.”


What this week should be, though, is a wake-up call to make sure your cloud-based applications are ready for the next time the cloud hiccups. Here are five tips for preparing yourself for a cloud outage:

Don’t keep all your eggs in one basket

This advice will mean different things for different users, but the basic idea is that if you deploy an application or piece of data to only one point in the cloud, it will not be very fault tolerant. How highly available you want your application to be determines how many baskets you spread your workloads across. There are multiple options:

  • AWS recommends, at a minimum, spreading workloads across multiple Availability Zones (AZs). Each of the 16 regions that make up AWS is broken down into at least two, and sometimes as many as five, AZs. Each AZ is meant to be isolated from the other AZs in the same region. AWS provides low-latency connections between AZs in the same region, creating the most basic way to distribute your workloads.
  • For increased protection, users can spread their applications across multiple regions.
  • The ultimate protection would be to deploy the application across multiple providers, for example using Microsoft Azure, Google Cloud Platform or some internal or hosted infrastructure resource as a backup.

Bartoletti says different customers will have different levels of urgency for doing this. If you rely on the cloud to make money for your business, or it’s integral to productivity, you’d better make sure it’s fault tolerant and highly available. If you use it to back up files that aren’t accessed frequently, then you may be able to live with the occasional service disruption.

ID failures ASAP

One key to responding to a cloud failure is knowing when one happens. AWS has a series of ways to do this. One of the most basic is to use what it calls Health Checks, which provide a customized view of the status of AWS resources used by each account. Amazon CloudWatch can be configured to automatically track service availability, monitor log files, create alarms and react to failures. One important precursor to this working is having a thorough analysis of what “normal” behavior is so that the AWS cloud tools can detect “abnormal” behavior.
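
As an example of what that preconfiguration can look like, here is a minimal boto3 sketch that raises a CloudWatch alarm when an application metric departs from its baseline. The namespace, metric name, threshold, and SNS topic are placeholders, not values from this article.

```python
# Minimal CloudWatch alarm sketch (boto3). Namespace, metric, threshold and the
# SNS topic ARN are placeholders -- substitute values from your own baseline.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="app-5xx-above-baseline",
    Namespace="MyApp",                      # custom namespace (assumed)
    MetricName="Http5xxCount",              # custom metric (assumed)
    Statistic="Sum",
    Period=60,                              # one-minute buckets...
    EvaluationPeriods=3,                    # ...breached three times in a row
    Threshold=50.0,                         # the "abnormal" level for your app
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
    AlarmDescription="Fires when 5xx errors exceed the normal baseline",
)
```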

Once an error is identified, there is a range of domino-effect reactions that need to be preconfigured to respond to the situation (see above on multi-AZ, multi-region, or multi-cloud). Load balancers can be in place to redirect traffic, and backup systems can kick in if they’ve been set up to do so (see below).

Build redundant systems from the start

It will not be very useful to try to respond to an outage in real-time. Preparation before the outage will save you when it inevitably comes. There are two basic ways to build redundancy into cloud systems:

-Standby: When a failure occurs, the application automatically detects it and fails over to a backup, redundant system. In this scenario, the backup system can be off but ready to spin up when an error is detected. Alternatively, the standby backup can run idly in the background the entire time (this costs more but reduces failover time). The downside to these standby approaches is that there can be a lag between when an error is detected and when the failover system kicks in.

-Active redundancy: To (theoretically) avoid downtime users can architect their application to have active redundancy. In this scenario, the application is distributed across multiple redundant resources: When one fails, the rest of the resources absorb a larger share of the workload. A sharding technique can be used in which services are broken up into components. Say, for example, an application runs across eight virtual machine instances – those eight instances can be broken up into four groups of two each and traffic can be load balanced between them. If one shard goes down, the other three can pick up the traffic.
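
A toy sketch of that sharding idea is below: four shards of two instances each, with traffic for an unhealthy shard absorbed by the survivors. The instance IDs and the health-check callable are invented for illustration.

```python
# Toy illustration of sharded, actively redundant routing. When a shard's
# health check fails, its keys hash onto the remaining healthy shards.
import hashlib

SHARDS = {
    "shard-a": ["i-01", "i-02"],
    "shard-b": ["i-03", "i-04"],
    "shard-c": ["i-05", "i-06"],
    "shard-d": ["i-07", "i-08"],
}

def route(request_key: str, is_up) -> str:
    """Pick an instance for a request. `is_up(shard_name)` wraps your health checks."""
    healthy = [name for name in sorted(SHARDS) if is_up(name)]
    if not healthy:
        raise RuntimeError("no healthy shards available")
    digest = int(hashlib.md5(request_key.encode()).hexdigest(), 16)
    shard = healthy[digest % len(healthy)]
    instances = SHARDS[shard]
    return instances[digest % len(instances)]

# Example: shard-b is down; its traffic is spread over the other three shards.
print(route("user-42", is_up=lambda name: name != "shard-b"))
```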

Back data up

It’s one thing to have redundant systems; it’s another thing to back your data up. This would have been especially important in this week’s disruption because it first impacted Amazon’s most popular storage service, S3. AWS has multiple ways to natively back data up:

-Synchronous replication: A process in which an application only acknowledges a transaction (such as uploading a file to the cloud, or inputting information into a database) once that transaction has been replicated in a secondary location. The downside of this approach is that it can introduce latency while the application waits for the secondary replication to occur and for the primary system to get confirmation. When low latency is not a priority, though, this is fine.

-Asynchronous replication: This process decouples the primary node from the replicas, which is good for systems that need low latency write capabilities. Users should be willing to compromise some loss of recent transactions during failure in this scenario.

-Quorum-based replication: A combination of synchronous and asynchronous replication that sets a minimum number of replicas that must confirm a transaction before it is acknowledged.
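
To make the quorum idea concrete, here is a minimal sketch in which a write is acknowledged once a minimum number of replicas confirm it, with the rest catching up later. The in-memory replica class and the quorum size are invented stand-ins, not any particular AWS service.

```python
# Toy quorum-based write: acknowledge once `quorum` replicas have stored the
# record; any stragglers can be brought up to date asynchronously afterwards.

class InMemoryReplica:
    """Stand-in for a real storage node."""
    def __init__(self):
        self.log = []

    def store(self, record):
        self.log.append(record)

def quorum_write(record, replicas, quorum):
    """Return True as soon as `quorum` replicas have durably stored the record."""
    acks = 0
    for replica in replicas:
        try:
            replica.store(record)          # synchronous attempt
            acks += 1
        except IOError:
            continue                       # this replica catches up asynchronously
        if acks >= quorum:
            return True                    # enough copies exist to acknowledge
    return False                           # quorum not reached: reject or retry

replicas = [InMemoryReplica() for _ in range(3)]
print(quorum_write({"id": 1, "op": "order-created"}, replicas, quorum=2))  # True
```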

To determine how best to build redundant systems and back data up, customers should consider their desired recovery point objective (RPO) and recovery time objective (RTO).

Test your system

Why wait for an outage to occur to see if your system is resilient to failure? Test it beforehand. It may sound crazy, but the best cloud architects are willing to kill whole nodes, services, AZs and even regions to see if their application can withstand it. “You should constantly be beating up your own site,” Bartoletti says. Netflix has open source tools named Chaos Monkey and Chaos Gorilla, which are part of its Simian Army that can automatically kill certain internal systems to test their tolerance to errors. Do they work? This week, Netflix didn’t report any issues with its service being down.
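
As a flavour of how that kind of failure injection can be automated, here is a toy sketch that terminates one random instance from a tagged test fleet. It is not Netflix's Chaos Monkey; the tag filter and region are assumptions, and because it really does terminate an instance, it should only ever be pointed at a disposable test environment.

```python
# Toy chaos test: kill one random instance from a tagged test fleet, then let
# your monitoring and failover (see above) prove the application survives.
# NOT Netflix's Chaos Monkey; the tag and region are assumptions.
import random
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["chaos-test"]},   # assumed tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instances:
    victim = random.choice(instances)
    print(f"Terminating {victim} to test fault tolerance")
    ec2.terminate_instances(InstanceIds=[victim])
else:
    print("No running chaos-test instances found")
```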

For more information related to AWS best practices on architecting for fault tolerance, check out this AWS Whitepaper.


Microsoft employees donate $650 million in cash, services and software

The content below is taken from the original (Microsoft employees donate $650 million in cash, services and software), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft formed Microsoft Philanthropies a little over a year ago with a plan to donate money, time, cloud computing services and software around the globe. In its first year, it has done a lot of that. 

Mary Snapp, corporate vice president of Microsoft Philanthropies, provided an update to the program’s progress after its first year, and it’s impressive. The company’s contributions to various nonprofits and schools include donations worth $465 million to 71,000 organizations and more than $30 million in technology and cash donations to organizations serving refugees and displaced people. Plus, company employees raised $142 million for 19,000 nonprofits and schools. 

Approximately 74 percent of Microsoft employees have given money and/or time to a philanthropic cause. And since it began raising money from employees for charitable groups in 1983, Microsoft employees have given more than $1.5 billion. 

Through its YouthSpark education program, a global initiative to increase access for all youth to learn computer science, the company provided more than $23 million through 142 cash grants to organizations in 58 countries. The company has made a commitment of $75 million over three years to YouthSpark. 

For refugees, Microsoft employees have donated $30 million to organizations such as Mercy Corps, CARE, the International Rescue Committee, and NetHope to aid displaced people. They are also providing technology to SOS Children’s Village International, a nonprofit that cares for displaced children and their families. 

Donating cloud computing resources

Last June, the company promised to donate over $1 billion worth of cloud computing resources to 70,000 nonprofits worldwide over three years, and it is making good on that promise. The company has extended access to Azure to 350 scientists in 2016 on top of 600 research papers being worked on already. 

Additionally, Microsoft’s Minecraft coding tutorial at the Computer Science Education week engaged 15 million people in 119 countries. 

There is also some volunteerism going on through the Technology Education and Literacy in Schools (TEALS) program. Started by a Microsoft employee in 2009, TEALS engaged 750 volunteers from more than 400 companies to help bring computer science education to students in 225 U.S. high schools.

Says Snapp: 

“I’m proud of what we achieved at Microsoft Philanthropies in our first year. More than that, I am inspired by the impact that the nonprofits and researchers we support are having as they work to make the world a better place. While we made good progress, one thing is very clear to me—at Microsoft Philanthropies, we must do more.” 

In 2017, Microsoft plans to increase its current efforts, including initiatives in education, increase its support for humanitarian action, and work to make technology more accessible for people living with disabilities. It will also support new technology training opportunities for in-demand jobs that don’t require a four-year college degree but do require learning beyond high school. 

Good for them.


Five Essentials for Making Your Homemade Videos Look Better

The content below is taken from the original (Five Essentials for Making Your Homemade Videos Look Better), to continue reading please visit the site. Remember to respect the Author & Copyright.

Shooting decent video is more than pointing a camera at something and hitting record. If you want to transform your videos from mediocre to great, here’s what you need to work on.

Read more…

IoT Device Pulls Its Weight in Home Brewing

The content below is taken from the original (IoT Device Pulls Its Weight in Home Brewing), to continue reading please visit the site. Remember to respect the Author & Copyright.

The iSpindel floating in a test solution.

Brewing beer or making wine at home isn’t complicated, but it does require attention to detail and a willingness to measure and sanitize things multiple times, particularly when tracking the progress of fermentation. This job has gotten easier thanks to the iSpindel project: an ESP8266-based IoT device intended as a DIY alternative to a costly commercial solution.

Hydrometer [Source: grapestompers.com]

Tracking fermentation normally involves a simple yet critical piece of equipment called a hydrometer (shown left), which measures the specific gravity or relative density of a liquid. A hydrometer is used by winemakers and brewers to determine how much sugar remains in a solution, therefore indicating the progress of the fermentation process. Using a hydrometer involves first sanitizing all equipment. Then a sample is taken from the fermenting liquid, put into a tall receptacle, the hydrometer inserted and the result recorded. Then the sample is returned and everything is cleaned. [Editor (and brewer)’s note: The sample is not returned. It’s got all manner of bacteria on/in it. Throw those 20 ml away!] This process is repeated multiple times, sometimes daily. Every time the batch is opened also increases the risk of contamination.

To replace this process, the iSpindel measures specific gravity and temperature regularly and hands-free. The device consists of a plastic tube, a 3D printed raft, an IMU for measuring the angle at which the tube floats, a temperature sensor, a rechargeable battery, and a Wemos D1 mini (ESP8266EX based) microcontroller. The inclination angle of the floating device changes in relation to the device’s buoyancy, and therefore in relation to the sugar content of the fermenting liquid.
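
The conversion itself is straightforward: the IMU gives a tilt angle, and a per-device calibration curve (typically a polynomial fitted against reference hydrometer readings) maps that angle to specific gravity. Here is a rough sketch of the idea; the calibration coefficients and the sample reading are invented, since every build needs its own calibration.

```python
# Tilt-to-gravity conversion in the spirit of the iSpindel. The calibration
# coefficients are invented; a real device is calibrated against hydrometer
# readings taken in sugar solutions of known gravity.
import math

def tilt_degrees(ax: float, ay: float, az: float) -> float:
    """Tilt of the tube, derived from the raw accelerometer axes."""
    return math.degrees(math.atan2(math.sqrt(ay * ay + az * az), ax))

def specific_gravity(tilt: float, a=0.000013, b=-0.0006, c=1.002) -> float:
    """Quadratic calibration curve: specific gravity as a function of tilt."""
    return a * tilt * tilt + b * tilt + c

angle = tilt_degrees(0.62, 0.10, 0.77)   # example raw reading (made up)
print(f"tilt = {angle:.1f} deg, SG ~ {specific_gravity(angle):.3f}")
```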

This is a clever DIY solution that hits all the right notes and takes advantage of all the right elements. The plastic tube is easily sealed and easy to keep clean. The device itself has no effect on the fermenting process, the battery is more than sufficient to monitor fermentation of a batch from start to finish, the sensors give readings every bit as accurate as a properly used manual hydrometer, and the wireless capabilities are used to transmit data from a sealed environment.

Compare this device to this DIY sensor suite for wine monitoring from 2010, which was originally envisioned as a self-contained floating probe but ended up a two-part device. It’s amazing what’s available for hobbyist use today compared to even just a few short years ago.

Thanks to [janniz] for the tip!

[Image source for hydrometer: grapestompers.com]

Filed under: Beer Hacks, misc hacks

The OpenStack Summit is returning to Vancouver in May 2018

The content below is taken from the original (The OpenStack Summit is returning to Vancouver in May 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

Back by popular demand, the OpenStack Summit is returning to Vancouver, Canada from May 21-24, 2018. Registration, sponsorship opportunities and more information for the 17th OpenStack Summit will be available in the upcoming months. 

Can’t wait until 2018? Brush up on your OpenStack skills in 2017 by registering to attend the OpenStack Summit Boston, May 8-11 and marking your calendar for the OpenStack Summit Sydney, November 6-8.

For news on new and upcoming OpenStack Summits, visit http://bit.ly/2lvRTuN

Photos from the 2015 OpenStack Summit Vancouver

 

 

 

Using On-Premises Azure Backup Instant File Recovery

The content below is taken from the original (Using On-Premises Azure Backup Instant File Recovery), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this post, I will show you how to use Instant File Recovery with the MARS agent to restore files from Azure Backup.

 

 

Before Instant File Recovery

The Azure Backup MARS agent had a very traditional file restoration process before the introduction of Instant File Recovery. You selected the point in time you wanted to restore a file from, browsed the backup store using a dialog box in the restore wizard, and then committed to restoring a file or folder. This meant that you had to restore something before seeing if it was what you wanted to restore.

Imagine your boss calls up to ask for a file to be restored that someone deleted some time ago … how far back do you go? Do you go back one day? Two? Thirty? What if the file was corrupted sometime in the past but no one knows when; how far do you go back to restore the file? How many restores do you have to do to get the file back and keep your boss from taking their anger out on you?

Instant File Recovery

Microsoft made restores easier with Instant File Recovery, which you can use with MARS agent version 2.0.9062.0 or later and with backups of any age completed to a recovery services vault. Click Recover Data in the Microsoft Azure Backup console to start the Recover Data Wizard.

The first dialog box asks if you want to restore data from the current machine or another machine; select the appropriate option.

The next screen asks if you want to restore Individual Files And Folders or if you want to restore an entire volume. Choose the Individual Files And Folders option to use Instant File Recovery. The agent will retrieve the available backup data for the machine from the recovery services vault. You can select which volume you want to restore from in the Select Volume And Date screen; the available recovery points for the selected volume are retrieved from Azure. The dialog updates itself to present you with a selection of available recovery points.

Choose a date with a recovery point (bold in the calendar), select a time from when the recovery point was created, and click Mount.

Choose a restore point [Image Credit: Aidan Finn]


And this is where things change. Instead of browsing a representation of the file system and being locked into restoring a file or folder, the recovery point in Azure is remotely mounted by your MARS agent using iSCSI. This might sound like it’ll be complicated, but the recovery point just appears as another volume in File Explorer.

Browsing an Azure Backup recovery point using Instant File Recovery [Image Credit: Aidan Finn]


Restoring a file is easy:

  1. Browse through the folder structure to find the file if it is there.
  2. Open the file directly from Azure, without restoring it, to see if it’s what you want and that it isn’t corrupt.
  3. Copy the file to wherever you want to, using the same copy/paste methods you’ve been using for years.

The mount will stay active until either:

  • You click unmount in the Azure Backup console.
  • 6 hours pass and the mount is automatically disconnected.

This means that you can go back in and get more files from the same recovery point if you need them, without going through the wizard again, saving you lots of time for complex recoveries.

The post Using On-Premises Azure Backup Instant File Recovery appeared first on Petri.

South West Show Report

The content below is taken from the original (South West Show Report), to continue reading please visit the site. Remember to respect the Author & Copyright.

The 2017 South-West Show took place at the Webbington Hotel, its regular home. It was a bright, sunny day (at least to start with) and attendance seemed good. The final count is still awaited, and there was also a questionnaire for visitors to get information and suggestions. If you were not one of those visitors, here is what you missed.

The very quick summary
No major new machines but lots of interesting new software and hardware releases and a good, upbeat atmosphere. It was a great day and R-Comp/Orpheus did a great job.

The Talks
There were 4 talks, this year in their own little corner. All talks were recorded so should hopefully appear soon. I have summarized them with the details of their stands below.

CJE Micro’s
CJE Micro’s had their usual large range of hardware and software, and had several ARM machines set up to demo for customers.

In their talk, CJE Micro’s explained that their customers include both users of ‘classic’ items as well as more cutting-edge users. They have now been able to source a new supply of serial mice, a previously elusive and expensive creature. They also mentioned the recent !PhotoDesk release from London and the new drawing tablet (which should work on almost anything).

They are always interested in second-hand items and will be going on a trip to collect items from Oxford next month. They will probably take a route via any other locations along the way (London and Southampton were mentioned), so let them know if you have any previously-cherished and now surplus items taking up space that you would like them to take away.

R-Comp
R-Comp had their usual range of Windows and ARM machines, including the new RiscBook Go (a nice compromise between size, battery life, and cost). They released a new RAID solution to provide a non-technical and much cheaper way of keeping your machine backed up. There are some new, MUCH faster drivers for the ARMX6 networking operations (which are free updates for existing customers) and a new solution to partition much larger hard drives for RISC OS.

In their talk, R-Comp talked about the new software releases. Many of their new developments are the result of the requirements of commercial clients, who provide the funding. The partition software is a good example. They also did a demo of the new RiscBook Go, which has a removable screen if you want to use it as a Windows touch tablet. Andrew recommended a Bluetooth mouse if you want to use RISC OS in this mode.

Orpheus Internet
Orpheus Internet were there to talk about their internet and broadband packages (and give out free sweets). Richard was also moonlighting as announcer, technical show trouble-shooter, and show organizer.

ROOL
ROOL had their range of software, hardware, books and badges. This included the last few full-size SSD cards, and new releases of Pico.

ROOL’s talk provided an update on recent events. ROOL is now 10 years old (which is as long as the time from the first RISC OS release by Acorn until their final 3.70 release). It was formed to save RISC OS from obscurity after development stopped in 2003-6. The highlights of the last year have been: the JPEG update bounty, a new DDE with updates to the tools which now includes the BASIC Compiler (which you can get on its own for free from www.riscos.fr), and a new Pico release. Testing on another bounty finished late last night, so watch their news pages in early March.

There is still lots to do and RISC OS relies on everyone’s help. So they highlighted ways to get involved – contribute to the forum, try the nightly build, and donate money to help them cover the costs of being at shows and hosting the software.

The last user guide was 21 years ago, so it is out of date and needs updating/reviewing, as a few things have happened since then (like the Internet). There are now only 23 chapters left to go and it is not a technically challenging task.

There are 3 new bounties – extending the clipboard, TCP/IP (improving SSL and security, and adding IPv6 and WiFi), and a full sync of the USB stack with the NetBSD code tree.

Rob Sprowson was reasonably confident that FontPro Dir would be available next week. He is waiting for the manuals, which he was notified had shipped at 3am this morning.

The Pi3 RISC OS release is still waiting on some final items of software to be fixed before it can be released.

John Norris (Bell ringing) & Tasty Treats
John was talking very enthusiastically about all aspects of bell-ringing and offering both software and hardware solutions for bell-ringers to learn and practise. It is clearly a lot harder than it looks, and he was explaining how they ‘ring in the changes’. Tasty Treats were offering lots of enticing jams and spreads.

Drag & Drop
The latest release of the magazine was available along with a new set of fonts for RISC OS (both will be reviewed by Iconbar in the next few weeks).

David Snell
David was demonstrating ProCad and WebWonder, and John was showing lots of clients how to get the most from these powerful tools.

Organizer
The dynamic duo were there to show off the 2.26 version released at the London Show and to ask what users would like to see next in the software.

Archive
Jim Nagel had back copies of the magazine and details of the new DVD. The latest release was held back so that it could include all the SW Show news and updates.

Soft Rock Software
Vince had the full selection of software and his very cute RiscPC-shaped case for a Pi. He was also promoting Riscository, where he tracks all the news fit (and unfit) to print in the RISC OS world (always my first port of call when I log onto the Internet), as well as the Bristol user group.

Steve Fryatt
Steve had new releases of his software (including Cash Book) and was showing users all the little tweaks in the software.

Chris Hall
Chris had a transparent RISC OS box which was receiving GPS data and displaying it on a little OLED screen. He was also showing how you could use this data for all sorts of other applications.

ROUGOL
Brian was talking about the group’s meetings and upcoming events. If you are in the South-East, their London venue is very easy to reach (and serves excellent curry).

AmCog Games
AmCog Games have a growing range of games, including Mop Tops (a Lemmings-style game with lots of humour), Xeroid (a flying game), Overlord (a shoot-em-up), and Legends of Magic (a 3D isometric adventure which also has a game editor and a 70,000-word novel to accompany the game).

Several of the games are also available in French and German. They are written in BASIC and run at 800×600 resolution. Users are encouraged to dissect and reuse the code. AmCog may well have the makings of a very good Games library….

All their games include extensive original music tracks, and AmCog have also been working on a new sound system for RISC OS, which borrows ideas from many places including the BBC and C64 sound chips. The theatre talk focussed on how easy it was to use all these features.

Ident Computer
Tom had his range of Ident Computers for the Raspberry Pi. He has put a lot of work into improving the configuration software and making it easier to set up and use. He has lots of developments in the pipeline which will be revealed in due course. Stay tuned…

RiscOSBits
RiscOSBits were demonstrating their growing range of devices that plug into RISC OS machines or provide extra features to RISC OS. There was also a prototype card to give the Titanium wireless internet.

Show website

I took pictures all day and you can see these on Flickr


The best password managers

The content below is taken from the original (The best password managers), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Joe Kissell

This post was done in partnership with The Wirecutter, a buyer’s guide to the best technology. When readers choose to buy The Wirecutter’s independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article here.

If you’re not using a password manager, start now. As we wrote in Password Managers Are for Everyone—Including You, a password manager makes you less vulnerable online by generating strong random passwords, syncing them securely across your browsers and devices so they’re easily accessible everywhere, and filling them in automatically when needed. After 15 hours of research and testing, we believe that LastPass is the best password manager for most people. It has all the essential features plus some handy extras, it works with virtually any browser on any device, and most of its features are free.

Who should get this

Everyone should use a password manager. The things that make strong passwords strong—length, uniqueness, variety of characters—make them difficult to remember, so most people reuse a few easy-to-remember passwords everywhere they go online. But reusing passwords is dangerous: If just one site suffers a security breach, an attacker could access your entire digital life: email, cloud storage, bank accounts, social media, dating sites, and more. And if your reused password is weak, the problem is that much worse, because someone could guess your password even if there isn’t a security breach.

If you have more than a handful of online accounts—and almost everyone does—you need a good password manager. It enables you to easily ensure that each password is both unique and strong, and it saves you the bother of looking up, remembering, typing, or even copying and pasting your passwords when you need them. If you don’t already use a password manager, you should get one, and LastPass is a fabulous overall choice for most users.

How we picked and tested

Although I’d already spent countless hours testing password managers in the course of writing my book Take Control of Your Passwords, for this article I redid most of the research and testing from scratch, because apps in this category change constantly—and often dramatically.

I looked for tools that do their job as efficiently as possible without being intrusive or annoying. A password manager should disappear until you need it, do its thing quickly and with minimum interaction, and require as little thought as possible (even when switching browsers or platforms). And the barrier to entry should be low enough—in terms of both cost and simplicity—for nearly anyone to get up to speed quickly.

I began by ruling out the password autofill features built into browsers like Chrome and Firefox—although they’re better than nothing, they tend to be less secure than stand-alone apps, and they provide no way to use your stored passwords with other browsers.

Next I looked for apps that support all the major platforms and browsers. If you use only one or two platforms or browsers, support for the others may be irrelevant to you, but broad compatibility is still a good sign. This means, ideally, support for the four biggest platforms—Windows, macOS, iOS, and Android—as well as desktop browser integration with at least Chrome and Firefox, plus Safari on macOS.

I excluded apps that force you to copy and paste passwords into your browser rather than offering a browser extension that lets you click a button or use a keystroke to fill in your credentials. And, because most of us use more than one computing device, the capability to sync passwords securely across those devices is essential.

After narrowing down the options, I tested eight finalists: 1Password, Dashlane, Enpass, Keeper, LastPass, LogmeOnce, RoboForm, and Sticky Password.

I tested for usability by doing a number of spot checks to verify that the features described in the apps’ marketing materials matched what I saw in real life. I set up a simple set of test forms on my own server that enabled me to evaluate how each app performed basic tasks such as capturing manually entered usernames and passwords, filling in those credentials on demand, and dealing with contact and credit card data.

If my initial experiences with an app were good, I also tried that app with as many additional platforms and browsers as I could in order to form a more complete picture of its capabilities. I did portions of my testing on macOS 10.12, Windows 10, Chromium OS (as a stand-in for Chrome OS), iOS 10, Apple Watch, and Android.

Our pick

You can access LastPass in a browser extension, on the Web, or in a stand-alone app.

Before I get to what’s great about LastPass, a word of context: LastPass, Dashlane, and 1Password are significantly better than the rest of the field. I suspect most people would be equally happy with any of them. What tipped the scales in favor of LastPass was the company’s announcement on November 2, 2016, that it was making cross-device syncing (formerly a paid feature) available for free. Although there’s still a Premium subscription that adds important features (more on that in our full guide), this change makes LastPass a no-brainer for anyone who hasn’t yet started using a password manager. Even its $12/year premium tier is much cheaper than 1Password or Dashlane’s paid options.

LastPass has the broadest platform support of any password manager I saw. Its autofill feature is flexible and nicely designed. You can securely share selected passwords with other people; there’s also an Emergency Access feature that lets you give a loved one or other trusted person access to your data. An Automatic Password Change feature works on many sites to let you change many passwords with one click, and a Security Challenge alerts you to passwords that are weak, old, or duplicates, or that go with sites that have suffered data breaches.

LastPass works on macOS, Windows, iOS, Android, Chrome OS, Linux, Firefox OS, Firefox Mobile, Windows RT, Windows Phone—even Apple Watch and Android Wear smartwatches. (Sorry, no BlackBerry, Palm, or Symbian support.) It’s available as a browser extension for Chrome, Firefox, Safari, Internet Explorer, and Microsoft Edge, and it has desktop and mobile apps for various platforms.

Upgrade pick for Apple users

1Password offers Mac and iOS users features not found in LastPass, plus a more-polished interface.

If you’re a Mac, iPhone, and/or iPad user with a few extra bucks, and you’d like even more bells and whistles in your password manager, 1Password is well worth a look. 1Password has a more polished and convenient user interface than either LastPass or Dashlane. It’s also a little faster at most tasks; it has a local storage option if you don’t trust your passwords to the cloud; it gives you more options than LastPass for working with attached files; and it can auto-generate one-time tokens for many sites that use two-step verification—LastPass requires a separate app for this. 1Password is, however, more expensive than LastPass and doesn’t work on as many platforms: Windows and Chromebook users, especially, are better off with LastPass.

This guide may have been updated by The Wirecutter. To see the current recommendation, please go here.

Note from The Wirecutter: When readers choose to buy our independently chosen editorial picks, we may earn affiliate commissions that support our work.

An OSI Model for Cloud

The content below is taken from the original (An OSI Model for Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

In 1984, after years of having separate thoughts on networking standards, the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT) jointly published the Open Systems Interconnection Reference Model, more commonly known as the OSI model. In the more than three decades that have passed since its inception, the OSI model has given millions of technologists a frame of reference to work from when discussing networking, which has worked out pretty well for Cisco.

Announcing new Azure Functions capabilities to accelerate development of serverless applications

The content below is taken from the original (Announcing new Azure Functions capabilities to accelerate development of serverless applications), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ever since the introduction of Azure Functions, we have seen customers build interesting and impactful solutions using it.  The serverless architecture, ability to easily integrate with other solutions, streamlined development experience and on-demand scaling enabled by Azure Functions continue to find great use in multiple scenarios.

Today we are happy to announce preview support for some new capabilities that will accelerate development of serverless applications using Azure Functions.

Integration with Serverless Framework

Today we’re announcing preview support for Azure Functions integration with the Serverless Framework. The Serverless Framework is a popular open source tool which simplifies the deployment and monitoring of serverless applications in any cloud. It helps abstract away the details of the serverless resources and lets developers focus on the important part – their applications. This integration is powered by a provider plugin that now makes Azure Functions a first-class participant in the Serverless Framework experience. Contributing to this community effort was a very natural choice, given the origin of Azure Functions was in the open-source Azure WebJobs SDK.

You can learn more about the plugin in the Azure Functions Serverless Framework documentation and in the Azure Functions Serverless Framework blog post. 

Azure Functions Proxies

Functions provide a fantastic way to quickly express actions that need to be performed in response to some triggers (events). That sounds an awful lot like an API, which is what several customers are already using Functions for. We’re also seeing customers starting to use Functions for microservices architectures, with a need for deployment isolation between individual components.

Today, we are pleased to announce the preview of Azure Functions Proxies, a new capability that makes it easier to develop APIs using Azure Functions. Proxies let you define a single API surface for multiple function apps. Any function app can now define an endpoint that serves as a reverse proxy to another API, be that another function app, an API app, or anything else.

You can learn more about Azure Functions Proxies by going to our documentation page and in the Azure Functions Proxies public preview blog post. The feature is free while in preview, but standard Functions billing applies to proxy executions. See the Azure Functions pricing page for more information.

Integration with PowerApps and Flow

PowerApps and Flow are services that enable business users within an organization to turn their knowledge of business processes into solutions. Without writing any code, users can easily create apps and custom automated workflows that interact with a variety of enterprise data and services. While they can leverage a wide variety of built-in SaaS integrations, users often find the need to incorporate company-specific business processes. Such custom logic has traditionally been built by professional developers, but it is now possible for business users building apps to consume such logic in their workflows.

Azure App Service and Azure Functions are both great for building organizational APIs that express important business logic needed by many apps and activities.  We’ve now extended the API Definition feature of App Service and Azure Functions to include an "Export to PowerApps and Microsoft Flow" gesture. This walks you through all the steps needed to make any API in App Service or Azure Functions available to PowerApps and Flow users. To learn more, see our documentation and read the APIs for PowerApps and Flow blog post.

We are excited to bring these new capabilities into your hands and look forward to hearing from you through our forums, StackOverFlow, or Uservoice.

How Exchange Online Protection Dynamic Delivery Works Inside Office 365

The content below is taken from the original (How Exchange Online Protection Dynamic Delivery Works Inside Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.


ATP Plays Safe with Attachments

Microsoft introduced the Safe Attachments feature as part of its Advanced Threat Protection (ATP) offering in 2015. ATP is an option for Exchange Online Protection (EOP). It is included in the Office 365 E5 plan and can be licensed as an add-on for $2/user per month for other Office 365 plans. Now Safe Attachments provides the option to scan inbound attachments dynamically and allow users access to message bodies while the scan proceeds. This feature is called Dynamic Delivery.

 

 

The Problem Lurking in Email

The idea behind Safe Attachments is simple. We know that attachments are a prime transmission vector for malware. This has been true since the first email-transmitted attacks, like the famous “I Love You” virus, appeared in 2000.

The majority of email messages are safe in that anti-malware engines are able to detect that their content does not include anything that could damage the recipient. But because malware authors constantly alter their attack techniques in an attempt to bypass anti-malware blocks, the danger exists that items might contain something that is dangerous but cannot be detected because the attack vector has never been seen before. This content might belong to a so-called Day Zero attack.

Messages and attachments that do not have a known malware signature are deemed unsafe. ATP routes these messages to a special hypervisor environment where a variety of techniques are used to test the content. If everything checks out, ATP releases the item back to the Exchange transport system for delivery to the end user.

The basic Safe Attachments feature works, but the need to spin up special test servers in the hypervisor environment to probe suspicious content can result in email delays. Some tenants reported that they experienced delays of between 10 and 15 minutes, depending on the load on Office 365 and EOP. A good case can be made that it is better to be safe than sorry when Day Zero attacks erupt in the wild, but users want their email now. And they want their email to be secure. It’s a hard balancing act.

Dynamic Delivery Steps In

Dynamic Delivery builds on the concept of Safe Attachments while recognizing that, in most cases, unrecognized content lurks in attachments rather than the messages to which they are attached. If a message is OK, it is possible to deliver it immediately. Meantime, the attachment is shuttled off to the side to be checked and when everything checks out, the attachment is released to reconstitute the full message.

The advantage of the approach is obvious. Recipients get to see the safe part of a message immediately instead of having to wait for EOP to validate the entire content. And often it is enough for a recipient to see the text of a message to understand its importance and know if they must take action.

Setting Up Dynamic Delivery

To implement Dynamic Delivery, you first need the necessary Office 365 licenses. Moving past that obvious requirement (Office 365 does nothing unless licenses are in place), the next step is to enable Safe Attachments scanning by policy. Go to the Threat Management section of the Security and Compliance Center and access Safe Attachments. You might not have a policy in place already; if not, create a new policy. You can then enable Dynamic Delivery in the policy (Figure 1) and decide the mailboxes to which you want the policy to apply. You can apply a policy to all recipients in one or more domains, to members of a distribution group, or even to individual people. You can also decide to exclude certain recipients from the policy (they are the people who only receive “nice” attachments). And multiple Safe Attachments policies can be active within a tenant.


Figure 1: Enabling Dynamic Delivery in a Safe Attachments Policy (image credit: Tony Redmond)

The User View

The Safe Attachments policy becomes active as soon as it is saved. Users will be unaware that anything has changed until the first time that ATP intercepts a suspicious message en route to their mailbox. When this happens, ATP swaps the suspicious attachment for a special .eml attachment to inform the user what is happening (Figure 2).


Figure 2: What a user sees when an attachment is being scanned (image credit: Tony Redmond)

In the background, ATP is scanning the suspicious content as before. The difference is that the user knows what is happening and can access and deal with any “good content” immediately. And when the scan is complete, ATP swaps its message out for the original, now-verified and passed, attachment. In my experience, the delay between the arrival of a new message and ATP updating the message with a passed attachment is usually two to three minutes, depending on the number of attachments. In short, it is just enough time for the recipient to read the cover note and begin to consider opening the attachment.

One thing that might disturb users is that messages are apparently delivered twice to their Inbox. The first contains the ATP notice; the second contains the real content. You get two notifications on mobile devices… But after a while, you get used to what’s happening.


A Step Forward

The war against malware will not stop anytime soon. Too many victims remain for malware authors to exploit, usually at great profit for the miscreants and loss for the unsuspecting. It is good to see Microsoft evolving ATP to speed up delivery to users while retaining the goodness of attachment checking. I rather like dynamic delivery. It is a good addition to the anti-malware mix.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post How Exchange Online Protection Dynamic Delivery Works Inside Office 365 appeared first on Petri.

Why DRaaS is a better defense against ransomware

The content below is taken from the original (Why DRaaS is a better defense against ransomware), to continue reading please visit the site. Remember to respect the Author & Copyright.

Recovering from a ransomware attack doesn’t have to take days

Image by Eric E Castro

It’s one thing for a user’s files to get infected with ransomware; it’s quite another to have a production database or mission-critical application infected. But restoring these databases and apps from a traditional backup solution (appliance, cloud or tape) can take hours or even days, which can cost a business tens or hundreds of thousands of dollars. Dean Nicolls, vice president of marketing at Infrascale, shares some tangible ways disaster recovery as a service (DRaaS) can pay big dividends and quickly restore systems in the wake of a ransomware attack.


Commvault and CloudWave Bring Streamlined Cloud Backup and Recovery to Healthcare Market with Powerful New Service Offering

The content below is taken from the original (Commvault and CloudWave Bring Streamlined Cloud Backup and Recovery to Healthcare Market with Powerful New Service Offering), to continue reading please visit the site. Remember to respect the Author & Copyright.

Commvault, a global leader in enterprise backup, recovery, archive and the cloud for the healthcare market, today announced that its Commvault Data… Read more at VMblog.com.

Reverse Engineering Enables Slick Bluetooth Solution for Old Car Stereo

The content below is taken from the original (Reverse Engineering Enables Slick Bluetooth Solution for Old Car Stereo), to continue reading please visit the site. Remember to respect the Author & Copyright.


Those of us who prefer to drive older cars often have to make sacrifices in the entertainment system department to realize the benefits of not having a car payment. The latest cars have all the bells and whistles, while the cars of us tightwads predate the iPod revolution and many lack even an auxiliary input jack. Tightwads who are also hackers often remedy this with conversion projects, like this very slick Bluetooth conversion on a Jeep radio.

There are plenty of ways to go about piping your favorite tunes from a phone to an old car stereo, but few are as nicely integrated as [Parker Dillmann]’s project. An aftermarket radio of newer vintage than the OEM stereo in his 1999 Jeep would be one way to go, but there’s no sport in that, and besides, fancy stereos are easy pickings from soft-top vehicles. [Parker] was so determined to hack the original stereo that he bought a duplicate unit off eBay so he could reverse engineer it on the bench. What’s really impressive is the way [Parker] integrates the Bluetooth without any change to OEM functionality, which required a custom PCB to host an audio level shifter and input switch. He documents his efforts very thoroughly in the video after the break, but fair warning of a Rickroll near the end.

So many of these hacks hijack the tape deck or CD input, but thanks to his sleuthing and building skills, [Parker] has added functionality without sacrificing anything.

MIT researchers built an energy-sipping power converter

The content below is taken from the original (MIT researchers built an energy-sipping power converter), to continue reading please visit the site. Remember to respect the Author & Copyright.

Researchers from MIT’s Microsystems Technologies Laboratories have built a new power supply system designed specifically for powering electronic sensors, wireless radios and other small devices that will eventually connect the Internet of Things. While most power converters deliver a constant stream of voltage to a device, MIT’s new scheme allows low-power devices to cut their resting power consumption by up to 50 percent.

The MIT system was announced at the International Solid-State Circuits Conference earlier this month and maintains its efficiency over a very broad range of currents, from 500 picoamps to 1 milliamp.

"Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load," Arun Paidimarri, one of the postdocs who worked on the project said. "So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents."

Rather than providing a continuous flow of power, the MIT step-down converter works with "packets" of energy. "You have these switches, and an inductor, and a capacitor in the power converter," Paidimarri said, "and you basically turn on and off these switches." The switches themselves have a circuit that releases a packet of energy when the output voltage drops below a specific level. If the device is using a low-power circuit — say it’s a sensor waking up to take a measurement — then the converter only releases a few packets of energy. If the device needs a high-power circuit — to send a wireless signal, for example — then it can release up to a million packets per second.
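
To make the packet-based scheme easier to picture, here is a minimal, illustrative Python sketch of that kind of hysteretic control loop: a packet of charge is delivered only when the output voltage sags below a threshold, so a sleepy sensor triggers only a handful of packets while a radio burst triggers many. The capacitance, threshold and packet size below are invented for illustration and are not the MIT team's actual parameters.

# Illustrative simulation of a hysteretic, packet-based step-down converter.
# All numbers are invented for demonstration; they are not the MIT design's values.
C_OUT = 1e-6           # output capacitance (farads)
V_TARGET = 1.0         # regulation threshold (volts)
PACKET_CHARGE = 50e-9  # charge delivered per "packet" (coulombs)
DT = 1e-6              # simulation time step (seconds)

def simulate(load_current, duration=0.01):
    """Count how many energy packets are needed to hold the output near V_TARGET."""
    v_out = V_TARGET
    packets = 0
    for _ in range(int(duration / DT)):
        # The load continuously drains the output capacitor.
        v_out -= load_current * DT / C_OUT
        # The control circuit fires one packet only when the output sags.
        if v_out < V_TARGET:
            v_out += PACKET_CHARGE / C_OUT
            packets += 1
    return packets

# A sleepy sensor draws nanoamps; a radio burst draws close to a milliamp.
print("sensor:", simulate(1e-9), "packets")   # very few packets
print("radio: ", simulate(1e-3), "packets")   # many packets in the same interval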

What’s more, the resulting 50 percent drop in quiescent power means the researchers can start exploring other, lower-power energy sources like body-powered electronics.

Via: TechCrunch

Source: MIT News

5 ways to spot a phishing email

The content below is taken from the original (5 ways to spot a phishing email), to continue reading please visit the site. Remember to respect the Author & Copyright.

No one wants to believe they’d fall for a phishing scam. Yet, according to Verizon’s 2016 Data Breach Investigations Report, 30 percent of phishing emails get opened. Yes, that’s right — 30 percent. That incredible click-through rate explains why these attacks remain so popular: it just works.

Phishing works because cybercriminals take great pains to camouflage their “bait” as legitimate email communication, hoping to convince targets to reveal login and password information and/or download malware, but there are still a number of ways to identify phishing emails. Here are five of the most common elements to look for.

1. Expect the unexpected

In a 2016 report from Wombat Security, organizations reported that the most successful phishing attacks were disguised as something an employee was expecting, like an HR document, a shipping confirmation or a request to change a password that looked like it came from the IT department.

Make sure to scrutinize any such emails before you download attachments or click on any included links, and use common sense. Did you actually order anything for which you’re expecting a confirmation? Did the email come from a store you don’t usually order supplies from? If so, it’s probably a phishing attempt.

Don’t hesitate to call a company’s customer service line, your HR department or IT department to confirm that any such emails are legitimate – it’s better to be safe than sorry.

2. Name check

If you receive an email or even an instant message from someone you don’t know directing you to sign in to a website, be wary, especially if that person is urging you to give up your password or social security number. Legitimate companies never ask for this information via instant message or email, so this is a huge red flag. Your bank doesn’t need you to send your account number — they already have that information. Ditto with sending a credit card number or the answer to a security question.

You also should double-check the “From” address of any suspicious email; some phishing attempts use a sender’s email address that is similar to, but not the same as, a company’s official email address.

3. Don’t click on unrecognized links

Typically, phishing scams try to convince you to provide your username and password, so they can gain access to your online accounts. From there, they can empty your bank accounts, make unauthorized charges on your credit cards, steal data, read your email and lock you out of your accounts.

Often, they’ll include embedded URLs that take you to a different site. At first glance, these URLs can look perfectly valid, but if you hover your cursor over the URL, you can usually see the actual hyperlink. If the hyperlinked address is different than what’s displayed, it’s probably a phishing attempt and you should not click through.

Another trick phishing scams use is misleading domain names. Most users aren’t familiar with the DNS naming structure, and therefore are fooled when they see what looks like a legitimate company name within a URL. Standard DNS naming convention is Child Domain dot Full Domain dot com; for example, info.LegitExampleCorp.com. A link to that site would go to the “Information” page of the Legitimate Example Corporation’s web site.

A phishing scam’s misleading domain name, however, would be structured differently; it would incorporate the legitimate business name, but it would be placed before the actual, malicious domain to which a target would be directed. For instance, Name of Legit Domain dot Actual Dangerous Domain dot com: LegitExampleCorp.com.MaliciousDomain.com.

To an average user, simply seeing the legitimate business name anywhere in the URL would reassure them that it was safe to click through. Spoiler alert: it’s not.
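
One rough way to automate the check described above is to compare the domain a link actually resolves under with the brand name embedded in it. The short Python sketch below (standard library only) treats the last two labels of the hostname as the registered domain; a production check should consult the Public Suffix List instead, and the example domains are the hypothetical ones from the text.

from urllib.parse import urlparse

def registered_domain(url):
    """Naive 'registered domain': the last two labels of the hostname."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_spoofed(url, trusted_domain):
    """Flag links whose hostname merely contains the trusted name
    but actually resolves under a different registered domain."""
    host = (urlparse(url).hostname or "").lower()
    return trusted_domain.lower() in host and registered_domain(url) != trusted_domain.lower()

# Legitimate child-domain pattern vs. the phishing pattern described above.
print(looks_spoofed("https://info.legitexamplecorp.com/account",
                    "legitexamplecorp.com"))   # False: safe
print(looks_spoofed("https://legitexamplecorp.com.maliciousdomain.com/login",
                    "legitexamplecorp.com"))   # True: likely phishing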

4. Poor spelling and/or grammar

It’s highly unlikely that a corporate communications department would send messages to its customer base without going through at least a few rounds of spelling and grammar checks, editing and proofreading. If the email you receive is riddled with these errors, it’s a scam.

You should also be skeptical of generic greetings like, “Dear Customer” or “Dear Member.” These should both raise a red flag because most companies would use your name in their email greetings.

5. Are you threatening me?

“Urgent action required!” “Your account will be closed!” “Your account has been compromised!” These intimidation tactics are becoming more common than the promise of “instant riches,” taking advantage of your anxiety and concern to get you to provide your personal information. Don’t hesitate to call your bank or financial institution to confirm if something just doesn’t seem right.

And scammers aren’t just using banks, credit cards and email providers as cover for their scams; many are using the threat of action from government agencies like the IRS and the FBI to scare unwitting targets into giving up the goods. Here’s the thing: government agencies, especially, do not use email as their initial means of communication.

Phishing scams continue to evolve

This is by no means a comprehensive list. Phishing scammers are constantly evolving, and their methods are becoming more cunning and difficult to trace. New tactics include this frighteningly effective Gmail attack, end-of-the-year healthcare open enrollment scams, low-priced Amazon bargains, and tax-season attempts.

So, trust your gut. If an offer seems too good to be true, it probably is. If something seems even the slightest bit “off”, don’t open the email or click on links.

More resources on phishing and how to protect yourself can be found at Phishing.org.

This story, “5 ways to spot a phishing email,” was originally published by CIO.

Sinclair C5 gets a modern reboot as the IRIS eTrike

The content below is taken from the original (Sinclair C5 gets a modern reboot as the IRIS eTrike), to continue reading please visit the site. Remember to respect the Author & Copyright.

Before the Toyota Prius and Tesla Model S, there was the Sinclair C5. Launched in 1985, the electric tricycle was supposed to be the first of many battery-powered vehicles from inventor Clive Sinclair — renowned for developing personal computers such as the ZX Spectrum. The ambitious product was a huge flop due to its short range, low top speed and other limitations, selling only 5,000 before manufacturer Sinclair Vehicles folded. Perhaps it was ahead of its time, or at least that’s what Sir Clive’s nephew Grant Sinclair must be hoping, as he’s created a 25th century reimagining of the C5 called the IRIS eTrike.

The futuristic three-wheeler is part pushbike, part EV, with an eight-speed, twist grip gearing system. Its battery apparently has a one-hour charge cycle and a 50-mile range at a top speed of over 25MPH. But who wants the silver "Eco" version when you can get the matte black "Extreme" model with a top speed of over 30MPH? The eTrike will also be road legal, have a decent-sized boot, and is tall enough that Range Rover drivers should spot it (before crushing you anyway) — the low profile of the C5 wasn’t thought particularly safe, you see.

Unlike the C5, the eTrike is also built to be an all-weather vehicle thanks to its pod-like, mostly windscreen design. Inside, you get an LCD display for the essentials like speedometer and remaining charge, but curiously, you have to stick your smartphone into the included dock to get a live feed from the rear-view camera. Available to reserve right now for a £99 deposit, the IRIS eTrike is expected to launch this winter for £2,999 for the Eco model and £3,499 for the Extreme edition.

While there’s no doubt a legitimate, working prototype has been developed, it’s important to look at any Sinclair-associated product with a pinch of skepticism. For every ZX Spectrum re-release or re-design, there’s some sketchy project or another, like the ZX Vega+ handheld that’s been beset with delay after delay despite receiving over £500,000 in crowdfunding donations. Grant Sinclair himself still seems to be pursuing his POCO Zero micro computer idea after a failed Indiegogo campaign. And let’s not forget that Sir Clive announced a follow-up to the Sinclair C5 in 2010, but his two-wheeled X-1 e-bike never made it to market.

Via: Telegraph

Source: Grant Sinclair

Microsoft creates a low-data version of Skype for India

The content below is taken from the original (Microsoft creates a low-data version of Skype for India), to continue reading please visit the site. Remember to respect the Author & Copyright.

Skype’s place as the original gangster of internet messengers means that it’s never had to watch its weight, until now. Microsoft has put the app on a diet to announce Skype Lite, a slimmed-down messenger designed for countries like India. You’ll win no prizes for guessing that the Android app will heavily compress images and video and is intended to work reliably even on India’s 2G wireless infrastructure.

Microsoft is proud of its new creation, and is boasting that Skype Lite was "built in India, for users in India." The app also has a bundle of India-specific features, including SMS filtering, mobile data and WiFi usage monitoring and local Skype bots. The company is also working on bringing India’s national identity scheme, Aadhaar, into the app to enable callers to verify who they are speaking to. Skype Lite also supports seven languages, including Gujarati, Hindi and Urdu, and is available to download right now.

Via: TechCrunch

Source: Google Play, Microsoft

OpenStack sets its sights on the next generation of private clouds

The content below is taken from the original (OpenStack sets its sights on the next generation of private clouds), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, the OpenStack Foundation is launching the latest version of its platform, which allows enterprises to run an AWS-like cloud computing platform in their data centers. Ocata, as the 15th release of OpenStack is called, arrives after only a four-month release cycle, a bit faster than the usual six-month cycle, which will resume after this release. The reason for this is a change in how the Foundation organizes its developer events over the course of a release cycle. Because of the shorter cycle, this new release focused more on stability than new features, but it still manages to squeeze in a number of new features as well.

At this point, OpenStack is a massive project that consists of almost 20 sub-projects. All of these are getting updates, of course, but what stands out here is that many of the new features focus on better support for software containers in OpenStack. As OpenStack COO Mark Collier told me, the container projects are growing faster than the other projects. He described the combination of OpenStack with the Google-incubated Kubernetes container orchestration system as the “LAMP stack of the cloud” and attributed Kubernetes’ popularity to the fact that Google was willing to relinquish control over it and allow an open ecosystem to grow around it that wasn’t dominated by a single company.

For the Ocata release, improved container support means better integration of Kubernetes into Kolla, the OpenStack project that aims to make OpenStack itself easier to deploy on containers. The advantage here is that this doesn’t just make managing an OpenStack deployment easier, but also makes upgrading it a less complicated affair. Other updates include better support for Mesosphere in Magnum, OpenStack’s main project for making container orchestration services part of its stack, as well as Docker Swarm support for its Kuryr container networking service. OpenStack is clearly not playing favorites when it comes to container engines.

Even a year ago, there was a lot of discussion about whether containers might actually spell doom for OpenStack. For the most part, those fears were likely a bit overblown and containers are now an integral part of the project.

In discussing the future of OpenStack, Collier also noted that he is seeing a major shift in how enterprises are looking at their private clouds. The first generation of private cloud services, including OpenStack, weren’t very easy to use. “It required a much bigger team and you saw the adoption fit that pattern,” he said. “You saw Paypal and Walmart adopt it. It was great if you were one of those companies but it wasn’t worth the time for the mere mortals.” Now, as we’ve hit what Collier considers the second generation of private clouds, that’s less of an issue and you don’t need massive teams to set up a private cloud anymore — and there’s now a robust ecosystem of companies that can help you set it up, too.

In the earlier scenario, the amount of manpower needed to set up an OpenStack cloud made it difficult for small teams to do so, but Collier believes that what we’re seeing now is an inflection point where private clouds can once again compete on price with large public cloud services like AWS. There, you tend to pay a premium for flexibility, but using the likes of OpenStack can be cost-effective for sustained workloads now.


Diamanti Ships Industry’s First Bare-Metal Hyper-converged Container Platform

The content below is taken from the original (Diamanti Ships Industry’s First Bare-Metal Hyper-converged Container Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Diamanti, creators of the first hyper-converged infrastructure appliance purpose-built for containerized applications, today announced general… Read more at VMblog.com.

How Pinterest’s visual search went from a moonlight project to a real-world search engine

The content below is taken from the original (How Pinterest’s visual search went from a moonlight project to a real-world search engine), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sometime around 2013 and 2014, deep learning was going through a revolution that required pretty much everyone to reset their expectations as to how things worked, and leveled the playing field for what people were doing with computer vision.

At least that’s the philosophy that Pinterest engineer Andrew Zhai and his team have taken, because around that time he and a few others began working on an internal moonlight project to build computer vision models within Pinterest. Machine learning tools and techniques had been around for some time, but thanks to revelations in how deep learning worked and the increasing use of GPUs, the company was able to take a fresh look at computer vision and see how it would work in the context of Pinterest.

“From a computer vision perspective we have a lot of images where visual search makes sense,” Zhai said. “There’s this product/data-set fit. Users that come to Pinterest, they’re often in this visual discovery experience mode. We were in the right place at the right time where the technology was in the middle of a revolution, and we had our data set, and we’re very focused on iterating as quickly as we can and get user feedback as fast as we can.”

The end result was Lens, a product Pinterest launched earlier this month that allows users to point their camera at an object in the real world and get back Pinterest search results. While a semi-beta was launched last year, Lens was the result of years of scrapped prototypes and product experimentation that eventually produced something that would hopefully turn the world, collectively, into a bunch of pins searchable through your camera, creative lead Albert Pereta said.

When a user looks at something through Lens, Pinterest’s visual detection kicks in and determines what objects are in the photo. Pinterest’s technology can then frame the image around, say, a chair, and use that to issue a query using Pinterest’s existing search technology. It uses certain heuristics, like a confidence score for what kind of object it is, and the context of it — like whether it is the dominant object, the largest one, the one most in focus, or something along those lines. Zhai said part of the priority was leveraging as much of Pinterest’s existing technology as possible, like search, to build its visual search products.
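
As a purely hypothetical sketch of how such heuristics might combine, the Python snippet below scores each detected object by a weighted mix of confidence and how much of the frame it fills, then crops to the winner before issuing the search query. The detection format, field names and weights are assumptions for illustration, not Pinterest's actual implementation.

# Hypothetical sketch of a "which detected object do we search for?" heuristic.
def pick_query_object(detections, image_area):
    """Pick the detection most likely to be what the user is pointing at,
    favouring confident detections that dominate the frame."""
    def score(det):
        x, y, w, h = det["box"]               # bounding box in pixels
        relative_size = (w * h) / image_area  # how much of the frame it fills
        return 0.6 * det["confidence"] + 0.4 * relative_size
    return max(detections, key=score) if detections else None

detections = [
    {"label": "chair", "confidence": 0.92, "box": (40, 60, 300, 420)},
    {"label": "lamp",  "confidence": 0.81, "box": (10, 10, 60, 120)},
]
query = pick_query_object(detections, image_area=640 * 480)
if query is not None:
    print("crop to", query["box"], "and search for", query["label"])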

Pinterest had collected a lot of data from users initially cropping objects in their images in order to search for objects, drawing bounding boxes for their searches. The company had positive feedback loops to determine if those searches were correct — if users engaged with results for a chair, then it was probably a chair. With that, the company had lots of ways to initially train these deep learning algorithms in order to shift the process over to camera photos and try to do the same thing. All that paid off in the future, as the initially janky projects gave the company the critical data set to build something more robust.

Pinterest’s goal was to emulate the service’s core user experience: that sort of putzing around and discovering new products or concepts on Pinterest. Just getting the literal results like you might expect from a Google visual search wasn’t enough to extend the Pinterest experience beyond its typical search — with keywords and concepts — to what you’re doing with your camera. There are other ways to get to that result, like literally reading the label on a bottle or asking someone what kind of shoes they are wearing.

“If I’m in my kitchen and have an avocado in front of me, if we point at that and we return a million photos of avocados, that’s close to as useless as you can get,” Pereta said. “When someone tags an avocado on Pinterest, what they expect is to wander about. It can go from cooking a recipe to health benefits and growing one in a garden. You know the related pins, you don’t quite understand why they’re there but sometimes they feel like exactly what you want to see.”

One of the biggest challenges Pinterest faced was figuring out how to jump from user-generated content — like low-quality photos — to results that included more professional, high-quality photography. It was easy to map from low-quality photos, like ones that are blurry or without great lighting, to other low-quality photos, visual search engineering manager Dmitry Kislyuk said. That’s primarily what the results were returning in the first demos that the team was working on, so the team had to figure out how to get to higher-quality results. Both types of images clustered together on their own, so the company had to basically force them to deliver the same semantic results and bucket them together.

Collectively, these pieces add up to a strong argument that Pinterest is trying to be a leader in visual search. That’s largely been considered one of Pinterest’s biggest strengths. Because of its large data set, which lends itself so neatly to products, each part of an image can easily be broken out into searches for other products. These searches existed early on at Pinterest, but only in limited form — and users couldn’t figure out what to do with them. In the past few years, though, they’ve started to mature more and more. The pitch is part of what’s made Pinterest attractive to advertisers, though it needs to ensure it makes the jump from a curiosity baked into an innovation budget to a mainstay product alongside Facebook (and soon potentially Snapchat).

A lot of the success — and origins — of Pinterest’s modern visual search dovetails almost perfectly with the rise of GPU usage for deep learning. The processors had existed for a long time, but GPUs are great at running processes in parallel such as rendering pixels on a screen and doing it very quickly. CPUs have to be more versatile, but GPUs were specialized at running these kinds of processes in parallel, enabling the actual mathematics that’s happening in the background to execute faster. (This revolution has also rewarded NVIDIA, one of the largest GPU makers in the world, by more than tripling its stock price in the past year and turning it into a critical component in the future of deep learning and autonomous driving.)

“Methods for deep learning existed for 10 or 20 years, but it was this one paper around 2013 and 2014 that showed when you provided those methods on a GPU you can get amazing accuracy and results,” Zhai said. “It’s really because of the GPU itself, without that this revolution probably wouldn’t happen. GPUs only care about these specific things like matrix multiplication, and you can do it really fast.”

The actual process is a careful dance between what happens on the phone and what happens online, in order to build a more seamless user experience. For example, when a user looks at something through their phone, the annotations for Lens are returned quickly while the company finishes doing the image search on the back-end. That kind of attention to perceived user latency helps smooth out the experience and makes it feel more real-time. That will be important going forward as Pinterest begins to expand internationally — and has to start grappling with problems like high-latency areas, potentially moving more operations to the phone.
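
A minimal sketch of that "annotations first, full results later" split might look like the following Python asyncio snippet; the function names, payloads and delays are invented for illustration and are not Pinterest's API.

# Hypothetical sketch of returning cheap annotations immediately while the
# heavier visual search finishes in the background.
import asyncio

async def fetch_annotations(image):
    await asyncio.sleep(0.05)   # cheap call, returns almost immediately
    return ["mid-century", "armchair"]

async def fetch_visual_search_results(image):
    await asyncio.sleep(0.6)    # heavier back-end image search
    return ["pin_123", "pin_456", "pin_789"]

async def lens_lookup(image):
    # Kick off the slow search right away ...
    results_task = asyncio.create_task(fetch_visual_search_results(image))
    # ... show the quick annotations as soon as they arrive ...
    print("annotations:", await fetch_annotations(image))
    # ... then fill in the full result grid when the slow call finishes.
    print("results:    ", await results_task)

asyncio.run(lens_lookup("camera_frame.jpg"))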

Pinterest’s results were partly the product of a lot of new learnings, and partly luck that every company’s team had to scrap and re-learn its approach to deep learning at the same time. Beyond that, Pinterest has billions of images, largely high-quality ones that lend themselves to being naturally searchable, an archive of data that other companies or academics might not have. The whole “move fast, break things” approach kind of fits with Pinterest, which was trying to get versions in front of users in order to figure out what worked best, because the team (of fewer than a dozen) felt like it was inventing new user behavior.

There are plenty of attempts by other companies to turn this technology into something commercial, with startups like Clarifai raising a lot of capital and building metadata-driven visual search that they make available to retailers and businesses. Google is always a looming beast with its vast amount of data, though whether that translates into a commercial product is another story. Pinterest, meanwhile, hopes that its focus on returning related ideas rather than direct one-to-one image results — and the tech behind it — is something that’ll continue to differentiate it going forward.

“We’re trying to use camera to turn your world into Pinterest,” Pereta said. “It’s not that we’re creating some completely new experience to a user. It feels like when we nailed it, it’s when you feel like the entire world is made of pins. That thing, I take a photo of that chair, it’s not just that chair’s similar styles but also it in context. If you were to find that chair on Pinterest, that’s exactly what you’d expect to find. That wandering, that discovering. When we do a really good job with camera, it’s gonna feel like the world is made of pins.”

Friday Hack Chat: Security for IoT

The content below is taken from the original (Friday Hack Chat: Security for IoT), to continue reading please visit the site. Remember to respect the Author & Copyright.

Over the last few weeks, our weekly Hack Chats on hackaday.io have gathered a crowd. This week, we’re talking about the greatest threat humanity has ever faced: toasters with web browsers.

The topic of this week’s Hack Chat is Security for IoT, because someone shut down the Internet with improperly configured webcams.

This chat is hosted by the Big Crypto Team at the University of Pittsburgh. [Wenchen Wang], [Ziyue Sun], [Brandon Contino], and [Nick Albanese] will be taking questions about lightweight devices connected to the Internet. Discussion will include building things that connect to larger networks securely.

The Big Crypto team at UP is thinking about the roadblocks people face when trying to implement security in their projects, and whether apathy or ignorance is the main reason security isn’t even considered by the worst IoT offenders.

The Hack Chat is scheduled for Friday, February 24th at noon PST (20:00 GMT).

Here’s How To Take Part:

Our Hack Chats are live community events on the Hackaday.io Hack Chat group messaging.

Log into Hackaday.io, visit that page, and look for the ‘Join this Project’ Button. Once you’re part of the project, the button will change to ‘Team Messaging’, which takes you directly to the Hack Chat.

You don’t have to wait until Friday; join whenever you want and you can see what the community is talking about.

Upcoming Hack Chats

These Hack Chats are becoming very popular, and that’s due in no small part to the excellent lineup of speakers we’ve hosted. Already, we’ve had [Lady Ada], [Sprite_tm], and [bunnie] — engineers, hackers, and developers who are at the apex of their field. We’re not resting on our laurels, though: in a few weeks we’ll be hosting Hack Chats with [Roger Thornton], an engineer with Raspberry Pi, and Fictiv, masters of mechanical manufacturing.

Filed under: Hackaday Columns