Acorn World exhibition in Cambridge – 13th and 14th May

The content below is taken from the original (Acorn World exhibition in Cambridge – 13th and 14th May), to continue reading please visit the site. Remember to respect the Author & Copyright.

A modern event with a retro bent! In just over a week’s time, the Centre for Computing History, based in Cambridge, will be hosting an event that should be of interest to anyone with a fondness for computers that came from the Acorn stable – Acorn World 2017. The event has been organised by the Acorn […]

Paperspace launches GPU-powered virtual machines loaded with tools for data scientists

The content below is taken from the original (Paperspace launches GPU-powered virtual machines loaded with tools for data scientists), to continue reading please visit the site. Remember to respect the Author & Copyright.

Not many companies would dare to fight with Amazon, Microsoft and Google at the same time, but three-year-old Paperspace thinks it can carve out a cloud computing niche by rolling out the red carpet for data scientists. Today it launched new GPU-powered virtual machines that utilize Nvidia’s Pascal cards. With pre-installed machine learning frameworks, Paperspace caters to the emerging market of prosumer and enthusiast data scientists.

“Amazon Web Services (AWS) is amazing, but it’s hard to get going,” said Dillon Erb, co-founder of Paperspace.

To make machine learning in the cloud more accessible, Paperspace offers users a familiar Linux desktop from within their web browser. Anyone can then use either secure shell or a web-based terminal to run code. And in addition to interface and hardware improvements, Paperspace aims to undercut on price by offering access to Pascal chips with 2,560 CUDA cores and 16GB of memory for just $0.65 per hour.
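
As a rough illustration of what a first session on one of these machines might look like, here is a minimal GPU sanity check. It assumes PyTorch is among the pre-installed frameworks, which may vary by machine template:

    # Minimal check that the VM's GPU is visible and usable (assumes PyTorch).
    import torch

    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))  # e.g. a Pascal-class card
        x = torch.randn(4096, 4096, device="cuda")
        y = x @ x                                        # quick matrix multiply on the GPU
        torch.cuda.synchronize()                         # wait for the kernel to finish
        print("OK:", y.shape)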

“In the last 18 months, we have seen a rise in people asking for GPUs,” added Erb.

In time Paperspace will have to prove out its thesis about the size of the democratized machine learning market. Over 50,000 machines have been launched by Paperspace users, indicating traction, but it’s still very early days for the company.

Machine learning as a service (MLaaS) startups have taken heat in recent months. There are a number of reasons for this, but one of them is that there’s a mismatch between the market of highly technical engineers and products that simplify the development process for novices.

To be clear, Paperspace is in a different category from startups like Bonsai and H2O.ai, but the comparison is still instructive. Existing cloud computing providers, already embedded in enterprises, will continue to move toward democratization — that doesn’t guarantee whitespace for anyone else, especially when you take into account the sheer cost of building and upgrading data centers.

Y Combinator, NYU and Insight Data Science are early partners with Paperspace. The new GPU-powered virtual machines will be used by Insight for its professional training program. YC is also experimenting with the streamlined system for use within its new cohort of AI startups. 

May the Fourth be with you on World Password Day

The content below is taken from the original (May the Fourth be with you on World Password Day), to continue reading please visit the site. Remember to respect the Author & Copyright.

Get ready to be bombarded with “May the Fourth be with you” puns regarding your passwords and identity, as this year May 4 is not only Star Wars Day but also World Password Day.

Leading up to World Password Day, I received dozens of emails about how bad our password hygiene still is: studies about poor password management, reminders to change passwords, pitches about password managers and biometric options to replace passwords, reminders to use multi-factor authentication (MFA), as well as the standard advice for choosing a stronger password. Some of that advice contradicts NIST-proposed changes for password management.

Although NIST closed comments on its Digital Identity Guidelines draft on May 1, VentureBeat highlighted three big changes. Since this is NIST, and changes to password management rules will eventually affect even nongovernment organizations and trickle down to pretty much everyone online, it’s important to look at them. Those changes, according to VentureBeat, boil down to:

  • No more periodic password changes.
  • No more imposed password complexity.
  • Mandatory validation of newly created passwords against a list of commonly used, expected, or compromised passwords.
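
The third rule is simple to implement. Below is a minimal sketch in Python; the inline blocklist is a stand-in for a real list of breached passwords, and the loader shows how one might be read from a file (the file name is hypothetical):

    def load_blocklist(path="common-passwords.txt"):
        """Load a plain-text blocklist, one password per line (hypothetical file)."""
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def is_acceptable(password, blocklist, min_length=8):
        # The draft keeps a length floor but drops composition rules...
        if len(password) < min_length:
            return False
        # ...and mandates checking against commonly used or compromised passwords.
        return password.lower() not in blocklist

    blocklist = {"123456", "password", "p@ssw0rd", "qwerty"}  # stand-in for a real list
    print(is_acceptable("P@ssw0rd", blocklist))               # False: it is on the list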

Right now, NIST is working on the SOFA-B framework, short for Strength of Function for Authenticators – Biometrics. It will establish a standardized method for comparing and combining authentication mechanisms and “focuses on three core concepts: False Match Rate, Presentation Attack Detection Error Rate, and Effort.” By creating SOFA-B, NIST hopes to “achieve a level of measurability similar to that of entropy for passwords.”

SecureIDNews reported:

Working with the biometrics community, NIST has a five-step approach to creating the SOFA-B framework:

  1. Analyzing the attack points of a biometric system
  2. Requiring baseline security to mitigate common attacks
  3. Quantifying factors specific to biometric systems
  4. Differentiating attack types as random attacks or targeted attacks on a known user
  5. Measuring strength of function for biometric authenticators

Why should you care? Because the basis for biometric updates in SOFA-B has worked its way into NIST SP 800-63-3, aka NIST’s Digital Identity Guidelines draft. When it’s done, you might be able to compare the biometric security in one device, say a smartphone, to another.

We’ve been hearing for years that passwords are dead, yet most people logging in to most places online still use a username and password—or sign in via another site such as Facebook or Google, where they were authenticated via username, password and, hopefully, 2FA.

Most everyone knows that, as a whole, people suck at setting up strong passwords and changing default passwords. In fact, according to the latest Verizon Data Breach Investigation Report, “80 percent of hacking-related breaches leveraged either stolen passwords and/or weak or guessable passwords.” Furthermore, the report states, “If a username and password is the only barrier to escalating privilege or compromising the next device, you have not done enough to stop these actors.”

Under the discussion of breach trends in Verizon’s DBIR, it states:

Even if you are not breached, there are armies of botnets with millions (or billions) of credentials attempting to reuse them against other sites. In other words, even though components of authentication weren’t compromised from you, it doesn’t mean they were not compromised. Again, if you are relying on username/email address and password, you are rolling the dice as far as password re-usage from other breaches or malware on your customers’ devices are concerned. Those are two things you shouldn’t have to worry about.

Although a basic username/password login is not enough, despite what some of the pitches claim, I can’t imagine this will be the last World Password Day. So take care with your passwords, as they are the keys that open the door to your online life, business secrets or even networks. I encourage you to use a password manager and to set up 2FA on every site that offers it. Don’t forget to change those shared passwords for online streaming sites either!
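
Incidentally, the six-digit codes most 2FA apps generate are just TOTP (RFC 6238), which can be sketched with nothing beyond the Python standard library. This is an illustration of the mechanism, not something to deploy; use a vetted library in production:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period              # 30-second time step
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints the current 6-digit code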

Happy Star Wars Day, as well as World Password Day!

Dubai is the first city to design its own Microsoft font

The content below is taken from the original (Dubai is the first city to design its own Microsoft font), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s not as flashy as having the world’s tallest building, but the city of Dubai can now claim a new achievement — it’s the first to create its own Microsoft font. The Dubai Font, which combines Latin and Arabic scripts, can be accessed globally through Microsoft Office 365.

Dubai’s crown prince, Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, oversaw the project and is now ordering government agencies to use the new font in their official correspondence. Prince Hamdan also shared a video on Twitter touting the Dubai Font as a tool for self-expression. "Expression knows no boundaries or limits. Expression is strength and freedom. It defines who you are," the video states. The prince concludes his tweet with the hashtag #ExpressYou.

But, as The New York Times points out, there are boundaries and limits for expression in Dubai, and in the rest of the United Arab Emirates as a whole. The government restricts freedom of speech and freedom of the press. Even a YouTube sketch comedy video can land you in a maximum-security prison for nine months. "What’s missing from Dubai’s new motto is a little asterisk with fine print, ‘Except that anyone who says something the emirs don’t like goes to jail,’" Sarah Leah Whitson, the executive director of the Middle East and North Africa division of Human Rights Watch, tells The New York Times.

At least the Dubai Font has decent kerning, which is more than can be said for other internet fonts.

Via: The New York Times

Source: Dubai Font

Google brings its solar panel calculator to Germany

The content below is taken from the original (Google brings its solar panel calculator to Germany), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google’s Project Sunroof is a way of combining the company’s mapping data with information on how much sunlight hits your home. With it, people can work out if their abode gets enough radiation for them to consider investing in solar panels. Until now, the service was limited to the US but, from today, the system is rolling out to Germans similarly curious about adopting renewables.

The search engine has teamed up with German power company E.On to map the country and calculate how much sunlight falls on each roof. The project currently covers 40 percent of all German homes, totaling some seven million buildings, with a focus on densely populated areas like Berlin, Munich, Rhine-Main and the Ruhr. Locals can simply type in their address to be given Google’s estimate of the potential costs and returns of a solar installation.
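
A crude back-of-the-envelope version of that kind of estimate looks something like the sketch below. Every figure is an illustrative assumption, not a number from Google or E.On:

    # Rough yearly yield and savings for a rooftop solar installation.
    roof_area_m2 = 30            # usable, well-oriented roof area
    insolation_kwh_m2_yr = 1100  # rough yearly solar irradiation in Germany
    panel_efficiency = 0.18
    system_losses = 0.85         # inverter, wiring, shading
    price_eur_per_kwh = 0.30     # approximate German retail electricity price

    yearly_kwh = roof_area_m2 * insolation_kwh_m2_yr * panel_efficiency * system_losses
    yearly_savings = yearly_kwh * price_eur_per_kwh
    print(f"~{yearly_kwh:.0f} kWh/year, ~{yearly_savings:.0f} EUR/year saved")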

The tool is also being added to E.On’s solar sales website and will, hopefully, encourage more people to make the switch. Germany is leading the way in the fight against climate change, and in 2016, a full third of its energy came from sources such as wind, solar and hydro. The country’s efforts have been so successful that there is a pleasant "trail of blood" running across the local coal industry.

Source: Google

Sure it’s Time to Upgrade your Data Center, but is your Plan Feasible?

The content below is taken from the original (Sure it’s Time to Upgrade your Data Center, but is your Plan Feasible?), to continue reading please visit the site. Remember to respect the Author & Copyright.

So you’ve realized it’s time to upgrade your data center and you’re overflowing with "brilliant" ideas. You’ve probably run out of space, power, and/or cooling and want to take advantage of newer technologies and operate more efficiently. You’ve been googling every possible approach, but the daunting options are starting to overwhelm you. So many choices. So many trade-offs. You can’t decide if you should focus on short-term upgrades or invest and plan for the long haul. And which approach is cheaper? At this point, it’s time to have the experts at PTS conduct a feasibility study to quantify the costs and benefits of your data center upgrade plans.

PTS will identify the risks and rewards associated with your project. We will help you understand the requirements to design and build a new or upgraded data center, and clarify the ROI and impact of any proposed changes. Armed with the information in a feasibility study, you will be fully prepared to approach your data center project with realistic expectations and a full understanding of the benefits of the chosen approach. And PTS can execute the plan to perfection, and will be with you every step of the way.

For more information, download a free copy of the PTS Brochure on:
Data Center Planning & Feasibility Consulting Services

Core blimey! 10,000 cores per rack in startup’s cloud-in-a-box

The content below is taken from the original (Core blimey! 10,000 cores per rack in startup’s cloud-in-a-box), to continue reading please visit the site. Remember to respect the Author & Copyright.

+Comment Say hello to hyperdense server and NVMe storage startup Aparna Systems and its Cloud-in-a-Box system.

Originally named Turbostor and founded in February 2013, the company has emerged from stealth with the Orca µCloud, a 4U enclosure that converges compute, storage and networking and offers, Aparna claims, up to 10,000 cores in a rack. The 4015 version has 15 µServers and the 4060 has up to 60. With 16-core µServers, that works out to nearly 1,000 cores per 4U 4060 box, so ten of them in a 42U rack gets you to the 10,000-core figure.
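
The density arithmetic is easy to check:

    # Sanity-checking Aparna's headline numbers.
    cores_per_server = 16     # Oserv16 variant
    servers_per_4060 = 60
    boxes_per_rack = 10       # ten 4U enclosures in a 42U rack

    cores_per_box = cores_per_server * servers_per_4060   # 960, i.e. ~1,000 per box
    print(cores_per_box, cores_per_box * boxes_per_rack)  # 960 9600, i.e. ~10,000/rack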

The Orca µServer is packaged in a cartridge sized like a 3.5-inch hard disk drive, draws less than 75 watts, and comes in two variants: the Oserv8 with 8-core Broadwell Xeons, and the Oserv16 with 16-core ones.

Oserv16 µServer

Both have DDR DRAM and dual SATA or NVMe SSDs. We’re told “storage IO is non-blocking based on its support for both SATA at 12 Gbps (6 Gbps per SSD) and NVMe at 64Gbps (32 Gbps per SSD), with latencies of 100 microseconds (µs) and 10µs, respectively.”

The µCloud systems enclosure is NEBS-compliant and delivers fully fault-tolerant, non-stop, non-blocking performance, with:

  • 5 “nines” availability
  • Dual, hot-swappable AC/DC power supplies
  • GPS clock to support applications that require precise timing
  • Dual, hot-swap active/active switches that deliver 20Gbps per µServer
  • Aggregate uplink capacity of 640Gbps (2 switches x 8 ports at 40 Gbps each)

It can be used as a bare-metal system or for running virtualized or containerized environments.

Orca µCloud 4060

Aparna claims its compact convergence of compute, storage and networking, when compared to existing clusters made from rack-level systems or blade servers, can mean up to a 40 per cent CAPEX and OPEX saving, lower space needs and an electricity draw reduction of up to 80 per cent.

It’s marketing the Orca systems to service providers and enterprises for mission-critical applications from the edge to the core. The startup is casting its net far and wide, saying Orca’s “open software architecture and non-stop high performance make the Cloud-in-a-Box suitable for virtually any networking, computing or storage application, including fog and multi-access edge computing, databases, data analytics, the Internet of Things, artificial intelligence and machine learning.”

CEO Sam Mathan says he expects “the µCloud systems [to be] especially popular for edge computing and aggregation applications. Of course, these same capabilities are also important in the enterprise, where the ability to scale compute and storage resources is often constrained by available data centre space and power.”

Other execs include CTO Alex Henderson, a co-founder of Turbostor, as is Ramana Vakkalagadda, director of software engineering.

Aparna investors include:

  • Divergent Venture Partners
  • Vish Mishra, Clearstone Partners
  • Sam Mathan
  • Kumar Malavalli, Brocade founder
  • Sam Srinivasan, Former Cirrus Logic CFO

We don’t know the funding amounts but Divergent reportedly put in $500,000.

Orca systems and server table

+Comment

This hopeful killer whale of servers seems to rely on innovative packaging to produce its core density, high per-server bandwidth and precise event timing. It must also have ferociously efficient cooling technology in its 4060 version, with 60 x 16-core Xeons milling around in there.

We don’t know the storage capacity but suspect it’s not much, given the space constraints inside an Orca µServer. We don’t know if 2.5-inch or M.2 form factor flash storage is being used either. Aparna simply does not say how much SSD capacity there is.

Nor do we know much about its sales channel. The company has just come out of stealth, has a few customers and what looks like a hot box with neat timing features suitable for carrier types as well as dense server packaging.

We think the funding needs for such a hardware-focussed startup must be in the $5m to $10m area, if not more. Aparna now has to sell a fair number of systems and get set for an A-round of VC funding. It has a mountain to climb as it establishes itself, with Cisco, Dell, HPE, Huawei, Lenovo, Supermicro and others selling servers against it. If Aparna can pack server CPUs this close together then, surely, so can they.

Orca µCloud 4015 and 4060 systems, and the Oserv8 and Oserv16 µServers are all available for customer shipment. Pricing for entry-level configurations of the µCloud model 4015 system begins at $49,500. ®

Configure StorSimple as an iSCSI Storage System

The content below is taken from the original (Configure StorSimple as an iSCSI Storage System), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this step-by-step instructional guide, I will show you how to configure an Azure StorSimple 1200 virtual appliance as an MPIO-capable iSCSI storage system.

Please read Deploying a StorSimple Virtual Appliance before proceeding with this article.

Why Use iSCSI?

StorSimple will often be used when demands on an existing storage system have exceeded its capabilities. For example, an application server uses an E: drive to store terabytes of data. Demand continues to grow. The business has decided that instead of continuing to invest in fast and expensive local storage for the virtual hard disks of the virtual machine, they are going to switch to using cloud storage. They need to do so without too much configuration change.

By using the iSCSI capabilities of StorSimple, the customer can:

  1. Configure a volume (iSCSI LUN) on the on-premises StorSimple virtual appliance.
  2. Connect the application server to the StorSimple volume using the guest OS’s built-in iSCSI initiator and bring the volume online as F:.
  3. Shut down the application in the virtual machine.
  4. Copy the data from E: to F:.
  5. Disconnect E: from the virtual machine and change the StorSimple volume from F: to E: on the application server.
  6. Start the application in the virtual machine. The data is at the same drive letter, but it is now kept in StorSimple: storage that is local and tiered to Azure.
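
Step 4 is just a bulk copy. On Windows a tool like robocopy would be the usual choice, but as a rough sketch of the idea (the paths are hypothetical, and Python 3.8+ is assumed for dirs_exist_ok):

    import shutil

    SOURCE = "E:\\"   # the old, local volume
    DEST = "F:\\"     # the new StorSimple-backed volume

    # Copy the directory tree; rerunning resumes into existing folders.
    shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
    print("copy complete")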

Deploying the StorSimple Virtual Appliance

When you are designing the virtual machine, you probably should keep normal iSCSI concepts in mind:

  • Use 2 virtual NICs just for iSCSI, as well as the default management virtual NIC. This will allow you to have 2 iSCSI connections from each application server that is using MPIO.
  • Assign these iSCSI NICs to a dedicated storage network. This is either a virtual network or VLAN.
  • Do this or similar for each machine that will connect to the StorSimple virtual appliance.
  • Each machine that will connect to StorSimple should have MPIO enabled. This is a feature in Server Manager on Windows Server.

When you are configuring Network Settings in the local StorSimple browser interface, you will have 3 interfaces to assign IP addresses to:

  • Ethernet: The management interface and the only interface capable of talking to Azure
  • Ethernet 2: The first iSCSI NIC
  • Ethernet 3: The second iSCSI NIC
Configuring Management and iSCSI NICs in StorSimple Virtual Appliance [Image Credit: Aidan Finn]

When you are setting up the StorSimple virtual appliance in the local browser interface, do not forget to choose iSCSI as the appliance option. You do not want the default File Server in Device Settings.

Setting Up StorSimple as an iSCSI Storage Appliance [Image Credit: Aidan Finn]

Create an iSCSI Volume

Log into your application server and retrieve the IQN for iSCSI communications. In the case of Windows Server:

  1. Open iSCSI Initiator. Permit the service to run after every reboot.
  2. Open the Configuration tab.
  3. Copy the IQN and keep it safe.
Retrieving the iSCSI IQN from an Application Server [Image Credit: Aidan Finn]

You can create a new iSCSI Volume in two places in the StorSimple Device Manager in the Azure Portal:

  • Centrally in the StorSimple Device Manager: Treating your collection of appliances as a logical unit
  • In the properties of an individual virtual appliance

Go to Volumes and click +Add Volume. In my example, I have to select a virtual appliance. Then, the rest of the Add Volume blade will populate with options. Enter the following details:

  • Volume name: The name of the LUN that you want to create and share via iSCSI
  • Type:
    • The default tiered volume: Mix of local and cloud storage
    • Locally pinned: Entirely local storage
  • Capacity: Between 500GB and 5TB
  • Connected hosts: Permit a server to connect to the volume via iSCSI

Navigate into Connected Hosts. You can select an existing server or click Add New. I will do the latter. Enter the name of the server to describe the access control record (ACR) and paste in the IQN of the application server. Click Add.

Wait for the ACR creation task to complete. Select the ACR and click Select. The Add Volume blade is now complete and you can click Create. A new iSCSI volume is brought online after a few moments.

Creating a New iSCSI Volume in StorSimple [Image Credit: Aidan Finn]

Adding More Servers to a Volume

You can allow more servers to connect to an iSCSI volume. Note that this should only be done when the servers are in the same cluster and access the volume in a coordinated manner.

To add more ACRs, browse to the volume in StorSimple Device Manager, open the volume, and click Connected Hosts. You can add an ACR for each server. Enter the name to describe the connection and the IQN of the server in question.

The post Configure StorSimple as an iSCSI Storage System appeared first on Petri.

Windows 10 S laptops will start at $189 and ship this summer

The content below is taken from the original (Windows 10 S laptops will start at $189 and ship this summer), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s answer to Chrome OS is called Windows 10 S. This new operating system is a streamlined and secured version of Windows 10. It runs sandboxed apps and doesn’t require expensive hardware.

And this is where Microsoft shines as it can talk with all major PC OEMs to convince them to build Windows 10 S devices. The company announced that Acer, Asus, Dell, Fujitsu, HP, Samsung and Toshiba are all working on Windows 10 S devices.

Entry-level Windows 10 S devices will start at $189, which should help convince schools when it comes to buying a ton of laptops for their students.

These devices come with a free one-year subscription to Minecraft: Education Edition. But that’s not all: Office 365 apps will also be free for both students and teachers.

Windows 10 S devices will ship this summer, just in time for the new school year. Schools will be able to download Windows 10 S for free if they want to upgrade older devices to Windows 10 S. Let’s see if this will be enough to compete with Google in the education space.

How Microsoft plans to reinvent business productivity

The content below is taken from the original (How Microsoft plans to reinvent business productivity), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s Office applications haven’t changed much over the past 25 years. Indeed, a time-traveller from 1992 who knew how to use Word 5.5 for DOS or Mac System 7 would only have to get used to the tools moving from vertical menus to the horizontal ribbon.

Yes, Microsoft successfully brought Office back to the Mac after years of neglect. It also used the acquisitions of Acompli and Sunrise to quickly get high-quality email and calendar apps onto iOS and Android — those teams are revitalizing the Outlook applications on PC and Mac, and the new To Do service is trying to do the same thing, based on the popular Wunderlist app. Yes, there are some clever new tools in Word and PowerPoint that use machine learning to improve spell checking and automate slide design, and the monthly updates keep adding more features. And, yes, the hidden gem that is OneNote is finally getting significant investment to make the note-taking tool more useful on more platforms.

But after all this time, a Word document still looks like a Word document, a PowerPoint presentation still looks like a nice slide deck, an Excel spreadsheet still looks like a spreadsheet, and an Access database still looks like a searchable, sortable, customizable card catalogue. Meanwhile, cloud services have become far more powerful ways of capturing and using information.

Linking data, not documents

Power BI takes massive data sets from multiple services and data stores and creates visualizations that would need deep expertise to create in Excel. Drag-and-drop programming in Microsoft Flow lets you take information from services like Twitter, run it through machine learning APIs like sentiment analysis and use it to generate alerts or control an internet of things (IoT) device like a lightbulb. PowerApps lets you add information to workflows that are too specific to your business to be covered by a CRM system or even industry-specific software.

For example, if you’re a small business that inspects and repairs engines and generators, you can use PowerApps to create your own app to capture all the steps of your checklist on a phone or tablet, including taking photos of the machinery and calculating the measurements instead of filling out paper forms that end up as a document. For that PowerApps customer, doing those calculations was the single most error-prone step of the process. And Microsoft’s new Health team is trying to do something similar for doctors, who (in the US) currently spend 102 minutes a day filling out forms.

Dynamics is increasingly becoming Microsoft’s third commercial cloud, alongside Azure and Office 365, and that’s where Microsoft has been “reinventing business process,” as CEO Satya Nadella puts it. That starts with shifting software you used to run on your own servers into a cloud service that’s managed for you and updated regularly with new features, to compete with both Salesforce and ERP tools from companies like Oracle and SAP, and with unifying the system of record for your business with the system of engagement that tells you about your customers.

This is where the LinkedIn integration that Microsoft is just starting fits in. Currently it’s being used as a source of information to understand which people are relevant to your sales pitches (Dynamics and the LinkedIn Sales Navigator show you to whom you should send what information and when, and you can now buy them together as the Microsoft Relationship Sales Solution for $135 a month), and for keeping track of employees and potential hires on LinkedIn.

The new Dynamics sales tool looks at email and calendar information compared to who you’re connected to on LinkedIn to see if you’re meeting and talking to the right people, and it might suggest a coworker who could help make a connection that would help your deal. Instead of filling out a document in Office and sending it to a human, in the future you might chat with a bot to kick off a business process. That’s intelligence aimed at making you more productive, and this broad notion of productivity covers everything from blockchain-based smart contracts running on Azure to being able to have a business conversation with live translation through Skype – to the MyAnalytics add-in for Office that uses the Office Graph to tell you how much time you spend in meetings and whether you’re missing email from key contacts.

In a blog post, Nadella wrote about “systems of intelligence that are tailored to each industry, each company, each micro-task performed by each person, systems that can learn, expand and evolve with agility as the world and business changes.” And the approaches outlined above do something very different with the information that would otherwise end up in a document as just another step of a business process — but what about changing those documents?

Documents get intentional?

For the past quarter-century, the cash cow that is Office has simply been too successful to be replaced, despite several projects targeting it over the years.

Some of you may remember that, back in 2000, part of the original .NET vision was to create a “universal canvas” that “[built] upon XML schema to transform the internet from a read-only environment into a read/write platform, enabling users to interactively create, browse, edit, annotate and analyze information,” Microsoft wrote.

Over the years that canvas was going to be the web browser, or XML data structures in documents, but what we mostly got out of it was a more reliable, XML-based file format for Office documents that isn’t really used for any of the clever things you could do with a semi-structured document. There are, for example, tools that can automatically combine information from a system like Microsoft Dynamics to generate an invoice or a marketing document, but the majority of documents are still manually generated, and they look very familiar.
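
To make the “combine information to generate an invoice” idea concrete, here is a minimal sketch using the python-docx package; the line items, numbers and file name are invented:

    from docx import Document

    items = [("Consulting", 10, 120.0), ("Support plan", 1, 499.0)]

    doc = Document()
    doc.add_heading("Invoice #1042", level=1)
    table = doc.add_table(rows=1, cols=4)
    hdr = table.rows[0].cells
    hdr[0].text, hdr[1].text, hdr[2].text, hdr[3].text = "Item", "Qty", "Unit price", "Total"
    for name, qty, price in items:
        row = table.add_row().cells
        row[0].text, row[1].text = name, str(qty)
        row[2].text, row[3].text = f"{price:.2f}", f"{qty * price:.2f}"
    doc.add_paragraph(f"Amount due: {sum(q * p for _, q, p in items):.2f} EUR")
    doc.save("invoice.docx")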

The most interesting recent attempt from Microsoft to come up with a new kind of document since OneNote (which is still familiar as the digital version of a paper notebook) is Sway. Sway uses machine learning-driven design and layout to take your information and make it into something that’s a bit like a web site and a bit like a mobile app and a bit like a slide deck (and a lot like what hypertext might have evolved into if prosaic documents hadn’t been more useful for most people). If apps are for building a workflow out of inputting information, Sway is about persuasive ways of conveying information.

Now Microsoft is buying Intentional Software and bringing Charles Simonyi, the creator of the first WYSIWYG word processor, back after 15 years to work under Rajesh Jha, executive vice president of the Office product group and co-author of some significant patents about using XML for documents.

Is reinventing documents, and the Office applications that create them, the next step in Nadella’s promise to reinvent productivity?

Simonyi’s vision

Intentional Programming was Simonyi’s idea for making development easier for non-developers using domain-specific languages that describe all the details of an area of expertise, whether that’s marine engineering or shoe manufacturing. “The Intentional platform can represent domain specific information both at the meta-level (as schemas) and at the content level (as data or rules),” Simonyi notes in his rather vague explanation of the acquisition; it’s all about moving from generic applications that help anyone create a generic document like a letter or an invoice, to much more specific systems that incorporate rules and definitions but are still as easy to use as Word and Excel.

An expert in pensions, for example, could write down the details of a pension contract as mathematical formulas and tables (like a spreadsheet) and test cases in text descriptions (like a Word document), and have that description act as the rules of the pension in code. Instead of writing business applications to do what those expert users need, developers would write a tool that allows the experts to create the applications themselves by specifying what the application should do. In the example above, the pension system would be an end-user tool like a spreadsheet, but it would always follow the rules the expert specified — and it could be updated by the expert without needing the developer to come back and help.
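
A toy version of that idea: the expert-facing part is pure data (a formula parameter plus test cases), and the developer supplies only the engine that evaluates it. The pension formula and figures here are entirely invented:

    # The "expert" edits only this data, never the engine below it.
    rules = {
        "accrual_rate": 0.015,  # pension accrued per year of service
        "benefit": lambda salary, years, r: salary * years * r["accrual_rate"],
    }
    test_cases = [
        {"salary": 50_000, "years": 30, "expected": 22_500.0},
    ]

    # The "engine": run the expert's test cases against the expert's rules.
    for case in test_cases:
        got = rules["benefit"](case["salary"], case["years"], rules)
        assert abs(got - case["expected"]) < 1e-6, (got, case)
    print("all expert-supplied test cases pass")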

It’s a grand vision; when Simonyi wrote a research paper on it in 1995 he called it “the death of computer languages.” In the 15 years since Simonyi left Microsoft (taking the idea but not the code for his research project with him and rewriting it twice), Intentional Software hasn’t shipped a product publicly, despite demonstrating its Domain Workbench for creating domain-specific tools in a variety of languages some years ago. The company has a handful of customers, including consulting firm Capgemini, which worked with Intentional Software to create a pensions “workbench,” though it’s not clear whether that was a pilot or a live project.

But cloud services, machine learning and new approaches to development mean that revisiting ideas that were impossible to develop ten or 15 years ago can yield very different results now. Plus, including them in the Office platform has major advantages.

For Intentional Software, building business tools as a small software company meant competing with every other software development tool on the market, including ‘low-code’ app creation services like Salesforce’s Lightning and Microsoft’s own PowerApps. Intentional’s tools could have more impact built right into the Office applications where billions of people get their work done — and they could make use of an increasing range of add-ins in those Office applications to pull in information from services and route documents into different workflows.

Office as a service

The intelligence and machine learning services that Microsoft is investing in so heavily can help you do research, or let you validate addresses and find out if the person who needs to sign a contract is available before you route a document to them for signing. It’s the first steps of what Nadella promised when he wrote about reinventing productivity, saying that “productivity requires a rich service spanning all your work and work artifacts (documents, communications, and business process events and tasks). It is no longer bound to any single application. It’s a service that leverages the cumulative intelligence and knowledge you and your organization need to drive productivity.”

The focus of Office 365 is becoming less about the individual productivity of creating an Office document, and more about teamwork, or ‘social productivity’, as Nadella puts it. That includes Microsoft Teams, Skype and Cortana, as well as Word, Excel and PowerPoint, where you can collaborate on documents and reuse information from SharePoint.

Today, all these new services still sit alongside the familiar document interface, or take you out of Office onto web sites, rather than changing the experience of working with information. The Intentional platform could give us Office applications that make working with business information — and the rules and policies that govern it — much more like using generic applications to create generic documents. It could deliver on Microsoft’s ideas about moving “from tools that require us to learn how they work, to natural tools that learn to work the way we do.”

It could also build on the ‘business application platform’ Microsoft is creating between Office, Dynamics and PowerApps, where the system knows what entities like customers, suppliers and invoices are, using the Common Data Model (CDM). The CDM connects your business information in a graph that a domain-specific system could take advantage of, so the pension-specific language from the earlier example wouldn’t have to specify all over again what a job, salary and contract are.

As Simonyi puts it, “imagine the power of an ontology consisting of thousands of terms covering most of the common activities that comprise our personal and professional lives ranging from life transitions, education, entertainment, buying and selling.”

Simonyi also talks about building on pen computing (like the Surface Pro) and wall-sized screens (like the Surface Hub), as well as the Common Data Model, and says Intentional has “patterns for distributed interactive documents and for views for a universal surface.”

Bill Gates has continued to be enthusiastic about Simonyi’s methods and ideas and mentions him frequently to current Microsoft executives. It’s not clear whether Nadella is bringing Simonyi back to create new kinds of Office applications that might be a better fit for what we want to do with business information, or whether Simonyi is coming back because he wants to use all the resources that Microsoft already has created that fit in so well with his vision. It’s also not clear whether the ideas of the Intentional platform will fit better at Microsoft than they did in 2002, when many Microsoft divisions decided against adopting them.

Whether or not it succeeds, bringing Intentional to the Office group certainly signals that Nadella is interested in shaking up the familiar Office applications and documents and creating something quite different. It’s yet another unexpected but apparently logical step for his attempt to reinvent productivity, which is really less about productivity apps and much more about what you accomplish relative to the effort you invest. It’s “simply a way of thinking about how well we use our time” as Frank Shaw put it in a 2014 post on the Official Microsoft Blog, or rather more grandly, “it’s the engine of human progress.”

This story, “How Microsoft plans to reinvent business productivity” was originally published by
CIO.

Would you pay $1,600 to replace your sheet music with a tablet?

The content below is taken from the original (Would you pay $1,600 to replace your sheet music with a tablet?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, we told you about the Gvido, a lovely double-screened tablet designed to organize and display sheet music. Created by Japanese company Terrada Music, it allows musicians to turn pages with the tap of a finger. Now, it looks like the Gvido is finally getting ready to ship. It’ll be available on September 20th for a measly $1,600.

So, what are you getting for that $1,600? Two 13.3-inch E Ink displays, 8GB internal storage, a microSD card slot, PDF compatibility and a Wacom pen for annotations. If you’ll recall, Sony also has an E Ink device that’s expensive and serves a very specific purpose. The $1,100 Digital Paper is targeted at TV and movie studios’ HR departments and lets crew members easily read, fill out and submit paperwork. You know, without the hassle of using actual paper.

The Gvido is an interesting idea, and yes turning pages while playing is a hassle. But, that seems like an awful lot of money for a problem that can easily be solved with some Sortkwik and a little manual dexterity. If the tablet could actually listen to you while you play and turn the page for you automatically, that would be something!

Via: The Verge

Source: Gvido

Acorn World Sat 13th – Sunday 14th May

The content below is taken from the original (Acorn World Sat 13th – Sunday 14th May), to continue reading please visit the site. Remember to respect the Author & Copyright.

This new event/exhibition is being organised in Cambridge at the Centre for Computing History by the Acorn and BBC User Group.

It will include machines and software from the whole of Acorn’s history and beyond. It will also include Acorn’s ‘children’ – the companies which Acorn helped to create and grow.

The event runs 10am-5pm and tickets will cost 8 pounds (which also gives you full access to the rest of the Museum, which includes lots of other history, nostalgia and trivia).

Whether your interest is past, present or future, there will be plenty to see…

Museum website

Microsoft To-Do app Tips & Tricks

The content below is taken from the original (Microsoft To-Do app Tips & Tricks), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft To-Do is already generating buzz in the mobile application world, and you might well be one of its users. We recently saw how to use the Microsoft To-Do app and also discussed troubleshooting common To-Do problems. In this post, we cover some Microsoft To-Do tips and tricks that might help you if you are going to use the app regularly.

Microsoft To-Do app Tips & Tricks

Using ‘Ok Google’ with To-Do

Google Assistant was one of the latest inclusions in Android Nougat, and the Microsoft To-Do app supports voice commands that are compatible with the assistant. You can simply start the assistant by saying “Ok Google”, then say “Take a Note”. The assistant will prompt you if you have more than one note-taking application; once you’ve selected Microsoft To-Do from the list, you are good to go. Now say whatever you want to save as a note. All notes go into the To-Do list by default, and this setting cannot be changed for now.

Using Siri with To-Do

The prerequisite for using Siri with To-Do is that you must have registered your Microsoft account on your iPhone. If you have a work account, you need to register your Exchange account instead. Now fire up your iPhone’s settings, tap Reminders > Default List and select the desired list. You can now use voice commands to add reminders or notes to To-Do. Just say ‘Hey Siri, remind me to…’ and your reminder will be synced with To-Do.

Using 3D Touch with To-Do

Microsoft To-Do supports the iPhone’s much-anticipated pressure-sensitive touch feature. Press the To-Do icon to get a list of three options: you can create a new to-do, view ‘My Day’ or search your to-dos right from that menu.

Sharing anything to To-Do on Android

Imagine you receive an important message or an email and want to be reminded about it. Just select the message or text, hit the share button and select To-Do from the app list. You will be taken to the To-Do application, where you can customize further settings. You can easily add any reminder or to-do using this sharing technique on Android.

Add Icons to List name

While using the app, I noticed that I was not able to add icons to lists. There is a workaround: rename the list and add the desired emoji at the beginning of its name. The emoji will be treated as the list icon. The icons look cool and can help you easily identify a list. Basic emojis are supported, but some complex ones are not yet.

Magic Number

A perfect schedule is one where you are not overbooked and carry out an optimal number of tasks each day. Magic Number works the same way: it predicts the optimal number of tasks you should add to your day to be as productive as possible. Doing less, but doing what you’ve decided, creates a sense of accomplishment rather than a demotivating thought.

Prioritize yourself

Create a new list where you can add some simple, offbeat to-dos like listening to music. These entertaining tasks can help you stay productive and refreshed for the whole day. Although the app doesn’t offer any particular option for creating such lists, you can add recurring tasks to the ‘My Day’ section. This way you keep yourself entertained even in a hectic schedule. You can also prioritize tasks by adding due dates; the app does not support priority levels for now.

Productivity and To-Do

Microsoft To-Do is meant to be a productivity-focused application. That is the reason behind the My Day section: add tasks to it every morning so that you have a complete sense of what you are going to do throughout the day. Planning your day beforehand has always proved to be a successful way of accomplishing tasks. Hit the bulb icon to get more productivity suggestions.

Accomplishments and Suggestions

Hit the bulb icon to see which tasks you accomplished yesterday and which are overdue. Based upon your usage, the app can suggest tasks that you should consider doing today. Before updating your My Day section in the morning, it is worth having a look at what the app has for you. The more you use the application, the more precise its suggestions will become.

Adding Due Dates

A great way to stay productive and keep track of your tasks is to add due dates. Adding a deadline to a task makes you mentally more focused on it, so you complete it on time. Also, if you add a deadline, the app can remind you about the task and show it in suggestions. The application automatically shows suggestions based upon upcoming and overdue tasks.

Recurring Tasks

Want to achieve a goal and devote some time to it daily? Simply create a task and set it to repeat daily. Now all you need to do is add that task to ‘My Day’ and you will be reminded about it each day; the app can quietly help you achieve the goal. I’ve added swimming and meditation to my schedule so that I can take out some time for my health from a hectic college schedule.

These were some tips and tricks that can help you get more out of the application. And yes, don’t forget to check your magic number and plan your schedule accordingly. All the changes you make will be synced across all of your devices.

Tempow turns your dumb Bluetooth speakers into a connected sound system

The content below is taken from the original (Tempow turns your dumb Bluetooth speakers into a connected sound system), to continue reading please visit the site. Remember to respect the Author & Copyright.

Meet Tempow, a French startup that can make your Bluetooth speakers more versatile. The company has been working on a new implementation of the Bluetooth protocol in order to let you play music from your phone on multiple speakers and headphones at once.

Bluetooth speakers have become a common gift and a hit item in consumer electronics stores. Most people now have multiple Bluetooth speakers and headphones at home. While it’s nice that you don’t have to use cables anymore, it can be quite frustrating that you can only play music on one device at a time.

Many manufacturers avoid this issue by relying on Wi-Fi and different protocols, such as Spotify Connect. This is also Sonos’ main selling point. But most speakers are still Bluetooth only.

Tempow is replacing the Bluetooth driver on your phone so that you can send music to multiple Bluetooth devices at once. It works with standard Bluetooth chipsets and all Bluetooth audio devices out there.

It only works on Android as it requires some low-level modifications. As a side note, Apple has developed its own Bluetooth chipset for the AirPods, and the company is probably going to reuse this proprietary chip in all its devices.

After pairing the Bluetooth devices with your phone, you can activate them one by one, enable stereo by differentiating left and right speakers and adjust the volume individually. In other words, it works pretty much like a Sonos system.

Tempow thinks its technology could be valuable for smartphone manufacturers. That’s why the company has been negotiating with them to license its technology.

Other chipset companies have been working on similar stuff. You may have seen that Samsung’s Galaxy S8 lets you play music on two Bluetooth devices at once. Samsung isn’t using Tempow’s technology.

That a manufacturer as big as Samsung is building similar features suggests Tempow is onto something. Let’s see if the startup can sign deals with smartphone makers and ship its technology in the coming months. This could be a lucrative business model, as I could see smartphone companies paying a small amount for each device they sell with Tempow.

The startup is also building a competent team when it comes to all things Bluetooth. As smartphones become the central element of your digital life, Bluetooth is going to be increasingly important in the coming years.

The Great Data Center Headache – the Internet of Things

The content below is taken from the original (The Great Data Center Headache – the Internet of Things), to continue reading please visit the site. Remember to respect the Author & Copyright.

If data center managers thought virtualization and cloud computing were challenging in terms of big shifts in architecture, they better get ready for the next big thing. The Internet of Things is likely to give you far more headaches in terms of volume of data to store, devices to connect with and systems to integrate.

Long-term data center managers have certainly borne witness to immense change in recent decades. From mainframes to minicomputers and client/server, then virtualization and cloud computing. The pattern seems to be as follows: at first, their entire mode of operation is challenged and altered. After a few hectic years, life calms down, only for yet another wave of innovation to sweep the world of the data center off its axis.

And here we go again with the Internet of Things (IoT). The general idea is that sensors and microchips are placed everywhere, and the data they produce is subjected to advanced analytics to give businesses a competitive edge and to provide the data center with greater capabilities in terms of infrastructure management and security.

“The Internet of Things means everything will have an IP address,” said Jim Davis, former executive vice president and chief marketing officer at analytics provider SAS, now with Informatica.

According to Vin Sharma, director of machine learning solutions in the Data Center group at Intel, the future could well include more distributed data centers, perhaps a network made up of huge centralized hubs as well as much smaller, more localized data centers, or even a completely different infrastructure model. More than likely, some data centers will fade from memory as their value proposition is eroded by cloud-based operations. Others will have to transform themselves in order to survive.

IoT Implications

While the interest and buzz around IoT has grown steadily in recent years, the promise continues to move closer to reality. According to International Data Corp (IDC), a transformation is underway that will see the worldwide market for IoT solutions grow to $7.1 trillion in 2020.

The best way to comprehend IoT is to look at it in the context of the PCs, servers and phones the data center currently has to manage. IDC numbers showed that just 1.5 billion smartphones, tablets, desktops and laptops that could connect people to the internet were sold in 2013. The total number of connected devices is expected to reach an alarming 32 billion by 2020.

The vast majority of these devices operate without human interaction. In a data center, for example, many high-end servers and storage devices automatically gather sensor data and report it to the manufacturer. The manufacturer can then remotely adjust settings for better performance or send a technician out to replace a failing component before it crashes.

General Electric (GE) is very much at the forefront of the IoT race. Its product portfolio encompasses a vast number of industrial and consumer devices that will form the backbone of the Internet of Things. But the company understands that it can’t preach the benefits without itself becoming an example of its virtues.

As a result, Rachel Trombetta, software director at GE Digital, expects many data center functions to disappear into the cloud. But she acknowledged that some of its manufacturing activities would have to remain out of the cloud and continue to be run from more traditional data centers.

“The case for public clouds is not yet compelling enough for us to get rid of our own data centers,” said Trombetta. “We still need better security and Infrastructure-as-a-Service (IaaS).”

More likely, GE will adopt a hybrid model where it creates two clouds—one for itself as an enterprise and one for customer data. Certain functions will be trusted to the public cloud while others will be hosted in GE’s own private cloud—underpinned by company owned and operated data centers.

“Every business within GE is currently looking at all its apps to discover which ones we actually need, and which ones can be moved to the cloud,” said Trombetta. “We will probably see hybrid data centers operations with far more IaaS and cloud applications taking over data center workloads where it makes sense.”

Staring into the crystal ball, Trombetta envisions a world where instead of just spinning up virtual servers on demand, the company will be able to spin up whole manufacturing execution systems and data centers on demand for internal business units as well as customers.

“Why take six months to build your own data center if you can have someone else get one online at the push of a button at a fraction of the cost?” asked Trombetta.

Storage Headache

Much has been made about the number of devices that will make up the Internet of Things. But that only tells half the tale. It’s the volume of generated data that is probably the scariest part for the data center.

Here are some examples: oil rigs generate 8 TB of data per day. The average flight wirelessly transmits half a TB of data per engine and an additional 200 TB is downloaded after landing. The self-driving Google car produces 1 GB per second.
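
Scaling just the connected-car figure up shows the problem; the hours-driven figure below is an illustrative assumption:

    # From 1 GB/s per car to fleet-level volumes.
    gb_per_second = 1
    hours_driven_per_day = 2
    tb_per_car_per_day = gb_per_second * 3600 * hours_driven_per_day / 1000

    print(f"{tb_per_car_per_day:.1f} TB per car per day")  # 7.2 TB
    print(f"{tb_per_car_per_day * 1_000_000 / 1000:,.0f} PB/day for a million cars")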

“Big data is the fuel of the connected vehicle,” said Andreas Mai, director of Smart Connected Vehicles at Cisco. “It is analytics which gives you the true value.”

Google has already been driving autonomous networked vehicles around the U.S. for some time. Scale that up to every vehicle, and the traffic jams would shift from the roadways to the airwaves.

It is unrealistic to expect satellite, cellular or cloud-based networks to be able to cope. The volume will not only swamp existing data centers, it is beyond the ability of any currently conceived technology to be able to store that information. So where is it going to go?

“It isn’t possible for cars to receive external impulses from traffic lights, mapping programs and other vehicles if it all has to go via the cloud,” said Mai. “So the IoT will require a lot more compute power on the edge of the network.”

Mai said it will take additional networking from what he termed “the fog.” These are local networking nodes that supplement cloud and land-line systems, perhaps only operating in the vicinity of one junction.

The good news is that only certain data has lasting value. Most of it will likely be of very temporary interest and have little long-term value.

“A lot of the data being generated by IoT will be short lived,” said Greg Schulz, an analyst with StorageIO Group. “It will have to be analyzed, summarized and then tossed aside after a period of time.”
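
That summarize-then-discard pattern is easy to sketch: keep rolling statistics per window and throw the raw readings away. A minimal illustration, with simulated sensor data:

    import random
    from statistics import mean

    def summarize_stream(readings, window=1000):
        buffer = []
        for value in readings:
            buffer.append(value)
            if len(buffer) == window:
                yield {"min": min(buffer), "max": max(buffer), "mean": mean(buffer)}
                buffer.clear()  # the raw readings are not retained

    sensor = (random.gauss(20.0, 2.0) for _ in range(5000))  # fake temperature feed
    summaries = list(summarize_stream(sensor))
    print(len(summaries), summaries[0])  # 5 window summaries survive, not 5,000 readings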

So there is hope that beleaguered data center managers won’t be called upon to find enough storage capacity to house it all.

Drew Robb is a veteran information technology freelance writer based in Florida.

Solution guide: Best practices for migrating Virtual Machines

The content below is taken from the original (Solution guide: Best practices for migrating Virtual Machines), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Peter-Mark Verwoerd, Solutions Architect, Google Cloud

Migrating to the cloud can be a challenging project for any sized company. There are many different considerations in migrations, but one of the core tasks is to migrate virtual machines. Due to the variety of hardware, hypervisors and operating systems in use today, this can be a complex and daunting prospect.

The customer engineering team at Google Cloud has helped a number of customers migrate to GCP – customers like Spotify and Evernote. Based on those experiences and our understanding of how our own cloud works, we’ve released an article describing our recommended best practices on migrating virtual machines.

One of the tools that can help customers move to Google Cloud is CloudEndure. CloudEndure powers the Google VM Migration Service, and can simplify the process of moving virtual machines. CloudEndure joined us in this article with practical case studies of migrations that they’ve done for various customers.

We hope you find this article helpful while migrating to the cloud. If you decide to use the Migration Service, take a look at our tutorial to help guide you through the process.

Enabling Azure CDN from Azure web app and storage account portal extension

The content below is taken from the original (Enabling Azure CDN from Azure web app and storage account portal extension), to continue reading please visit the site. Remember to respect the Author & Copyright.

Enabling CDN for your Azure workflows is easier than ever with this new integration. You can now enable and manage CDN for your Azure web app or storage account without leaving the portal.

When you have a website, a storage account serving downloads, or a streaming endpoint for a media event, you may want to add a CDN to your solution for scalability and better performance. To make CDN enablement easy for these Azure workflows, the Azure portal CDN extension lets you choose an "origin type" when you create a CDN endpoint, listing all the available Azure web apps, storage accounts, and cloud services within your subscription. To deepen the integration, we started with Azure Media Services: from the Media Services portal extension, you can enable CDN for your streaming endpoint with one click. We have now extended this integration to web apps and storage accounts.

Go to the web app service or storage account extension in the Azure portal, select your resource, then search for "CDN" in the menu and enable CDN. Very little information is required. After enabling CDN, click the endpoint to manage its configuration directly from the extension.
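The same endpoint can also be created by scripting against the Azure Resource Manager REST API rather than the portal. The sketch below is a rough illustration only: all resource names are placeholders, and the api-version and request shape are assumptions to verify against the Azure CDN documentation before use.

import requests

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"   # placeholder
RESOURCE_GROUP = "my-rg"                                 # placeholder
PROFILE = "my-cdn-profile"                               # an existing CDN profile
ENDPOINT = "my-endpoint"                                 # placeholder
TOKEN = "<bearer token from Azure AD>"                  # placeholder

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Cdn/profiles/{PROFILE}/endpoints/{ENDPOINT}"
    "?api-version=2016-10-02"                            # assumed version
)

body = {
    "location": "WestUs",
    "properties": {
        # Point the endpoint at the web app or storage account hostname.
        "origins": [
            {"name": "webapp-origin",
             "properties": {"hostName": "mysite.azurewebsites.net"}}
        ]
    },
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())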

More information

Is there a feature you’d like to see in Azure CDN? Give us feedback!

The Raspberry Pi Becomes a SCSI Device

The content below is taken from the original (The Raspberry Pi Becomes a SCSI Device), to continue reading please visit the site. Remember to respect the Author & Copyright.

SCSI devices were found in hundreds of different models of computers from the 80s, from Sun boxes to cute little Macs. These hard drives and CDROMs are slowly dying, and with them an entire generation of technology goes down the drain. Currently, the best method of preserving computers with SCSI drives is the SCSI2SD device designed by [Michael McMaster]. While this device does exactly what it says it’ll do (turn an SD card into a drive on a SCSI chain), it’s fairly expensive at $70.

[Gokhan] has a better, cheaper solution. It’s a SCSI device emulator for the Raspberry Pi. It turns a Raspberry Pi into a SCSI hard drive, magneto-optical drive, CDROM, or an Ethernet adapter using only some glue logic and a bit of code.

As far as the hardware goes, this is a pretty simple build. The 40-pin GPIO connector on the Pi is attached to the 50-pin SCSI connector through a few 74LS641 transceivers with a few resistor packs for pullups and pulldowns. The software allows for virtual disk devices – either a hard drive, magneto-optical drive, or a CDROM – to be presented from the Raspberry Pi. There’s also the option of putting Ethernet on the SCSI chain, a helpful addition since Ethernet to SCSI conversion devices are usually rare and expensive.

Officially, [Gokhan] built this SCSI hard drive emulator for the X68000 computer, developed by Sharp in the late 80s. While these are popular machines among retrocomputing aficionados in Japan, they’re exceptionally rare elsewhere, although [Dave Jones] got his mitts on one for a teardown. SCSI was extraordinarily popular for computers from the 70s through the 90s, though, and since SCSI is a standard, this build should work with all of them.

If your retrocomputer doesn’t need a SCSI drive, and you’re feeling left out of the drive-emulation club, the good news is there’s a Raspberry Pi solution for that, too: this Hackaday Prize entry turns a Pi into an IDE hard drive.

McDonald’s Frork May be The Greatest Fast Food Invention of All Time

The content below is taken from the original (McDonald’s Frork May be The Greatest Fast Food Invention of All Time), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Raspberry Pi As An IR To WiFi Bridge

The content below is taken from the original (The Raspberry Pi As An IR To WiFi Bridge), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Jason] has a Sonos home sound system, with a bunch of speakers connected via WiFi. [Jason] also has a universal remote designed and manufactured in a universe where WiFi doesn’t exist. The Sonos cannot be controlled via infrared. There’s an obvious problem here, but luckily tiny Linux computers with WiFi cost $10, and IR receivers cost $2. The result is an IR-to-WiFi bridge to control all those ‘smart’ home audio solutions.

The only thing [Jason] needed to control his Sonos from a universal remote is an IR receiver and a Raspberry Pi Zero W. The circuit is simple: connect the power and ground of the IR receiver to the Pi, and plug the receiver’s third pin into a GPIO pin. The new, fancy official Raspberry Pi Zero enclosure is perfect for this build, with the receiver’s little IR-transparent epoxy package poking out of the hole designed for the Pi camera.

For the software, [Jason] turned to Node JS and LIRC, a piece of software that decodes IR signals. With the GPIO pin defined, [Jason] set up the driver and used the Sonos HTTP API to send commands to his audio unit. There’s a lot of futzing about with text files in this build, but the results speak for themselves: [Jason] can now use a universal remote with everything in his home stereo.
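The build itself uses Node JS, but the underlying idea is small enough to sketch in a few lines of Python: read decoded button presses from lircd’s Unix socket and translate them into Sonos HTTP API calls. Everything here (the socket path, the button names, and the /room/command URL scheme of the node-sonos-http-api convention) is an assumption for illustration, not [Jason]’s actual code.

import socket
import urllib.request

LIRCD_SOCKET = "/var/run/lirc/lircd"   # default lircd socket path (assumption)
SONOS_API = "http://localhost:5005"    # node-sonos-http-api default (assumption)

# Map remote buttons to Sonos HTTP API paths; room and button names are made up.
ACTIONS = {
    "KEY_VOLUMEUP":   "/Living%20Room/volume/+2",
    "KEY_VOLUMEDOWN": "/Living%20Room/volume/-2",
    "KEY_PLAY":       "/Living%20Room/playpause",
}

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(LIRCD_SOCKET)

buf = b""
while True:
    data = sock.recv(256)
    if not data:
        break                      # lircd went away
    buf += data
    while b"\n" in buf:
        line, buf = buf.split(b"\n", 1)
        # lircd emits lines of the form: "<code> <repeat> <button> <remote>"
        parts = line.decode().split()
        if len(parts) >= 3 and parts[1] == "00":   # ignore key-repeat events
            path = ACTIONS.get(parts[2])
            if path:
                urllib.request.urlopen(SONOS_API + path).read()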

Deep dive on AWS vs. Azure vs. Google cloud storage options

The content below is taken from the original (Deep dive on AWS vs. Azure vs. Google cloud storage options), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the most common use cases for public IaaS cloud computing is storage, and for good reason: instead of buying and managing hardware, users simply upload data to the cloud and pay for how much they put there.

It sounds simple. But in reality, the world of cloud storage has many facets to consider. Each of the three major public IaaS cloud vendors – Amazon Web Services, Microsoft Azure and Google Cloud Platform – has a variety of storage options and, in some cases, complicated pricing schemes.

According to Brian Adler, director of enterprise architecture at cloud management provider RightScale, who recently ran a webinar comparing cloud storage options, no one vendor is clearly better than the others. “Is anyone in the lead? It really depends on what you’re using (the cloud) for,” he says. Each provider has its own strengths and weaknesses depending on the specific use case, he says. Below are three of the most common cloud storage use cases and how the vendors stack up.

Block Storage

Block storage is persistent disk storage used in conjunction with cloud-based virtual machines. Each of the providers breaks its block storage offerings into two categories: traditional magnetic spinning hard-disk drives (HDDs), or newer solid-state drives (SSDs), which are generally more expensive but perform better. Customers can also pay a premium for a guaranteed number of input/output operations per second (IOPS), which is basically an indication of how fast the storage can write new information and read back what is already stored.

Amazon’s product is named Elastic Block Store (EBS) and it comes in three main flavors: Throughput Optimized HDD, a traditional magnetic, spinning-disk offering; General Purpose SSD, next-generation drives; and Provisioned IOPS SSD, which comes with a guaranteed rate of reads and writes to the data.

Azure’s block storage offering is called Managed Disks and comes in standard or premium with the latter based on SSDs.

Google’s version is named Persistent Disks (PDs), which come in a standard or SSD option.

AWS and Google offer a 99.95% availability service-level agreement (SLA) for their block storage services, while Azure offers 99.99%.

One of the most important factors to consider when buying block storage is how fast you need access to the data stored on the SSD disk. For that, the vendors offer different guaranteed rates of IOPS. Google is in the lead here: the company offers 40,000 IOPS for reads and 30,000 for writes to its disks. AWS’s General Purpose SSD offers 10,000 IOPS, while its Provisioned IOPS offering goes up to 20,000 IOPS per volume, with a maximum of 65,000 IOPS per instance. Azure offers 5,000 IOPS.

Google not only has the highest IOPS, but also gives customers the most choice in the size of block storage volumes. For more traditional hard-drive-based storage, Google offers volume sizes ranging from 1GB to 64TB, AWS offers volumes between 500GB and 16TB, and Azure offers between 1GB and 1TB. As with SSDs, Google offers the highest IOPS per HDD volume, at 3,000 for reads and 15,000 for writes; AWS and Azure max out at 500 IOPS per volume. Maximum throughput ranges from 60 MBps on Azure, through 180 MBps for reads and 120 MBps for writes on Google, to 500 MBps on AWS.

As for pricing, it gets a bit complicated (all prices are per GB/month), but for HDD, AWS starts at $0.045, Google at $0.04 and Azure at $0.03.

SSD pricing starts at $0.10 in AWS, $0.17 in Google and between $0.12 and $0.14 in Azure, depending on the size of the disk.
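Turning those per-GB rates into a monthly bill is straightforward multiplication. A quick sketch using the quoted SSD list prices (a real bill adds IOPS charges, snapshots and transfer):

# Monthly cost of a 1 TB (1024 GB) SSD block volume at the quoted list prices.
SSD_PRICE_PER_GB = {
    "AWS (General Purpose SSD)": 0.10,
    "Google (SSD PD)":           0.17,
    "Azure (Premium, low end)":  0.12,   # Azure ranges $0.12-$0.14 by disk size
}

SIZE_GB = 1024
for provider, price in SSD_PRICE_PER_GB.items():
    print(f"{provider}: ${price * SIZE_GB:,.2f}/month")
# -> AWS ~$102, Google ~$174, Azure ~$123 at the low end of its range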

In a pricing analysis, RightScale found that this structure generally gives Azure the best price/performance ratio for block storage. But for workloads that require higher IOPS, Google becomes the more cost-effective option.

There are caveats when using provisioned IOPS, says Kim Weins, vice president of marketing at RightScale. In AWS, a guaranteed amount of IOPS costs a premium. “You pay a higher cost per GB, but you also pay for the required IOPs on top of it, which drives the cost up higher,” Weins says. “Be smart about choosing your provisioned IOPs level because you are going to be paying for it.”

Weins adds that RightScale has seen customers pay for provisioned IOPS and then forget to deprovision the EBS volume when they are done using it, wasting money.
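Weins’ warning is easy to quantify: provisioned-IOPS volumes carry a per-GB charge plus a separate per-provisioned-IOPS charge. The rates below are hypothetical placeholders, not AWS list prices; the point is the shape of the bill.

# The two-part provisioned-IOPS bill Weins describes: per-GB plus per-IOPS.
PRICE_PER_GB = 0.125      # $/GB-month (hypothetical rate)
PRICE_PER_IOPS = 0.065    # $/provisioned-IOPS-month (hypothetical rate)

def monthly_cost(size_gb: int, provisioned_iops: int) -> float:
    return size_gb * PRICE_PER_GB + provisioned_iops * PRICE_PER_IOPS

# A 500 GB volume provisioned at 10,000 IOPS: the IOPS charge dominates.
print(f"${monthly_cost(500, 10_000):,.2f}/month")
# -> $712.50/month, of which $650.00 is the IOPS charge; hence the advice to
#    choose the provisioned level carefully and deprovision when done.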

Object Storage

Got a file that you need to put in the cloud? Object storage is the service for you. Again, the cloud providers have different tiers of storage, classified by how often the customer expects to access the data. “Hot” storage needs to be almost instantaneously accessible; “cool” storage is accessed more infrequently; and “cold” storage is archival material that is rarely accessed. The colder the storage, the less expensive it is.

AWS’s primary object storage platform is Simple Storage Service (S3); it offers S3 Infrequent Access for cool storage and Glacier for cold storage. Google has Google Cloud Storage, with GCS Nearline for cool storage and GCS Coldline for archival. Azure has only hot and cool options, via Azure Hot and Cool Storage Blobs, so customers must use the cool storage for archival data. AWS and Google each have a 5TB object size limit, while Azure has a 500TB per-account limit. AWS and Google each publicize 99.999999999% durability for objects stored in their clouds; that means if you store 10,000 objects, on average one file will be lost every 10 million years, AWS says. The point is that these systems are designed to be ultra-durable. Azure does not publish durability figures.
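On AWS, for example, the tier is simply a property of the object, or of a lifecycle rule on the bucket. A minimal boto3 sketch of the idea (bucket and key names are placeholders; this is an illustration, not from the article):

import boto3

s3 = boto3.client("s3")

# "Hot" object: default S3 Standard.
s3.put_object(Bucket="my-bucket", Key="hot/report.csv", Body=b"...")

# "Cool" object: S3 Infrequent Access, chosen per object at upload time.
s3.put_object(Bucket="my-bucket", Key="cool/archive-2016.csv",
              Body=b"...", StorageClass="STANDARD_IA")

# "Cold" data reaches Glacier via a lifecycle rule rather than at upload time.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-objects",
            "Filter": {"Prefix": "cool/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)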

Pricing on object storage is slightly more complicated because customers can choose to host their data in a single region or, for a slightly increased cost, back it up across multiple regions, a best practice that ensures you still have access to your data if a region suffers an outage.

In AWS, for example, S3 costs $0.023 (all prices are per GB/month); replicating data across multiple regions costs twice as much, $0.046, plus a $0.01-per-GB transfer fee. AWS’s cool storage service, S3 Infrequent Access (IA), is $0.0125, and its cold storage/archival service Glacier costs $0.004.

Google has the most analogous offerings: its single-region storage costs $0.02, while multi-region is $0.026, with free transfer of data. The company’s cool storage platform, Nearline, is $0.01, and the cold/archival product, Coldline, is $0.007. Google says data retrieval from Coldline is faster (within milliseconds) than from Glacier, which AWS says can take anywhere from minutes to hours.

Azure offers single-region storage for $0.0184 and what it calls “Globally Redundant Storage” for $0.046, but the replicated copy is read-only: you cannot write changes to it, and a writable option costs more. Azure’s cool storage, named Cool Blob Storage, is $0.01. Azure does not yet offer a cold or archival storage platform, so customers must use Cool Blob Storage for that use case.

Based on these pricing scenarios, Google has the least expensive pure object storage costs, plus the free transfer of data, RightScale found. AWS, however, beats Google on cold storage costs.

File Storage

An emerging use case is cloud-based file storage. Think of this as a cloud-based version of a more traditional Network File System (NFS): users can mount the file system from any device or VM connected to it, then read and write files. This is a relatively nascent cloud storage use case, so the offerings are not yet as full-featured as block and object storage, Adler says.

AWS’s offering in this category is Elastic File System (EFS), which emerged from beta in June 2016. It allows users to mount the file system from AWS Elastic Compute Cloud (EC2) virtual machines within a virtual private cloud (VPC), or from on-premises servers using AWS Direct Connect. There is no size limit; it scales automatically based on need and offers 50 MBps of throughput per TB of storage, and customers can pay for up to 100 MBps. It starts at $0.30/GB/month.

Azure, meanwhile, offers Azure File Storage, which is similar in nature but has a capacity limit of 5TB per file share and 500TB per account, and it requires manual scaling. It offers 60 MBps throughput for reading files.

Google does not have a native file storage offering; instead it offers the open source FUSE adapter, which allows users to mount Google Cloud Storage buckets as a file system. Google claims this provides the highest throughput of the three providers, with 180 MBps on reads and 120 MBps on writes. But Adler says that in his experience the FUSE adapter is not as well integrated into Google’s cloud platform as the other two offerings, which can lead to a frustrating user experience. Adler also notes that AWS’s EFS does not have a native backup solution, while Azure’s offering does; AWS encourages EFS users to rely on third-party backup tools at this point.

Azure and Google offer lower prices for their file storage systems than AWS: Azure is $0.08 per GB/month and Google is $0.20, but Adler says those costs do not take into account any replication or transfer charges. While AWS’s base price may seem higher, once you take into account everything it factors in related to scaling, it could be a wash between the three providers.

Create Your Own Unroll.me With a Google Script

The content below is taken from the original (Create Your Own Unroll.me With a Google Script), to continue reading please visit the site. Remember to respect the Author & Copyright.

Unroll.me’s primary feature was not just one-click unsubscribing but also the ability to “roll up” all the newsletters you do want; still, sometimes you have to make some sacrifices for the sake of privacy. Over at Digital Inspiration, Amit Agarwal put together a Google Script for bulk unsubscribing from newsletters. The idea is pretty straightforward: first, you go through your inbox and mark any newsletter you no longer want with the label “Unsubscribe.” The Google Script then parses the content of those bulk emails, finds the unsubscribe link, and automatically follows it.
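The script itself is Google Apps Script, but the core trick (pull the unsubscribe URL out of a bulk email and follow it) fits in a few lines of any language. A simplified Python sketch of just that step, not Agarwal’s actual code:

import re
import urllib.request

def unsubscribe(html_body: str) -> bool:
    """Follow the first link whose URL or anchor text mentions 'unsubscribe'."""
    links = re.findall(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>',
                       html_body, re.IGNORECASE | re.DOTALL)
    for url, text in links:
        if "unsubscribe" in url.lower() or "unsubscribe" in text.lower():
            urllib.request.urlopen(url)   # many senders unsubscribe on a GET
            return True
    return False   # nothing found; a real script might try List-Unsubscribe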

With this script, you do not need to grant any third-party service access to your Gmail account; it uses only the Google account and services you already have. Plus, you can apply the “Unsubscribe” label from any third-party email client, the Gmail app, or the normal web app. Sure, it’s an extra, manual step, but it’s still easier than clicking every single link individually, and you don’t have to worry about your data being sold off to a third party. As with any Google Script, the installation takes a couple of steps:

  1. Copy the Gmail Unsubscriber sheet to your Google Drive.
  2. In that Google Sheet, click the Gmail Unsubscriber link in the top bar then click Configure.
  3. Allow the script to access your Gmail account. Don’t worry: the script doesn’t store any data anywhere outside your own Google account.
  4. Save the configuration with the default name.
  5. Head over to your Gmail inbox and create a label called Unsubscribe. Select any emails you want to unsubscribe from, then apply the Unsubscribe label to them.

It might take 10-15 minutes, but eventually you’ll start to see the emails you’ve marked appear in a list inside the Google Sheet, alongside a note saying whether or not the script was successful. Once the script is set up, you no longer need to look at it unless you’re curious. Going forward, you just need to label an email Unsubscribe and the rest happens behind the scenes. You can find more information about exactly how the Google Script works, plus some more advanced options, over at Digital Inspiration.

How to Unsubscribe from Mailing Lists and Junk Newsletters in Gmail | Digital Inspiration

Wikipedia co-founder launches Wikitribune to fight fake news

The content below is taken from the original (Wikipedia co-founder launches Wikitribune to fight fake news), to continue reading please visit the site. Remember to respect the Author & Copyright.

Wikipedia co-founder Jimmy Wales hopes to tackle fake news with a journalism outfit of his own. Wikitribune will be free to access and use crowdfunding to hire experienced reporters. They’ll work alongside volunteers who can sub-edit articles, fact-check stories and suggest new topics for the site to pursue. "This will be the first time that professional and citizen journalists will work side-by-side as equals writing stories as they happen, editing them as they develop, and at all times backed by a community checking and rechecking all of the facts," Wales said.

Wikitribune’s existence (and success) will depend on donations from people who believe in its mission and the journalism it’s producing. The site will cover traditional news beats, such as UK and international politics, as well as science, technology and specialist subjects chosen by subscribers. "If you take as an example the bitcoin community," Wales said, "they’ve a very active and obsessed community. There’s a lot of news that comes out in the field, and I think they’d love to be able to raise money to hire a journalist and put them on the bitcoin/blockchain beat."

Wikitribune articles will include detailed sourcing and link out to full transcripts, video and audio recordings of interviews. Reader submissions will also need to be approved by a full-time editor before they appear on the site. These two mechanisms will, Wales hopes, create a news culture that is both transparent and accurate, leveraging the expertise of reporters and the news gathering, fact-checking scale of the crowd. Controlling such a large group of people will be tricky, however. Reddit users, for instance, wrongly identified one of the Boston Marathon bombers in 2013.

Wales is optimistic, however. Few people thought that Wikipedia would work, and while the site is far from perfect, it has been able to avoid the false news problem, at least for the most part. The hope is that the same culture generated by Wikipedia’s editors will blossom inside Wikitribune. Those values will be helped in part by the site’s business model — with no advertiser funding, the site can avoid the click-chasing and click-bait stories that have plagued other sites and reduced reader trust.

Plenty of technology companies are trying to tackle the ‘fake news’ epidemic. YouTube is holding workshops in the UK to teach young people how to spot bogus stories. Google is showing fact-checkers such as PolitiFact and Snopes more prominently in search results. Facebook bought a bunch of full-page newspaper ads in Germany with basic reader fact-checking tips. Despite these efforts, the problem persists. Politicians in the UK are holding an inquiry into the rise of fake news, and British newspapers have requested a deeper investigation into the role that Facebook and Google play in its distribution.

Clearly, Wales thinks the answer is a new, better-funded and creatively organised newsroom. Joshua Benton, director of Harvard University’s Nieman Journalism Lab, isn’t so sure. "There are a variety of people who – if he does this right – will view it as a trusted platform," he told the BBC. "But another 10 to 20 people are not going to ‘fix the news’."

Via: The Guardian, The BBC

Source: Wikitribune

Open-spec networking Mini-ITX has 1, 2.5, and 10 GbE ports

The content below is taken from the original (Open-spec networking Mini-ITX has 1, 2.5, and 10 GbE ports), to continue reading please visit the site. Remember to respect the Author & Copyright.

SolidRun’s “Marvell MacchiatoBIN” is a $349 Mini-ITX networking SBC that runs Linux 4.4 on Marvell’s quad-core Cortex-A72 Armada 8040 and supports ODP, OFP, and NFV. SolidRun, which is known for its NXP i.MX6-based HummingBoard SBCs and Marvell Armada 38x-based ClearFog Pro and scaled-down ClearFog Base networking boards, has spun a $349 (and […]

Azure Billing Reader role and preview of Invoice API

The content below is taken from the original (Azure Billing Reader role and preview of Invoice API), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, we are pleased to announce the addition of a new built-in role: Billing Reader. The new Billing Reader role allows you to delegate access to billing information alone, with no access to services such as VMs and storage accounts. Users in this role can perform Azure billing management operations such as viewing subscription scoped cost reporting data and downloading invoices. We are also releasing a public preview of a new billing API that allows you to programmatically download a subscription’s billing invoices.

Allowing additional users to download invoices

Today, only the account administrator for a subscription can download and view invoices. Now the account administrator can allow users in subscription scoped roles (Owner, Contributor, Reader, User Access Administrator, Billing Reader, Service Administrator and Co-Administrator) to view invoices. Because the invoice contains personal information, the account administrator is required to explicitly enable this access. The steps to allow users in subscription scoped roles to view invoices are below:

  1. Log in to the Azure Management Portal with account administrator credentials.

  2. Select the subscription for which you want to allow additional users to download invoices.

  3. From the subscription blade, select the Invoices tab within the billing section, then click the Access to invoices command. The feature to allow additional users to download invoices is in preview, so not all invoices may be available; the account administrator will have access to all invoices.

  4. Allow subscription scoped roles to download invoices.

How to add users to the Billing Reader role

Users in administrative roles (Owner, User Access Administrator, Service Administrator and Co-administrator) can delegate Billing Reader access to other users. Users in the Billing Reader role can view subscription scoped billing information such as usage and invoices. Note that billing information is currently viewable only for non-enterprise subscriptions; support for enterprise subscriptions will be available in the future.

  1. Select the subscription for which you want to delegate Billing Reader access
  2. From the subscription blade, select Access Control (IAM)
  3. Click Add
  4. Select the "Billing Reader" role
  5. Select or add the user to whom you want to delegate access to subscription scoped billing information

The full definition of the access allowed to users in the Billing Reader role is detailed in the built-in roles documentation.

Downloading invoice using new Billing API

Until now, you could only download invoices for your subscription via the Azure management portal. We are now enabling users in administrative roles (Owner, Contributor, Reader, Service Administrator and Co-administrator) and the Billing Reader role to download invoices for a subscription programmatically. The invoice API allows you to download current and past invoices for an Azure subscription. During the API preview, some invoices may not be available for download. Detailed API documentation is available, and samples can also be downloaded. The feature to download invoices via the API is not available for certain subscription types such as support, enterprise agreements, or Azure in Open. To download invoices through the API, the account administrator has to enable access for users in subscription scoped roles as outlined above.
You can easily download the latest invoice for your subscription using Azure PowerShell:

  1. Log in using Login-AzureRmAccount
  2. Set your subscription context using Set-AzureRmContext -SubscriptionId <subscription Id>
  3. To get the URL of the latest invoice, execute Get-AzureRmBillingInvoice –Latest

The output includes a link to download the latest invoice document in PDF format; an example is shown below.


PS C:\> Get-AzureRmBillingInvoice -Latest

         Id                     : /subscriptions/{subscription ID}/providers/Microsoft.Billing/invoices/2017-02-09-117274100066163
         Name                   : 2017-02-09-117274100066163
         Type                   : Microsoft.Billing/invoices
         InvoicePeriodStartDate : 1/10/2017 12:00:00 AM
         InvoicePeriodEndDate   : 2/9/2017 12:00:00 AM
         DownloadUrl            : https://…/{identifier}.pdf?sv=2014-02-14&sr=b&sig=XlW87Ii7A5MhwQVvN1kMa0AR79iGiw72RGzQTT%2Fh4YI%3D&se=2017-03-01T23%3A25%3A56Z&sp=r
         DownloadUrlExpiry      : 3/1/2017 3:25:56 PM


To download the invoice to a local directory, you can run the following:


PS C:\> $invoice = Get-AzureRmBillingInvoice -Latest
PS C:\> Invoke-WebRequest -Uri $invoice.DownloadUrl -OutFile <directory>\InvoiceLatest.pdf

In the future, you will see additions to this API that enable expanded programmatic access to billing functionality.
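Because invoices are exposed as an Azure Resource Manager resource (Microsoft.Billing/invoices, as the Id in the PowerShell output above shows), the same data should also be reachable over plain REST. The sketch below is an assumption-laden illustration: the api-version and the $expand parameter in particular should be checked against the published API documentation.

import requests

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"   # placeholder
TOKEN = "<bearer token from Azure AD>"                  # placeholder

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/providers/Microsoft.Billing/invoices"
    "?api-version=2017-04-24-preview&$expand=downloadUrl"   # assumed values
)

resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for invoice in resp.json().get("value", []):
    props = invoice["properties"]
    # The expanded downloadUrl is assumed to carry the time-limited SAS link,
    # matching the DownloadUrl/DownloadUrlExpiry pair in the PowerShell output.
    print(invoice["name"], props.get("downloadUrl", {}).get("url"))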