Two tickets to the ARM show: HPE buffs up the StoreVirtual line

The content below is taken from the original (Two tickets to the ARM show: HPE buffs up the StoreVirtual line), to continue reading please visit the site. Remember to respect the Author & Copyright.

Two tickets to the ARM show: HPE buffs up the StoreVirtual line

Scaling up, out, and across, but workout flops at files

Pic: Alexander Lukatskiy / Shutterstock

HPE has given its SV3200 software a workout to strengthen its muscles with a raft of enhancements, but draws the line at using ARM for file access.

The company introduced a low-cost, entry-level, block-access array with its StoreVirtual 3200 in August last year. It comes with a pair of active-active 64-bit ARM controllers rather than Xeons, and costs $14,000 for a 2U, 14.4TB product, against $33,000 for a 3U, 14.4TB SV4330.

At the time HPE said: “The StoreVirtual 3200 has 100 per cent more usable capacity than a two-node 4330, is 58 per cent cheaper at street price level, costs 80 per cent less in $/GB usable capacity terms, and has equivalent performance.” That’s what you call a Xeon tax.
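HPE’s quoted percentages are internally consistent; a quick sanity check using only the figures from the quote (the arithmetic below is mine, not HPE’s):

```python
# If the SV3200 is 58% cheaper at street price than a two-node 4330
# while offering 100% more usable capacity, its $/GB usable capacity
# should be about 80% lower -- matching HPE's third figure.
relative_price = 1 - 0.58       # SV3200 price as a fraction of the 4330's
relative_capacity = 1 + 1.00    # 100 per cent more usable capacity
relative_cost_per_gb = relative_price / relative_capacity
print(f"{1 - relative_cost_per_gb:.0%} less per usable GB")  # 79% less
```

The exact result is 79 per cent, which HPE rounds up to 80.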

What’s been added to the flash-ready SV3200?

  • Scale
    • Up by adding more drives or drive enclosures
    • Out by grouping controller pairs in a cluster
    • Across by federating clusters and moving workloads with Peer Motion SW

Stretch clusters are supported, as is synchronous replication.

Dressed and undressed StoreVirtual 3200

A StoreVirtual 3000 File Controller has been added; it is based on the ProLiant DL120 server. There are no ARMs for file access, as it uses Xeon E5-2600 v3, E5-1600 v3, E5-2600 v4, or E5-1600 v4 processors for its compute. This filer is possibly based on the StoreEasy 3000 (File) Gateway. It’s a shame that it isn’t ARM-powered as well.

HPE says created file shares have native Windows integration and compatibility with Windows Dynamic Access Controls. The 3000 provides sub-file deduplication and compression, and supports up to 2,000 concurrent connections. It integrates with Microsoft Work Folders on users’ PCs, notebooks and devices (Android and iOS).

StoreVirtual 3200 systems start at $6,055 for a 1GbE configuration and $6,475 for a 10GbE alternative. StoreVirtual 3000 file controllers start at $2,975. The latest StoreVirtual 3200 software upgrade is free of charge to supported customers and will be available later this quarter. ®

Researchers genetically engineer Salmonella to eat brain tumors

Salmonella has earned its bad reputation. It is responsible for more than a million cases of food poisoning every year, nearly 400 of them fatal. But a team of researchers from Duke University has recently engineered the bacterium to attack not the human gastrointestinal tract, but the most aggressive form of brain cancer known to man.

Glioblastoma is no joke. It’s extremely aggressive: barely 10 percent of people diagnosed with it live another five years, and the mean survival is just 15 months. What’s more, the cancer is shielded from conventional drug and radiation-based therapies by the blood-brain barrier. Surgery is also an imperfect option, because if even a single cancerous cell is left behind, it can spawn new tumors.

But that’s where Salmonella typhimurium comes in. The Duke team made a few genetic adjustments to the bacterium’s DNA and transformed it into a guided missile against glioblastoma, while rendering it harmless to the patient. Specifically, the team rendered the bacteria perpetually deficient in purine, a nutrient they must now scavenge from their surroundings. It just so happens that tumors are packed with purine, which attracts the bacteria like flies to honey. Once injected directly into the brain, the Salmonella burrow deep into the tumorous mass and begin to reproduce. The team also engineered the bacteria to produce two compounds, azurin and p53, both of which cause cells to self-destruct, but only in low-oxygen environments such as the interior of a tumor where bacteria are rapidly multiplying. That way both the tumorous cells and the bacteria eventually die off.

"A major challenge in treating gliomas is that the tumor is dispersed with no clear edge, making them difficult to completely surgically remove. So designing bacteria to actively move and seek out these distributed tumors, and express their anti-tumor proteins only in hypoxic, purine rich tumor regions is exciting," Ravi Bellamkonda, Vinik Dean of Duke’s Pratt School of Engineering and corresponding author of the paper, said in a statement. "And because their natural toxicity has been deactivated, they don’t cause an immunological response. At the doses we used in the experiments, they were naturally cleared once they’d killed the tumors, effectively destroying their own food source."

In rat trials, a full 20 percent of the animals survived 100 days, the rodent equivalent of roughly 10 human years; the treatment effectively doubled the survival rate and lifespan of those suffering from glioblastoma. Of course, success in rodent trials doesn’t guarantee the same benefits will be conferred upon humans, but the results are nonetheless impressive. There’s no word yet on when this experimental therapy will make it out of the lab.

Via: Eureka Alert

Source: Duke University

SCSI Emulation Of A Rare Peripheral For The Acorn BBC Micro

Mass storage presents a problem for those involved in the preservation of older computer hardware. While today’s storage devices are cheap and huge by the standards of decades ago, their modern interfaces are beyond the ability of most older computers. And what period mass storage hardware remains is likely to be both unreliable after several decades of neglect, and rather expensive if it does work, due to its rarity.

The Domesday Project 86 team face this particular problem to a greater extent than almost any others in the field, because their storage device is a particularly rare Philips Laser Disc drive. Their solution is the BeebSCSI, a small board with a CPLD and an AVR microcontroller providing host adaptor and SCSI-1 emulation respectively for a modern micro-SD card.
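To give a flavour of what the SCSI-1 emulation side has to handle, here is a sketch of decoding a READ(6) command descriptor block in Python. The field layout follows the SCSI-1 specification, but the function and names are my own illustration, not BeebSCSI’s actual AVR firmware:

```python
def decode_read6(cdb: bytes):
    """Decode a SCSI-1 READ(6) command descriptor block (6 bytes).

    Field layout per the SCSI-1 spec: opcode 0x08, the LUN in the top
    three bits of byte 1, a 21-bit logical block address spread across
    bytes 1-3, and a one-byte transfer length where 0 means 256 blocks.
    """
    assert len(cdb) == 6 and cdb[0] == 0x08, "not a READ(6) CDB"
    lun = cdb[1] >> 5
    lba = ((cdb[1] & 0x1F) << 16) | (cdb[2] << 8) | cdb[3]
    length = cdb[4] or 256          # 0 encodes the maximum, 256 blocks
    return lun, lba, length

# READ(6) for LUN 0, logical block 0x012345, 16 blocks:
print(decode_read6(bytes([0x08, 0x01, 0x23, 0x45, 0x10, 0x00])))
# (0, 74565, 16)
```

The real board does this in AVR C, with the CPLD handling the bus-level handshaking.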

An original BBC Domesday set-up. Regregex [CC BY 3.0], via Wikimedia Commons.

1986 saw the 900th anniversary of the Domesday Book, a survey and inventory of his new kingdom commissioned in 1086 by the Norman king of England, William the Conqueror. One of the ways the event was marked in 1986 was the BBC Domesday Project, a collaboration between the BBC, several technology companies including Acorn and Philips, and a huge number of volunteers from the general public and the British school system. Pictures, video, and text were gathered relating to locations all over the country, and the whole was compiled with a not-quite-hypertext interface onto a set of Laser Disc ROMs. The system required the upgraded Master version of the 6502-based BBC Micro, a SCSI interface, and a special Laser Disc player model manufactured by Philips for this project alone. The hardware was expensive, rare, and unreliable, so few of its contributors would have seen it in action and it faded from view to become a cause celebre among digital archivists.

There have been several resurrections of the project over the years, including one from the BBC themselves which you can browse online. What makes this project different from the others is that it strives to present the Domesday experience as it was originally intended to be viewed, on as far as possible the original hardware and with the original BBC Micro interface. Many original parts such as BBC Master systems are relatively easy to source in 2016, but the special Laser Disc player is definitely not. This board replaces that impossible link in the chain, and should allow them to present a glimpse of 1986 in more than just the on-screen information.

If you would like to see an original BBC Domesday Project system, you can find one in action at the National Museum of Computing, at Bletchley Park. Meanwhile we’ve already featured another peripheral from the same stable as this one, the SmallyMouse USB-to-quadrature mouse emulator.

Filed under: classic hacks

Opera launches Neon, a new experimental desktop browser

Browsers have gotten boring. After a flurry of innovation, especially around the time Google launched Chrome, things slowed down over the last few years. We’ve seen a few interesting experiments, mostly from smaller players like Yandex, Brave and Vivaldi, but the largest players have pretty much stuck to their script. Opera, which was sold to a consortium of Chinese companies last year, is now doing its part to mix things up with the launch of Opera Neon, an experimental desktop browser for Windows and Mac that tries to reimagine what a modern browser should look like.

The moment you open Neon, you’ll notice that this is not your average browser. There is no task bar or bookmarks bar (though the team kept the concept of the URL bar alive). Instead of having tabs at the top, you get round bubbles on the right. It automatically grabs your desktop’s background image and uses that as the background image of your new tabs page.

There is also a sidebar on the left that lets you control audio and video playback (which you can also pop out so you can watch it even while you’re surfing in other tabs). This same sidebar also features a screenshotting tool and access to your recent downloads.

For those of you with very large and wide screens, Opera Neon also allows you to place two browser tabs side-by-side within the same window (similar to the split-screen view on iOS or Android).

I spent the last day or so playing with Opera Neon. I can’t quite see myself switching to it as my main browser at this point (especially because it doesn’t support any plugins yet), but it does feature its fair share of interesting concepts. I don’t mind the tabs on the side, for example, even though I never got used to the side-tab plugins for Chrome and Firefox (though I acknowledge that they do have their ardent fans). I’m also a big fan of Opera’s existing pop-out video feature which also makes an appearance in Neon. Just like the standard Opera desktop browser, Neon uses the Blink engine, so it feels fast, too.

I’m not sure I like Neon’s new tab page, though, which is also your bookmarks page. I’d rather have an easier way to get to my bookmarks than to open a new tab, for example. Now, you have to open a new tab (even if you don’t want to) and then open your bookmark in that new tab. If you heavily rely on your bookmarks (and especially your bookmarks bar), that doesn’t quite feel right and quickly leads to far more open tabs than necessary. I also like to organize my bookmarks into folders and that’s not currently an option in Opera Neon. To be fair, Neon isn’t billing itself as a concept for power users (if that’s what you’re looking for, check out Vivaldi, which was started by Opera’s former CEO).

Opera stresses that this is meant to be a “concept browser.” It won’t replace Opera’s existing browser. “However, some of its new features are expected to be added to Opera this spring,” the company notes in today’s announcement.

R-Comp support scheme

In 2017, Iconbar will be looking at a number of the sites, schemes, and packages available for RISC OS, reminding you what is on offer, seeing what is going on, etc. As always, if you have any suggestions (or articles!), please drop us a line. We start with the software support scheme from R-Comp.

R-Comp have offered an interesting scheme for users of the BeagleBoard, Panda, ARMX6, and Titanium for a number of years now. It is available as part of any R-Comp purchase, or as a one-off purchase for anyone else. So I purchased access to PandaLand, and gained free access to the Titanium side when I bought my TiMachine.

So what do you get as part of the scheme? Membership buys you access to the password-protected areas of the R-Comp website, where you can download a new stable version of RISC OS 5 for your specific machine, along with additional bundled software. R-Comp includes a slick upgrade program which backs up the previous installation and then performs the update. Ideally, R-Comp will issue an update for each new stable release of RISC OS 5.

All the installation happens inside RISC OS – you do not need to create a new SD card build. I have found it very slick and robust, without any issues. Most of the software is public domain, but there are some nice little R-Comp tools for each platform (for example, the PandaLand scheme includes a useful little CMOS widget).

The latest download for Titanium is from 2016 (and I am told it is suitable for all Titanium machines, not just the TiMachine). The Panda feels a little neglected, with the latest release dating from 2015 – I hope it is on the to-do list for 2017.

You can manually upgrade these machines yourself with the latest build from the RISC OS Open downloads page.

R-Comp has been involved in RISC OS development, and in making RISC OS run on their machines, for many years now. What you gain from the scheme is a slick, tested, and supported solution for your machine, which will save you considerable time and should just work ‘out of the box’. For me personally, that has been well worth the investment.

R-Comp website

No comments in forum

Azure AD: Set Up Self-Service Password Reset

In today’s Ask the Admin, I’ll show you how to set up self-service password reset in Azure Active Directory (AD).

One of the most time-consuming jobs for IT departments is dealing with users’ passwords. Microsoft claims that support-assisted password reset typically accounts for 20 percent of an organization’s IT budget. Practical problems can also impair the user experience, such as waiting for the help desk to respond to a password reset request, so any technology that reduces costs and improves the user experience, while keeping systems secure, is worth a look.


Because Azure AD can be integrated with on-premises AD, the self-service password features in the cloud can be extended to your onsite directory, although Azure AD Premium is required for that functionality. Azure AD Basic, or any Office 365 subscription, provides the ability for cloud-only users and cloud-only administrators to reset their own passwords, while the free Azure AD tier allows only cloud administrators to reset their own passwords. For more information on Azure AD, see What is Azure Active Directory? on the Petri IT Knowledgebase.

Before starting, you’ll need an Azure AD tenant connected to an Azure subscription, plus at least two users with an Office 365, Azure AD Basic, or Azure AD Premium license already assigned. For more information about assigning licenses to users, see Use PowerShell to Create and Assign Licenses to Office 365 Users on the Petri IT Knowledgebase. Licenses can also be assigned to users in the Office 365 management portal.

Password Reset Policy

Let’s start by enabling password reset policy in Azure AD.

  • Log in to the Azure classic portal here using an administrator account.
  • In the portal window, click ACTIVE DIRECTORY in the list of options on the left.
  • In the list of available directories, click the directory you want to modify.
Enable password reset policy in Azure Active Directory (Image Credit: Russell Smith)

  • Switch to the CONFIGURE tab.
  • Scroll down to user password reset policy and change the USERS ENABLED FOR PASSWORD RESET to YES.

The password reset experience can be customized with additional options that now appear in the portal window. For instance, you can specify if passwords can be written back to on-premises AD or determine the methods users may use for additional verification, such as a mobile number or alternate email address.

Enable password reset policy in Azure Active Directory (Image Credit: Russell Smith)

If you need more explanation about each setting, hover the mouse pointer over the question mark icon to the right of each option. In this example, I’ll leave the policy configuration with the default settings.

  • Click SAVE at the bottom of the portal window.

Verify User Contact Data

Testing password reset requires that users have contact information recorded in the directory. For example, if you allowed password reset using an alternate email address, then an alternate email address must already be stored in the directory for each user.

Users can log in to the User Registration Portal and provide the information themselves. If you have synchronization configured from on-premises AD, then contact information for users can be synchronized to the cloud. Administrators can also manually enter contact information for users in the Office 365 or Azure Classic admin portals.

When logging in to the User Registration Portal, users will be prompted to verify their contact details if an administrator has enabled password reset policy for the directory. If contact details don’t already exist for the user, they’ll be asked to provide and verify them.

Verify account contact information (Image Credit: Russell Smith)

Perform a Password Reset

To test the password reset functionality, log in to a site that uses Azure AD for authentication, such as the Office 365 portal, and click the Can’t access your account? link.

  • Click Work or school when prompted to choose the account type.
  • On the Get back into your account screen, confirm the user ID, and enter the characters in the picture as prompted.
Test resetting an Azure AD account password (Image Credit: Russell Smith)

  • On the verification step 1 screen, choose a verification method, such as Email my alternate email, and click Email.
  • Check your email, and enter the verification code in the browser window. Click Next.

Test resetting an Azure AD account password (Image Credit: Russell Smith)

  • On the choose a new password screen, enter and confirm a new password, and click Finish.
  • On the Your password has been reset screen, click the link to sign in with the new password.
Test resetting an Azure AD account password (Image Credit: Russell Smith)

In this article, I showed you how to configure and test password reset policy for cloud-only users in Azure Active Directory.

The post Azure AD: Set Up Self-Service Password Reset appeared first on Petri.

All that data Uber has been collecting might just come in handy

Uber has a rocky history with city governments, to put it mildly. Now, Uber is making something of a peace offering. The company is launching a new service that could help cities master their traffic. It’s called Uber Movement, and it draws on information from the billions of rides Uber has completed.

Uber Movement is free for the select planning agencies and researchers granted access to it. With it, you can gauge travel times between any two locations. Since, as Uber’s chief of transportation policy notes, Uber doesn’t actually do any urban planning, they figure they might as well give all this information to the people who do. And, hey, maybe it’ll help the ride-sharing company cozy up to previously hostile city governments.

Microsoft’s StaffHub Comes To Office 365

Microsoft has announced a new feature for Office 365 called StaffHub. The new tool, which is included with the Office 365 K1, E1, E3 and E5 plans (including the Education versions), helps with information sharing and scheduling, and can also connect to other work-related apps and resources.

If you have ever been in a role where you were required to manage a schedule for staff, you’ll know it’s a serious time-sink without the proper tools. While there are many options available to help with this task, Microsoft is now including one for ‘free’ with your Office 365 subscription (provided you are on the correct tier).

The app makes it possible to set up a schedule quickly and also allows employees to swap shifts from inside the mobile apps. The apps also act as the scheduling manager: an employee can view their upcoming shifts, and managers can update the schedules in real time. Currently, you can access the platform from the web and from the iOS/Android apps; there are no Windows 10 apps available at this time.

In addition, the app makes it easy for managers to distribute information to their teams: policy documents, news, videos, etc. And if the need arises, it is also an easy way for a manager to communicate 1:1 with an employee. StaffHub does not work in isolation either; it can connect to other applications, such as Kronos.

The app is available today and comes at no additional charge with your Office 365 subscription. Microsoft is once again trying to show the value of its Office 365 subscription model, continuously adding new services such as StaffHub and applications like Teams.

The post Microsoft’s StaffHub Comes To Office 365 appeared first on Petri.

New Azure Storage Release – Larger Block Blobs, Incremental Copy, and more!

We are pleased to announce new capabilities in the latest Azure Storage Service release and updates to our Storage Client Libraries. This latest release allows users to take advantage of increased block sizes of 100 MB, which allows block blobs up to 4.77 TB, as well as features like incremental copy for page blobs and pop-receipt on add message.

REST API version 2016-05-31

Version 2016-05-31 includes these changes:

  • The maximum blob size has been increased to 4.77 TB with the increase of block size to 100 MB. Check out our previous announcement for more details.
  • The Put Message API now returns information about the message that was just added, including the pop receipt. This enables you to call Update Message and Delete Message on the newly enqueued message.
  • The public access level of a container is now returned from the List Containers and Get Container Properties APIs. Previously this information could only be obtained by calling Get Container ACL.
  • The List Directories and Files API now accepts a new parameter that limits the listing to a specified prefix.
  • All Table Storage APIs now accept and enforce the timeout query parameter.
  • The stored Content-MD5 property is now returned when requesting a range of a blob or file. Previously this was only returned for full blob and file downloads.
  • A new Incremental Copy Blob API is now available. This allows efficient copying and backup of page blob snapshots.
  • Using If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
  • During authentication, the canonicalized header list now includes headers with empty values. Previously these were omitted from the list.
  • Several error messages have been clarified or made more specific. See the full list of changes in the REST API Reference.
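The 4.77 TB maximum follows directly from the block limits; a quick check, assuming the service’s long-standing cap of 50,000 committed blocks per block blob:

```python
MAX_BLOCKS = 50_000             # committed blocks allowed per block blob
BLOCK_SIZE = 100 * 2**20        # new maximum block size: 100 MiB
max_blob_bytes = MAX_BLOCKS * BLOCK_SIZE
print(f"{max_blob_bytes / 2**40:.2f} TB")  # 4.77 TB
```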

Check out the REST API Reference documentation to learn more.

New client library features

.NET Client Library (version 8.0.1)

  • All the service features listed above
  • Support for portable class library (through the NetStandard 1.0 Façade)
  • Key rotation for client-side encryption for blobs, tables, and queues

For a complete list of changes, check out the change log in our GitHub repository.

Storage Emulator

  • All the service features listed above

The storage emulator v4.6 is available as part of the latest Microsoft Azure SDK. You can also install the storage emulator using the standalone installer.

We’ll also be releasing new client libraries for Java, C++, Python and NodeJS to support the latest REST version in the next few weeks along with a new AzCopy release. Stay tuned!

Right Click Enhancer Adds Tons of Useful Options to Your Context Menu

Windows: The right-click menu in Windows comes with tons of useful options. Right Click Enhancer makes it even better by letting you add custom commands to your context menu.

Read more…

Our full 2017 schedule of UK Picademy events

Happy new year to everyone! We’re back with a new programme of Picademy events for 2017. All our UK events have been scheduled up to the end of the year, so you can look ahead and choose something at a location and date that is convenient.

An educator gets to grips with our Camera Module

For the uninitiated, Picademy is a free CPD programme that aims to give educators the skills and knowledge they need to get creative with computing, no matter what their level of experience. In fact, you don’t need any previous experience to apply, just an enthusiasm for teaching kids computing. Each course lasts for two full days and is a mixture of digital making workshops, project-based learning, and hacking. Delegates graduate as Raspberry Pi Certified Educators (RCEs).

Last year’s Picademy events yielded some wonderful moments. We trained over 540 educators in the UK and the US, so we had lots of highlights to choose from; I certainly witnessed many in person while delivering events in Glasgow. Two of my favourites included the educator who created music by coding DNA into Sonic Pi as note values (amazing!), and the project that used the Sense HAT to input notes to Sonic Pi and then convert them into coloured blocks in Minecraft for a digital disco.
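The DNA-to-music idea is easy to try yourself. Here is a minimal Python sketch; the base-to-note mapping is my own assumption, since the educator’s actual scheme wasn’t described:

```python
# Map each DNA base to a MIDI note number (assumed mapping, for
# illustration only): A3, C4, G4 and A4.
BASE_TO_MIDI = {"A": 57, "C": 60, "G": 67, "T": 69}

def dna_to_notes(sequence: str) -> list:
    """Turn a DNA string into a list of MIDI note numbers,
    skipping any character that is not a recognised base."""
    return [BASE_TO_MIDI[base] for base in sequence.upper()
            if base in BASE_TO_MIDI]

print(dna_to_notes("GATTACA"))  # [67, 57, 69, 69, 57, 60, 57]
```

In Sonic Pi itself, each resulting number could be passed to `play` in turn.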

It was so great to see the enthusiasm, the camaraderie, and the willingness of educators to be open to new experiences. You could see the cogs turning as they thought about how they could apply the new ideas to work in their own classrooms. It was also great to hear about things educators found less easy, and to answer questions about aspects of the computing curriculum. We find this feedback particularly useful as we are always looking for ways to improve our content and provide better support.

Below you’ll find details of the Picademy events we’re running across the UK in 2017:

CITY        VENUE                                                          DATES
Cardiff     Tramshed, Clare Rd, Cardiff, CF11 6QP                          21/22 February
Manchester  MadLab Manchester, 36-40 Edge St, Manchester, M4 1HN           14/15 March; 02/03 October
Birmingham  The Learning Hub, Birmingham Airport, Birmingham, B26 3QJ      10/11 April; 04/05 December
Cambridge   Raspberry Pi Foundation, 30 Station Road, Cambridge, CB1 2JH   15/16 May
London      TBC                                                            Late May*; Late November*

* While London details are not fully confirmed, you can still apply for these events. We will email details to applicants later in 2017.

Who should apply?

We are looking for inspirational educators who are passionate about computing, enthusiastic about creating awesome learning experiences for their students, and proactive at sharing good practice.

While we’re primarily looking for primary, secondary, FE and HE teachers to apply, we’re also seeking other outstanding educators such as librarians, community educators, trainee teachers, and trainers of teachers.

We’re committed to running free high-quality training, and we invest substantial time (and money) in the educators that attend. Our hope is that our certified educators not only return home with a digital making mindset to inspire students and colleagues, but also have an impact on their wider education community through social media, meetups, or running their own training.

With this in mind, we should point out that Picademy events are often oversubscribed: for this reason, it’s really important that we get a sense of the person behind the application. We would therefore urge you to take your time when answering questions that ask you to reflect on your own experiences and reasons for applying.

A cohort of Picademy graduates in Manchester

How to apply

To apply for any of the events, fill in our Online Application Form. If you have any further questions, you can email [email protected] or post a message in the Picademy area on our forums.

The post Our full 2017 schedule of UK Picademy events appeared first on Raspberry Pi.

1 kB Challenge: And the winners are…

The 1 kB Challenge deadline has come and gone. The judges have done their work, and we’re ready to announce the winners. Before you jump down to find out who won, I’d like to take a moment to say thanks to everyone who participated. We had some incredible entries. To say that judging was hard is quite an understatement. Even [Eben Upton], father of the Raspberry Pi, got in on the action. He created a new helicopter game for the classic BBC Micro. Look for writeups on the winners and many of the other entries in the coming weeks.

Grand Prize

The grand prize goes to [Jaromir Sukuba] for Brainf*cktor. [Jaromir] went above and beyond this time. He created a computer which can be programmed in everyone’s favorite esoteric programming language. Brainf*cktor uses 1019 bytes of program memory in [Jaromir’s] PIC18F26K22. You can write, execute and edit programs. [Jaromir] ran into a bit of a problem with his LCD. The character tables would have thrown him over the 1 kB limit. Not a problem – he designed his own compressed character set, which is included in the 1019 bytes mentioned above. All the clever software takes physical form with a homemade PCB, and a case built from blank PCB material. Best of all, [Jaromir] has explained his software tricks, as well as included a full build log for anyone who wants to replicate his project. All that hard work will be rewarded with a Digi-Comp II kit from EMSL.
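For a sense of what fits in 1 kB, the core of a Brainf*ck interpreter really is tiny; here is a minimal Python sketch (no editor, no display, and none of the size tricks [Jaromir] needed on the PIC):

```python
def brainfuck(code: str, tape_len: int = 30_000) -> str:
    """Minimal Brainf*ck interpreter: 8-command language, no input."""
    tape, out, ptr, pc = [0] * tape_len, [], 0, 0
    # Pre-match brackets so '[' and ']' can jump in O(1).
    jumps, stack = {}, []
    for i, ch in enumerate(code):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        ch = code[pc]
        if ch == ">": ptr += 1
        elif ch == "<": ptr -= 1
        elif ch == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".": out.append(chr(tape[ptr]))
        elif ch == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif ch == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

print(brainfuck("++++++++[>++++++++<-]>+."))  # A
```

The hard part of Brainf*cktor wasn’t the interpreter logic but fitting the editor, display driver, and character set alongside it in 1019 bytes.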

First Prize

First prize goes to [Dumitru Stama] with M0S – CortexM0 RTOS in 1024 bytes. Operating systems are complex beasts. Many of our readers have toyed with the Linux Kernel. But writing a real-time OS from scratch? That’s quite an undertaking. [Dumitru] didn’t shy away from the challenge. He designed a Real-Time Operating System (RTOS) for ARM processors, written completely in ARM thumb assembly instructions. This is no bare-bones executive. M0S has a rich list of features, including preemptive task scheduling, mutexes, and inter-process communication. [Dumitru] even gave us memory allocation with an implementation of malloc() and free(). The OS was demonstrated with a NUCLEO-F072RB board from ST-Micro.
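M0S’s allocator is written in thumb assembly, but the bookkeeping a tiny malloc() and free() must do can be illustrated with a toy first-fit free-list allocator. This Python model is purely illustrative and is not M0S’s actual algorithm:

```python
class TinyHeap:
    """Toy first-fit allocator over a fixed arena.

    Illustrates the bookkeeping a small malloc/free must do: a sorted
    free list, an allocation table, and coalescing of adjacent holes.
    """
    def __init__(self, size: int):
        self.free = [(0, size)]        # sorted list of (offset, length)
        self.used = {}                 # offset -> length

    def malloc(self, n: int):
        for i, (off, length) in enumerate(self.free):
            if length >= n:            # first fit
                rest = (off + n, length - n)
                self.free[i:i + 1] = [rest] if rest[1] else []
                self.used[off] = n
                return off
        return None                    # out of memory

    def free_(self, off: int):
        n = self.used.pop(off)
        self.free.append((off, n))
        self.free.sort()
        merged = [self.free[0]]        # coalesce adjacent free blocks
        for o, l in self.free[1:]:
            po, pl = merged[-1]
            if po + pl == o:
                merged[-1] = (po, pl + l)
            else:
                merged.append((o, l))
        self.free = merged

heap = TinyHeap(1024)
a = heap.malloc(100)    # offset 0
b = heap.malloc(200)    # offset 100
heap.free_(a)
print(heap.malloc(50))  # 0 -- first fit reuses the freed hole
```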

[Dumitru] didn’t just drop a GitHub link and run. He documented M0S with seven project logs and a 37-minute long video. The video uses electronic whiteboard drawings to clearly explain all the internal workings of the operating system, as well as how to use it.

[Dumitru] is the proud new owner of a Maker Select 3D printer V2!

Second Prize

Second prize goes to [Cyrille Gindreau] with 1K Challange Laser. Vector lasers generally take lots of memory. You have to manage galvanometers, laser drive, and perform all the magic it takes to convert a set of vectors to lines drawn in space. The project uses 912 bytes of program and initialized data memory to command an MSP430 to draw an image.

Proving that flattery will get you everywhere, [Cyrille] picked the Hackaday logo as the subject. The Jolly Wrencher is not exactly simple to convert to vector format, though. It took some careful optimizations to come up with an image that fit within 1 kB. [Cyrille] wins a Bulbdial Clock kit from EMSL.
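The vector-to-lines step [Cyrille] had to squeeze into those 912 bytes amounts to linear interpolation. This hypothetical `segment_to_points` helper (Python for illustration, not the MSP430 implementation) turns one line segment into the evenly spaced position samples a galvo pair would be fed:

```python
import math

def segment_to_points(x0, y0, x1, y1, step=1.0):
    """Interpolate a line segment into evenly spaced sample points,
    the way a vector display steps its galvanometers one position
    at a time along each stroke."""
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, int(length / step))          # number of steps along the stroke
    return [(x0 + (x1 - x0) * i / n,
             y0 + (y1 - y0) * i / n) for i in range(n + 1)]

# A 10-unit horizontal stroke sampled every 2 units:
pts = segment_to_points(0, 0, 10, 0, step=2.0)
print(len(pts), pts[0], pts[-1])  # 6 points, first (0.0, 0.0), last (10.0, 0.0)
```

Fitting the Jolly Wrencher meant keeping both the loop logic and the segment list itself small, hence the careful optimization of the vector data.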

Third Prize

Third prize goes to [Mark Sherman] with tinygames. Video games have been around for a while, but they are never quite this small. [Mark] coaxed the minuscule Atmel ATtiny84 to play Centipede with only 1024 bytes of program memory. Even the BOM is kept small, with just a few support components. Control is handled by an Atari 2600 compatible joystick. Video is black and white NTSC, which is demonstrated on a period-accurate CRT. [Mark] generates his video by racing the electron beam, exactly the same way the Atari 2600 did it.

[Mark] will take home a Blinkytile kit from Blinkinlabs.

Final thoughts

First of all, I’d like to thank the judges. Our own [Jenny List], [Gerrit Coetzee], [Pedro Umbelino], [Bil Herd], and [Brian Benchoff] worked hard with me in judging this contest. I’d also like to thank our community for creating some amazing projects. The contest may be over, but these projects are now out there for others to build, enjoy, and learn from.

I’ve wanted to organize this contest since [Jeri Ellsworth] and [Chris Gammell] took on the 555 contest way back in 2011. The problem was creating a set of rules that would be relatively fair to every architecture. I think the 133 entries to this contest prove that we found a very fair set of constraints. It is safe to say this won’t be the last 1 kB Challenge here at Hackaday, so if you have ideas for future editions, share them in the comments!

Get to Know Everything About Using a Multimeter With This Guide

The content below is taken from the original (Get to Know Everything About Using a Multimeter With This Guide), to continue reading please visit the site. Remember to respect the Author & Copyright.

Multimeters seem simple enough to use. You turn it on, connect the leads, then start poking things. Really though, there’s quite a bit going on in a multimeter and a lot of different settings to get used to. Make has a guide that walks you through everything you need to know.

Read more…

Disposable Drones

The content below is taken from the original (Disposable Drones), to continue reading please visit the site. Remember to respect the Author & Copyright.

How do you deliver medical supplies to a war zone cheaply? The answer, according to this project, might be to make a disposable drone. Created by friend of Hackaday [Star Simpson] and the Sky Machines group at Otherlab, this project is looking to make drones out of cheap biodegradable products like cardboard. Rather than risk an expensive drone that might never return, the project imagines a drone that flies to the target, delivers its cargo with an accuracy of about 10 meters, and can then be easily disposed of. The prototype the team is working on is part of a DARPA project called Inbound, Controlled Air-Releasable Unrecoverable Systems (ICARUS) and is a glider designed to be released from a plane or helicopter. Using a cheap GPS receiver and controller, the drone then glides to the destination. It’s an interesting take on the drone: making it so simple and cheap that you can use it once and throw it away. And if you want to get a feel for how [Star] and Otherlab approach problems like this, check out the awesome talk that [Star] gave at our recent Superconference on making beautiful circuit boards.

Thanks for the tip, [Andrew]!

3 reasons why 2017 will see massive cloud migration

The content below is taken from the original (3 reasons why 2017 will see massive cloud migration), to continue reading please visit the site. Remember to respect the Author & Copyright.

How many Global 2000 enterprise applications are on the public cloud right now? You’ll see estimates of 20 to 30 percent, but that overstates the reality.

When all outsourced hosting is taken into account—which includes SaaS, IaaS, and PaaS—many analysts estimate that 20 to 30 percent of workloads are currently on the cloud. But a better metric is to look at what enterprise applications have migrated to an IaaS or a PaaS platform, which is how most enterprises measure their presence in the cloud. Although SaaS is certainly an option for replacing on-premises applications, its usage tends to be for new, often off-the-shelf software, not existing software as in the case of IaaS and PaaS.

Using that IaaS- and PaaS-only scenario, Global 2000 enterprises have migrated about 5 to 7 percent of their on-premises applications. That’s up from my estimate of 1 percent in 2013, a figure that aligns well with the revenue growth of the major public cloud providers.

What will the number be at the end of 2017? Brace yourself: I figure it will be about 18 to 20 percent. Why that huge jump? I see three reasons:

  1. In 2015 and 2016 many Global 2000 companies set up migration factories in their IT groups. These factories include methodologies, tools, approaches, and even the use of devops practices to speed cloud migrations. Moreover, these companies have been training and hiring cloud architects, consultants, and developers, and putting in place the other new skills they will need. Thus, the troops are aligned and ready for battle.
  2. The projects in 2015 and 2016 that got the Global 2000 to 5 to 7 percent were pioneer projects that provided enough experience for enterprises to become confident that they can now hit the accelerator.
  3. Enterprises have picked their low-hanging fruit for the first wave of migrations. In other words, they are going to pick the easy stuff first, which will let them migrate a large volume of applications fast.

If you’re surprised by my migration projection of 18 to 20 percent this year, I’ll surprise you again: I believe we’ll see a similar jump in 2018, too—for the same reasons.

Of course, at some point, we’ll reach a saturation point where the workloads left to migrate won’t prove cost-effective, and many enterprises will elect to leave them where they are, move them to a managed server provider or colocation provider, or retire them (with, where possible, a SaaS replacement if the functionality is needed).

Anyway, that’s my story and I’m sticking to it. We’ll see if I’m right this time next year.

Disk-wiping malware Shamoon targets virtual desktop infrastructure

The content below is taken from the original (Disk-wiping malware Shamoon targets virtual desktop infrastructure), to continue reading please visit the site. Remember to respect the Author & Copyright.

A cybersabotage program that wiped data from 30,000 computers at Saudi Arabia’s national oil company in 2012 has returned and is able to target server-hosted virtual desktops.

The malware, known as Shamoon or Disttrack, is part of a family of destructive programs known as disk wipers. Similar tools were used in 2014 against Sony Pictures Entertainment in the U.S. and in 2013 against several banks and broadcasting organizations in South Korea.

Shamoon was first observed during the 2012 cyberattack against Saudi Aramco. It spreads to other computers on a local network by using stolen credentials and activates its disk-wiping functionality on a preconfigured date.

In November last year, security researchers from Symantec reported finding a new version of Shamoon that had been used in a fresh wave of attacks against targets in Saudi Arabia. The version was configured to start overwriting data on hard disk drives on Thursday, November 17 at 8:45 p.m. local time in Saudi Arabia, shortly after most workers in the country started their weekend.

Researchers from Palo Alto Networks found yet another Shamoon variant, different from the one seen by Symantec and likely used against a different target in Saudi Arabia. This third version had a kill date (the day when it was configured to start wiping data) of November 29 and contained hard-coded account credentials that were specific to the targeted organization, the Palo Alto researchers said Monday in a blog post.

Some of those credentials were for Windows domain accounts, but a few were default usernames and passwords for Huawei FusionCloud, a virtual desktop infrastructure (VDI) solution.

VDI products like Huawei FusionCloud let companies run multiple virtualized desktop installations inside a data center. Users then access these virtual PCs from thin clients, making workstation management across different branches and offices a lot easier.

Another benefit of VDI solutions is that they create regular snapshots of these virtualized desktops, allowing administrators to easily restore them to a known working state in case something goes wrong.

Apparently the attackers behind this latest Shamoon campaign were aware that the targeted organization used Huawei’s VDI product and realized that it wouldn’t be enough to just wipe virtual PCs using stolen Windows domain credentials.

“The fact that the Shamoon attackers had these usernames and passwords may suggest that they intended on gaining access to these technologies at the targeted organization to increase the impact of their destructive attack,” the Palo Alto Networks researchers said. “If true, this is a major development and organizations should consider adding additional safeguards in protecting the credentials related to their VDI deployment.”

While so far this technique has only been observed in a targeted cyberattack whose primary purpose was the destruction of data, it could easily be adopted by ransomware creators in the future. Some ransomware variants already attempt to delete certain types of backups before encrypting data, so targeting VDI snapshots would be a natural expansion of that tactic.

None of the targets in the November attacks were named by Symantec or Palo Alto Networks.


This Video Explains How GitHub Works As Simply As Possible

The content below is taken from the original (This Video Explains How GitHub Works As Simply As Possible), to continue reading please visit the site. Remember to respect the Author & Copyright.

As ubiquitous as it is, GitHub is a little baffling for beginners because it’s not evident at the start how it actually works. So, GitHub made a video to help make sense of it all.

Read more…

Nine reasons why commuting by bike is surprisingly brilliant

The content below is taken from the original (Nine reasons why commuting by bike is surprisingly brilliant), to continue reading please visit the site. Remember to respect the Author & Copyright.

With train and tube strikes affecting many people’s commute, maybe it’s time to try cycling into work… and here’s why


Which other subreddits?

The content below is taken from the original (Which other subreddits?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hi fellow homelabbers…

Which other tech-related subreddits do you subscribe to, and why? I'm looking for quality subreddits out there that I'm missing. I am currently subscribed to /r/homeserver and /r/datahoarder

How to enable & configure PIN Complexity Group Policy in Windows 10

The content below is taken from the original (How to enable & configure PIN Complexity Group Policy in Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

You can force your users to create a complex PIN that uses digits, lowercase, uppercase & special characters to sign in to Windows 10 or Windows Server 2016 by enabling the PIN Complexity Group Policy.

To create a PIN for signing into Windows 10, open Settings > Accounts > Sign-in options. Under PIN, you will see a Create or Add button to create a new PIN, or a Change or Remove button to change or remove an existing one. You can enforce a policy where your users will be required to create a strong, complex PIN to sign in. Let us see how to do this.

PIN Complexity Group Policy

To configure this policy, your version of Windows must ship with the Group Policy Editor. The Group Policy Editor is available in Windows 10 Pro, Windows 10 Enterprise, and Windows 10 Education editions only, and not in Windows 10 Home.

Run gpedit.msc to open the Local Group Policy Editor and navigate to the following setting:

Computer Configuration > Administrative Templates > Windows Components > Windows Hello for Business > PIN Complexity

Here you will see the following settings that are available:

  • Require digits: Use this policy setting to configure the use of digits in the PIN.
  • Require lowercase letters: Use this policy setting to configure the use of lowercase letters in the PIN.
  • Maximum PIN length: The largest number you can configure for this policy setting is 127.
  • Minimum PIN length: The lowest number you can configure for this policy setting is 4.
  • Expiration: This setting specifies the period of time (in days) that a PIN can be used before the system requires the user to change it.
  • History: This setting specifies the number of past PINs that can be associated to a user account that can’t be reused.
  • Require special characters: Use this policy setting to configure the use of special characters in the PIN.
  • Require uppercase letters: Use this policy setting to configure the use of uppercase letters in the PIN.

Double-clicking on each of these settings will open its configuration box; the options and details are as follows:

Require digits
Not configured: Users must include a digit in their PIN.
Enabled: Users must include a digit in their PIN.
Disabled: Users cannot use digits in their PIN.

Require lowercase letters
Not configured: Users cannot use lowercase letters in their PIN.
Enabled: Users must include at least one lowercase letter in their PIN.
Disabled: Users cannot use lowercase letters in their PIN.

Maximum PIN length
Not configured: PIN length must be less than or equal to 127.
Enabled: PIN length must be less than or equal to the number you specify.
Disabled: PIN length must be less than or equal to 127.

Minimum PIN length
Not configured: PIN length must be greater than or equal to 4.
Enabled: PIN length must be greater than or equal to the number you specify.
Disabled: PIN length must be greater than or equal to 4.

Expiration
Not configured: PIN does not expire.
Enabled: PIN can be set to expire after any number of days between 1 and 730, or PIN can be set to never expire by setting policy to 0.
Disabled: PIN does not expire.

History
Not configured: Previous PINs are not stored.
Enabled: Specify the number of previous PINs that can be associated to a user account that can’t be reused.
Disabled: Previous PINs are not stored.

Require special characters
Not configured: Users cannot include a special character in their PIN.
Enabled: Users must include at least one special character in their PIN.
Disabled: Users cannot include a special character in their PIN.

Require uppercase letters
Not configured: Users cannot include an uppercase letter in their PIN.
Enabled: Users must include at least one uppercase letter in their PIN.
Disabled: Users cannot include an uppercase letter in their PIN.

Go through the options carefully before you enable them.
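To see how the enabled settings combine, here is a hypothetical checker written in Python. It only illustrates the rules described above; it is not Microsoft's implementation, and the function and parameter names are invented for the example:

```python
import string

def pin_meets_policy(pin, require_digits=True, require_lowercase=False,
                     require_uppercase=False, require_special=False,
                     min_length=4, max_length=127):
    """Check a PIN against a set of enabled complexity policies.
    The allowable special characters follow the list given in the
    Require special characters policy text."""
    specials = set("!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~")
    if not (min_length <= len(pin) <= max_length):
        return False
    if require_digits and not any(c in string.digits for c in pin):
        return False
    if require_lowercase and not any(c in string.ascii_lowercase for c in pin):
        return False
    if require_uppercase and not any(c in string.ascii_uppercase for c in pin):
        return False
    if require_special and not any(c in specials for c in pin):
        return False
    return True

print(pin_meets_policy("1234"))                        # True
print(pin_meets_policy("12a#", require_special=True))  # True
print(pin_meets_policy("123"))                         # False: shorter than the minimum of 4
```

Each Group Policy setting you enable effectively switches on one more of these checks.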

As an example, let us say we want users to include special characters in their PIN. In this case, double-click on Require special characters to open its configuration box.

Select Enabled and click on Apply.

Use this policy setting to configure the use of special characters in the PIN. Allowable special characters are: ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~. If you enable this policy setting, Windows Hello for Business requires users to include at least one special character in their PIN. If you disable or do not configure this policy setting, Windows Hello for Business does not allow users to use special characters in their PIN.

Once you enable these policies, your users will be required to change their PIN, and they will see the PIN requirements corresponding to the policies you have set.


Hope this helps.

See this post if your PIN does not work and will not let you sign in to Windows 10.



OpenStack Developer Mailing List Digest December 31 – January 6

The content below is taken from the original (OpenStack Developer Mailing List Digest December 31 – January 6), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • Dims – Keystone now has Devstack based functional test with everything running under python3.5.
  • Tell us yours via OpenStack IRC channels with message “#success <message>”
  • All

Time To Retire Nova-docker

  • nova-docker has lagged behind the last 6 months of nova development.
  • No longer passes simple CI unit tests.
    • There are patches to at least get the unit tests working 1.
  • If the core team no longer has time for it, perhaps we should just archive it.
  • People ask about it on #openstack-nova once or twice a year, but it’s not recommended as it’s not maintained.
  • It’s believed some people are running and hacking on it outside of the community.
  • The Zun project provides a lifecycle management interface for containers that are started in container orchestration engines provided with Magnum.
  • Nova-lxc driver provides an ability of treating containers like your virtual machines. 2
    • Not recommended for production use, though it is still better maintained than nova-docker 3.
  • Nova-lxd also provides the ability of treating containers like virtual machines.
  • Virtuozzo which is supported in Nova via libvirt provides both a virtual machine and OS containers similar to LXC.
    • These containers have been in production for more than 10 years already.
    • Well maintained and actually has CI testing.
  • A proposal to remove it 4.
  • Full thread

Community Goals For Pike

  • A few months ago the community started identifying work for OpenStack-wide goals to “achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high – across all OpenStack projects.”
  • First goal defined 5 to remove copies of incubated Oslo code.
  • Moving forward in Pike:
    • Collect feedback of our first iteration. What went well and what was challenging?
    • Etherpad for feedback 6
  • Goals backlog 7
    • New goals welcome
    • Each goal should be achievable in one cycle. If not, it should be broken up.
    • Some goals might require documentation for how it could be achieved.
  • Choose goals for Pike
    • What is really urgent? What can wait for six months?
    • Who is available and interested in contributing to the goal?
  • Feedback was also collected at the Barcelona summit 8
  • Digest of feedback:
    • Most projects achieved the goal for Ocata, and there was interest in doing it on time.
    • Some confusion on acknowledging a goal and doing the work.
    • Some projects slow on the uptake and reviewing the patches.
    • Each goal should document where the “guides” are, and how to find them for help.
    • Achieving multiple goals in a single cycle wouldn’t be possible for all teams.
  • The OpenStack Product Working group is also collecting feedback for goals 9
  • Goals set for Pike:
    • Split out Tempest plugins 10
    • Python 3 11
  • TC agreements from last meeting:
    • 2 goals might be enough for the Pike cycle.
    • The deadline to define Pike goals would be Ocata-3 (Jan 23-27 week).
  • Full thread

POST /api-wg/news

  • Guidelines current review:
    • Add guidelines on usage of state vs. status 12
    • Add guidelines for boolean names 13
    • Clarify the status values in versions 14
    • Define pagination guidelines 15
    • Add API capabilities discovery guideline 16
  • Full thread


Microsoft closes the door on Visual Studio’s Team Rooms

The content below is taken from the original (Microsoft closes the door on Visual Studio’s Team Rooms), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s Team Room collaboration capability for application lifecycle management soon will be no more. Instead, developers will need to rely on other options, such as Slack or Microsoft Teams.

The company said this week that Team Rooms is to be deprecated from the on-premise Visual Studio Team Foundation Server at the next major version, and from the online Visual Studio Team Services platform later this year.

“We don’t have a name yet for this release, but it will be the version beyond TFS 2017 and associated updates,” Microsoft’s Ewald Hofman, TFS program manager, said.

Explaining Microsoft’s reasoning, Hofman said more collaboration solutions have emerged since Team Rooms was added a few years ago. Slack, for one, has emerged as a popular team communication tool, while Teams provides a chat-centered workspace in Office 365. “With so many good solutions available that integrate well with TFS and Team Services, we have made a decision to deprecate our Team Room feature from both TFS and Team Services.”

Both TFS and Team Services integrate with HipChat, Campfire, and Flowdock, Hofman said, and he noted “you can also use Zapier to create your own integrations or get very granular control over the notifications that show up.” Another alternative is to install the Activity Feed widget, showing activity in a team’s dashboard.

Microsoft’s move was lauded by a user responding to the company’s bulletin about the deprecation. “Good to see this,” one commenter said. “While I appreciate the innovative approach taken here, I always felt the rooms were a confusing addition to the otherwise-solid feature set of the TS offering. Chat is a very difficult domain to solve correctly, as Skype clearly demonstrates.”

This story, “Microsoft closes the door on Visual Studio’s Team Rooms” was originally published by InfoWorld.


text-to-mp3-converter (2.0.0)

The content below is taken from the original (text-to-mp3-converter (2.0.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Text to speech converter recording MP3 files

Keeping up with RISC OS in 2017

The content below is taken from the original (Keeping up with RISC OS in 2017), to continue reading please visit the site. Remember to respect the Author & Copyright.

As a minority platform, you can sometimes feel a little isolated. So here are some suggestions on how to keep up to date with developments and meet the community.

This is just a selection, so please feel free to add your own suggestions in the comments section.

The Shows
There are 3 dedicated shows (and also RISC OS appearances at retro events, Pi jams and other events). This is your chance to meet other users, talk to developers and actually see and touch new software and hardware. Three dates for your diary are:
Saturday 25th February (South-West Show)
Saturday 22nd April (Wakefield Show)
Saturday 28th October (London Show)

Magazines
Drag’N’Drop is published every quarter as an online magazine.
Archive magazine is published on dates calculated using a secret formula known only to Jim Nagel. It also runs an offline discussion group.

Google Newsgroups
comp.sys.acorn.* groups are still active and see regular postings. comp.sys.acorn.announce is still the place for announcements for new releases. I also recommend comp.sys.acorn.misc

Some RISC OS websites
RISC OS Open includes all the latest developments for RISC OS 5 and a set of busy discussion forums (including one called Aldershot for all things non-RISC OS).
Stardot is a very active forum with lots of discussion forums for both 32bit and 8bit topics.
Riscository is an active news site.
RISC OS Blog is another news site. It also runs some good comparison and summary articles.
riscos.fr has been running some great competitions in 2016. Lots of French resources, and it also caters for English readers. It has a great list of resources and books you can read/buy online.
Riscoscode has been quiet in 2016 but still has some really good links and has posted some really interesting articles.

User groups
There are still some very active RISC OS user groups out there. 2 to get you started are
Rougol, which organises a regular monthly London meeting with external speakers.
Wakefield RISC OS Users group which organises monthly meetings, an online discussion group and the Wakefield show.

YouTube
Many talks from previous RISC OS shows and events are posted on YouTube.
If you are looking to buy a new machine, here is Chris Hall to give you some ideas and options.
Or maybe James Hodson’s getting started on C might appeal.

So what are your favourite events/resources/links/websites?

No comments in forum

Replacing the Deprecated Azure RemoteApp

The content below is taken from the original (Replacing the Deprecated Azure RemoteApp), to continue reading please visit the site. Remember to respect the Author & Copyright.


In this article, I will explain your options for replacing Azure RemoteApp, which Microsoft plans to terminate in August 2017.

End of Azure RemoteApp

Azure RemoteApp was a feature of Azure and was available only via Azure Service Management (ASM), not via Azure Resource Manager (ARM/CSP). It provided a very easy way to deliver remote desktop services connections to desktop applications that were hosted in a collection of Remote Desktop Services (RDS) session hosts. There were a few nice features about the solution, including:

  • The licensing was a simple, all-inclusive, per-user charge. If you used the service, you were charged based on how many users had access and how much they used the service. If you didn’t deploy the service (maybe for “normal” times in a DR scenario), then there was no charge. And the cost of the RDS CAL was included with the cost of the VMs (including Windows license and CAL) that powered the service.
  • It was simple to deploy and design compared to a normal RDS farm. There are a lot of pieces in an RDS farm. With Azure RemoteApp, Azure handled the connection broker, SSL gateways, and all the other RDS pieces behind the scenes; all you needed to provide RemoteApp with was an image of your session host, user accounts (via Azure AD), a domain to join (for optimal usage and management), and a virtual network to connect to.
  • It made for a nice way to provide users with a way to connect to legacy client-server applications that are hosted in the cloud, bypassing the effects of latency by moving the client experience to the same virtual network as the services/data.

On August 12th, Microsoft announced the deprecation of Azure RemoteApp, and as one would expect, marketing told us that this was a good thing. Personally, I was shocked. I liked the easy package of RemoteApp, the wrapped-up pricing that didn’t require another licensing program outside of Azure, and the ability to fire up RemoteApp on demand using a PowerShell script or Azure Automation. What left me further stunned was that we were told we had one year to find an alternative and that the recommended solution was vapour-ware from Citrix … a solution that was just announced but would not be available until Q1 or Q2 of 2017.


Customers need an alternative to Azure RemoteApp. We have known since Microsoft Ignite that Citrix has plans for 2 solutions, but there are others that we can consider, too; I’ll discuss these options so that you can start planning now.

Remote Desktop Services

Azure RemoteApp might be dead, but this doesn’t mean that RDS in Azure is dead, too. There is nothing to stop you from deploying an RDS farm in Azure using Azure virtual machines — this is the first thing I ever deployed in Azure because I wanted to experience what a client/server solution would be like. You’ll either do a one-machine deployment for a very small number of users, or you’ll design a scaled-out RDS farm with all of the RDS components running in their own virtual machines.

Azure RemoteApp used Standard A3 (4 cores, 7GB RAM) virtual machines for the session hosts — this was what Microsoft, Azure, and some RDS MVPs recommended because it balanced price, performance, and scale-out abilities. Today, you might consider the Av2-Series virtual machines. They are lower cost, and the M variants offer larger amounts of RAM. The Standard A3 costs roughly $267 per month (North Europe), but the slightly larger A4v2 (8GB RAM) costs just $203 per month. You also have the option of using N-Series virtual machines if GPU performance is required for workloads such as AutoCAD.

Note that the cost of a Windows Server virtual machine includes the Windows CAL, but it does not include the cost of an RDS CAL. You must license users for RDS via one of these methods:

  • Purchase RDS CALs with Software Assurance via volume licensing
  • Lease RDS SALs on a per month basis via a SPLA-R agreement

Citrix XenApp

The masters of the server-based desktop, Citrix, has published guidance on how to deploy XenApp on Azure. Like with the above RDS solution, instead of deploying Citrix with on-premises virtual machines, you use Azure virtual machines instead.

Licensing-wise, things get complicated here. You’ll need:

  • Azure virtual machines (and all their dependencies)
  • RDS licensing for each user (see above)
  • Citrix licensing

Potentially, you’re acquiring licensing via three different channels for something that is meant to be “utility computing.”

Sponsored

Citrix and Azure Partnership

The silver lining that Microsoft marketing put in the cloud (of the death of Azure RemoteApp) is that Microsoft is partnering with Citrix to deploy alternative solutions for RemoteApp.

Blackmail jokes aside, there is some gossip about why Microsoft went with Citrix for an insourced desktop/VDI solution instead of using its own code. I cannot state that any of this is fact, but there might be some grain of truth in the following:

  • Microsoft has had success in using other vendors’ code in Azure. Docker, Mesosphere, and Hadoop are all examples of where Microsoft could have coded something itself (and might have already started) but realized that others are better at it than the Redmond-headquartered corporation.
  • Azure RemoteApp was not very scalable. I had heard from people that I trust that it did not scale well beyond 1,000 users per deployment.
  • Microsoft was having trouble migrating RemoteApp to Azure Resource Manager (ARM).

Microsoft and Citrix co-announced in the Summer that Citrix would be bringing two of its services to Azure:

  • XenApp Express
  • XenDesktop for Windows 10

Both of these solutions will be:

  • Deployed and managed from the Citrix Cloud portal. That portal happens to be hosted on Azure, but that is largely irrelevant; you will still have to log into a Citrix portal, which means two points of management and two management experiences.
  • Deployed into your Azure subscription from the Citrix Cloud portal.
  • Able to leverage all kinds of Azure virtual machines, including the N-Series machines, which offer direct hardware access to NVIDIA graphics processing units (GPUs). This means that high-demand applications such as AutoCAD should perform very well in the cloud.

XenApp Express is a direct successor to Azure RemoteApp; based on what Citrix showed at Microsoft Ignite, the solution looks very similar to RemoteApp. Citrix has deliberately maintained the deployment experience and terminology, and the service operates in much the same way: virtual machines act as session hosts, each hosting many virtualized session environments for multiple users at a time.

XenDesktop is quite interesting because it brings with it a much-demanded licensing change for Windows 10. Until now, licensing terms have not permitted hosted VDI on shared infrastructure using the Windows desktop operating system. Windows 10 Enterprise will gain terms that allow hosted VDI, and Citrix will leverage this for XenDesktop running on Azure. I’m not sure what the precise licensing terms are just yet, as no real detail was shared at Ignite, but it is a start!

What Citrix presented was very interesting looking, but I had some concerns:

  • You must deploy its services from the Citrix cloud. I don’t like that at all because it doubles the management experience and it sets a bad precedent for future insourcing on Azure.
  • The licensing model that was shared at Ignite was a disaster. Part of RemoteApp’s attractiveness was its per-user simplicity. Here you would need licensing from Citrix, from Microsoft (volume licensing and SPLA-R), and from Azure. The two companies need to come up with a profit-sharing scheme where using Citrix on Azure is a simple per-user charge that is billed through Azure based on usage. That would help adoption, unlike the death-by-licensing approach that was shared a few months ago.
  • Although all of this is good news for larger enterprises, small-to-midsized enterprises have not been Citrix customers; they went with RDS in the past because it was affordable (half the price) and did just enough. It feels like Microsoft has abandoned this market — maybe Azure RemoteApp just didn’t get enough usage?

We expect to see Citrix XenApp on Azure appear as a technical preview very soon, with general availability expected before the scheduled death of Azure RemoteApp in August. Citrix claimed that XenDesktop should appear as an Azure service in Q4 2016; at the time of writing, Citrix had one month left to meet that deadline.

Options

You have some options available to you. If you have a pressing need, you can deploy RDS or XenApp using Azure virtual machines yourself. That will require more machines than a hosted solution such as Azure RemoteApp, but it will solve the need. If you can wait, Citrix will bring two services to Azure over the next 10 months; hopefully, in that time, Citrix and Microsoft can figure out the licensing to make the services both affordable and easier to consume.
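If you do go the do-it-yourself route, the Azure side of provisioning a session host is straightforward. As a rough sketch only (assuming the Azure CLI is installed and you are logged in; the resource group, VM name, size, and credentials below are placeholders, not recommendations):

```shell
# Create a resource group to hold the session-host VM.
# All names here are hypothetical placeholders.
az group create --name rds-rg --location westeurope

# Create a Windows Server VM to act as an RDS session host.
# Swap the size for an N-Series one (e.g. Standard_NV6) if you
# need direct GPU access for graphics-heavy applications.
az vm create \
  --resource-group rds-rg \
  --name rds-host-01 \
  --image Win2012R2Datacenter \
  --size Standard_D2_v2 \
  --admin-username rdsadmin \
  --admin-password '<your-strong-password>'

# Open inbound RDP so users (or an RD Gateway) can reach the host.
az vm open-port --resource-group rds-rg --name rds-host-01 --port 3389
```

Inside the VM you would still install the Remote Desktop Session Host role (for example, `Install-WindowsFeature RDS-RD-Server` in PowerShell) and, crucially, none of this removes the RDS CAL/SAL licensing obligations discussed above.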

The post Replacing the Deprecated Azure RemoteApp appeared first on Petri.