AWS Cloud9 – Cloud Developer Environments

The content below is taken from the original ( AWS Cloud9 – Cloud Developer Environments), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the first things you learn when you start programming is that, just like any craftsperson, your tools matter. Notepad.exe isn’t going to cut it. A powerful editor and testing pipeline supercharge your productivity. I still remember learning to use Vim for the first time and being able to zip around systems and complex programs. Do you remember how hard it was to set up all your compilers and dependencies on a new machine? How many cycles have you wasted matching versions, tinkering with configs, and then writing documentation to onboard a new developer to a project?

Today we’re launching AWS Cloud9, an Integrated Development Environment (IDE) for writing, running, and debugging code, all from your web browser. Cloud9 comes prepackaged with essential tools for many popular programming languages (JavaScript, Python, PHP, etc.) so you don’t have to tinker with installing various compilers and toolchains. Cloud9 also provides a seamless experience for working with serverless applications, allowing you to quickly switch between local and remote testing or debugging. Based on the popular open source Ace Editor and c9.io IDE (which we acquired last year), AWS Cloud9 is designed to make collaborative cloud development easy, with extremely powerful pair programming features. There are more features than I could ever cover in this post, so to give a quick breakdown I’ll split the IDE into three components: the editor, the AWS integrations, and collaboration.

Editing


The Ace Editor at the core of Cloud9 is what lets you write code quickly, easily, and beautifully. It follows a UNIX philosophy of doing one thing and doing it well: writing code.

It has all the typical IDE features you would expect: live syntax checking, auto-indent, auto-completion, code folding, split panes, version control integration, multiple cursors and selections, and it also has a few unique features I want to highlight. First of all, it’s fast, even for large (100,000+ line) files. There’s no lag or other issues while typing. It has over two dozen themes built-in (solarized!) and you can bring all of your favorite themes from Sublime Text or TextMate as well. It has built-in support for 40+ language modes and customizable run configurations for your projects. Most importantly though, it has Vim mode (or Emacs, if your fingers work that way). It also has a keybinding editor that allows you to bend the editor to your will.

The editor supports powerful keyboard navigation and commands (similar to Sublime Text or Vim plugins like ctrlp). On a Mac, with ⌘+P you can open any file in your environment with fuzzy search. With ⌘+. you can open up the command pane, which allows you to invoke any of the editor commands by typing its name. It also helpfully displays the keybindings for a command in the pane; for instance, to open a terminal you can press ⌥+T. Oh, did I mention there’s a terminal? It ships with the AWS CLI preconfigured for access to your resources.

The environment also comes with pre-installed debugging tools for many popular languages – but you’re not limited to what’s already installed. It’s easy to add in new programs and define new run configurations.

The editor is just one, admittedly important, component in an IDE though. I want to show you some other compelling features.

AWS Integrations

The AWS Cloud9 IDE is the first IDE I’ve used that is truly “cloud native”. The service is provided at no additional charge; you are only charged for the underlying compute and storage resources. When you create an environment, you’re prompted for either an instance type and an auto-hibernate time, or SSH access to a machine of your choice.

If you’re running in AWS, the auto-hibernate feature will stop your instance shortly after you stop using your IDE. This can be a huge cost savings over running a more permanent developer desktop. You can also launch it within a VPC to give it secure access to your development resources. If you want to run Cloud9 outside of AWS, or on an existing instance, you can provide SSH access to the service, which it will use to create an environment on the external machine. Your environment is provisioned with automatic and secure access to your AWS account, so you don’t have to worry about copying credentials around. Let me say that again: you can run this anywhere.

Serverless Development with AWS Cloud9

I spend a lot of time on Twitch developing serverless applications. I have hundreds of Lambda functions and APIs deployed. Cloud9 makes working with every single one of these functions delightful. Let me show you how it works.


If you look at the top right side of the editor, you’ll see an AWS Resources tab. Opening this, you can see all of the Lambda functions in your region (you can see functions in other regions by adjusting your region preferences in the AWS preference pane).

You can import these remote functions to your local workspace just by double-clicking them. This allows you to edit, test, and debug your serverless applications all locally. You can create new applications and functions easily as well. If you click the Lambda icon in the top right of the pane, you’ll be prompted to create a new Lambda function, and Cloud9 will automatically create a Serverless Application Model template for you as well. The IDE ships with support for the popular SAM Local tool pre-installed. This is what I use in most of my local testing and serverless development. Since you have a terminal, it’s easy to install additional tools and use other serverless frameworks.
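
For example, from the built-in terminal you can exercise a function with SAM Local before deploying it (a sketch; the function name and event file are placeholders):

# invoke a single function with a sample event
sam local invoke "HelloWorld" -e event.json

# serve the APIs in your SAM template locally on http://127.0.0.1:3000
sam local start-api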

 

Launching an Environment from AWS CodeStar

With AWS CodeStar you can easily provision an end-to-end continuous delivery toolchain for development on AWS. CodeStar provides a unified experience for building, testing, deploying, and managing applications using the AWS CodeCommit, CodeBuild, CodePipeline, and CodeDeploy suite of services. Now, with a few simple clicks, you can provision a Cloud9 environment to develop your application. Your environment will be pre-configured with the code for your CodeStar application already checked out and git credentials already configured.

You can easily share this environment with your coworkers which leads me to another extremely useful set of features.

Collaboration

One of the many things that sets AWS Cloud9 apart from other editors is its rich collaboration tools. You can invite an IAM user to your environment with a few clicks.

You can see what files they’re working on, where their cursors are, and even share a terminal. The chat feature is useful as well.

Things to Know

  • There are no additional charges for this service beyond the underlying compute and storage.
  • c9.io continues to run for existing users. You can continue to use all the features of c9.io and add new team members if you have a team account. In the future, we will provide tools for easy migration of your c9.io workspaces to AWS Cloud9.
  • AWS Cloud9 is available in the US West (Oregon), US East (Ohio), US East (N. Virginia), EU (Ireland), and Asia Pacific (Singapore) regions.

I can’t wait to see what you build with AWS Cloud9!

Randall

Say Farewell to Putty as Microsoft adds an OpenSSH Client to Windows 10

The content below is taken from the original ( Say Farewell to Putty as Microsoft adds an OpenSSH Client to Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you need a quick OpenSSH client or server for Windows 10, there is a beta client hidden and available for installation.
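
As a rough sketch of how the installation works from an elevated PowerShell prompt (the capability name and version suffix are assumptions that vary by build, so check the query output first):

# list the OpenSSH capabilities available on this build
Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

# install the beta client (version string is an assumption)
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0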

The post Say Farewell to Putty as Microsoft adds an OpenSSH Client to Windows 10 appeared first on ServeTheHome.

Magnesium batteries could be safer and more efficient than lithium

The content below is taken from the original ( Magnesium batteries could be safer and more efficient than lithium), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s still early days for the promise of safer, energy-dense solid-state rechargeable batteries. However, a team of scientists at the Joint Center for Energy Storage Research have just discovered a fast magnesium-ion solid-state conductor that will g…

Store files ‘in’ the internet

pingfs - "True cloud storage"
	by Erik Ekman <[email protected]>

pingfs is a filesystem where the data is stored only in the Internet itself,
as ICMP Echo packets (pings) travelling from you to remote servers and
back again.

https://github.com/yarrick/pingfs

Eight best smart turbo trainers for 2017/2018

The content below is taken from the original ( Eight best smart turbo trainers for 2017/2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

Your definitive guide to the smart turbo trainer: what they are, what they can do, and where to find the best ones.

Keeping Time With Amazon Time Sync Service

The content below is taken from the original ( Keeping Time With Amazon Time Sync Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we’re launching Amazon Time Sync Service, a time synchronization service delivered over the Network Time Protocol (NTP), which uses a fleet of redundant satellite-connected and atomic clocks in each region to deliver a highly accurate reference clock. This service is provided at no additional charge and is immediately available in all public AWS regions to all instances running in a VPC.

You can access the service via the link-local 169.254.169.123 IP address. This means you don’t need to configure external internet access, and the service can be securely accessed from within your private subnets.

Setup

Chrony is a different NTP implementation than ntpd, and it can synchronize the system clock faster and with better accuracy than ntpd. I’d recommend Chrony unless you have a legacy reason to use ntpd.

Installing and configuring chrony on Amazon Linux is as simple as:


sudo yum erase 'ntp*'
sudo yum -y install chrony
sudo service chronyd start

Alternatively, just modify your existing NTP config by adding the line server 169.254.169.123 prefer iburst.
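
For instance, the chrony equivalent is a one-line addition to /etc/chrony.conf, after which you can confirm that the Amazon source was selected (a sketch; chronyc ships with chrony):

server 169.254.169.123 prefer iburst

# verify synchronization status and the selected source
chronyc tracking
chronyc sources -v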

On Windows you can run the following commands in PowerShell or a command prompt:


net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:"169.254.169.123"
w32tm /config /reliable:yes
net start w32time
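
To confirm that Windows picked up the new source, query the time service (a sketch):

w32tm /query /status
w32tm /query /peers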

Leap Seconds

Time is hard. Science, and society, measure time with respect to the International Celestial Reference Frame (ICRF), which is computed using long baseline interferometry of distant quasars, GPS satellite orbits, and laser ranging of the moon (cool!). Irregularities in Earth’s rate of rotation cause UTC to drift with respect to the ICRF. To address this clock drift, the International Earth Rotation and Reference Systems Service (IERS) occasionally introduces an extra second into UTC to keep it within 0.9 seconds of real time.

Leap seconds are known to cause application errors, and this can be a concern for many savvy developers and systems administrators. The 169.254.169.123 clock smooths leap seconds out over a period of time (commonly called leap smearing), which makes it easy for your applications to deal with them.

This timely update should provide immediate benefits to anyone previously relying on an external time synchronization service.

Randall

Amazon is putting Alexa in the office

The content below is taken from the original ( Amazon is putting Alexa in the office), to continue reading please visit the site. Remember to respect the Author & Copyright.

 The interface is evolving. What has long been dominated by screens of all shapes and sizes is now being encroached upon by the voice. And while many companies are building voice interfaces — Apple with Siri, Google with Assistant, and Microsoft with Cortana — none are quite as dominant as Amazon has been with Alexa.
At the AWS re:Invent conference, Amazon will announce Alexa for…

Alexa and Echo will land in Australia and NZ in early 2018

The content below is taken from the original ( Alexa and Echo will land in Australia and NZ in early 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon just dropped its umpteenth Alexa skill, this time for Destiny 2 fans. Already in the tens of thousands, the digital assistant's tricks span shopping, news, smart home controls, pop trivia, kiddie pastimes, and now video games. But while a grow…

Uploading to Azure Web Apps Using FTP

The content below is taken from the original ( Uploading to Azure Web Apps Using FTP), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this post, I’m going to show you how you can upload your web content to an Azure web app using FTP.

 

 

Upload My Code!

What good is a web hosting plan if you cannot put your website code on it? Azure offers a few ways to get code into an Azure web app or app service, from automated solutions using the likes of Visual Studio Team Services (VSTS) and GitHub to a more basic option such as FTP.

In this post, I will show you how to use an FTP client to upload your website into a web app. My web app is called preprod. It’s actually a pre-production deployment slot for a web app called petri, which has its own FTP configuration.

Configure FTP Account

Each web app and deployment slot has its own FTP username and address. You must enter a new password to use this FTP account, which is known in Azure as a deployment credential.

 


 

To set up the FTP user account, open the web app and browse to Deployment Credentials under the Deployment settings. Here you can specify the user account name and the password. Please note that the password:

  • Must be between 8 and 60 characters long (longer is better).
  • Must have at least two of the following: uppercase letters, lowercase letters, and numbers.
Configuring the Azure web app FTP account in Deployment Credentials [Image Credit: Aidan Finn]

Note that the FTP/Deployment Username is not the complete username that you will require in your FTP client. You will retrieve that in the next step.

Retrieve FTP Details

You will need a server name or address to connect to with your FTP client; this can be found in the Overview of your web app or deployment slot. Note that you can find the FTP and FTPS addresses here.

You will also get the complete FTP username. In the previous step, I set the Deployment Username to petriadmin2. However, the actual username that I need to enter in my FTP client is petri__preprod\petriadmin2; it combines the web app name, the deployment slot name, and the deployment username.

The FTP address of the Azure web app [Image Credit: Aidan Finn]
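
If you prefer scripting to the portal, the same credential and profile details can be managed with the Azure CLI (a sketch; the resource group name is a placeholder, and slot support assumes a reasonably current CLI):

az webapp deployment user set --user-name petriadmin2

az webapp deployment list-publishing-profiles \
  --name petri --slot preprod --resource-group MyResourceGroup \
  --query "[?publishMethod=='FTP'].{url:publishUrl,user:userName}"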

FTP Client

I have installed an FTP client on my PC and created a new connection. I entered the following information into the New Site dialog:

  • The FTP Hostname from the web app’s overview into the Host/IP/URL box.
  • The FTP/Deployment Username from the web app’s overview into the Username box.
  • The password that I set in the web app’s Deployment Credentials into the Password box.
Connecting to the Azure web app using FTP [Image Credit: Aidan Finn]

When I connect to the website, I can browse the web host’s file structure. You can see the familiar wwwroot folder from IIS in the below screenshot; this is where I will upload my web content.

Browsing the web app folder structure using FTP [Image Credit: Aidan Finn]

I can now use the FTP tool to upload and download content to the web app’s folder structure. I’ve already extracted the website content on my PC and it’s an easy upload to the wwwroot folder from there.
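
A dedicated client is not strictly required, either; for a quick one-off upload, curl can push a file over FTPS (a sketch; the waws-prod host name is a placeholder for the address shown in your web app’s overview):

curl -T index.php --ssl-reqd \
  --user 'petri__preprod\petriadmin2:YourPassword' \
  ftp://waws-prod-db3-021.ftp.azurewebsites.windows.net/site/wwwroot/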

 


Testing the Site

The default document of my site is index.php. I should verify that the Application Settings of the web app or deployment slot have the following:

  • index.php is one of the default documents.
  • It is listed at a higher priority than the web app’s default document (hostingstart.html).

Now, I can browse to the URL of the web app or deployment slot and my web content should load.

The post Uploading to Azure Web Apps Using FTP appeared first on Petri.

Introducing an easy way to deploy containers on Google Compute Engine virtual machines

The content below is taken from the original ( Introducing an easy way to deploy containers on Google Compute Engine virtual machines), to continue reading please visit the site. Remember to respect the Author & Copyright.

Containers are a popular way to deploy software thanks to their lightweight size and resource requirements, dependency isolation and portability. Today, we’re introducing an easy way to deploy and run containers on Google Compute Engine virtual machines and managed instance groups. This feature, which is currently in beta, allows you to take advantage of container deployment consistency while staying in your familiar IaaS environment.

Now you can easily deploy containers wherever you may need them on Google Cloud: Google Kubernetes Engine for multi-workload, microservice friendly container orchestration, Google App Engine flexible environment, a fully managed application platform, and now Compute Engine for VM-level container deployment.

Running containers on Compute Engine instances is handy in a number of scenarios: when you need to optimize your CI/CD pipeline for applications running on VMs, fine-tune VM shape and infrastructure configuration for a specialized workload, integrate a containerized application into your existing IaaS infrastructure, or launch a one-off instance of an application.

To run your container on a VM instance, or a managed instance group, simply provide an image name and specify your container runtime options when creating a VM or an instance template. Compute Engine takes care of the rest including supplying an up-to-date Container-Optimized OS image with Docker and starting the container upon VM boot with your runtime options.

You can now easily use containers without having to write startup scripts or learn about container orchestration tools, and can migrate to full container orchestration with Kubernetes Engine when you’re ready. Better yet, standard Compute Engine pricing applies: VM instances running containers cost the same as regular VMs.

How to deploy a container to a VM

To see the new container deployment method in action, let’s deploy an NGINX HTTP server to a virtual machine. To do this, you only need to configure three settings when creating a new instance:

  • Check Deploy a container image to this VM instance.
  • Provide Container image name. 
  • Check Allow HTTP traffic so that the VM instance can receive HTTP requests on port 80. 

Here’s how the flow looks in Google Cloud Console:

Run a container from the gcloud command line

You can run a container on a VM instance with just one gcloud command:

gcloud beta compute instances create-with-container nginx-vm \
  --container-image http://bit.ly/2neALil \
  --tags http-server

Then, create a firewall rule to allow HTTP traffic to the VM instance so that you can see the NGINX welcome page:

gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 --target-tags=http-server

Updating such a container is just as easy:

gcloud beta compute instances update-container nginx-vm \
  --container-image http://bit.ly/2zDEWpu

Run a container on a managed instance group

With managed instance groups, you can take advantage of VM-level features like autoscaling, automatic recreation of unhealthy virtual machines, rolling updates, multi-zone deployments and load balancing. Running containers on managed instance groups is just as easy as on individual VMs and takes only two steps: (1) create an instance template and (2) create a group.

Let’s deploy the same NGINX server to a managed instance group of three virtual machines.

Step 1: Create an instance template with a container.

gcloud beta compute instance-templates create-with-container nginx-it \
  --container-image http://bit.ly/2neALil \
  --tags http-server

The http-server tag allows HTTP connections to port 80 of the VMs created from the instance template. Make sure to keep the firewall rule from the previous example.

Step 2: Create a managed instance group.

gcloud compute instance-groups managed create nginx-mig \
  --template nginx-it \
  --size 3

The group will have three VM instances, each running the NGINX container.

Get started!

Interested in deploying containers on Compute Engine VM instances or managed instance groups? Take a look at the detailed step-by-step instructions and learn how to configure a range of container runtime options including environment variables, entrypoint command with parameters and volume mounts. Then, help us help you make using containers on Compute Engine even easier! Send your feedback, questions or requests to [email protected].
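
As a sketch of what those runtime options look like on the command line (the image name, variable, and paths are placeholders; flag names as of the current beta):

gcloud beta compute instances create-with-container my-app-vm \
  --container-image gcr.io/my-project/my-app:1.0 \
  --container-env MODE=production \
  --container-mount-host-path mount-path=/data,host-path=/var/data,mode=rw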

Sign up for Google Cloud today and get $300 in credits to try out running containers directly on Compute Engine instances.

Amazon EC2 Bare Metal Instances with Direct Access to Hardware

The content below is taken from the original ( Amazon EC2 Bare Metal Instances with Direct Access to Hardware), to continue reading please visit the site. Remember to respect the Author & Copyright.

When customers come to us with new and unique requirements for AWS, we listen closely, ask lots of questions, and do our best to understand and address their needs. When we do this, we make the resulting service or feature generally available; we do not build one-offs or “snowflakes” for individual customers. That model is messy and hard to scale and is not the way we work.

Instead, every AWS customer has access to whatever it is that we build, and everyone benefits. VMware Cloud on AWS is a good example of this strategy in action. They told us that they wanted to run their virtualization stack directly on the hardware, within the AWS Cloud, giving their customers access to the elasticity, security, and reliability (not to mention the broad array of services) that AWS offers.

We knew that other customers also had interesting use cases for bare metal hardware and didn’t want to take the performance hit of nested virtualization. They wanted access to the physical resources for applications that take advantage of low-level hardware features such as performance counters and Intel® VT that are not always available or fully supported in virtualized environments, and also for applications intended to run directly on the hardware or licensed and supported for use in non-virtualized environments.

Our multi-year effort to move networking, storage, and other EC2 features out of our virtualization platform and into dedicated hardware was already well underway and provided the perfect foundation for a possible solution. This work, as I described in Now Available – Compute-Intensive C5 Instances for Amazon EC2, includes a set of dedicated hardware accelerators.

Now that we have provided VMware with the bare metal access that they requested, we are doing the same for all AWS customers. I’m really looking forward to seeing what you can do with them!

New Bare Metal Instances
Today we are launching a public preview of the i3.metal instance, the first in a series of EC2 instances that offer the best of both worlds, allowing the operating system to run directly on the underlying hardware while still providing access to all of the benefits of the cloud. The instance gives you direct access to the processor and other hardware, and has the following specifications:

  • Processing – Two Intel Xeon E5-2686 v4 processors running at 2.3 GHz, with a total of 36 hyperthreaded cores (72 logical processors).
  • Memory – 512 GiB.
  • Storage – 15.2 terabytes of local, SSD-based NVMe storage.
  • Network – 25 Gbps of ENA-based enhanced networking.

Bare Metal instances are full-fledged members of the EC2 family and can take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, Auto Recovery, and so forth. They can also access the full suite of AWS database, IoT, mobile, analytics, artificial intelligence, and security services.

Previewing Now
We are launching a public preview of the Bare Metal instances today; please sign up now if you want to try them out.

You can now bring your specialized applications or your own stack of virtualized components to AWS and run them on Bare Metal instances. If you are using or thinking about using containers, these instances make a great host for CoreOS.

An AMI that works on one of the new C5 instances should also work on an I3 Bare Metal Instance. It must have the ENA and NVMe drivers, and must be tagged for ENA.

Jeff;

 

H1 Instances – Fast, Dense Storage for Big Data Applications

The content below is taken from the original ( H1 Instances – Fast, Dense Storage for Big Data Applications), to continue reading please visit the site. Remember to respect the Author & Copyright.

The scale of AWS and the diversity of our customer base gives us the opportunity to create EC2 instance types that are purpose-built for many different types of workloads. For example, a number of popular big data use cases depend on high-speed, sequential access to multiple terabytes of data. Our customers want to build and run very large MapReduce clusters, host distributed file systems, use Apache Kafka to process voluminous log files, and so forth.

New H1 Instances
The new H1 instances are designed specifically for this use case. In comparison to the existing D2 (dense storage) instances, the H1 instances provide more vCPUs and more memory per terabyte of local magnetic storage, along with increased network bandwidth, giving you the power to address more complex challenges with a nicely balanced mix of resources.

The instances are based on Intel Xeon E5-2686 v4 processors running at a base clock frequency of 2.3 GHz and come in four instance sizes (all VPC-only and HVM-only):

Instance Name   vCPUs   RAM       Local Storage   Network Bandwidth
h1.2xlarge      8       32 GiB    2 TB            Up to 10 Gbps
h1.4xlarge      16      64 GiB    4 TB            Up to 10 Gbps
h1.8xlarge      32      128 GiB   8 TB            10 Gbps
h1.16xlarge     64      256 GiB   16 TB           25 Gbps

The two largest sizes support Intel Turbo and CPU power management, with all-core Turbo at 2.7 GHz and single-core Turbo at 3.0 GHz.

Local storage is optimized to deliver high throughput for sequential I/O; you can expect to transfer up to 1.15 gigabytes per second if you use a 2 megabyte block size. The storage is encrypted at rest using 256-bit XTS-AES and one-time keys.

Moving large amounts of data on and off of these instances is facilitated by the use of Enhanced Networking, giving you up to 25 Gbps of network bandwidth within Placement Groups.

Launch One Today
H1 instances are available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions. You can launch them in On-Demand or Spot form. Dedicated Hosts, Dedicated Instances, and Reserved Instances (both 1-year and 3-year) are also available.

Jeff;

Folders: a powerful tool to manage cloud resources

The content below is taken from the original ( Folders: a powerful tool to manage cloud resources), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we’re excited to announce general availability of folders in Cloud Resource Manager, a powerful tool to organize and administer cloud resources. This feature gives you the flexibility to map resources to your organizational structure and enable more granular access control and configuration for those resources.

Folders can be used to represent different departments, teams, applications or environments in your organization. With folders, you can give teams and departments the agility to delegate administrative rights and enable them to run independently.

Folders help you scale by enabling you to organize and manage your resources hierarchically. By enforcing Identity and Access Management (IAM) policies on folders, admins can delegate control over parts of the resource hierarchy to the appropriate teams. Using organization-level IAM roles in conjunction with folders, you can maintain full visibility and control over the entire organization without needing to be directly involved in every operation.

“Our engineering team manages several hundred projects within GCP, and the resource hierarchy makes it easy to handle the growing complexity of our environment. We classify projects based on criteria such as department, geography, product, and data sensitivity to ensure the right people have access to the right information. With folders, we have the flexibility we need to organize our resources and manage access control policies based on those criteria.” 

Alex Olivier, Technical Product Manager, Qubit

Folders establish trust boundaries between resources. By assigning Cloud IAM roles to folders, you can help isolate and protect production critical workloads while still allowing your teams to create and work freely. For example, you could grant a Project Creator role to the entire team on the Test folder, but only assign the Log Viewer role on the Production folder, so that users can do necessary debugging without the risk of compromising critical components.

The combination of organization policy and folders lets you define organization-level configurations and create exceptions for subtrees of the resource hierarchy. For example, you can constrain access to an approved set of APIs across the organization for compliance reasons, but create an exception for a Test folder, where a broader set of APIs is allowed for testing purposes.

Folders are easy to use and, like any other resource in GCP, they can be managed via the API, gcloud, and the Cloud Console UI. Watch this demo to learn how to incorporate folders into your GCP hierarchy.
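
For example, creating a folder and delegating a role on it from the command line might look like this (a sketch; the organization and folder IDs are placeholders, and depending on your SDK version the command group may still require the alpha or beta component):

gcloud resource-manager folders create \
  --display-name="Test" --organization=123456789012

gcloud resource-manager folders add-iam-policy-binding 345678901234 \
  --member="group:[email protected]" \
  --role="roles/resourcemanager.projectCreator"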


To learn more about folders, read the beta launch blog post or the documentation.

!OBrowse reviewed

The content below is taken from the original ( !OBrowse reviewed), to continue reading please visit the site. Remember to respect the Author & Copyright.

!OBrowse was originally released as a ‘freebie/thank-you’ to anyone who had put money into RISCOS Developments. It has been updated for the London Show and was also available for sale at 40 pounds (providing a simple way for people who wanted to contribute smaller amounts to the project).

!OBrowse is a front-end for the !Otter port for RISC OS. As R-Comp were very keen to stress, !Otter is not their project and is a free release which no-one has to pay for. What they have written is some code which allows you to run the software in a much more RISC OS friendly way. They are offering their front end as a way to finance their plans (which will also be available as free software to anyone). They have already made an impact by funding one of the ROOL bounties to bring RISC OS networking support up to date.

!OBrowse is not tied to any release of !Otter and does not ‘add’ any additional functionality. What it does do is make !Otter into a much more compliant and better behaved RISC OS application. It has a proper iconbar entry and can be run to open HTML files or a URL. It can take over all the protocols which use a browser, and supports the global clipboard, drag and drop, etc. It fills this role very well, and you get a polished RISC OS application which works as you would expect and plays nicely with the rest of the system.

If you are an investor or want to support RISCOS Developments, it is a very nice application to have installed and a no-brainer. If you just want to experiment with accessing the internet under RISC OS, !Otter will run perfectly well on your machine without it.


NetApp’s back, baby, flaunting new tech and Azure cloud swagger

The content below is taken from the original ( NetApp’s back, baby, flaunting new tech and Azure cloud swagger), to continue reading please visit the site. Remember to respect the Author & Copyright.

By George (Kurian), he’s done it

Analysis There’s a new energy at NetApp. The Microsoft Azure NFS deal was a great confidence booster, and the two recent acquisitions of Greenqloud and Plexistor provide stepping stones to a high-performance, on-premises storage future and a stronger hybrid cloud play.…

Hacking the IKEA Trådfri Light Bulb

The content below is taken from the original ( Hacking the IKEA Trådfri Light Bulb), to continue reading please visit the site. Remember to respect the Author & Copyright.

[BasilFX] wanted to shoehorn custom firmware onto his IKEA Trådfri light bulb. The product consists of a GU10-size light bulb with an LED driver as well as IKEA’s custom ZigBee module controlling it all. A diffuser, enclosure shell, and Edison-screw base give the whole thing the same form factor as a standard A-series bulb. The Trådfri module, which ties together IKEA’s home automation products, consists of an ARM Cortex M4 MCU with an integrated 2.4 GHz radio and 256 kB of flash — not bad for 7 euros!

Coincidentally, [BasilFX] had just contributed EFM32 support to RIOT-OS (“the friendly OS for IoT”) so he was already halfway there. He used a JTAG/SWD-compatible debugger to flash the chip on the light bulb while the chip was still attached.

[BasilFX] admits the whole project is a proof of concept with no real use yet, though he has turned his eye toward getting the radio to work, with a goal of creating a network of light bulbs. You can find more info on his code repository.

We ran a post on Trådfri hacking earlier this year, as well as one on the reverse-engineering process used to suss out the bulb’s secrets.


DNS resolver 9.9.9.9 will check requests against IBM threat database

The content below is taken from the original ( DNS resolver 9.9.9.9 will check requests against IBM threat database), to continue reading please visit the site. Remember to respect the Author & Copyright.

Group Co-founded by City of London Police promises ‘no snooping on your requests’

The Global Cyber Alliance has given the world a new free Domain Name Service resolver, and advanced it as offering unusually strong security and privacy features.…

Roomba gets IFTTT functionality

The content below is taken from the original ( Roomba gets IFTTT functionality), to continue reading please visit the site. Remember to respect the Author & Copyright.

iRobot’s been talking a lot about its plans to make the Roomba an essential part of the connected home. The process has been a bit slow going — the company added WiFi connectivity in 2015 and Alexa functionality this year — but it’s getting there, slowly but surely. Today, the world’s best selling robotic vacuum takes another important step with the addition of…

Get notified when Azure service incidents impact your resources

The content below is taken from the original ( Get notified when Azure service incidents impact your resources), to continue reading please visit the site. Remember to respect the Author & Copyright.

When an Azure service incident affects you, we know that it is critical that you are equipped with all the information necessary to mitigate any potential impact. The goal for the Azure Service Health preview is to provide timely and personalized information when needed, but how can you be sure that you are made aware of these issues?

Today we are happy to announce a set of new features for creating and managing Service Health alerts. Starting today, you can:

  • Easily create and manage alerts for service incidents, planned maintenance, and health advisories.
  • Integrate your existing incident management system like ServiceNow®, PagerDuty, or OpsGenie with Service Health alerts via webhook.

So, let’s walk through these experiences and show you how it all works!

Creating alerts during a service incident

Let’s say you visit the Azure Portal, and you notice that your personalized Azure Service Health map is showing some issues with your services. You can gain access to the specific details of the event by clicking on the map, which takes you to your personalized health dashboard. Using this information, you are able to warn your engineering team and customers about the impact of the service incident.

 

Azure Service Health map

Notification

If you have not pinned a map to your dashboard yet, check out these simple steps.

In this instance, you noticed the health status of your services passively. However, the question you really want answered is, “How can I get notified the next time an event like this occurs?” With a single click, you can create a new alert based on your existing filters.

Create service health alert

Click the “Create service health alert” button, and a new alert creation blade will appear, prepopulated with the filter settings you selected before. Name the alert, and quickly ensure that the other settings are as you expect. Finally, create a new action group to notify when this alert fires, or use an existing group set up in the past.

Add activity log alert

Once you click “OK”, you will be brought back to the health dashboard with a confirmation that your new alert was successfully created!

Create and manage existing Service Health alerts

In the Health Alerts section, you can find all your new and existing Service Health alerts. If you click on an alert, you will see that it contains details about the alert criteria, notification settings, and even a historical log of when this alert has fired in the past. If you want to make edits to your new or existing Service Health alert, you can select the more options button (“…”) and immediately get access to manage your alert.

Create and manage existing service health alerts

During this process, you might think of other alerts you want to set up, so we make it easy for you to create new alerts by clicking the “Create New Alert” button, which gives you a blank canvas to set up your new notifications.

Configure health notifications for existing incident management systems via webhook

Some of you may already have an existing incident management system like ServiceNow, PagerDuty, or OpsGenie, which contains all of your notification groups and incident management workflows. We have worked with engineers from these companies to bring direct support for our Service Health webhook notifications, making the end-to-end integration simple for you. Even if you use another incident management solution, we have written details about the Service Health webhook payload, and suggestions for how you might set up an integration on your own. For complete documentation on all of these options, you can review our instructions.
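
To give a feel for what an integration has to parse, here is a trimmed, illustrative sketch of the general shape of the webhook payload for a Service Health activity log alert (not the authoritative schema; the documentation above has the exact fields):

{
  "schemaId": "Microsoft.Insights/activityLogs",
  "data": {
    "status": "Activated",
    "context": {
      "activityLog": {
        "eventSource": "ServiceHealth",
        "level": "Informational",
        "operationName": "Microsoft.ServiceHealth/incident/action",
        "properties": { "title": "...", "service": "...", "region": "..." }
      }
    }
  }
}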

Each of the different incident management solutions will give you a unique webhook address that you can add to the action group for your Service Health alerts:

Webhook

Once the alert fires, your respective incident management system will automatically ingest and parse the data to make it simple for you to understand!

ServiceNow

Special thanks to the following people who helped us make this so simple for you all:

  • David Shackelford and David Cooper from PagerDuty
  • Çağla Arıkan and Berkay Mollamustafaoglu from OpsGenie
  • Manisha Arora and Sheeraz Memon from ServiceNow

Closing

I hope you can see how our updates to the Azure Service Health preview bring you that much closer to the action when an Azure service incident affects you. We are excited to continually bring you better experiences and would love any and all feedback you can provide. Reach out to me or leave feedback right in the portal. We look forward to seeing what you all create!

 

– Shawn Tabrizi (@shawntabrizi)

Launching preview of Azure Migrate

The content below is taken from the original ( Launching preview of Azure Migrate), to continue reading please visit the site. Remember to respect the Author & Copyright.

At Microsoft Ignite 2017, we announced Azure Migrate – a new service that provides guidance, insights, and mechanisms to assist you in migrating to Azure. We made the service available in limited preview, so you could request access, try out, and provide feedback. We are humbled by the response received and thankful for the time you took to provide feedback.

Today, we are excited to launch the preview of Azure Migrate. The service is now broadly available and there is no need to request access.

Azure Migrate enables agentless discovery of VMware-virtualized Windows and Linux virtual machines (VMs). It also supports agent-based discovery. This enables dependency visualization, for a single VM or a group of VMs, to easily identify multi-tier applications.

Application-centric discovery is a good start but not enough to make an informed decision. So, Azure Migrate enables quick assessments that help answer three questions:

  • Readiness: Is a VM suitable for running in Azure?
  • Rightsizing: What is the right Azure VM size based on utilization history of CPU, memory, disk (throughput and IOPS), and network?
  • Cost: How much is the recurring Azure cost considering discounts like Azure Hybrid Benefit?

The assessment doesn’t stop there. It also suggests workload-specific migration services. For example, Azure Site Recovery (ASR) for servers and Azure Database Migration Service (DMS) for databases. ASR enables application-aware server migration with minimal-downtime and no-impact migration testing. DMS provides a simple, self-guided solution for moving on-premises SQL databases to Azure.

Once migrated, you want to ensure that your VMs stay secure and well-managed. For this, you can use various other Azure offerings like Azure Security Center, Azure Cost Management, Azure Backup, etc.

Azure Migrate is offered at no additional charge, supported for production deployments, and available in the West Central US region. It is worthwhile to note that the availability of Azure Migrate in a particular region does not affect your ability to plan migrations for other target regions. For example, even if a migration project is created in West Central US, the discovered VMs can be assessed for West US 2 or UK West or Japan East.

You can get started by creating a migration project in Azure portal:

Azure Migrate (preview)

You can also:

  • Get and stay informed by referring to the documentation.
  • Seek help by posting a question on the forum or contacting Microsoft Support.
  • Provide feedback by posting (or voting for) an idea on UserVoice.

 

Hope you will find Azure Migrate useful in your journey to Azure!

The Best PC Games of the 1990s

The content below is taken from the original ( The Best PC Games of the 1990s), to continue reading please visit the site. Remember to respect the Author & Copyright.

Given the prominence of PC gaming these days, it is easy to see how some believe the platform is enjoying a golden age. This belief isn’t meritless. With lower-priced parts, the continued ease […]

The post The Best PC Games of the 1990s appeared first on Geek.com.

Managing GDPR with Teams, Planner, and Compliance Manager

The content below is taken from the original ( Managing GDPR with Teams, Planner, and Compliance Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

Driving for Better Compliance

Following its announcement at Ignite 2017, Microsoft launched the preview of its Compliance Manager on November 16. The Compliance Manager is available to all organizations with a paid or trial subscription to a Microsoft cloud service, except tenants of the Office 365 datacenter regions in China and Germany.

Microsoft describes Compliance Manager as: “A dashboard that summarizes Microsoft’s and your organization’s control implementation progress for Office 365 across various standards and regulations, such as the EU General Data Protection Regulation (GDPR), ISO 27001, and ISO 27018.”

To access Compliance Manager, log into this site using your Microsoft cloud credentials.

Office 365 and GDPR

Although Azure is in the mix (due in early 2018), given the widespread presence of personal information (PII) in documents and email, I suspect that the new tool will be of interest to Office 365 tenants who operate anywhere in the European Union and the other countries, like Norway and Switzerland, where the General Data Protection Regulation (GDPR) becomes effective in six short months.

Office 365 already includes many compliance features to help an organization control data, including data loss prevention (DLP) and retention policies, classification labels, encryption and rights management for documents and email, content searches, and auditing. Some of the features are easier to use with higher-priced plans (like auto-label policies in Office 365 E5) and some require extra software (like Azure Information Protection P2).

The issue is not one of having enough technology to control the misuse of PII; it’s more often the case that the people in the organization need help to understand what data needs protection and how best to protect the data.

The Compliance Manager Dashboard

Compliance Manager is a dashboard, but it is a passive instrument. Unlike other Office 365 dashboards like Secure Score or the Data Governance dashboard in the Security and Compliance Center, it does not try to analyze the settings of a target organization against any baselines to report gaps and problems. Microsoft intends to improve functionality in this area in the future and will generate a “compliance score” for a tenant.

For now, Compliance Manager lists standards and regulations that organizations and service providers might want to satisfy and delivers some practical advice about how tenants can start dealing with those standards. The plan is to add more standards to the dashboard over time. When I started Compliance Manager, it offered the option to work with GDPR and ISO 27001-2013 (Figure 1).

Figure 1: The Compliance Manager dashboard (image credit: Tony Redmond)

Controls

Each standard applied to a platform like Office 365 is decomposed into a set of controls. You can think of a control as something that either a service provider (in this case, Microsoft) or a tenant must do as part of the work to satisfy a regulation or meet a standard. The biggest benefit of the Compliance Manager is how Microsoft has broken down complex regulations like GDPR into the controls. For GDPR, 71 controls are assigned to Microsoft and 47 to the customer (see Jussi Roine’s review).

Microsoft’s controls have all passed testing by an independent auditor. Given that all 71 controls are checked, one interpretation is that Microsoft believes that Office 365 satisfies GDPR, even if they have made no such claim. Microsoft does not say who carried out the audit or what plan or other software (like add-ons) the examined Office 365 tenant used. This is disappointing because a big difference exists in the compliance functionality available in different plans. For example, if you run Office 365 E5, you can deploy auto-label policies (part of advanced Office 365 data governance) to find and classify documents that hold PII data.

Assigning Work Through Controls

With 47 controls to satisfy, any Office 365 tenant has a lot of work to do to make sure that they can cope with GDPR. Compliance Manager tells them what needs to be done but gives no practical assistance to manage the actual work. You can assign people to work on a control (the list of names comes from the GAL), but you cannot assign a group or multiple people (Figure 2). And then you must tell the assignee that they have work to do because the email notification does not work yet (it’s coming soon).

Figure 2: Assigning someone to a GDPR control (image credit: Tony Redmond)

Of course, email assumes that an Office 365 tenant uses Exchange Online. Most do, but some do not.

You can also upload documents to Compliance Manager for each control. Presumably these are documents to prove that the work is done. But the documents are not stored inside Office 365. All in all, using the Compliance Manager to track work is an exhaustingly manual process.

Leveraging Office 365 to Satisfy GDPR

If Office 365 has anything, it possesses collaboration technology. Why not harness that technology to automate what is essentially an exercise in paperwork, one that requires collaboration with people drawn from across the organization?

Two obvious candidates present themselves: Planner to track the tasks involved in satisfying controls, and Teams for collaboration. Outlook or Yammer Groups could also be used, but Teams and Planner are more tightly integrated at this point.

Creating a GDPR Plan

To implement the solution, I first created a new plan with Planner. Creating a new plan also creates a new Office 365 Group, to which I added the people who would work on the GDPR controls as members. I then created a set of buckets in the plan matching the categories Microsoft uses to divide up the GDPR controls.

Next, I created a task for each control in the appropriate bucket and assigned it to the individuals responsible (Figure 3). The description is cut and pasted from the Compliance Center. You can tailor the text to meet the unique needs of the organization, add checklist items, and add attachments that the person assigned the task might need to understand what must be done. Planner also has colored tabs for tasks that could be used to indicate departments, like IT, Finance, Legal, and so on.

Figure 3: Creating a task for a control (image credit: Tony Redmond)

After the tasks are created and assigned, it is easy to track progress through Planner (Figure 4). Although Planner has only a few graphs now, the Planner developers have promised that a new schedule view will be available soon.

Figure 4: Tracking progress towards GDPR (image credit: Tony Redmond)

Involving Teams

Teams also uses Office 365 Groups for identity and membership, so it did not take long to team-enable the group. I then added a Planner tab and connected it to the plan (Figure 5). Team members can collaborate to achieve the necessary controls. Any documents needed can be assembled in Teams and stored in the SharePoint document library for the group.

Figure 5: Using Teams to collaborate on a GDPR control (image credit: Tony Redmond)

Voilà! I now have the ability for people to work through the controls necessary for the organization to satisfy GDPR.


Of course, it would be nice if Microsoft built the necessary intelligence into Compliance Manager to create the Office 365 Group, plan, and team and export the controls information to the plan, probably using the Microsoft Graph APIs. However, this is preview software, and it is therefore only the start of what might happen in the future. Feel free to automate the process yourself if you feel like a challenge!
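
To give a feel for what that automation could look like, the raw Microsoft Graph calls map neatly onto the manual steps above (a sketch; IDs, names, and titles are placeholders, and request bodies are trimmed to the essentials):

POST https://graph.microsoft.com/v1.0/groups
  { "displayName": "GDPR Controls", "mailNickname": "gdprcontrols",
    "groupTypes": ["Unified"], "mailEnabled": true, "securityEnabled": false }

PUT https://graph.microsoft.com/v1.0/groups/<group-id>/team
  { }

POST https://graph.microsoft.com/v1.0/planner/plans
  { "owner": "<group-id>", "title": "GDPR Controls" }

POST https://graph.microsoft.com/v1.0/planner/buckets
  { "planId": "<plan-id>", "name": "Data Protection" }

POST https://graph.microsoft.com/v1.0/planner/tasks
  { "planId": "<plan-id>", "bucketId": "<bucket-id>", "title": "Control 7.5.3" }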

Compliance is Difficult

Compliance is easy in concept, but difficult to implement in reality. People are always the weakest link. Microsoft’s Compliance Manager breaks down complex regulations into digestible chunks. Using collaboration software like Planner and Teams to help people work together to prepare for something like GDPR just makes sense. Being able to base that activity on those digestible chunks is even better.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Managing GDPR with Teams, Planner, and Compliance Manager appeared first on Petri.

Buoy uses AI and machine learning to keep your water bills low

The content below is taken from the original ( Buoy uses AI and machine learning to keep your water bills low), to continue reading please visit the site. Remember to respect the Author & Copyright.

Buoy is a device that puts machine learning to work to save on your water bill. The IoT device connects to your home's WiFi network and water supply to monitor how much is going where on a use-by-use basis (faucet shower, washing machine, etc..), in…

What is Microsoft Windows 10 Signature Edition?

The content below is taken from the original ( What is Microsoft Windows 10 Signature Edition?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows 10 Signature

For users planning to buy a new Windows 10 PC, completely devoid of bloatware, Microsoft has an answer – Microsoft Windows 10 Signature Edition! This new line of PCs represents a stripped-down version of Windows that is stronger, faster and […]

This post What is Microsoft Windows 10 Signature Edition? is from TheWindowsClub.com.

Microsoft Office is now available for all Chromebooks

The content below is taken from the original ( Microsoft Office is now available for all Chromebooks), to continue reading please visit the site. Remember to respect the Author & Copyright.

It took its sweet time, but Microsoft Office for Android is now available on all Play Store-compatible Chromebooks, according to Chrome Unboxed. The software's convoluted journey en route to Google's laptops is well documented. As a recap, when Andro…