When is a Barracuda not a Barracuda? When it’s really AWS S3

The content below is taken from the original (When is a Barracuda not a Barracuda? When it’s really AWS S3), to continue reading please visit the site. Remember to respect the Author & Copyright.

Barracuda’s backup appliances can now replicate data to Amazon’s S3 cloud silos.

According to the California-based outfit, its backup appliance is now available in three flavors:

  • On-premises physical server
  • On-premises virtual server
  • In-cloud virtual server
Barracuda_AWS_Chart

Barracuda backup schematic

Data can be backed up from Office 365, physical machines, and virtual machines running in Hyper-V or VMware systems, to an appliance. This box can then replicate its backups to a second appliance, typically at a remote site, providing a form of disaster recovery, or send the data to S3 buckets in AWS. For small and medium businesses with no second data centre, replicating to Amazon’s silos provides an off-site protection resource.

Barracuda’s customers, resellers and managed services providers can add this replicate-to-Amazon option to their Barracuda product set. Users don’t have to deploy compute processes inside AWS or become AWS infrastructure experts; Barracuda takes care of that side of things, apparently. The biz provides the backend service, which piggybacks on AWS, and your boxes connect to it.

Barracuda Backup replication to AWS is available now in North America, with worldwide deployment expected to be rolled out in the coming months. ®


Ghost, the open source blogging system, is ready for prime time

The content below is taken from the original (Ghost, the open source blogging system, is ready for prime time), to continue reading please visit the site. Remember to respect the Author & Copyright.

Four long years ago John O’Nolan released a content management system for bloggers that was as elegant as it was spooky. Called Ghost, the original app was a promising Kickstarter product with little pizazz. Now the app is ready to take on your toughest blogs.

O’Nolan just released version 1.0 of the software, a move that updates the tool with the best of modern blogging tools. You can download the self-hosted version here or use O’Nolan’s hosting service to try it out free.

“About four years ago we launched Ghost on Kickstarter as a tiny little prototype of an idea to create the web’s next great open source blogging platform,” said O’Nolan. After “2,600 commits” he released the 1.0 version complete with a new editor and improved features.

The platform uses a traditional Markdown editor and a new block-based editor called Koenig. The new editor lets you edit posts more cleanly within blocks, a feature that uses something called MobileDoc and Ember.js to render complex pages quickly and easily. The team also started a journalism program to support content providers.

While tools like WordPress still rule the day, it’s good to know that there are still strong alternatives out there for the content manager. Although this software has a name that portends dark sorcery and dread magic, I still think it has a “ghost” of a chance.

Softlab transforms ’empty’ space with light and mirrors

The content below is taken from the original (Softlab transforms ’empty’ space with light and mirrors), to continue reading please visit the site. Remember to respect the Author & Copyright.

Recently, Engadget visited The Lab, HP’s trippy art exhibition incongruously placed in the middle of the Panorama Music Festival in NYC. It proved surprisingly popular among festival-goers thanks to the visual and auditory sensory experiences (and possibly because illegal substances were involved). One in particular stood out from a technological and artistic point of view, however: "Volume," an installation by NYC’s SOFTLab.

The installation (below) is made up of 100 mirrored panels that move individually via custom servos, tracking viewers as they move around the piece using a depth camera array. "Using a weighted average of the various people being tracked, the mirrors rotate to face the nearest person," SOFTLab explains. Those mirrors reflect only the light and the viewers, thanks to the sparse setting around them.

Meanwhile, LEDs controlled by microphones move the mirror panels up and down based on the ambient sound coming from around the installation. The overall effect is of light pulsing and swarming back and forth, with mirrors reflecting the spectators in weird slices, all set to appropriately spacey music.

The whole thing is controlled by a computer with a visual interface depicting the mirrors that can be rotated in 3D. It can be tweaked for greater intensity and the number of exhibition viewers. (For more on how the exhibit was done technically, check out the making-of video.)

Like the other Panorama installations, Volume was designed to evoke "whoa" reactions from the viewers and present good selfie opportunities. However, you can read more into it if you’re into techie art. "The installation was inspired by the ability of light and sound to form space through reflection and their dependence on atmosphere," SOFTLab points out. In other words, it’s meant to make us think a bit more about space that we normally consider empty.

The designers aim to show that it’s a good thing it’s not empty. "Small changes in this volume of transparent material allows light and sound to move through space," it notes. "The mirrors in our installation represent these particles acting in harmony to challenge and enhance what we see."

To look at it another way, the exhibition is showing that there’s often more behind things than what you can see. By tracking your movement and sounds, and responding dramatically in kind, "Volume" illustrates neatly that the shallow way we often perceive things and people can completely change how they behave in return.

Via: Design Boom

Source: Softlab

Launch – AWS Glue Now Generally Available

The content below is taken from the original (Launch – AWS Glue Now Generally Available), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we’re excited to announce the general availability of AWS Glue. Glue is a fully managed, serverless, and cloud-optimized extract, transform and load (ETL) service. Glue is different from other ETL services and platforms in a few very important ways.

First, Glue is “serverless” – you don’t need to provision or manage any resources and you only pay for resources when Glue is actively running. Second, Glue provides crawlers that can automatically detect and infer schemas from many data sources, data types, and across various types of partitions. It stores these generated schemas in a centralized Data Catalog for editing, versioning, querying, and analysis. Third, Glue can automatically generate ETL scripts (in Python!) to translate your data from your source formats to your target formats. Finally, Glue allows you to create development endpoints that allow your developers to use their favorite toolchains to construct their ETL scripts. Ok, let’s dive deep with an example.

In my job as a Developer Evangelist I spend a lot of time traveling and I thought it would be cool to play with some flight data. The Bureau of Transportation Statistics is kind enough to share all of this data for anyone to use here. We can easily download this data and put it in an Amazon Simple Storage Service (S3) bucket. This data will be the basis of our work today.

Crawlers

First, we need to create a Crawler for our flights data from S3. We’ll select Crawlers in the Glue console and follow the on-screen prompts from there. I’ll specify s3://crawler-public-us-east-1/flight/2016/csv/ as my first data source (we can add more later if needed). Next, we’ll create a database called flights and give our tables a prefix of flights as well.

The Crawler will go over our dataset, detect partitions through the various folders – in this case months of the year – detect the schema, and build a table. We could add additional data sources and jobs to our crawler, or create separate crawlers that push data into the same database, but for now let’s look at the autogenerated schema.
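If you would rather script this step than click through the console, the same crawler can be defined with the AWS SDK. Below is a minimal boto3 sketch; the IAM role name is a placeholder rather than anything from this walkthrough:

```python
# Minimal boto3 sketch of the crawler configured above. The IAM role name is a
# placeholder; the data source, database, and table prefix match the walkthrough.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="flights-crawler",
    Role="AWSGlueServiceRole-flights",  # hypothetical role with Glue + S3 permissions
    DatabaseName="flights",
    TablePrefix="flights",
    Targets={"S3Targets": [{"Path": "s3://crawler-public-us-east-1/flight/2016/csv/"}]},
)

glue.start_crawler(Name="flights-crawler")
```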

I’m going to make a quick schema change to year, moving it from BIGINT to INT. Then I can compare the two versions of the schema if needed.

Now that we know how to correctly parse this data let’s go ahead and do some transforms.

ETL Jobs

Now we’ll navigate to the Jobs subconsole and click Add Job. We’ll follow the prompts from there, giving our job a name, selecting a data source, and an S3 location for temporary files. Next we add our target by specifying “Create tables in your data target”, and we’ll specify an S3 location in Parquet format as our target.

After clicking Next, we’re at a screen showing the various mappings proposed by Glue. Now we can make manual column adjustments as needed – in this case we’re just going to use the X button to remove a few columns that we don’t need.

This brings us to my favorite part. This is what I absolutely love about Glue.

Glue generated a PySpark script to transform our data based on the information we’ve given it so far. On the left hand side we can see a diagram documenting the flow of the ETL job. On the top right we see a series of buttons that we can use to add annotated data sources and targets, transforms, spigots, and other features. This is the interface I get if I click on transform.

If we add any of these transforms or additional data sources, Glue will update the diagram on the left giving us a useful visualization of the flow of our data. We can also just write our own code into the console and have it run. We can add triggers to this job that fire on completion of another job, a schedule, or on demand. That way if we add more flight data we can reload this same data back into S3 in the format we need.
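For a flavour of what the generated code looks like, here is a condensed sketch in the style of a Glue-generated PySpark script. The database name and source match this walkthrough, but the catalog table name, column mappings, and output bucket are illustrative placeholders, not the exact script Glue emitted:

```python
# Condensed sketch of a Glue ETL script: read the crawled table from the Data
# Catalog, keep a few columns, and write the result to S3 as Parquet. The table
# name, mappings, and output path are placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the table the crawler created in the "flights" database
flights = glueContext.create_dynamic_frame.from_catalog(
    database="flights", table_name="flightscsv")  # table name is a placeholder

# Keep and cast only the columns we care about (illustrative mapping)
mapped = ApplyMapping.apply(
    frame=flights,
    mappings=[("year", "int", "year", "int"),
              ("quarter", "int", "quarter", "int"),
              ("origin", "string", "origin", "string"),
              ("dest", "string", "dest", "string")])

# Write the transformed data back to S3 in Parquet format
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-example-bucket/flights-parquet/"},  # placeholder
    format="parquet")

job.commit()
```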

I could spend all day writing about the power and versatility of the jobs console but Glue still has more features I want to cover. So, while I might love the script editing console, I know many people prefer their own development environments, tools, and IDEs. Let’s figure out how we can use those with Glue.

Development Endpoints and Notebooks

A Development Endpoint is an environment used to develop and test our Glue scripts. If we navigate to “Dev endpoints” in the Glue console we can click “Add endpoint” in the top right to get started. Next we’ll select a VPC and a security group that references itself, and then we wait for it to provision.


Once it’s provisioned we can create an Apache Zeppelin notebook server by going to actions and clicking create notebook server. We give our instance an IAM role and make sure it has permissions to talk to our data sources. Then we can either SSH into the server or connect to the notebook to interactively develop our script.
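The same endpoint can also be provisioned through the API; a rough boto3 sketch follows, where the role ARN, subnet, and security group IDs are placeholders:

```python
# Rough sketch of creating a Glue development endpoint with boto3. The role ARN,
# subnet, and security group below are placeholders for your own VPC resources.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_dev_endpoint(
    EndpointName="flights-dev",
    RoleArn="arn:aws:iam::123456789012:role/AWSGlueServiceRole-flights",  # placeholder
    SubnetId="subnet-0abc1234",        # a subnet in the chosen VPC (placeholder)
    SecurityGroupIds=["sg-0abc1234"],  # the self-referencing security group (placeholder)
    NumberOfNodes=2,
)
```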

Pricing and Documentation

You can see detailed pricing information here. Glue crawlers, ETL jobs, and development endpoints are all billed in Data Processing Unit (DPU) hours, metered by the minute. Each DPU-hour costs $0.44 in us-east-1, and a single DPU provides 4 vCPUs and 16GB of memory. For example, a job that consumes 10 DPUs for 30 minutes would cost 10 × 0.5 × $0.44 = $2.20.

We’ve only covered about half of the features that Glue has, so I want to encourage everyone who made it this far into the post to go read the documentation and service FAQs. Glue also has a rich and powerful API that allows you to do anything the console can do and more.

We’re also releasing two new projects today. The aws-glue-libs provide a set of utilities for connecting to and talking with Glue. The aws-glue-samples repo contains a set of example jobs.

I hope you find that using Glue reduces the time it takes to start doing things with your data. Look for another post from me on AWS Glue soon because I can’t stop playing with this new service.
Randall

Mighty fills an iPod shuffle-sized hole for Spotify subscribers

The content below is taken from the original (Mighty fills an iPod shuffle-sized hole for Spotify subscribers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Everything that’s old is new again. That’s how the tech game is played. Products evolve and leave vacuums for new startups to rush in and fill. For Mighty, the long, slow death of the MP3 player presented just such an opening.

The company launched a Kickstarter last year with the promise of “streaming music without your phone,” delivering the still-tangible benefits of devoted music hardware for the Spotify generation. The startup built up enough excitement to hit $300k on the crowdfunding site, and the recent end of the iPod shuffle and nano has only furthered that interest among consumers.

CEO Anthony Mendelson tells us that the company has embraced the media’s christening of the devices as an “iPod shuffle for Spotify.” While much of the underlying technology is different, the principle is basically the same: a screenless, clip-on player for taking music on the go.
It’s a small niche in the overall music listening market, and clearly Apple didn’t see enough value in continuing to produce the things.

But there are certain scenarios that aren’t served by phones alone. Fitness is probably the largest — going for a run with a smartphone is a pain. There’s also an opening for users with an underground commute and people who frequently have to switch into airplane mode. Parents have expressed interest in the device so they don’t have to hand kids their phone to listen to music, and people have also picked up a Mighty for elderly parents.

Price was always the other key to the shuffle’s success, and Mighty has taken great pains to keep its first player under $100. That’s easier said than done for a relatively small run from a brand new company, but the device (known internally as the M1) is priced at $89 — pricier than the last shuffle, but with considerably higher capacity (8GB). Hitting that price point required some compromises on the company’s part, and there are still a fair number of wrinkles to iron out here, but the first Mighty will tick most of the important boxes for users.

Small, but…

The Mighty’s not a bad-looking player. It’s a bit boxy and fairly reminiscent of the shuffle — the form factor, after all, is fairly limiting. The startup’s industrial design lead came over from Samsung and took the necessary precautions to avoid coming too close to Apple’s design language here.

The result is a similar circular button array, with play/pause at the center. The circle sports volume and track advance options, though, oddly, there’s no fast forward or rewind yet. It’s a pretty big blind spot for a music player, but apparently the Spotify API that allows for it creates a pretty big lag. Mendelson tells me the company is working on a fix, but didn’t offer a timeframe.

Above all that is a button designed specifically for cycling through playlists, which are the primary method for loading music from Spotify at present. Playlists certainly make sense as a method for grouping together music on a player without a screen. Otherwise it’s all a big crapshoot. There’s also a robot voice that lets you know the name of each playlist before it begins.

The player is made from a plastic composite. It feels a bit cheap — one of those aforementioned cost-cutting measures. Mendelson tells me the company is looking into building an aluminum model for the next go-round, but the current model is still reasonably rugged. It can withstand a quick dip in water and will likely survive a couple of falls onto the harsh concrete below.

There’s also a big clip on the back, a la the shuffle, and a headphone jack up top. The device will connect to Bluetooth headphones, but excluding a hardwired option wouldn’t have made a lot of sense on a device that counts cost among its main selling points.

Stream team

Apparently Spotify was so impressed with Mighty’s Kickstarter that it decided to work closely with the startup. For Spotify, it means a piece of hardware devoted exclusively to its service, without having to invest in any R&D. For Mighty, it means a figurative seal of approval, along with a literal one on the product’s packaging.

As far as actual integration goes, the setup requires tying a bunch of accounts together. Mighty has its own app, which needs to be associated with the Spotify app. Users can also tie it directly to Facebook for quicker account setup. Direct integration with Spotify would be nice, but the streaming service isn’t really set up for that. Instead, users drag and drop content through the Mighty app.

That means there’s a lot of switching back and forth — you make the playlists on Spotify and import them through Mighty. It feels like a bit of a workaround, but unless Spotify ends up embracing this sort of technology more fully, it’s a necessary one. At the very least, it gets the job done.

Mighty’s full embrace of the Spotify SDK means offline streaming works exactly the way the service wants. The built-in WiFi means the device connects to Spotify’s servers at regular intervals, ensuring that the account associated with the device is still active (up to three offline devices can be associated with one account). If it goes 30 days without checking in, the system tacks a vocal watermark onto the music, telling you to sync it, and presumably ruining the listening experience in the process.

One step beyond

The Mighty is a good first step for the company. And if you’ve had a shuffle-sized hole in your life since streaming conquered the music industry, this should fill it pretty nicely. But there’s still a lot of room for the product to grow, and in some ways this first product feels a little rushed. Mendelson told me during our conversation that the company worked fast to be “first to market.”

It’s odd phrasing for a device that’s essentially a streaming update to a bygone product, but Mighty clearly wasn’t the only company circling the idea. Pebble was flirting with the idea with Core before the company imploded and was subsequently swallowed up by Fitbit. If the LTE-enabled Apple Watch rumors prove true, that could potentially cut into a significant piece of Mighty’s niche, by playing to runners looking for musical accompaniment beyond the phone.

Mighty is clearly already looking beyond this initial release. Mendelson has previewed a number of things the company is working on, both to improve this first generation and to offer up in the inevitable M2 device.

Some of the key things coming to this first device include:

  • Fast forward/rewind
  • Shuffle (another glaring omission, given all of the iPod comparisons)
  • Support for dragging and dropping non-playlist content, i.e. dragging songs in by album or folder
  • The ability to use the product as a storage device for non-music content

And for the M2, we can expect some combination of the following:

  • Aluminum backing and clip (the front will remain plastic for Bluetooth transmission purposes)
  • Full waterproofing
  • The addition of GPS and an accelerometer for fitness functionality, including syncing music to running speed

You can probably expect that model to arrive roughly a year from now, at a higher price point than the current version. More than likely it will exist alongside its predecessor as Mighty continues to work to improve functionality on that unit. It remains to be seen how large the market really is for these devices, but as it stands, the Mighty does a pretty admirable job filling in the gap left by the iPod shuffle and its ilk.

Microsoft Just Released a Powershell Module Browser

The content below is taken from the original (Microsoft Just Released a Powershell Module Browser), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft just released a PowerShell module browser that lets you quickly search through different PowerShell modules: http://bit.ly/2vqLXZC

Here's the announcement: http://bit.ly/2uFbhct

The Plan to Prove Microdosing Makes You Smarter – a new placebo-controlled study of LSD microdosing with participants being tested with brain scans while playing Go against a computer.

The content below is taken from the original (The Plan to Prove Microdosing Makes You Smarter – a new placebo-controlled study of LSD microdosing with participants being tested with brain scans while playing Go against a computer.), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2uEZo6s

Announcing the PowerShell Module Browser

The content below is taken from the original (Announcing the PowerShell Module Browser), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2uFbhct

How the hospitality industry will profit from the IoT

The content below is taken from the original (How the hospitality industry will profit from the IoT), to continue reading please visit the site. Remember to respect the Author & Copyright.

Connecting the world changes everything. That’s what businesses and consumers are learning as they embrace the Internet of Things (IoT) for everything from household garage door openers to smart-city applications that solve traffic congestion and reduce crime.

But IoT is more significant than just adding connectivity to existing products or services. In fact, it is about changing the way products and services deliver value. In the process, products are becoming services, and services are becoming more intelligent.

The hospitality industry is not immune to this evolution, and, in fact, it is well positioned to benefit from IoT. That’s because the industry is poised to improve the customer experience while simultaneously reducing costs.

The modern hotel room is far from modern in that it is mostly disconnected. Hotel operations rely on property management systems that require mostly manual entries to track resources. Much of this work centers around the front desk — a once-critical part of the hotel stay that is on the verge of obsolescence.

IoT: Taking the Temperature

Many hotels already use IoT to control in-room thermostats. By switching to a connected thermostat, hotels can adjust room temperatures at check-in and checkout. The connected thermostat eliminates the cost of cooling or heating vacant rooms. It also reduces the likelihood of marring the first impression of a room with an uninviting, uninhabitable temperature.

Taking heating and cooling a bit further, when hotels combine the thermostat with other sensors, the air conditioning can turn off automatically when a guest opens a window or balcony door. Another opportunity is to tie in automated window coverings that can mitigate temperature swings due to afternoon sunshine. Time of day or temperature sensors could activate these environmental adjustments.

Too much automation can make guests uncomfortable, however, so algorithms could differ depending on whether or not the guest is in the room. This development requires occupancy detection, which could provide several additional benefits.

IoT beyond the thermostat

Today, hotels really don’t know when a guest room is empty. Knocking is not the ideal solution. Knocking can wake or interrupt a guest, and the lack of a response to a knock is not conclusive. Intelligent sensors, though, can help detect occupancy. If the last detected motion was near the door, combined with an opening of the door, it may be reasonable to assume the room is empty.
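As a toy sketch of that heuristic, the rule could be expressed roughly like this; the zone label and the 30-second window are assumptions, not something from the article:

```python
# Toy occupancy heuristic: if the last motion was detected near the door and the
# door opened shortly afterwards, assume the room is probably empty.
# The "door" zone label and the 30-second window are assumptions.
from datetime import datetime, timedelta

def room_probably_empty(last_motion_zone: str,
                        last_motion_time: datetime,
                        door_opened_time: datetime,
                        window: timedelta = timedelta(seconds=30)) -> bool:
    return (last_motion_zone == "door"
            and timedelta(0) <= door_opened_time - last_motion_time <= window)
```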

Just a little bit newer than the knock is the in-room alarm clock. These disconnected devices invariably display the wrong time and sometimes come pre-programmed with the last guest’s alarm setting. A connected clock could be centrally or automatically adjusted (and corrected for daylight saving time). Then, when a guest checks out, the system could clear prior alarm settings, just as current telecom systems clear voicemails.

IoT calling

The phone, too, is ripe for replacement. One big opportunity: using the phone’s speaker for paging. For example, if the fire alarm sounds, the hotel could provide instructions or information via a paging system. Also, and this sounds a bit invasive, a tone-first, hands-free intercom could be useful. For example, it would enable hotel staff to check in on a guest to see if assistance is needed. It’s a better first option than breaking down a door.

One more proposed change to in-room phones: one-way video. High-end hotels should embrace one-way video on their internal phone system. When a guest calls for service, they should see the hotel employee on the other end. If you truly want to create rapport and personalized service, a smile goes a long way. It’s a cost-effective way to differentiate in a service business.

In-room entertainment also gets better with connectivity. Hotels understand that guests value entertainment options, but premises-based movie systems are expensive and complex. Premium movie channels are a common alternative, but they offer a limited selection at fixed times.


Marriott has gone further with a Netflix option. The hotel provides televisions equipped for Netflix, and the guest just needs to enter their personal Netflix account information. It’s an interesting compromise, as the guests provide their own account but Marriott provides the premium internet service. (Often, the same experience on a personal computer would require the guest to purchase premium internet.) Upon the guest’s checkout, Marriott automatically erases the guest’s account information.

Assuming the guest has a Netflix subscription, it’s a nice win for both parties. Guests get a large catalog of on-demand entertainment, and the hotel gets out of the movie business. Marriott includes premium movie channels as an alternative for those who don’t subscribe to Netflix. I’m hopeful that in the future, Marriott will auto-load my credentials using information that I store in my Marriott profile.

Automating the right touch

Some hotels, such as Hilton, are experimenting with connected, Bluetooth door locks so that a guest can use their smartphone as a key. I’m not a fan of this approach because travel is already tough on my smartphone battery. I use it for my boarding pass, ride-hailing apps, online reading, and navigation, so I feel lucky to arrive at my hotel with a working phone.

A better approach is a kiosk for self check-in, similar to what the airlines do. The smartphone app may play a role, but I’d prefer to get a separate card key from the kiosk. The modern card key is small, waterproof, and disposable, and doesn’t require batteries. Plus, card keys are already installed everywhere and even provide the hotel advertising revenue — so why replace a good thing?

Too much automation can be detrimental. For example, high-end restaurants are unlikely to move to automated server-bots anytime soon. However, people do value and appreciate efficiency. When hospitality more closely embraces IoT, hotels can improve the guest experience and lower costs, and when done right, they can avoid interfering negatively in a guest’s stay.


Find similar free alternatives to paid Fonts

The content below is taken from the original (Find similar free alternatives to paid Fonts), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you are a website developer, you will know how important it is to choose a good-looking and easy-to-read font for your text. There are many websites out there that use a bad font and give visitors a poor user experience. It is therefore important that you use a good web font. If you are building a website, you have probably already checked some sites for inspiration. Once you have identified a premium font that you want to use but do not want to spend money on, here is how to find free alternatives to paid fonts using Alternatype.

Alternatype is a free web app that helps people find the best possible alternatives to some popular commercial fonts. As it is illegal to use fonts downloaded from torrent sites, you may want to use a free alternative to a paid font instead. This web tool gives you a download link for the free alternatives right on the results screen.

Find free alternatives to paid Fonts

It is very easy to use the tool, and you do not have to create any account to use it. Head over to the Alternatype website and enter a font name. If your desired font is in their database, you will find it in the instant search results. Choose the font from the given list. If the font is not listed in the instant search results, too bad!


The result page lists the suggested free alternatives.

From here, you can get the download link, or check a specimen.

It is as simple as that. The only problem with this tool is that it has a minimal database, so you may not find alternatives to some of the latest fonts. However, unless your desired font is very new, there is a good chance of getting a result right away.

Tools to find similar Fonts

When it comes to finding a free alternative to a paid font, Alternatype seems to be the only player in this business. However, if you want to find similar fonts – paid or free – you can use the following tools.

1] Identifont


Identifont is a great web app that allows people to find fonts similar to paid commercial fonts. Its database is much bigger than Alternatype’s: you can expect at least thirty similar font suggestions for every query. Using this tool is pretty easy as well.

Visit the Identifont website, enter your font name, and press the Enter button. You should find some similar font suggestions on the left-hand side. This list should contain paid as well as free font suggestions.

2] FontSquirrel Matcherator


This is another awesome tool for finding the exact or a similar font based on text in an image. In other words, you upload an image containing your desired font, and the FontSquirrel Matcherator analyzes it to help you find the font that you want to download or buy.

Go to the FontSquirrel website, upload your image, select the text, and click the Matcherate It button. You will get all the similar font suggestions right away.

Related read: Free Fonts download, for logos and commercial use.

Email Reply

The content below is taken from the original (Email Reply), to continue reading please visit the site. Remember to respect the Author & Copyright.

I would be honored, but I know I don't belong in your network. The person you invited was someone who had not yet inflicted this two-year ordeal upon you. I'm no longer that person.

UK issues stricter security guidelines for connected cars

The content below is taken from the original (UK issues stricter security guidelines for connected cars), to continue reading please visit the site. Remember to respect the Author & Copyright.

Nervous about the thought of your connected car falling victim to hacks, especially when self-driving cars hit the streets in earnest? So is the British government — it just issued tougher guidelines for the security of networked vehicles. It wants security to be part of the design process across every partner involved, even at the board of directors’ level, and for companies to keep cars updated throughout their lifespans. UK officials also call for a "defence-in-depth" strategy that minimizes vulnerabilities (such as by walling off systems to limit the damage of a hack), and for very limited use of personal data, giving you control over what the car transmits.

The government has also reiterated its hopes to build a "new framework" for self-driving car insurance, making it clear who pays when an autonomous vehicle crashes.

This doesn’t guarantee that British cars will be significantly safer, or that manufacturers will take the spirit of the guidelines to heart. Between this and advice from American regulators, though, it won’t be surprising if automakers use official recommendations as a starting point. And to some extent, companies already do. Tesla, for instance, encourages researchers to report vulnerabilities and has isolated systems like the brakes and powertrain. In that sense, the UK is merely codifying principles that have already existed for a while.

Via: Reuters

Source: Gov.uk

Researchers create instant hydrogen from water and aluminum

The content below is taken from the original (Researchers create instant hydrogen from water and aluminum), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hydrogen power seemed all the rage for a while, until we had to face the practical considerations of using it. Yes, it’s clean and abundant, but it’s also incredibly difficult to transport. One team may have accidentally found the key to jump-starting this struggling hydrogen economy, though; researchers at the US Army Research Laboratory at Aberdeen Proving Ground, Maryland, made a chance discovery when they poured water on a new aluminum alloy. It began to give off hydrogen automatically.

It is possible for hydrogen to be a byproduct of a reaction between water and regular aluminum, but only at extremely high temperatures or with added catalysts. Additionally, it would take hours for the hydrogen to be produced, and the reaction had an efficiency of only about 50 percent.

That’s not the case with this new reaction; "Ours does it to nearly 100 per cent efficiency in less than 3 minutes," team leader Scott Grendahl told New Scientist. That’s a pretty impressive statistic, especially when you consider it’s an automatic reaction. Aluminum and water can be transported easily and are stable. This can easily be turned into a situation where a lot of hydrogen can be produced on demand, in a short amount of time. This eliminates many of the issues that forced companies to seek alternatives to hydrogen for a clean energy source.

That doesn’t mean this is the solution to all our hydrogen problems, though. There are still many questions that need to be answered. First of all, can this be replicated outside the lab? All signs point to yes, but experiments can often work in a lab setting and fail in field tests. How difficult is this new aluminum alloy to produce, and what would the costs of mass production be? How much of it would you need to make this work? What are the environmental costs of producing this increased amount of aluminum alloy? This is an encouraging first step to be sure, but there’s a lot more we need to know before we declare this the salvation of hydrogen fuel.

Source: New Scientist

The GDPR Deadline is Fast Approaching; How Enterprises are Readying Themselves

The content below is taken from the original (The GDPR Deadline is Fast Approaching; How Enterprises are Readying Themselves), to continue reading please visit the site. Remember to respect the Author & Copyright.

The deadline for compliance with the European Union General Data Protection Regulation (GDPR) is May 25, 2018. Many organizations have already spent countless hours preparing for the deadline, while other organizations are just getting around to reading up on it. GDPR, like Y2K a couple of decades ago, has international implications that, for some organizations, HAVE to be addressed because GDPR will impact the lifeblood of their operations; for most organizations, some due diligence needs to be done to ensure they are in compliance with the regulation.

GDPR is Today’s Y2K

I reference Y2K because I was one of the advisors to the United States White House on Y2K and spent the latter part of the decade before the millennium switchover traveling around the globe helping organizations prepare for 1/1/2000. Today with GDPR, as I did then with Y2K, I believe there are fundamental things every organization needs to do to be prepared for the deadline, without getting caught up in the hype and the Nth-degree speculation that will drive you crazy.

What is GDPR?

To help those catching up on what GDPR is: the regulation technically went into effect in 2016, and the deadline for compliance is May 25, 2018. The thing that scares people is that fines for non-compliance run up to €20 million or 4% of the company’s prior-year worldwide revenue, an alarming number that gets everyone’s attention.

While there are many tenets to GDPR, I net it down to three major things:

  • Prevention of “Tracking” Individuals: This is the big thing in GDPR that goes after the big Internet companies (Google, Facebook, Amazon) that gather personal information on individuals, track the websites they visit through cookies, and actively advertise to individuals using that tracked information. GDPR directly addresses the practice of gathering information on what individuals buy, the sites they visit, and the content they’ve searched for by requiring not only consent but also a clearly stated purpose for WHAT that information will be used.
  • Prevention of Retaining Personally Identifiable Information (PII): This tenet is not so new and has been a big piece of legislation around the world to protect individuals’ privacy. GDPR, like other global regulations on PII, sets limits on what personal information can be gathered (name, date of birth, address, etc), how that personal information needs to be stored and protected, and what needs to be done in the case of a breach.
  • Cross-Border Transfer of Information: GDPR stipulates that EU residents’ (citizens and even individuals who are temporarily working and living in the EU) information should remain in the EU –or– if the information leaves the EU, that the target destination for the storage of the information meets specific European Commission approvals.

Of the three major tenets I note for GDPR, the second and third are things that we’ve been addressing for some time now with the predecessor to GDPR (the Data Protection Directive 95/46/EC) and the various privacy/PII laws that are already in effect. So the big thing in GDPR is around the collection, storage, and tracking mechanisms commonly used by Internet organizations for web-based shoppers and social media participants. THOSE are the organizations that have been working very hard for the past couple of years devising ways to inform, get consent, and handle tracking in a manner that fits within the requirements of GDPR.

Tenets of GDPR

There are other tenets of GDPR that organizations need to address and that are commonly discussed in conversations about GDPR. They include:

  • Data Protection Officer and Vendor Management: GDPR stipulates that organizations impacted by GDPR need to have a Data Protection Officer identified and a process for vendor management as it relates to GDPR. This individual will have the role of overseeing compliance with GDPR internally and with vendors/suppliers.
  • Codes of Conduct: GDPR requires organizations to have stated codes of conduct covering how data will be extracted and used, the timeframe for use, how the organization will protect the privacy and rights of the individuals the data was extracted from, and how users can exercise the right to request that “their data” be purged.
  • Data Profiling / Data Consent: As noted previously, GDPR has tight rules on using data to profile individuals when it can be directly associated back to a named individual. Use of identifiable information (like cookies) requires explicit consent.
  • Cross-Border Transfers: Also as previously noted, GDPR has tight rules on EU data remaining in the EU –or– on the target destination of EU data complying with the same standards expected of information stored in the EU.
  • Data Portability: GDPR has a data portability tenet that allows users to request that their information be “moved” to another provider. Just like phone number portability in the United States, which allows an individual to keep their phone number as they switch from one carrier to another, GDPR data portability gives users the right to request that their emails, photos, documents, and the like be transferable.
  • Pseudonymizing of Personal Data: Fancy word, but effectively the randomizing of data so that it cannot be attributed back to any particular individual, effectively making the data anonymous (see the sketch after this list). However, GDPR stipulates that randomizing the data does not allow an organization to simply collect and use the information as it pleases. GDPR requires an organization to justify why it is collecting the information, what it plans to do with the data, and to define clearly how the data will be eliminated when those stated purposes are no longer valid or applicable.
  • Data Breach Notifications: GDPR tightens the timeframe in which cybersecurity breach notification must be made, with requirements for notification in as little as 72 hours from an organization becoming aware of the breach. There are some variations: individuals need to be notified if information that can be attributed back to them (personally) has been breached; however, if the information has been pseudonymized, only the European Commission needs to be informed.
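As a minimal sketch of what pseudonymization can look like in practice, the snippet below replaces a direct identifier with a keyed hash so records can still be linked without naming anyone. The HMAC approach, key handling, and field names are illustrative assumptions, not requirements spelled out in GDPR:

```python
# Minimal pseudonymization sketch: swap direct identifiers for a keyed hash so
# records remain linkable without revealing who they belong to. The key handling
# and field names are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"keep-this-key-outside-the-dataset"  # hypothetical secret key

def pseudonymise(record: dict) -> dict:
    token = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256).hexdigest()
    stripped = {k: v for k, v in record.items() if k not in ("name", "email")}
    stripped["subject_token"] = token
    return stripped

print(pseudonymise({"name": "Jane Doe", "email": "jane@example.com", "country": "DE"}))
```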

GDPR for Enterprises (not Web/Social Media Providers)

With much of the heft of GDPR focused on Web/Social Media Providers (Facebook, Google, Amazon, etc), the common question for Enterprises (corporations, small businesses, companies headquartered in/out of the EU) is what does a typical business need to think about relative to GDPR?

First of all, GDPR is not a bigger or smaller obligation based on the size of the enterprise. The requirements of GDPR are the same no matter the size, where the organization is headquartered, or the industry the organization is in. GDPR also applies to every organization that does business with companies in the EU, has employees that are citizens of the EU, or even has employees that are foreign citizens but are residing and working in the EU. So the umbrella of who has to comply with GDPR is pretty broad.

A common question is whether an email system hosted in the United States can fit within GDPR requirements. For organizations that have migrated to services hosted by Microsoft (like Office 365) or Google (G Suite), both Microsoft http://bit.ly/2vzuJfw and Google http://bit.ly/2uoZGlJ have officially stated their cloud services WILL be GDPR compliant before the May 25, 2018 deadline. The way these services will be compliant is that the European Commission has already approved and adopted the EU-US Privacy Shield. While GDPR does not specifically refer to the EU-US Privacy Shield, it does explicitly acknowledge the current requirements for Binding Corporate Rules (BCR) for processors and controllers. BCR confirmation is acquired by having auditors validate and certify compliance for organizations in their movement of data globally. The EU-US Privacy Shield fits within these certified Binding Corporate Rules deemed acceptable for GDPR, as it allows the European Commission to conduct periodic reviews to assure that an adequate level of data protection exists in the cross-border transfer of data. What remains for these cloud providers is a formal “sign-off” that they do indeed meet the provisions of GDPR, which is anticipated to be granted without resistance.

Note: On the topic of cross-border transfers, one might hear that the most common cross-border certification, “Safe Harbor,” has been invalidated for GDPR; that is true. On October 6, 2015, the European Court of Justice invalidated the US-EU Safe Harbor Framework. However, Binding Corporate Rules (BCRs) do remain valid. Additionally, organizations can rely on Standard Contractual Clauses (SCCs) that are approved by the European Commission. SCCs are agreements between the EU exporter (i.e. an EU subsidiary) and the data importer (i.e. a US parent company or service provider) on the handling of cross-border transfers. Large enterprises are seeking certification under SCC approvals so that they can move corporate data between corporate offices and datacenters around the world. The SCC validations are not easy to acquire, as they require an audit of the data management, security, handling, and processing of information throughout an enterprise. However, once an enterprise has an SCC, it can more freely move information throughout the organization.

Handling GDPR for Internal Documents and Content

A common question from enterprises is whether email messages and business documents fall under the requirements of GDPR. The answer is generally no: a business document is a business document for the purpose of conducting the business of the organization. Of course, if the document includes the names of employees, their home addresses, their mobile phone numbers, and other personally identifiable information, then the document falls under GDPR as well as other existing laws and regulations on information privacy. However, a business contract, marketing materials, client documents, architectural drawings, and the like exchanged during the normal course of business are not “personal documents” embedded with “personal data” for non-legitimate business uses.

The KEY to handling internal documents and content, and to ensuring the documents do not contain content subject to GDPR or other PII-restricted regulations, is to use content classification. Technology built into Microsoft’s Office 365 can scan content (emails, documents, memos) and auto-classify it as appearing to include PII (birth dates, social security numbers, etc). http://bit.ly/2vzrc0P  By auto-classifying the content, policy rules can be applied that allow the creator of the content to choose who can access the information. Giving control to the originator of the content satisfies the requirements of GDPR by giving the content owner free and direct control over the content and over whom it can be shared with.
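To make the idea concrete, here is an illustrative sketch of the kind of pattern-based PII detection that drives auto-classification. It is not Office 365’s actual classifier; the patterns and labels are assumptions:

```python
# Illustrative PII scanner: look for patterns that resemble personal data and
# return a sensitivity label that a sharing policy could act on.
# The regex patterns and label text are assumptions, not Office 365's rules.
import re

PII_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date_of_birth": re.compile(r"\b(?:0[1-9]|1[0-2])/(?:0[1-9]|[12]\d|3[01])/(?:19|20)\d{2}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def classify(text: str) -> str:
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
    if hits:
        return "Confidential - contains PII ({})".format(", ".join(hits))
    return "General"
```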

Can Employers Force Employees to Give Consent of their PII?

The short answer is NO: GDPR is very clear that consent is not valid unless it is “freely given, specific, informed, and unambiguous.” That means an employee cannot be reprimanded or discriminated against for choosing not to consent to blanket policies. This is why content classification becomes so important: it enables an organization to require users to provide consent, classify, or reclassify content as they deem appropriate on a case-by-case basis.

Collecting and Handling Employee and Customer Information under GDPR

GDPR is clear that an organization needs to provide users, visitors, and employees detailed information on what data is collected and how it will be used. Obviously, this assumes that personal information about an individual is being collected in the first place. A common business application like the web browser an employee uses likely has cookies enabled by default, storing and tracking the user’s web access; a simple enterprise fix is to set all web browsers to Private or “Incognito” mode. This will prevent cookie tracking and the storage of data protected by GDPR. A user can be allowed to turn off Private mode if they choose; that will be their decision and their personal consent to having content potentially tracked.

Under the Hood of AMD’s Threadripper

The content below is taken from the original (Under the Hood of AMD’s Threadripper), to continue reading please visit the site. Remember to respect the Author & Copyright.


Although AMD has been losing market share to Intel over the past decade, it has recently started to pick up steam again in the great battle for desktop processor superiority. A large part of this surge comes in the high-end, multi-core processor arena, where it seems like AMD’s Threadripper is clearly superior to Intel’s competition. Thanks to overclocking expert [der8auer] we can finally see what’s going on inside of this huge chunk of silicon.

The elephant in the room is the number of dies on this chip. It has a massive footprint to accommodate all four dies, each with eight cores. However, it seems as though two of the dies are deactivated due to a combination of manufacturing processes and thermal issues. This isn’t necessarily a bad thing, or a reason not to use this processor if you need a huge number of cores; it seems as though AMD found it could use existing manufacturing techniques to save on the cost of production while still making a competitive product.

Additionally, a larger die size than required opens the door for potentially activating the two currently disabled dies in the future. This could be the thing that brings AMD back into competition with Intel, although both companies still maintain the horrible practice of crippling their chips’ security from the start.


crystaldiskinfo.portable (7.1.0)

The content below is taken from the original (crystaldiskinfo.portable (7.1.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

CrystalDiskInfo is an HDD/SSD utility which shows the drive’s health status much more clearly than similar tools.

Automotive Grade Linux shops for hypervisor to accelerate smart cars

The content below is taken from the original (Automotive Grade Linux shops for hypervisor to accelerate smart cars), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Automotive Grade Linux project has revealed it’s going shopping for a hypervisor so that in-car computers can handle lots of different jobs.

The effort’s previously delivered an infotainment platform that Toyota has adopted. That platform has now landed as the Automotive Grade Linux Unified Code Base (AGL UCB) 4.0, which the Linux Foundation happily says represents 70-80 per cent of the work needed to build an in-vehicle entertainment hub. Auto-makers won’t mind the need to finish things off, the Foundation reckons, because it gives them the chance to add some differentiation.

The new release adds Application Services APIs for Bluetooth, Audio, Tuner and CAN signaling, supports x86, ARM 32 and ARM 64 architectures, and is now based on version 2.2 of the Yocto embedded Linux project. Myriad other features are detailed in the release notes.

With all that out of the way, the Linux Foundation has now raised the bar for future AGL releases by adding projects for telematics, instrument cluster and heads-up-displays – all in the same piece of hardware and running as VMs. The organisation has therefore “formed a new Virtualization Expert Group (EG-VIRT) to identify a hypervisor and develop an AGL virtualization architecture that will help accelerate time-to-market, reduce costs and increase security.”

Which is kind of a big deal because cars gain more and more-integrated electronic components each year, but the links between them are often found to be insecure. Integration of multiple units is also challenging.

The Linux Foundation reckons “An open virtualization solution could allow for the consolidation of multiple applications such as infotainment, instrument cluster, heads-up-display and rear-seat entertainment, on a single multicore CPU through resource partitioning.”

“This can potentially reduce development costs by enabling OEMs to run independent operating systems simultaneously from a single hardware board. Virtualization can also add another layer of security by isolating safety critical functions from the rest of the operating system, so that the software can’t access critical controls like the vehicle CAN bus.”

Left unsaid is that auto-makers should like this because it has the chance to reduce the bill of materials in a car, while also making it easier to add more devices to vehicles.

Even if AGL sorts this stuff out in a flash, it could be many years before we see it in cars because the auto industry works on very long product development cycles. Device-makers would then need to target their wares to different in-car boards and package them to run inside whatever hypervisor the Foundation chooses.

The Register imagines Xen’s Embedded and Automotive PV Drivers is almost certainly on the Foundation’s to-be-considered list, with KVM, OpenVZ and LXC worth a look. The global market for cars is approaching 80m units a year, so perhaps VMware and Microsoft might also be willing to talk about how their x86-only, proprietary, hypervisors could hit the road. ®


CMD.EXE gets first makeover in 20 years in new Windows 10 build

The content below is taken from the original (CMD.EXE gets first makeover in 20 years in new Windows 10 build), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s making over the Windows Console, the tool that throws up a command line interface and which has hung around in Windows long after DOS was sent to the attic and told not to show itself in polite company.

The company’s revealed that in Windows 10 build 16257 the Console will get new … colours.

Yup, that’s all. Colours. No new syntax. Nothing cloudy. Just colours.

Microsoft says the change is needed because the Windows Console currently uses a lot of blue in hues that are “very difficult to read on modern high-contrast displays.”

“During the past 20 years, screens & display technology, contrast ratio, and resolution have changed significantly, from CRT’s through TFT LCD’s to modern-day nano-scale 4K displays,” writes Microsoft’s Craig Loewen. “The legacy default scheme was not built for modern displays and does not render as well on newer high-contrast LCD displays.”

Hence the change depicted below.

Microsoft’s new colour scheme for Windows Console

The Register imagines that its readers will welcome anything that makes a CLI easier to use, so feels duty-bound to point out that for now you need to do a fresh install of Build 16257 to get your eyes on the new palette. Loewen also says “We’ll soon be publishing a tool that will help you apply this new scheme and a selection of alternative color schemes to your Windows Console.”

Those of you who upgrade to Build 16257 will not find any custom palettes overwritten or overridden. ®


Durham Uni builds supercomputer from secondhand parts

The content below is taken from the original (Durham Uni builds supercomputer from secondhand parts), to continue reading please visit the site. Remember to respect the Author & Copyright.


She may not look like much, but she’s got it where it counts, kid

COSMA6 can be used for galactic simulations

Durham University has built itself a secondhand supercomputer from recycled parts and beefed up its contribution to DiRAC (distributed research utilising advanced computing), the integrated facility for theoretical modelling and HPC-based research in particle physics, astronomy and cosmology.

The Institute for Computational Cosmology (ICC) at Durham runs a COSMA5 system as its DiRAC contribution.

There are five DiRAC installations in the UK, which is a world leader in these HPC fields:

  • Cambridge HPCS Service: Data Analytic Cluster – 9,600 cores (200TFLOP/s), 0.75PB (raw) parallel file store, high performance infiniband IO and interconnect (node-to-storage 7GB/s), a single 600 port non-blocking switch and 4GB RAM per core
  • Cambridge COSMOS SHARED MEMORY Service – 1,856 cores (42 TFLOP/s), 14.8TB globally shared memory (8GB RAM per core), 146TB high Performance scratch storage, 31 Intel Xeon Phi co-processors capability
  • Leicester IT Services: Complexity Cluster – 4,352 cores (95TFLOP/s), 0.8PB parallel file store, high performance IO and interconnect, non-blocking switch architecture, 8GB RAM per core
  • Durham ICC Service: Data Centric Cluster – 6,500 cores, 2PB parallel file store, high performance IO and interconnect, 2:1 blocking switch architecture, 8GB RAM per core
  • Edinburgh 6144 node BlueGene/Q – 98,304 cores, 5D Torus Interconnect, high performance IO and interconnect

The Durham cluster listed above is a COSMA5 system, which features 420 IBM iDataPlex dx360 M4 servers with 6,720 2.6GHz Intel Sandy Bridge Xeon E5-2670 CPU cores. There is 53.76TB of DDR3 RAM and Mellanox FDR10 InfiniBand in a 2:1 blocking configuration.

It has 2.5PB of DDN storage with two SD12K controllers configured in fully redundant mode. It’s served by six GPFS servers connected into the controllers over full FDR, using RDMA over the FDR10 network into the compute cluster. COSMA5 uses the GPFS file system with LSF as its job scheduler.

The ICC and DiRAC needed to strengthen this system and found that the Hartree Centre at Daresbury had a supercomputer it needed rid of. This HPC system was installed in April 2012 but had to go because Daresbury had newer kit.

Durham had a machine room with power and cooling that could take it. Even better, its configuration was remarkably similar to COSMA5.

So HPC, storage and data analytics integrator OCF, and server relocation and data centre migration specialist Technimove dismantled, transported, and rebuilt the machine at the ICC. The whole exercise was funded by the Science and Technology Facilities Council.

COSMA6 arrived at Durham in April 2016, and was installed and tested at the ICC. It now extends Durham’s DiRAC system as part of DiRAC 2.5.

COSMA6 has:

  • 497 IBM iDataPlex dx360 M4 server compute nodes
  • 7,952 Sandy Bridge Xeon E5-2670 cores
  • More than 35TB of DDR3 DRAM
  • Mellanox FDR10 InfiniBand switches in a 2:1 blocking configuration connecting the cores
  • DDN Exascalar storage:
    • 2.5PB data space served by eight OSSs and two MDSs
    • 1.8PB Intel Lustre scratch space served by six OSSs and two MDSs using IP over IB and RDMA to the cluster

COSMA6 uses the Lustre filesystem for storage and SLURM for job submission.
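
As a purely illustrative sketch of what using such a cluster looks like — the partition, module and binary names below are placeholders, not Durham’s actual configuration — a researcher would typically wrap an MPI run in a SLURM batch script and hand it to sbatch:

```python
# Illustrative sketch: generate and submit a SLURM batch job from Python.
# Partition, module and binary names are placeholders, not COSMA6 specifics.
import subprocess
from pathlib import Path

BATCH_SCRIPT = """\
#!/bin/bash
# Placeholders throughout; real partition/module names would come from site docs.
#SBATCH --job-name=cosmology-sim
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=16
#SBATCH --time=12:00:00
#SBATCH --partition=cosma

module load intel_mpi
mpirun ./galaxy_simulation input.params
"""

def submit() -> str:
    """Write the batch script and hand it to SLURM's sbatch."""
    script = Path("job.slurm")
    script.write_text(BATCH_SCRIPT)
    result = subprocess.run(
        ["sbatch", str(script)], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 123456"

if __name__ == "__main__":
    print(submit())
```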

COSMA6 racks

Lydia Heck, ICC technical director, said: “While it was quite an effort to bring it to its current state, as it is the same architecture and the same network layout as our previous system, we expect this to run very well.”

Durham now has both COSMA5 (6,500 cores) and COSMA6 (8,000 cores) contributing to DiRAC and available for researchers.

Find out how to access and use DiRAC here. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

10 Things Data Center Operators Can Do to Prepare for GDPR

The content below is taken from the original (10 Things Data Center Operators Can Do to Prepare for GDPR), to continue reading please visit the site. Remember to respect the Author & Copyright.


As we explained in an article earlier this week, the new European General Data Protection Regulation, which goes into effect next May, has wide-reaching implications for data center operators in and outside of Europe. We asked experts what steps they would recommend operators take to prepare. Here’s what they said:

Ojas Rege, chief marketing and strategy officer at MobileIron, a mobile and cloud security company based in Mountain View, California:

Every corporate data center holds an enormous amount of personal data about employees and customers. GDPR compliance will require that only the essential personal data is held and that it is effectively protected from breach and loss. Each company should consider a five-step process:

  • Do an end-to-end data mapping of the data stored in its data center to identify personal data (a minimal discovery sketch follows this list).
  • Ensure that the way this personal data is used is consistent with GDPR guidelines.
  • Fortify its protections for that personal data, since the penalties for GDPR non-compliance are so extensive.
  • Proactively establish a notification and forensics plan in the case of breach.
  • Extensively document its data flows, policies, protections, and remediation methods for potential GDPR review.
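
By way of a toy illustration of that first data-mapping step — real discovery tooling would also cover databases, object stores and structured exports — a first pass over plain-text exports for obvious personal-data patterns might look like this:

```python
# Toy illustration of personal-data discovery for a GDPR data-mapping exercise.
# The patterns only catch obvious identifiers in .txt exports; the scan path is
# a placeholder.
import re
from pathlib import Path

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "intl_phone": re.compile(r"\+\d{1,3}[\s-]?\d{4,14}"),
}

def scan_tree(root: str) -> dict:
    """Count matches of each pattern in every .txt file under a directory tree."""
    counts = {name: 0 for name in PII_PATTERNS}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for name, pattern in PII_PATTERNS.items():
            counts[name] += len(pattern.findall(text))
    return counts

if __name__ == "__main__":
    print(scan_tree("/srv/exports"))  # placeholder path
```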

Neil Thacker, deputy CISO at Forcepoint, a cybersecurity company based in Austin, Texas:

Data centers preparing for GDPR must be in a position to identify, protect, detect, respond, and recover in case of a data breach. Some of the key actions they should take include:

  • Perform a complete analysis of all data flows from the European Economic Area and establish in which non-EEA countries processing will be undertaken.
  • Review cloud service agreements for location of data storage and any data transfer mechanism, as relevant.
  • Implement cybersecurity practices and technologies that provide deep visibility into how critical data is processed across their infrastructure, whether on-premises, in the cloud, or in use by a remote workforce.
  • Monitor, manage, and control data — at rest, in use, and in motion.
  • Utilize behavioral analytics and machine learning to discover broken business processes and identify employees that elevate risk to critical data.

See also: What Europe’s New Data Protection Law Means for Data Center Operators

Online training for Azure Data Lake

The content below is taken from the original (Online training for Azure Data Lake), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are pleased to announce the availability of new, free online training for Azure Data Lake. We’ve designed this training to get developers ramped up fast. It covers the topics a developer needs to become productive with big data, and how to address the challenges of authoring, debugging, and optimizing at scale.

Explore the training

Click on the link below to start!

Microsoft Virtual Academy: Introduction to Azure Data Lake

Looking for more?

You can find this training, along with many more resources for developers, on Microsoft Virtual Academy.

Course outline

1 | Introduction to Azure Data Lake

Get an overview of the entire Azure Data Lake set of services, including HDInsight (HDI), ADL Store, and ADL Analytics.

2 | Introduction to Azure Data Lake Tools for Visual Studio

Since ADL developers of all skill levels use Azure Data Lake Tools for Visual Studio, review the basic set of capabilities offered in Visual Studio.

3 | U-SQL Programming

Explore the fundamentals of the U-SQL language, and learn to perform the most common U-SQL data transformations.
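
To give a flavour of what those transformations look like — this is not course material: the script is the canonical SearchLog-style example, and the account name and the use of the Azure CLI’s `az dla job submit` command are assumptions — a job might be authored and submitted like this:

```python
# Illustrative only: a SearchLog-style U-SQL script (EXTRACT -> SELECT -> OUTPUT)
# submitted from Python by shelling out to the Azure CLI. Account name, paths and
# the CLI flags used here are assumptions, not taken from the course.
import subprocess

USQL_SCRIPT = r"""
@searchlog =
    EXTRACT UserId int, Region string, Duration int
    FROM "/input/SearchLog.tsv"
    USING Extractors.Tsv();

@totals =
    SELECT Region, SUM(Duration) AS TotalDuration
    FROM @searchlog
    GROUP BY Region;

OUTPUT @totals
    TO "/output/totals.csv"
    USING Outputters.Csv();
"""

def submit_job(account: str, job_name: str) -> None:
    """Submit the script to a Data Lake Analytics account via the Azure CLI."""
    subprocess.run(
        ["az", "dla", "job", "submit",
         "--account", account,
         "--job-name", job_name,
         "--script", USQL_SCRIPT],
        check=True,
    )

if __name__ == "__main__":
    submit_job("mydatalakeanalytics", "region-totals-demo")  # hypothetical account name
```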

4 | Introduction to Azure Data Lake U-SQL Batch Job

Find out what’s happening behind the scenes when running a batch U-SQL script in Azure.

5 | Advanced U-SQL

Learn about the more sophisticated features of the U-SQL language to calculate more useful statistics and learn how to extend U-SQL to meet many diverse needs.

6 | Debugging U-SQL Job Failures

Since, at some point, all developers encounter a failed job, get familiar with the causes of failure and how they manifest themselves.

7 | Introduction to Performance and Optimization

Review the basic concepts that drive performance in a batch U-SQL job, and examine strategies available to address those issues when they come up, along with the tools that are available to help.

8 | ADLS Access Control Model

Explore how Azure Data Lake Store uses the POSIX access control model, which is very different from what users coming from a Windows background are used to.
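
For readers new to that model, a POSIX-style ACL is an ordered set of entries such as `user::rwx`, `user:alice:r-x`, `group::r-x`, `mask::r-x` and `other::---`. The sketch below illustrates the general POSIX mechanics only — it is not an Azure SDK call — by parsing such an acl-spec and checking a named user’s access, capped by the mask:

```python
# Illustration of generic POSIX ACL mechanics (not an Azure API): parse an acl-spec
# string and test whether a named-user entry grants a permission, capped by the mask.
def parse_acl(spec: str) -> dict:
    entries = {}
    for entry in spec.split(","):
        scope, qualifier, perms = entry.split(":")
        entries[(scope, qualifier)] = perms
    return entries

def named_user_allowed(acl: dict, user: str, wanted: str) -> bool:
    perms = acl.get(("user", user), "")
    mask = acl.get(("mask", ""), "rwx")  # the mask caps all named-user and group entries
    return all(p in perms and p in mask for p in wanted)

acl = parse_acl("user::rwx,user:alice:r-x,group::r-x,mask::r-x,other::---")
print(named_user_allowed(acl, "alice", "rx"))  # True: read and execute allowed
print(named_user_allowed(acl, "alice", "w"))   # False: no write permission
```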

9 | Azure Data Lake Outro and Resources

Learn about course resources.

OpenStack Developer Mailing List Digest July 22-28

The content below is taken from the original (OpenStack Developer Mailing List Digest July 22-28), to continue reading please visit the site. Remember to respect the Author & Copyright.

Summaries

  • Nova placement/resource providers update 30 [1]
  • TC Report 30 [2]
  • POST /api-wg/news [3]
  • Release countdown for week R-4, July 28 – Aug 4 [4]
  • Technical Committee Status update, July 28 [5]

Project Team Gathering Planning

  • Nova [6]
  • Keystone [7]
  • Sahara [8]
  • Cinder [9]
  • Oslo [10]
  • Neutron [11]
  • Documentation [12]

Oslo DB Network Database Base namespace throughout OpenStack Projects

  • Mike Bayer has been working with Octave Orgeron on adding the network database (NDB) storage engine for MySQL to the oslo.db module so other projects can take advantage of it. Mike Bayer notes:
    • The code review [13]
    • Support in Nova [14]
    • Support in Neutron [15]
  • New data types that map to types from the NDB namespace (a usage sketch follows the reference links below):
    • oslo_db.sqlalchemy.types.SmallString
    • oslo_db.sqlalchemy.types.String
  • Full thread: [16]
  1. http://bit.ly/2tYdbp6l
  2. http://bit.ly/2vmipysl
  3. http://bit.ly/2tYhv7wl
  4. http://bit.ly/2vm2dgvl
  5. http://bit.ly/2tYF40al
  6. http://bit.ly/2vmjF4Bl
  7. http://bit.ly/2tYvBG4l
  8. http://bit.ly/2vmdjSSl
  9. http://bit.ly/2tXO7yml
  10. http://bit.ly/2vm6T68l
  11. http://bit.ly/2tY3WFsl
  12. http://bit.ly/2vmmcvMl
  13. http://bit.ly/2tXO7yj/
  14. http://bit.ly/2vmmcfg3
  15. http://bit.ly/2tXMGjr/
  16. http://bit.ly/2vmqpiJ7
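
As a rough sketch of what consuming these types might look like in a project model — the type names are taken from the summary above, while the import path and constructor arguments here are assumptions to be checked against the merged oslo.db change — consider:

```python
# Rough sketch only: model columns using the NDB-aware string types named in the
# summary above, so one definition can target both InnoDB and MySQL Cluster (NDB).
# Import path and constructor signatures are assumptions, not confirmed API.
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

from oslo_db.sqlalchemy import types as odb_types  # type names per the thread summary

Base = declarative_base()


class Instance(Base):
    __tablename__ = "instances"

    id = sa.Column(sa.Integer, primary_key=True)
    hostname = sa.Column(odb_types.String(255))           # general-purpose string column
    description = sa.Column(odb_types.SmallString(255))   # row-size-friendly variant
```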

HP made a VR backpack for on-the-job training

The content below is taken from the original (HP made a VR backpack for on-the-job training), to continue reading please visit the site. Remember to respect the Author & Copyright.

To date, VR backpack PCs have been aimed at gamers who just don’t want to trip over cords while they’re fending off baddies. But what about pros who want to collaborate, or soldiers who want to train on a virtual battlefield? HP thinks it has a fix. It’s launching the Z VR Backpack, a spin on the Omen backpack concept that targets the pro crowd. It’s not as ostentatious as the Omen, for a start, but the big deal is its suitability for the rigors of work. The backpack is rugged enough to meet military-grade drop, dust and water resistance standards, and it uses business-class hardware that includes a vPro-enabled quad-core Core i7 and Quadro P5200 graphics with a hefty 16GB of video memory.

The wearable computer has tight integration with the HTC Vive Business Edition, but HP stresses that you’re not obligated to use it — it’ll work just fine with an Oculus Rift or whatever else your company prefers. The pro parts do hike the price, though, as you’ll be spending at least $3,299 on the Z VR Backpack when it arrives in September. Not that cost is necessarily as much of an issue here — that money might be trivial compared to the cost of a design studio or a training environment.

There’s even a project in the works to showcase what’s possible. HP is partnering with a slew of companies (Autodesk, Epic Games, Fusion, HTC, Launch Forth and Technicolor) on a Mars Home Planet project that uses VR for around-the-world collaboration. Teams will use Autodesk tools to create infrastructure for a million-strong simulated Mars colony, ranging from whole buildings to pieces of clothing. The hope is that VR will give you a better sense of what it’d be like to live on Mars, and help test concepts more effectively than you would staring at a screen. You can sign up for the first phase of the project today.

Source: HP (1), (2)

Google just made scheduling work meetings a little easier

The content below is taken from the original (Google just made scheduling work meetings a little easier), to continue reading please visit the site. Remember to respect the Author & Copyright.

There’s a little bit of good news for people juggling both Google G Suite tools and Microsoft Exchange for their schedule management at work. Google has released an update that will allow G Suite users to access coworkers’ real-time free/busy information through both Google Calendar’s Find a Time feature and Microsoft Outlook’s Scheduling Assistant interchangeably.

G Suite admins can enable the new Calendar Interop management feature through the Settings for Calendar option in the admin console. Admins will also be able to easily pinpoint issues with the setup via a troubleshooting tool, which will also provide suggestions for resolving those issues, and can track interoperability successes and failures for each user through logs Google has made available.

The new feature is available on Android, iOS and web versions of Google Calendar as well as desktop, mobile and web clients for Outlook 2010+, for admins who choose to enable it. Google says the full rollout should be completed within three days.

Via: TechCrunch

Source: Google (1), (2)

Microsoft Teams – explainer video