New features in Notepad in Windows 10 v1809

The content below is taken from the original ( New features in Notepad in Windows 10 v1809), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has updated the good old Notepad app in Windows 10 v 1809. The humble Notepad in Windows is a very basic text editor you can use for simple documents.  Let us take a look at the new features in […]

This post New features in Notepad in Windows 10 v1809 is from TheWindowsClub.com.

RISC OS changes hands

The content below is taken from the original ( RISC OS changes hands), to continue reading please visit the site. Remember to respect the Author & Copyright.

And unlike the post on 1st April, this one is NOT a joke! It has just been announced that RISC OS Developments Ltd, the company formed last year by Andrew Rawnsley and Richard Brown, has acquired Castle Technology Ltd, and with it RISC OS itself. The operating system was originally developed by Acorn Computers for […]

BlackBerry races ahead of security curve with quantum-resistant solution

The content below is taken from the original ( BlackBerry races ahead of security curve with quantum-resistant solution), to continue reading please visit the site. Remember to respect the Author & Copyright.

Quantum computing represents tremendous promise to completely alter technology as we’ve known it, allowing operations that weren’t previously possible with traditional computing. The downside of these powerful machines is that they could be strong enough to break conventional cryptography schemes. Today, BlackBerry announced a new quantum-resistant code signing service to help battle that possibility.

The service is meant to anticipate a problem that doesn’t exist yet. Perhaps that’s why BlackBerry hedged its bets in the announcement, saying, “The new solution will allow software to be digitally signed using a scheme that will be hard to break with a quantum computer.” Until we have fully functioning quantum computers capable of breaking current encryption, we probably won’t know for sure if this works.

But give BlackBerry credit for getting ahead of the curve and trying to solve a problem that has concerned technologists as quantum computers begin to evolve. The solution, which will be available next month, is actually the product of a partnership between BlackBerry and Isara Corporation, a company whose mission is to build quantum-safe security solutions. BlackBerry is using Isara’s cryptographic libraries to help sign and protect code as security evolves.

“By adding the quantum-resistant code signing server to our cybersecurity tools, we will be able to address a major security concern for industries that rely on assets that will be in use for a long time. If your product, whether it’s a car or critical piece of infrastructure, needs to be functional 10-15 years from now, you need to be concerned about quantum computing attacks,” Charles Eagan, BlackBerry’s chief technology officer, said in a statement.

While experts argue about how long it could take to build a fully functioning quantum computer, most agree that it will take computers of between 50 and 100 qubits to begin realizing that vision. IBM released a 20-qubit computer last year and introduced a 50-qubit prototype. A qubit represents a single unit of quantum information.

At TechCrunch Disrupt last month, Dario Gil, IBM’s vice president of artificial intelligence and quantum computing, and Chad Rigetti, a former IBM researcher who is founder and CEO at Rigetti Computing, predicted we could be just three years away from the point where a quantum computer surpasses traditional computing.

IBM Quantum Computer

IBM Quantum Computer. Photo: IBM

Whether it happens that quickly or not remains to be seen, but experts have been expressing security concerns around quantum computing as they grow more powerful, and BlackBerry is addressing that concern by coming up with a solution today, arguing that if you are creating critical infrastructure you need to future-proof your security.

BlackBerry, once known for highly secure phones, and one of the earliest popular business smartphones, has pivoted to be more of a security company in recent years. This announcement, made at the BlackBerry Security Summit, is part of the company’s focus on keeping enterprises secure.

How the data collected by dockless bikes can be useful for cities (and hackers)

The content below is taken from the original ( How the data collected by dockless bikes can be useful for cities (and hackers)), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the 18 months or so since dockless bike-share arrived in the US, the service has spread to at least 88 American cities. (On the provider side, at least 10 companies have jumped into the business; Lime is one of the largest.) Some of those cities now have more than a year of data related to the programs, and they’ve started gleaning insights and catering to the increased number of cyclists on their streets.

Technology Review writer Elizabeth Woyke looks at how city planners in Seattle, WA, and South Bend, IN, use the immense stream of user-generated location data from dockless-bike-sharing programs to improve urban mobility — and how hackers could potentially access and abuse this (supposedly anonymous) information. “In theory, the fact that people can park dockless bikes outside their exact destinations could make it easier for someone who hacked into the data to decode the anonymous identities that companies assign their users,” Woyke writes.

California bans default passwords on any internet-connected device

The content below is taken from the original ( California bans default passwords on any internet-connected device), to continue reading please visit the site. Remember to respect the Author & Copyright.

In less than two years, anything that can connect to the internet will come with a unique password — that is, if it's produced or sold in California. The "Information Privacy: Connected Devices" bill that comes into effect on January 1, 2020, e…

D-Wave offers the first public access to a quantum computer

The content below is taken from the original ( D-Wave offers the first public access to a quantum computer), to continue reading please visit the site. Remember to respect the Author & Copyright.

Outside the crop of construction cranes that now dot Vancouver’s bright, downtown greenways, in a suburban business park that reminds you more of dentists and tax preparers, is a small office building belonging to D-Wave. This office — squat, angular and sun-dappled one recent cool Autumn morning — is unique in that it contains an infinite collection of parallel universes.

Founded in 1999 by Geordie Rose, D-Wave worked in relative obscurity on esoteric problems associated with quantum computing. When Rose was a PhD student at the University of British Columbia, he turned in an assignment that outlined a quantum computing company. His entrepreneurship teacher at the time, Haig Farris, found the young physicist’s ideas compelling enough to give him $1,000 to buy a computer and a printer to type up a business plan.

The company consulted with academics until 2005, when Rose and his team decided to focus on building usable quantum computers. The result, the Orion, launched in 2007 and was used to classify drug molecules and play Sudoku. The business now sells computers for up to $10 million to clients like Google, Microsoft and Northrop Grumman.

“We’ve been focused on making quantum computing practical since day one. In 2010 we started offering remote cloud access to customers and today, we have 100 early applications running on our computers (70 percent of which were built in the cloud),” said CEO Vern Brownell. “Through this work, our customers have told us it takes more than just access to real quantum hardware to benefit from quantum computing. In order to build a true quantum ecosystem, millions of developers need the access and tools to get started with quantum.”

Now their computers are simulating weather patterns and tsunamis, optimizing hotel ad displays, solving complex network problems and, thanks to a new, open-source platform, could help you ride the quantum wave of computer programming.

Inside the box

When I went to visit D-Wave, they gave us unprecedented access to the inside of one of their quantum machines. The computers, which are about the size of a garden shed, have a control unit on the front that manages the temperature as well as the queuing system that translates and communicates the problems sent in by users.

Inside the machine is a tube that, when fully operational, contains a small chip super-cooled to 0.015 Kelvin, or -459.643 degrees Fahrenheit or -273.135 degrees Celsius. The entire system looks like something out of the Death Star — a cylinder of pure data that the heroes must access by walking through a little door in the side of a jet-black cube.

It’s quite thrilling to see this odd little chip inside its super-cooled home. As the computer revolution maintained its predilection toward room-temperature chips, these odd and unique machines are a connection to an alternate timeline where physics is wrestled into submission in order to do some truly remarkable things.

And now anyone — from kids to PhDs to everyone in-between — can try it.

Into the ocean

Learning to program a quantum computer takes time. Because the processor doesn’t work like a classic universal computer, you have to train the chip to perform simple functions that your own cellphone can do in seconds. However, in some cases, researchers have found the chips can outperform classic computers by 3,600 times. This trade-off — the movement from the known to the unknown — is why D-Wave exposed their product to the world.

“We built Leap to give millions of developers access to quantum computing. We built the first quantum application environment so any software developer interested in quantum computing can start writing and running applications — you don’t need deep quantum knowledge to get started. If you know Python, you can build applications on Leap,” said Brownell.

To get started on the road to quantum computing, D-Wave built the Leap platform. Leap is an open-source toolkit for developers. When you sign up, you receive one minute’s worth of quantum processing unit (QPU) time which, given that most problems run in milliseconds, is more than enough to begin experimenting. A queue manager lines up your code and runs it in the order received, and the answers are spit out almost instantly.

You can code on the QPU with Python or via Jupyter notebooks, and you connect to the QPU with an API token. After writing your code, you can send commands directly to the QPU and then output the results. The programs are currently pretty esoteric and require a basic knowledge of quantum programming but, it should be remembered, classical computer programming was once daunting to the average user.
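As a rough illustration of that workflow, here is a minimal sketch using the current Ocean SDK (the API token below is a placeholder, and this is not code from the article):

from dwave.system import DWaveSampler, EmbeddingComposite

# Connect to a QPU using a Leap API token (placeholder value).
sampler = EmbeddingComposite(DWaveSampler(token='YOUR-LEAP-API-TOKEN'))

# Submit a tiny two-variable Ising problem and read back 10 samples.
h = {'a': -1, 'b': -1}            # linear biases
J = {('a', 'b'): 1}               # coupling between the two variables
sampleset = sampler.sample_ising(h, J, num_reads=10)

print(sampleset.first.sample)     # lowest-energy assignment found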

I downloaded and ran most of the demonstrations without a hitch. These demonstrations — factoring programs, network generators and the like — essentially turned the concepts of classical programming into quantum questions. Instead of iterating through a list of factors, for example, the quantum computer creates a “parallel universe” of answers and then collapses each one until it finds the right answer. If this sounds odd, it’s because it is. The researchers at D-Wave argue all the time about how to imagine a quantum computer’s various processes. One camp sees the physical implementation of a quantum computer as simply a faster methodology for rendering answers. The other camp, itself aligned with Professor David Deutsch’s ideas presented in The Beginning of Infinity, sees the sheer number of possible permutations a quantum computer can traverse as evidence of parallel universes.

What does the code look like? It’s hard to read without understanding the basics, a fact that D-Wave engineers accounted for by offering online documentation. For example, below is most of the factoring code for one of their demo programs, a bit of code that can be reduced to about five lines on a classical computer. However, when this function uses a quantum processor, the entire process takes milliseconds versus minutes or hours.

Classical

# Python program to find the factors of a number

# define a function
def print_factors(x):
    # This function takes a number and prints its factors
    print("The factors of", x, "are:")
    for i in range(1, x + 1):
        if x % i == 0:
            print(i)

# change this value for a different result
num = 320

# uncomment the following line to take input from the user
# num = int(input("Enter a number: "))

print_factors(num)

Quantum

@qpu_ha
def factor(P, use_saved_embedding=True):

    ####################################################################################################
    # get circuit
    ####################################################################################################

    construction_start_time = time.time()

    validate_input(P, range(2 ** 6))

    # get constraint satisfaction problem
    csp = dbc.factories.multiplication_circuit(3)

    # get binary quadratic model
    bqm = dbc.stitch(csp, min_classical_gap=.1)

    # we know that multiplication_circuit() has created these variables
    p_vars = ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']

    # convert P from decimal to binary
    fixed_variables = dict(zip(reversed(p_vars), "{:06b}".format(P)))
    fixed_variables = {var: int(x) for (var, x) in fixed_variables.items()}

    # fix product qubits
    for var, value in fixed_variables.items():
        bqm.fix_variable(var, value)

    log.debug('bqm construction time: %s', time.time() - construction_start_time)

    ####################################################################################################
    # run problem
    ####################################################################################################

    sample_time = time.time()

    # get QPU sampler
    sampler = DWaveSampler(solver_features=dict(online=True, name='DW_2000Q.*'))
    _, target_edgelist, target_adjacency = sampler.structure

    if use_saved_embedding:
        # load a pre-calculated embedding
        from factoring.embedding import embeddings
        embedding = embeddings[sampler.solver.id]
    else:
        # get the embedding
        embedding = minorminer.find_embedding(bqm.quadratic, target_edgelist)
        if bqm and not embedding:
            raise ValueError("no embedding found")

    # apply the embedding to the given problem to map it to the sampler
    bqm_embedded = dimod.embed_bqm(bqm, embedding, target_adjacency, 3.0)

    # draw samples from the QPU
    kwargs = {}
    if 'num_reads' in sampler.parameters:
        kwargs['num_reads'] = 50
    if 'answer_mode' in sampler.parameters:
        kwargs['answer_mode'] = 'histogram'
    response = sampler.sample(bqm_embedded, **kwargs)

    # convert back to the original problem space
    response = dimod.unembed_response(response, embedding, source_bqm=bqm)

    sampler.client.close()

    log.debug('embedding and sampling time: %s', time.time() - sample_time)

 

“The industry is at an inflection point and we’ve moved beyond the theoretical, and into the practical era of quantum applications. It’s time to open this up to more smart, curious developers so they can build the first quantum killer app. Leap’s combination of immediate access to live quantum computers, along with tools, resources, and a community, will fuel that,” said Brownell. “For Leap’s future, we see millions of developers using this to share ideas, learn from each other and contribute open-source code. It’s that kind of collaborative developer community that we think will lead us to the first quantum killer app.”

The folks at D-Wave created a number of tutorials as well as a forum where users can learn and ask questions. The entire project is truly the first of its kind and promises unprecedented access to what amounts to the foreseeable future of computing. I’ve seen lots of technology over the years, and nothing has quite replicated the strange frisson associated with plugging into a quantum computer. Like the teletype and green-screen terminals used by early hackers like Bill Gates and Steve Wozniak, D-Wave has opened up a strange new world. How we explore it is up to us.

Perhaps The Ultimate Raspberry Pi Case: Your PC

The content below is taken from the original ( Perhaps The Ultimate Raspberry Pi Case: Your PC), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the great joys of owning a 3D printer is being able to print custom cases for boards like the Raspberry Pi. What’s more, if you are using a desktop PC, you probably don’t have as many PCI cards in it as you used to. Everything’s moved to the motherboard. [Sneekystick] was using a Pi with a PC and decided the PC itself would make a great Pi case. He designed a bracket and it looks handy.

The bracket just holds the board in place. It doesn’t connect to the PC. The audio, HDMI, and power jacks face out for access. It would be tempting and possible to power the board from the PC supply, but to do that you have to be careful. Connecting the GPIO pins to 5V will work, but bypasses the input protection circuitry. We’ve read that you can find solder points near the USB plug and connect there, but if you do, you should block out the USB port. It might be nice to fill in that hole in the bracket if you planned to do that.

Of course, it isn’t hard to sequester a Pi inside a hard drive bay or some other nook or cranny, but the bracket preserves at least some of the output connectors. If you really want to bury a Pi in a piece of gear, you can always design a custom board and fit a “compute module” in it. These are made to be embedded, which means they have a row of pins instead of any I/O connectors. Of course, that also means more real work if you need any of those connectors.

We’ve seen cases that aim to turn the Pi into a desktop PC before. We’ve also seen those compute modules jammed into Game Boy cases more than once.

Southampton meeting – 9th October

The content below is taken from the original ( Southampton meeting – 9th October), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Southampton RISC OS User Group (SROUG) will next meet up on Tuesday, 9th October. Running from 7:00pm until 9:00pm, the meeting will take place at: Itchen College Sports Centre, Itchen College, Deacon Road, Southampton. There is no entry fee, and anyone with an interest in RISC OS is welcome to attend. There will be at least one […]

A rough guide to your next (or first) fog computing deployment

The content below is taken from the original ( A rough guide to your next (or first) fog computing deployment), to continue reading please visit the site. Remember to respect the Author & Copyright.

Like any kind of large-scale computing system deployment ever, the short answer to the question “what should my fog compute deployment look like” is going to be “it varies.” But since that’s not a particularly useful piece of information, Cisco principal engineer and systems architect Chuck Byers gave an overview on Wednesday at the 2018 Fog World Congress of the many variables, both technical and organizational, that go into the design, care and feeding of a fog computing setup.

Byers offered both general tips about the architecture of fog computing systems, as well as slightly deeper dives into the specific areas that all fog computing deployments will have to address, including different types of hardware, networking protocols, and security.

To read this article in full, please click here

Disney’s spray-painting drone could end the need for scaffolding

The content below is taken from the original ( Disney’s spray-painting drone could end the need for scaffolding), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’ve seen some pretty interesting work come out of Disney Research in the past, like techniques for digitally recreating teeth, makeup-projecting lamps, a group AR experience and a stick-like robot that can perform backflips. One of its latest proje…

In a quest to centralize all your media, Plex now includes web series

The content below is taken from the original ( In a quest to centralize all your media, Plex now includes web series), to continue reading please visit the site. Remember to respect the Author & Copyright.

At CES, media software maker Plex said it would this year add support for podcasts, web series and other digital media to its platform. It then rolled out podcasts this May, and now it’s introducing its own curated collection of web series. The company today is launching Plex Web Shows in beta, which will offer unlimited, on-demand streaming of online shows from brands like GQ, Saveur, Epicurious, Pitchfork, Condé Nast, The New Yorker, Fandor, Vanity Fair, and others.

The shows will span a range of interests, including food, home and garden, science, technology, entertainment and pop culture, says Plex. In addition to shows from the big-name brand partners, which also include Bonnier, TWiT, Ovation and more, there will also be a handful of shows from indie creators, like Epic Meal Time, ASAPscience, Household Hacker, People are Awesome, and The Pet Collective.

Plex tells us that there will be over 19,000 episodes across 67 shows at launch, with more on the way.

Plex also says it has revenue share agreements in place with some partners, based on ad sales that Plex manages. The details are undisclosed.

Plex got its start as software for organizing users’ home media collections of video, music, and photos, but has in recent months been shifting its focus to address the needs of cord cutters instead. It launched tools for watching live TV through an antenna, and recording shows and movies to a DVR.

It’s also recently said it’s shutting down other features, like support for streaming content from Plex Cloud as well as Plex’s directory of plugins, in order to better focus on its new ambitions.

Along the way, Plex has also rolled out other features designed for media consumption, including not only podcasts but also a news hub within its app, thanks to its acquisition of streaming news startup Watchup.

With the launch of Web Shows, Plex is again finding a way to give its users something to watch without having to make the sort of difficult content deals that other live TV streaming services today do – like Sling TV, PlayStation Vue, or YouTube TV, for example.

“We really care about each user’s personal media experience, and want to be ‘the’ platform for the media that matters most to them,” Plex CEO Keith Valory tells TechCrunch. “This started with helping people with their own personal media libraries, and then we added over-the-air television, news, podcasts, and now web shows. Sources for quality digital content continues to explode, but the user experience for discovering and accessing it all has never been worse. It’s chaos,” he continues.

“This is the problem we are solving. We’re working hard to make tons of great content available in one beautiful app, and giving our users the tools to customize their own experience to include only the content that matters to them,” Valory adds.

Access to Web Shows is available across devices through the Plex app, where there’s now a new icon for “Web Shows.” From here, you’ll see the familiar “On Deck” section – those you’re following – as well as personalized recommendations, trending episodes, and links to view all the available web shows and a list of those you’re subscribed to.

You can also browse shows by category – like “Arts & Entertainment,” “Computers & Electronics,” “Science,” etc. Or find those that were “Recently Played” or are “New” to Plex.

Each episode will display information like the show’s length, synopsis, publish date and title, and will let you play, pause, mark as watched/unwatched, and add to your queue.

The launch of Web Shows is another step Plex is taking toward its new goal of becoming a platform for all your media – not just your home collection, but everything that streams, too – like podcasts, web series and TV. (All it’s missing now is a Roku-like interface for jumping into your favorite on-demand streaming apps. That’s been on its long-term roadmap, however.)

Web Shows will be free to all of Plex’s now 19 million registered users, not just Plex Pass subscribers. The feature is live on iOS, Android, Android TV, and Apple TV.

Update This Classic Children’s Toy With a Raspberry Pi

The content below is taken from the original ( Update This Classic Children’s Toy With a Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

Everyone under the age of 60 remembers the frustration of trying to generate a work of art on an Etch A Sketch. The mechanical drawing toy, introduced in 1960 by the Ohio Art […]

The post Update This Classic Children’s Toy With a Raspberry Pi appeared first on Geek.com.

Microsoft Ignite – New Windows 10 Features Coming to Intune

The content below is taken from the original ( Microsoft Ignite – New Windows 10 Features Coming to Intune), to continue reading please visit the site. Remember to respect the Author & Copyright.

Intune plays an important part in Microsoft’s modern desktop strategy, allowing organizations to deploy and manage Windows 10 without an on-premises Active Directory domain. Microsoft announced several new features that will make it easier to manage Windows 10 using Intune.

Deploy Win32 Apps with Intune

Currently in preview, Microsoft announced the ability to deploy ‘most’ legacy Win32 apps using the Intune Management Extension, including MSI, setup.exe, and MSP files. System administrators will also be able to use Intune to remove these apps. Intune already had the ability to install line-of-business (LOB) and Microsoft Store apps but this new capability will enable businesses to manage more legacy business apps using Intune. LOB applications are those that rely on a single MSI file with no external dependencies.

Microsoft says that this new feature was built by the same team that created the Windows app deployment capabilities in System Center Configuration Manager (SCCM) and that Intune will be able to evaluate requirement rules before an app starts to download and install, notifying users via the Action Center of install status and if a reboot is required. Legacy Win32 apps are packaged using the Intune Win32 application packaging tool, which converts installers into .intunewin files.

Security Baselines

Microsoft publishes security baselines for supported client and server versions of Windows as part of the Security Compliance Toolkit (SCT), which replaced the Security Compliance Manager. But the baselines are provided as Group Policy Object backups, which can’t be used with Intune because it relies on Mobile Device Management (MDM) rather than Group Policy.

For more information on SCT, see Microsoft Launches the Security Compliance Toolkit 1.0 on Petri.

To help organizations meet security requirements when using Intune to manage Windows 10, Microsoft will be making security baselines available in the Intune portal over the next couple of weeks. The baselines will be updated and maintained in the cloud and have been developed in coordination with the same team that develops the Group Policy security baselines.

Organizations will be able to deploy the baselines as provided or modify them to suit their own needs. But the best news is that Intune will validate whether devices are compliant and report if any devices aren’t meeting the required standards.

Third-Party Certification Authority Support

Finally, Microsoft announced that third-party certification authority (CA) support is coming to Intune. Third-party CAs, including Entrust Datacard and GlobalSign, have already signed up to deliver certificates to mobile devices running Windows, iOS, Android, and macOS using the Simple Certificate Enrollment Protocol (SCEP).

Microsoft is planning to add many new features to Intune, including a public preview for Android Enterprise fully managed devices, machine risk-based conditional access with threat protection for Microsoft 365 users, and deeper integration with Outlook mobile controls. For a complete list of the new features and improvements coming to Intune, Configuration Manager, and Microsoft 365, check out Microsoft’s blog post here.

The post Microsoft Ignite – New Windows 10 Features Coming to Intune appeared first on Petri.

HPE Simplifies Hybrid Cloud Data Protection With New Solutions for HPE Nimble Storage and HPE 3PAR

The content below is taken from the original ( HPE Simplifies Hybrid Cloud Data Protection With New Solutions for HPE Nimble Storage and HPE 3PAR), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hewlett Packard Enterprise (HPE) today announced new hybrid cloud data protection and copy data management solutions for its intelligent storage… Read more at VMblog.com.

The drywall-installing robot you’ve always dreamed about is finally here

The content below is taken from the original ( The drywall-installing robot you’ve always dreamed about is finally here), to continue reading please visit the site. Remember to respect the Author & Copyright.

The HRP-5P is a humanoid robot from Japan’s Advanced Industrial Science and Technology institute that can perform common construction tasks, including — as we see above — installing drywall.

HRP-5P — maybe we can call it Herb? — uses environmental measurement, object detection and motion planning to perform various tasks.

Ever had to install large sections of drywall and wondered if there wasn’t a machine available that could do it for you while you take care of a bowl of nachos? Well, now there is: Japanese researchers have developed a humanoid worker robot, HRP-5P, which appears to be capable of performing the backbreaking task over and over again without breaking a sweat.

Sure, the robot still needs to pick up the pace a bit to meet construction deadlines, but it’s a start, and the machine could—maybe, one day—become a helpful tool in Japan’s rapidly aging society, where skilled workers are becoming increasingly rare.

Ooof, these are heavy!

Let’s see…this one goes here I guess.

Done and done! Now let’s see what my buddies from Skynet are up to this afternoon.

All images via 産総研広報’s video on YouTube.

How to Optimize Amazon S3 Performance

The content below is taken from the original ( How to Optimize Amazon S3 Performance), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon S3 is one of the most common storage options for many organizations; being object storage, it is used for a wide variety of data types, from the smallest objects to huge datasets. All in all, Amazon S3 is a great service for storing a wide scope of data types in a highly available and resilient environment. Your S3 objects are likely being read and accessed by your applications, other AWS services, and end users, but is your setup optimized for the best performance? This post will discuss some of the mechanisms and techniques that you can apply to ensure you are getting the most optimal performance when using Amazon S3.

How to optimize Amazon S3 performance: Four best practices

 

Four best practices when working with S3

1. TCP Window Scaling

This is a method that enables you to enhance your network throughput performance by modifying the header within the TCP packet with a window scale, which allows you to send data in a single segment larger than the default 64 KB. This isn’t specific to Amazon S3; it operates at the protocol level, so you can perform window scaling on your client when connecting to any other server that supports it. More information on this can be found in RFC-1323.

When TCP establishes a connection between a source and destination, a 3-way handshake takes place, originating from the source (client). So, looking at this from an S3 perspective, your client might need to upload an object to S3. Before this can happen, a connection to the S3 servers needs to be created. The client will send a TCP packet with a specified TCP window scale factor in the header; this initial TCP request is known as a SYN request, part 1 of the 3-way handshake. S3 will receive this request and respond with a SYN/ACK message back to the client with its supported window scale factor; this is part 2. Part 3 then involves an ACK message back to the S3 server acknowledging the response. On completion of this 3-way handshake, a connection is established and data can be sent between the client and S3.

By increasing the window size with a scale factor (window scaling), you can send larger quantities of data in a single segment and therefore send more data at a quicker rate.

Window Scaling
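Window scaling is negotiated by the operating system, but you can encourage it from application code by requesting a large socket buffer before connecting. Here is a small, hedged sketch (not from the original post) that assumes a Linux client with net.ipv4.tcp_window_scaling enabled:

import socket

# Ask for a 4 MB receive buffer before connecting; a buffer larger than
# 64 KB only pays off if the window scale option is negotiated during the
# SYN/SYN-ACK exchange.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
sock.connect(('s3.amazonaws.com', 443))

# The kernel may adjust the requested size; print what was actually granted.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()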

2. TCP Selective Acknowledgement (SACK)

Sometimes multiple packets can be lost when using TCP, and it can be difficult to ascertain which packets within a TCP window have been lost. As a result, all of the packets may be resent, even though some of them have already been received by the receiver, which is inefficient. TCP selective acknowledgement (SACK) helps performance by notifying the sender of only the failed packets within that window, allowing the sender to resend just those packets.

Again, the request to use SACK has to be initiated by the sender (the source client) during the SYN phase of the connection handshake. This option is known as SACK-permitted. More information on how to use and implement SACK can be found within RFC-2018.
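SACK is a kernel-level option rather than something you set per request, and on most modern Linux clients it is already enabled. As a quick, hedged check (assuming a Linux host), you can read the setting directly:

# Check whether TCP selective acknowledgement is enabled (Linux only).
with open('/proc/sys/net/ipv4/tcp_sack') as f:
    print('SACK enabled' if f.read().strip() == '1' else 'SACK disabled')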

3. Scaling S3 Request Rates

On top of TCP window scaling and SACK, S3 itself is already highly optimized for a very high request throughput. In July 2018, AWS made a significant change to these request rates, as per the following AWS S3 announcement. Prior to this announcement, AWS recommended that you randomize prefixes within your bucket to help optimize performance; this is no longer required. You can now achieve exponential growth of request rate performance by using multiple prefixes within your bucket.

You are now able to achieve 3,500 PUT/POST/DELETE requests per second along with 5,500 GET requests per second. These limits apply per prefix; however, there is no limit on the number of prefixes that can be used within an S3 bucket. As a result, if you had 20 prefixes you could reach 70,000 PUT/POST/DELETE and 110,000 GET requests per second within the same bucket.

S3 storage operates across a flat structure, meaning that there are no hierarchical folder levels; you simply have a bucket, and ALL objects are stored in a flat address space within that bucket. You are able to create folders and store objects within them, but these are not hierarchical; they are simply prefixes to the object which help make the object key unique. For example, say you have the following three data objects within a single bucket:
Presentation/Meeting.ppt
Project/Plan.pdf
Stuart.jpg

The ‘Presentation’ folder acts as a prefix to identify the object, and this pathname is known as the object key. Similarly, the ‘Project’ folder is the prefix to its object. ‘Stuart.jpg’ does not have a prefix and so can be found within the root of the bucket itself.
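As an illustrative sketch of how an application might exploit this (the bucket name and prefixes below are made up), rotating writes across several prefixes lets each prefix contribute its own request-rate allowance:

import concurrent.futures
import boto3

s3 = boto3.client('s3')
BUCKET = 'example-telemetry-bucket'                          # hypothetical bucket
PREFIXES = ['shard-a/', 'shard-b/', 'shard-c/', 'shard-d/']  # hypothetical prefixes

def upload(i, payload):
    # Spread objects across prefixes so aggregate PUT throughput can scale
    # beyond the 3,500 requests per second available to a single prefix.
    key = '{}object-{}.json'.format(PREFIXES[i % len(PREFIXES)], i)
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)

with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    for i in range(1000):
        pool.submit(upload, i, b'{"example": true}')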

Learn how to create your first Amazon S3 bucket in this Hands-on Lab.

4. Integration of Amazon CloudFront

Another method used to help with optimization, which is by design, is to incorporate Amazon CloudFront with Amazon S3. This works particularly well if the main requests to your S3 data are GET requests. Amazon CloudFront is AWS’s content delivery network, which speeds up the distribution of your static and dynamic content through its worldwide network of edge locations.

Normally, when a user requests content from S3 (a GET request), the request is routed to the S3 service and corresponding servers to return that content. However, if you’re using CloudFront in front of S3, then CloudFront can cache commonly requested objects. The GET request from the user is then routed to the closest edge location, which provides the lowest latency to deliver the best performance and return the cached object. This also helps to reduce your AWS S3 costs by reducing the number of GET requests to your buckets.

This post has explained a number of options that are available to help you optimize performance when working with S3 objects.

For further information on some of the topics mentioned in this post please take a look at our library content.

The post How to Optimize Amazon S3 Performance appeared first on Cloud Academy.

Adding custom intelligence to Gmail with serverless on GCP

The content below is taken from the original ( Adding custom intelligence to Gmail with serverless on GCP), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you are using G Suite at work, you probably have to keep track of tons of data spread across Gmail, Drive, Docs, Sheets and Calendar. If only there was a simple, but scalable way to tap this data and have it nudge you based on signals of your choice. In this blog post, we’ll show you how to build powerful Gmail extensions using G Suite’s REST APIs, Cloud Functions and other fully managed services on Google Cloud Platform (GCP).

gsute_cloudfunction.png

There are many interesting use cases for GCP and G Suite. For example, you could mirror your Google Drive files into Cloud Storage and run it through the Cloud Data Loss Prevention API. You could train your custom machine learning model with Cloud AutoML. Or you might want to export your Sheets data into Google BigQuery to merge it with other datasets and run analytics at scale. In this post, we’ll use Cloud Functions to specifically talk to Gmail via its REST APIs and extend it with various GCP services. Since email remains at the heart of how most companies operate today, it’s a good place to start and demonstrate the potential of these services working in concert.

Architecture of a custom Gmail extension

High email volumes can be hard to manage. Many email users have some sort of system in place, whether it’s embracing the “inbox zero,” setting up an elaborate system of folders and flags, or simply flushing that inbox and declaring “email bankruptcy” once in a while.

Some of us take it one step further and ask senders to help us prioritize our emails: consider an auto-response like “I am away from my desk, please resend with URGENT123 if you need me to get back to you right away.” In the same vein, you might think about prioritizing incoming email messages from professional networks such as LinkedIn by leaving a note inside your profile such as “I pay special attention to emails with pictures of birds.” That way, you can auto-prioritize emails from senders who (ostensibly) read your entire profile.

gmail_star.png
The email with a picture of an eagle is starred, but the one with a picture of a ferret is not.

Our sample app does exactly this, and we’re going to fully describe what this app is, how we built it, and walk through some code snippets.

Here is the architectural diagram of our sample app:

architectural_diagram.png

There are three basic steps to building our Gmail extension:

Gmail_extension.png

Without further ado, let’s dive in!

How we built our app

Building an intelligent Gmail filter involves three major steps, which we’ll walk through in detail below. Note that these steps are not specific to Gmail—they can be applied to all kinds of different G Suite-based apps.

Step 1. Authorize access to G Suite data

The first step is establishing initial authentication between G Suite and GCP. This is a universal step that applies to all G Suite products, including Docs, Slides, and Gmail.

GSuite_data.png

In order to authorize access to G Suite data (without storing the user’s password), we need to get an authorization token from Google servers. To do this, we can use an HTTP function to generate a consent form URL using the OAuth2Client:

This function redirects a user to a generated URL that presents a form to the user. That form then redirects to another “callback” URL of our choosing (in this case, oauth2callback) with an authorization code once the user provides consent. We save the auth token to Cloud Datastore, Google Cloud’s NoSQL database. We’ll use that token to fetch email on the user’s behalf later. Storing the token outside of the user’s browser is necessary because Gmail can publish message updates to a Cloud Pub/Sub topic, which triggers functions such as onNewMessage. (For those of you who aren’t familiar with Cloud Pub/Sub, it’s a GCP distributed notification and messaging system that guarantees at-least-once delivery.)
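The demo’s snippet (written for Cloud Functions in Node.js) isn’t reproduced in this excerpt; as a rough sketch of the same idea in Python, with placeholder file and endpoint names, such an HTTP function could look like this:

import flask
from google_auth_oauthlib.flow import Flow

SCOPES = ['https://www.googleapis.com/auth/gmail.modify']
REDIRECT_URI = 'https://REGION-PROJECT.cloudfunctions.net/oauth2callback'  # placeholder

def oauth2init(request):
    # Build the OAuth consent URL and send the user to Google's consent screen.
    flow = Flow.from_client_secrets_file(
        'client_secret.json', scopes=SCOPES, redirect_uri=REDIRECT_URI)
    auth_url, _ = flow.authorization_url(access_type='offline', prompt='consent')
    return flask.redirect(auth_url)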

Let’s take a look at the oauth2callback function:

This code snippet uses the following helper functions:

  • getEmailAddress: gets a user’s email address from an OAuth 2 token

  • fetchToken: fetches OAuth 2 tokens from Cloud Datastore (and auto-refreshes them if they are expired)

  • saveToken: saves OAuth 2 tokens to Cloud Datastore
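The callback code itself also didn’t carry over; continuing the Python sketch above (so Flow, SCOPES, and REDIRECT_URI are reused, and all names remain illustrative), the callback exchanges the authorization code for tokens, looks up the user’s address, and persists the token in Cloud Datastore:

from google.cloud import datastore
from googleapiclient.discovery import build

datastore_client = datastore.Client()

def oauth2callback(request):
    # Exchange the authorization code returned by the consent screen for tokens.
    flow = Flow.from_client_secrets_file(
        'client_secret.json', scopes=SCOPES, redirect_uri=REDIRECT_URI)
    flow.fetch_token(code=request.args.get('code'))
    credentials = flow.credentials

    # Find out which account the token belongs to.
    email = build('oauth2', 'v2', credentials=credentials) \
        .userinfo().get().execute()['email']

    # Save the tokens, keyed by email address, for later use by other functions.
    entity = datastore.Entity(key=datastore_client.key('Token', email))
    entity.update({'token': credentials.token,
                   'refresh_token': credentials.refresh_token})
    datastore_client.put(entity)
    return 'Authorized {}. You can close this tab.'.format(email)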

Now, let’s move on to subscribing to Gmail changes by initializing a Gmail watch.

Step 2: Initialize a ‘watch’ for Gmail changes

watch_gmail_changes.png

Gmail provides the watch mechanism for publishing new email notifications to a Cloud Pub/Sub topic. When a user receives a new email, a message will be published to the specified Pub/Sub topic. We can use this message to invoke a Pub/Sub-triggered Cloud Function that processes the incoming message.

In order to start listening to incoming messages, we must use the OAuth 2 token that we obtained in Step 1 to first initialize a Gmail watch on a specific email address,  as shown below. One important thing to note is that the G Suite API client libraries don’t support promises at the time of writing, so we use pify to handle the conversion of method signatures.
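The watch-initialization snippet from the original (also Node.js) isn’t shown here either; the equivalent call, sketched in Python with a placeholder Pub/Sub topic name, is roughly:

from googleapiclient.discovery import build

def init_watch(credentials, email_address):
    # Ask Gmail to publish notifications for new INBOX messages to a
    # Cloud Pub/Sub topic (the topic name below is a placeholder).
    gmail = build('gmail', 'v1', credentials=credentials)
    body = {
        'labelIds': ['INBOX'],
        'topicName': 'projects/MY_PROJECT/topics/gmail-messages',
    }
    return gmail.users().watch(userId=email_address, body=body).execute()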

Now that we have initialized the Gmail watch, there is an important caveat to consider: the Gmail watch only lasts for seven days and must be re-initialized by sending an HTTP request containing the target email address to initWatch. This can be done either manually or via a scheduled job. For brevity, we used a manual refresh system in this example, but an automated system may be more suitable for production environments. Users can initialize this implementation of Gmail watch functionality by visiting the oauth2init function in their browser. Cloud Functions will automatically redirect them (first to oauth2callback, and then initWatch) as necessary.

We’re ready to take action on the emails that this watch surfaces!  

Step 3: Process and take action on emails

Now, let’s talk about how to process incoming emails. The basic workflow is as follows:

take_action.png

As discussed earlier, Gmail watches publish notifications to a Cloud Pub/Sub topic whenever new email messages arrive. Once those notifications arrive, we can fetch the new message IDs using the gmail.users.messages.list API call, and their contents using gmail.users.messages.get. Since we’re only processing email content once, we don’t need to store any data externally.

Once our function extracts the images within an email, we can use the Cloud Vision API to analyze these images and check if they contain a specific object (such as birds or food). The API returns a list of object labels (as strings) describing the provided image. If that object label list contains bird, we can use the gmail.users.messages.modify API call to mark the message as “Starred”.

This code snippet uses the following helper functions:

  • listMessageIds: fetches the most recent message IDs using gmail.users.messages.list

  • getMessageById: gets the most recent message given its ID using gmail.users.messages.get

  • getLabelsForImages: detects object labels in each image using the Cloud Vision API

  • labelMessage: adds a Gmail label to the message using gmail.users.messages.modify

Please note that we’ve abstracted most of the boilerplate code into the helper functions. If you want to take a deeper dive, the full code can be seen here along with the deployment instructions.
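To make the flow concrete, here is a loose Python approximation of the pipeline described above (the published demo is Node.js; extract_images is a hypothetical stand-in for walking the MIME parts and decoding image attachments, and is not shown):

from googleapiclient.discovery import build
from google.cloud import vision

def on_new_message(credentials, email_address):
    gmail = build('gmail', 'v1', credentials=credentials)
    annotator = vision.ImageAnnotatorClient()

    # Fetch the IDs of the most recent messages, then inspect each one.
    listing = gmail.users().messages().list(
        userId=email_address, maxResults=5).execute()
    for ref in listing.get('messages', []):
        message = gmail.users().messages().get(
            userId=email_address, id=ref['id']).execute()

        for image_bytes in extract_images(message):  # hypothetical helper
            labels = annotator.label_detection(
                image=vision.Image(content=image_bytes)).label_annotations
            if any(label.description.lower() == 'bird' for label in labels):
                # Star the message by adding Gmail's built-in STARRED label.
                gmail.users().messages().modify(
                    userId=email_address, id=ref['id'],
                    body={'addLabelIds': ['STARRED']}).execute()
                break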

Wrangle your G Suite data with Cloud Functions and GCP

To recap, in this tutorial we built a scalable application that processes new email messages as they arrive to a user’s Gmail inbox and flags them as important if they contain a picture of a bird. This is of course only one example. Cloud Functions makes it easy to extend Gmail and other G Suite products with GCP’s powerful data analytics and machine learning capabilities—without worrying about servers or backend implementation issues. With only a few lines of code, we built an application that automatically scales up and down with the volume of email messages in your users’ accounts, and you will only pay when your code is running. (Hint: for small-volume applications, it will likely be very cheap—or even free—thanks to Google Cloud Platform’s generous Free Tier.)

To learn more about how to programmatically augment your G Suite environment with GCP, check out our Google Cloud Next ‘18 session G Suite Plus GCP: Building Serverless Applications with All of Google Cloud, for which we developed this demo. You can watch that entire talk here:

G Suite Plus GCP: Building Serverless Applications with All of Google Cloud (Cloud Next '18)

You can find the source code for this project here.

Happy building!

Azure Advisor has new recommendations for you

The content below is taken from the original ( Azure Advisor has new recommendations for you), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Advisor is your free, personalized guide to Azure best practices. It analyzes your Azure usage and configurations and helps you optimize your resources for high availability, security, performance, and cost. We’re constantly adding more to Advisor and are excited to share a bundle of new recommendations and integrations so you can get more out of Azure.

blog-image-1 (1)

Create or update table statistics in your SQL Data Warehouse tables

Table statistics are important for ensuring optimal query performance. The SQL Data Warehouse query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result, which generates a higher-quality query plan for faster performance.

Advisor now has recommendations to help you boost your SQL Data Warehouse query performance. It will identify tables with outdated or missing table statistics and recommend that you create or update them.

Remove data skew on your SQL Data Warehouse table

Data skew occurs when one distribution has more data than others and can cause unnecessary data movement or resource bottlenecks when running your workload, slowing your performance. Advisor will detect distribution data skew greater than 15 percent and recommend that you redistribute your data, and revisit your table distribution key selections.

Enable soft delete on your Azure Storage blobs

Enable soft delete on your storage account so that deleted Azure Storage blobs transition to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. This allows you to recover in the event of accidental deletion or overwrites. Advisor now identifies Azure Storage accounts that don’t have soft delete enabled and suggests you enable it.

Migrate your Azure Storage account to Azure Resource Manager

Azure Resource Manager (ARM) is the most up-to-date way to manage Azure resources, with template deployments, additional security options, and the ability to upgrade to a GPv2 account to utilize Azure Storage’s latest features. Advisor will identify any stand-alone Storage accounts that are using the classic deployment model and recommend migrating them to the ARM deployment model.

Create additional Azure ExpressRoute circuits for customers using Microsoft Peering for Office 365

Customers using Microsoft Peering for Office 365 should have at least two ExpressRoute circuits at different locations to avoid having a single point of failure. Advisor will identify when there is only one ExpressRoute circuit and recommend creating another.

Azure Advisor is now integrated into the Azure Virtual Machines (VMs) experience

When you are viewing your VM resources, you will now see a notification if you have Azure Advisor recommendations that are related to that resource. There will be a blue notification at the top of the experience that indicates the number of Advisor recommendations you have and the description of one of those recommendations. Clicking on the notification will take you to the full Advisor experience where you can see all the recommendations for that resource.

blog-image-2

blog-image-3

Azure Advisor recommendations are available in Azure Cost Management

Azure Advisor recommendations are now integrated in the new Azure Cost Management experience that is in public preview for Enterprise Agreement (EA) enrollments. Clicking on Advisor recommendations on the left menu will open Advisor to the cost tab. Integrating Advisor with Azure Cost Management creates a single location for cost recommendations. This allows you to have the same experience whether you are coming from Azure Cost Management or looking at cost recommendations directly from Azure Advisor.

blog-image-4

Review your Azure Advisor recommendations

Learn more about Azure Advisor and review your Advisor recommendations in the Azure portal today to start optimizing your Azure resources for high availability, security, performance, and cost. For help getting started, visit the Advisor documentation.

The Google graveyard: Remembering three dead search engines

The content below is taken from the original ( The Google graveyard: Remembering three dead search engines), to continue reading please visit the site. Remember to respect the Author & Copyright.

Buffy the Vampire Slayer was the first show on American television to use the word "Google" as a transitive verb. It was 2002, in the fourth episode of the show's seventh and final season. Buffy, Willow, Xander and the gang are trying to help Cassie,…

Announcing private preview of Azure VM Image Builder

The content below is taken from the original ( Announcing private preview of Azure VM Image Builder), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today I am excited to announce the private preview of Azure VM Image Builder, a service which allows users to have an image building pipeline in Azure. Creating standardized virtual machine (VM) images allows organizations to migrate to the cloud and ensures consistency in their deployments. Users commonly want VMs to include predefined security and configuration settings as well as application software they own. However, setting up your own image build pipeline would require infrastructure and setup. With Azure VM Image Builder, you can take an ISO or Azure Marketplace image and start creating your own golden images in a few steps.

How it works

Azure VM Image Builder lets you start with either a Linux-based Azure Marketplace VM or Red Hat Enterprise Linux (RHEL) ISO and begin to add your own customizations. Your customizations can be added in the form of a shell script, and because the VM Image Builder is built on HashiCorp Packer, you can also import your existing Packer shell provisioner scripts. As the last step, you specify where you would like your images hosted, either in the Azure Shared Image Gallery or as an Azure Managed Image. See below for a quick video on how to create a custom image using the VM Image Builder.

part1v2

For the private preview, we are supporting these key features:

  • Migrating an existing image customization pipeline to Azure. Import your existing shell scripts or Packer shell provisioner scripts.
  • Migrating your Red Hat subscription to Azure using Red Hat Cloud Access. Automatically create Red Hat Enterprise Linux VMs with your eligible, unused Red Hat subscriptions.
  • Integration with Azure Shared Image Gallery for image management and distribution
  • Integration with existing CI/CD pipeline. Simplify image customization as an integral part of your application build and release process as shown here:

part2v2

If you are attending Microsoft Ignite, feel free to join us at breakout session BRK3193 to learn more about this service.

Frequently asked questions

Will Azure VM Image Builder support Windows?

For private preview, we will support Azure Marketplace Linux images (specifically Ubuntu 16.04 and 18.04). Support for Windows VM is on our roadmap.

Can I integrate Azure VM Image Builder into my existing image build pipeline?

You can call the VM Image Builder API from your existing tooling.

Is VM Image Builder essentially Packer as a Service?

The VM Image Builder API shares a similar style with Packer manifests and is optimized to support building images for Azure, supporting Packer shell provisioner scripts.

Do you support image lifecycle management in the preview?

For private preview, we will only support creation of images, but not ongoing updates. The ability to update an existing custom image is on our roadmap.

How much does VM Image Builder cost?

For private preview, Azure VM Image Builder is free. Azure Storage used by images is billed at standard pricing rates.

Sign up today for the private preview

I hope you sign up for the private preview and give us feedback. Register and we will begin sending out more information in October.

Azure Active Directory authentication for Azure Files SMB access now in public preview

The content below is taken from the original ( Azure Active Directory authentication for Azure Files SMB access now in public preview), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are excited to announce the preview of Azure Active Directory authentication for Azure Files SMB access, leveraging Azure AD Domain Services (AAD DS). Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard SMB protocol. Integration with AAD enables SMB access to Azure file shares using AAD credentials from AAD DS domain-joined Windows VMs. In addition, Azure Files supports preserving, inheriting, and enforcing Microsoft NTFS ACLs on all folders and files in a file share.

With this capability, we can extend the traditional identity-based share access experience that you are most familiar with to Azure Files. For lift and shift scenarios, you can sync on-premises AD to AAD, migrate existing files with ACLs to Azure Files, and enable your organization to access file shares with the same credentials with no impact to the business.

In addition to this, we have enhanced our access control story by enforcing granular permission assignment on the share, folder, and file levels. You can use Azure Files as the storage solution for project collaboration, leveraging folder or file level ACLs to protect your organization’s sensitive data.

Previously, when you imported files to Azure file shares, only the data was preserved, not the ACLs. If you used Azure Files as a cloud backup, all access assignments would be lost when you restored your existing file shares from Azure Files. Now, Azure Files can preserve your ACLs along with your data, providing you a consistent storage experience.

image

Here are the key capabilities introduced in this preview:

  • Support share-level permission assignment using role-based access control (RBAC).

Similar to the traditional Windows file sharing scheme, you can give authorized users share-level permissions to access your Azure file share.

  • Enforce NTFS folder- and file-level permissions.

Azure Files enforces standard NTFS file permissions at the folder and file level, including the root directory. You can simply use the icacls or PowerShell command-line tools to set or change permissions on mounted file shares.

  • Continue to support storage account key for Super User experience.

Mounting Azure file shares using the storage account key will continue to be supported for the superuser scenario. It bypasses all access control restrictions configured at the share, folder, or file level.

  • Preserve NTFS ACLs for data import to Azure Files.

We support preserving NTFS ACLs for data import to Azure Files over SMB. You can copy your directories and files along with their ACLs simply by using the robocopy command (for example, with the /SEC or /COPYALL options).
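
Besides icacls, PowerShell, and robocopy, NTFS ACLs on a mounted Azure file share can also be managed programmatically. The following is a minimal Java NIO sketch only, assuming a Windows VM joined to the AAD DS domain with the share mounted as drive Z:; the folder path and the principal name CONTOSO\alice are hypothetical placeholders, and the AAD DS identity must resolve as a Windows principal on the VM.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.AclEntry;
    import java.nio.file.attribute.AclEntryPermission;
    import java.nio.file.attribute.AclEntryType;
    import java.nio.file.attribute.AclFileAttributeView;
    import java.nio.file.attribute.UserPrincipal;
    import java.util.List;

    public class ShareAclExample {
        public static void main(String[] args) throws IOException {
            // Folder on the mounted Azure file share (assumes the share is mounted as drive Z:).
            Path folder = Paths.get("Z:\\projects\\reports");

            // Resolve the AAD DS-backed identity as a Windows principal (hypothetical name).
            UserPrincipal user = folder.getFileSystem()
                    .getUserPrincipalLookupService()
                    .lookupPrincipalByName("CONTOSO\\alice");

            // Read the folder's current ACL.
            AclFileAttributeView view =
                    Files.getFileAttributeView(folder, AclFileAttributeView.class);
            List<AclEntry> acl = view.getAcl();

            // Build an ALLOW entry granting read access and prepend it to the ACL.
            AclEntry entry = AclEntry.newBuilder()
                    .setType(AclEntryType.ALLOW)
                    .setPrincipal(user)
                    .setPermissions(AclEntryPermission.READ_DATA,
                                    AclEntryPermission.READ_ATTRIBUTES,
                                    AclEntryPermission.READ_ACL)
                    .build();
            acl.add(0, entry);
            view.setAcl(acl);
        }
    }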

    Getting started

    You can read more about the benefits of Azure Files AAD authentication and follow this step-by-step guidance to get started. We support the Azure Files AAD integration public preview in a couple of selected regions. Also refer to the Azure Preview guidelines for general information on using preview features.

    Feedback

    We look forward to hearing your feedback on this feature; please email us at [email protected].

    A quick and easy way to set up an end-to-end IoT solution on Google Cloud Platform

    The content below is taken from the original ( A quick and easy way to set up an end-to-end IoT solution on Google Cloud Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

    In today’s world of dispersed and perpetually connected devices, many business operations involve receiving and processing large volumes of messages, both in batch (typically bulk operations on historical data) and in real time. In these contexts, the real differentiator is the back end, which must efficiently and reliably handle every request, allowing end users not only to access the data, but also to analyze it and extract useful insights.

    Google Cloud Platform (GCP) provides a holistic services ecosystem suitable for these types of workloads. In this article, we will present an example of a scalable, serverless Internet-of-Things (IoT) environment that runs on GCP to ingest, process, and analyze your IoT messages at scale.

    Our simulated scenario, in a nutshell

    In this post, we’ll simulate a collection of sensors, distributed across the globe, measuring city temperatures. Once collected, the data will be accessible to users, who will be able to:

    • Monitor city temperatures in real time

    • Perform analysis to extract insights (e.g. what is the hottest city in the world?)

    [Diagram: the simulated scenario]

    In the main user interface of our sample application, the user starts the simulation by clicking the “Start” button. The system then generates all 190 simulated devices (with corresponding city locations), which immediately start sensing and reporting data. The temperatures of different cities can then be monitored in real time directly in the UI by clicking the “Update” button and selecting the preferred marker. Once done, the simulation can be turned off by clicking the “Stop” button.

    Data is also streamed in parallel into a data warehouse solution for additional historical or batch analysis.

    Proposed architecture

    [Architecture diagram; the numbered components are referenced in the text below]

    The simulation, as explained above, starts from (1) App Engine, a fully managed PaaS for building scalable applications. In the App Engine frontend, the user triggers the generation of (simulated) devices, and through API calls the application creates several instances on (2) Google Compute Engine, the IaaS solution for managing virtual machines (VMs). The following steps are executed at the startup of each VM:

    • A public/private key pair is generated for each simulated device

    • For each simulated device, an instance of a (3) Java application is launched, which performs the following actions:

      • registration of the device in Cloud IoT Core, a fully managed service designed to easily and securely connect, manage, and ingest data from globally dispersed devices

      • generation of a series of temperatures for a specified city

      • encapsulation of the generated data into MQTT messages to make them available to Cloud IoT Core (a minimal sketch of this device-side flow follows this list)
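
    As a rough illustration of the device-side flow described in the list above, the sketch below connects to the Cloud IoT Core MQTT bridge and publishes a single simulated temperature reading. It is a minimal example only: it assumes the device has already been registered with Cloud IoT Core and that its RSA private key is available on disk, it uses the Eclipse Paho MQTT client and the jjwt library, and the project, region, registry and device IDs, key file name, and payload format are hypothetical placeholders. The actual implementation in the sample repository may differ.

        import io.jsonwebtoken.Jwts;
        import io.jsonwebtoken.SignatureAlgorithm;
        import org.eclipse.paho.client.mqttv3.MqttClient;
        import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
        import org.eclipse.paho.client.mqttv3.MqttMessage;
        import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.security.KeyFactory;
        import java.security.PrivateKey;
        import java.security.spec.PKCS8EncodedKeySpec;
        import java.util.Date;

        public class SimulatedSensor {

            // Hypothetical identifiers; replace with your own project, region, registry, and device.
            private static final String PROJECT_ID = "my-iot-project";
            private static final String REGION = "europe-west1";
            private static final String REGISTRY_ID = "temperature-registry";
            private static final String DEVICE_ID = "sensor-rome";

            // Cloud IoT Core expects the MQTT password to be a JWT signed with the device's
            // private key (RS256 here) and with the audience set to the project ID.
            private static String createJwt(PrivateKey key) {
                Date now = new Date();
                return Jwts.builder()
                        .setIssuedAt(now)
                        .setExpiration(new Date(now.getTime() + 60 * 60 * 1000)) // valid for 1 hour
                        .setAudience(PROJECT_ID)
                        .signWith(SignatureAlgorithm.RS256, key)
                        .compact();
            }

            public static void main(String[] args) throws Exception {
                // Load the device's RSA private key (PKCS#8 DER) generated at VM startup.
                byte[] keyBytes = Files.readAllBytes(Paths.get("rsa_private_pkcs8.der"));
                PrivateKey key = KeyFactory.getInstance("RSA")
                        .generatePrivate(new PKCS8EncodedKeySpec(keyBytes));

                // The MQTT bridge requires this exact client ID format.
                String clientId = String.format(
                        "projects/%s/locations/%s/registries/%s/devices/%s",
                        PROJECT_ID, REGION, REGISTRY_ID, DEVICE_ID);

                MqttConnectOptions options = new MqttConnectOptions();
                options.setMqttVersion(MqttConnectOptions.MQTT_VERSION_3_1_1);
                options.setUserName("unused"); // the bridge ignores the user name
                options.setPassword(createJwt(key).toCharArray());

                MqttClient client = new MqttClient(
                        "ssl://mqtt.googleapis.com:8883", clientId, new MemoryPersistence());
                client.connect(options);

                // Publish one simulated temperature reading to the device's telemetry topic.
                String payload = "{\"city\":\"Rome\",\"temperature\":24.5}";
                MqttMessage message = new MqttMessage(payload.getBytes(StandardCharsets.UTF_8));
                message.setQos(1);
                client.publish(String.format("/devices/%s/events", DEVICE_ID), message);

                client.disconnect();
            }
        }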

    Collected messages containing temperature values will then be published to a topic on (4) Cloud Pub/Sub, an enterprise message-oriented middleware. Here messages will be read in streaming mode by (5) Cloud Dataflow, a simplified stream and batch data processing solution, and then ingested into:

    • (1) Cloud Datastore, a highly scalable, fully managed NoSQL database

    • (6) BigQuery, a fast, highly scalable, cost-effective, and fully managed cloud data warehouse for analytics

    Cloud Datastore will save the data to be displayed directly in the UI of the App Engine application, while BigQuery will act as a data warehouse enabling more in-depth analysis.
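
    To make the Pub/Sub-to-BigQuery leg of the pipeline more concrete, here is a minimal Cloud Dataflow sketch written with the Apache Beam Java SDK (plus the org.json library for parsing). The topic name, table name, schema, and JSON payload format are assumptions for illustration; the actual pipeline in the repository may differ and also writes to Cloud Datastore, which this sketch omits.

        import com.google.api.services.bigquery.model.TableFieldSchema;
        import com.google.api.services.bigquery.model.TableRow;
        import com.google.api.services.bigquery.model.TableSchema;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.options.StreamingOptions;
        import org.apache.beam.sdk.transforms.MapElements;
        import org.apache.beam.sdk.transforms.SimpleFunction;
        import org.json.JSONObject;

        import java.util.Arrays;

        public class TemperaturePipeline {
            public static void main(String[] args) {
                StreamingOptions options = PipelineOptionsFactory
                        .fromArgs(args).withValidation().as(StreamingOptions.class);
                options.setStreaming(true);
                Pipeline p = Pipeline.create(options);

                // Target table layout: one row per temperature reading.
                TableSchema schema = new TableSchema().setFields(Arrays.asList(
                        new TableFieldSchema().setName("city").setType("STRING"),
                        new TableFieldSchema().setName("temperature").setType("FLOAT")));

                p.apply("ReadFromPubSub", PubsubIO.readStrings()
                                .fromTopic("projects/my-iot-project/topics/temperatures"))
                 .apply("ToTableRow", MapElements.via(new SimpleFunction<String, TableRow>() {
                     @Override
                     public TableRow apply(String json) {
                         // Parse the JSON telemetry message produced by the simulated devices.
                         JSONObject reading = new JSONObject(json);
                         return new TableRow()
                                 .set("city", reading.getString("city"))
                                 .set("temperature", reading.getDouble("temperature"));
                     }
                 }))
                 .apply("WriteToBigQuery", BigQueryIO.writeTableRows()
                         .to("my-iot-project:iot_dataset.temperatures")
                         .withSchema(schema)
                         .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                         .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

                p.run();
            }
        }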

    All the logs generated by the components will then be ingested and monitored via (9) Stackdriver, a monitoring and management tool for services, containers, applications, and infrastructure. Permissions and access will be managed via (9) Cloud IAM, a fine-grained access control and visibility tool for centrally managing cloud resources.

    Deploy the application in your own environment

    The application, written mostly in Java, is available in this GitHub repository. Detailed instructions on how to deploy the solution in a dedicated Google Cloud Platform project can be found directly in the repository’s README.

    Please note that in deploying this solution you may incur some charges.

    Next steps

    This application is just one example of the multitude of solutions you can develop and deploy on Google Cloud Platform, leveraging the vast number of available services. A possible evolution could be the design of a simple Google Data Studio dashboard to visualize temperature trends, or the implementation of a machine learning model that predicts future temperatures.

    We’re eager to learn what new IoT features you will implement in your own forks!

    License caps and CCTV among ride-hailing rule changes urged in report to UK gov’t

    The content below is taken from the original ( License caps and CCTV among ride-hailing rule changes urged in report to UK gov’t), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Uber and similar services could be facing caps on the number of licenses for vehicles that can operate ride-hailing services in London and other UK cities under rule changes being recommended to the government.

    Another suggestion is that CCTV be universally installed inside licensed taxis and private hire vehicles for safety purposes.

    A working group in the UK’s Department for Transport has published a report urging a number of changes intended to modernize the rules around taxis and private hire vehicles to take account of app-based technology, which has piled extra pressure on already long-outdated rules.

    In addition to suggesting that local licensing authorities should have the power to cap vehicle licenses, the report includes a number of other recommendations that could also have an impact on ride-hailing businesses, such as calling for drivers to be able to speak and write English to a standard that would include being able to deal with “emergency and other challenging situations”; and suggesting CCTV should be universally installed in both taxis and PHVs (“subject to strict data protection measures”) — to mitigate safety concerns for passengers and drivers.

    The report supports maintaining the current two-tier system, so keeping a distinction between ‘plying for hire’ and ‘prebooking’, although it notes that technological advancement has “blurred the distinction between the two trades” — and so suggests the introduction of a statutory definition of both.

    “This definition should include reviewing the use of technology and vehicle ‘clustering’ as well as ensuring taxis retain the sole right to be hailed on streets or at ranks. Government should convene a panel of regulatory experts to explore and draft the definition,” it suggests.

    Legislation for national minimum standards for taxi and PHV licensing — for drivers, vehicles and operators — is another recommendation, though with licensing authorities left free to set additional higher standards if they wish.

    The report, which has 34 recommendations in all, also floats the idea that how companies treat drivers, in terms of pay and working conditions, should be taken into account by licensing authorities when they are determining whether or not to grant a license.

    The issues of pay and exploitation by gig economy platform operators have risen up the political agenda in the UK in recent years — following criticism over safety and a number of legal challenges related to employment rights, such as a 2016 employment tribunal ruling against Uber. (Its first appeal also failed.)

    “The low pay and exploitation of some, but not all, drivers is a source of concern,” the report notes. “Licensing authorities should take into account any evidence of a person or business flouting employment law, and with it the integrity of the National Living Wage, as part of their test of whether that person or business is ‘fit and proper’ to be a PHV or taxi operator.”

    UK MP Frank Field, who this summer published a critical report on working conditions for Deliveroo riders, said the recommendations in the working group’s report put Uber “on notice”.

    “In my view, operators like Uber will need to initiate major improvements in their drivers’ pay and conditions if they are to be deemed ‘fit and proper’,” he said in a response statement. “The company has been put on notice by this report.”

    Though the report’s recommendations on this front do not go far enough for some. Also responding in a statement, the chair of the IWGB’s UPHD branch, James Farrar — who was one of the former Uber drivers who successfully challenged the company at an employment tribunal — criticized the lack of specific minimum wage guarantees for drivers.

    “While the report has some good recommendations, it fails to deal with the most pressing issue for minicab drivers — the chronic violation of minimum wage laws by private hire companies such as Uber,” he said. “By proposing to give local authorities the power to cap vehicle licenses rather than driver licenses, the recommendations risk giving more power to large fleet owners like Addison Lee, while putting vulnerable workers in an even more precarious position.

    “Just days after the New York City Council took concrete action to guarantee the minimum wage, this report falls short of what’s needed to tackle the ongoing abuses of companies operating in the so-called ‘gig economy’.”

    We’ve reached out to Uber for comment on the report.

    Field added that he would be pushing for additional debate in parliament on the issues raised and to “encourage the government to safeguard drivers’ living standards by putting this report into action”.

    “In the meantime, individual licensing authorities have an important part to play by following New York’s lead in using their licensing policies to guarantee living wage rates for drivers,” he also said.

    London’s transport regulator, TfL, has been lobbying for several years for licensing authorities to be given the power to cap the number of private hire vehicles in circulation, as the popularity of ride-hailing has led to a spike in for-hire car numbers on London’s streets, making it more difficult for TfL to manage knock-on issues such as congestion and air quality (which are policy priorities for London’s current mayor).

    And while TfL can’t itself (yet) impose an overall cap on PHV numbers, it has proposed and enacted a number of regulatory tweaks, such as English language proficiency tests for drivers — changes that companies such as Uber have typically sought to push back against.

    Earlier this year TfL also published a policy statement, setting out a safety-first approach to regulating ride-sharing. And, most famously, it withdrew Uber’s licence to operate in 2017.

    The company has since successfully appealed, though, after making a number of changes to how it operates in the UK, gaining a provisional 15-month license to operate in London this summer. But clearly any return to Uber’s ‘bad old days’ would be given very short shrift.

    In the UK, primary legislation would be required to enable local licensing authorities to cap PHV licenses themselves. But the government is now being urged to do so by the DfT’s own working group, ramping up the pressure for it to act — though with the caveat that any such local caps should be subject to “a public interest test” to prove need.

    “This can help authorities to solve challenges around congestion, air quality and parking and ensure appropriate provision of taxi and private hire services for passengers, while maintaining drivers’ working conditions,” the report suggests.

    Elsewhere, the report recommends additional changes to rules to improve access to wheelchair accessible vehicles; beef up enforcement against those that flout rules; as well as to support disability awareness training for drivers.

    The report also calls on the government to urgently review the evidence and case for restricting the number of hours that taxi and PHV drivers can drive on the same safety grounds that restrict hours for bus and lorry drivers.

    It also suggests that a mandatory national database of all licensed taxi and PHV drivers, vehicles, and operators be established — to support stronger enforcement generally across all its recommended rule tweaks.

    It’s not yet clear how the government will respond to the report, nor whether it will end up taking forward all or only some of the recommendations.

    It is under increased pressure to act, though, to update regulations in this area, with the working group critically flagging ministers’ failure to act following a Law Commission review the government commissioned back in 2011, writing: “It is deeply regrettable that the Government has not yet responded to the report and draft bill which the Commission subsequently published in 2014. Had the government acted sooner the concerns that led to the formation of this Group may have been avoided.”

    4 hidden cloud computing costs that will get you fired

    The content below is taken from the original ( 4 hidden cloud computing costs that will get you fired), to continue reading please visit the site. Remember to respect the Author & Copyright.

    John just finished the first wave of cloud workload migrations for his company. With a solid 500 applications and related data sets migrated to a public cloud, he now has a good understanding of what the costs are after these applications have moved into production.

    However, where John had budgeted $1 million a month for ops costs, all in, the company is now getting dinged for $1.25 million. Where does that extra $250,000 go each month? And more concerning, how were those costs missed in the original cost estimates? Most important, where will John work once the CFO and CEO get wind of the overruns?

    In my consulting work, I’m seeing the same four missed costs over and over.


    Realizing the Internet of Value Through RippleNet

    The content below is taken from the original ( Realizing the Internet of Value Through RippleNet), to continue reading please visit the site. Remember to respect the Author & Copyright.

    People and businesses increasingly expect everything in their lives to move at the speed of the web. Conditioned by smartphones and apps, where nearly anything is attainable at the press of a button, these customers are often left wanting when it comes to their experiences with money and financial service providers.

    This is because an aging payments infrastructure designed more than four decades ago leads to expensive transactions that can take days to settle with little visibility or certainty as to their ultimate success. This experience runs contrary to customer expectations for an Internet of Value, where money moves as quickly and seamlessly as information.

    Delays, costs and opacity are especially true for international transactions. Not only are payments limited by inherent infrastructure challenges, but they must navigate a patchwork quilt of individual country and provider networks stitched together for cross-border transactions. This network of networks effectively stops and starts a transaction every time it encounters a new country, currency or provider network – adding even more costs and delays.

    As a global payments network, RippleNet creates a modern payments experience operating on standardized rules and processes for real-time settlement, more affordable costs, and end-to-end transaction visibility. It allows banks to better compete with FinTechs that are siphoning off customers disappointed by traditional transaction banking services.

    RippleNet brings together a robust ecosystem of players for the purposes of powering the Internet of Value. This network is generally made up of banks and payment providers that source liquidity and process payments, as well as corporates and FinTechs that use RippleNet to send payments.

    For network members, RippleNet offers:

    Access: Today, banks and providers overcome a fragmented global payments system by building multiple, custom transaction relationships with individual networks. By joining RippleNet’s single worldwide network of institutions, organizations gain a single point of access to a standardized, decentralized infrastructure for consistency across all global connections.

    Certainty: Legacy international payments cannot provide clarity around transaction timing or costs, and many transactions ultimately end in failure. RippleNet’s atomic pass-fail processing ensures greater certainty in delivery, and its bi-directional messaging capability provides unprecedented end-to-end transaction visibility for fees, delivery time and status.

    Speed: Disparate networks and rules create friction and bottlenecks that slow down a transaction. RippleNet’s pathfinding capabilities cut through the clutter by identifying optimal routes for transactions that then settle instantly. With RippleNet, banks and providers can reduce transaction times from days to mere seconds.

    Savings: Existing payment networks have high processing and liquidity provisioning costs that result in fees as high as $25 or $35 per transaction. RippleNet’s standardized rules and network-wide connectivity significantly lower processing costs. RippleNet also lowers liquidity provisioning costs, or can eliminate the need for expensive nostro accounts altogether through the use of its digital asset XRP for on-demand liquidity. The end result is a dramatically lower cost of transactions for providers and their customers.

    RippleNet solves for the inefficiencies of the world’s payment systems through a single, global network. With RippleNet, banks and payment providers can realize the promise of the Internet of Value, meeting customer expectations for a modern, seamless global payments experience while lowering costs and opening new lines of revenue.

    For more information on the technology behind RippleNet, or to learn how to join, contact us.

    The post Realizing the Internet of Value Through RippleNet appeared first on Ripple.