IDG Contributor Network: MPLS or IPsec VPN: which is the best?

The content below is taken from the original (IDG Contributor Network: MPLS or IPsec VPN: which is the best?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Scouring the online IT forums, it’s hard not to get sucked into all the talk about how MPLS is too expensive and can easily be replaced with high-bandwidth fiber Internet circuits and an IPsec VPN. If you currently have an MPLS network, it almost makes you want to throw a blanket over it and hope nobody notices your “antiquated” Wide Area Network. [blushing]

The final straw was when you read how username Pauly-Packet-Loss just saved thousands by scrapping his company’s MPLS and it works great. [single tear rolls down your cheek]


You desperately want to ask someone what to do but you wouldn’t dare post anything on an online networking forum. Any post starting with “My MPLS network…” is certain to get snarky responses like, “You still have an MPLS network?” or “Having an MPLS network is your first problem.”

Before you assume your MPLS network is soon-to-be scrap-piled, let me assure you MPLS is still the best option for certain applications. I know, that’s “crazy talk” these days but yes, there are times when MPLS is still the way to go.

But when?

When MPLS is the best

I first started hearing the rumblings of “MPLS is dead” a few years ago when everyone started noticing how cheap bandwidth was getting. Around 2010, high-speed fiber Internet (for business) started dropping in price at a rate of 30 percent per year, as several business ISPs (like AT&T and Verizon) built out fiber into every major metropolitan area, replacing their corroding copper infrastructure.

It wasn’t uncommon to hear an IT professional say something similar to “we got rid of our 20M MPLS network and replaced it with 100M fiber Internet connections at each site… and cut our bills in half!”

What a testimony! This got my attention. It got yours too, right?

And I agree, this is a great idea… with one exception: if your company is running critical, real-time applications across the network (such as voice, video or remote desktop), moving off MPLS and onto the public Internet may not be a good idea.

Sure, adding more bandwidth is never a bad thing but keep in mind, the most common culprits of bad quality (for real-time apps) are:

  • Latency
  • Packet loss
  • Jitter

Real-time applications require much lower levels of these three network boogers than your other applications. Real-time apps are delicate creatures, and the slightest delay or mix-up in their packets creates total chaos in the user’s experience.

And no matter how large your Internet connection is, there is zero guarantee of your levels of latency, packet loss and jitter over the public Internet. The IPsec VPN will keep your WAN traffic private but it doesn’t provide QoS for your sensitive little real-time packets as they make their journey across the big, scary public Internet.

The only way to guarantee your real-time traffic maintains low levels of latency, packet loss, and jitter is to keep those applications running on a private network, where you have total control over the entire route the packets traverse. MPLS is one such WAN.
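Before deciding either way, it helps to measure what the public Internet path actually gives you. Below is a rough, hedged sketch in Windows PowerShell (not from the original article) that pings a far-end host and estimates average latency, jitter and packet loss; the target address is a placeholder you would replace with your own remote site.

# Hedged sketch: estimate latency, jitter and packet loss to a far-end host.
# 203.0.113.10 is a placeholder documentation address - use your own target.
$target  = "203.0.113.10"
$count   = 50
$replies = Test-Connection -ComputerName $target -Count $count -ErrorAction SilentlyContinue

$latencies = @($replies | ForEach-Object { $_.ResponseTime })
$loss      = 100 * ($count - $latencies.Count) / $count
$avg       = ($latencies | Measure-Object -Average).Average

# Jitter here is the mean absolute difference between consecutive round-trip times.
$jitter = 0
for ($i = 1; $i -lt $latencies.Count; $i++) {
    $jitter += [math]::Abs($latencies[$i] - $latencies[$i - 1])
}
if ($latencies.Count -gt 1) { $jitter = $jitter / ($latencies.Count - 1) }

"Avg latency: {0:N1} ms  Jitter: {1:N1} ms  Packet loss: {2:N1}%" -f $avg, $jitter, $loss

Run it during business hours for a realistic picture; a single quiet-hour sample tells you very little about worst-case jitter.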

What about SD-WAN, you say? Good idea but that’s a topic for another article. The quick answer is technically, no. Not even SD-WAN can guarantee low levels of packet loss, latency, or jitter. Especially if your real-time apps are running on an on-prem server (as opposed to a cloud service). Stay tuned to my blog for more on MPLS vs. SD-WAN and other WAN technologies. [smirk with one eyebrow raised]

Can you throw caution to the wind and try sending real-time applications over an IPsec VPN? Sure. But put some thought into whether your salespeople will freak when an important call drops… or if users will continually hound your IT department with tickets because their remote desktop screens keep blacking out… or if the execs will go psycho if the video bridge glitches during a board meeting. And no, bumping up to 10G dedicated fiber Internet connections may not fix it.

If your real-time apps are a big part of everyday life for users in your company, don’t believe the hype [as Flava Flav yells “yeaaaah boy!”] and don’t dump your MPLS network without thorough testing. Ask yourself questions like “Will unreliable call quality hurt our customers’ and prospective customers’ impression when they call our company while reviewing a bid from our competitor?” or “Will it slow our employees down if their app is unreliable or slow?”

Those little things make for big losses. Put it this way… if your company has sales of only $25 million/year, a mere 1 percent loss in sales (due to lost customers, etc.) equates to a $250,000 loss. Add to this the money lost through reduced employee efficiency and you can see why the execs will not be happy with dropped calls, glitchy apps, and the like. And a $25 million company doesn’t have a big enough WAN to save $250,000+ by ditching its MPLS.

I know I’ve just opened myself up to those same critics who will laugh at me, pointing, as I stand next to you and your MPLS network… but I don’t care. They probably aren’t in any of these scenarios and shouldn’t be so quick to assume all WANs can be treated the same. In my opinion, your MPLS network is more necessary than they think.


UK Data Protection Bill lands: Oh dear, security researchers – where’s your exemption?

The content below is taken from the original (UK Data Protection Bill lands: Oh dear, security researchers – where’s your exemption?), to continue reading please visit the site. Remember to respect the Author & Copyright.

The UK’s Data Protection bill has landed with a hefty thud, offering up 200-plus pages of legislation for the geeks and wonks to sink their teeth into.

The bill, launched into the House of Lords yesterday and published in full today (PDF), aims to overhaul the UK’s data protection laws and update them for the digital age.

Much of the text aims to implement the European Union’s General Data Protection Regulation, which comes into force in May 2018, confirming for the Nth time that businesses can’t rely on the idea that Brexit will get them out of complying.

As Neil Brown, tech lawyer at decoded:Legal, put it: “The message seems clear: irrespective of Brexit, the GDPR is here to stay, so you may as well get on and implement it, and do it well.”

On top of this, there are some added extras in the UK’s bill – such as new criminal offences related to dodgy data dealings – as well as some exemptions and derogations, which are to be expected when a member state implements an EU regulation.

However, any hopes that the UK’s legislation would ease the confusion – or perhaps high drama – around the GDPR have been dashed.

The document runs to 218 pages, with 194 clauses, 18 schedules and 112 pages of explanatory notes, and – as many observers have pointed out – parts of the text, like this eye-crossing sentence: “Terms used in Chapter 2 and in the GDPR have the same meaning in Chapter 2 as they have in the GDPR”, are fairly Kafka-esque.

Certainly, the complexity of the document – which is part and parcel of a bill that seeks to implement EU law and replace existing UK laws on data processing by both corporate and law enforcement bodies – will keep the lawyers in business for the foreseeable.


Describing the bill as “a bit of a mess”, Jon Baines, chairman of the National Association of Data Protection and Freedom of Information Officers, said it was “indicative of how difficult it is, and will be, for the UK to make legislation which enables us to trade and cooperate with the EU when we leave it”.

He added that there was a “real risk” of confusion, in part because of the escalating hype around GDPR.

“Already we have organisations utterly confused about their obligations, and any number of ill- or under-informed advisers and consultants muddying the waters. This is only going to get worse, I fear,” said Baines.

A glimmer of hope for those frustrated by the prevalence of GDPR snake-oil salesmen comes in the section of the bill that will make accreditation of certification providers valid only if they are carried out by the information commissioner or the national accreditation body.

But Baines noted that the ICO has been working on something similar for years, and added: “I really hope the accreditation and certification provisions ultimately lead to a raising of standards but I’m not optimistic for the near and mid-term future.”

What’s in the bill?

The purported aim of the new legislation is to offer people more control over their data and how it is used.

It tightens up rules on consent – for instance, the much-trailed end of the dreaded pre-ticked box – allows people to withdraw consent, and gives them the right to access information on how organisations use their data, as well as to request that posts or photos about them are deleted.

Groups that are given exemptions from some of the data processing rules in the Data Protection Bill include journalists – who are allowed to process data on people if it will “expose wrongdoing” – and bodies investigating financial fraud and doping in sport.

Fines for organisations in breach of the rules are to be paid in Sterling, and have been set at a maximum of £17m or 4 per cent of global turnover. This is (at the moment) a straight conversion of the GDPR’s max fine of €20m – if Brexit does much more damage to the exchange rate, UK firms might have something to be thankful for.

Elsewhere, the UK government sets out new recordable offences – meaning the police will record them in the national police computer – including unlawfully obtaining personal data and altering personal data in a way to prevent it being disclosed.

Re-identification of de-identified personal data will also be an offence, which comes with an unlimited fine.

However, Brown noted that the legislation does not make a specific reference to exemptions for security researchers – so they will have “to take care to ensure what they do is ‘justified in the public interest’.”

Alan Woodward, a security researcher at the University of Surrey, said that there was a real chance researchers could be caught out by this, adding that it was reminiscent of laws that make reverse-engineering of software products illegal.

“At the moment I think researchers are ‘assuming’ that if they prove that an anonymised data set can be subject to re-identification, then it would be in the public interest for that fact to be known,” he said. “Personally I would see it as equivalent to responsible disclosure of security vulnerabilities.”

But Baines argued that, although it might be preferable to have something more specific, “in practical terms, the [defences set out in the legislation] should prevent anyone being unfairly prosecuted for public interest security research”.

Observers told The Reg that they had spotted few other controversies or surprises in the document, but stressed that it was still early days, especially when the bill has yet to be debated in Parliament.

The legislation is due for its second reading in the House of Lords – the first chance for peers to discuss the legislation – on 10 October. ®


Linux-ready 3.5-inch SBC drives compact, fanless box-PC

The content below is taken from the original (Linux-ready 3.5-inch SBC drives compact, fanless box-PC), to continue reading please visit the site. Remember to respect the Author & Copyright.

Advantech announced an Intel Braswell based “PCM-9310” 3.5-inch SBC with triple display support, plus a 1U height “EPC-S101” embedded PC that runs on it. Advantech announced its 3.5-inch PCM-9310 SBC in passing as the foundation for its EPC-S101 “bare-bone chassis” industrial computer, which it touts for its slim, 1U, 39mm height. We’ll focus primarily here […]

Introducing Azure confidential computing

The content below is taken from the original (Introducing Azure confidential computing), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft spends one billion dollars per year on cybersecurity and much of that goes to making Microsoft Azure the most trusted cloud platform. From strict physical datacenter security, data privacy and encryption of data at rest and in transit, to novel uses of machine learning for threat detection and stringent operational software development lifecycle controls, Azure represents the cutting edge of cloud security and privacy.

Today, I’m excited to announce that Microsoft Azure is the first cloud to offer new data security capabilities with a collection of features and services called Azure confidential computing. Put simply, confidential computing offers a protection that to date has been missing from public clouds: encryption of data while in use. This means that data can be processed in the cloud with the assurance that it is always under customer control. The Azure team, along with Microsoft Research, Intel, Windows, and our Developer Tools group, has been working on confidential computing software and hardware technologies for over four years. The bottom of this post includes a list of Microsoft Research papers related to confidential computing. Today we take that cutting edge one step further by making it available to customers via an Early Access program.

Data breaches are virtually daily news events, with attackers gaining access to personally identifiable information (PII), financial data, and corporate intellectual property. While many breaches are the result of poorly configured access control, most can be traced to data that is accessed while in use, either through administrative accounts, or by leveraging compromised keys to access encrypted data. Despite advanced cybersecurity controls and mitigations, some customers are reluctant to move their most sensitive data to the cloud for fear of attacks against their data when it is in-use. With confidential computing, they can move the data to Azure knowing that it is safe not only at rest, but also in use from the following threats:

  • Malicious insiders with administrative privilege or direct access to hardware on which it is being processed
  • Hackers and malware that exploit bugs in the operating system, application, or hypervisor
  • Third parties accessing it without their consent

Confidential computing ensures that when data is “in the clear,” which is required for efficient processing, the data is protected inside a Trusted Execution Environment (TEE – also known as an enclave), an example of which is shown in the figure below. TEEs ensure there is no way to view data or the operations inside from the outside, even with a debugger. They even ensure that only authorized code is permitted to access data. If the code is altered or tampered with, the operations are denied and the environment is disabled. The TEE enforces these protections throughout the execution of code within it.

Azure confidential computing

With Azure confidential computing, we’re developing a platform that enables developers to take advantage of different TEEs without having to change their code. Initially we support two TEEs, Virtual Secure Mode and Intel SGX. Virtual Secure Mode (VSM) is a software-based TEE implemented by Hyper-V in Windows 10 and Windows Server 2016. Hyper-V prevents administrator code running on the computer or server, as well as local administrators and cloud service administrators, from viewing the contents of the VSM enclave or modifying its execution. We’re also offering a hardware-based Intel SGX TEE with the first SGX-capable servers in the public cloud. Customers who do not want their trust model to include Azure or Microsoft at all can leverage SGX TEEs. We’re working with Intel and other hardware and software partners to develop additional TEEs and will support them as they become available.

Microsoft already uses enclaves to protect everything from blockchain financial operations to data stored in SQL Server to our own infrastructure within Azure. While we’ve previously spoken about our confidential computing blockchain efforts, known as the Coco Framework, today we are announcing the use of the same technology to implement encryption-in-use for Azure SQL Database and SQL Server. This is an enhancement of our Always Encrypted capability, which ensures that sensitive data within a SQL database can be encrypted at all times without compromising the functionality of SQL queries. Always Encrypted achieves this by delegating computations on sensitive data to an enclave, where the data is safely decrypted and processed. We continue to use enclaves inside Microsoft products and services to ensure that wherever sensitive information needs to be processed, it can be secured while in use.
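From the client application’s point of view, Always Encrypted is largely driven by a single connection string keyword. The following is a minimal, hedged PowerShell sketch (not from the announcement; the server, database, table and credential values are placeholders) showing a connection with Column Encryption Setting enabled, which tells the .NET SQL client driver to encrypt parameters and decrypt result columns transparently.

# Hedged sketch: open a SQL connection with Always Encrypted enabled.
# All names and credentials below are placeholders.
Add-Type -AssemblyName "System.Data"

$connStr = "Server=tcp:myserver.database.windows.net;Database=SalesDb;" +
           "User ID=appuser;Password=<placeholder>;" +
           "Column Encryption Setting=Enabled"   # enables Always Encrypted in the client driver

$conn = New-Object System.Data.SqlClient.SqlConnection $connStr
$conn.Open()

# Queries against encrypted columns look like ordinary queries to the application.
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT TOP 10 CustomerName FROM dbo.Customers"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { $reader["CustomerName"] }

$conn.Close()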

In addition to SQL Server, we see broad application of Azure confidential computing across many industries including finance, healthcare, AI, and beyond. In finance, for example, personal portfolio data and wealth management strategies would no longer be visible outside of a TEE. Healthcare organizations can collaborate by sharing their private patient data, like genomic sequences, to gain deeper insights from machine learning across multiple data sets without risk of data being leaked to other organizations. In oil and gas, and IoT scenarios, sensitive seismic data that represents the core intellectual property of a corporation can be moved to the cloud for processing, but with the protections of encrypted-in-use technology. 

Customers can try out Azure confidential computing through our Early Access program, which includes access to Azure VSM and SGX-enabled virtual machines, as well as tools, SDKs, and Windows and Linux support to enable any application in the cloud to protect its data while in use.

Sign up for the Azure confidential computing Early Access program.

I look forward to seeing you at Ignite, where I’ll demonstrate enclaves in Azure. There are so many opportunities and use cases we can secure together using the Azure cloud and Intel hardware, along with Microsoft technologies, services, and products.

Today is the exciting beginning of a new era of secure computing. Join us in Azure as we create this future.

– Mark

 

Microsoft Research papers related to confidential computing:

See how confidential computing fits within Microsoft’s broader cloud security strategy in the Microsoft Story Labs feature: Securing the Cloud.

‘Don’t Google Google, Googling Google is wrong’, says Google

The content below is taken from the original (‘Don’t Google Google, Googling Google is wrong’, says Google), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you want to write developer documentation like a Google hotshot, you’d better kill “kill”, junk “jank” and unlearn “learnings”.

Those are just a few rules from the company’s newly open-sourced (oops, two sins there, verbing and hyphenation) developer documentation guide.

Even though any Linux user knows “kill” is a command, Google would rather you not use it as a verb – “stop,” “exit,” “cancel,” or “end” are preferred. “Jank” (Wiktionary says “blocking of a software application’s user interface due to slow operations or poor interface design”) should be used with care; and thankfully, “learnings” get a simple “don’t use”.

And yes, what we said in the headline is correct. Google’s style gurus have ridden out in a crusade against verbing the corporate noun: “Don’t use as a verb or gerund. Instead, use ‘search with Google’” (we consider it admirable that Mountain View expects “gerund” to get by without a developer Googling it – sorry, “searching it with Google”).

The guide carries plenty of evidence of a long debate about what words can be both noun and verb: “login” is a noun, with “sign in” given as the preferred verb; “backoff (noun), back off (verb), back-off (adjective)” are noted; “clickthrough/click through” are similarly stipulated; and “display” is troublesome because it’s intransitive (Google it. Damn, we broke the rule again, we meant “search ‘intransitive’ with Google”).

“Interface” gets similar treatment: if you’re reaching for a verb, the preferred list is: interact, talk, speak, or communicate.

As anyone who’s tried to draft a style guide will tell you, contradictions are inevitable. So it is that “the Internet” has turned into “the internet” because other style guides (thank you very much, The Guardian) do so. However, just a couple of lines down, we find that the “internet of things” is rendered “Internet of Things” to explain why IoT is an acceptable abbreviation.

And that’s just a sampling of the word list. Punctuation, grammar and syntax would make pedants everywhere proud, we guess.

The Oxford Comma, controversial almost anywhere, is given Google’s blessing (although not named):

Not recommended: I dedicate this book to my parents, Ayn Rand and God.

Recommended: I dedicate this book to my parents, Ayn Rand, and God.

… and we learn that Google follows the usage of all civilised persons: it instructs devs not to capitalise the first word after the colon.

As an American company, Google can’t risk a howl of outrage by enforcing ISO date formats over the locally-preferred Month:Day:Year. Instead, it asks me to state that this article was written on September 13, 2017.

We’re sure that our readers will find many more amusements, outrages, and debates in the guide. Let us know what you find in the comments. ®


Introducing B-Series, our new burstable VM size

The content below is taken from the original (Introducing B-Series, our new burstable VM size), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today I am excited to announce the preview of the B-Series, a new Azure VM family that provides the lowest cost of any existing size with flexible CPU usage. For many workloads that run in Azure, like web servers, small databases, and development and test environments, the CPU performance is very bursty. These workloads will run for a long time using a small fraction of the CPU performance possible and then spike to needing the full power of the CPU due to incoming traffic or required work. With our current sizes, while running in these low points, you are still paying for the full CPU, so that you can handle the high and bursty points.

The B-Series offers a cost-effective way to deploy workloads that do not need the full performance of the CPU continuously but burst occasionally. While B-Series VMs are running at these low points and not fully utilizing the baseline performance of the CPU, your VM instance builds up credits. When the VM has accumulated enough credit, you can burst your usage, up to 100% of the vCPU, for the period of time when your application requires the higher CPU performance.

These VM sizes allow you to pay and burst as needed, using only a fraction of the CPU when you don’t need it and bursting up to 100% of the CPU when you do (using Intel® Haswell 2.4 GHz E5-2673 v3 processors or better). This level of control gives you significant cost flexibility.

The B-Series comes in the following 6 VM sizes during preview (preview prices in parentheses):

Size            vCPUs   Memory (GiB)   Local SSD (GiB)   Baseline CPU perf.   Max CPU perf.   US East Linux price/hour   US East Windows price/hour
Standard_B1s    1       1              4                 10%                  100%            $0.012 ($0.006)            $0.017 ($0.009)
Standard_B1ms   1       2              4                 20%                  100%            $0.023 ($0.012)            $0.032 ($0.016)
Standard_B2s    2       4              8                 40%                  200%            $0.047 ($0.024)            $0.065 ($0.033)
Standard_B2ms   2       8              16                60%                  200%            $0.094 ($0.047)            $0.122 ($0.061)
Standard_B4ms   4       16             32                90%                  400%            $0.188 ($0.094)            $0.229 ($0.115)
Standard_B8ms   8       32             64                135%                 800%            $0.376 ($0.188)            $0.439 ($0.219)

Get more information on the Burstable VM sizes. To participate in this preview, request quota in the supported region that you would like. After your quota has been approved, you can use the portal or the APIs to do your deployment as you normally would.
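As a hedged illustration (not part of the announcement; the resource group and VM names are placeholders), resizing an existing VM to a B-Series size with the AzureRM PowerShell module looks roughly like this, once your preview quota is approved in that region:

# Hedged sketch: move an existing VM to the burstable Standard_B1s size.
# Resource group and VM names are placeholders.
$rg     = "rg-demo"
$vmName = "web01"

$vm = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName
$vm.HardwareProfile.VmSize = "Standard_B1s"        # any B-Series size from the table above
Update-AzureRmVM -ResourceGroupName $rg -VM $vm    # the VM restarts as part of the resize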

We are launching the preview with the following regions, but expect more later this year:

  • US – West 2
  • US – East
  • Europe – West
  • Asia Pacific – Southeast

See ya around, 

Corey

Antenna Basics by Whiteboard

The content below is taken from the original (Antenna Basics by Whiteboard), to continue reading please visit the site. Remember to respect the Author & Copyright.


Like a lot of people, [Bruce] likes radio controlled (RC) vehicles. In fact, many people get started in electronics motivated by their interest in RC. Maybe that’s why [Bruce] did a video about antenna basics where he spends a little more than a half hour discussing antennas. You can see the video below.

[Bruce] avoids any complex math and focuses more on intuition about antennas, which we like. Why does it matter that antennas are cut to a certain length? [Bruce] explains it using a swing and a grandfather clock as an analogy. Why do some antennas have gain? Why is polarization important? [Bruce] covers all of this and more. There’s even a simple experiment you can do with a meter and a magnet that he demonstrates.

If you know what a Smith chart is, this probably isn’t the video for you. If you don’t, this video isn’t going to cover anything like that. But if you want a better foundation about what antennas do and why they work, this is a good spend of 30 minutes.

If you want more info on the Yagi that [Bruce] mentions, we’ve talked about them before. We’ve also covered a similar intuitive antenna tutorial that uses a lot of interesting animations.

VIDEO


This tiny sensor could sleep for years between detection events

The content below is taken from the original (This tiny sensor could sleep for years between detection events), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s easy enough to put an always-on camera somewhere it can live off solar power or the grid, but deep in nature, underground, or in other unusual circumstances every drop of power is precious. Luckily, a new type of sensor developed for DARPA uses none at all until the thing it’s built to detect happens to show up. That means it can sit for years without so much as a battery top-up.

The idea is that you could put a few of these things in, say, the miles of tunnels underneath a decommissioned nuclear power plant or a mining complex, but not have to wire them all for electricity. But as soon as something appears, it’s seen and transmitted immediately. The power requirements would have to be almost nil, of course, which is why DARPA called the program Near Zero Power RF and Sensor Operation.

A difficult proposition, but engineers at Northeastern University were up to the task. They call their work a “plasmonically-enhanced micromechanical photoswitch,” which pretty much sums it up. I could end the article right here. But for those of you who slept in class the day we covered that topic, I guess I can explain.

The sensor is built to detect infrared light waves, invisible to our eyes but still abundant from heat sources like people, cars, fires, and so on. But as long as none are present, it is completely powered off.

But when a ray does appear, it strikes a surface covered in tiny patches that magnify its effect. Plasmons are a sort of special behavior of conducting material, which in this case respond to the IR waves by heating up.

Here you can see the actual gap that gets closed by the heating of the element (lower left).

“The energy from the IR source heats the sensing elements which, in turn, causes physical movement of key sensor components,” wrote DARPA’s program manager, Troy Olsson, in a blog post. “These motions result in the mechanical closing of otherwise open circuit elements, thereby leading to signals that the target IR signature has been detected.”

Think of it like a paddle in a well. It can sit there for years without doing a thing, but as soon as someone drops a pebble into the well, it hits the paddle, which spins and turns a crank, which pulls a string, which raises a flag at the well-owner’s house. Except, as Olsson further explains, it’s a little more sophisticated.

“The technology features multiple sensing elements—each tuned to absorb a specific IR wavelength,” he wrote. “Together, these combine into complex logic circuits capable of analyzing IR spectrums, which opens the way for these sensors to not only detect IR energy in the environment but to specify if that energy derives from a fire, vehicle, person or some other IR source.”

The “unlimited duration of operation for unattended sensors deployed to detect infrequent but time-critical events,” as the researchers describe it, could have plenty of applications beyond security, of course: imagine popping a few of these all over the forests to monitor the movements of herds, or in space to catch rare cosmic events.

The tech is described in a paper published today in Nature Nanotechnology.

Featured Image: DARPA / Northeastern University

Starling Bank launches Marketplace, integrates with itemised receipt and rewards startup Flux

The content below is taken from the original (Starling Bank launches Marketplace, integrates with itemised receipt and rewards startup Flux), to continue reading please visit the site. Remember to respect the Author & Copyright.

Marketplace banking — the idea that your bank will provide you with access to various third-party money-related apps and services within its own app — has long been championed by fintech startups, whilst upcoming Open Banking/PSD2 legislation in the U.K. and EU, respectively, will make third-party app integration an inevitable reality. That’s seeing a number of challenger banks skating to where the puck is going, including London-based Starling Bank, which today is launching the Starling Marketplace.

Billed by the digital-only challenger bank as a new concept in banking, the Starling Marketplace puts products from other fintech providers (and in the future “lifestyle products,” because nearly every business is going to be tempted to jump onboard the Open Banking train) within “an easily browsed ecosystem” accessible within the Starling app.

Partnering companies integrate with Starling Bank via the challenger bank’s Open Banking-compliant APIs although, as I’ll explain below, some are integrating more deeply than simply making use of those APIs, which any third-party developer, once vetted, can potentially do.

The first fintech company to be added to the Starling Marketplace, and an example of the deepest kind of integration we are likely to see, is Flux, the itemised receipt and rewards startup based in London. Founded by Tom Reay, Matty Cusden-Ross, and Veronique Barbosa, all former early employees at Revolut, the company has built a software platform that bridges the gap between the itemised receipt data captured by a merchant’s point-of-sale (POS) system and what little information typically shows up on your bank statement or mobile banking app.

The Starling integration sees Flux send real-time itemised receipts to the Starling app when a customer pays with their Starling card at any of Flux’s retail partners, which so far includes all 111 EAT stores in the U.K. and Bel-Air. As early as next week, Flux will also enable Starling users to get automated loyalty points with cash-back for Flux-supported purchases, without the need for paper coupons. Starling Bank users will need to activate Flux from the Marketplace section of the Starling app to instantly link their card.

“This is our first native integration, where you can switch on Flux from in app, and our first full bank partnership,” Flux’s Cusden-Ross tells TechCrunch. “Starling customers should switch on Flux because we’ll help them finally rid their wallets of paper receipts and paper loyalty cards without asking them to download or setup anything extra. We also think Starling customers are particularly interested in tracking their financial life in real-time and would agree it’s insane that today the only way to keep track of exactly what we buy is via little bits of paper”.

The framing of Starling’s Flux integration as a “first full bank partnership” refers to the fact that Flux originally launched with rival challenger bank Monzo, but only in a very limited pilot that restricts the number of users that can be activated as Monzo readies its own API and marketplace banking offer.

“The users we do have have been really positive on the experience and have been driving our word of mouth sign-ups to sky-rocket, they’re all patiently on a waitlist and we’re working with Monzo to understand timelines for next steps. The on-boarding experience for Monzo is also a pilot version where the user is currently on-boarding through our website and not natively in Monzo,” explains Cusden-Ross.

In a call with Megan Caywood, Chief Platform Officer at Starling Bank, she explained that the Flux partnership, when/if Starling users choose to activate the functionality, enables a greater detail of spending data to be displayed within the Starling app for purchases made at merchants that support Flux. By default, Starling users can see the retailer’s name, amount spent, date and location (complete with a map), but Flux goes further by displaying the item bought, VAT and any available loyalty stamps.

Zooming out a bit further, Caywood says that Starling third-party integrations fall into three categories. Flux and the bank’s upcoming partnership with TransferWise are examples of the deepest type of Starling app integration, but that level of integration isn’t typical.

Instead, whilst they’ll be discoverable and can be authorised via something akin to an app store within the Starling app, most Marketplace apps will only send back and display a limited amount of data in the form of a dashboard or ‘widget’ (my words, not Caywood’s). If I understand correctly, in certain instances this trade-off reduces the regulatory burden on Starling and, crucially, is a good compromise to stop the Starling app from becoming bloated.

A third category of integration is apps that simply make use of Starling’s API. Right now, these include Tail, the local offers startup I recently covered, and roundups app Moneybox, which use the API to authenticate, access and build on top of various levels of your Starling bank account data and functionality, with your permission, of course.

“TransferWise will be one end of the spectrum, being deeply integrated, whereas Moneybox is at the other end of the spectrum where they just integrate our API but don’t appear in Starling,” Caywood tells me. “The Marketplace portion of the app is a hybrid – where an app has integrated our API and we have integrated theirs, so users can find them and connect to them via the Starling app, and have a dashboard in Starling that gives an overview”.

Meanwhile, I’m hearing that Flux has closed $1.5 million in seed funding led by PROfounders. I also understand that Anthemis, which has invested in other fintech companies such as Currency Cloud and Azimo, participated in the round.

How External Access for Microsoft Teams Works

The content below is taken from the original (How External Access for Microsoft Teams Works), to continue reading please visit the site. Remember to respect the Author & Copyright.


Teams Gets Guests

Long anticipated and much promised, Microsoft launched external access for Teams on September 11 (here’s a link to Brad’s news story). Well, they posted a blog describing the principles used for the design (motherhood and apple pie – teamwork, security and compliance, and manageability). Releasing code that worked and satisfied those principles took a little longer and tenants reported some frustration that Teams and web clients had not received the updated code. It takes time to distribute new bits across Office 365 and tenants gradually received updates with the necessary code (version 1.0.00.25151 of the desktop client works).

External access for Teams is now available to all Office 365 commercial and education customers. The new feature allows Teams to compete with the likes of Slack and Hipchat. Office 365 Groups has supported external access since August 2016. However, the groups developers chose to build their external access around the existing model used for SharePoint. On the one hand, this was a good decision because it delivered external access faster. On the downside, external members of groups can access the files in the SharePoint team sites belonging to groups, but they cannot browse conversations or the group calendar.

Given that Teams brings together resources from many Office 365 applications, Microsoft needed to make sure that their selected access mechanism delivered secure, robust, and reliable access to anything in a team. Doing so created some engineering difficulties that took more time to resolve than originally predicted.

But Only for Azure AD Accounts

For now, external access for Teams is only for users with Azure AD accounts. As Microsoft notes in their announcement:

“Later, we’ll add the ability for anyone with a Microsoft Account (MSA) to be added as a guest in Teams. If the guest doesn’t have an existing MSA, they will be directed to create a free account using their current corporate or consumer email address, such as Outlook.com or Gmail.com.”

Azure AD, Guests, and Invitations

External guests are represented by guest objects in Azure Active Directory and leverage the B2B collaboration infrastructure built for the directory. You can create guest user objects through the Azure portal with the Invite a Guest option (or with PowerShell). In either case, you enter the email address of the external user.

Azure AD supports the creation of guest accounts for email addresses from another Office 365 tenant, an on-premises organization, or a consumer email service like Outlook.com or Gmail. When created, you see and can manage guest users in the Office 365 Admin or the Azure portal. You know guest users in Azure AD because their UserType property is “Guest” and their UserPrincipalName is based on their email address in the form:

Tony.Redmond_OtherDomain.com#EXT#@MyDomain.onmicrosoft.com

To list all the guest user accounts for your domain, you can run the Get-AzureADUser cmdlet:

[PS] C:\> Get-AzureADUser -Filter "UserType eq 'Guest'" | Sort DisplayName | Format-Table DisplayName, UserPrincipalName

When Azure AD adds a new guest user object, it calls the Invitation API to create and send an invitation via email to the user’s address to tell them that they have been invited to start using an application in your domain. The Invitation API allows the application generating the invitation to include a link for the user to enter the application, like the access information for an Office 365 Group or Team.
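You can drive the same invitation flow directly from PowerShell with the AzureAD module. A hedged example (the email address, display name and redirect URL below are placeholders, not values from the article):

[PS] C:\> New-AzureADMSInvitation -InvitedUserEmailAddress "tony.redmond@otherdomain.com" -InvitedUserDisplayName "Tony Redmond (Guest)" -InviteRedirectUrl "https://teams.microsoft.com" -SendInvitationMessage $true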

If the guest already has a Microsoft account (belonging to another Office 365 tenant or an MSA account), they can go ahead and use that account. If not, Azure AD goes through a redemption process to make sure that the guest can sign in.

All Azure B2B applications work in the same manner (including Groups and Teams). The difference is the link contained in the invitation – it can point to an individual team, group, or other application and the application referenced must be ready to allow access to that guest user. Making sure that Teams is ready to handle guest user access is the core of the new feature.

Controlling Licenses

The Teams settings in the Office 365 Admin Center control whether external users have access to Teams in the tenant. Go to Settings by user/license type and select Guest in the drop-down. Then move the slider for Turn Microsoft Teams on or off for users of this type to “On” (Figure 1) and save the settings. External access is now available for all teams in the tenant.


Figure 1: Allowing external users to access Teams (image credit: Tony Redmond)

In addition, check the settings for Office 365 Groups in the Office 365 Admin Center to make sure that the slider to Let group members outside the organization access group content is On. This ensures that access to the SharePoint site owned by groups and used by Teams is available.

Adding External Members to Teams

To add a new external team member, a team owner or Office 365 Admin uses the web or desktop client to select the team and then Add members (right-click menu). Input the email address for the new member and click Add Member (Figure 2). You can input the email addresses of people who do not have Azure AD accounts connected to an Office 365 tenant (like Outlook.com) and Teams will add these as external members. However, until Microsoft supports non-Azure AD accounts, they will not be able to access Teams.


Figure 2: Adding a new guest to a team (image credit: Tony Redmond)

To make everyone in the team aware that someone external is joining, Teams posts a notification in the General channel to tell them that the owner has added a new guest.

An Invitation Arrives

Teams generates and sends the invitation, which holds all the information necessary for the recipient to know about the team they are joining, including a GUID to find the right Office 365 tenant and another GUID for the team. The recipient must redeem the invitation by clicking the Open Microsoft Teams link in the message (Figure 3).


Figure 3: An invitation to join a team (image credit: Tony Redmond)

Switching Between Tenants

If the Teams desktop app is available, Teams launches it to access the target team. This means that Teams needs to sign out of your home tenant and sign in to the target tenant (Figure 4). Switching is quick and painless. That is, if your client software is at the right version and the target tenant allows external access to Teams. If not, you might go into a whirl of unhappiness and never be able to connect to the remote tenant.


Figure 4: The choice to switch tenants (image credit: Tony Redmond)

The need to switch to a different context for the client within the target tenant is understandable and is like the approach used by Yammer when a user switches networks. Later, you can return to your home tenant by clicking on your photo in the lower left-hand corner and selecting your home tenant from the list of accounts (Figure 5).


Figure 5: How to switch tenants (image credit: Tony Redmond)

Microsoft has not yet updated the Teams mobile client but has said that messages posted by external members appear in the mobile clients and that an update is on the way to support switching between tenants.

Identifying Guests

Teams gives some visual hints to users that the discussions and other content shared in the team is available to external members. First, a banner with “This team has guests” appears under the team name in the client. Second, external members have a “(Guest)” suffix in member lists (Figure 6). The suffix also shows up when guest users contribute to chats within the team. You can also see the warning that “This team contains users from outside your organization,” just in case the guest suffix is overlooked.


Figure 6: Guest members listed for a team (image credit: Tony Redmond)

By default, Azure AD guest accounts do not include information like a properly-formatted display name or contact information. It is easy to add this information by editing the account using the Azure Portal or the Office 365 Admin Center. For instance, some tenants like to add the name of the company a guest belongs to in their display name.

You can remove external members from a team in the same way as you remove a tenant user. Select View team from the menu and then Remove for the user in the member list. You cannot remove a user who is a team owner until you demote them to member first. If you want to remove an external user from all teams in a tenant, you can remove their guest user object from Azure AD. Removing an external member from a team does not remove the Azure AD guest user account. If you want to remove a guest account, you can do this from the Azure portal, the Office 365 Admin Center, or by running the Remove-AzureADUser cmdlet, which then removes the guest from all teams and groups that it had joined.
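As a hedged example of that clean-up (the guest’s email address below is a placeholder), the guest object can be located and removed with the AzureAD module:

# Hedged sketch: find a guest account by its invited email address and remove it
# from the tenant, which also removes it from every team and group it had joined.
$guest = Get-AzureADUser -Filter "UserType eq 'Guest'" |
           Where-Object { $_.Mail -eq "tony.redmond@otherdomain.com" }
Remove-AzureADUser -ObjectId $guest.ObjectId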

What External Team Members Can Do

Table 1 lists the functionality available to external team members and compares it with the functions available to users belonging to the host tenant. All the functionality you expect is there, like being able to join in private and public chats using IM, video, and audio (and as Håvard Øverås pointed out to me, even chat with Skype for Business users). You can work with the shared OneNote notebook and the files in the SharePoint team site. However, when I tried to access Planner, the sign-in took me to my own tenant and the plans stored there. This is what I expected because Planner does not yet support external access, and it underlines the point that just because one app supports guest access, you cannot assume that everything tenant users can reach through the app enjoys the same access.

Teams feature Tenant users External Guests
Create a channel (team owners control this ability through team settings) Yes Yes
Join in private chats Yes Yes
Join in public (channel) conversations Yes Yes
Post, delete, and edit messages Yes Yes
Upload file to document library Yes Yes
Share a file (in SharePoint) Yes Yes
Share a file in a personal or public chat Yes Yes
Add apps (bots, tabs, or connectors) Yes No
Invite guest users outside the Office 365 tenant No Yes
Create a new team Yes No
Discover and join public teams Yes No
View organization chart Yes No
Schedule meetings Yes No

Table 1: Functionality available to tenant and guest users (source: originally Microsoft – with some updates)

Some control over the features available to guests is available to team owners. Right-click a team name and select View team from the menu. Go to the Settings tab and select Guest permissions. You can then select whether guests can create, update, or remove channels. You cannot upgrade a guest to become an owner of a team.

The standard Teams features that external users cannot access are logical and in line with how external access works for Office 365 Groups. No one wants external people to be able to browse your corporate directory, create their own team, or look for other teams to join.

Teams and Groups

Teams uses Office 365 Groups for its membership service and Groups also supports external members. It therefore follows that if you have a group with external members, those members should have access to the team belonging to the group.

What confused me is that if you open the General channel for a team belonging to a group that has external members, you see a series of notifications saying that the team owner removed people from the team (Figure 7). These are the external members in the group and removing them from the team seemed to show that Teams and Groups keep separate membership lists.


Figure 7: Guests belonging to the group leave the team (image credit: Tony Redmond)

Teams and Groups share common membership and what happened here is that when Microsoft first enabled external access a process ran to remove external users from Teams. Once the tenant enabled external access for Teams, you can add new external members. However, you must add the members from the group to the team to force Teams to send invitations so that the guests go through the redemption process to gain access to Teams. If you add a new external member to the team who was never a member of the group, they show up in the group membership.

A Domain Blocking Policy

In August 2017, Microsoft released an external block policy for Groups, Teams, and Planner. The policy allows Office 365 tenants to define a block or an allow list for domains for external access (but not both). For instance, you can define a block list to stop team owners adding people from competitor domains as external members of teams.


Moving Forward

Adding the ability to add people with Azure AD accounts as external members of Teams is a good step forward. The true power of this capability comes when you can add anyone (policy permitting) with any email address as an external member. But for now, if you need to share Teams between Office 365 tenants, all the necessary tools are in place.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle

The post How External Access for Microsoft Teams Works appeared first on Petri.

Purple, Brown, Yellow, Red, Green Screen of Death explained

The content below is taken from the original (Purple, Brown, Yellow, Red, Green Screen of Death explained), to continue reading please visit the site. Remember to respect the Author & Copyright.


Most of us are familiar with the Blue Screen of Death and the Black Screen. But did you know that there are also Purple, Brown, Yellow, Red and Green Screens of Death that software or your system may throw up? There is no simple explanation for why these stop errors occur, as several factors can be involved. However, it is known that malfunctioning hardware drivers, or drivers installed by third-party software, often drive this unwanted behavior. Color-coding the error screen helps support staff assign a degree of urgency to the different types of Stop screens and prioritize customers.

Purple, Brown, Yellow, Red, Green Screen of Death

In most cases, these messages are dubbed fatal, as they typically result in unsaved work being lost and a message advising the user to restart the computer. These are some of the infamous screens of death known to date.

Purple Screen of Death (PSOD)

It is a diagnostic screen with white type on a purple background. The Purple Screen is mainly seen when the VMkernel of an ESX/ESXi host experiences a critical error, becomes inoperative and terminates any virtual machines that are running. It is not fatal and is generally considered more of a developer testing issue. When encountered, it can often be cleared simply by pressing and holding your computer’s power button to shut down the device.

Brown Screen of Death

Brown Screen of Death is mostly associated with gameplays, indicating the error is concerned with graphics of a computer. We know that all processors ship with a speed rating. So, running PC games with high graphics forces CPU and memory to run at speeds higher than their official speed grade, thereby causing frequent crashes.

Yellow Screen of Death

It affects the functioning of a browser, particularly Mozilla Firefox. The Yellow Screen of Death appears, with a weird buzzing sound in the background, when the XML parser refuses to process an XML document, causing a parsing error. The issue persists unless the computer is manually rebooted.

Red Screen of Death

Similar to the other screens of death, the Red Screen of Death (RSOD) sometimes appears on computers. When it occurs, the affected computer stops accepting any commands from the keyboard or mouse. Graphics driver issues and applications installing the wrong files are the main causes of the problem. Software conflicts while a computer is booting can also cause the Red Screen of Death.

Green Screen of Death

This color coding is intended for Windows Insiders testing upcoming Windows 10 builds, replacing the archaic BSOD (Blue Screen of Death) in those builds. The purpose of the color swap was to make it easier for Microsoft support staff to differentiate between errors in test builds and those in production.

Let us know if you have anything more to add.


How to convert BAT to EXE file on Windows

The content below is taken from the original (How to convert BAT to EXE file on Windows), to continue reading please visit the site. Remember to respect the Author & Copyright.

Most of us are familiar with the Command Prompt and its basic commands. We usually execute a set of commands in order to complete a task or obtain some information. But this can also be done with the help of a bat file. ‘Bat’ or batch files are plain text files that contain commands to be executed in order. Whenever you run a ‘bat’ file from CMD, it executes all the commands in order and outputs the result. Batch files make it easier for non-technical users to use CMD commands, as batch files can be written by someone else too.
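As a tiny, hedged illustration (the file name and commands are placeholders, not from the article), here is PowerShell writing out the kind of simple batch file you might later convert to an EXE:

# Hedged sketch: create a small example .bat file; the commands are placeholders.
$batchScript = @"
@echo off
echo Backing up logs...
xcopy C:\Logs D:\Backup\Logs /E /I /Y
echo Done.
pause
"@

Set-Content -Path .\backup-logs.bat -Value $batchScript -Encoding Ascii
# Test it once from cmd.exe before converting:  cmd /c .\backup-logs.bat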

If you write batch files yourself, you might be familiar with the process of writing one. In this post, we’ve covered a tool that will let you convert BAT files to EXE files. Converting to EXE has its own benefits. First of all, it hides your code if you do not wish to share it. Other than that, it makes things easier for your users, as more users are comfortable with EXE files. We’ve covered two tools by the same developer: the first is a Windows application and the second is an online tool. Both tools aim to convert your batch files to executable EXEs.

Convert BAT to EXE file


Bat to Exe Converter is a free Windows application available in several variants and formats. The tool is available separately for 32-bit and 64-bit platforms and comes in both portable and installable formats. It comes with a lot of example ‘bat’ files that you can convert to executables. Using the tool is very easy: open it up, select your batch file, and then choose where you would like to save your EXE file.

There are a lot of customizations available that can be made to fine tune your EXE file. First of all, you can decide the visibility of your application. It can run in a hidden mode, or visible to the end user. Then you can also decide the working directory. You can choose whether the application should start in the current directory or the temporary location.

If your script generates some temporary files, then you might want to delete them once the script ends. So, you can enable deletion on exit or disable it as per your needs. Bat to Exe Converter also lets you encrypt your EXE with a password. Password encryption lets you disable unrestricted access to your file.

Other than these features, you can also specify the architecture your script is aiming at. You can compile different scripts for different architectures and distribute them separately. Also, if your script requires administrator privileges, you can add the administrator manifest to the EXE. There are a few other miscellaneous features available as well. You can enable ‘Overwrite Existing Files’ so that the EXE automatically overwrites existing included files. Moreover, you can also enable EXE compression using UPX.

Most of the batch scripts use some external files to complete their functionality. If your script is one of them, you can go to the ‘Include’ tab and select all the files that your script makes use of. The settings under version info let you specify version details and choose an icon for the EXE.


The ‘Editor’ lets you edit the ‘bat’ file. You can make your changes here before compiling the EXE file. The editor offers minimal syntax highlighting which makes it easier to view and edit the batch files.

The last ‘Program Settings’ tab lets you choose the language for your EXE file. You can choose anything from 24 available languages. Once you are done customizing your EXE file, you can hit the ‘Compile’ button to compile your batch file into an EXE. Bat to Exe Converter won’t take long to convert the file, and you will be able to use it very shortly. You can also reset all the entries to start afresh.

Click here to download Bat to Exe Converter for Windows.

Bat to Exe Converter online tool

The web-based version of this tool works similarly well but offers fewer customizations. The web app can be useful if you want to convert a file on the go or simply don’t need many options. Using it is simple too: upload your ‘bat’ file and choose a few options. You can set the visibility, specify the architecture, and include the admin manifest if your script uses commands that require administrator privileges. You can also specify a password to secure your EXE file. Other customizations, such as version info, icon, and language settings, are not yet available in the web application.

Once you are done with the customizations, hit the ‘Convert’ button and download the EXE. The final download is delivered as an encrypted ZIP file.

The web app is handy if you want to convert files quickly, but if you need more customization, I would recommend using the Windows application instead.

Bat to Exe Converter is a handy utility for turning ‘BAT’ files into ‘EXE’ files. Converting your files to ‘EXE’ not only makes them easier for your users to run but also hides your code. Both the Windows app and the web app are useful in different ways, and the range of customizations on offer lets you fine-tune your EXE file and add extra features to it.

Go here to use the Bat to Exe Online Converter.

Still in Use

The content below is taken from the original (Still in Use), to continue reading please visit the site. Remember to respect the Author & Copyright.

'Which one?' 'I dunno, it's your house. Just check each object.' 'Check it for *what*?' 'Whether it looks like it might have touched a paper towel at some point and then forgotten to let go.' '...' 'You can also Google to learn how to check which things are using which resources.' 'You know, I'll just leave the towel there and try again tomorrow.'

pfSense: A Guide to NAT, Firewall Rules and some Networking 101 – Muffin’s Lab

The content below is taken from the original (pfSense: A Guide to NAT, Firewall Rules and some Networking 101 – Muffin’s Lab), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2wT2UwA

Watch: Farmer creates giant bike out of sheep in amazing display for Tour of Britain

The content below is taken from the original (Watch: Farmer creates giant bike out of sheep in amazing display for Tour of Britain), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hors categorie shepherding skills

A Nottinghamshire farmer has taken giant bikes in fields to the next level at the Tour of Britain by coaxing his sheep into the shape of a bike.

Shropshire farm completes harvest with nothing but robots

The content below is taken from the original (Shropshire farm completes harvest with nothing but robots), to continue reading please visit the site. Remember to respect the Author & Copyright.

Researchers in Shropshire, England, have managed to sow and harvest a field of barley using nothing but robots. Many aspects of farming have now been automated, but rarely is the entire process — planting, tending, monitoring and harvesting — completed without someone setting foot inside the field. The ‘Hands-Free Hectare’ project was set up last October by a team from Harper Adams University. With £200,000 in government funding, they modified a tractor and combine harvester with cameras, lasers and GPS systems. Drones and a robot "scout," which could scoop up and carry soil samples, helped the group monitor the field from afar.

Kit Franklin, the project’s leader and an agricultural engineer, freely admits that it was "the most expensive hectare of barley ever." So why bother? Well, commercial farms and agriculture businesses tend to use large, heavy machinery. These vehicles are efficient, but have a damaging effect on the soil and subsequent crops. "Automation is the future of farming, but we’re at a stage where farm machinery has got to unsustainable sizes," Franklin told The Times. He believes smaller, smarter vehicles are the future, as they can work with greater precision and reduce harmful soil compaction.

The Hands-Free Hectare project, while successful, was not without its challenges. The tractor failed to keep a straight line in the field, which meant some of the crops were sown in crooked strips. After drilling and rolling the crop, the team also struggled to quickly repurpose their tractor for spraying. "Sadly, we missed that target but we have since managed to get on our T1 and T2 fungicides, including a herbicide to help tackle some grass weeds we were seeing and micro-nutrients to aid the crop growth," Kieran Walsh, the team’s agronomist, explains. Monitoring a field from a video feed, she adds, is harder than being out there and looking yourself.

It seems inevitable that automation’s role in farming will grow in the future. Large-scale machinery is efficient and convenient in rural locations where labourers are hard to come by. That’s not to say there won’t be a place for human farmers — someone needs to monitor the robots and make sure they’re working properly. As Walsh hints, there’s also a certain something, an ability to "read" the land that’s unique to farmers. Until a machine can replicate these observational skills, there will be a need for humans to pull on their boots and get down in the dirt every so often.

Via: Gizmodo UK

Source: The Times

Announcing Dedicated Interconnect: your fast, private on-ramp to Google Cloud

The content below is taken from the original (Announcing Dedicated Interconnect: your fast, private on-ramp to Google Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

By John Veizades, Product Manager, Dedicated Interconnect

Easy-to-manage, high-bandwidth, private network connectivity is essential for large enterprises. That’s why today we’re announcing Dedicated Interconnect, a new way to connect to Google Cloud and access the world’s largest cloud network.

Dedicated Interconnect lets you establish a private network connection directly to Google Cloud Platform (GCP) through one of our Dedicated Interconnect locations. Dedicated Interconnect also offers increased throughput and even a potential reduction in network costs.

Companies with data- and latency-sensitive services, such as Metamarkets, a real-time analytics firm, benefit from Dedicated Interconnect.

"Accessing GCP with high bandwidth, low latency, and consistent network connectivity is critical for our business objectives. Google’s Dedicated Interconnect allows us to successfully achieve higher reliability, higher throughput and lower latency while reducing the total cost of ownership by more than 60%, compared to solutions over the public internet.” 

– Nhan Phan, VP of Engineering at Metamarkets 

Dedicated Interconnect enables you to extend the corporate datacenter network and RFC 1918 IP space into Google Cloud as part of a hybrid cloud deployment. If you work with large or real-time data sets, Dedicated Interconnect can also help you control how that data is routed.

Dedicated Interconnect features 

With Dedicated Interconnect you get a direct connection to GCP VPC networks with connectivity to internal IP addresses in RFC 1918 address space. It’s available in 10 gigabits per second (Gb/s) increments, and you can select from 1 to 8 circuits from the Cloud Console.

Dedicated Interconnect can be configured to offer a 99.9% or a 99.99% uptime SLA. Please see the Dedicated Interconnect documentation for details on how to achieve these SLAs.

Because it combines point-and-click deployment with ongoing monitoring, Dedicated Interconnect is easy to provision and manage. Once you have it up and running, you can add an additional VLAN with a point-and-click configuration, with no physical plumbing necessary.
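For readers who prefer a command line to the Cloud Console, provisioning looks roughly like the sketch below. The resource names are placeholders, and the flags follow the current public gcloud documentation rather than anything stated in this announcement, so treat it as illustrative only:

  # Order a Dedicated Interconnect (all names, the location and the link count are placeholders).
  gcloud compute interconnects create my-interconnect \
      --customer-name="Example Corp" \
      --interconnect-type=DEDICATED \
      --link-type=LINK_TYPE_ETHERNET_10G_LR \
      --requested-link-count=2 \
      --location=example-colo-zone1 \
      --noc-contact-email=noc@example.com

  # Add a VLAN attachment on an existing Cloud Router in the region you want to reach.
  gcloud compute interconnects attachments dedicated create my-attachment \
      --interconnect=my-interconnect \
      --router=my-router \
      --region=us-central1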

Locations 

Dedicated Interconnect is available today in many locations — with more coming soon. This means you can connect to Google’s network from almost anywhere in the world. For a full list of locations, visit the Dedicated Interconnect locations page. Note that many locations offer service from more than one facility.

Once connected, the Google network provides access to all GCP regions using a private fiber network that connects more than 100 points of presence around the globe. The Google network is the largest cloud network in the world, by several measures, including by the number of points of presence.


Is Dedicated Interconnect right for you? 

Here’s a simple decision tree that can help you determine whether Dedicated Interconnect is right for your organization.

Get started with Dedicated Interconnect 

Use Cloud Console to place an order for Dedicated Interconnect.

Dedicated Interconnect will make it easier for more businesses to connect to Google Cloud. We can’t wait to see the next generation of enterprise workloads that Dedicated Interconnect makes possible.

Inside the store that only accepts personal data as currency

The content below is taken from the original (Inside the store that only accepts personal data as currency), to continue reading please visit the site. Remember to respect the Author & Copyright.

On the internet, technology companies try to track your every move. The news story you liked on Facebook last week. Your Google searches. The videos you watch on YouTube. They’re all monitored by algorithms that want to serve you highly targeted ads. We don’t realise it, but the breadcrumb trail we leave online has value. Real, monetary value. To emphasise that point, cybersecurity firm Kaspersky Lab is running a pop-up shop in London called The Data Dollar Store this week. Inside, you’ll find exclusive t-shirts, mugs and screen prints by street artist Ben Eine. The catch? You can only buy them by giving up some personal data.

Of course, I had to take a look.

Opposite Old Street station, a smattering of Eine fans and curious, spontaneous shoppers waited in line. One by one they filed in and heard the store’s unusual rules: to acquire the mug, you had to hand over three photos, or screenshots of your WhatsApp, SMS and email conversations, to Kaspersky. To buy the t-shirt, it had to be the last three photos on your Camera Roll — so you couldn’t be selective — or the last three messages on your phone. The original print, finally, forced you to hand over your phone. A member of staff would then poke around and select five photos or three screenshots. You could barter with them, but ultimately it was their choice.

There was a mixture of excitement and nervousness in the store. Some people were caught off guard and immediately started rummaging through their phones, checking photos and messages for anything that might cause embarrassment. Others looked smug — clearly, they had been told the rules beforehand and prepared their devices accordingly. "This is stuff we give away freely all the time," Eine told me. "But when you’re actually asked to exchange this private information and walk away with something that does have monetary value, people are like, ‘Whoa! What is actually on my telephone? What are the messages that I’ve sent?’ It’s a little bit scary."

Street artist Ben Eine

I went through the process myself. Originally, I had planned to go for the screen print and fully expose my phone to the people outside the store. But I’m a scaredy-cat and quickly changed my mind, opting for the mug instead. I picked out some photos of a field hockey pitch, a classic car and a longboard my older brother owns. Naturally, I went for shots that didn’t show my face or anyone I knew. The momentary panic surprised me, because I don’t consider myself a man with many secrets. But in that store, I realised just how much data I consider private or sensitive on my phone.

The Data Dollar Store is, of course, a marketing stunt. An interesting thought experiment, but little more. Neither Eine nor Kaspersky are implying that this is the future of commerce. It’s just designed to make people think. "I want people to be worried about the information they’re giving away," Eine says, "and then realise that they’re giving this information away all day, every day." Information has value, he argues, and we should know what that value is. "People should be rewarded for allowing people to have that information," he adds. "At the moment, nobody gets rewarded for giving away all of their personal information."

Source: The Data Dollar Store

Goodbye Skype for Business, Hello….Teams

The content below is taken from the original (Goodbye Skype for Business, Hello….Teams), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft appears to be on the cusp of rebranding its business communications platform. What was formerly called Lync and is currently known as Skype for Business is set to be rebranded once again.

As users were trying to log in to Office 365 today, some were being prompted with the image you see in this post. Considering that this is coming directly from Microsoft, it would seem that Skype for Business is going to be rebranded under the Teams umbrella.

This move isn’t all that surprising; the Skype for Business branding never made much sense, as it was often confused with the consumer Skype platform, which is fundamentally different from the business offering.

The message that showed this branding change was quickly pulled but not before it made its way to Twitter. Seeing as Ignite is coming up at the end of the month, I would expect this change to occur at that conference.

Teams has been a bright spot for Microsoft with it quickly attracting a loyal fan base. While Slack still has the brand recognition Microsoft desires, for those using Office 365, Teams is proving to be a worthy competitor.

With this announcement, it would appear that Microsoft is going to fold all of the capabilities of Skype for Business into this new tool and once again leave the Skype branding for only the consumer. Seeing as Ignite is only about three weeks away, we shouldn’t have to wait too much longer for the official statement.

[Update] Microsoft has posted a message to its admin portal saying that the company is upgrading Skype for Business to Teams; you can read the announcement below. The company says that, for now, this is an opt-in experience, but that suggests everyone will be moved over at some point.

Thanks for the image, Hoyot.

The post Goodbye Skype for Business, Hello….Teams appeared first on Petri.

Record your band’s demo with this tiny cylindrical recording studio

The content below is taken from the original (Record your band’s demo with this tiny cylindrical recording studio), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’ve ever attempted to record a little music demo of your own, you know that these days the wealth of technological options can be a little overwhelming. Audio company iZotope, known for its pro-level recording gear and software, might just solve this problem with a pared-down little gadget called Spire Studio. The cute, cylindrical hardware has inputs for microphones and instruments and connects to the Spire mixing and editing app on your smartphone. It even has a rechargeable lithium-ion battery that the company claims will last for four hours, so you can record pretty much anywhere.

Spire Studio has a built-in mic, two pre-amped inputs and even audio effects like reverb or delay to make your recording sound more professional. You can control the whole back end with the Spire app, which connects wirelessly to the Spire hardware. The recording pod is small enough to drop into your backpack and take with you anywhere. It’s even got an automatic level system which will make sure you don’t record your band at too high a level. Spire Studio has eight tracks that you can use to overdub and record on your own, or you can send your songs to bandmates who can then record their own parts (assuming they have their own Spire). And, according to FastCompany, you can export the tracks to an audio program like Pro Tools or Logic to have even more control over the multiple audio tracks.

Ultimately, having a quick way to record a basic song with actual instruments and vocal performances could benefit a wide range of customers. Professional musicians can make a demo really fast when inspiration strikes, and hobbyists can have an easy intro to multitrack recording, something that’s harder and harder to come by in the increasingly complex world of software recording. We’ve reached out to Spire for more details on price and release timeline and will update this post when we hear back.

Via: FastCompany

Source: Spire Studio

!DualHead in use

The content below is taken from the original (!DualHead in use), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Update] Please note that this review is based on version one of the software – an update was released this week which we will evaluate in a future article.

Now that we have !DualHead installed, it is time to experiment with the world of dual-head RISC OS desktops.

You now have one very large desktop and the ability to select screen modes that give you a very workable screen area. I have two 27-inch monitors, which gives me a 3840 x 1200 resolution.

With two screens, you will have to experiment with how you want to position them. I find that my 27-inch monitors are too wide to sit flat side by side without causing neck strain. Most people either tilt the two screens together in a V shape (as in the picture above) or have one screen at an angle to the main screen. On my Mac I generally prefer the second option, with a ‘main’ screen directly in front of me and an angled second screen to the left, where I ‘park’ windows not currently in use.

R-Comp are very clear that dual head display is a work in progress. The !DualHead application is polished and runs well but does impose a number of restrictions on current use.

Firstly, I found I could not change the layout. My right-hand monitor is always plugged into the second port (the right-hand port when looking at the back of the machine).

There are also different ways to handle multiple screens. On my Mac, the screens can also be separate displays (with a separate task bar on each), and you can arrange one screen under the other. On RISC OS, we have a single screen that is extended across multiple monitors. There are pros and cons to both.

!DualHead also requires the screens to run at the same resolution. You can use two differently sized monitors, but both must use the same mode: when I tried replacing one of my 27-inch monitors with an old 20-inch monitor, both had to run at 1600 x 1200, and the result looked stretched on one screen.

Differently sized monitors are an issue with all dual-display systems. On my Mac I always use two identical 27-inch monitors, because moving windows between screens of different resolutions is not ideal: you have to keep resizing them.

Quibbles aside, !DualHead is a really nice release and brings RISC OS firmly into the world of dual-screen output. It will also allow developers to start adapting their software to make use of it. I tried !Paint and, as expected, a screenshot of the whole screen creates a sprite containing both displays.

This is an excellent first release (following on the heels of the RISC OS 5.23 release) and I look forward to seeing what R-Comp have for us at the London Show…

No comments in forum

RED reveals more about its holographic smartphone display

The content below is taken from the original (RED reveals more about its holographic smartphone display), to continue reading please visit the site. Remember to respect the Author & Copyright.

When RED Camera first announced its crazy $1,200 Hydrogen smartphone with a "holographic display," a lot of folks wondered how that would actually work. Now, CEO Jim Jannard has revealed that RED is creating the screen in partnership with a company called Leia Inc. (yes, like that Leia). A spin-off from Hewlett-Packard Labs, it calls itself "the leading provider of light field holographic display solutions for mobile," and the key words "light field" give us a pretty good idea of how it works.

Light field displays use multiple layers of LCDs with a "directional backlight," letting you see two different views of the same object with each eye, producing a 3D effect. In practice, when you rotate a display, objects like buildings would appear to project from the screen, as shown in the video below. The effect shows a lot of promise for virtual and augmented reality headsets, but for external displays, viewing angles have been limited so far.

Leia says it "leverages recent breakthroughs in nano-photonic design and manufacturing to provide a complete lightfield ‘holographic’ display solution for mobile devices." It says the tech can create a holograph-like effect, "while preserving the normal operation of the display." In other words, if you turn off the 4D part, it’ll work like a regular smartphone screen. RED hasn’t showed the tech to many folks yet, but MKBHD’s Marcus Brownlee did see it, and said he was "pretty impressed," adding that it wasn’t perfect because of issues like light bleeding and stuttering for 4D gaming.

To produce content for the screen in the form of .h4v files, Jannard has told Redusers that "you can generate .h4v (holographic 4-View) by shooting 4 cameras (we are building solutions from consumer to professional), or by converting 3D to .h4v (very easy), or converting 2D to 3D (very hard) and then to .h4v."

RED has formed a "strategic partnership" with Leia and made an unnamed investment in the company, and Jannard will join its board of directors. It says the smartphone will arrive in the first half of 2018, and functional prototypes are supposed to be ready in the coming months.

Source: Reduser.net

Datadog Acquires Logmatic.io, Adding Log Management to its Full-Stack Cloud Monitoring Platform

The content below is taken from the original (Datadog Acquires Logmatic.io, Adding Log Management to its Full-Stack Cloud Monitoring Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Datadog announced the acquisition of Logmatic.io, a Paris-based Operations Data Platform for Log and Machine Events. The acquisition makes Datadog the first vendor to offer Infrastructure Metrics, Application Performance Monitoring (APM), and Log Management within a single platform. This provides DevOps teams with the tools necessary to detect, diagnose, and troubleshoot almost any problem in modern applications, significantly reducing the total cost of application monitoring and time to resolution in critical applications for enterprise customers.

Logmatic.io was founded in 2014 by Emmanuel Gueidan, Renaud Boutet, and Amirhossein Malekzadeh with the mission of helping organizations improve their software and business performance. The platform analyzes billions of data points daily with powerful log management and visualization tools and counts marquee brands such as Accenture, BlaBlaCar, Canal+, DailyMotion, and LVMH amongst its customers.

Earlier this year, Datadog also announced the general availability of Datadog APM (application performance monitoring), enabling organizations to monitor every part of their service-oriented applications from infrastructure to code.

“Integrating logs with the APM and Infrastructure monitoring we already provide is an important step in the evolution of the monitoring and analytics category,” said Olivier Pomel, Chief Executive Officer of Datadog. “Our new integrated platform will simplify the monitoring of modern applications that often span clouds, containers, IoTs and mobile devices. It will also unlock new A.I. and machine learning based capabilities to help customers manage and improve their applications and businesses.”

“Access to the right machine data coming from the very core of business operations will help drive success for our customers,” said Amirhossein Malekzadeh, Co-Founder and CEO at Logmatic.io. “Over the last 3 years at Logmatic.io, we saw a rapidly growing overlap in users and use-cases between log management and monitoring platforms. Becoming a part of Datadog was the best way to deliver a superior monitoring experience to the rapidly evolving market.”

“We’re already customers of both Datadog and Logmatic.io,” said Sylvain Barré, Vice President of Scale at Dailymotion. “Having a single analytics, monitoring, and alerting platform will dramatically improve our productivity.”

The Logmatic.io team has joined the Datadog Paris R&D office and an integrated log management solution is currently available to select customers. All current Logmatic.io customers will continue to have access to their existing services, and will have a direct path for migration upon the general availability of log management within Datadog.

Convert Ps1 To Exe files using free software or online tool

The content below is taken from the original (Convert Ps1 To Exe files using free software or online tool), to continue reading please visit the site. Remember to respect the Author & Copyright.

PowerShell (.ps1) scripts are plain-text files used to automate all kinds of administrative tasks on Windows, but they are not native executables; the only native executable formats on Windows are .exe and .dll, and PowerShell’s execution policies restrict how scripts can be run. So someone who is not very familiar with the command line may find it difficult to run a script, and converting also keeps the script’s contents away from casual prying eyes. A tool that can convert a .ps1 file into an .exe file makes sharing scripts much more practical, and Ps1 To Exe comes across as a viable solution.

Ps1 To Exe is a simple yet useful application that gives developers a quick, straightforward way of converting PowerShell PS1 script files to the EXE format. The tool can generate 32-bit and 64-bit output files quickly, supports encryption, and allows you to include additional files.
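To picture the kind of script you might hand out this way, here is a small, hypothetical example; the drive letter and the 10 GB threshold are invented for illustration and have nothing to do with the tool itself:

  # Hypothetical example: report free space on the C: drive and warn when it runs low.
  $drive = Get-PSDrive -Name C
  $freeGB = [math]::Round($drive.Free / 1GB, 1)
  Write-Output "Free space on C: $freeGB GB"
  if ($freeGB -lt 10) {
      Write-Warning "Less than 10 GB free - consider cleaning up."
  }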

Convert Ps1 To Exe

Download the application and install it. Its light weight (around 3.1 MB) certainly works in its favour. It is also available as a portable application and can be removed easily without leaving any traces on your PC.

When you first launch the application, its main screen displays the options for converting a .ps1 file into an .exe file.


Once a PS1 script has been loaded, you can customize various parameters of the output file. For instance, you can:

  1. Encrypt the file
  2. Compress the file using UPX
  3. Generate a visible or hidden application targeting the 32-bit or 64-bit architecture

You also need to specify the temporary data storage location and choose whether the file should be deleted automatically on exit. Besides ‘Options’, the other tabs you will find are:

  • Include – lets you bundle additional files into the output EXE
  • Version Information – lets you specify version details and an icon for the EXE
  • Editor – an easy-to-use built-in editor for modifying the imported PS1 file
  • Program Settings – allows you to configure the desired language


You can reset all entries at any time.

In all, Ps1 To Exe is a very dependable utility to convert PowerShell files to the EXE format. You can download it here.

Ps1 to Exe converter online

Apart from the standalone and portable applications, there’s the Ps1 To Exe Online Converter to help you convert PowerShell (.ps1) files to the EXE (.exe) format. Visit f2ko.de to use this online tool for free.

The web application just requires you to browse to your .ps1 file, choose an architecture, specify the visibility and, optionally, enter a password; it then converts the PowerShell (.ps1) file to the EXE (.exe) format instantly.

OpenStack Developer Mailing List Digest September 2 – 8

The content below is taken from the original (OpenStack Developer Mailing List Digest September 2 – 8), to continue reading please visit the site. Remember to respect the Author & Copyright.

Successbot Says!

  • fungi: Octavia has migrated from Launchpad to StoryBoard [1]
  • sc’ : OpenStack is now on the Chef Supermarket! http://bit.ly/2xbI2mA [2]

Summaries:

  • Notifications Update Week 36 [3]

Updates:

  • Summit Free Passes [4]
    • People who attended the Atlanta PTG or will attend the Denver PTG will receive 100% discount passes for the Sydney Summit
    • They must be used by October 27th
  • Early Bird Deadline for Summit Passes is September 8th [5]
    • Expires at 6:59 UTC
    • Discount Saves you 50%
  • Libraries Published to PyPI with YYYY.X.Z versions [6]
    • Moving forward with deleting the libraries from PyPI
    • Removing these libraries:
      • python-congressclient 2015.1.0
      • python-congressclient 2015.1.0rc1
      • python-designateclient 2013.1.a8.g3a2a320
      • networking-hyperv 2015.1.0
    • Still waiting on approval from PTLs about the others
      • mistral-extra
      • networking-odl
      • murano-dashboard
      • networking-midonet
      • sahara-image-elements
      • freezer-api
      • murano-agent
      • mistral-dashboard
      • sahara-dashboard
  • Unified Limits work stalled [7]
    • Need for new leadership
    • Keystone merged a spec [8]
  • Should we continue providing FQDNs for instance hostnames? [9]
    • Nova network has deprecated the option that the domain in the FQDN is based on
    • Working on getting the domain info from Neutron instead of Nova, but this may not be the right direction
      • Do we want to use an FQDN as the hostname inside the guest?
      • The Infra servers are built with the FQDN as the instance name itself
  • Cinder V1 API Removal [10]
    • Patch here [11]
  • Removing Screen from Devstack - RSN
    • It’s been merged
    • A few people are upset that they don’t have screen for debugging anymore
      • Systemd docs are being updated to cover debugging with pdb, so you can debug in a similar way to how people used screen [12] [13] [14]

PTG Planning

  • Video Interviews [15]

 

[1] http://bit.ly/2eLl1wd

[2] http://bit.ly/2xbI2D6

[3] http://bit.ly/2eLCcOj

[4] http://bit.ly/2xbI3XG #Free

[5] http://bit.ly/2eLCBQL

[6] http://bit.ly/2xbI4ec #YYYY

[7] http://bit.ly/2eLl1MJ

[8] http://bit.ly/2xbI4uI

[9] http://bit.ly/2eLWCqi

[10] http://bit.ly/2xbnB9c

[11] http://bit.ly/2eLl23f

[12] http://bit.ly/2eLl2jL/

[13] http://bit.ly/2xbI51K

[14] http://bit.ly/2eLl2Ah

[15] http://bit.ly/2xbnBpI