Cloud28+ turns its cloud catalog into an enterprise app store


Cloud28+, the cloud services federation backed by Hewlett Packard Enterprise, now wants to help you install enterprise applications, not just choose them from its catalog.

Although HPE is the driving force behind Cloud28+, the federation of independent software vendors, resellers and service providers now has 225 members, which are pushing to simplify cloud software deployment.

The federation plans to open its new App Center for business later this summer, and will begin stocking its virtual shelves on June 7 with the opening of an App Onboarding Center. This will containerize workloads submitted by vendors and resellers and test them for compatibility, initially for free.

The workloads on offer will be more diverse than those contained in the original Cloud28+ catalog, as the federation is opening up to additional technology platforms. Once strictly an open source, OpenStack shop, it is now embracing Microsoft Azure, VMware, Ormuco and Docker.

Docker is key to the App Center’s operation, in fact, as Cloud28+ has adopted Docker Datacenter as its containers-as-a-service framework.

“The value of Cloud28+ is in multi-cloud with monetization at the edge,” said Xavier Poisson, HPE’s vice president for hybrid IT in Europe, Middle East and Africa, at a meeting for Cloud28+ partners on Thursday.

That monetization at the edge means that HPE, through Cloud28+, is providing a common way for cloud infrastructure providers around Europe to deliver software and services to end users.

Another of the federation’s innovations announced Thursday is the introduction of a series of tools allowing vendors and service providers to market their services, generate customer leads and track their performance.

One of the advantages of Cloud28+ for participating service providers operating in only one country is that it makes them more visible to independent software vendors from other countries looking for new distribution channels, said Khaled Chaar, managing director of German cloud service provider Pironet NDH. That enables companies like his to offer their customers a broader range of cloud-ready software, he said.

Although many applications are already cloud-ready, for a typical business the 20 percent or so of the applications it uses that aren’t cloud-ready are probably the most valuable, industry-specific ones, he said. Making it easier to move those to the cloud will present significant advantages, he said.

Make Your PowerPoint Slides Suck Less With This New Shutterstock Plug-in


Preview how high-quality Shutterstock images look in your presentations before you commit to buying them.

How to monitor, measure, and manage your broadband consumption


Forget that bass; in the digital world, it’s all about that bandwidth. You’re paying your ISP for a given amount of bandwidth, but it’s up to you to manage how it’s consumed. Whether or not you have a data cap—and even if your data cap is high enough that you never bang into it—simply letting all the devices on your network engage in a battle for supremacy is a recipe for problems.

You could experience poor video streaming, choppy VoIP calls, or debilitating lag in your online gaming sessions. And if you do have a data cap (and yes, they are evil), blowing through it can hit you in the pocketbook, expose you to throttling (where your ISP drastically, if temporarily, reduces your connection speed), or both.

Those are the problems; here are the solutions: We’ll show you how you can keep your ISP honest by measuring your Internet connection speed, so you can make sure you’re getting what you’re paying for; we’ll help you identify any bandwidth hogs on your network, so you can manage their consumption; and we’ll show you how you can tweak your router to deliver the best performance from everything on your home network.

Make sure you’re getting what you paid for

Your home network will most certainly be faster than your Internet connection, but it’s the speed of your Internet connection that will have the biggest impact on your media-streaming experience—at least when you’re streaming media from services such as Netflix, Amazon Video, Spotify, Tidal, and the like. So the first step in your bandwidth audit should be to verify that your ISP is delivering the speed you’re paying for (the vast majority of ISPs offer their services in tiers, charging more for higher speeds).

Results from Speedtest.net for my home connection.

The best way to do that is by visiting a third-party website such as Ookla’s Speedtest.net or—if you don’t like Flash—the HTML5-based Speedof.me. To get accurate baseline speeds, check from a device that’s connected directly to your broadband gateway (i.e., your DSL or cable modem, not your router), with all other wired and wireless devices disconnected. You might even want to test a couple of times at different hours of the day, since speeds can vary. Additionally, run some tests while other devices are using the Internet to see the differences.
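If you’d rather script those checks than run them by hand in a browser, the community speedtest-cli Python package (an assumption here, not something the article itself uses) wraps Speedtest.net’s test servers; a minimal sketch:

```python
# pip install speedtest-cli
import speedtest

st = speedtest.Speedtest()
st.get_best_server()              # pick the lowest-latency test server
down = st.download()              # measured in bits per second
up = st.upload()

print(f"Download: {down / 1e6:.1f} Mbps")
print(f"Upload:   {up / 1e6:.1f} Mbps")
print(f"Ping:     {st.results.ping:.0f} ms")
```

Run it a few times at different hours, as suggested above, and keep the numbers for comparison against your plan.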

Compare your baseline results to the speeds your ISP has committed to deliver with the plan you’re paying for. If you’re seeing significantly lower speeds, call your provider and ask them to check your connection. They might be able to run some diagnostics at their end and offer some suggestions to fix the problem before they send out a tech.

You also want to check the Internet speeds from any device you’re seeing performance issues on. Devices that are hardwired into the network should achieve speeds on par with your baseline if other devices on the network aren’t using much bandwidth. On wireless devices, speeds can drop considerably the farther you are from the wireless router, and they can suffer from interference from neighboring Wi-Fi networks, other wireless devices, and appliances such as microwave ovens, which produce tremendous amounts of noise in the 2.4GHz frequency spectrum while operating.

How much bandwidth do you really need?

Keep in mind, the bandwidth your ISP promises to deliver isn’t a per-device ceiling—it’s the total bandwidth available for your Internet connection, so it’s shared among all the devices on your network. If you have a plan offering download speeds of 20Mbps and upload speeds of 1.0Mbps, for instance, and you have four devices connected to the Internet, you could say each device might see a maximum download speed of 5.0Mbps and a maximum upload speed of 0.25Mbps.

In reality, it’s not quite that simple. The manner in which your Internet bandwidth is distributed depends on your router and the demand from each device. With a simple router at factory-default settings, it’s every client device for itself in a mad scramble for bandwidth. Client devices that are sensitive to lag—media streamers, VoIP phones, and online games—can suffer in this scenario because applications that aren’t sensitive to lag—web browsers and email clients, for example—are treated the same as ones that are. I’ll show you how you can manage your bandwidth later.

If you don’t configure your router properly, all the devices on your network will be treated equally in terms of bandwidth allocation.

To give you an idea of what’s acceptable for Internet speeds, I suggest having about 2.0Mbps of download speed per device for general usage (emailing and web browsing), and about 5.0Mbps of download speed for each HD video stream. So if one person on your network is watching YouTube videos, another is streaming a movie from Netflix, both are simultaneously using a tablet or smartphone to browse the web, while another is on a Skype video chat, I suggest having 19Mbps of download bandwidth: that’s 5.0Mbps x 3 + 2.0Mbps x 2.
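Those rules of thumb reduce to simple arithmetic; here is a throwaway sketch that just encodes the per-stream and per-device estimates above:

```python
# Rough download-bandwidth estimator using the rules of thumb above:
# ~5.0 Mbps per HD video stream, ~2.0 Mbps per general-use device.
HD_STREAM_MBPS = 5.0
GENERAL_USE_MBPS = 2.0

def needed_download_mbps(hd_streams: int, general_devices: int) -> float:
    return hd_streams * HD_STREAM_MBPS + general_devices * GENERAL_USE_MBPS

# The household above: YouTube + Netflix + Skype video (3 streams),
# plus two people browsing on tablets or phones.
print(needed_download_mbps(3, 2))  # 19.0
```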

The maximum upload speed of your Internet connection typically isn’t as crucial, because most people consume more content than they create and upload to the Internet. That’s a good thing given that most ISPs deliver asymmetric service (i.e., download speeds that are much higher than upload speeds). Having said that, know that the upload speeds can make a huge difference for applications such as Skype or FaceTime since video is traveling in both directions—up and down—simultaneously. For high-quality (non-HD) video chats, I suggest adding about 0.5Mbps of upload bandwidth or about 1.5Mbps for full HD.

Your upload bandwidth also comes into play when you or others are remotely accessing devices or files on your network when you’re away from home. It’s hard to suggest a fixed number on that activity, though; just remember the faster the upload speed, the faster the file transfers and streams will be coming from your network.

Monitor your usage to identify bandwidth hogs

Whether you have a data cap or are having performance issues, consider tracking the bandwidth usage of all your devices to see who or what is hogging the most bandwidth.

You might consider using a Windows-based program like BitMeter OS (free and open-source) or NetWorx (also free), which are most useful if all or most of the Internet devices on the network are Windows PCs or laptops. These applications will track usage over time for the particular computer they’re installed on, and offer up graphs and tables of data you can review. You can also set a data quota and be alerted when a device approaches or exceeds that limit.

NetWorx provides detailed bandwidth reports, but only for the PCs on your network.

If you’re using multiple types of devices on the network—smartphones, tablets, gaming consoles, and TVs, in addition to computers running Windows—it would be ideal to track the entire network’s bandwidth from a single point, so you don’t have to set up tracking on each device. Since the Internet traffic of each device needs to be monitored, it’s not as easy as installing a simple program on a PC. The traffic must be monitored from the router or another device strategically placed between the Internet connection and the network clients.

Although most routers don’t track bandwidth consumption by network device, consider checking yours just in case. If your router doesn’t support it, consider buying another router or flashing a supported router with aftermarket firmware that does support it. If you decide to buy a new router, the enterprise-oriented Open Mesh routers and access points provide quite a bit of bandwidth usage detail. Their hardware can be managed via a free online account, and it supports wireless mesh-networking technology that makes it easier to broaden your Wi-Fi coverage.

Open Mesh shows a graph of bandwidth usage of each client, top clients, top devices, top applications, and top APs on its Network Overview page.

If you don’t want to replace your router, flashing it with aftermarket firmware is a good option, provided your router has that capability. DD-WRT is one popular aftermarket firmware that supports many router brands and models; but by default, it shows only your total bandwidth usage. To find the usage per client or device, you’d also need to install an add-on like DDWRT-BWMON.

Cucumber Tony is a lesser-known firmware to consider. I reviewed it for TechHive’s sister site NetworkWorld recently and found that it supports a couple of different router brands. Gargoyle is another firmware you might not be familiar with. It offers some good bandwidth monitoring and control functionality, with support for a few router brands.

For the more adventurous, another option is to build your own router out of an old or spare PC, or even run it on your main PC with a virtual machine. Sophos UTM and Untangle, for instance, are operating systems that provide routing, firewall, web filtering, bandwidth monitoring, and many more network functions.

Cucumber Tony shows a graph and table of each device’s bandwidth usage on the Clients page.

Utilize your router’s QoS to distribute bandwidth

Most routers have a quality-of-service (QoS) feature, but it’s not enabled by default on some routers. The idea behind QoS is to regulate bandwidth usage in a way that ensures good performance on the network, particularly with more sensitive types of services such as video streams, VoIP calls, and online gaming, where any lag can be quite noticeable. It basically gives these types of traffic higher priority—on the network and to Internet access from the network—compared to services that aren’t sensitive to lag (e.g., file downloads, torrents, software updates, and general web browsing).

The exact QoS features and settings vary by router brand and model, but most provide a way for you to give particular devices higher priority by tagging their MAC or IP address, or by marking types of services for higher priority. Some routers come with a collection of default QoS settings that you can tweak and customize.

Log in to your router and see if it has any QoS settings. Take a look at the defaults, as they might already give the most common services higher priority. If not, see if the router allows you to classify traffic based upon the service type. I suggest going that route first to help alleviate any performance issues on the network. Then consider giving higher priority to any critical devices.
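Router QoS screens vary, but on many Linux-based firmwares (including DD-WRT and Gargoyle, mentioned earlier) prioritization ultimately rests on the kernel’s tc traffic-control tool. The sketch below illustrates that underlying mechanism rather than any particular router’s UI; the WAN interface name, the rates, and the VoIP port are placeholder assumptions, and the commands need root on a Linux box:

```python
# A sketch of what consumer-router QoS does under the hood on Linux:
# an HTB queueing discipline that guarantees VoIP traffic some bandwidth.
import subprocess

WAN = "eth0"  # placeholder WAN interface

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# Root HTB qdisc; unclassified traffic falls into class 1:20.
tc("qdisc", "add", "dev", WAN, "root", "handle", "1:", "htb", "default", "20")
# Total upstream ceiling (set just under your measured upload speed).
tc("class", "add", "dev", WAN, "parent", "1:", "classid", "1:1",
   "htb", "rate", "950kbit")
# High-priority class for VoIP: guaranteed 300 kbit, may borrow up to the ceiling.
tc("class", "add", "dev", WAN, "parent", "1:1", "classid", "1:10",
   "htb", "rate", "300kbit", "ceil", "950kbit", "prio", "0")
# Everything else: guaranteed the remainder, lower priority.
tc("class", "add", "dev", WAN, "parent", "1:1", "classid", "1:20",
   "htb", "rate", "650kbit", "ceil", "950kbit", "prio", "1")
# Steer SIP traffic (assumed here to be UDP port 5060) into the fast class.
tc("filter", "add", "dev", WAN, "parent", "1:", "protocol", "ip", "prio", "1",
   "u32", "match", "ip", "dport", "5060", "0xffff", "flowid", "1:10")
```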

This Netgear WNR2000 802.11n router has QoS pre-configured for a limited number of applications, but you must configure your own rules for anything the manufacturer didn’t think of.

Optimize your network to increase speeds

At first thought, your Internet connection would seem to be the bottleneck. Your local network might be able to handle up to 1000Mbps of bandwidth, while your Internet download speeds are likely less than 60Mbps (much less than that if you’re relying on DSL or—shudder—satellite Internet service). You’d think your network could easily handle the load, but sometimes that’s not the case. This is especially true when you have many devices on the network, particularly Wi-Fi devices.

You might not need super-fast speeds for every device or online service, but the quicker the router serves one device, the more time it has to serve the others. Thus, increasing the speed of just one device can have an impact on the rest. The more devices you speed up, the more noticeable the overall performance gain will be, especially for those lag-sensitive services.

Whenever possible, connect computers and devices to the router or network via an ethernet cable. This helps alleviate congestion on the airwaves, a much more complex and imperfect connection medium than a cable.

For devices that can’t be hardwired, try to utilize your router’s 5GHz frequency band as much as possible, as the 2.4GHz band is much more congested and prone to interference. For network clients that can connect only to your 2.4GHz network, check channel usage so you can use the least-crowded channel available. Additionally, ensure you’re using only WPA2 security for your Wi-Fi, as enabling the first-generation WPA (or the even older, insecure WEP) limits wireless speeds.

If your wireless router doesn’t support 5GHz, I suggest upgrading to a dual-band router so you can utilize that faster, higher-quality band. Keep in mind that your Wi-Fi devices must also support 5GHz, otherwise they’ll still connect via 2.4GHz. For computers and devices that can be upgraded to 5GHz Wi-Fi, I suggest doing so, starting with any that have performance issues.

Finally, evaluate your Wi-Fi coverage to ensure that your wireless router is placed in the most central spot around where you use the wireless devices most often. If you still regularly have low or poor Wi-Fi signals, consider extending your network.

This story, “How to monitor, measure, and manage your broadband consumption” was originally published by TechHive.

Google has a new chip that makes machine learning way faster


Google has taken a big leap forward with the speed of its machine learning systems by creating its own custom chip that it’s been using for over a year.

The company was rumored to have been designing its own chip, based partly on job ads it posted in recent years. But until today it had kept the effort largely under wraps.

It calls the chip a Tensor Processing Unit, or TPU, named after the TensorFlow software it uses for its machine learning programs. In a blog post, Google engineer Norm Jouppi refers to it as an accelerator chip, which means it speeds up a specific task.

At its I/O conference Wednesday, CEO Sundar Pichai said the TPU provides an order of magnitude better performance per watt than existing chips for machine learning tasks. It’s not going to replace CPUs and GPUs, but it can speed up machine learning processes without consuming a lot more energy.

As machine learning becomes more widely used in all types of applications, from voice recognition to language translation and data analytics, having a chip that speeds those workloads is essential to maintaining the pace of advancements.

And as Moore’s Law slows down, reducing the gains from each new generation of processor, using accelerators for key tasks becomes even more important. Google says its TPU provides the equivalent gains to moving Moore’s Law forward by three generations, or about seven years.

The TPU is in production use across Google’s cloud, including powering the RankBrain search result sorting system and Google’s voice recognition services. When developers pay to use the Google Voice Recognition Service, they’re using its TPUs.

Urs Hölzle, Google’s senior vice president for technical infrastructure, said during a press conference at I/O that the TPU can augment machine learning processes but that there are still functions that require CPUs and GPUs.

Google started developing the TPU about two years ago, he said.

Right now, Google has thousands of the chips in use. They’re able to fit in the same slots used for hard drives in Google’s data center racks, which means the company can easily deploy more of them if it needs to.

Hölzle says, though, that Google doesn’t need a TPU in every rack just yet.

If there’s one thing that Google likely won’t do, it’s sell TPUs as standalone hardware. Asked about that possibility, Google enterprise chief Diane Greene said that the company isn’t planning to sell them for other companies to use.

Part of that has to do with the way application development is heading — developers are building more and more applications in the cloud only, and don’t want to worry about managing hardware configurations, maintenance and updates.

Another possible reason is that Google simply doesn’t want to give its rivals access to the chips, which it likely spent a lot of time and money developing. 

We don’t yet know what exactly the TPU is best used for. Analyst Patrick Moorhead said he expects the chip will be used for inferencing, a part of machine learning operations that doesn’t require as much flexibility.

Right now, that’s all Google is saying. We still don’t know which chip manufacturer is building the silicon for Google. Hölzle said that the company will reveal more about the chip in a paper to be released this fall.

Google’s new tools make it easier to integrate apps with its spreadsheets and slides


Google is updating the developer tools for its Docs productivity suite in an effort to make it easier for companies to integrate third-party applications with its presentation, spreadsheet and word processing software. 

Software makers can start working with a new tool that lets them sync data between a Google Sheet and their application for easy data compilation and sharing among people who use the online spreadsheet software. In addition, Google also announced a new Slides API that will allow users to automatically populate slide decks with information from outside sources. 

Software packages like Google Docs don’t exist in a vacuum, and offering developers a way to more deeply integrate with the company’s products could lead to more companies becoming interested in picking up the productivity suite because of how it works with other software. 

Case in point: Salesforce is using the new Sheets API to sync data from a customer’s CRM into a spreadsheet, which can then be shared with other people inside or outside the user’s organization. When information gets changed in Salesforce, it’ll propagate across any spreadsheets synced with it.

That’s useful for making sure that information shared within an organization using Google Sheets is up to date and accurate. 
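Google’s post doesn’t include code, but a minimal sketch of that sync pattern against the v4 Sheets API, using the Google API client for Python, might look like this; the spreadsheet ID, service-account file, and row data are all placeholder assumptions:

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical credentials file
    scopes=["https://www.googleapis.com/auth/spreadsheets"])
sheets = build("sheets", "v4", credentials=creds)

# Rows synced from some external system (a CRM, in Salesforce's case).
rows = [["Opportunity", "Stage", "Amount"],
        ["Acme renewal", "Negotiation", 52000]]

sheets.spreadsheets().values().update(
    spreadsheetId="your-spreadsheet-id",  # hypothetical
    range="Sheet1!A1",
    valueInputOption="RAW",   # write values as-is, no formula parsing
    body={"values": rows},
).execute()
```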

The Slides API integration is designed to make it easier for business users to create visual presentations without a whole lot of effort. For example, Trello is working on a feature that would let users take items stored on a “board” in its application and turn them into slides with a couple of clicks, without having to go through all the trouble of building a slide deck by hand. 
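The Slides API had only been announced at this point, so code against it is necessarily speculative; based on the v1 surface Google eventually shipped, turning board items into title slides might look roughly like this (the presentation ID, credentials file, and card titles are placeholders):

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical credentials file
    scopes=["https://www.googleapis.com/auth/presentations"])
slides = build("slides", "v1", credentials=creds)

cards = ["Ship v2 beta", "Fix onboarding flow"]  # e.g. items from a Trello board
requests = []
for i, title in enumerate(cards):
    slide_id, title_id = f"card{i}", f"card{i}.title"
    # One new slide per card, with a named title placeholder...
    requests.append({"createSlide": {
        "objectId": slide_id,
        "slideLayoutReference": {"predefinedLayout": "TITLE_ONLY"},
        "placeholderIdMappings": [
            {"layoutPlaceholder": {"type": "TITLE"}, "objectId": title_id}],
    }})
    # ...then the card's text inserted into that placeholder.
    requests.append({"insertText": {"objectId": title_id, "text": title}})

slides.presentations().batchUpdate(
    presentationId="your-presentation-id",  # hypothetical
    body={"requests": requests},
).execute()
```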

Security-conscious businesses might not be a fan of these integrations yet. Right now, it’s not possible to programmatically exclude users from seeing information they’re not supposed to while still sharing a spreadsheet or slide deck with them. If people are working on a team with tightly secured information that shouldn’t be shared with others, these features kind of backfire. 

For smaller organizations, or teams that aren’t as concerned about keeping information under wraps, the integrations these APIs open up will likely be welcome extensions to Sheets and Slides.

The company announced the updates at its I/O developer conference Wednesday, which included a number of other announcements like more details about the future of Android and information on the company’s VR ambitions. 

RES Enhances RES ONE Workspace with Native Automation Technology and Announces RES ONE Workspace Core


RES, the leader in enabling, automating and securing digital workspaces, today announced the addition of enterprise automation to its RES ONE… Read more at VMblog.com.

I Love My Amazon WorkSpace!


Early last year my colleague Steve Mueller stopped by my office to tell me about an internal pilot program that he thought would be of interest to me. He explained that they were getting ready to run Amazon WorkSpaces on the Amazon network and offered to get me on the waiting list. Of course, being someone that likes to live on the bleeding edge, I accepted his offer.

Getting Started
Shortly thereafter I started to run the WorkSpaces client on my office desktop, a fairly well-equipped PC with two screens and plenty of memory. At that time I used the desktop during the working day and a separate laptop when I was traveling or working from home. Even though I used Amazon WorkDocs to share my files between the two environments, switching between them caused some friction. I had distinct sets of browser tabs, bookmarks, and the like. No matter how much I tried, I could never manage to keep the configurations of my productivity apps in sync across the environments.

After using the WorkSpace at the office for a couple of weeks, I realized that it was just as fast and responsive as my desktop. Over that time, I made the WorkSpace into my principal working environment and slowly severed my ties to my once trusty desktop.

I work from home two or three days per week. My home desktop has two large screens, lots of memory, a top-notch mechanical keyboard, and runs Ubuntu Linux. I run VirtualBox and Windows 7 on top of Linux. In other words, I have a fast, pixel-rich environment.

Once I was comfortable with my office WorkSpace, I installed the client at home and started using it there. This was a giant leap forward and a great light bulb moment for me. I was now able to use my fast, pixel-rich home environment to access my working environment.

At this point you are probably thinking that the combination of client virtualization and server virtualization must be slow, laggy, or less responsive than a local device. That’s just not true! I am an incredibly demanding user. I pound on the keyboard at a rapid-fire clip, I keep tons of windows open, alt-tab between them like a ferret, and I am absolutely intolerant of systems that get in my way.  My WorkSpace is fast and responsive and makes me even more productive.

Move to Zero Client
A few months into my WorkSpaces journey, Steve IM’ed me to talk about his plan to make some Zero Client devices available to members of the pilot program. I liked what he told me and I agreed to participate. He and his sidekick Michael Garza set me up with a Dell Zero Client and two shiny new monitors that had been taking up space under Steve’s desk. At this point my office desktop had no further value to me. I unplugged it, saluted it for its meritorious service, and carried it over to the hardware return shelf in our copy room. I was now all-in on, and totally dependent on, my WorkSpace and my Zero Client.

The Zero Client is a small, quiet device. It has no fans and no internal storage. It simply connects to the local peripherals (displays, keyboard, mouse, speakers, and audio headset) and to the network. It produces little heat and draws far less power than a full desktop.

During this time I was also doing quite a bit of domestic and international travel. I began to log in to my WorkSpace from the road. Once I did this, I realized that I now had something really cool—a single, unified working environment that spanned my office, my home, and my laptop. I had one set of files and one set of apps and I could get to them from any of my devices. I now have a portable desktop that I can get to from just about anywhere.

The fact that I was using a remote WorkSpace instead of local compute power faded into the background pretty quickly. One morning I sent the team an email with the provocative title “My WorkSpace has Disappeared!” They read it in a panic, only to realize that I had punked them, and that I was simply letting them know that I was able to focus on my work, and not on my WorkSpace. I did report a few bugs to them, none of which were serious, and all of which were addressed really quickly.

Dead Laptop
The reality of my transition became apparent late last year when the hard drive in my laptop failed one morning. I took it in to our IT helpdesk and they replaced the drive. Then I went back up to my office, reinstalled the WorkSpaces client, and kept on going. I installed no other apps and didn’t copy any files. At this point the only personal items on my laptop are the registration code for the WorkSpace and my stickers! I do still run PowerPoint locally, since you can never know what kind of connectivity will be available at a conference or a corporate presentation.

I also began to notice something else that made WorkSpaces different and better. Because laptops are portable and fragile, we all tend to think of the information stored on them as transient. In the dark recesses of our minds we know that one day something bad will happen and we will lose the laptop and its contents. Moving to WorkSpaces takes this worry away. I know that my files are stored in the cloud and that losing my laptop would be essentially inconsequential.

It Just Works
To borrow a phrase from my colleague James Hamilton, WorkSpaces just works. It looks, feels, and behaves just like a local desktop would.

Like I said before, I am a demanding user. I have two big monitors, run lots of productivity apps, and keep far too many browser windows and tabs open. I also do things that have not been a great fit for virtual desktops up until now. For example:

Image Editing – I capture and edit all of the screen shots for this blog (thank you, Snagit).

Audio Editing – I use Audacity to edit the AWS Podcasts. This year I plan to use the new audio-in support to record podcasts on my WorkSpace.

Music – I installed the Amazon Music player and listen to my favorite tunes while blogging.

Video – I watch internal and external videos.

Printing – I always have access to the printers on our corporate network. When I am at home, I also have access to the laser and ink jet printers on my home network.

Because the WorkSpace is running on Amazon’s network, I can download large files without regard to local speed limitations or bandwidth caps. Here’s a representative speed test (via Bandwidth Place):

Sense of Permanence
We transitioned from our pilot WorkSpaces to our production environment late last year and are now provisioning WorkSpaces for many members of the AWS team. My WorkSpace is now my portable desktop.

After having used WorkSpaces for well over a year, I have to report that the biggest difference between it and a local environment isn’t technical. Instead, it simply feels different (and better).  There’s a strong sense of permanence—my WorkSpace is my environment, regardless of where I happen to be. When I log in, my environment is always as I left it. I don’t have to wait for email to sync or patches to install, as I did when I would open up my laptop after it had been off for a week or two.

Now With Tagging
As enterprises continue to evaluate, adopt, and deploy WorkSpaces in large numbers, they have asked us for the ability to track usage for cost allocation purposes. In many cases they would like to see which WorkSpaces are being used by each department and/or project. Today we are launching support for tagging of WorkSpaces. The WorkSpaces administrator can now assign up to 10 tags (key/value pairs) to each WorkSpace using the AWS Management Console, AWS Command Line Interface (CLI), or the WorkSpaces API. Once tagged, the costs are visible in the AWS Cost Allocation Report where they can be sliced and diced as needed for reporting purposes.
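Administrators who script their environments can reach the same tagging feature through the SDKs; a minimal boto3 sketch, with the WorkSpace ID, region, and tag values as placeholders:

```python
# pip install boto3
import boto3

ws = boto3.client("workspaces", region_name="us-east-1")

# Attach up to 10 cost-allocation tags (key/value pairs) to a WorkSpace.
ws.create_tags(
    ResourceId="ws-0123456789abc",  # hypothetical WorkSpace ID
    Tags=[{"Key": "Department", "Value": "Marketing"},
          {"Key": "Project", "Value": "Q3-Launch"}],
)

# Read the tags back to confirm.
print(ws.describe_tags(ResourceId="ws-0123456789abc")["TagList"])
```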

Here’s how the WorkSpaces administrator can use the Console to manage the tags for a WorkSpace:

Tags are available today in all Regions where WorkSpaces is available: US East (Northern Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney).

Learning More
If you have found my journey compelling and would like to learn more, here are some resources to get you started:

Request a Demo
If you and your organization could benefit from Amazon WorkSpaces and would like to learn more, please get in touch with our team at [email protected].

Jeff;

Open, Linux-based platform simplifies wireless IoT


Sierra Wireless and Element14 unveiled an open-spec Arduino compatible “mangOH Green IoT Platform” based on Sierra’s 3G, GNSS, and WiFi modules running Linux. Sierra Wireless announced a beta release of its AirPrime WP module and open-source “mangOH” carrier board last June. Now, the company has formally released the products with the help of Element14, which […]

Raspberry Pi Zero gains a camera connector


30,000 hot Pis in stores now and factory ready to bake plenty more

The Raspberry Pi Zero 1.3 with camera connector

The Raspberry Pi Zero has added a camera connector.

Chief Pi guy Eben Upton has explained that the new connector came about as a result of colossal demand for the minuscule computer.

The factory baking Pis could not keep up with demand for the Zero and then had to pause production once the Raspberry Pi 3 debuted.

Upton says during that pause the Pi team discovered “the same fine-pitch FPC connector that we use on the Compute Module Development Kit just fits onto the right hand side of the board”. The outfit is therefore offering a cable that connects to the FPC slot on one side and the Raspberry Pi camera module on the other.

Raspberry Pi evangelist Matt Richardson, who devised the cable, has shown it off on Twitter.

Upton says 30,000 of the new model, version 1.3, are available now and that the Pi bakery is going to keep churning them out until users’ appetites are sated. ®

Sponsored:
Middleware for the modern age

OpenStack Developer Mailing List Digest May 7-13


SuccessBot Says

  • Pabelanger: bare-precise has been replaced by ubuntu-precise. Long live DIB
  • bknudson: The Keystone CLI is finally gone. Long live openstack CLI.
  • Jrichli: swift just merged a large effort that started over a year ago that will facilitate new capabilities – like encryption

Release Countdown for Week R-20, May 16-20

  • Focus
    • Teams should have published summaries from summit sessions to the openstack-dev mailing list.
    • Spec writing
    • Review priority features
  • General notes
    • Release announcement emails will be tagged with ‘new’ instead of ‘release’.
    • Release cycle model tags now say explicitly that the release team manages releases.
  • Release actions
    • Release liaisons should add their name and contact information to this list [1].
    • New liaisons should understand release instructions [2].
    • Project teams that want to change their release model should do so before the first milestone in R-18.
  • Important dates
    • Newton 1 milestone: R-18 June 2
    • Newton release schedule [3]

Collecting Our Wiki Use Cases

  • From the beginning, the community has used a wiki [4] as its default information publication platform.
  • There’s a struggle with:
    • Keeping things up-to-date.
    • Preventing vandalism.
    • Old processes.
    • Projects that no longer exist.
  • This outdated information, which search engines still surface, can make the wiki confusing to use, especially for newcomers.
  • Various efforts have happened to push information out of the wiki to proper documentation guides like:
    • Infrastructure guide [5]
    • Project team guide [6]
  • Peer reviewed reference websites:
  • There are a lot of use cases that a wiki is a good solution, and we’ll likely need a lightweight publication platform like the wiki to cover those use cases.
  • If you use the wiki as part of your OpenStack work, make sure it’s captured in this etherpad [9].
  • Full thread

Supporting Go (continued)

  • Continuing from previous Dev Digest [10].
  • Before Go 1.5 (which added the -buildmode=shared flag), Go didn’t support the concept of shared libraries. As a consequence, when a library is upgraded, the release team has to trigger a rebuild for each and every reverse dependency.
  • In Swift’s case, the interest in Go stems from how hard it is to write a network service in Python that shuffles data between the network and a block device while effectively using all the available hardware.
    • Fork()’ing child processes using cooperative concurrency via eventlet has worked well, but managing all async operations across many cores and many drives is really hard. There’s not an efficient interface in Python. We’re talking about efficient tools for the job at hand.
    • Eventlet, asyncio or anything else single-threaded will have the same problem: filesystem syscalls can take a long time, and the calling thread is blocked while they run. The pattern looks like this (see the Python sketch after this list):
      • Call select()/epoll() to wait for something to happen with many file descriptors.
      • For each ready file descriptor, read it if it is readable; otherwise the kernel returns EWOULDBLOCK and you move on to the next file descriptor.
  • Designate team explains their reasons for Go:
    • MiniDNS is a component that due to the way it works, it’s difficult to make major improvements.
    • The component takes data and sends a zone transfer every time a record set gets updated. That is a full (AXFR) zone transfer where every record in a zone gets sent to each DNS server that end users can hit.
      • There is a DNS standard for incremental change, but it’s complex to implement, and can often end up reverting to a full zone transfer.
    • Ns[1-6].example.com may be tens or hundreds of servers behind anycast IPs and load balancers.
    • Internal or external zones can be quite large. Think 200-300Mb.
    • A zone can have high traffic where a record is added/removed for each boot/destroy.
    • The Designate team is small, and after looking at options, judging the amount of developer hours available, a different language was decided.
  • Looking at Designate’s implementation, there are some low-hanging-fruit improvements that can be made:
    • Stop spawning a thread per request.
    • Stop instantiating Oslo config object per request.
    • Avoid 3 round trips to the database every request. Majority of the request here is not spent in Python. This data should be trivial to cache since Designate knows when to invalidate the cache data.
      • In a real world use case, there could be a cache miss due to the shuffle order of multiple miniDNS servers.
  • The Designate team saw a 10x improvement for a 2000-record AXFR (without caching). Caching would probably speed up the Go implementation as well.
  • Go historically has poor performance with multiple cores [11].
    • The main advantage of the language could be its CSP concurrency model.
    • Twisted does this very well, but we as a community have consistently supported eventlet. Eventlet has a threaded programming model, which is poorly suited to Swift’s case.
    • PyPy got a 40% performance improvement over CPython for a benchmark of Twisted’s DNS component six years ago [12].
  • Right now our stack already has dependencies on C, Python, Erlang, Java, Shell, etc.
  • End users emphatically do not care about the language API servers were written in. They want stability, performance and features.
  • The infrastructure-related issues with Go for reliable builds, packaging, etc. are being figured out [13].
  • Swift has tested running under PyPy with some conclusions:
    • Assuming production-ready stability of PyPy and OpenStack, everyone should use PyPy over CPython.
      • It’s just simply faster.
      • There are some garbage collector related issues to still work out in Swift’s usage.
      • There are a few patches that improve Swift’s socket handling so it runs better under PyPy.
    • PyPy only helps when you’ve got a CPU-constrained environment.
    • The GoLang targets in Swift are related to effective thread management, syscalls, and I/O.
    • See a talk from the Austin Conference about this work [14].
  • Full thread
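Not Swift’s actual code, just a bare-bones Python sketch of the select()/epoll() pattern described in the thread (Linux-only; the port and buffer size are arbitrary). It also shows why the pattern breaks down for Swift: the socket calls cooperate with the readiness loop, but any filesystem read made inside the loop would still block the whole thread.

```python
import select
import socket

server = socket.socket()
server.bind(("0.0.0.0", 8080))
server.listen()
server.setblocking(False)

ep = select.epoll()
ep.register(server.fileno(), select.EPOLLIN)
conns = {}

while True:
    for fd, _events in ep.poll():            # wait on many file descriptors at once
        if fd == server.fileno():
            conn, _addr = server.accept()    # new client connection
            conn.setblocking(False)
            ep.register(conn.fileno(), select.EPOLLIN)
            conns[conn.fileno()] = conn
        else:
            try:
                data = conns[fd].recv(4096)  # readable fd: read it
            except BlockingIOError:          # kernel said EWOULDBLOCK
                continue                     # move on to the next descriptor
            if not data:                     # peer closed the connection
                ep.unregister(fd)
                conns.pop(fd).close()
```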


Google reveals the Chromium OS it uses to run its own containers


Google’s decided the Chromium OS is its preferred operating system for running containers in its own cloud. And why wouldn’t it – the company says it uses it for its own services.

The Alphabet subsidiary offers a thing called “Container-VM” that it is at pains to point out is not a garden variety operating system you’d ever contemplate downloading and using in your own bit barn. Container-VM is instead dedicated to running Docker and Kubernetes inside Google’s cloud.

The Debian-based version of Container-VM has been around for a while, billed as a “container-optimised OS”.

Now Google has announced a new version of Container-VM “based on the open source Chromium OS project, allowing us greater control over the build management, security compliance, and customizations for GCP.”

The new Container-VM was built “primarily for running Google services on GCP”.

We therefore have here an OS built by Google to run Google itself, and now available to you if you want to run containers on Google, which is of course a leading user of containers and creates billions of them every week.

It’s not unusual for a cloud provider to offer tight integration between their preferred operating systems and their clouds. Amazon Linux is designed to work very well in Amazon Web Services. Oracle wants you to take it as Red all the way up and down its stack. We also know that Windows 2016 Server’s container-friendly Nano Server has powered Azure since 2016.

So Google’s not ahead of the pack here. But it does now have a rather stronger container story to tell. ®


Linksys will let you use open router code under new FCC rules


While the FCC’s imminent rules for wireless device interference are supposed to allow hackable WiFi routers, not every router maker sees it that way. TP-Link, for instance, is blocking open source firmware out of fear that you’ll run afoul of the regulations when they kick in on June 2nd. However, you won’t have to worry about that with Linksys’ fan-friendly networking gear. The Belkin-owned brand promises Ars Technica that its modifiable routers will allow open source firmware while obeying the FCC’s rules — you can tinker without fear of messing with nearby radar systems.

The hardware’s design is the key. Linksys says it’s been working with both the chip designers at Marvell and the developers of OpenWRT to make this work. The WRT routers separate the RF wireless data from the firmware, preventing you from stepping out of bounds. You theoretically can’t hack those limits, even though you have control over most everything else.

This won’t please those who think that any restriction on open source firmware is one too many. OpenWRT’s Imre Kaloz asks Ars why the FCC didn’t just punish infractions instead. However, Linksys’ solution shows that there’s at least some possibility of compromise between raw flexibility and safety.

Source: Ars Technica

How to Survive a Learning to Code Course, From a Recent Graduate


Ever considered learning how to code? Peter Hyde shares his top tips for surviving a coding course.

The sun sets on Xbox’s ‘Project Spark’ game creation tool


Starting today, Project Spark, Microsoft’s quirky game creation game, is no longer for sale. And come August 12th, the servers will be shut down, Thomas Gratz of developer Team Dakota writes. As a consolation, anyone who bought the retail version "starter kit" will get a credit to their Microsoft account. If you redeemed the code inside after October 5th of last year (when the game went free-to-play) and prior to today, you’ll get a credit to use in the Xbox or Windows stores. Gratz says that the credits will be automatically applied for eligible customers.

There is some silver lining though. Gratz notes that no layoffs occurred as team members transitioned to other places within Microsoft after active development of the tool stopped last fall. Maintaining its behind-the-scenes aspects wasn’t possible with a small group, though, hence the shutdown. Farewell, Project Spark, and thanks for giving Xbox One owners a chance at playing a version of P.T. on their console.

Via: Kotaku

Source: Project Spark

Tiny $1 STEM-oriented hacker board hits Indiegogo


Like the tiny BBC Micro:bit board, the “One Dollar Board” is aimed at introducing kids to computer programming and the Internet of Things at a young age. A team of Brazilian developers has just launched a “One Dollar Board” Indiegogo campaign aimed at funding a tiny, open source microcontroller board so simple and inexpensive that […]

Incremental update to Azure Stack PaaS services: Web Apps


Today we released updates to App Service (Web Apps) for Azure Stack – you can find the bits here. This update streamlines setup/deployment while providing a more stable experience.

Please note that there is no in-place upgrade from the prior release. You will have to re-deploy to take advantage of these improvements.

To support this release we have updated our technical documentation to guide you through the improved experience.

Visit the Azure Stack forum for help or if you’d like to provide feedback. One specific topic we’d love your feedback on is which Azure Services you’d like to see come down to Azure Stack.

– The Microsoft Azure Stack Team


Yun Shield adds OpenWrt and WiFi to Arduinos


Arduino LLC released a shield version of its Arduino Yún SBC, letting you add WiFi and Linux to Arduino boards, along with Ethernet and USB ports. The Arduino and Genuino Yún Shield peels off the OpenWrt-driven WiFi subsystem of the Arduino Yún SBC as a shield add-on, letting you add Internet access to other […]

Deploy your hybrid scenarios and solutions in Microsoft’s cloud


Are you trying to get a deeper understanding of how hybrid cloud scenarios can best serve your business? Just what are the elements of hybrid cloud for Microsoft’s cloud platforms and services? What layers are common across them?


The new Microsoft Hybrid Cloud for Enterprise Architects poster helps you:

  • Understand the breadth of support for hybrid scenarios in Microsoft’s cloud, including Azure PaaS, Azure IaaS, and SaaS (Office 365)
  • See the architecture of hybrid scenarios in Microsoft’s cloud and the common layers of on-premises infrastructure, networking, and identity
  • See the architecture of Azure PaaS-based hybrid apps and step through an example
  • See the architecture for Azure IaaS hybrid scenarios and line of business applications hosted on a cross-premises virtual network
  • See the architecture of SaaS-based hybrid scenarios and examples of hybrid configurations for Office 365

You can download this multi-page poster in PDF or Visio format. You can also print the six pages of this poster on tabloid format paper (also known as 11×17, ledger, or A3).

Microsoft Hybrid Cloud for Enterprise Architects

Also see the other posters in the Microsoft Cloud for Enterprise Architects Series.

Microsoft Cloud Services and Platform Options: http://bit.ly/1WthBzh

Microsoft Cloud Identity for Enterprise Architects: http://bit.ly/1WthBPu

Microsoft Cloud Security for Enterprise Architects: http://bit.ly/1WthDXL

Microsoft Cloud Networking for Enterprise Architects: http://bit.ly/1WthDXN

Microsoft Cloud Storage for Enterprise Architects: http://bit.ly/1WthDXO

To see all the resources for Microsoft cloud platforms and services, see Microsoft’s Enterprise Cloud Roadmap.

Note to Twitter users: If you tweet about this poster or any others in the series, please use the hashtag #mscloudarch. Thanks!

Windows 10 won’t let you share WiFi passwords any more


Remember Microsoft’s WiFi Sense? One of its cornerstones is the ability to share password-protected WiFi networks with contacts, saving them the hassle of logging in when they visit. Unfortunately, though, there weren’t many people enamored with the idea. Microsoft has pulled WiFi Sense’s contact sharing from its latest Windows 10 Insider preview build after noting that it wasn’t worth the effort given "low usage and low demand." It’ll remain intact on slower Insider builds and regular Windows 10 releases for now, but it should disappear for everyone when the Anniversary Update hits in the summer.

This doesn’t mean that all of WiFi Sense is going away. It’ll still automatically connect you to public hotspots based on crowdsourced data, so you’re safe if you primarily use the feature to get online at airports and coffee shops. Even so, it’s hard to avoid that bittersweet feeling: while it’s good to see Microsoft pruning features people don’t use, the decision makes Windows 10 a little more inconvenient.

Via: The Verge

Source: Windows Experience Blog

Azure for Developers: Download Free eBook from O’Reilly


Some of you are sure to find this free ebook Azure for Developers to be of immense interest, as it talks about all the opportunities that Microsoft Azure has to offer to you as a developer.

Azure for Developers free eBook

Microsoft’s Azure platform offers a lot of functions and features: cloud hosting, web hosting, data analytics, data storage, machine learning, and more, all of which have been integrated with Visual Studio, the tool that .NET developers already know.



With such a large number of offerings, it can be daunting to know where to start. In this O’Reilly report, .NET developer John Adams breaks down the options in plain language so that you can quickly get up to speed on Microsoft Azure.

So if you want to know what Azure offers for your next project, or if you need to convince your management to go with Azure, you definitely want to download this free eBook as it has the information you require in a nutshell.

Click here to visit its download page. You will be required to submit your email ID and other information. When you do this, you will receive its download link via email. You may also receive regular weekly newsletters from O’Reilly Media.

If you are looking for more free eBooks, downloads or other freebies, go visit this link and see if anything interests you.




List of CMD or Command Prompt keyboard shortcuts in Windows 10


If you use the Command Line frequently, then here is a list of CMD or Command Prompt keyboard shortcuts in Windows 10, that will help you work quicker.

Command Prompt keyboard shortcuts

Keyboard Shortcut: Action

Alt + selection key: Begin selection in block mode
Arrow keys: Move the cursor in the direction specified
Page up: Move the cursor by one page up
Page down: Move the cursor by one page down
Ctrl + Home (Mark mode): Move the cursor to the beginning of the buffer
Ctrl + End (Mark mode): Move the cursor to the end of the buffer
Ctrl + Up arrow: Move up one line in the output history
Ctrl + Down arrow: Move down one line in the output history
Ctrl + Home (History navigation): If the command line is empty, move the viewport to the top of the buffer. Otherwise, delete all the characters to the left of the cursor in the command line.
Ctrl + End (History navigation): If the command line is empty, move the viewport to the command line. Otherwise, delete all the characters to the right of the cursor in the command line.

If you are looking for more tips to work better with CMD, these Command Prompt tips & tricks will help you get started.




Azure continues to be the best place for Software as a Service


A complete platform for SaaS developers

More and more software developers are building SaaS applications—cloud applications that serve a growing base of end customers. To efficiently and cost-effectively deliver this experience, developers need a customizable application platform, data isolation without the overhead, global distribution of data and content to end users, integrated identity and access, and the option to easily embed business intelligence. Azure offers a unique set of fully managed Platform as a Service (PaaS) offerings that deliver these foundational elements, including Azure App Service, Azure Service Fabric, Azure SQL Database, Azure CDN, Azure Active Directory, and Power BI Embedded. It is the only application development platform that delivers a comprehensive and integrated suite of fully managed services, and it is recognized as a Leader in Gartner’s Magic Quadrant for Enterprise Application Platform as a Service, Worldwide for the third consecutive year.

Today, we are excited to announce two more investments that further enrich this experience for SaaS developers: general availability of SQL Database elastic pools and a partnership with Akamai for Azure CDN. These investments add to the momentum shared at //Build 2016 and together these services give our SaaS customers even more reasons to transform their solutions by leveraging Azure as their development platform.

Azure SQL Database elastic pools

Prior to SQL Database elastic pools, developers were forced to make tradeoffs between database isolation and DevOps efficiency. Now, with the general availability of SQL Database elastic pools, in addition to the intelligent capabilities built into the service, developers can manage a few to thousands of databases as one while still maintaining data isolation. Elastic pools are an ideal solution for multitenant environments: each tenant is assigned a database, and each database in the elastic pool gets computing resources only as needed – eliminating the complexity of developing custom application code or over-provisioning and managing individual databases to isolate data. Elastic pools include auto-scaling database resources, intelligent management of the database environment with insights and recommendations, and a broad performance and price spectrum to meet various needs.
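For a sense of what this looks like in code, the Azure management SDK for Python can create a pool and place tenant databases in it. The SDK surface has changed across releases, so treat this as an illustrative sketch of the classic DTU-based model rather than canonical usage; every name, region, and size below is a placeholder:

```python
# pip install azure-mgmt-sql
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.sql import SqlManagementClient

creds = ServicePrincipalCredentials(client_id="...", secret="...", tenant="...")
sql = SqlManagementClient(creds, subscription_id="...")

# One pool whose eDTUs are shared by every database placed in it.
sql.elastic_pools.create_or_update(
    "my-resource-group", "my-sql-server", "tenant-pool",
    {"location": "westus", "edition": "Standard",
     "dtu": 100, "database_dtu_min": 0, "database_dtu_max": 20},
).result()

# Assign a tenant database to the pool instead of sizing it individually.
sql.databases.create_or_update(
    "my-resource-group", "my-sql-server", "tenant-042",
    {"location": "westus", "elastic_pool_name": "tenant-pool"},
).result()
```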

[Figure: Elastic pools]

Since its preview last year, many SaaS developers have adopted elastic pools in their applications and are benefiting from the transition. One customer already benefiting from SQL Database elastic pools is GEP, the technology provider behind SMART by GEP, a cloud-based procurement and supply-chain solution.

“The PaaS technologies in Azure help us focus on our core product development without worrying about infrastructure. We have redesigned all our applications to run on PaaS services in Azure, and we are now able to quickly bring our code to market and deliver to customers across the globe,” said Dhananjay Nagalkar, vice president of technology at GEP. “We’ve migrated more than 800 of our databases to elastic pools. Each is grouped into a mixture of standard and premium elastic pools, which allows us to offer tiered performance and pricing options to our customers. Since the migration, we’ve closed two datacenters, in San Jose, CA and Newark, NJ, and we’re proud to say GEP is now a datacenter-free company. SQL Database brings us huge cost savings in the long run; in the 2016 financial year alone, the adoption of elastic pools will save us a quarter of a million dollars.”

Azure Content Delivery Network from Akamai

With the general availability of Azure CDN from Akamai, Azure CDN is now a multi-CDN offering with services from both Akamai and Verizon, letting customers choose the right CDN for their needs while streamlining support and service through Azure. CDNs can improve the speed, performance, and reliability of solutions, a foundational requirement for SaaS applications. The stakes for today's businesses and content delivery are high: all users, whether they're online for business, consumer, or entertainment purposes, expect uniformly fast performance and rich media content on any device. In fact, 79 percent of users who have trouble with a website's performance say they won't return to the site to buy again (KISSmetrics). With this release, Microsoft offers customers flexibility and global coverage through Azure CDN from Akamai, a leader in CDN services for media, software, and cloud security solutions, enabling customers to manage large media workloads efficiently and securely.

Since the preview began in the fall, we've been working with a number of leading companies in the broadcasting, production, and media space, including LG, TVN, MEKmedia, and TVB. MEKmedia, a leading technology partner for smart TV apps, depends on quick, stable, and secure delivery. Matthias Moritz, CEO of MEKmedia, remarked:

“Azure Media Services and Azure CDN from Akamai offer a scalable, secure, and cost-effective solution for our media workflows, enabling us to deliver great experiences to our customers.”

Transform SaaS apps with Azure Services

In addition to the data isolation provided by SQL Database and the data and content distribution provided by CDN, further benefits can be unlocked by pairing SaaS applications with other Azure PaaS services.

Azure App Service and Service Fabric

Combined with SQL Database elastic pools, App Service delivers a fully managed, end-to-end app experience, something SaaS developers need in order to protect their sensitive margins. App Service is a one-of-a-kind solution that brings together the tools customers need to build enterprise-ready apps around a common development, management, and billing model. Customers can choose from a rich selection of app templates, API services, and a unified set of enterprise capabilities, including web and mobile backend services, turnkey connectivity to SaaS and enterprise systems, and workflow-based creation of business processes. Azure App Service frees developers to focus on delivering business value rather than on repetitive tasks such as stitching disparate data sources together or dealing with infrastructure management and operational overhead. This unified approach lets customers take full advantage of the service while addressing their concerns about security, reliability, and scalability.

For customers looking to build new, highly scalable multitenant applications, Service Fabric is a mature, feature-rich microservices application platform with built-in support for lifecycle management, stateless and stateful services, performance at scale, 24×7 availability, and cost efficiency.

Service Fabric has been in production use at Microsoft for more than five years, powering a range of Microsoft's PaaS and SaaS offerings including SQL Database, DocumentDB, Intune, Cortana, and Skype for Business. In the largest of these, Service Fabric manages hundreds of thousands of stateful and stateless microservices across hundreds of servers. Now we've released that same technology as a service on Azure.
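For a sense of the programming model, here is a minimal sketch of a stateless Service Fabric service in C#, following the shape of the default project template; the service name "WebApiType" and the idle work loop are placeholders, not code from the article.

```csharp
// Minimal stateless Service Fabric service sketch; names are hypothetical.
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class WebApiService : StatelessService
{
    public WebApiService(StatelessServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // Long-running work loop; Service Fabric invokes this once the
        // service instance has been placed on a cluster node.
        while (!cancellationToken.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }

    private static void Main()
    {
        // Registers the service type with the local Service Fabric runtime,
        // which then creates instances per the application manifest.
        ServiceRuntime.RegisterServiceAsync(
            "WebApiType",
            context => new WebApiService(context)).GetAwaiter().GetResult();
        Thread.Sleep(Timeout.Infinite);
    }
}
```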

Azure Active Directory

For SaaS applications that require seamless federated identity and access, Azure Active Directory provides identity and access management by combining directory services, advanced identity governance, a rich standards-based platform for developers, and application access management. With Azure Active Directory, developers can enable single sign-on to any SaaS app developed on Azure. Azure Active Directory hosts almost 9.5 million directories from organizations all over the world and 600 million user accounts, which together generate 1.3 billion authentications every day.
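One common way to wire an ASP.NET app to Azure AD sign-in is the OWIN OpenID Connect middleware; the sketch below assumes that pattern, and the tenant, client ID, and redirect URI are placeholders you would replace with your app registration's values.

```csharp
// Minimal OWIN startup sketch for Azure AD single sign-on; all IDs are
// placeholders for values from your Azure AD app registration.
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Sign the user into a local cookie after Azure AD authenticates them.
        app.SetDefaultSignInAsAuthenticationType(
            CookieAuthenticationDefaults.AuthenticationType);
        app.UseCookieAuthentication(new CookieAuthenticationOptions());

        app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
        {
            ClientId = "your-app-client-id",                           // placeholder
            Authority = "https://login.microsoftonline.com/contoso.onmicrosoft.com",
            PostLogoutRedirectUri = "https://localhost:44300/"         // placeholder
        });
    }
}
```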

Power BI Embedded

Finally, for developers looking to transform the experience of their SaaS application, Microsoft recently introduced Power BI Embedded. Power BI Embedded allows application developers to embed stunning, fully interactive reports into customer-facing apps without the time and expense of building controls from the ground up. The service lets the end user of an application get contextual analytics seamlessly within the app. Application developers can choose from a broad range of modern data visualizations out of the box, or easily build and use custom visualizations to meet an application's unique functional and branding needs. Power BI Embedded offers consistent data-visualization experiences on any device, desktop or mobile.
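The key server-side step is generating an app token that lets the embedded report render for an end user without a Power BI sign-in. The sketch below assumes the preview-era Microsoft.PowerBI.Core package's PowerBIToken API; the workspace, report IDs, and access key are placeholders.

```csharp
// Sketch of generating an app token for an embedded report, assuming the
// Microsoft.PowerBI.Core package's PowerBIToken API; all inputs are
// placeholders.
using Microsoft.PowerBI.Security;

public static class EmbedTokenFactory
{
    public static string CreateToken(
        string workspaceCollection, string workspaceId, string reportId,
        string accessKey)
    {
        // The generated JWT is handed to the client-side embed script so
        // the report renders inside the app for the end user.
        var token = PowerBIToken.CreateReportEmbedToken(
            workspaceCollection, workspaceId, reportId);
        return token.Generate(accessKey);
    }
}
```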

Highspot, a SaaS vendor offering a sales enablement platform, is one early adopter of Power BI Embedded.

“Using Microsoft Power BI Embedded, we were able to significantly enhance our existing analytics capabilities. We easily added interactive Power BI reports into the existing Highspot sales enablement platform. Power BI Embedded reports gave us rich out-of-the-box visuals, sitting side by side with Highspot's built-in reports and providing sales and marketing teams with a unique 360-degree perspective on the effectiveness of their sales enablement initiatives.” – Robert Wahbe, CEO, Highspot

In summary

We’re excited about the general availability of SQL Database elastic pools and CDN from Akamai as they add even more value and choice to Microsoft Azure’s portfolio of services that help software developers transform their application development. By leveraging any of the Azure PaaS offerings, SaaS developers are free to focus on unlocking business value without the overhead associated with traditional approaches. Together these services give SaaS customers even more reasons to transform their business with Azure.

Learn more about these unique SaaS-optimized services in the original post.

How to get your ASP.NET app up on Google Cloud the easy way

The content below is taken from the original (How to get your ASP.NET app up on Google Cloud the easy way), to continue reading please visit the site. Remember to respect the Author & Copyright.

Don’t let anyone tell you that Google Cloud Platform doesn’t support a wide range of platforms and programming languages. We kicked things off with Python and Java on Google App Engine, then PHP and Go. Now, we support the .NET Framework on Google Compute Engine.

Google recently published a .NET client library for services like Google Cloud Datastore, and Compute Engine now runs Windows virtual machines. With those pieces in place, it’s now possible to run an ASP.NET application directly on Cloud Platform.
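As a taste of the client library, here is a minimal C# sketch of writing an entity to Cloud Datastore. It uses the Google.Cloud.Datastore.V1 package naming (the library has gone through preview-era renames, so treat the package name as an assumption), and "my-project" is a placeholder project ID.

```csharp
// Minimal sketch of writing an entity with the Datastore .NET client;
// package naming and the project ID are assumptions/placeholders.
using Google.Cloud.Datastore.V1;

public static class DatastoreDemo
{
    public static void Main()
    {
        DatastoreDb db = DatastoreDb.Create("my-project");
        KeyFactory keyFactory = db.CreateKeyFactory("Task");

        var task = new Entity
        {
            Key = keyFactory.CreateIncompleteKey(),
            ["description"] = "Deploy the ASP.NET app"
        };

        // Insert assigns the incomplete key a server-generated numeric ID.
        db.Insert(task);
    }
}
```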

To get you up and running fast, we published two new tutorials that show you how to build and deploy ASP.NET applications to Cloud Platform.

The Hello World tutorial shows you how to deploy an ASP.NET application to Compute Engine.

The Bookshelf tutorial shows you how to build an ASP.NET MVC application that uses a variety of Cloud Platform services to make your application reliable, scalable, and easy to maintain. First, it shows you how to store structured data with .NET. Do you love SQL? Use Entity Framework to store structured data in Cloud SQL. Tired of connection strings and running ALTER TABLE statements? Use Cloud Datastore instead. The tutorial also shows you how to store binary data and run background worker tasks.
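Since Cloud SQL speaks the MySQL protocol, the Entity Framework path can use the standard EF6 MySQL provider. The sketch below is a hypothetical model in that style, not the tutorial's actual code; the connection string, BookshelfContext, and Book types are placeholders, and it assumes the MySql.Data.Entity provider package.

```csharp
// Sketch of an EF6 model pointed at a Cloud SQL (MySQL) instance, assuming
// the MySql.Data.Entity provider; the connection string is a placeholder.
using System.Data.Entity;
using MySql.Data.Entity;

[DbConfigurationType(typeof(MySqlEFConfiguration))]
public class BookshelfContext : DbContext
{
    // In real code, read the connection string from configuration.
    public BookshelfContext()
        : base("Server=1.2.3.4;Database=bookshelf;Uid=appuser;Pwd=...;") { }

    public DbSet<Book> Books { get; set; }
}

public class Book
{
    public long Id { get; set; }
    public string Title { get; set; }
    public string Author { get; set; }
}
```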

Give the tutorials a try, and please share your feedback! And don’t think we’re done yet; this is just the beginning. Among many efforts, we’re hand-coding open source libraries so that calling Google APIs feels familiar to .NET programmers. Stay tuned for more on running ASP.NET applications on Google Cloud Platform.

Free tool aims to make it easier to find vulns in open source code

The content below is taken from the original (Free tool aims to make it easier to find vulns in open source code), to continue reading please visit the site. Remember to respect the Author & Copyright.

DevOps outfit SourceClear has released a free tool for finding vulnerabilities in open-source code.

SourceClear Open is touted as a means for developers to identify known and emerging security threats beyond those in public and government databases.

“Developers are being held more accountable for security and demanding tools that help them with that responsibility,” according to SourceClear. “But traditional security products are insufficient, and the recent closure of the Open Source Vulnerability Database (OSVDB) and the well-documented struggles of the CVE and its naming process have underscored the limitations of public and government-backed software vulnerability databases.”

SourceClear Open is based on SourceClear’s commercial products and delivered as a cloud-based service. The technology is said to track thousands of threat sources and analyse millions of open-source library releases.

The new tool is designed to allow developers to identify what open-source libraries they are using, what vulnerabilities exist, which vulnerabilities actually matter, and what needs to be done to fix them. SourceClear Open integrates with GitHub and Jenkins and supports languages such as Java, Ruby, Python and JavaScript that development teams often rely on.

SourceClear’s chief exec (and OWASP founder) Mark Curphey explains the technology and the thinking behind it in a blog post entitled “Free Security for Open-Source Code – SourceClear Open is Now Live”. ®
