A Crash Course In Thermodynamics For Electrical Engineers

It’s a simple fact that, in this universe at least, energy is always conserved. For the typical electronic system, this means that the energy put into the system must eventually leave the system. Typically, much of this energy will leave a system as heat, and managing this properly is key to building devices that don’t melt under load. It can be a daunting subject for the uninitiated, but never fear — Adam Zeloof delivered a talk at Supercon 2019 that’s a perfect crash course for beginners in thermodynamics.

Adam’s talk begins by driving home that central rule, that energy in equals energy out. It’s good to keep in the back of one’s mind at all times when designing circuits to avoid nasty, burning surprises. But it’s only the first lesson in a series of many, which serve to give the budding engineer an intuitive understanding of the principles of heat transfer. The aim of the talk is to avoid getting deep into the heavy underlying math, and instead provide simple tools for doing quick, useful approximations.

Conduction and Convection

Conduction is the area first explored, concerning the transfer of heat between solid materials that are touching. Adam explains how this process is dependent on surface area and how this can be affected by surface condition, and the reasons why we use thermal paste when fitting heatsinks to chips. The concept is likened to that of electrical resistance, and comparisons are drawn between heat transfer equations and Ohm’s law. Thermal resistances can be calculated in much the same way, and obey the same parallel and series rules as their electrical counterparts.
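
That analogy is easy to play with in a few lines of code. The sketch below (with invented numbers, not figures from the talk) applies the thermal version of Ohm's law, ΔT = P·Rθ, to a chip-to-air path, combining thermal resistances exactly like electrical ones:

    # Thermal resistances behave like electrical resistors: series sums,
    # parallel combines reciprocally, and delta_T = P * R_theta plays V = I * R.
    def series(*rs):
        return sum(rs)

    def parallel(*rs):
        return 1.0 / sum(1.0 / r for r in rs)

    power_w = 5.0                 # heat flowing through the path (W), invented
    r_junction_to_case = 1.5      # °C/W (invented)
    r_paste = 0.5                 # °C/W, the thermal paste interface (invented)
    r_heatsink_to_air = 4.0       # °C/W (invented)

    r_total = series(r_junction_to_case, r_paste, r_heatsink_to_air)
    junction_c = 25.0 + power_w * r_total
    print(f"Junction temperature: {junction_c:.1f} °C")  # 25 + 5 * 6.0 = 55.0 °C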

With conduction covered, the talk then moves on to discussion of convection — where heat is passed from a solid material to the surrounding fluid, be it a liquid or a gas. Things get a little wilder here, with the heat transfer coefficient h playing a major role. This coefficient depends on a variety of factors, like the fluid in question and how much it’s moving. For example, free convection in still air may only have a coefficient of 5 W/m²K, whereas forced air cooling with a fan may have a coefficient of 50, drawing away 10 times as much heat. Adam discusses the other factors involved in convection, and how surface area has a major role to play. There’s a great explanation of why heatsinks use fins and extended surfaces to increase the heat transfer rate to the fluid.
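
Newton's law of cooling makes that 10x claim concrete: the heat carried away is Q = h·A·ΔT, so for a fixed surface area and temperature difference, the heat removed scales directly with h. A quick sketch (the surface area and temperatures are invented):

    # Q = h * A * delta_T; the h values echo the examples above.
    def convective_heat_w(h, area_m2, delta_t_c):
        return h * area_m2 * delta_t_c

    area = 0.01       # a 10 cm x 10 cm plate, in m^2 (invented)
    delta_t = 40.0    # surface runs 40 °C above ambient (invented)

    for label, h in [("free convection", 5), ("forced air", 50)]:
        q = convective_heat_w(h, area, delta_t)
        print(f"{label}: h = {h} W/m^2K -> {q:.1f} W")
    # free convection: 2.0 W; forced air: 20.0 W, ten times the heat removed.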

Modelling Thermodynamics

[Adam] demonstrated a heat transfer simulation running on the Hackaday Superconference badge, to much applause.

With the basics out of the way, it’s then time to discuss an example. Given the talk is aimed at an electrical engineering audience, Adam chose to cover the example of a single chip in the middle of a printed circuit board. In three dimensions, the math quickly becomes complex, with many differential equations required to cover conduction and all the various surfaces for convection. Instead, the simulation is simplified down to a quasi-1-dimensional system. Some imperfect assumptions are made to simplify the calculations. While they aren’t strictly accurate and don’t hold in many circumstances, chosen properly they enable the simple solution of otherwise intractable problems — the magic of engineering! After showing the basic methods involved, Adam shows how such an analysis can be used to guide selection of different cooling methods or heatsink choices, or make other design decisions.
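
A minimal sketch of what such a lumped quasi-1D model can look like (every number below is invented for illustration, not taken from the talk): treat the path from chip to ambient as a short chain of thermal resistances and solve it directly.

    # Quasi-1D lumped model: chip -> copper plane -> convection to ambient.
    power_w = 0.5                      # chip dissipation (W), invented
    ambient_c = 25.0

    # Conduction along the copper: R = L / (k * A_cross)
    k_copper = 400.0                   # W/(m*K)
    a_cross_m2 = 70e-6 * 0.04          # 2 oz copper (~70 um) x 40 mm wide
    r_conduction = 0.02 / (k_copper * a_cross_m2)   # ~17.9 °C/W over 20 mm

    # Convection off the board: R = 1 / (h * A_surface)
    h = 10.0                           # W/(m^2*K), free convection in still air
    a_surface_m2 = 2 * 0.08 * 0.08     # both faces of an 80 mm x 80 mm board
    r_convection = 1.0 / (h * a_surface_m2)         # ~7.8 °C/W

    chip_temp_c = ambient_c + power_w * (r_conduction + r_convection)
    print(f"Estimated chip temperature: {chip_temp_c:.1f} °C")   # ~37.8 °C

Swapping in a larger h for forced air, or a lower heatsink resistance, immediately shows how each design change moves the chip temperature, which is exactly the kind of quick approximation the talk advocates.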

The talk is a great primer for anyone wanting to take a proper engineering approach to solving thermal problems in their designs. And, as a final party piece, Adam closed out the talk with a demonstration of a heat transfer simulation running on the conference badge itself. Thermodynamics can be a dry topic to learn, so it’s great to see a straightforward, intuitive, and engineering-focused approach presented for a general technical audience!

Multi-SSO and Cloudflare Access: Adding LinkedIn and GitHub Teams

Cloudflare Access secures internal applications without the hassle, slowness or user headache of a corporate VPN. Access brings the experience we all cherish, of being able to access web sites anywhere, any time from any device, to the sometimes dreary world of corporate applications. Teams can integrate the single sign-on (SSO) option, like Okta or AzureAD, that they’ve chosen to use and in doing so make on-premise or self-managed cloud applications feel like SaaS apps.

However, teams consist of more than just the internal employees that share an identity provider. Organizations work with partners, freelancers, and contractors. Extending access to external users becomes a constant chore for IT and security departments and is a source of security problems.

Cloudflare Access removes that friction by simultaneously integrating with multiple identity providers, including popular services like Gmail or GitHub that do not require corporate subscriptions. External users log in with these accounts and still benefit from the same ease-of-use available to internal employees. Meanwhile, administrators avoid the burden of legacy deployments that require onboarding and offboarding new accounts for each project.

We are excited to announce two new integrations that make it even easier for organizations to work securely with third parties. Starting today, customers can now add LinkedIn and GitHub Teams as login methods alongside their corporate SSO.

The challenge of sharing identity

If your team has an application that you need to share with partners or contractors, both parties need to agree on a source of identity.

Some teams opt to solve that challenge by onboarding external users to their own identity provider. When contractors join a project, the IT department receives help desk tickets to create new user accounts in the organization directory. Contractors receive instructions on how to sign up, they spend time creating passwords and learning the new tool, and then use those credentials to log in.

This option gives an organization control of identity, but adds overhead in terms of time and cost. The project owner also needs to pay for new SSO seat licenses, even if those seats are temporary. The IT department must spend time onboarding, helping, and then offboarding those user accounts. And the users themselves need to learn a new system and manage yet another password – this one with permission to your internal resources.

Alternatively, other groups decide to “federate” identity. In this flow, an organization will connect their own directory service to their partner’s equivalent service. External users log in with their own credentials, but administrators do the work to merge the two services so that they trust one another.

While this method avoids introducing new passwords, both organizations need to agree to dedicate time to integrate their identity providers – assuming that those providers can integrate. Businesses then need to configure this setup with each contractor or partner group. This model also requires that external users be part of a larger organization, making it unavailable to single users or freelancers.

Both options must also address scoping. If a contractor joins a project, they probably only need access to a handful of applications – not the entire portfolio of internal tools. Administrators need to invest additional time building rules that limit the scope of user permission.

Additionally, teams need to help guide external users to find the applications they need to do their work. This typically ends up becoming a one-off email that the IT staff has to send to each new user.

Multi-SSO with Cloudflare Access

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with their team’s identity provider. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

With Multi-SSO, this model works the same way but extends that login flow to other sign-in options. When users visit a protected application, they are presented with the identity provider options your team configures. They select their SSO, authenticate, and are redirected to the resource if they are allowed to reach it.

Cloudflare Access can also help standardize identity across multiple providers. When users log in, from any provider, Cloudflare Access generates a signed JSON Web Token that contains that user’s identity. That token can then be used to authorize the user to the application itself. Cloudflare has open-sourced an example of using this token for authorization with our Atlassian SSO plugin.

Whether the identity providers use SAML, OIDC, or another protocol for sending identity to Cloudflare, Cloudflare Access generates standardized and consistent JWTs for each user from any provider. The token can then be used as a common source of identity for applications without additional layers of SSO configuration.
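
As a sketch of what consuming that common identity looks like from an application's perspective (assuming the PyJWT library; the team domain and application AUD tag below are placeholders for your own Access configuration):

    # Validate the JWT that Cloudflare Access attaches to each request.
    import jwt  # PyJWT, installed with: pip install pyjwt[crypto]

    TEAM_DOMAIN = "https://widgetcorp.cloudflareaccess.com"  # placeholder
    POLICY_AUD = "your-application-aud-tag"                  # placeholder

    # Access publishes its signing keys at the team's certs endpoint.
    jwks_client = jwt.PyJWKClient(f"{TEAM_DOMAIN}/cdn-cgi/access/certs")

    def verify_access_token(token: str) -> dict:
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        return jwt.decode(token, signing_key.key,
                          algorithms=["RS256"], audience=POLICY_AUD)

    # The token arrives in the Cf-Access-Jwt-Assertion request header:
    # claims = verify_access_token(request.headers["Cf-Access-Jwt-Assertion"])
    # claims["email"] then has the same shape regardless of the IdP used.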

Onboard contractors seamlessly

With the Multi-SSO feature in Cloudflare Access, teams can onboard contractors in less than a minute without paying for additional identity provider licenses.

Organizations can integrate LinkedIn, GitHub, or Google accounts like Gmail alongside their own corporate identity provider. As new partners join a project, administrators can add single users or groups of users to their Access policy. Contractors and partners can then log in with their own accounts while internal employees continue to use the SSO provider already in place.

With the Access App Launch, administrators can also skip sending custom emails or lists of links to new contractors and replace them with a single URL. When external users log in with LinkedIn, GitHub, or any other provider, the Access App Launch will display only the applications they can reach. In a single view, users can find and launch the tools that they need.

The Access App Launch automatically generates this view for each user without any additional configuration from administrators. The list of apps also updates as permissions are added or removed.

Integrate mergers and acquisitions without friction

Integrating a new business after a merger or acquisition is a painful slog. McKinsey estimates that reorganizations like these take 41% longer than planned. IT systems are a frequent, and expensive, reason. According to data from Ernst & Young, IT work represents the third largest one-time integration cost after a merger or acquisition, beaten only by real estate and payroll severance.

Cloudflare Access can help cut down on that time. Customers can integrate their existing SSO provider and the provider from the new entity simultaneously, even if both organizations use the same identity provider. Users from both groups can continue to log in with separate identity services without disruption.

IT departments can then start merging applications or deprecating redundant systems from day one without worrying about breaking the login flow for new users.

Zero downtime SSO migrations

If your organization does not need to share applications with external partners, you can still use Multi-SSO to reduce the friction of migrating between identity providers.

Organizations can integrate both the current and the new provider with Cloudflare Access. As groups within the organization move to the new system, they can select that SSO option in the Cloudflare Access prompt when they connect. Users still on the legacy system can continue to use the provider being replaced until the entire team has completed the cutover.

Regardless of which option users select, Cloudflare Access will continue to capture comprehensive and standard audit logs so that administrators do not lose any visibility into authentication events during the migration.

Getting started

Cloudflare Access’ Multi-SSO feature is available today for more than a dozen different identity providers, including the options for LinkedIn and GitHub Teams announced today. You can follow the instructions here to start securing applications with Cloudflare Access. The first five users are free on all plans, and there is no additional cost to add multiple identity providers.

Now generally available: Managed Service for Microsoft Active Directory (AD)

A few months ago, we launched Managed Service for Microsoft Active Directory (AD) in public beta. Since then, our customers have created more than a thousand domains to evaluate the service in their pre-production environments. We’ve used the feedback from these customers to further improve the service and are excited to announce that Managed Service for Microsoft AD is now generally available for everyone and ready for your production workloads.

Simplifying Active Directory management

As more AD-dependent apps and servers move to the cloud, you might face heightened challenges to meet latency and security goals, on top of the typical maintenance challenges of configuring and securing AD Domain Controllers. Managed Service for Microsoft AD can help you manage authentication and authorization for your AD-dependent workloads, automate AD server maintenance and security configuration, and connect your on-premises AD domain to the cloud. The service delivers many benefits, including:

  • Compatibility with AD-dependent apps. The service runs real Microsoft AD Domain Controllers, so you don’t have to worry about application compatibility. You can use standard Active Directory features like Group Policy, and familiar administration tools such as Remote Server Administration Tools (RSAT), to manage the domain. 

  • Virtually maintenance-free. The service is highly available, automatically patched, configured with secure defaults, and protected by appropriate network firewall rules.

  • Seamless multi-region deployment. You can deploy the service in a specific region to enable your apps and VMs in the same or other regions to access the domain over a low-latency Virtual Private Cloud (VPC). As your infrastructure needs grow, you can simply expand the service to additional regions while continuing to use the same managed AD domain.

  • Hybrid identity support. You can connect your on-premises AD domain to Google Cloud or deploy a standalone domain for your cloud-based workloads.
The admin experience.

You can use the service to simplify and automate familiar AD tasks like automatically “domain joining” new Windows VMs by integrating the service with Cloud DNS, hardening Windows VMs by applying Group Policy Objects (GPOs), controlling Remote Desktop Protocol (RDP) access through GPOs, and more. For example, one of our customers, OpenX, has been using the service to reduce their infrastructure management work:

“Google Cloud’s Managed AD service is exactly what we were hoping it would be. It gives us the flexibility to manage our Active Directory without the burden of having to manage the infrastructure,” said Aaron Finney, Infrastructure Architecture, OpenX. “By using the service, we are able to solve for efficiency, reduce costs, and enable our highly-skilled engineers to focus on strategic business objectives instead of tactical systems administration tasks.”

And our partner, itopia, has been leveraging Managed AD to make the lives of their customers easier: “itopia makes it easy to migrate VDI workloads to Google Cloud and deliver multi-session Windows desktops and apps to users on any device. Until now, the customer was responsible for managing and patching AD. With Google Cloud’s Managed AD service, itopia can deploy cloud environments more comprehensively and take away one more piece of the IT burden from enterprise IT staff,” said Jonathan Lieberman, CEO, itopia. “Managed AD gives our customers even more incentive to move workloads to the cloud along with the peace of mind afforded by a Google Cloud managed service.”

Getting started

To learn more about getting started with Managed Service for Microsoft AD now that it’s generally available, check out the quickstart, read the documentation, review pricing, and watch the webinar.

Low code programming with Node-RED comes to GCP

Wouldn’t it be great if building a new application were as easy as performing some drag and drop operations within your web browser? This article will demonstrate how we can achieve exactly that for applications hosted on Google Cloud Platform (GCP) with Node-RED, a popular open-source development and execution platform that lets you build a wide range of solutions using a visual programming style, while still leveraging GCP services.

Through Node-RED, you create a program (called a flow) using supplied building blocks (called nodes). Within the browser, Node-RED presents a canvas area alongside a palette of available nodes. You then drag and drop nodes from the palette onto the canvas and link those nodes together by drawing connecting wires. The flow describes the desired logic to be performed by specifying the steps and their execution order, and can then be deployed to the Node-RED execution engine.

One of the key features that has made Node-RED successful is its ability to be easily extended with additional custom nodes. Whenever a new API or technology becomes available, it can be encapsulated as a new Node-RED node and added to the list of available nodes found in the palette. From the palette, it can then be added into a flow for use in exactly the same way that the base supplied nodes are used. These additional nodes can then be published by their authors as contributions to the Node-RED community and made available for use in other projects. There is a searchable and indexed catalog of contributed Node-RED nodes.

A node hides how it operates internally and exposes a clean, consumable interface, allowing new functionality to be put to use quickly.

Now, let’s take a look at how to run Node-RED on GCP and use it with GCP services.

Installing Node-RED

You can use the Node Package Manager (npm) to install Node-RED on any environment that has a Node.js runtime. For GCP, this includes Compute Engine, Google Kubernetes Engine (GKE), Cloud Run, and Cloud Shell, as well as other GCP environments. There’s also a publicly available Docker image, which is what we’ll use for this example.

Now, let’s create a Compute Engine instance using the Google Cloud Console and specify the public Node-RED docker image for execution.

Visit the Cloud Console and navigate to Compute Engine. Create a new Compute Engine instance. Check the box labeled “Deploy a container image to this VM instance“. Enter “nodered/node-red” for the name of the container image.

You can leave all the other settings as their defaults and proceed to completing the VM creation.

Once the VM has started, Node-RED is running. To work with Node-RED, you must connect to it from a browser. Node-RED listens on port 1880. The default VPC network firewall deliberately restricts incoming requests, which means that requests to port 1880 will be denied. The next step is to allow a connection into our network on the Node-RED port. We strongly discourage opening up a development Node-RED instance to unrestricted access. Instead, define the firewall rule to only allow ingress from the IP address that your browser presents. You can find your own browser address by performing a Google search on “my ip address”.

Connecting to Node-RED

Now that Node-RED is running on GCP, you can connect to it from a browser, by passing the external public IP address of the VM at port 1880.  For example:

http://35.192.185.114:1880

You can now see the Node-RED development environment within your browser.

Working with GCP nodes

At this point, you have Node-RED running on GCP and can start constructing flows by dragging and dropping nodes from the palette onto the canvas and wiring them together. The nodes that come pre-supplied are merely a starter set—there are many more available that you can install and use in future flows. 

At Google, we’ve built a set of GCP nodes to illustrate how to extend Node-RED to interact with GCP functions. To install these nodes, navigate to the Node-RED system menu and select “Manage palette“.

Switch to the Palette tab and then switch to the Install tab within Palette. Search for the node set called “node-red-contrib-google-cloud” and then click install.

Once installed, scroll down through the list of available palette nodes and you’ll find a GCP section containing the currently available GCP building blocks.

Here’s a list of currently available GCP nodes:

  • PubSub in – The flow is triggered by the arrival of a new message associated with a named subscription

  • PubSub out – A new message is published to a named topic

  • GCS read – Reads the content of a Cloud Storage object

  • GCS write – Writes to a new Cloud Storage object

  • Language sentiment – Performs sentiment analysis on a piece of text

  • Vision – Analyzes an image for distinct attributes

  • Log – Writes a log message to Stackdriver Logging

  • Tasks – Initiates a Cloud Tasks instance

  • Monitoring – Writes a new monitoring record to Stackdriver

  • Speech to Text – Converts audio input to a textual data representation

  • Translate – Converts textual data from one language to another

  • DLP – Performs Data Loss Prevention processing on input data

  • BigQuery – Interacts with Google’s BigQuery database

  • FireStore – Interacts with Google’s Firestore database

  • Metadata – Retrieves the metadata for the Compute Engine instance upon which Node-RED is running

Going forward, we hope to make additional GCP nodes available. It’s also not hard to create a custom node yourself—check out the public GitHub repository to see how easy it is to create one.

A sample Node-RED flow

At a high level, our example flow listens for incoming REST requests and creates a new Google Cloud Storage object for each request received.

This flow starts with an HTTP input node, which causes Node-RED to listen on the /test URL path for HTTP GET requests. When an incoming REST request arrives, the incoming data undergoes some manipulations.

Specifically, two fields are set: one called msg.filename, which is the name of a file to create in Cloud Storage, and the other called msg.payload, which is the content of the new file we are creating. In this example, the query parameters passed in the HTTP request are being logged.

The next node in the flow is a GCP node that performs a Cloud Storage object write that writes/creates a new file. The final node sends a response back concluding the original HTTP request that triggered the flow.
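
Once deployed, the flow can be exercised with any HTTP client. For example (reusing the sample IP address from earlier; the query parameter is hypothetical):

    # Each GET triggers the flow, which writes a new Cloud Storage object.
    import requests

    resp = requests.get("http://35.192.185.114:1880/test",
                        params={"name": "demo"})
    print(resp.status_code, resp.text)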

Securing Node-RED

Node-RED is designed to get you up and running as quickly as possible, and to that end, the default environment isn’t configured for security. We don’t recommend leaving it that way. Fortunately, Node-RED provides security features that can be quickly enabled. These features include authorization controls on making flow changes and SSL/TLS encryption of incoming and outgoing data. When initially studying Node-RED, define a firewall rule that only permits ingress from your browser’s IP address.

Visual programming on GCP the Node-RED way

Node-RED has proven itself as a data flow and event processor for many years. Its extremely simple architectural model and low barrier to entry mean that even a novice user can get value from it in a very short period of time. A quick Internet search reveals many tutorials on YouTube, the documentation is mature and polished, and the community is active and vibrant. With the addition of the rich set of GCP nodes that we’ve contributed to the community, you can now incorporate GCP services into Node-RED whether it’s hosted on GCP, on another public cloud, or on-premises.

Sky is New Limit for Dot Com Domain Prices

Earlier this week, domain name registrar Namecheap sent out an email to all customers advising them of a secret deal that went down between ICANN and Verisign sometime late last year. It has the potential to change the prices of domain names drastically over time, and thus change the makeup of the Internet as we know it.

Domain names aren’t really owned, they’re rented with an option to renew, and the annual rate that you pay depends both on your provider’s markup and on a wholesale rate that’s the same for all names in that particular domain. This base price is set by ICANN, a non-profit.

Officially, this deal is a proposed Amendment 3 to the contract in place between Verisign and ICANN that governs the “.com” domain. The proposed amendment would let Verisign increase the wholesale rental price of “.com” domain names by 7% per year for the next four years. Then there will be a two-year breather, followed by another four years of 7% annual hikes. And there is no foreseeable end to this cycle. We think it seems reasonable to assume that the domain name registrars might pass the price gouging on to the consumer, but that really remains to be seen.

The annual wholesale domain name price has been sitting at $7.85 since 2012, and as of this writing, Namecheap is charging $8.88 for a standard “.com” address. If our math is correct, ten years from now, a “.com” domain will cost around $13.50 wholesale and $17.50 retail. This almost-doubling in price will affect both small sites and companies that hold many domain names. And the increase will only get more dramatic with time.
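
The wholesale figure is easy to check: the schedule allows eight 7% hikes over the ten-year window (two blocks of four, with the two-year pause in between), and they compound:

    # Eight compounded 7% increases on the current $7.85 wholesale price.
    wholesale = 7.85
    for _ in range(8):
        wholesale *= 1.07
    print(f"${wholesale:.2f}")   # ~$13.49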

So let’s take a quick look at the business of domain names.

The backs of the racks via @tvick on unsplash

They CANN and They Will

The Internet Corporation for Assigned Names and Numbers (ICANN) formed in 1998 with the intent to coordinate, allocate, and assign domain names and IP addresses, assign protocols, and more. ICANN is also responsible for the thirteen root name servers that make up the Internet, and they’re the reason you type words instead of numbers when you want to visit a website. They officially operate as a not-for-profit, public-benefit corporation.

Verisign was founded in 1995 and got their start issuing SSL certificates. They became an internet superpower when they acquired Network Solutions in 2000 and took over the company’s registry functions. As part of this new deal, Verisign will be able to operate as a domain name registrar, stopping just short of being able to sell “.com” real estate themselves, although they could potentially act as a reseller through another company.

As part of the proposed amendment, Verisign will give ICANN $20 million over the next five years, beginning January 2021. Although it isn’t exactly clear how they’ll spend the money, it’s supposed to be earmarked for continued support of things ICANN were already doing, like mitigating threats to DNS security, governing the root name server system, and policing possible name collisions. But people have questioned ICANN’s transparency and accountability — so far, there doesn’t seem to be a system in place to verify that the funds aren’t misappropriated.

ICANN has transparency? Image via ICANN

What’s a Web Address Worth?

If domains are too cost-prohibitive, then only the rich can stake a claim in cyberspace, and democracy dies in that regard. Conversely, if land is too cheap, cyber-squatters will snatch up URLs and/or dilute the web with snake-oil sites. Any right answer will need to balance these offsetting effects.

Inflation drives the prices of all other goods up; why not domain names? But is the rate too high? The average inflation rate in the US runs under 3% per year, and hasn’t seen 7% in ages.

What do you think, Hackaday universe? Is this increase schedule cause for alarm, or is it just business as usual?

We think ICANN could have at least notified registrars sooner, but that may have given consumers too much time to complain. This isn’t the first time that ICANN has ignored public comment in recent memory — last summer when there was talk of removing price caps on “.org” domains, many people commented in favor of keeping prices capped on the other legacy TLDs, and ICANN completely ignored them. A few months later, the .org registry was purchased by a private equity firm, and the details are still being worked out. Is ICANN still working for the public good?

In the tradition of begging forgiveness later, and for all the good it’ll do, ICANN has an open comment period until Friday, February 14th. So go tell ’em how you feel, even if it feels like screaming into the void.

New: PropertySystemView v1.00

PropertySystemView is a tool that allows you to view and modify the properties of files from a GUI and from the command line, using the property system of the Windows operating system. For example, you can change the ‘Media Created’ timestamp stored in .mp4 files (System.Media.DateEncoded) as well as other metadata stored in media files and office documents, like Title, Comments, Authors, Tags, Date Acquired, Last Saved Date, Content Created Date, Date Imported, Date Taken (EXIF of .jpg files), and more…
PropertySystemView also allows you to set properties of windows. For example, you can set the System.AppUserModel.ID property of a window in order to disable taskbar grouping for the specified window.

Admin Essentials: Configuring Chrome Browser in your VDI environment

As a Chrome Enterprise Customer Engineer, I often get asked by administrators of virtual desktop infrastructure (VDI) environments what our best practices are for backing up user profiles. For example, many ask us how to minimize backup size to speed up user log-in and log-off into the Windows environment and reduce impact on the overall user experience.

Like any browser, Chrome has cache directories. This is where data is temporarily stored for faster future site loading, cookies are saved in order to provide seamless authentication on websites, extensions cache various resources, and more. Chrome stores all of its caches in folders in the user profile directory. 

VDI administrators may prefer to back up the entire Chrome user profile directory, but the more sites a user accesses, the more the size of the cache folder increases, and the number of small files in those folders can become quite large. This can result in an increased user profile folder backup time. For users, this can lead to slower startup time for Chrome. 

Although we’ll cover different scenarios today, Google Sync is still our recommended method for syncing browser profile data between machines. It provides the best experience for both the user and the administrator as users only need to sign in. However, there are some environments where this option isn’t suitable for technical or policy reasons. If you can’t use Google Sync, there are a few approaches that can be used to minimize the backup size.

Moving the cache folders

One option is for administrators to move the cache folders outside of Chrome’s user profile folder. The VDI administrator will need to identify a folder outside of the Chrome user profile directory where the caches will be stored. Caches should still be in the Windows user’s directory, and keeping them in hidden directories can also reduce the risk of the cache being accidentally deleted. 

Examples of such folder locations would be:

  • ${local_app_data}/Chrome Cache

  • ${profile}/Chrome Cache

The user data directory variables can help you specify the best directory for your caches.

Once the folder location has been decided, administrators need to configure the DiskCacheDir policy that relocates the cache folders. This policy can be configured either via Group Policy or registry. Once the policy configuration has been applied onto the machines, Chrome will start storing the cache directories into the newly defined cache folder location. The administrator might have to do a cleanup of older caches from the user profile folder the first time after enabling this policy as the policy does not remove the old caches.
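
As a sketch, here is one way to set that policy directly in the registry (run with administrator rights; the cache path simply reuses the ${local_app_data}/Chrome Cache example from above):

    # Set Chrome's DiskCacheDir policy machine-wide via the registry.
    import winreg

    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                           r"SOFTWARE\Policies\Google\Chrome")
    winreg.SetValueEx(key, "DiskCacheDir", 0, winreg.REG_SZ,
                      r"${local_app_data}\Chrome Cache")
    winreg.CloseKey(key)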

Then, continue using the standard Chrome user profile directory. This should result in faster startup times for Chrome, as less data will be copied when a user signs-on or signs-off. It’s important to note that this approach will not allow simultaneous sessions from different machines, but it will preserve session data.

Enabling Roaming Profile Support

A second option is to enable the Chrome Roaming Profile Support feature. This will also not allow simultaneous sessions from different machines, and it won’t save a user’s concurrent session data. However, it will enable you to move the Chrome profile into network storage and load it from there. In this scenario, network performance could impact Chrome’s startup time.

To enable Chrome Roaming Profile Support: 

  • Switch on the RoamingProfileSupportEnabled policy.

  • Optional: Use the RoamingProfileLocation policy to specify the location of the roaming profile data, if this is how you’ve configured your environment. The default is ${roaming_app_data}\Google\Chrome\User Data.

  • If you have been using the UserDataDir policy to relocate the regular Chrome profile to a roaming location, make sure to revert this change.

Advanced controls

While the solutions above will work for most enterprises, there are organizations that want more granular control of the files that are backed up. The approach below allows for more control, but comes at a higher risk, as file names or locations can change at any moment with a Chrome version release. A granular file backup could introduce data corruption, but unlike the other options, it will preserve session data. Here is how to set it up: 

  • Set disk cache to ${local_app_data}\Google\Chrome\User Data with the DiskCacheDir flag.

  • Set user profile to ${roaming_app_data}\Google\Chrome\User Data with the UserDataDir flag.

  • Back up the following files in your VDI configuration:

    • Folder location: AppData\Roaming\Google\Chrome\User Data\.

    • Files: First Run, Last Version, Local State, Safe Browsing Cookies, Safe Browsing Cookies-journal, Bookmarks, Cookies, Current Session, Current Tabs, Extension Cookies, Favicons, History, Last Session, Last Tabs, Login Data, Login Data-journal, Origin Bound Certs, Preferences, Shortcuts, Top Sites, Web Data, Web Data-journal.

Even though this approach preserves session data, it will not enable simultaneous sessions from different machines. 

There you have it—three different approaches IT teams can take to store Chrome caches in VDI environments. Keep in mind that there are a few ways an administrator can push policies onto a machine. For all desktop platforms, Google offers the Chrome Browser Cloud Management (CBCM) console as a one-stop shop for all policy deployments and it allows the admin to set one policy that can be deployed on any desktop OS and Chrome OS. For Windows, the admin can also use GPO or registry settings. For Mac, they can use managed preferences. These templates and more info can be found at chrome.com/enterprise.

If you’d like to learn more about the management options that we make available to IT teams, please visit our Chrome Enterprise web site.

Trying to Create a Script using Microsoft Forms to Create AD and O365 Accounts

From a question posted in /r/PowerShell:

Hey guys,

I am looking into taking results from an Excel file that Microsoft Forms creates after it is filled out by HR, then somehow taking that data and creating a user in our local AD and also syncing it with O365. The form contains distribution groups, group memberships, email addresses, and first and last names. I am just having a little trouble beginning this task as I have never written a script before and was looking for some great minds to assist me. Any help is greatly appreciated and I will happily answer any questions!

Petition asking Microsoft to open-source Windows 7 sails past 7,777-signature goal

The Free Software Foundation really set the bar high there

Good news everybody! The Free Software Foundation has blown through its self-imposed target of 7,777 signatories in its efforts to persuade Microsoft to make Windows 7 open source.…

Python: What Is It and Why Is It so Popular?

What is Python?

Python is a general-purpose programming language. In surveys of popular programming languages over the last few years, Python consistently comes out on top of the demand charts.

Even people who hate programming are tempted to change their minds by the simplicity of Python. Most job listings have Python somewhere in the job description. Let us see what makes Python so special in this decade and beyond. Candidates and companies upskill or reskill in Python irrespective of their qualifications, roles, and experience. Needless to say, Python is very easy to learn.

Cloud Academy offers a Python for Beginners Learning Path that provides an ideal entry point for people wanting to learn how to program with the Python scripting language. The learning path includes courses, exams, hands-on labs, and lab challenges that give you the knowledge and real-world experience you need.

Cloud Academy Python for Beginners

What is a programming language?

Let us spend a few minutes understanding what a programming language is. If you have a programming background, this section is going to be a quick recap for you. Computers have powerful built-in components such as CPUs, memory, etc. CPUs can perform millions of instructions per second. How can we use this computation and storage capability for our own benefit?

For example, suppose I want to process payroll data for thousands of employees, or display hundreds of products on an eCommerce website. A programming language helps us give instructions to computers to perform such activities. You might have heard about Java, Python, C, C++, PHP, COBOL, .NET, and BASIC, which are examples of high-level programming languages.

We all know that computers can understand only electronic signals. There are only two possible things computers know: the presence of a signal (one) and the absence of a signal (zero). Computers use machine code (made up of zeros and ones) for processing purposes. So how do we give instructions to computers when they don’t understand English, Spanish, or French?

Software professionals came up with the idea of high-level languages, with which we can give instructions to CPUs to perform some activity. High-level languages are built with simple keywords from plain English, and each keyword conveys specific instructions to the CPU. Therefore, we write a set of statements in a high-level language to perform some computational activity.

This set of statements (code) is called a program. In the software field, this activity is known as coding or programming. With that background information, we will look at one more basic concept.

Compiler vs interpreter

Software tools are available to convert high-level language code into machine code; we will now explore two kinds of tools that do this.

The compiler converts your entire code into machine code and stores it in an executable-format file. In the Windows operating system, the popular format is known as the .exe format. Windows will use this executable file to launch the program whenever we double-click it. Java, C, and C++ are compiler-based programming languages.

Python is an interpreted high-level language. An interpreter translates one line of code at a time into machine code and executes it before converting the next line of source code. Perl, PHP, and Ruby are some more examples of interpreter-based languages. Python interpreters are available for all major operating systems.

Let us see a small Python program. We can call it hello.py; it contains only one line.

print("hello")

This is all the code required to print “hello” on the screen. In other languages, you would need to write at least three or four lines to do the same work.

Python is globally well supported and adopted by the tech community

Some languages are designed to solve specific sets of problems. For example, Structured Query Language (SQL) is meant for working with databases, LISP was favored for artificial intelligence research, and FORTRAN was developed for scientific and engineering applications.

Python is a general-purpose language. You could use Python in different domains such as the data science field, web development, application development, game development, information security field, system administration, image processing, multimedia, IoT, and machine learning.

Python key features

Does Python have the advanced features of a typical high-level language? Yes, it does. Python supports important language features: dynamic typing, late binding, garbage collection, exception handling, and more than 200,000 packages with a wide range of functionality. Python is very stable and has been here for three decades.

Python also supports different programming approaches. For example, Python supports structured programming, object-oriented programming, functional programming, and aspect-oriented programming.

Guido van Rossum released Python in 1991. The name Python came from the British comedy group Monty Python. There have been two release lines, Python 2.x and Python 3.x; Python 2.x was officially discontinued on January 1, 2020. Python 3.8 is the latest version.

Python is free and has strong support from the global Python community. A non-profit organization, the Python Software Foundation, manages and directs resources for Python. Stack Overflow developer survey results say that Python is the most popular and most wanted general-purpose language.

Most popular technologies / most wanted languages

Source: Stack Overflow’s annual Developer Survey

Why is Python so popular?

Now I am going to share more details on why Python is popular among job seekers.

Software-related services provide employment to millions of people across the globe. Candidates are recruited for different roles in software development. Below, I have listed some of the roles from the software industry where Python skills are important.

Python developer/engineer: As a Python developer, you will get the opportunity to work in different jobs. You will be working on the design and development of front-end and back-end components. You can work on website development with exposure to the Django or Flask frameworks. Exposure to databases such as MySQL and MongoDB, along with SQL knowledge, is desirable.

Python automation tester: Software testers can use Selenium with Python and pytest for testing Automation.

System administrator: In operations, Python is used heavily as a scripting language. Python can be used to automate DevOps and routine day-to-day activities. In the AWS cloud environment, Boto is the Amazon Python SDK. Boto is used to create, configure, and manage AWS services such as EC2, identity management, S3, SES, SQS, AWS KMS, etc. The OpenStack cloud is written in Python.
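
For instance, a minimal Boto3 sketch (AWS credentials are assumed to be configured; the bucket name is hypothetical):

    # List running EC2 instances, then upload a file to S3.
    import boto3

    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])

    boto3.client("s3").upload_file("report.csv", "my-example-bucket",
                                   "reports/report.csv")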

Python for managers/business people: Non-technical people can learn beginner-level Python to organize and analyze huge amounts of data using the pandas data-analysis library. This will help them make meaningful, data-driven decisions.
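
For example, a few lines of pandas are enough to summarize a dataset (the sales figures here are made up):

    import pandas as pd

    df = pd.DataFrame({
        "region": ["North", "South", "North", "South"],
        "sales":  [120, 95, 140, 110],
    })
    print(df.groupby("region")["sales"].sum())   # North 260, South 205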

Researchers: Those who belong to the research community can use the Python module NumPy for scientific computing. Beyond that, statistical learning can be explored using the scikit-learn library.

Cybersecurity analyst/security consultant: Python can be used to write penetration-testing scripts and for information gathering and automation. For those of you who are familiar with Kali Linux, many of its scripts are written in Python.

Data science engineer: Python is known for its data science packages; scikit-learn and matplotlib are some of the packages that are useful in data science. Python supports various big data tools as well. Artificial intelligence is predicted to be the future of technology, and Python is the go-to language for a career in data science.

Internet of things (IoT) developer: Python is emerging as a language of choice for IoT developers.

The 2019 IoT developer survey by the Eclipse IoT Working Group lists Python, C, and Java as the preferred languages in IoT environments.

Source: 2019 IoT developer survey by Eclipse IoT working group

Python is also listed on the PopularitY of Programming Language (PYPL) index as a popular programming language.

Python ranking

Source: PopularitY of Programming Language (PYPL) 

If you get a chance to go through Python openings in any of the job sites, you will find more roles where Python knowledge is a must.

Python is widely used in different application domains

Python is used in many application domains. Here are a few.

Web and internet development

Python offers many choices for web development:

  • Frameworks such as Django and Pyramid.
  • Micro-frameworks such as Flask and Bottle.
  • Advanced content management systems such as Plone and django CMS.

Python’s standard library supports many internet protocols:

  • HTML, XML, and JSON
  • Email processing.
  • Support for FTP, IMAP, and other internet protocols.
  • Easy-to-use socket interface.

Scientific and numeric

Python is widely used in scientific and numeric computing.

Education

Python is a superb language for teaching programming, both at the introductory level and in more advanced courses. Schools and colleges have started to offer Python as a beginner-level programming course, so there are many openings in teaching as well.

Desktop GUIs

The Tk GUI library is included with most binary distributions of Python.

Software development (DevOps)

Python is often used as a support language for software developers: for build control and management, for testing, and in many other ways.

Business Applications

Python is also used to build ERP and e-commerce systems.

As per the TIOBE Index for December 2019, Python is placed among the top three languages for building new systems.

Python’s rate of adoption and usage is growing fast across industries including manufacturing, academia, electronics, IoT, finance, energy, tech, and government.

Before I conclude, I’ll say that I consider Python the best bet to learn to advance your career and future. What about you? I will meet you in another post to explain the steps involved in installing Python and writing simple Python code.

References

Applications for Python ( https://www.python.org/about/apps/ )


Google open-sources the tools needed to make 2FA security keys

Security keys are designed to make logging in to devices simpler and more secure, but not everyone has access to them, or the inclination to use them. Until now. Today, Google has launched an open source project that will help hobbyists and hardware…

Admin Essentials: Improving Chrome Browser extension management through permissions

IT teams often look for best practices on managing extensions to avoid exposing company IP, leaving open security holes and compromising the productivity of end users. Fortunately, there are several options available to admins for extension management in Chrome. I’m going to cover one of them in more detail in this Admin Essentials post. 

Several configuration options are available to enterprises wanting to manage extensions. Many enterprises are familiar with the more traditional route of blacklisting and whitelisting. But a second approach offers enterprises more granular control. Instead of managing the extensions themselves, you can block or allow them by their behavior or permissions.

What are extension permissions? 

Permissions are the rights that are needed on a machine or website in order for the extension to function as intended. There are device permissions that need access to devices and site permissions that need access to sites. Some extensions require both.

Permissions are declared by the extension developer in the manifest file.

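A minimal hand-written sketch of such a manifest (the extension name and its permission set are illustrative):

    {
      "name": "Example Extension",
      "version": "1.0",
      "manifest_version": 2,
      "permissions": [
        "storage",
        "tabs",
        "https://*.example.com/*"
      ]
    }
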
Take a look at this list of the various permissions to help you determine what is or isn’t acceptable to be run on your organization’s devices. As a first step towards discovering which extensions are live in your environment, consider Chrome Browser Cloud Management. It has the ability to pull what extensions are present on your enrolled machines as well as what permissions they are using, and presents that view in the Admin console.

If you’re a G Suite customer, you already have this functionality in the Device Management section of the Admin console.  

Once you’ve done a discovery exercise to learn which extensions are installed on your end users’ machines, and created a baseline of what permissions you will (or won’t) allow in your environment, you can centrally allow or block extensions by those permissions. With this approach, you don’t have to maintain super long whitelists or blacklists. If you couple this with allowing/blocking site permissions, which allows you to designate specific sites where extensions can or cannot run, you add another layer of protection. This approach of blocking runtime hosts makes it so you can block extensions from running on your most sensitive sites while allowing them to run on any other site. 
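
As a sketch, both ideas can be expressed at once through Chrome's ExtensionSettings policy. The JSON below (the blocked permission and host pattern are illustrative placeholders) blocks any extension that requests USB access and keeps all extensions from running on a sensitive internal site:

    {
      "*": {
        "blocked_permissions": ["usb"],
        "runtime_blocked_hosts": ["*://*.internal.example.com"]
      }
    }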

For a more in depth look at managing extensions, check out this guide (authored by yours truly) that covers all of the different ways of managing extensions. Or watch this video of me and my Google Security colleague, Nick Peterson, at Next 2019 presenting how to get this done. Enjoy, and happy browsing!

Announcing the Cloudflare Access App Launch

Every person joining your team has the same question on Day One: how do I find and connect to the applications I need to do my job?

Since launch, Cloudflare Access has helped improve how users connect to those applications. When you protect an application with Access, users never have to connect to a private network and never have to deal with a clunky VPN client. Instead, they reach on-premise apps as if they were SaaS tools. Behind the scenes, Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Administrators need about an hour to deploy Access. End user logins take about 20 ms, and that response time is consistent globally. Unlike VPN appliances, Access runs in every data center in Cloudflare’s network in 200 cities around the world. When Access works well, it should be easy for administrators and invisible to the end user.

However, users still need to locate the applications behind Access, and for internally managed applications, traditional dashboards require constant upkeep. As organizations grow, that roster of links keeps expanding. Department leads and IT administrators can create and publish manual lists, but those become a chore to maintain. Teams need to publish custom versions for contractors or partners that only make certain tools visible.

Starting today, teams can use Cloudflare Access to solve that challenge. We’re excited to announce the first feature in Access built specifically for end users: the Access App Launch portal.

The Access App Launch is a dashboard for all the applications protected by Access. Once enabled, end users can log in and connect to every app behind Access with a single click.

How does it work?

When administrators secure an application with Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

To check identity, Access relies on the identity provider that the team already uses. Access integrates with providers like OneLogin, Okta, AzureAD, G Suite and others to determine who a user is. If the user has not logged in yet, Access will prompt them to do so at the identity provider configured.

When the user logs in, they are redirected through a subdomain unique to each Access account. Access assigns that subdomain based on a hostname already active in the account. For example, an account with the hostname “widgetcorp.tech” will be assigned “widgetcorp.cloudflareaccess.com”.

The Access App Launch uses the unique subdomain assigned to each Access account. Now, when users visit that URL directly, Cloudflare Access checks their identity and displays only the applications that the user has permission to reach. When a user clicks on an application, they are redirected to the application behind it. Since they are already authenticated, they do not need to log in again.

In the background, the Access App Launch decodes and validates the token stored in the cookie on the account’s subdomain.

How is it configured?

The Access App Launch can be configured in the Cloudflare dashboard in three steps. First, navigate to the Access tab in the dashboard. Next, enable the feature in the “App Launch Portal” card. Finally, define who should be able to use the Access App Launch in the modal that appears and click “Save”. Permissions to use the Access App Launch portal do not impact existing Access policies for who can reach protected applications.

Administrators do not need to manually configure each application that appears in the portal. Access App Launch uses the policies already created in the account to generate a page unique to each individual user, automatically.

Defense-in-depth against phishing attacks

Phishing attacks attempt to trick users by masquerading as a legitimate website. In the case of business users, team members think they are visiting an authentic application. Instead, an attacker can present a spoofed version of the application at a URL that looks like the real thing.

Take “example.com” vs “examрle.com” – they look identical, but one uses the Cyrillic “р” and becomes an entirely different hostname. If an attacker can lure a user to visit “examрle.com”, and make the site look like the real thing, that user could accidentally leak credentials or information.
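
A quick way to see the trick, and a cheap heuristic for catching it, sketched in Python:

    # The two hostnames render almost identically but are different strings.
    real = "example.com"
    fake = "exam\u0440le.com"   # U+0440 is the Cyrillic "р", not a Latin "p"

    print(real == fake)          # False: different code points
    print(fake.isascii())        # False: non-ASCII hostnames deserve scrutiny
    print(fake.encode("idna"))   # the punycode ("xn--...") form exposes the spoof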

To be successful, the attacker needs to get the victim to visit that fraudulent URL. That frequently happens via email from untrusted senders.

The Access App Launch can help prevent these attacks from targeting internal tools. Teams can instruct users to only navigate to internal applications through the Access App Launch dashboard. When users select a tile in the page, Access will send users to that application using the organization’s SSO.

Cloudflare Gateway can take it one step further. Gateway’s DNS filtering can help defend against phishing sites that resemble legitimate applications but do not sit behind Access. To learn more about adding Gateway in conjunction with Access, sign up to join the beta here.

What’s next?

As part of last week’s Cloudflare for Teams announcement, the Access App Launch is available to all Access customers starting today. You can get started with instructions here.

Interested in learning more about Cloudflare for Teams? Read more about the announcement and features here.

Open Laptop Soon to be Open For Business

The content below is taken from the original ( Open Laptop Soon to be Open For Business), to continue reading please visit the site. Remember to respect the Author & Copyright.

How better to work on Open Source projects than to use a Libre computing device? But that’s a hard goal to accomplish. If you’re using a desktop computer, Libre software is easily achievable, though keeping your entire software stack free of closed source binary blobs might require a little extra work. But if you want a laptop, your options are few indeed. Lucky for us, there may soon be another device in the mix, because [Lukas Hartmann] has just about finalized the MNT Reform.

Since we started eagerly watching the Reform a couple years ago the hardware world has kept turning, and the Reform has improved accordingly. The i.MX6 series CPU is looking a little peaky now that it’s approaching end of life, and the device has switched to a considerably more capable – but no less free – i.MX8M paired with 4 GB of DDR4 on a SODIMM-shaped System-On-Module. This particular SOM is notable because the manufacturer freely provides the module schematics, making it easy to upgrade or replace in the future. The screen has been bumped up to a 12.5″ 1080p panel and steps have been taken to make sure it can be driven without blobs in the graphics pipeline.

If you’re worried that the chassis of the laptop may have been left to wither while the goodies inside got all the attention, there’s no reason for concern: both have seen substantial improvement. The keyboard now uses Kailh Choc ultra low profile mechanical switches for great feel in a small package, while the body itself is milled out of aluminum in five pieces (and printable as well, if you want to go that route). All in all, the Reform represents a heroic amount of work and we’re extremely impressed with how far the design has come.

Of course, if any of the above piqued your interest, full electrical, mechanical and software sources (spread across a few repos) are available for your perusal; follow the links in the blog post for pointers. We’re thrilled to see how production-ready the Reform is looking and can’t wait to hear user reports as they make their way into the wild!

Via [Brad Linder] at Liliputing.

UK begins testing unsupervised autonomous transport pods

The content below is taken from the original ( UK begins testing unsupervised autonomous transport pods), to continue reading please visit the site. Remember to respect the Author & Copyright.

Shoppers at a UK mall have the opportunity to try out autonomous transport pods this week which — in a UK first — operate entirely without supervision. The driverless pods are being tested at the Cribbs Causeway mall in Gloucestershire, and run bet…

Setting up passwordless Linux logins using public/private keys

The content below is taken from the original ( Setting up passwordless Linux logins using public/private keys), to continue reading please visit the site. Remember to respect the Author & Copyright.

Setting up an account on a Linux system that allows you to log in or run commands remotely without a password isn’t all that hard, but there are some tedious details that you need to get right if you want it to work. In this post, we’re going to run through the process and then show a script that can help manage the details.

Once set up, passwordless access is especially useful if you want to run ssh commands within a script, especially one that you might want to schedule to run automatically.

It’s important to note that you do not have to be using the same user account on both systems. In fact, you can use your public key for a number of accounts on a system or for different accounts on multiple systems.
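
The full article walks through the script itself; the core steps are easy to sketch. A minimal version in Python, assuming the standard OpenSSH tools (ssh-keygen and ssh-copy-id) are installed:

    #!/usr/bin/env python3
    """Sketch: set up passwordless SSH logins to a remote host."""
    import pathlib
    import subprocess
    import sys

    def setup_passwordless(user_at_host: str) -> None:
        key = pathlib.Path.home() / ".ssh" / "id_rsa"
        # Generate a key pair once; -N "" means an empty passphrase.
        if not key.exists():
            subprocess.run(["ssh-keygen", "-t", "rsa", "-b", "4096",
                            "-N", "", "-f", str(key)], check=True)
        # Append the public key to ~/.ssh/authorized_keys on the remote host;
        # this prompts for the remote password one last time.
        subprocess.run(["ssh-copy-id", "-i", f"{key}.pub", user_at_host], check=True)
        # Verify: this should now run without any password prompt.
        subprocess.run(["ssh", user_at_host, "true"], check=True)

    if __name__ == "__main__":
        setup_passwordless(sys.argv[1])  # e.g. me@server.example.com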

Introducing Google Cloud’s Secret Manager

The content below is taken from the original ( Introducing Google Cloud’s Secret Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication. Managing and securing access to these secrets is often complicated by secret sprawl, poor visibility, or lack of integrations.

Secret Manager is a new Google Cloud service that provides a secure and convenient method for storing API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. 

Secret Manager offers many important features:

  • Global names and replication: Secrets are project-global resources. You can choose between automatic and user-managed replication policies, so you control where your secret data is stored.

  • First-class versioning: Secret data is immutable and most operations take place on secret versions. With Secret Manager, you can pin to a specific version like 42 or use a floating alias like latest.

  • Principle of least privilege: Only project owners have permissions to access secrets. Other roles must explicitly be granted permissions through Cloud IAM.

  • Audit logging: With Cloud Audit Logging enabled, every interaction with Secret Manager generates an audit entry. You can ingest these logs into anomaly detection systems to spot abnormal access patterns and alert on possible security breaches.  

  • Strong encryption guarantees: Data is encrypted in transit with TLS and at rest with AES-256-bit encryption keys. Support for customer-managed encryption keys (CMEK) is coming soon.

  • VPC Service Controls: Enable context-aware access to Secret Manager from hybrid environments with VPC Service Controls.

The Secret Manager beta is available to all Google Cloud customers today. To get started, check out the Secret Manager Quickstarts. Let’s take a deeper dive into some of Secret Manager’s functionality.

Global names and replication

Early customer feedback identified that regionalization is often a pain point in existing secrets management tools, even though credentials like API keys or certificates rarely differ across cloud regions. For this reason, secret names are global within their project.

While secret names are global, the secret data is regional. Some enterprises want full control over the regions in which their secrets are stored, while others do not have a preference. Secret Manager addresses both of these customer requirements and preferences with replication policies.

  • Automatic replication: The simplest replication policy is to let Google choose the regions where Secret Manager secrets should be replicated.

  • User-managed replication: If given a user-managed replication policy, Secret Manager replicates secret data into all the user-supplied locations. You don’t need to install any additional software or run additional services—Google handles data replication to your specified regions. Customers who want more control over the regions where their secret data is stored should choose this replication strategy.

First-class versioning

Versioning is a core tenet of reliable systems to support gradual rollout, emergency rollback, and auditing. Secret Manager automatically versions secret data using secret versions, and most operations—like access, destroy, disable, and enable—take place on a secret version.

Production deployments should always be pinned to a specific secret version. Updating a secret should be treated in the same way as deploying a new version of the application. Rapid iteration environments like development and staging, on the other hand, can use Secret Manager’s latest alias, which always returns the most recent version of the secret.
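
A sketch of that pattern with the Python client library (google-cloud-secret-manager); the project and secret names are placeholders, and the exact call signature varies a little between client versions:

    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()

    # Production: pin to an explicit version.
    pinned = "projects/my-project/secrets/db-password/versions/42"
    secret = client.access_secret_version(request={"name": pinned})

    # Development/staging: float on the most recent version.
    latest = "projects/my-project/secrets/db-password/versions/latest"
    secret = client.access_secret_version(request={"name": latest})

    password = secret.payload.data.decode("UTF-8")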

Integrations

In addition to the Secret Manager API and client libraries, you can also use the Cloud SDK to create secrets:
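
    # Reconstructed beta-era syntax; check the current gcloud docs for exact flags.
    gcloud beta secrets create my-secret \
        --replication-policy="automatic" \
        --data-file=/tmp/my-secret.txt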

and to access secret versions:
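
    # Likewise reconstructed from the beta CLI.
    gcloud beta secrets versions access latest --secret="my-secret"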

Discovering secrets

As mentioned above, Secret Manager can store a variety of secrets. You can use Cloud DLP to help find secrets using infoType detectors for credentials and secrets, searching all the files in a source directory and producing a report of possible secrets to migrate to Secret Manager.
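
As a sketch of what such a scan can look like with the DLP Python client library (the project ID, directory, and infoType list are placeholders, and the call style follows recent client versions):

    # Scan local source files for credential-like strings with Cloud DLP.
    from pathlib import Path
    from google.cloud import dlp_v2

    client = dlp_v2.DlpServiceClient()
    parent = "projects/my-project"
    inspect_config = {
        "info_types": [{"name": "AUTH_TOKEN"},
                       {"name": "GCP_CREDENTIALS"},
                       {"name": "PASSWORD"}],
        "include_quote": True,
    }

    for path in Path("src").rglob("*"):
        if not path.is_file():
            continue
        item = {"value": path.read_text(errors="ignore")}
        result = client.inspect_content(request={
            "parent": parent,
            "inspect_config": inspect_config,
            "item": item,
        }).result
        for finding in result.findings:
            print(f"{path}: possible {finding.info_type.name}: {finding.quote!r}")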

If you currently store secrets in a Cloud Storage bucket, you can configure a DLP job to scan your bucket in the Cloud Console. 

Over time, native Secret Manager integrations will become available in other Google Cloud products and services.

What about Berglas?

Berglas is an open source project for managing secrets on Google Cloud. You can continue to use Berglas as-is and, beginning with v0.5.0, you can use it to create and access secrets directly from Secret Manager using the sm:// prefix.

If you want to move your secrets from Berglas into Secret Manager, the berglas migrate command provides a one-time automated migration.
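
Berglas references Secret Manager secrets with a URI-like format; as an illustrative example (project and secret names are placeholders):

    # Read a secret stored in Secret Manager through Berglas (v0.5.0+).
    berglas access sm://my-project/my-secret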

Accelerating security

Security is central to modern software development, and we’re excited to help you make your environment more secure by adding secrets management to our existing Google Cloud security product portfolio. With Secret Manager, you can easily manage, audit, and access secrets like API keys and credentials across Google Cloud. 

To learn more, check out the Secret Manager documentation and Secret Manager pricing pages.

Building a Low-Tech Website for Energy Efficiency

The content below is taken from the original ( Building a Low-Tech Website for Energy Efficiency), to continue reading please visit the site. Remember to respect the Author & Copyright.

In an age of flashy jQuery scripts and bulky JavaScript front-end frameworks, loading a “lite” website is like a breath of fresh air. When most of us think of lightweight sites, though, our mind goes to old-style pure HTML and CSS sites or the intentionally barebones websites of developers and academics. Low-tech Magazine, an intentionally low-tech and solar-powered website, manages to incorporate both modern web aesthetics and low-tech efficiency in one go.

Rather than hosting the site in data centers – even those running on renewable power sources – they have a self-hosted site that runs on solar power, causing the site to occasionally go offline. Their model contrasts with the cloud computing model, which allows more energy efficiency on the user side while increasing energy expense at data centers. Each page on the blog declares its page size, with an average page weight of 0.77 MB, less than half of the average page size of the top 500,000 most popular blogs in June 2018.

Some of the major choices that have limited the size of the website include building a static site as opposed to a dynamic one, “dithering” images, sparing a logo, staying with default typefaces, and eliminating all third-party tracking, advertising services, and cookies. Their GitHub repository details the front-end decisions, including using Unicode characters for the site’s logo rather than embedding an SVG. While the latter may be scalable and lightweight in format, it requires distribution to the end user, which can involve a zipped package with eps, ai, png, and jpeg files in order to ensure the user is able to load the image.

As for the image dithering, the technique allows the website to maintain its characteristic appearance while cutting image quality, and with it file size. Luckily for Low-tech Magazine, the theme of the magazine allows for black and white images, which suit dithering well. Image sprites are also helpful for minimizing server requests by combining multiple small images into one. Storage-wise, the combined image takes up less memory and only loads once.
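
For a feel of how cheap the technique is to apply, here is a sketch using the Pillow imaging library (file names are placeholders); converting an image to 1-bit mode applies Floyd-Steinberg dithering by default:

    from PIL import Image

    img = Image.open("photo.jpg").convert("L")  # grayscale first
    dithered = img.convert("1")                 # 1 bit per pixel, dithered
    dithered.save("photo-dithered.png", optimize=True)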

There are also a few extra features that emphasize the website’s infrastructure. The background color indicates the charge level of the solar-charged battery for the website’s server, while other stats about the server’s location (time, sky conditions, forecast) help make the website’s availability in the near future more visible. Who knows – with greater consciousness of environmental impact, this may be a new trend in web design.

ZeroPhone is an open-source smartphone, based on the Raspberry Pi Zero, that can be assembled for $50 in parts. It is Linux-powered, with UI software written in Python, allowing it to be easily modified – and it doesn’t prohibit you from changing the way it works.

The content below is taken from the original ( in /r/ raspberry_pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2T5WSpD

Arduino launches a new modular platform for IoT development

The content below is taken from the original ( Arduino launches a new modular platform for IoT development), to continue reading please visit the site. Remember to respect the Author & Copyright.

Arduino, the open-source hardware platform, today announced the launch of a new low-code platform and modular hardware system for IoT development. The idea here is to give small and medium businesses the tools to develop IoT solutions without having to invest in specialized engineering resources.

The new hardware, dubbed the Arduino Portenta H7, features everything you’d need to get started building an IoT hardware platform, including a crypto-authentication chip and communications modules for WiFi, Bluetooth Low Energy and LTE, as well as Narrowband IoT. Powered by 32-bit Arm microcontrollers, either the Cortex-M7 or M4, these low-power modules are meant for designing industrial applications, as well as edge processing solutions and robotics applications. It’ll run Arm’s Mbed OS and support Arduino code, as well as Python and JavaScript applications.

“SMBs with industrial requirements require simplified development through secure development tools, software and hardware to economically realize their IoT use cases,” said Charlene Marini, the VP of strategy for Arm’s IoT Services Group. “The combination of Mbed OS with Cortex-M IP in the new Arduino Portenta Family will enable Arduino’s millions of developers to securely and easily develop and deploy IoT devices from prototypes through to production.”

The new H7 module is now available to beta testers, with general availability slated for February 2020.

DMCA-Locked Tractors Make Decades-Old Machines the New Hotness

The content below is taken from the original ( DMCA-Locked Tractors Make Decades-Old Machines the New Hotness), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s fair to say that the hearts and minds of Hackaday readers lie closer to the technology centres of Shenzhen or Silicon Valley than they do to the soybean fields of Minnesota. The common link is the desire to actually own the hardware we buy. Among those working the soil there has been a surge in demand (and consequently a huge price rise) in 40-year-old tractors.

Second-hand farm machinery prices have made their way to the pages of Hackaday due to an ongoing battle between farmers and agricultural machinery manufacturers over who has the right to repair and maintain their tractors. The industry giant John Deere in particular uses the DMCA and end-user licensing agreements to keep all maintenance in the hands of its very expensive agents. It’s a battle we’ve reported on before, one that continues to play out across the farmland of America, this time in the secondary market. Older models continue to deliver the freedom for owners to make repairs themselves, and the relative simplicity of the machines tends to make those repairs less costly overall.

Tractors built in the 1970s and 80s continue to be reliable and have the added perk of predating the digital shackles of the modern era. Aged-but-maintainable machinery is now the sweetheart of farm sales. It confirms a trend I’ve heard of anecdotally for a few years now, that relatively new tractors can be worth less than their older DMCA-free stablemates, and it’s something that I hope will also be noticed in the boardrooms. Perhaps this consumer rebellion can succeed against the DMCA where decades of activism and lobbying have evidently failed.

They just don’t build ’em like they used to.


[Image Source: John Deere 2850 by Raf24 CC-BY-SA 3.0]

[Via Hacker News]

A look back at 10 years of CES

The content below is taken from the original ( A look back at 10 years of CES), to continue reading please visit the site. Remember to respect the Author & Copyright.

So, here we are on the eve of CES 2020 — the supersized buffet of an annual consumer electronics show in Las Vegas, where we not only get a sneak peek of what to expect from tech companies this year, but also to take the pulse of how people are responding to what’s out there.

CES is all about “the future,” and we’ll be here this week covering all the big stories and themes. But what about the past? In the spirit of 2020 hindsight, here are some of the most notable headlines and trends of the last 10 years of CES.

Take a look and let us know what you think have been the biggest themes coming out of the event in the comments below.

CES 2010

Palm’s ‘turnaround’. The mobile upstart launched its first smartphones, the Pre and Pixi, in 2009, and found a hardcore group of fans that loved the look of the devices, and all the features that set them apart from Android and iOS. 2010 was about gaining momentum. Palm CEO Jon Rubinstein (one of the early pioneers of the iPod at Apple) made some waves onstage by claiming that he had never used an iPhone. It was bullish talk that helped cement the company’s independent image. Palm also saw its stock rise by 10% after it announced at a press conference that it would sell its devices through a lucrative deal with Verizon Wireless (which now owns TechCrunch). (All proved to be short-lived, and many lamented that Palm was probably too ahead of its time.)

Meanwhile, the battle between iOS and Android raged on. In 2010 the story was about how AT&T was finally adding its first Android-based smartphones to its lineup (remember that it was the exclusive carrier of the iPhone as its first foray into the new generation of touchscreen smartphones). Motorola “Backflip” smartphones, and tablets and smartbooks from Dell, were among the other mobile devices announced that year.

Natural User Interface. Here’s a prescient article by then-Microsoft CEO Steve Ballmer, written on the heels of CES, about the new ways we would be interacting with devices in the years ahead: touch, gesture and voice, all heavily dependent on the cloud. He gave a specific nod to Project Natal, which Microsoft showed off at CES and eventually became Microsoft’s Kinect gesture technology. Ballmer was absolutely on the money, although I’d be interested to know if he suspected just how big of a role his neighbor Amazon would play in that new era.

Other themes in the year included the usual boost in TV technology, this time around 3D; and some early signals of a smart car future with news from the likes of Nvidia and Ford.

CES 2011

RealNetworks and cloud-based music. This was absolutely the direction music would be going. RealNetworks would not ultimately be the top dog in this game, but back in 2011 it was the one leading the charge with a service called Unifi, lauded at the time for being first to market before Apple and Google — and Spotify — in presenting a way to merge what you buy online with what you might already own in terms of digital files. Alas, being an early mover does not always pay off, a theme of its own. Real Unifi quietly died a death and its URL is particularly iffy now.

4G. 2011 was also the year of LTE and 4G announcements from many carriers, from T-Mobile announcing sales of 900,000 4G handsets, to HTC unveiling its first 4G device.

The year of many phones (and specifically phone brands) that did not stand the test of time. Sister site Engadget (a monster when it comes to comprehensive CES coverage) highlighted its list of the best tablets and smartphones, a selection including Motorola, Notion Ink, BlackBerry, and Vizio — a veritable graveyard of brands. It was also the year of Android tablets set up for the future using Honeycomb… another dud, as it turned out.

Vroom vroom. We might take for granted that cars are a major component of CES today — after all, they are essentially very large, mobile pieces of hardware — but that wasn’t always the case (not least because the mammoth Detroit Auto Show is just around the corner in the convention calendar). In 2011, Ford unveiled its first foray into electric cars at the show, setting up a decade of major car advances getting launched at CES.

CES 2012

Remember when Nokia was the world’s biggest mobile handset maker? This was the year that it made some critical shifts in its downturn. CES 2012 was the event where the company unveiled its first Windows Phone-powered smartphone for the US market, to ship exclusively via AT&T.  This proved to be a step along the road to Microsoft buying Nokia’s handset business outright, an ill-fated move that ultimately left both brands in a ditch as far as that market was concerned. But in 2012, there was still a lot of hope and enthusiasm for both.

More smart TV advances: Today we take integrated services and very light hardware like Fire TV sticks and Chromecasts for granted. That was not the story in 2012, though, when a Samsung TV that integrated DirecTV without a box made headlines.

Steve Ballmer gave his last Microsoft keynote at CES (because Microsoft pulled out of the show keynote roster after that year) and announced a release date for Kinect for Windows (some two years after highlighting gesture as something that was going to be coming at us fast).

CES 2013

Hoppin’ mad! This was the year that big media took a bite out of streaming media. CNET, which had been running ‘best of’ awards on behalf of CES for years, was asked to strip the ‘best of show’ award it had given to Dish’s Hopper with Sling (which let you watch programs recorded on your Dish DVR on your iPad) after the legal team of its parent, CBS, intervened. The interference was nefarious: CBS was embroiled in a lawsuit with Dish at the time, which tainted the whole business of then bestowing the award upon Razer instead. The whole thing was uncovered and criticized, even by CNET itself, and Dish was awarded Best of Show some weeks later. As with so much else in the question about the best way to move content from one device to another, the answer came from somewhere else altogether: the cloud — that is to say, streaming services have trumped both the Dish Hopper and whatever else was competing against it.

Nvidia Shield. A big leap for Nvidia, the company that had already made its name with graphics and other processors used in high-performance computing devices and connected cars for gaming, artificial intelligence applications and more. 2013 was the year it revealed its own hardware in the form of a gaming device called Project Shield, powered by its newest processors. Shield has continued to grow over the years.

Oculus Rift. It was still going to be another year and a bit before Oculus got snapped up by Facebook for $2 billion, and the Rift virtual reality headset didn’t actually launch at CES, but this was a kind of coming-out party for it nonetheless, since it was the first time that the company had set up a stand to provide demos to any and all comers. The result was a lot of buzzy exposure (reported on by multiple outlets) to build up interest in the device. This in itself was a stage two media strategy: The company had launched on Kickstarter with a lot of fanfare prior to this — another notable trend for CES this year, where another crowdfunded blockbuster, the Pebble Smartwatch, exhibited for the first time, too.

CES 2014

Wearing thin… Wearable tech was everywhere at CES this year and definitely became a bigger theme as the year went on. Indeed, Google did the wide launch of Google Glass as a consumer product just a few months later, although it took Apple until April 2015 to launch its first Watch. (That Watch, incidentally, has been one of the most successful of all wearable efforts… there’s that late mover advantage again.) Many voices were already crying foul: Just because you can build something to put on your wrist, or head, or finger, or on your jacket, or wherever… does that mean you necessarily should? And if you do, will anyone want to buy it, or will it collapse into the history books as a novelty?

Honk honk. After a couple of fallow years where cars were certainly present but not previewing blockbuster changes, 2014 was the year they changed gears. Google announced the Open Automotive Alliance, including partnerships with GM, Audi, Honda, Hyundai and Nvidia for Android-powered in-car systems.

Netflix and streaming. The streaming wars really didn’t kick off until the year after this, but for a little taster of how OTT services like Netflix’s would soon dominate the conversation about video at CES and elsewhere: this was the year that Netflix CEO Reed Hastings took to the stage to announce an exclusive 4K (high resolution) version of hit series House of Cards that would stream on 4K-capable LG TVs.

VR expansion. Oculus (still yet to be acquired by Facebook but easily emerging as a power player in the VR space) released a new version of its headset. Sony followed suit and others like Meta, working in the area of VR meeting augmented reality, also showed off their new kit. The momentum wouldn’t last: Meta and several more efforts are now defunct, others are struggling, and many are wondering what the mainstream market for VR and AR headsets will be longer term. (We’re still waiting to see whether Apple will launch a device here, too.)

CES 2015

 

Smart Home gets smarter. Google had closed its deal to acquire Nest less than a year before CES 2015 rolled around, and the latter company’s momentum spoke a lot to why it got snapped up and integrated, and why it continues to be a central part of the company’s home strategy. It announced a number of new partners at the show through its “works with Nest” program.

Meanwhile, Apple — long without an official presence at the show — continued to make itself a part of the conversation, specifically in the area of the connected home. The ecosystem for Apple’s HomeKit platform for managing connected smart devices by way of your iPhone grew by some way during CES with the announcement of a number of new partners.

Robots and spycams and drones, oh my! Improvements in AI-based technology such as computer vision, voice recognition and more led to a wave of devices aimed at seeing for people, doing their bidding, and ultimately doing more than they’d be able to on their own. This bigger trend extended to creepy humanoids, as well as camera-equipped drones and security devices. The huge swing we’ve seen toward an awareness of our privacy in recent times was not so apparent in 2015, so I wonder how this theme would be handled today.

Mercedes Benz F 015 – concept autonomous car. This is just one vehicle — literally speaking. It’s a prototype that may never see the light of day on a production line, at least not for a long time. But I’m highlighting it here as a sign of the times. 2015 saw yet more vehicles, yet more autonomous demos, and yet more moves from a range of players to plant their flags in the self-driving market. We’re still far from seeing any wide-scale commercial products or services for a number of reasons, but the amount of investment in this area continues to grow, with the hope and expectation that the tech, the consumer appetite, and regulations will all align in its favor. Other big news in automotive included Nvidia’s new car platforms.

Plus, too many selfie sticks.

CES 2016

Coming to a screen near you. Netflix has both spearheaded and in many ways led the focus on streamed video in the world of entertainment, and this year it used the show as a platform to announce a huge shift: It was expanding its service to 130 countries, a massive international push. While there are a lot of localized players, and I’d argue that Netflix doesn’t have quite the right mix of local and general content abroad as it does in its home market (where the catalog is genuinely far bigger), this was a huge move that continues to set Netflix up for more growth in the future, perhaps via consolidation.

As automotive tech continued to be a major theme at CES, there was a new twist this year. One of the huge automotive giants, GM, used the event not just to unveil a new electric car model, but also a $500 million investment in one of the on-demand transportation giants, Lyft. The rationale: to tap into what might be a new market to supply with its future autonomous vehicles, and to get a stake in one of the bigger startups transforming how people get from A to B (which will ultimately have an impact on how, and if, they buy cars).

CES 2017


Alexa, Everywhere. While the Echo was launched back in 2015, and we saw the beginnings of Alexa-integrated devices and Echo-based hardware the year before by way of Amazon’s platform plays, it was really in 2017 that this trend became nearly all-pervasive, appearing not just in speakers made by third parties, but refrigerators and more. Given the strong recurring theme of home gadgets at the show, this has made Amazon a big presence at the huge event. It still raises the question, though, of how successful any of these integrations have been. We may have a lot of “connected” devices in our homes, including smart speakers, but how much are we really using them?

CES 2018

Google Assistant, Everywhere. Not to be outdone by Amazon or Apple in years before, 2018 was the year that Google permeated CES with its own voice — or more specifically its voice bot, the Google Assistant. In addition to blanketing trains, billboards and more with Google slogans, slides and gumball machines, the company’s name and Assistant were brought up in connection with most of the event’s biggest hardware and software splashes, by way of integrations. It’s no guarantee that this will actually get people to use it, but the ubiquity makes it ever more convenient for those who do want to go that route. It will be interesting to see how and if this continues as a theme in 2020 and beyond. My guess is that simple messages of “it’s there” will not be enough to hold interest longer term.

CES 2019

[Image: An attendee walks by the Huawei booth at CES 2019 at the Las Vegas Convention Center in Las Vegas, Nevada. Photo by David Becker/Getty Images]

Bull in a China shop. The theme of how Chinese companies grow and operate in the US and other Western markets — two major vectors that are being buffeted by large political questions around tariffs and national security — has been a big one in the tech world for a while now. It was interesting to see it play out at CES last year. The event benefits from a massive amount of visitors from China, and also exhibitors, and so many eyes were on both. It played out in some big ways: Huawei downplayed its presence, and ZTE didn’t come at all. And Gary Shapiro, head of the Consumer Technology Association, which organizes CES, came out on the side of doing business with China, criticizing President Trump’s strategy. Given that Trump’s daughter Ivanka will be a keynote speaker this year, all will be watching how and if this theme will continue to develop or simply get swept under the rug.

Other themes last year included more advanced media streaming that moved further away from being tied to large and expensive middleware (the only expensive hardware that matters being the big TV screens), and a kind of truce in the voice assistant matrix: support for multiple assistants on single devices.

Steampunk Motorcycle Runs On Compressed Air, Is Pure Hacking Art

The content below is taken from the original ( Steampunk Motorcycle Runs On Compressed Air, Is Pure Hacking Art), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sometimes it’s OK to sacrifice some practicality for aesthetics, especially for passion projects. Falling solidly into this category is [Peter Forsberg]’s beautiful, barely functional steampunk motorcycle. If this isn’t hacker art, then we don’t know what is.

The most eye-catching part of the motorcycle is the engine and drive train, with most of the mechanical components visible. The cylinders are clear glass tubes with custom pistons, seals, valves and push rods. The crank mechanism is from an old Harley and is mounted inside a piece of stainless steel pipe. Because it runs on compressed air it cools down instead of heating up, so an oil system is not needed.

For steering, the entire front of the bike swings side to side on hinges in the middle of the frame, which is quite tricky to ride with a top speed that’s just above walking speed. It can run for about 3-5 minutes on a tank, so [Peter] mounted a big three-minute hourglass in the frame. The engine is fed from an external air tank, which he wears on his back; he admits it’s borderline torture to carry the thing for any length of time. He plans to build a side-car to house a much larger tank to extend range and improve riding comfort.

[Peter] admits that it isn’t very good as a motorcycle, but the amount of creativity and resourcefulness required to make it functional at all is the mark of a true mechanical hacker. We look forward to seeing it in its final form.

For more inspiration, check out the DIY electric motorcycle, and the flying motorcycle that the Dubai police are testing.

HP’s Elite Dragonfly lappie to let Tile gadget-trackers stalk it till they’re Blue in the tooth

The content below is taken from the original ( HP’s Elite Dragonfly lappie to let Tile gadget-trackers stalk it till they’re Blue in the tooth), to continue reading please visit the site. Remember to respect the Author & Copyright.

Or find it under a crap-pile in your pigsty of a desk… let’s be real, that is what you’d use it for

HP and Tile plan to hook up some of their kit, with the latter’s gadget-tracking tech shoved into the upcoming HP Elite Dragonfly laptops.…

Otterbox made a ‘bacteria-killing’ screen protector for your phone

The content below is taken from the original ( Otterbox made a ‘bacteria-killing’ screen protector for your phone), to continue reading please visit the site. Remember to respect the Author & Copyright.

Face it, your phone screen is filthy. Think about all those times you texted from the toilet or scrolled through Instagram while riding the subway — those streaks on your screen aren't just schmutz, they're breeding grounds for bacteria. But that's…