Plex Media Server – Desktop Monitoring

The content below is taken from the original ( Plex Media Server – Desktop Monitoring), to continue reading please visit the site. Remember to respect the Author & Copyright.

https://preview.redd.it/kjlrq7r69n761.png?width=1590&format=png&auto=webp&s=fd377584889dede7215ad5884d39d7b3cacabb39

Plex Media Server – Desktop Monitoring – Rainmeter Forums

This is my release of the Plex Media Server Desktop Monitor (as of this writing there is no Rainmeter skin that does this). It allows you to monitor the current stream count directly from the Plex API. This skin has a long way to go, but it's in a decent enough state for public use.

  1. Download Rainmeter: https://github.com/rainmeter/rainmeter/releases/download/v4.3.0.3321/Rainmeter-4.3.1.exe
  2. Download the skin: https://www.deviantart.com/bdrumm/art/Plex-Desktop-Monitoring-1-0-865212103

Instructions

Known Issues

When a user starts a stream whose media comes from Tidal, the Stream Decision and Media Decision fields are missing, which breaks the details for every stream from that point on.

submitted by /u/xX_limitless_Xx to r/PleX

Retired Microsoft engineer Dave Plummer talks about the history of task manager

The content below is taken from the original ( Retired Microsoft engineer Dave Plummer talks about the history of task manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

Dave Plummer is the original author of the Windows Task Manager, a tool known to many around the world. In a series on YouTube he talks about its history and how he wrote it. Another credit to Dave Plummer's name is that he also wrote Space Cadet Pinball for Windows.

It gives a unique insight into Task Manager and how it came to be:

Part 1

Part 2

Source code review of Windows Taskmanager

submitted by /u/a_false_vacuum to r/sysadmin

Five more free services now available in the Azure free account

The content below is taken from the original ( Five more free services now available in the Azure free account), to continue reading please visit the site. Remember to respect the Author & Copyright.

Free amounts of Archive Storage, Container Registry, Load Balancer, Service Bus, and VPN Gateway are now available for eligible Azure free account users, as well as increased free amounts of Cosmos DB.

Getting Started with Amazon Managed Service for Prometheus

The content below is taken from the original ( Getting Started with Amazon Managed Service for Prometheus), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon Managed Service for Prometheus (AMP) is a Prometheus-compatible monitoring service for container infrastructure and application metrics that makes it easy for customers to securely monitor container environments at scale. Prometheus supports a variety of metric collection mechanisms, including a number of libraries and servers that help export application-specific metrics from third-party systems as Prometheus metrics. You can also use AWS Distro for OpenTelemetry to ingest application metrics from your environment. With AMP, you can use the same open-source Prometheus data model and query language that you use today to monitor the performance of your containerized workloads. There are no up-front investments required to use the service, and you only pay for the number of metrics ingested.

Customers using Prometheus in their container environments face challenges in managing a highly-available, scalable and secure Prometheus server environment, infrastructure for long-term storage, and access control. AMP solves these problems by providing a fully-managed environment which is tightly integrated with AWS Identity and Access Management (IAM) to control authentication and authorization. You can start using AMP by following these two simple steps:

  • Create an AMP workspace
  • Configure your Prometheus server to remote-write into the AMP workspace

Once configured, you will be able to use the fully-managed AMP environment for ingesting, storing and querying your metrics. In this blog post, we will walk through the steps required to set up AMP to ingest custom Prometheus metrics collected from a containerized workload deployed to an Amazon EKS cluster and then query and visualize them using Grafana.

Architecture

The figure below illustrates the overall architecture of Amazon Managed Service for Prometheus and its interaction with other components.

AMP architecture

AMP architecture

Setting up a workspace to collect Prometheus metrics

To get started, you will first create a workspace. A workspace is the conceptual location where you ingest, store, and query your Prometheus metrics that were collected from application workloads, isolated from other AMP workspaces. One or more workspaces may be created in each Region within the same AWS account and each workspace can be used to ingest metrics from multiple workloads that export metrics in Prometheus-compatible format.

A customer-managed IAM policy with the following permissions should be associated with the IAM user that manages a workspace.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aps:CreateWorkspace",
                "aps:DeleteWorkspace",
                "aps:DescribeWorkspace",
                "aps:ListWorkspaces",
                "aps:UpdateWorkspaceAlias"
            ],
            "Resource": "*"
        }
    ]
}

A workspace is created from the AWS Management Console as shown below:

Creating a workspace in Amazon Managed Service for Prometheus

Create workspace

List of workspaces

List of workspaces

Alternatively, you can also create a workspace using AWS CLI as documented here.
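
For example, a minimal sketch with the AWS CLI looks like this (the alias demo-amp-workspace is an arbitrary placeholder; the command prints the workspace ID you will need later for the remote-write URL):

# Create a workspace and capture its ID for later use
WORKSPACE_ID=$(aws amp create-workspace --alias demo-amp-workspace --query 'workspaceId' --output text)
echo $WORKSPACE_ID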

Next, as an optional step, you create an interface VPC endpoint in order to securely access the managed service from resources deployed within your VPC. This ensures that data ingested by the managed service does not leave the VPC in your AWS account. You can do this using the AWS CLI as follows. Ensure that placeholder strings such as VPC_ID, AWS_REGION, and others are replaced with appropriate values.

aws ec2 create-vpc-endpoint \
--vpc-id <VPC_ID> \
--service-name com.amazonaws.<AWS_REGION>.aps-workspaces \
--security-group-ids <SECURITY_GROUP_IDS> \
--vpc-endpoint-type Interface \
--subnet-ids <SUBNET_IDS>

In the above command, SECURITY_GROUP_IDS represents a list of security groups associated with the VPC interface endpoint to allow communication between the endpoint network interface and the resources in your VPC, such as worker nodes of the Amazon EKS cluster. SUBNET_IDS represents the list of subnets where these resources reside.
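
If you want to confirm that the endpoint was created, a describe call such as the following should list it (a sketch; the filter value mirrors the service name used above):

# Verify the interface endpoint for the AMP workspace API
aws ec2 describe-vpc-endpoints \
  --filters Name=service-name,Values=com.amazonaws.<AWS_REGION>.aps-workspaces \
  --query 'VpcEndpoints[].{Id:VpcEndpointId,State:State}'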

Configuring permissions

Metrics collectors (such as a Prometheus server deployed to an Amazon EKS cluster) scrape operational metrics from containerized workloads running in the cluster and send them to AMP for long-term storage as well as for subsequent querying by monitoring tools. The data is sent using HTTP requests which must be signed with valid AWS credentials using the AWS Signature Version 4 algorithm to authenticate and authorize each client request for the managed service. In order to facilitate this, the requests are sent to an instance of AWS signing proxy which will forward the requests to the managed service.

The AWS signing proxy can be deployed to an Amazon EKS cluster to run under the identity of a Kubernetes service account. With IAM roles for service accounts (IRSA), you can associate an IAM role with a Kubernetes service account and thus provide AWS permissions to any pod that uses that service account. This follows the principle of least privilege by using IRSA to securely configure the AWS signing proxy to help ingest Prometheus metrics into AMP.
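
Concretely, IRSA works by annotating the Kubernetes service account with the ARN of the IAM role; the Helm values shown later in this post add exactly that annotation. For reference, the resulting service account looks roughly like this (a sketch using the names created by the script below; the account ID placeholder is yours):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: iamproxy-service-account
  namespace: prometheus
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/EKS-AMP-ServiceAccount-Role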

The shell script shown below can be used to execute the following actions after substituting the placeholder variable YOUR_EKS_CLUSTER_NAME with the name of your Amazon EKS cluster.

  1. Creates an IAM role with an IAM policy that has permissions to remote-write into an AMP workspace
  2. Creates a Kubernetes service account that is annotated with the IAM role
  3. Creates a trust relationship between the IAM role and the OIDC provider hosted in your Amazon EKS cluster

The script requires that you have installed the CLI tools kubectl and eksctl and have configured them with access to your Amazon EKS cluster.

#!/bin/bash
CLUSTER_NAME=YOUR_EKS_CLUSTER_NAME
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

PROM_SERVICE_ACCOUNT_NAMESPACE=prometheus
GRAFANA_SERVICE_ACCOUNT_NAMESPACE=grafana
SERVICE_ACCOUNT_NAME=iamproxy-service-account
SERVICE_ACCOUNT_IAM_ROLE=EKS-AMP-ServiceAccount-Role
SERVICE_ACCOUNT_IAM_ROLE_DESCRIPTION="IAM role to be used by a K8s service account with write access to AMP"
SERVICE_ACCOUNT_IAM_POLICY=AWSManagedPrometheusWriteAccessPolicy
SERVICE_ACCOUNT_IAM_POLICY_ARN=arn:aws:iam::$AWS_ACCOUNT_ID:policy/$SERVICE_ACCOUNT_IAM_POLICY
#
# Set up a trust policy that allows a specific combination of K8s service account and namespace to assume the role via the OIDC IdP hosted by the Kubernetes cluster.
# If the IAM role already exists, then add this new trust policy to the existing trust policy
#
echo "Creating a new trust policy"
read -r -d '' NEW_TRUST_RELATIONSHIP <<EOF
 [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${GRAFANA_SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${PROM_SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
EOF
#
# Get the old trust policy, if one exists, and append it to the new trust policy
#
OLD_TRUST_RELATIONSHIP=$(aws iam get-role --role-name $SERVICE_ACCOUNT_IAM_ROLE --query 'Role.AssumeRolePolicyDocument.Statement[]' --output json)
COMBINED_TRUST_RELATIONSHIP=$(echo $OLD_TRUST_RELATIONSHIP $NEW_TRUST_RELATIONSHIP | jq -s add)
echo "Appending to the existing trust policy"
read -r -d '' TRUST_POLICY <<EOF
{
  "Version": "2012-10-17",
  "Statement": ${COMBINED_TRUST_RELATIONSHIP}
}
EOF
echo "${TRUST_POLICY}" > TrustPolicy.json
#
# Set up the permission policy that grants write and query permissions for all AMP workspaces
#
read -r -d '' PERMISSION_POLICY <<EOF
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "aps:RemoteWrite",
            "aps:QueryMetrics",
            "aps:GetSeries",
            "aps:GetLabels",
            "aps:GetMetricMetadata"
         ],
         "Resource":"*"
      }
   ]
}
EOF
echo "${PERMISSION_POLICY}" > PermissionPolicy.json

#
# Create an IAM permission policy to be associated with the role, if the policy does not already exist
#
SERVICE_ACCOUNT_IAM_POLICY_ID=$(aws iam get-policy --policy-arn $SERVICE_ACCOUNT_IAM_POLICY_ARN --query 'Policy.PolicyId' --output text)
if [ "$SERVICE_ACCOUNT_IAM_POLICY_ID" = "" ]; 
then
  echo "Creating a new permission policy $SERVICE_ACCOUNT_IAM_POLICY"
  aws iam create-policy --policy-name $SERVICE_ACCOUNT_IAM_POLICY --policy-document file://PermissionPolicy.json 
else
  echo "Permission policy $SERVICE_ACCOUNT_IAM_POLICY already exists"
fi

#
# If the IAM role already exists, then just update the trust policy.
# Otherwise create one using the trust policy and permission policy
#
SERVICE_ACCOUNT_IAM_ROLE_ARN=$(aws iam get-role --role-name $SERVICE_ACCOUNT_IAM_ROLE --query 'Role.Arn' --output text)
if [ "$SERVICE_ACCOUNT_IAM_ROLE_ARN" = "" ]; 
then
  echo "$SERVICE_ACCOUNT_IAM_ROLE role does not exist. Creating a new role with a trust and permission policy"
  #
  # Create an IAM role for Kubernetes service account 
  #
  SERVICE_ACCOUNT_IAM_ROLE_ARN=$(aws iam create-role \
  --role-name $SERVICE_ACCOUNT_IAM_ROLE \
  --assume-role-policy-document file://TrustPolicy.json \
  --description "$SERVICE_ACCOUNT_IAM_ROLE_DESCRIPTION" \
  --query "Role.Arn" --output text)
  #
  # Attach the trust and permission policies to the role
  #
  aws iam attach-role-policy --role-name $SERVICE_ACCOUNT_IAM_ROLE --policy-arn $SERVICE_ACCOUNT_IAM_POLICY_ARN  
else
  echo "$SERVICE_ACCOUNT_IAM_ROLE_ARN role already exists. Updating the trust policy"
  #
  # Update the IAM role for the Kubernetes service account with the new trust policy
  #
  aws iam update-assume-role-policy --role-name $SERVICE_ACCOUNT_IAM_ROLE --policy-document file://TrustPolicy.json
fi
echo $SERVICE_ACCOUNT_IAM_ROLE_ARN

# EKS cluster hosts an OIDC provider with a public discovery endpoint.
# Associate this Idp with AWS IAM so that the latter can validate and accept the OIDC tokens issued by Kubernetes to service accounts.
# Doing this with eksctl is the easiest approach.
#
eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve

The script shown above creates an IAM role named EKS-AMP-ServiceAccount-Role that can be assumed by a Kubernetes service account named iamproxy-service-account in the prometheus and grafana namespaces (the service account itself is created later by the Helm deployments). The role is attached to a customer-managed IAM policy comprising the set of permissions shown below in order to send data to Amazon Managed Service for Prometheus.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aps:RemoteWrite",
                "aps:GetSeries",
                "aps:GetLabels",
                "aps:GetMetricMetadata"
            ],
            "Resource": "*"
        }
    ]
}

Deploying Prometheus server

Amazon Managed Service for Prometheus does not directly scrape operational metrics from containerized workloads in a Kubernetes cluster. It requires users to deploy and manage a standard Prometheus server, or an OpenTelemetry agent such as the AWS Distro for OpenTelemetry Collector in their cluster to perform this task. The implementation in this blog uses a Prometheus server which is deployed to an Amazon EKS cluster using Helm charts as follows:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
kubectl create ns prometheus
helm install prometheus-for-amp prometheus-community/prometheus -n prometheus

The AWS signing proxy can now be deployed to the Amazon EKS cluster with the following YAML manifest. Substitute the placeholder ${AWS_REGION} with the appropriate AWS Region name, ${IAM_PROXY_PROMETHEUS_ROLE_ARN} with the ARN of the EKS-AMP-ServiceAccount-Role you created, and ${WORKSPACE_ID} with the ID of the AMP workspace you created earlier. The signing proxy references a Docker image from a public repository in ECR. Alternatively, you can follow the steps outlined here under the section titled Deploy the AWS signing proxy and build a Docker image from the source code of the AWS SigV4 Proxy.

Create a file called amp_ingest_override_values.yaml with the following content in it.


serviceAccounts:
    server:
        name: "iamproxy-service-account"
        annotations:
            eks.amazonaws.com/role-arn: "${IAM_PROXY_PROMETHEUS_ROLE_ARN}"
server:
  sidecarContainers:
    aws-sigv4-proxy-sidecar:
        image: public.ecr.aws/aws-observability/aws-sigv4-proxy:1.0
        args:
        - --name
        - aps
        - --region
        - ${AWS_REGION}
        - --host
        - aps-workspaces.${AWS_REGION}.amazonaws.com
        - --port
        - :8005
        ports:
        - name: aws-sigv4-proxy
          containerPort: 8005
  statefulSet:
      enabled: "true"
  remoteWrite:
      - url: http://localhost:8005/workspaces/${WORKSPACE_ID}/api/v1/remote_write

Execute the following command to modify the Prometheus server configuration to deploy the signing proxy and configure the remoteWrite endpoint:

helm upgrade --install prometheus-for-amp prometheus-community/prometheus -n prometheus -f ./amp_ingest_override_values.yaml

With the above configurations, the Prometheus server is now ready to scrape metrics from services deployed in the cluster and send them to the specified workspace within Amazon Managed Service for Prometheus via the AWS signing proxy.
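
As a quick sanity check, you can tail the signing proxy sidecar to confirm that remote-write requests are being forwarded. This is a sketch: the workload name assumes the prometheus-for-amp Helm release and the statefulSet setting from the values file above, so adjust if your names differ.

# List the pods created by the Helm release
kubectl get pods -n prometheus

# Tail the sidecar logs; forwarded remote-write requests should show up here
kubectl logs -n prometheus statefulset/prometheus-for-amp-server -c aws-sigv4-proxy-sidecar --tail=20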

An application instrumented with the Prometheus client library is now deployed as a replica set to the Amazon EKS cluster. It tracks the number of incoming HTTP requests using a Prometheus Counter named http_requests_total and exposes this data over HTTP at the endpoint /metrics. Invoking this endpoint gives the following output, which is scraped periodically by the Prometheus server.

# HELP http_requests_total Total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{job="recommender",path="/user/product",} 86.0
http_requests_total{job="recommender",path="/popular/product",} 128.0
http_requests_total{job="recommender",path="/popular/category",} 345.0

Visualizing metrics using Grafana

The metrics collected in a workspace within Amazon Managed Service for Prometheus can be visualized using Grafana. Grafana v7.3.x has added a new feature to support AWS Signature Version 4 (SigV4) authentication and we will be using that version here. A self-managed Grafana installation is deployed to the Amazon EKS cluster using Helm charts as follows:

helm repo add grafana https://grafana.github.io/helm-charts
kubectl create ns grafana
helm install grafana-for-amp grafana/grafana -n grafana

Update your Grafana server to use the AWS signing proxy

Create a new file and name it amp_query_override_values.yaml. This file updates your Grafana deployment to use the IAM-enabled service account and enables SigV4 authentication so that Grafana can sign the queries it sends to the AMP workspace.


serviceAccount:
    name: "iamproxy-service-account"
    annotations:
        eks.amazonaws.com/role-arn: "${IAM_PROXY_PROMETHEUS_ROLE_ARN}"
grafana.ini:
  auth:
    sigv4_auth_enabled: true

Now execute the following command to update your Grafana environment.

helm upgrade --install grafana-for-amp grafana/grafana -n grafana -f ./amp_query_override_values.yaml

You can now access Grafana by forwarding the port to http://localhost:5001 using the following command. Replace the string GRAFANA_POD_NAME with the actual Grafana pod name you just created.

kubectl port-forward -n grafana pods/GRAFANA_POD_NAME 5001:3000

Next, open Grafana in a web browser using the above URL and log in with the admin username. The password is obtained from the Kubernetes secret as follows:

kubectl get secrets grafana-for-amp -n grafana -o jsonpath='{.data.admin-password}'|base64 --decode

Before we can visualize the metrics in Grafana, it has to be configured with one or more data sources. Here, we will specify the workspace within Amazon Managed Service for Prometheus as a data source, as shown below. In the URL field, specify the Endpoint – query URL displayed in the AMP workspace details page without the /api/v1/query string at the end of the URL.
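
For reference, the URL you enter should have the following shape, mirroring the remote-write endpoint configured earlier but without the trailing API path (verify it against the value shown on your workspace details page):

https://aps-workspaces.<AWS_REGION>.amazonaws.com/workspaces/<WORKSPACE_ID>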

Configure AMP data source

Configure AMP data source

You’re now ready to query metrics data for the Prometheus Counter http_requests_total stored in the managed service workspace and visualize the rate of HTTP requests over a trailing 5-minute period using a Prometheus query as follows:
sum(rate(http_requests_total{exported_job="recommender"}[5m])) by (path)

The figure below illustrates how to visualize this metric in Grafana across the different path labels captured in the Prometheus Counter.

Visualizing rate of HTTP requests using metrics retrieved from Amazon Managed Service for Prometheus

PromQL and metric visualization

In addition to service metrics collected from application workloads, we can also query system metrics captured by Prometheus for all the containers and nodes in a Kubernetes cluster. For example, the Prometheus Counter node_network_transmit_bytes_total captures the amount of data transmitted from each node of a cluster. The figure below visualizes the rate of data transfer from each node using a Prometheus query as follows:
sum(rate(node_network_transmit_bytes_total[5m])) by (instance)

Visualizing rate of data transfer using metrics retrieved from Amazon Managed Service for Prometheus

PromQL and metric data visualization

Users may also visualize these metrics using Amazon Managed Service for Grafana (AMG) as outlined in this blog post.

Concluding remarks

Prometheus is an extremely popular open source monitoring tool that provides powerful querying features and has wide support for a variety of workloads. This blog post outlined the steps involved in using Amazon Managed Service for Prometheus to securely ingest, store, and query Prometheus metrics collected from application workloads deployed to an Amazon EKS cluster. Amazon Managed Service for Prometheus can also be used in conjunction with any Prometheus-compatible monitoring and alerting service to collect metrics from other container environments such as Amazon ECS, self-managed Kubernetes on AWS, or on-premises infrastructure. Workspaces within Amazon Managed Service for Prometheus serve as a valid data source for Grafana. Therefore, users can visualize these metrics using a self-managed installation of Grafana or Amazon Managed Service for Grafana.

Authors

Viji Sarathy is a Senior Solutions Architect at Amazon Web Services. He has 20+ years of experience in building large-scale, distributed software systems in a broad range of verticals, both traditional and cloud native software stacks. His current interests are in the area of Container Services and Machine Learning. He has an educational background in Aerospace Engineering, earning his Ph.D from the University of Texas at Austin, specializing in Computational Mechanics. He is an avid runner and cyclist.

 

 

Imaya Kumar Jagannathan is a Senior Solution Architect focused on Amazon CloudWatch and AWS X-Ray. He is passionate about Monitoring and Observability and has a strong application development and architecture background. He likes working on distributed systems and is excited to talk about microservice architecture design. He loves programming on C#, working with Containers and Serverless technologies.

Amazon Managed Grafana – Getting Started

The content below is taken from the original ( Amazon Managed Grafana – Getting Started), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon Managed Service for Grafana (AMG) is a fully managed and secure data visualization service that enables customers to instantly query, correlate, and visualize operational metrics, logs, and traces for their applications from multiple data sources. AMG is based on the open source Grafana project, a widely deployed data visualization tool popular for its extensible data source support. Developed together with Grafana Labs, AMG manages the provisioning, setup, scaling, and maintenance of Grafana, eliminating the need for customers to do this themselves. Customers also benefit from built-in security features that enable compliance with governance requirements, including single sign-on, fine-grained data access control, and audit reporting. AMG is integrated with AWS data sources that collect operational data, such as Amazon CloudWatch, Amazon Elasticsearch Service, Amazon Timestream, AWS IoT SiteWise, AWS X-Ray, and Amazon Managed Service for Prometheus (AMP), and provides plug-ins to popular open-source databases, third-party ISV monitoring tools, as well as other cloud services. With AMG you can easily visualize information from multiple AWS services, AWS accounts, and Regions in a single Grafana dashboard.

You can also perform an in-place upgrade to Grafana Enterprise to get access to additional features and plugins. With Grafana Enterprise, you can consolidate your data from AppDynamics, DataDog, Dynatrace, New Relic, MongoDB, Oracle Database, ServiceNow, Snowflake, Splunk, and Wavefront. Additionally, you can access support and training content directly from Grafana Labs to help you explore and adopt Grafana’s advanced features easily. Click here to learn more.

Before you begin

To use AMG in a flexible and convenient manner, we chose to leverage AWS Single Sign-On (SSO) for user management. AWS SSO is available once you've enabled AWS Organizations. To check whether an AWS account is part of an AWS Organization, head over to https://console.aws.amazon.com/organizations/; you should see a view akin to the following. If you do not see AWS Organizations activated, go ahead and create one:

Existing AWS Organization

Existing AWS Organization

AMG integrates with AWS SSO so that you can easily assign users and groups from your existing user directory such as Active Directory, LDAP, or Okta within the AMG workspace and single sign on using your existing user ID and password. This allows you to enforce existing login security requirements for your company such as two-factor authentication and password complexity.

If you don’t have an existing user directory or do not want to integrate with external identity providers through the Security Assertion Markup Language (SAML) 2.0 standard, you can create local users and passwords within AWS SSO and use them to sign in to the Grafana workspace:

SSO console: add user

SSO console: add user

With the preparations around AWS SSO and AWS Organizations out of the way, we’re ready to get into the core AMG setup.

Setting up AMG on your AWS account

You can easily spin up on-demand, autoscaled Grafana workspaces (virtual Grafana servers) that enable you to create unified dashboards across multiple data sources. Before we can use AMG for the following example, we need to set it up. In the following we’re using the AWS console to walk you through the required steps and comment on things to consider when performing each step.

After you hit the Create workspace button in the upper right corner of the AMG console landing page, give your new workspace a name and, optionally, a description:

Create new AMG workspace

Create new AMG workspace

Next, you need to define user and data access permissions. Unless you have a use case where you have to or want to manage the underlying IAM role and policy yourself (to have fine-grained control for other AWS services), we suggest that you let AMG manage the creation of IAM roles and policies for accessing your AWS Services data.

In this step you also have to enable AWS Single Sign-On (SSO) for AMG since this is how we manage user authentication to Grafana workspaces:

Configure AMG workspace settings

Configure AMG workspace settings

If you haven’t set up users via AWS SSO as mentioned above you can use the inline experience offered by AMG and click Create user at this step:

Enable SSO, create user inline

Enable SSO, create user inline

Then fill in the details in the form, that is, provide their email address and first as well as last name:

Create SSO user

Create SSO user

You should quickly get a confirmation mail with the invitation:

Invitation mail for SSO user

Invitation mail for SSO user

The user (in our case grafana-user) can now use the URL provided by SSO (something like d-xxxxx-awsapps.com/start) or the specific Grafana workspace URL from the workspace details console page to log in to their environment. There, they will be prompted to change their password using the one-time password that you, as an admin, get from SSO and share with them (see the screenshot above).

Back to the AMG workspace setup: we're almost done. We now need to tell AMG which data sources we want to consume and visualize data from. To be able to try out everything, we have selected all of the following, but you may want to restrict this to the necessary subset for your use case:

AMG workspace permissions

AMG workspace permissions

As usual, you will have a final opportunity to review your settings and then confirm the creation of the AMG workspace:

AMG workspace creation review

AMG workspace creation review

Once the workspace is created, you can assign users access to the Grafana workspace. You can either assign an individual user from AWS SSO, or you can choose to assign a user group:

AMG workspace, assign user

AMG workspace, assign user

Now we’re done from an administrative point of view, so let’s switch gears. We can use the SSO user from above to gain access to the AMG workspace. Now you can click on the Grafana workspace URL from the workspace details page, as shown in the image above, to log into your Grafana workspace using the SSO credentials (user grafana-user) and you should see AMG enabled and ready for you to access:

SSO landing page with AMG listed

SSO landing page with AMG listed

Now all you have to do is to click on the AMG icon and you’re logged into your Grafana workspace:

Grafana landing page

Grafana landing page

The setup is completed with this and we’re ready to use AMG. Let’s start with consuming data from Prometheus.

Integration with Amazon Managed Service for Prometheus (AMP) and other data sources

Amazon Managed Service for Grafana supports a variety of data sources such as Amazon Managed Service for Prometheus, Amazon CloudWatch, AWS X-Ray, Amazon Elasticsearch, Amazon Timestream, the AWS IoT SiteWise plugin, and several others. The full list of data sources can be found here.

AMG can auto-discover the accounts and resources you have for the AWS data sources and auto-configure them based on the permissions set up using CloudFormation.

Auto-detected AWS data sources

Auto-detected AWS data sources

Provisioning a datasource

Provisioning a datasource

This option discovers the accounts and resources you have for the six AWS Services that AMG natively integrates with. Based on the permissions you granted during workflow creation, you can now just check the box for the account or resource you want to add, and AMG will automatically configure the data source with the right IAM role permissions without you having to manually copy and paste.

Manually configuring an AWS data source

You can also manually configure the data sources by following the steps below:

  • Select the Datasource plugin
  • Choose from one of the following authentication mechanisms:
    • Access & Secret key
    • Credentials file
    • ARN of the AssumeRole to authenticate into your AWS account
  • Select a default AWS Region (optional)

In this example, we are connecting to an Amazon Managed Prometheus (AMP) data source:

Connecting to AMP datasource manually

Connecting to AMP datasource manually

Once connected, you will be able to create dashboard panels by selecting the newly connected data source and query metrics using PromQL. Here is a screenshot showing metrics from an EKS cluster queried on an AMP data source.

EKS Pod Metrics dashboard

EKS Pod Metrics dashboard

Out-of-the-box Dashboards

Several AWS datasource plug-ins come with many out-of-the-box dashboards to help you get started quickly. You can find them under the Dashboards tab as shown below.

List of out-of-the-box dashboards for Amazon CloudWatch

List of out-of-the-box dashboards for Amazon CloudWatch

You can easily import a dashboard by clicking the Import button. Below you can see a screenshot of the Amazon EC2 dashboard.

EC2 Dashboard

EC2 Dashboard

Creating custom dashboards

Grafana supports a variety of panel visualizations for creating dashboards from a wide range of data sources. Below you can see a dashboard showing AWS X-Ray trace data from the One Observability Demo application. You can use AWS X-Ray filter expressions to create dashboard panels in Grafana to visualize trace data as shown below.

Custom dashboard showing AWS X-Ray trace data

Custom dashboard showing AWS X-Ray trace data

You can also investigate a single trace to see the segment timeline by simply clicking on a Trace ID. Grafana also provides deep links to the AWS X-Ray console where you can use tools such as X-Ray Analytics.

You can take advantage of a large collection of pre-built Grafana dashboards built by the community that can be easily imported into your Grafana workspace and provide a domain specific quick start to visualizing and investigating observability data for a variety of popular data sources.

Conclusion

With AMG (and AMP), you can now use Grafana and Prometheus without having to worry about the operational management of maintaining infrastructure resources, and let AWS take care of the undifferentiated heavy lifting.

More importantly, you can rely on open standards such as CNCF OpenTelemetry along with our own AWS Distro for OpenTelemetry that, together with our open source-based, fully managed services for Grafana and Prometheus, enable you to build powerful observability on AWS.

Authors

Imaya Kumar Jagannathan

Imaya is a Senior Solution Architect focused on Amazon CloudWatch and AWS X-Ray. He is passionate about Monitoring and Observability and has a strong application development and architecture background. He likes working on distributed systems and is excited to talk about micro-service architecture design. He loves programming on C#, working with Containers and Serverless technologies.

 

Michael Hausenblas

Michael is an Open Source Product Developer Advocate in the AWS container service team covering open source observability and service meshes. Before AWS, Michael worked at Red Hat, Mesosphere, MapR and as a PostDoc in applied research. Reach him on Twitter via @mhausenblas.

Securing the post-quantum world

The content below is taken from the original ( Securing the post-quantum world), to continue reading please visit the site. Remember to respect the Author & Copyright.

Securing the post-quantum world

Quantum computing is inevitable; cryptography prepares for the future

Securing the post-quantum world

Quantum computing began in the early 1980s. It operates on principles of quantum physics rather than the limitations of circuits and electricity, which is why it is capable of processing highly complex mathematical problems so efficiently. Quantum computing could one day achieve things that classical computing simply cannot.

The evolution of quantum computers has been slow. Still, work is accelerating, thanks to the efforts of academic institutions such as Oxford, MIT, and the University of Waterloo, as well as companies like IBM, Microsoft, Google, and Honeywell. IBM has held a leadership role in this innovation push and has named optimization the most likely application for consumers and organizations alike. Honeywell expects to release what it calls the “world’s most powerful quantum computer” for applications like fraud detection, optimization for trading strategies, security, machine learning, and chemistry and materials science.

In 2019, the Google Quantum Artificial Intelligence (AI) team announced that their 53-qubit (analogous to bits in classical computing) machine had achieved “quantum supremacy.” This was the first time a quantum computer was able to solve a problem faster than any classical computer in existence. This was considered a significant milestone.

Quantum computing will change the face of Internet security forever — particularly in the realm of cryptography, which is the way communications and information are secured across channels like the Internet. Cryptography is critical to almost every aspect of modern life, from banking to cellular communications to connected refrigerators and systems that keep subways running on time. This ultra-powerful, highly sophisticated new generation of computing has the potential to unravel decades of work that have been put into developing the cryptographic algorithms and standards we use today.

Quantum computers will crack modern cryptographic algorithms

Quantum computers can take a very large integer and find out its prime factor extremely rapidly by using Shor’s algorithm. Why is this so important in the context of cryptographic security?

Most cryptography today is based on algorithms that incorporate difficult problems from number theory, like factoring. The forerunner of nearly all modern cryptographic schemes is RSA (Rivest-Shamir-Adleman), which was devised back in 1976. Basically, every participant of a public key cryptography system like RSA has both a public key and a private key. To send a secure message, data is encoded as a large number and scrambled using the public key of the person you want to send it to. The person on the receiving end can decrypt it with their private key. In RSA, the public key is a large number, and the private key is its prime factors. With Shor’s algorithm, a quantum computer with enough qubits could factor large numbers. For RSA, someone with a quantum computer can take a public key and factor it to get the private key, which allows them to read any message encrypted with that public key. This ability to factor numbers breaks nearly all modern cryptography. Since cryptography is what provides pervasive security for how we communicate and share information online, this has significant implications.
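
As a toy illustration (the numbers below are the standard small textbook example and are not from the original post):

$$p = 61,\quad q = 53,\quad n = pq = 3233,\quad \varphi(n) = (p-1)(q-1) = 3120,$$
$$e = 17,\quad d \equiv e^{-1} \pmod{\varphi(n)} = 2753,\quad \text{since } 17 \cdot 2753 = 46801 = 15 \cdot 3120 + 1.$$

The public key is (n, e) = (3233, 17); the private key d = 2753 falls out immediately once the factors 61 and 53 are known. Factoring 3233 is trivial, but factoring the 2048-bit moduli used in practice is infeasible for classical computers; that factoring step is exactly what Shor's algorithm makes efficient on a sufficiently large quantum computer.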

Theoretically, if an adversary were to gain control of a quantum computer, they could create total chaos. They could create cryptographic certificates and impersonate banks to steal funds, disrupt Bitcoin, break into digital wallets, and access and decrypt confidential communications. Some liken this to Y2K. But, unlike Y2K, there’s no fixed date as to when existing cryptography will be rendered insecure. Researchers have been preparing and working hard to get ahead of the curve by building quantum-resistant cryptography solutions.

When will a quantum computer be built that is powerful enough to break all modern cryptography? By some estimates, it may take 10 to 15 years. Companies and universities have made a commitment to innovation in the field of quantum computing, and progress is certainly being made. Unlike classical computers, quantum computers rely on quantum effects, which only happen at the atomic scale. To instantiate a qubit, you need a particle that exhibits quantum effects like an electron or a photon. These particles are extremely small and hard to manage, so one of the biggest hurdles to the realization of quantum computers is how to keep the qubits stable long enough to do the expensive calculations involved in cryptographic algorithms.

Both quantum computing and quantum-resistant cryptography are works in progress

It takes a long time for hardware technology to develop and mature. Similarly, new cryptographic techniques take a long time to discover and refine. To protect today’s data from tomorrow’s quantum adversaries, we need new cryptographic techniques that are not vulnerable to Shor’s algorithm.

The National Institute of Standards and Technology (NIST) is leading the charge in defining post-quantum cryptography algorithms to replace RSA and ECC. There is a project currently underway to test and select a set of post-quantum computing-resistant algorithms that go beyond existing public-key cryptography. NIST plans to make a recommendation sometime between 2022 and 2024 for two to three algorithms for both encryption and digital signatures. As Dustin Moody, NIST mathematician points out, the organization wants to cover as many bases as possible: “If some new attack is found that breaks all lattices, we’ll still have something to fall back on.”

We’re following closely. The participants of NIST have developed high-speed implementations of post-quantum algorithms on different computer architectures. We’ve taken some of these algorithms and tested them in Cloudflare’s systems in various capacities. Last year, Cloudflare and Google performed the TLS Post-Quantum Experiment, which involved implementing and supporting new key exchange mechanisms based on post-quantum cryptography for all Cloudflare customers for a period of a few months. As an edge provider, Cloudflare was well positioned to turn on post-quantum algorithms for millions of websites to measure performance and use these algorithms to provide confidentiality in TLS connections. This experiment led us to some useful insights around which algorithms we should focus on for TLS and which we should not (sorry, SIDH!).

More recently, we have been working with researchers from the University of Waterloo and Radboud University on a new protocol called KEMTLS, which will be presented at Real World Crypto 2021. In our last TLS experiment, we replaced the key negotiation part of TLS with quantum-safe alternatives but continued to rely on digital signatures. KEMTLS is designed to be fully post-quantum and relies only on public-key encryption.

On the implementation side, Cloudflare team members including Armando Faz Hernandez and visiting researcher Bas Westerbaan have developed high-speed assembly versions of several of the NIST finalists (Kyber, Dilithium), as well as other relevant post-quantum algorithms (CSIDH, SIDH) in our CIRCL cryptography library written in Go.

Securing the post-quantum world
A visualization of AVX2-optimized NTT for Kyber by Bas Westerbaan

Post-quantum security, coming soon?

Everything that is encrypted with today's public key cryptography can be decrypted with tomorrow's quantum computers. Imagine waking up one day, and everyone's diary from 2020 is suddenly public. Although it's impossible to find enough storage to keep a record of all the ciphertext sent over the Internet, there are current and active efforts to collect a lot of it. This makes deploying post-quantum cryptography as soon as possible a pressing privacy concern.

Cloudflare is taking steps to accelerate this transition. First, we endeavor to use post-quantum cryptography for most internal services by the end of 2021. Second, we plan to be among the first services to offer post-quantum cipher suites to customers as standards emerge. We’re optimistic that collaborative efforts among NIST, Microsoft, Cloudflare, and other computing companies will yield a robust, standards-based solution. Although powerful quantum computers are likely in our future, Cloudflare is helping to make sure the Internet is ready for when they arrive.

For more on Quantum Computing, check out my interview with Scott Aaronson and this segment by Sofía Celi and Armando Faz (in Spanish!) on Cloudflare TV.

Microsoft brings new process mining features to Power Automate

The content below is taken from the original ( Microsoft brings new process mining features to Power Automate), to continue reading please visit the site. Remember to respect the Author & Copyright.

Power Automate is Microsoft’s platform for streamlining repetitive workflows — you may remember it under its original name: Microsoft Flow. The market for these robotic process automation (RPA) tools is hot right now, so it’s no surprise that Microsoft, too, is doubling down on its platform. Only a few months ago, the team launched Power Automate Desktop, based on its acquisition of Softomotive, which helps users automate workflows in legacy desktop-based applications, for example. After a short time in preview, Power Automate Desktop is now generally available.

The real news today, though, is that the team is also launching a new tool, the Process Advisor, which is now in preview as part of the Power Automate platform. This new process mining tool provides users with a new collaborative environment where developers and business users can work together to create new automations.

The idea here is that business users are the ones who know exactly how a certain process works. With Process Advisor, they can now submit recordings of how they process a refund, for example, and then submit that to the developers, who are typically not experts in how these processes usually work.

What’s maybe just as important is that a system like this can identify bottlenecks in existing processes where automation can help speed up existing workflows.

Image Credits: Microsoft

“This goes back to one of the things that we always talk about for Power Platform, which, it’s a corny thing, but it’s that development is a team sport,” Charles Lamanna, Microsoft’s corporate VP for its Low Code Application Platform, told me. “That’s one of our big focuses: how do you bring people to collaborate and work together who normally don’t. This is great because it actually brings together the business users who live the process each and every day with a specialist who can build the robot and do the automation.”

The way this works in the backend is that Power Automate’s tools capture exactly what the users do and click on. All this information is then uploaded to the cloud and — with just five or six recordings — Power Automate’s systems can map how the process works. For more complex workflows, or those that have a lot of branches for different edge cases, you likely want more recordings to build out these processes, though.

Image Credits: Microsoft

As Lamanna noted, building out these workflows and process maps can also help businesses better understand the ROI of these automations. “This kind of map is great to go build an automation on top of it, but it’s also great because it helps you capture the ROI of each automation you do because you’ll know, for each step, how long it took you,” Lamanna said. “We think that this concept of Process Advisor is probably going to be one of the most important engines of adoption for all these low-code/no-code technologies that are coming out. Basically, it can help guide you to where it’s worth spending the energy, where it’s worth training people, where it’s worth building an app, or using AI, or building a robot with our RPA like Power Automate.”

Lamanna likened this to the advent of digital advertising, which for the first time helped marketers quantify the ROI of advertising.

The new process mining capabilities in Power Automate are now available in preview.

Azure Digital Twins now generally available: Create IoT solutions that model the real world

The content below is taken from the original ( Azure Digital Twins now generally available: Create IoT solutions that model the real world), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Digital Twins, virtually model the physical world.

Today, organizations are showing a growing appetite for solutions that provide a deeper understanding of not just assets, but also the complex interactions across environments. This shift is driven by the need to be more agile and respond to real-time changes in the world, and it involves moving beyond tracking individual assets or devices to focusing on everything within a rich environment—from people, places, processes, and things to their very relationships.

To really understand these intricate environments, companies are creating digital replicas of their physical world also known as digital twins. With Microsoft Azure Digital Twins now generally available, this Internet of Things (IoT) platform provides the capabilities to fuse together both physical and digital worlds, allowing you to transform your business and create breakthrough customer experiences.

According to the IoT Signals report, the vast majority of companies with a digital twin strategy see it as an integral part of their IoT solution. Yet the reality is that modeling entire environments can be complicated. It involves bringing together the many elements that make up a digital twin to capture actionable insights. However, siloed data across these experiences makes it challenging to build digital twin solutions that bring those models to life, and doing so in a scalable, secure way is often time-consuming.

Azure Digital Twins now ready for enterprise-grade deployments

Azure Digital Twins is an industry-first solution. It breaks down silos within intelligent environments by fusing data from previously disparate devices and business systems. It means you can track both past and present events, simulate possibilities, and help predict future events for those environments.

With Azure Digital Twins now generally available, it offers ready-to-use building blocks that can simplify the creation of detailed, comprehensive digital models that bring solutions to life. This trusted enterprise-grade platform brings the scale, reliability, security, and broad market availability that enables customers to build production-ready solutions.
 

Customer insights and partner solutions with Azure Digital Twins

Intelligent environments come in all shapes and sizes, and their attributes and outcomes are as varied as the industries in which they are used. As such, the possibilities for digital twins are endless—they can be used to model industries and environments that include factories, buildings, stadiums, and even entire cities. Today, we are working with customers and partners who are creating digital twins to model everything from facilities and spaces, such as smart buildings, to manufactured products and the very processes within their supply chains.

One company pushing the boundaries of renewable energy production and efficiency is Korea-based Doosan Heavy Industries and Construction. Doosan worked with Microsoft and Bentley Systems to develop a digital twin of its wind farms, which allows operators to remotely monitor equipment performance and predict energy generation based on weather conditions.

“To maintain our competitive edge and increase energy efficiency, we need to optimize turbine operations through data analysis and gather feedback to develop and test advances in wind turbine development. We created solutions with Azure Digital Twins, Azure IoT Hub, and Bentley iTwin to make that possible by combining data from multiple sources into actionable insights.”—Seiyoung Jang, General Manager, Strategy and Innovation, Doosan

The solution uses Azure Digital Twins to combine real-time and historical IoT, weather, and other operational data with physics and machine learning-based models to accurately predict production output for each turbine in the farm. Based on the simulated models, Doosan can proactively adjust the pitch and performance of individual turbines, maximize energy production, and generate insights that will improve the future design of its next-generation wind turbines.

The innovation and productivity that Azure Digital Twins enables doesn’t stop there:

  • Johnson Controls is collaborating with Microsoft to digitally transform how buildings and spaces are conceived, built, and managed. At the center of this collaboration is a holistic integration between their platform, OpenBlue Digital Twin, and Azure Digital Twins. This partnership helps enable integrated building management, and the platform serves as the foundation for energy and space optimization, predictive maintenance, and remote operations.
  • Ansys, a market leader in simulation software, now offers native integration of simulation-based digital twins via Ansys Twin Builder for customers using Azure Digital Twins. This lets engineers quickly deliver real-time, physics-based simulation models for operational use. Doing so helps make it possible to efficiently integrate the simulation-based twins into a broader IoT solution.
  • Brookfield Properties, one of the world’s largest commercial office landlords, partnered with Willow to create a digital replica of their One Manhattan West (OMW) property using Willow’s Azure Digital Twins-based product. Dedicated to creating a “live, work, play” ecosystem, this lays the groundwork for a digital-first future while unlocking cost savings, energy optimizations, and value-added services today. This solution leverages premade, open source models by RealEstateCore that can be used to help accelerate development across industries.

Azure Digital Twins platform capabilities

The Azure Digital Twins platform was built to simplify and accelerate the creation of IoT connected solutions. With a comprehensive set of capabilities, companies can develop customized, connected solutions with ease. And with the ability to layer your vertical domain specialization—such as 3D or 4D visualizations, physics-based simulation, and AI—on top of Azure Digital Twins, it’s easier than ever to focus on driving results for your customers.

This also includes new developer experiences with broad language support for SDKs, Digital Twins Definition Language (DTDL) modeling and validation tools, and the Azure Digital Twins explorer sample which helps visualize the graph representing your environment. Other capabilities at the core of the Azure Digital Twins platform allow you to:

  • Use an open modeling language, DTDL, to easily create custom models of intelligent environments (a minimal DTDL example follows this list). In addition, premade models and converter tools for vertical standards help accelerate development when getting started with use cases across industries.
  • Bring digital twins to life with a live execution environment that is scalable and secure and uses data from IoT and other sources. Using a robust event system, you can build dynamic business logic that helps keep business apps fresh and always up to date. You can also extract insights in the context of the modeled world by querying data on a wide range of conditions and relationships.
  • Break down silos using input from IoT and business systems by easily connecting assets such as IoT and Azure IoT Edge devices via Azure IoT Hub, as well as existing business systems such as ERP and CRM to Azure Digital Twins to extract relevant insights across the entire environment.
  • Output to storage and analytics by integrating Azure Digital Twins with other Azure services. This includes the ability to send data to Azure Data Lake for long-term storage or to data analytics services such as Azure Synapse Analytics to apply machine learning. Another important use case is time series data integration and historian analytics with Azure Time Series Insights.
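
For a concrete sense of what a DTDL model looks like, here is a minimal interface for a hypothetical thermostat (illustrative only; the identifiers are made up and the syntax follows DTDL v2):

{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Property", "name": "targetTemperature", "schema": "double", "writable": true }
  ]
}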

Digital Twins Consortium: Advancing technology with open partnerships and ecosystems

Built upon open partnerships and open ecosystems, Microsoft co-founded the Digital Twins Consortium in May 2020 in collaboration with other companies. Together, we are contributing to best practices and shared digital twin models. We are committed to helping create an industry standard and bringing digital twins—including DTDL—as well as all the different vertical specializations of digital twins across industries into an open ecosystem.

There are now more than 170 members that span companies, government agencies, and academia to drive consistency in the vocabulary, architecture, security, and compatibility of digital twin technology as it’s used across industries. This means that everyone across the ecosystem can benefit from a collective pool of industry standardized models so you can accelerate your digital twin journey.

Get started with Azure Digital Twins

As Microsoft Azure partners and businesses who are already using Azure Digital Twins have shown, it offers a robust platform for building enterprise grade IoT connected solutions with the scale, compliance, security, and privacy benefits that customers can bet their business on. Learn more about Azure Digital Twins and how to get started building digital replicas of your own intelligent business environment.

Additional resources

•    Learn more about Azure Digital Twins.
•    Get started with Azure Digital Twins technical resources.
•    Watch Azure Digital Twins demo video.
•    Read Azure Digital Twins customer stories.
•    Watch the Azure Digital Twins technical deep dive video featuring the WillowTwin solution.
•    Learn how IoT and Azure Digital Twins can help connect urban environments.
•    Learn more about Microsoft and Johnson Controls digital twin collaboration.

How to Enable or Disable Scrollable Tabstrip in Google Chrome

The content below is taken from the original ( How to Enable or Disable Scrollable Tabstrip in Google Chrome), to continue reading please visit the site. Remember to respect the Author & Copyright.

Whenever you open multiple tabs on your Chrome browser, the tabs start narrowing and […]

This article How to Enable or Disable Scrollable Tabstrip in Google Chrome first appeared on TheWindowsClub.com.

OSHW Turns 10: Lessons Learned Over a Decade of Open Hardware

The content below is taken from the original ( OSHW Turns 10: Lessons Learned Over a Decade of Open Hardware), to continue reading please visit the site. Remember to respect the Author & Copyright.

It is appropriate that, 10 years after the first Open Hardware Summit, open source hardware was a key part of the initial Covid-19 response. Engineers, designers, and medical professionals collaborated from around the world to design and deploy medical equipment to meet the world’s unprecedented need. In many ways, this is […]

Read more on MAKE

The post  OSHW Turns 10: Lessons Learned Over a Decade of Open Hardware appeared first on Make: DIY Projects and Ideas for Makers.

Enabling Microsoft-based workloads with file storage options on Google Cloud

The content below is taken from the original ( Enabling Microsoft-based workloads with file storage options on Google Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Enterprises are rapidly moving Microsoft and Windows-based workloads to the cloud to reduce license spend and embark on modernization strategies to fully leverage the power of cloud-native architecture. Today’s business climate requires agility, elasticity, scale, and cost optimization, all of which are far more difficult to attain by operating out of data centers. Google Cloud offers a top-level enterprise-grade experience for Microsoft-based services and tools. 

Many Windows-based workloads require a Server Message Block (SMB) file service component. For example, highly available SAP application servers running in Windows Server clusters need SMB file servers to store configuration files and logs centrally. The COVID-19 pandemic has resulted in increased demand for virtual desktop solutions to enable workers to adapt to the sudden necessity of working remotely. Those virtual desktop users often require access to SMB file servers to store documents and to collaborate with coworkers. 

Fortunately, there are numerous options for SMB file services in Google Cloud that meet the varying needs of Microsoft shops. They fall into three categories: fully managed, semi-managed, and self-managed services. In this post, we’ll examine several options across those three buckets. (Note: this is by no means an exhaustive list of SMB file service providers for Google Cloud. Rather, this is a brief review of some of the common ones.)

Fully managed SMB file services

For many enterprises, reducing operational overhead is a key objective of their cloud transformation. Fully managed services provide the capabilities and outcomes, without requiring IT staff to worry about mundane tasks like software installation and configuration, application patching, and backup. These managed SMB file service options let customers get their Windows applications and users to work expeditiously, reducing toil and risk. (Note that these are managed partner-provided services, so make sure to check the region you’ll be using to ensure availability.)

NetApp Cloud Volumes Service

If you work in IT and have ever managed, used, or thought about storage, chances are you’re familiar with NetApp. NetApp has been providing enterprise-grade solutions since 1992. With NetApp Cloud Volumes Service (CVS), you get highly available, cloud-native, managed SMB services that are well-integrated with Google Cloud. Storage volumes can be sized from 1 to 100 TB to meet the demands of large-scale application environments, and the service includes tried-and-true NetApp features like automated snapshots and rapid volume provisioning. It can be deployed right from the Google Cloud Marketplace, managed in the Google Cloud console, supported by Google, and paid for in your Google Cloud bill.

Dell Technologies PowerScale

Dell Technologies is another leader in the enterprise storage market, and we have partnered with them to offer PowerScale on Google Cloud. PowerScale leverages an all-flash architecture for blazing fast storage operations. It is also backward compatible, allowing you to choose between PowerScale all-flash nodes and Isilon nodes in all-flash, hybrid, or archive configuration. The OneFS file system boasts a maximum of 50 PB per namespace; this thing scales! And as with NetApp, PowerScale in Google Cloud includes enterprise-grade features like snapshots, replication, and hybrid integration with on-premises storage. It’s tightly integrated with Google Cloud: it can be found in the Google Cloud Marketplace, is integrated with the Google Cloud console, and is billed and supported directly by Google.

Both of these managed file storage products support up to SMBv3, making them outstanding options to support Windows workloads, without a lot of management overhead.  

Semi-managed SMB file services

Not everyone wants fully managed SMB services. While managed services take a lot of work off your plate, as a general rule they also reduce the ways in which you can customize the solution to meet your particular requirements. Therefore, some customers prefer to use self-managed (or semi-managed) services, like the storage services below, to tailor the configurations to the exact specifications needed for their Windows workloads.

NetApp Cloud Volumes ONTAP

Like the fully managed NetApp Cloud Volumes Service, NetApp Cloud Volumes ONTAP (CVO) gives you the familiar features and benefits you’re likely used to with NetApp in your data center, including SnapMirror. However, as a semi-managed service, it’s well-suited for customers who need enhanced control and security of their data on Google Cloud. CVO deploys into your Google Cloud virtual private cloud (VPC) on Google Compute Engine instances, all within your own Google Cloud project(s), so you can enforce policies, firewall rules, and user access as you see fit to meet internal or external compliance requirements. You will need to deploy CVO yourself by following NetApp’s step-by-step instructions. In the Marketplace, you get your choice of a number of CVO price plans, each with varying SMB storage capacity (2 TB to 368 TB) and availability. NetApp Cloud Volumes ONTAP is available in all Google Cloud regions.

Panzura Freedom Hybrid Cloud Storage

Panzura Freedom is a born-in-the-cloud, hybrid file service that allows global enterprises to store, collaborate, and back up files. It presents a single, geo-distributed file system called Panzura CloudFS that’s simultaneously accessible from your Google Cloud VPCs, corporate offices, on-premises data centers, and other clouds. The authoritative data is stored in Google Cloud Storage buckets and cached in Panzura Freedom Filers deployed locally, giving your Windows applications and users high-performing access to the file system. Google Cloud’s global fiber network and 100+ points of presence (PoPs) reduce global latency to ensure fast access from anywhere. Panzura can be found in the Google Cloud Marketplace as well.  

Self-managed SMB file services

In some cases, managed services will not meet all the requirements. This is not limited to technical requirements. For example, in your industry you might be subject to a compliance regulation for which none of the managed services are certified. If you consider all of the fully managed and semi-managed SMB file service options, but none of them are just right for your budget and requirements, don’t worry. You still have the option of rolling your own Windows SMB file service on Google Cloud. This approach gives you the most flexibility of all, along with the responsibility of deploying, configuring, securing, and managing it all. Don’t let that scare you, though: These options are likely very familiar to your Microsoft-focused staff.

Windows SMB file servers on a Google Compute Engine instance

This option is quite simple: you deploy a Compute Engine instance running your preferred version of Windows Server, install the File Server role, and you’re off to the races. You’ll have all the native features of Windows at your disposal. If you’ve extended or federated your on-premises Active Directory into Google Cloud or are using the Managed Service for Active Directory, you’ll be able to apply permissions just as you do on-prem.  Persistent Disks add a great deal of flexibility to Windows file servers. You can add or expand Persistent Disks to increase the storage capacity and disk performance of your SMB file servers with no downtime. Although a single SMB file server is a single point of failure, the native protections and redundancies of Compute Engine make it unlikely that a failure will result in extended downtime. If you choose to utilize Regional Persistent Disks, your disks will be continuously replicated to a different Google Cloud zone, adding an additional measure of protection and rapid recoverability in the event of a VM or zone failure.  
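
For example, growing a file server's data disk can be scripted against the Compute Engine API. The sketch below uses the google-api-python-client library; the project, zone, and disk names are placeholders, and after the resize you still extend the NTFS volume inside Windows (Disk Management or Resize-Partition).

```python
# Minimal sketch: grow a data disk backing a Windows file server with no downtime.
# Assumes google-api-python-client and application-default credentials are configured;
# the project, zone, and disk names below are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

project = "my-project"         # placeholder
zone = "us-central1-a"         # placeholder
disk = "fileserver-data-disk"  # placeholder

# Persistent disks can only grow, never shrink.
request = compute.disks().resize(
    project=project,
    zone=zone,
    disk=disk,
    body={"sizeGb": "2048"},   # new size in GB
)
operation = request.execute()
print("Resize operation started:", operation["name"])

# After the resize completes, extend the volume inside Windows
# (Disk Management, or Resize-Partition in PowerShell).
```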

Windows clustering

If your requirements dictate that your Windows file services cannot go down, a single Windows file server will not do. Fortunately, there’s a solution: Windows Failover Clustering. With two or more Windows Compute Engine instances and Persistent Disks, you can build a highly available SMB file cluster that can survive the failure of Persistent Disks, VMs, the OS, or even a whole Google Cloud zone with little or no downtime. There are two different flavors of Windows file clusters: File Server Cluster and Scale-out File server (SOFS).  

Windows file server clusters have been around for about 20 years. The basic architecture is two Windows servers in a Windows Failover Cluster, connected to shared storage such as a storage area network (SAN). These clusters are active-passive in nature. At any given time, only one of the servers in the cluster can access the shared storage and provide file services to SMB clients. Clients access the services via a floating IP address, front-ended by an internal load balancer. In the event of a failure of the active node, the passive node will establish read/write access to the shared storage, bind the floating IP address, and launch file services. In a cloud environment, physical shared storage devices cannot be used for cluster storage. Instead, Storage Spaces Direct (S2D) may be used. S2D is a clustered storage system that combines the persistent disks of multiple VMs into a single, highly available, virtual storage pool. You can think of it as a distributed virtual SAN.

Scale-Out File Server (SOFS) is a newer and more capable clustered file service role that also runs in a Windows Failover Cluster. Like Windows File Server Clusters, SOFS makes use of S2D for cluster storage. Unlike a Windows File Server Cluster, SOFS is an active-active file server. Rather than presenting a floating IP address to clients, SOFS creates separate A records in DNS for each node in the SOFS role. Each node has a complete replica of the shared dataset and can serve files to Windows clients, making SOFS both vertically and horizontally scalable. Additionally, SOFS has some newer features that make it more resilient for application servers.  

As mentioned before, both Windows File Server Clusters and SOFS depend on S2D for shared storage. The process of installing S2D on Google Cloud virtual machines is described here, and the chosen SMB file service role may be installed afterwards. Check out the process of deploying a file server cluster role here, and the process for an SOFS role.

Scale-Out File Server or File Server Cluster?

File Server Clusters and SOFS are alike in that they provide highly available SMB file shares on S2D. SOFS is a newer technology that provides higher throughput and more scalability than File Server Cluster. However, SOFS is not optimized for the metadata-heavy operations common with end-user file utilization (opening, renaming, editing, copying, etc.). Therefore, in general, choose File Server Clusters for end-user file services and choose SOFS when your application(s) need SMB file services. See this page for a detailed comparison of features between File Server Cluster (referred to there as “General Use File Server Cluster”) and SOFS.

Which option should I choose?

We’ve described several good options for Microsoft shops to provide their Windows workloads and users access to secure, high-performing, and scalable SMB file services. How do you choose which one is best suited for your particular needs? Here are some decision criteria you should consider:

  • Are you looking to simplify your IT operations and offload operational toil? If so, look at the fully managed and semi-managed options.

  • Do you have specialized technical configuration requirements that aren’t met by a managed service? Then consider rolling your own SMB file service solution as a single Windows instance or one of the Windows cluster options.

  • Do you require a multi-zone deployment for fully automated high availability? If so, NetApp Cloud Volumes ONTAP and the single-instance Windows file server are off the table; they both run in a single Google Cloud zone.

  • Do you have a requirement for a particular Google Cloud region? If so, you’ll need to verify whether NetApp Cloud Volumes Service and NetApp Cloud Volumes ONTAP are available in the region you require. As partner services that require specialized hardware, these two services are available in many, but not all, Google Cloud regions today.

  • Do you require hybrid storage capabilities, spanning on-premises and cloud? If so, all of the managed options offer hybrid capabilities.

  • Is your budget tight? If so, and if you’re OK with some manual planning and work to minimize the downtime that’s possible with any single point of failure, then a single Windows Compute Engine instance file server will do fine. 

  • Do you require geo-diverse disaster recovery? You’re in luck—every option described here offers a path to DR.

What next?  

This post serves as a brief overview of several options for Windows file services in Google Cloud. Take a closer look at the ones that interest you. Once you’ve narrowed it down to the top candidates, you can go through the Marketplace pages (for the managed services) to get more info or start the process of launching the service. The self-managed options above include links to Google Cloud-specific instructions to get you started, then general Microsoft documentation to deploy your chosen cluster option.

Related Article

Filestore Backups eases migration of file-based apps to cloud

The new Filestore Backups lets you migrate your copy data services and backup strategy for your file systems in Google Cloud.

Read Article

New BOINC Network Website!!

The content below is taken from the original ( New BOINC Network Website!!), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’ve converted the podcast site into a full site for BOINC.

I am not a designer or developer… suggestions (and help if you have the time) are very much invited!

I particularly want to get rid of the stock images if possible.

Anyway, enjoy!

https://boinc.network

submitted by /u/jring_o to r/BOINC
[link] [comments]

Retired engineer confesses to role in sliding Microsoft Bob onto millions of XP install CDs

The content below is taken from the original ( Retired engineer confesses to role in sliding Microsoft Bob onto millions of XP install CDs), to continue reading please visit the site. Remember to respect the Author & Copyright.

‘While Bob never got to play on the big stage, he always followed the band around and got to ride on the bus’

One anniversary unlikely to be marked within the bowels of Redmond is that of Microsoft Bob. However, a retired engineer has taken to YouTube to confirm that the unloved interface did indeed enjoy a second life as digital ballast for Windows XP.…

Getting to the Core: Benchmarking Cloudflare’s Latest Server Hardware

The content below is taken from the original ( Getting to the Core: Benchmarking Cloudflare’s Latest Server Hardware), to continue reading please visit the site. Remember to respect the Author & Copyright.

Maintaining a server fleet the size of Cloudflare’s is an operational challenge, to say the least. Anything we can do to lower complexity and improve efficiency has effects for our SRE (Site Reliability Engineer) and Data Center teams that can be felt throughout a server’s 4+ year lifespan.

At the Cloudflare Core, we process logs to analyze attacks and compute analytics. In 2020, our Core servers were in need of a refresh, so we decided to redesign the hardware to be more in line with our Gen X edge servers. We designed two major server variants for the core. The first is Core Compute 2020, an AMD-based server for analytics and general-purpose compute paired with solid-state storage drives. The second is Core Storage 2020, an Intel-based server with twelve spinning disks to run database workloads.

Core Compute 2020

Earlier this year, we blogged about our 10th generation edge servers or Gen X and the improvements they delivered to our edge in both performance and security. The new Core Compute 2020 server leverages many of our learnings from the edge server. The Core Compute servers run a variety of workloads including Kubernetes, Kafka, and various smaller services.

Configuration Changes (Kubernetes)

Component | Previous Generation Compute | Core Compute 2020
CPU | 2 x Intel Xeon Gold 6262 | 1 x AMD EPYC 7642
Total Core / Thread Count | 48C / 96T | 48C / 96T
Base / Turbo Frequency | 1.9 / 3.6 GHz | 2.3 / 3.3 GHz
Memory | 8 x 32GB DDR4-2666 | 8 x 32GB DDR4-2933
Storage | 6 x 480GB SATA SSD | 2 x 3.84TB NVMe SSD
Network | Mellanox CX4 Lx 2 x 25GbE | Mellanox CX4 Lx 2 x 25GbE

Configuration Changes (Kafka)

Component | Previous Generation (Kafka) | Core Compute 2020
CPU | 2 x Intel Xeon Silver 4116 | 1 x AMD EPYC 7642
Total Core / Thread Count | 24C / 48T | 48C / 96T
Base / Turbo Frequency | 2.1 / 3.0 GHz | 2.3 / 3.3 GHz
Memory | 6 x 32GB DDR4-2400 | 8 x 32GB DDR4-2933
Storage | 12 x 1.92TB SATA SSD | 10 x 3.84TB NVMe SSD
Network | Mellanox CX4 Lx 2 x 25GbE | Mellanox CX4 Lx 2 x 25GbE

Both previous generation servers were Intel-based platforms, with the Kubernetes server based on Xeon 6262 processors, and the Kafka server based on Xeon 4116 processors. One goal with these refreshed versions was to converge the configurations in order to simplify spare parts and firmware management across the fleet.

As the above tables show, the configurations have been converged with the only difference being the number of NVMe drives installed depending on the workload running on the host. In both cases we moved from a dual-socket configuration to a single-socket configuration, and the number of cores and threads per server either increased or stayed the same. In all cases, the base frequency of those cores was significantly improved. We also moved from SATA SSDs to NVMe SSDs.

Core Compute 2020 Synthetic Benchmarking

The heaviest user of the SSDs was determined to be Kafka. The majority of the time Kafka is sequentially writing 2MB blocks to the disk. We created a simple FIO script with 75% sequential write and 25% sequential read, scaling the block size from the standard page size of 4096B up to Kafka’s write size of 2MB. The results aligned with what we expected from an NVMe-based drive.
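
The post doesn't reproduce the FIO job itself, but a sweep along these lines exercises the same 75/25 sequential write/read mix across block sizes. The flag values and target path here are assumptions, not Cloudflare's exact job file:

```python
# Rough sketch of the block-size sweep described above (75% sequential write,
# 25% sequential read). Flag values and the target file are assumptions, not
# Cloudflare's actual job definition. Requires fio to be installed.
import json
import subprocess

BLOCK_SIZES = ["4k", "64k", "256k", "1m", "2m"]

for bs in BLOCK_SIZES:
    cmd = [
        "fio",
        "--name=kafka-like",
        "--filename=/data/fio-testfile",  # placeholder path on the NVMe device
        "--size=10G",
        "--rw=rw",              # mixed sequential read/write
        "--rwmixwrite=75",      # 75% writes, 25% reads
        f"--bs={bs}",
        "--ioengine=libaio",
        "--direct=1",
        "--runtime=60",
        "--time_based",
        "--output-format=json",
    ]
    result = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    job = result["jobs"][0]
    print(bs,
          "write MB/s:", round(job["write"]["bw"] / 1024, 1),   # fio reports KiB/s
          "read MB/s:", round(job["read"]["bw"] / 1024, 1))
```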

[Charts: FIO sequential read/write results across block sizes]

Core Compute 2020 Production Benchmarking

Cloudflare runs many of our Core Compute services in Kubernetes containers, some of which are multi-core. By transitioning to a single socket, we eliminated the problems associated with dual sockets, and we are guaranteed to have all cores allocated for any given container on the same socket.

Another heavy workload that is constantly running on Compute hosts is the Cloudflare CSAM Scanning Tool. Our Systems Engineering team isolated a Compute 2020 compute host and the previous generation compute host, had them run just this workload, and measured the time to compare the fuzzy hashes for images to the NCMEC hash lists and verify that they are a “miss”.

Because the CSAM Scanning Tool is very compute intensive we specifically isolated it to take a look at its performance with the new hardware. We’ve spent a great deal of effort on software optimization and improved algorithms for this tool but investing in faster, better hardware is also important.

In these heatmaps, the X axis represents time, and the Y axis represents “buckets” of time taken to verify that it is not a match to one of the NCMEC hash lists. For a given time slice in the heatmap, the red point is the bucket with the most times measured, the yellow point the second most, and the green points the least. The red points on the Compute 2020 graph are all in the 5 to 8 millisecond bucket, while the red points on the previous Gen heatmap are all in the 8 to 13 millisecond bucket, which shows that on average, the Compute 2020 host is verifying hashes significantly faster.

[Heatmaps: hash verification times, Compute 2020 vs. previous generation]

Core Storage 2020

Another major workload we identified was ClickHouse, which performs analytics over large datasets. The last time we upgraded our servers running ClickHouse was back in 2018.

Configuration Changes

Component | Previous Generation | Core Storage 2020
CPU | 2 x Intel Xeon E5-2630 v4 | 1 x Intel Xeon Gold 6210U
Total Core / Thread Count | 20C / 40T | 20C / 40T
Base / Turbo Frequency | 2.2 / 3.1 GHz | 2.5 / 3.9 GHz
Memory | 8 x 32GB DDR4-2400 | 8 x 32GB DDR4-2933
Storage | 12 x 10TB 7200 RPM 3.5” SATA | 12 x 10TB 7200 RPM 3.5” SATA
Network | Mellanox CX4 Lx 2 x 25GbE | Mellanox CX4 Lx 2 x 25GbE

CPU Changes

For ClickHouse, we use a 1U chassis with 12 x 10TB 3.5” hard drives. At the time we were designing Core Storage 2020, our server vendor did not yet have an AMD version of this chassis, so we remained on Intel. However, we moved Core Storage 2020 to a single 20 core / 40 thread Xeon processor, rather than the previous generation’s dual-socket 10 core / 20 thread processors. By moving to the single-socket Xeon 6210U processor, we were able to keep the same core count, but gained 17% higher base frequency and 26% higher max turbo frequency. Meanwhile, the total CPU thermal design power (TDP), which is an approximation of the maximum power the CPU can draw, went down from 165W to 150W.

On a dual-socket server, remote memory accesses, which are memory accesses by a process on socket 0 to memory attached to socket 1, incur a latency penalty, as seen in this table:

Measurement | Previous Generation | Core Storage 2020
Memory latency, socket 0 to socket 0 | 81.3 ns | 86.9 ns
Memory latency, socket 0 to socket 1 | 142.6 ns | N/A

An additional advantage of having a CPU with all 20 cores on the same socket is the elimination of these remote memory accesses, which take 76% longer than local memory accesses.

Memory Changes

The memory in the Core Storage 2020 host is rated for operation at 2933 MHz; however, in the 8 x 32GB configuration we need on these hosts, the Intel Xeon 6210U processor clocks them at 2666 MHz. Compared to the previous generation, this gives us a 13% boost in memory speed. While we would get a slightly higher clock speed with a balanced 6-DIMM configuration, we determined that we are willing to sacrifice that small gain in order to have the additional RAM capacity provided by the 8 x 32GB configuration.

Storage Changes

Data capacity stayed the same, with 12 x 10TB SATA drives in RAID 0 configuration for best  throughput. Unlike the previous generation, the drives in the Core Storage 2020 host are helium filled. Helium produces less drag than air, resulting in potentially lower latency.

Core Storage 2020 Synthetic benchmarking

We performed synthetic four corners benchmarking: IOPS measurements of random reads and writes using 4k block size, and bandwidth measurements of sequential reads and writes using 128k block size. We used the fio tool to see what improvements we would get in a lab environment. The results show a 10% latency improvement and 11% IOPS improvement in random read performance. Random write testing shows 38% lower latency and 60% higher IOPS. Write throughput is improved by 23%, and read throughput is improved by a whopping 90%.

Test | Previous Generation | Core Storage 2020 | % Improvement
4k Random Reads (IOPS) | 3,384 | 3,758 | 11.0%
4k Random Read Mean Latency (ms, lower is better) | 75.4 | 67.8 | 10.1% lower
4k Random Writes (IOPS) | 4,009 | 6,397 | 59.6%
4k Random Write Mean Latency (ms, lower is better) | 63.5 | 39.7 | 37.5% lower
128k Sequential Reads (MB/s) | 1,155 | 2,195 | 90.0%
128k Sequential Writes (MB/s) | 1,265 | 1,558 | 23.2%

CPU frequencies

The higher base and turbo frequencies of the Core Storage 2020 host’s Xeon 6210U processor allowed that processor to achieve higher average frequencies while running our production ClickHouse workload. A recent snapshot of two production hosts showed the Core Storage 2020 host being able to sustain an average of 31% higher CPU frequency while running ClickHouse.

Metric | Previous generation (average core frequency) | Core Storage 2020 (average core frequency) | % improvement
Mean Core Frequency | 2441 MHz | 3199 MHz | 31%
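
One simple way to take such a frequency snapshot on a Linux host is to average the per-core clocks reported by the kernel. This is only an illustration, not necessarily how the numbers above were collected:

```python
# Illustrative only: average the current per-core clock as reported by the kernel.
# This is one way to snapshot core frequency on Linux; it is not necessarily the
# method used to gather the figures in the table above.
def mean_core_frequency_mhz(path="/proc/cpuinfo"):
    freqs = []
    with open(path) as f:
        for line in f:
            if line.lower().startswith("cpu mhz"):
                freqs.append(float(line.split(":")[1]))
    return sum(freqs) / len(freqs) if freqs else 0.0

if __name__ == "__main__":
    print(f"Mean core frequency: {mean_core_frequency_mhz():.0f} MHz")
```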

Core Storage 2020 Production benchmarking

Our ClickHouse database hosts are continually performing merge operations to optimize the database data structures. Each individual merge operation takes just a few seconds on average, but since they’re constantly running, they can consume significant resources on the host. We sampled the average merge time every five minutes over seven days, then analyzed the data to find the average, minimum, and maximum merge times reported by a Core Storage 2020 host and by a previous generation host. Results are summarized below.

ClickHouse merge operation performance improvement
(time in seconds, lower is better)

Time | Previous generation | Core Storage 2020 | % improvement
Mean time to merge | 1.83 | 1.15 | 37% lower
Maximum merge time | 3.51 | 2.35 | 33% lower
Minimum merge time | 0.68 | 0.32 | 53% lower

Our lab-measured CPU frequency and storage performance improvements on Core Storage 2020 have translated into significantly reduced times to perform this database operation.

Conclusion

With our Core 2020 servers, we were able to realize significant performance improvements, both in synthetic benchmarking outside production and in the production workloads we tested. This will allow Cloudflare to run the same workloads on fewer servers, saving CapEx costs and data center rack space. The similarity of the configuration of the Kubernetes and Kafka hosts should help with fleet management and spare parts management. For our next redesign, we will try to further converge the designs on which we run the major Core workloads to further improve efficiency.

Special thanks to Will Buckner and Chris Snook for their help in the development of these servers, and to Tim Bart for validating CSAM Scanning Tool’s performance on Compute.

Microsoft’s Windows turns 35 today

The content below is taken from the original ( Microsoft’s Windows turns 35 today), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s not a stretch to say that Microsoft’s Windows is one of the most ubiquitous and well-known pieces of software the world has ever seen. At one point or another you’ve almost certainly spent some time with one of the many iconic Windows releases….

Intel’s Forgotten 1970s Dual Core Processor

The content below is taken from the original ( Intel’s Forgotten 1970s Dual Core Processor), to continue reading please visit the site. Remember to respect the Author & Copyright.

Can you remember when you received your first computer or device containing a CPU with more than one main processing core on the die? We’re guessing for many of you it was probably some time around 2005, and it’s likely that processor would have been in the Intel Core Duo family of chips. With a dual-core ESP32 now costing relative pennies it may be difficult to grasp in 2020, but there was a time when a multi-core processor was a very big deal indeed.

What if we were to tell you that there was another Intel dual-core processor back in the 1970s, and that some of you may even have owned one without ever realizing it? It’s a tale related to us by [Chris Evans], about how a team of reverse engineering enthusiasts came together to unlock the secrets of the Intel 8271.

If you’ve never heard of the 8271 you can be forgiven, for far from being part of the chip giant’s processor line it was instead a high-performance floppy disk controller that appeared in relatively few machines. An unexpected use of it came in the Acorn BBC Micro which is where [Chris] first encountered it. There’s very little documentation of its internal features, so an impressive combination of decapping and research was needed by the team before they could understand its secrets.

As you will no doubt have guessed, what they found is no general-purpose application processor but a mask-programmed dual-core microcontroller optimized for data throughput and containing substantial programmable logic arrays (PLAs). It’s a relatively large chip for its day, and with 22,000 transistors it dwarfs the relatively svelte 6502 that does the BBC Micro’s heavy lifting. Some very hard work at decoding the ROM and PLAs arrives at the conclusion that the main core has some similarity to Intel’s 8048 architecture, and the dual-core design is revealed as a solution to the problem of calculating cyclic redundancy checks on the fly at disk transfer speed. There is even another chip using the same silicon in the contemporary Intel range: the 8273 synchronous data link controller simply has a different ROM. All in all the article provides a fascinating insight into this very unusual corner of 1970s microcomputer technology.
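
For context, IBM-format floppy sectors are protected by a 16-bit CRC (commonly the CCITT polynomial 0x1021), and computing it in real time at disk transfer speed is exactly the job the second core takes on. A software rendition of that check looks like the sketch below; it is a modern illustration, not a reconstruction of the 8271's microcode:

```python
# CRC-16-CCITT (polynomial 0x1021), the style of check used by IBM-format floppy
# sectors. A modern software illustration of the on-the-fly calculation the 8271's
# second core handles at disk speed -- not the chip's actual microcode.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Example: checksum one 256-byte sector's worth of data.
sector = bytes(256)
print(hex(crc16_ccitt(sector)))
```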

As long-time readers will know, we have an interest in chip reverse engineering.

Turning GitHub Into A URL Shortening Service

The content below is taken from the original ( Turning GitHub Into A URL Shortening Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

URL shortening services like TinyURL or Bitly have long become an essential part of the modern web, and are popular enough that even Google killed off their own already. Creating your own shortener is also a fun exercise, and at its core doesn’t require much more than a nifty domain name, some form of database to map the URLs, and a bit of web technology to glue it all together. [Nelsontky] figured you don’t even have to build most of it yourself, but you could just (ab)use GitHub for it.

Using GitHub Pages to host the URL shortening website itself, [nelsontky] actually repurposes GitHub’s issue tracking system to map the shortened identifier to the original URL. Each redirection is simply a new issue, with the issue number serving as the shortening identifier, and the issue’s title text storing the original URL. To map the request, a bit of JavaScript extracts the issue number from the request, looks it up via GitHub API, and if a valid one was found (and API rate limits weren’t exceeded), redirects the caller accordingly. What’s especially clever about this is that GitHub Pages usually just serves static files stored in a repository, so the entire redirection logic is actually placed in the 404 error handling page, allowing requests to any arbitrary paths.
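
The lookup itself is only a single API call. The sketch below shows the same logic in Python rather than the project's client-side JavaScript; the owner and repository names are placeholders:

```python
# Sketch of the lookup the 404 page performs, shown in Python for clarity.
# The real project does this client-side in JavaScript; owner/repo are placeholders.
import json
import sys
import urllib.request

OWNER, REPO = "someuser", "url-shortener"   # hypothetical repository

def resolve(short_id: str) -> str:
    # Each shortened URL is simply a GitHub issue: the issue number is the ID
    # and the issue title holds the destination URL.
    api = f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{short_id}"
    with urllib.request.urlopen(api) as resp:   # unauthenticated: ~60 requests/hour
        return json.load(resp)["title"]

if __name__ == "__main__":
    print(resolve(sys.argv[1] if len(sys.argv) > 1 else "1"))
```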

While this may not be as neat as placing your entire website content straight into the URL itself, it could be nicely combined with this rotary phone to simply dial the issue number and access your bookmarks — perfect in case you always wanted your own website phone book. And if you don’t like the thought of interacting with the GitHub UI every time you want to add a new URL, give the command line tools a try.

NVIDIA’s latest desktop workstation has four 80GB GPUs

The content below is taken from the original ( NVIDIA’s latest desktop workstation has four 80GB GPUs), to continue reading please visit the site. Remember to respect the Author & Copyright.

Back in May, NVIDIA announced a ridiculously powerful GPU called the A100. The card was designed for data center systems, however, such as the company’s own DGX A100, rather than anything that the average consumer would run at home. Today, the compan…

New recommendations from Azure Advisor are now available

The content below is taken from the original ( New recommendations from Azure Advisor are now available), to continue reading please visit the site. Remember to respect the Author & Copyright.

Four new best practice recommendations will help you improve the reliability and performance of your Azure resources

APIX-KBL7

The content below is taken from the original ( APIX-KBL7), to continue reading please visit the site. Remember to respect the Author & Copyright.

The APIX-KBL7 is a Pico-ITX style SBC based on Intel’s 7th-Gen Core i7/i5/i3 and Celeron SoCs, and up to 16GB SO-DIMM RAM. Interfaces include SATA, GbE LAN, 4k HDMI, LVDS, audio I/O, 5x USB, UART, M.2, and expansion signals including SMBus, GPIO, and more.

A minute to remember!

This year more than any, it’s time to take a moment and remember all those who braved, fought and struggled to help us live the life we have today!

Time passes but memories never fade… A minute to remember!

Manage Control Tower life cycle actions intelligently using AWS Service Catalog, AWS Config, Amazon DynamoDB and AWS CloudFormation

The content below is taken from the original ( Manage Control Tower life cycle actions intelligently using AWS Service Catalog, AWS Config, Amazon DynamoDB and AWS CloudFormation), to continue reading please visit the site. Remember to respect the Author & Copyright.

As customers create and manage multi-account AWS environments, cloud administrators need a process whereby each account can apply configuration autonomously from a centralized configuration repository. Some of the customers I work with use AWS Control Tower to manage a multi-account environment. Administrators use AWS Control Tower to create organization units for account grouping and then create multiple accounts in those organization units. These administrators would like a process to ensure consistency. For example, the administrator would like accounts across all regions to deploy a security configuration or apply an account-based tagging strategy for resources that will be deployed. One mechanism to do this is for the administrator to add an AWS CloudFormation template that contains the steps for applying the security configuration to a configuration repository; each account will then retrieve the configuration and deploy it in its own environment.

In this blog post, I will show you how to enable administrators to configure an AWS Control Tower environment to automatically deploy configurations when needed, using AWS Service Catalog, Amazon CloudWatch Events, and other AWS services.

This solution uses several AWS services (including API Gateway, DynamoDB, Lambda, and Step Functions), and most of the resources are set up for you with an AWS CloudFormation stack.

Background

Here are some of AWS Service Catalog concepts referenced in this post. For more information, see Overview of AWS Service Catalog.

  • A product is a blueprint for building the AWS resources to make available for deployment on AWS, along with the configuration information. Create a product by importing an AWS CloudFormation template, or, in case of AWS Marketplace-based products, by copying the product to AWS Service Catalog. A product can belong to multiple portfolios.
  • A portfolio is a collection of products, together with the configuration information. Use portfolios to manage user access to specific products. You can grant portfolio access for an AWS Identity and Access Management (IAM) user, IAM group, or IAM role level.
  • A provisioned product is an AWS CloudFormation stack; that is, the AWS resources that are created. When an end-user launches a product, AWS Service Catalog provisions the product from an AWS CloudFormation stack.
  • Constraints control the way users can deploy a product. With launch constraints, you can specify a role that the AWS Service Catalog can assume to launch a product.

Solution overview­

The following diagram maps out the solution architecture.

Here’s the process for the administrator:

  1. The administrator deploys an AWS CloudFormation template that creates base components in the Control Tower management environment, such as API Gateway, DynamoDB, Lambda, Step Functions, and others. These components are used by the add-configuration and managed-account deployment processes.
  2. The administrator uses the Add Configuration process to add a CloudFormation configuration item to the configuration database, which is then used by the managed account deployment process.

Here’s the process when the managed account deploys a CloudFormation configuration item:

  1. Behind the scenes, invisible to the end user, a scheduled CloudWatch rule triggers a Lambda function in the managed account, which communicates with the master account through an API Gateway. The process queries the configuration database for new CloudFormation configuration items.
  2. If there are new CloudFormation configuration items, the managed account deploys the CloudFormation template and updates the configuration database.
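
A skeleton of that pull Lambda might look like the following. The API URL, response shape, and attribute names are illustrative assumptions rather than the solution's actual code, and a production version would sign its API Gateway requests with IAM credentials:

```python
# Skeleton of the scheduled pull Lambda described above. The API URL, response
# shape, and attribute names are illustrative assumptions, not the solution's
# actual code.
import json
import urllib.request

import boto3

CONFIG_API = "https://example.execute-api.us-east-1.amazonaws.com/prod/configs"  # placeholder
cfn = boto3.client("cloudformation")

def handler(event, context):
    # 1. Ask the management account (via API Gateway) for configuration items
    #    this account has not deployed yet.
    with urllib.request.urlopen(CONFIG_API) as resp:
        new_items = json.load(resp)

    # 2. Deploy each new CloudFormation configuration item locally.
    for item in new_items:
        cfn.create_stack(
            StackName=item["StackName"],
            TemplateURL=item["S3StackLocation"],
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
    return {"deployed": len(new_items)}
```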

Step 1: Configuring an environment

Prerequisites:

You will need the following information to deploy the base components:
A user with an administrator role
The name of a user, group or role who will launch AWS Service Catalog products. You will enter the user as follows:

  • User: user/<username>
  • Role: role/<rolename>
  • Group: group/<groupname>

Deploy the base components:

Deployment Methods:

Download content and use your own bucket.

  1. Download this content zip file
  2. Extract the zip file, it will create a folder called postaction
  3. Create an AWS S3 bucket, note the bucket name
  4. Upload the postaction folder to the bucket
  5. Drill down into the postaction folder
  6. Click the checkbox next to setup_ctpostaction_base.json
  7. Right click and copy the Object URL
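
If you'd rather script steps 3 and 4 above than use the console, a rough boto3 equivalent looks like this (the bucket name and local folder are placeholders):

```python
# Optional: script steps 3 and 4 above with boto3 instead of the console.
# Bucket name and local folder are placeholders.
import os

import boto3

BUCKET = "my-ctpostaction-bucket"   # placeholder; bucket names must be globally unique
LOCAL_DIR = "postaction"            # folder extracted from the content zip

s3 = boto3.client("s3")
s3.create_bucket(Bucket=BUCKET)     # add a CreateBucketConfiguration outside us-east-1

for root, _, files in os.walk(LOCAL_DIR):
    for name in files:
        path = os.path.join(root, name)
        key = path.replace(os.sep, "/")   # keep the postaction/ prefix in the key
        s3.upload_file(path, BUCKET, key)
        print("uploaded", key)
```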

Use content from existing location.

  1. Login to the AWS Control Tower console using the Control Tower master account with an administrator role.
  2. Verify that AWS Control Tower has been deployed successfully.
  3. Right click and copy this CloudFormation setup link.
  4. Open the AWS CloudFormation console in a new browser tab.
  5. In the AWS CloudFormation console, choose Create Stack, Amazon S3 URL, paste the URL you just copied, and then choose Next.
  6. On the Specify stack details page, specify the following:
    • Stack name: ctpostactionbase
    • SourceBucket: use default or enter your bucket name
  7. Leave the default values except as noted.
  8. On the Review page, check the box next to I acknowledge that AWS CloudFormation might create IAM resources with custom names, and choose Create.
  9. After the status of the stack changes to CREATE_COMPLETE, select the stack and choose Outputs to see the output.

  • In the parameter screen, the user selects the parameters for the stack location and the target organization unit.
  • When the parameters have all been selected, the user launches the AWS Service Catalog product.

Deploy a CloudFormation configuration item:

  1. Login to the AWS Service Catalog console
  2. Select Product list from the top left
  3. Select the SCproductCTAddConfigItem AWS Service Catalog product
  4. Select the LAUNCH PRODUCT button
  5. Enter a Name such as myfirstconfig and select Next
  6. Parameters:
    1.  AutomationDescription: default
    2. AutonomousLevel:
      1. Autonomous – configuration will be deployed by the service.
    3. ActionMethod
      1. SpokePull – The managed accounts will download and install this configuration
      2. MasterPush – The Control Tower master account will push this configuration to managed accounts.
    4. S3StackLocation – The S3 location of the CloudFormation template to be deployed
    5. OrgUnits – The organization unit to deploy the configuration to; accounts in this OU will get the configuration
    6. Regions – The region to apply the configuration in; select one region or ALL
    7. AdministrationRoleARN – use default
    8. ExecutionRoleName – use default
  7. Select Next
  8. On the TagOptions page select Next
  9. On the Notifications page select Next
  10. On the Review page select Launch
  11. Monitor until the Status changes to Succeeded
  12. Optional – you can switch to a managed account to verify the configuration has been deployed.
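
The same launch can also be scripted against the Service Catalog API. The sketch below is a rough boto3 equivalent of steps 1–11; the product and artifact IDs, and the parameter values, are placeholders you would look up in your own catalog (for example with search_products and describe_product):

```python
# Scripted equivalent of launching the SCproductCTAddConfigItem product.
# Product/artifact IDs and parameter values are placeholders you would look up
# in your own catalog (e.g. with search_products / describe_product).
import boto3

sc = boto3.client("servicecatalog")

sc.provision_product(
    ProductId="prod-xxxxxxxxxxxxx",             # placeholder product ID
    ProvisioningArtifactId="pa-xxxxxxxxxxxxx",  # placeholder version (artifact) ID
    ProvisionedProductName="myfirstconfig",
    ProvisioningParameters=[
        {"Key": "AutonomousLevel", "Value": "Autonomous"},
        {"Key": "ActionMethod", "Value": "SpokePull"},
        {"Key": "S3StackLocation", "Value": "https://example-bucket.s3.amazonaws.com/config.json"},
        {"Key": "OrgUnits", "Value": "Workloads"},
        {"Key": "Regions", "Value": "us-east-1"},
    ],
)
```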

Cleanup process

To avoid incurring cost, please delete resources that are not needed. You can terminate the deployed Service Catalog product from the AWS Service Catalog console: select Provisioned products, then choose Actions, then Terminate.

Conclusion

In this post, you learned an easy way for administrators to configure an AWS Control Tower environment to automatically deploy configurations when needed. You also saw how there’s an extra layer of governance and control when you use Control Tower and AWS Service Catalog to deploy resources to support business objectives.

About the Author

Kenneth Walsh is a solutions architect focusing on AWS Marketplace. Kenneth is passionate about cloud computing and loves being a trusted advisor for his customers.

 

MIT tests autonomous ‘Roboat’ that can carry two passengers

The content below is taken from the original ( MIT tests autonomous ‘Roboat’ that can carry two passengers), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’ve heard plenty about the potential of autonomous vehicles in recent years, but MIT is thinking about different forms of self-driving transportation. For the last five years, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) an…

EasyBCD lets you easily edit Windows Boot Settings and configure the Bootloader

The content below is taken from the original ( EasyBCD lets you easily edit Windows Boot Settings and configure the Bootloader), to continue reading please visit the site. Remember to respect the Author & Copyright.

EasyBCD is a robust tool for modifying the Windows bootloader. It is incredibly useful […]

This article EasyBCD lets you easily edit Windows Boot Settings and configure the Bootloader first appeared on TheWindowsClub.com.