Deprecated authentication retirement in Office 365

Microsoft is changing how authentication takes place between clients and Office 365. Are you prepared?

Basic Authentication Retirement for legacy protocols in Exchange Online
Major update: Announcement started
Applied To: All customers

Beginning October 13, 2020, we will retire Basic Authentication for EWS, EAS, IMAP, POP and RPS to access Exchange Online. Note: this change does not impact SMTP AUTH.

There are several actions that you and/or your users can take to avoid service disruptions on client applications, and we describe them below. If no action is taken, client applications using Basic Authentication for these protocols will stop working on October 13, 2020.

Any application using OAuth 2.0 to connect to these protocols will continue to work without change or interruption.

[What do I need to do to prepare for this change?]

You have several options on how to prepare for the retirement of Basic Authentication.
– You can start updating the client applications your users rely on to versions that support OAuth 2.0 today. For mobile device access, several email apps support Modern Authentication, but we recommend the Outlook app for iOS and Android, as we believe it provides the best overall experience for your M365-connected users. For desktop/laptop access, we encourage the latest versions of Outlook for Windows and Outlook for Mac; Outlook 2013 and all newer versions fully support OAuth 2.0.
– If you have written your own code using these protocols, you will need to update it to use OAuth 2.0 instead of Basic Authentication. If you need help, reach out to us on Stack Overflow with the tag exchange-basicauth.
– If you or your users are using a 3rd-party application that uses these protocols, you will need to either:
— reach out to the 3rd-party developer who supplied the application and ask them to update it to support OAuth 2.0, or
— assist your users in switching to an application built using OAuth 2.0.

We are in the process of building reports that will help you identify any impacted users and client applications in your organization. We will make these reports available to you in the next few months and communicate their availability via a follow-up Message center post.

World Time for Wall Clocks and MacBook Apple OS X

La Crosse Technology 404-1235UA-SS 14 Inch UltrAtomic Analog Stainless Steel Wall Clock

If you work with people from other parts of the world on a regular basis it’s a good idea to know what time or day it is over there. As such I’ve been trying out wall clocks to make it simple to just look up and see – are they waking up? Sleeping? At lunch or dinner? It’s been super helpful. And when I’m traveling in London the family here in California can look up at the wall and get an idea of what I’m up to as well.

The challenge has been to find a reliable, easy-to-use clock that truly simplifies my life instead of making it more complicated. I hate having to change batteries all the time and try to use rechargeable NiMH cells where possible. I’ve tried all sorts, and even considered building one from a Raspberry Pi or Arduino with one of those vacuum-tube displays. We tried the obvious approach: go to the store and buy an atomic clock that automatically adjusts via radio signal. These had two issues: (1) they’re only adjustable for USA time zones, and (2) poor battery life.

I took a chance and ordered a few from Amazon that claimed to overcome these two issues. The one I’m using now has been working great for months and nicely solves the problems I found with other wall clocks. You can see what it looks like in the picture above. It can run on either 2 or 4 C-size batteries, so plan ahead and fill it up. There are a few other helpful features I really like…
– Custom time zone selection to adjust to ANY world time zone. This is significant because the clock will still receive and adjust to the radio time-synchronization signal.
– Dual antennas for better signal reception
– A battery-saving feature pauses the second hand between 11:00pm and 5:00am; the hour and minute hands still move.

Now that I’ve been using the wall clock so much, I wanted the same functions on the MacBook laptop I carry everywhere. So I started googling, searching Twitter, and asking my friends. I tried all sorts of suggestions and nothing really worked well. One worked great for a few weeks, then after a system update it started eating CPU and draining the battery. So I went without for a while and just used my phone whenever I was curious about the time elsewhere. Today, while disabling Siri in System Preferences, I searched for “World” and found that Apple has thoughtfully integrated a world time clock into the operating system; it’s available via the Notification panel that appears from the menu at the top right. Here are a few screenshots of how it works…

Docker Hub attack

On Thursday, April 25th, 2019, we discovered unauthorized access to a single Hub database storing a subset of non-financial user data. Upon discovery, we acted quickly to intervene and secure the site.

We want to update you on what we’ve learned from our ongoing investigation, including which Hub accounts are impacted, and what actions users should take.

Here is what we’ve learned:

During a brief period of unauthorized access to a Docker Hub database, sensitive data from approximately 190,000 accounts may have been exposed (less than 5% of Hub users). This data includes usernames and hashed passwords for a small percentage of these users, as well as GitHub and Bitbucket tokens for Docker autobuilds.

Actions to Take:

◦ We are asking users to change their password on Docker Hub and any other accounts that shared this password.

◦ For users with autobuilds that may have been impacted, we have revoked GitHub tokens and access keys, and ask that you reconnect to your repositories and check security logs to see if any unexpected actions have taken place.

▪ You may view security actions on your GitHub or Bitbucket accounts to see if any unexpected access has occurred over the past 24 hours.

▪ This may affect your ongoing builds from our Automated Build service. You may need to unlink and then relink your GitHub and Bitbucket source provider.

We are enhancing our overall security processes and reviewing our policies. Additional monitoring tools are now in place.

Our investigation is still ongoing, and we will share more information as it becomes available.

Thank you,



Kent Lamb

Director of Docker Support


Microsoft IoT in Action Event #IoTinActionMS vision artificial intelligence kit demo

I was able to spend some time at the Internet of Things Event Microsoft sponsored at the Santa Clara Convention Center. There were a few cool demos on display.

Vision AI dev kit

In the video above you can see a demo of the “Vision AI Developer Kit,” available for purchase from Arrow for $249 (with free overnight shipping). The kit is manufactured by eInfochips and designed in partnership with Qualcomm using their Snapdragon Neural Processing Engine. It’s not just the hardware that makes up this dev kit: using Azure cloud services, developers can create containers with programming code and push them down to devices at the edge to enable real-time vision tasks. For example, detecting vehicles that pass through an intersection and counting cars, trucks, bicycles, etc. The images are processed locally on the edge devices, and only the metadata needs to be sent back. These devices represent the newer, advanced “lightweight edge”: they use less power and less network bandwidth yet offer enhanced functionality and easier code updates. When updates are desired, the code in the container is updated and pushed out through the cloud.

Plant monitoring platform

The Naviz Analytics ThingBlu IoT platform is being used to help plant growers optimize the yield of their crops. For a $65-per-month subscription, you get a set of sensors with solar power and battery backup that constantly collect data such as temperature, humidity, soil pH, leaf canopy wetness, wind, and other factors. All this data is sent back to the cloud, where artificial intelligence drives analysis, alerts, and recommendations.

Applications include vineyards and cannabis growers. See more in this video:

SHA256 Checksums Verify Firmware Supply Chain Security Integrity

JULY 6, 2012
Using Cryptographic Hashes to verify file download integrity

NOTE: Don’t use MD5 (see below)

The SHA hash functions are a set of cryptographic hash functions designed by the National Security Agency (NSA) and published by NIST as a U.S. Federal Information Processing Standard. SHA stands for Secure Hash Algorithm.

Vendors provide a SHA-1 or, better, a SHA-256 hash for software downloads. This enables you to verify that your downloaded files are unaltered from the original.


To confirm file integrity, use a software tool on your computer to calculate your own hash for files downloaded from the vendor’s web site.

NOTE: It’s important that the checksums you are comparing against are publicly available and visible on the vendor’s website without having to go through a paywall or log in. This helps guard against tampering by unauthorized people and makes a patch easier to validate along the entire supply chain. How often is a patch downloaded by one person from the vendor, sent to a consultant on site to copy to the server, tested by someone else, and then actually applied to the production system? Each step in that chain should include checking the integrity of the file against a known-good checksum.

If your calculated hash matches the message digest the vendor provides, you can be confident the file was downloaded intact.

Tools are available for Windows, Linux, and Mac. Most UNIX installations provide a built-in command for these hashes; on older systems, make sure your tools have large-file support before checksumming very large files.
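On most Linux systems, for instance, the coreutils sha256sum tool handles both sides of the workflow. Here is a minimal sketch using a throwaway sample file (in practice the checksum file would come from the vendor’s site, not be generated locally):

```shell
# Create a sample file and record its SHA-256 checksum.
printf 'hello\n' > sample.txt
sha256sum sample.txt > sample.txt.sha256

# Later, or on another machine in the supply chain, verify it:
sha256sum -c sample.txt.sha256
# prints: sample.txt: OK
```

If the file has been altered in transit, the verify step reports FAILED and exits non-zero, which makes it easy to gate an automated deployment on it.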

The Microsoft File Checksum Integrity Verifier (FCIV) can be used on Windows to verify SHA-1 values; see Microsoft’s documentation for details on FCIV.

Mac OS X: How to Verify a SHA-1 Digest

Instructions for checking a SHA-1 checksum on a Mac:
In Finder, browse to /Applications/Utilities.
Double-click the Terminal icon. A Terminal window will appear.
In the Terminal window, type: “openssl sha1 ” (sha1 followed by a space).
Drag the downloaded file from the Finder into the Terminal window.
Click in the Terminal window, press the Return key, and compare the checksum displayed in the Terminal to the one on the vendor’s download page.
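Following those steps with a small test file looks like this (the file name is a stand-in for your real download; in practice you compare the digest against the vendor’s page):

```shell
# Create a stand-in for a downloaded file and hash it with openssl.
printf 'hello\n' > sample.dmg
openssl sha1 sample.dmg
# prints: SHA1(sample.dmg)= f572d396fae9206628714fb2ce00f72e94f2258f
```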

UPDATE: You can now also run “shasum -a 256” from the macOS command line.

From TechNet

Windows Server 2008 R2 Standard, Enterprise, Datacenter, and Web (x64) – DVD (English)
File Name: en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso
Size: 2,858 MB
Date Published (UTC): 8/31/2009 10:22:24 AM
Last Updated (UTC): 1/11/2010 4:31:40 PM
SHA1: A548D6743129F2A02C907D2758773A1F6BB1BCD7
ISO/CRC: 8F94460B

About MD5

MD5 was designed by Ron Rivest in 1991 to replace an earlier hash function, MD4. In 1996, a flaw was found with the design of MD5. While it was not a clearly fatal weakness, cryptographers began recommending the use of other algorithms, such as SHA-1 (which has since been found also to be vulnerable). In 2004, more serious flaws were discovered, making further use of the algorithm for security purposes questionable; specifically, a group of researchers described how to create a pair of files that share the same MD5 checksum. Further advances were made in breaking MD5 in 2005, 2006, and 2007. In an attack on MD5 published in December 2008, a group of researchers used this technique to fake SSL certificate validity.

US-CERT says MD5 “should be considered cryptographically broken and unsuitable for further use,” and most U.S. government applications now require the SHA-2 family of hash functions.


Ravello Performance on Oracle Cloud Infrastructure




I’ve been a long-time network tester, either as part of my day job or as a hobby. Upon getting a new computer upgrade, I’d always want to measure the performance benefits against the previous version. I guess the same thing goes for cars too, but that can be a little more dangerous, right? So I was excited when, earlier this year, a number of us vExperts were invited to beta test the new Ravello on Oracle Cloud Infrastructure (previously known as Ravello on Oracle Bare Metal Cloud). Among all the new features and capabilities, two caught my eye…

  1. ESXi on top of Ravello (set the preferPhysicalHost flag to true)
  2. VMs with 32 CPUs and 200 GB of RAM

It’s still the same Ravello concept under the hood but the underlying hardware is better now that they control the datacenter where the machines are run. The Ravello hypervisor runs directly on Oracle Hardware Servers without having to go through a separate nested hypervisor abstraction layer.


One of the features I love most about Ravello is the “Repo”. The Ravello Repository allows you to instantly reproduce an entire environment created by someone else. All the VMs I used in this blog post are available for you to try out in a publicly available blueprint… just click this link to make it available in your own Ravello account:

Publish these to the various zones and you can reproduce my tests below for yourself and even modify them with your own custom workloads.

There are some other great write-ups on what you can do with Ravello. Please check these out too:

Test Results

In this report I will show the results of various performance tests run on the legacy Ravello infrastructure versus the new “Bare Metal” option.

TL;DR – the new Ravello on Oracle Cloud Infrastructure (aka Bare Metal) offers anywhere from two to ten times the performance of the legacy environment, depending on your workload.

Blueprint Name: ubuntu virtio x 2 test-bp

VM Image:

Changes made: SSH key configured, 4 vCPU, 4 GB RAM.

US East 2 – 9.2 Gbits/sec network – 286.33 MB/sec memory – 10.6 seconds for 1000 threads


US Central 1 – 6.0 Gbits/sec network – 272.59 MB/sec memory – 11.2 seconds for 1000 threads


US East 5 – 11.2 Gbits/sec network – 1308.36 MB/sec memory – 2.1 seconds for 1000 threads


Steps followed

  1. From your admin PC…
  2. Using a web browser, create a 2-node Ubuntu application
  3. Log in to each node: ssh -i ssh-key.pem ubuntu@host-name
  4. sudo add-apt-repository "ppa:patrickdk/general-lucid"; sudo apt-get -y update; sudo apt-get -y install iperf3 bonnie++ sysbench
  5. Save the application as a blueprint
  6. Publish the blueprint to various sites and then test as follows…
  7. ssh -i ssh-key.pem ubuntu@east-host-name
    1. ip a ← shows the node's IP address
    2. iperf3 -s
  8. ssh -i ssh-key.pem ubuntu@west-host-name
    1. ip a
    2. iperf3 -c ← starts the test from the client on west to the server on east
  9. Record the results for each site above
  10. Run the memory test: sysbench --test=memory run

Here’s a screenshot where you can see the iPerf3 output showing throughput differences between the same 2-node Ubuntu blueprint published to the US East 2 and US East 5 sites. From my Mac laptop I was able to quickly spin up the same blueprint of 2 VMs in 2 different sites (4 VMs total) within a few minutes and run these test workloads. By default the VMs run for 2 hours and then suspend automatically, but you can extend this as long as you need. There’s also a full RESTful API available for integration with your continuous-integration software development lifecycle.

Tools used


The ultimate speed test packet generator tool for TCP, UDP and SCTP
iPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers and protocols (TCP, UDP, SCTP with IPv4 and IPv6). For each test it reports the bandwidth, loss, and other parameters. This is a new implementation that shares no code with the original iPerf and also is not backwards compatible. iPerf was originally developed by NLANR/DAST. iPerf3 is principally developed by ESnet / Lawrence Berkeley National Laboratory. It is released under a three-clause BSD license. Here are just some of the features:

  • TCP and SCTP
  • Measure bandwidth
  • Report MSS/MTU size and observed read sizes.
  • Support for TCP window size via socket buffers.
  • UDP
  • Client can create UDP streams of specified bandwidth.
  • Measure packet loss
  • Measure delay jitter
  • Multicast capable
  • Cross-platform: Windows, Linux, Android, MacOS X, BSD

iperf3 command options

$ iperf3 -v

iperf 3.0.7

Linux 3.16.0-31-generic #41~14.04.1-Ubuntu SMP Wed Feb 11 19:30:13 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Usage: iperf [-s|-c host] [options]
iperf [-h|--help] [-v|--version]

Server or Client:
-p, --port # server port to listen on/connect to
-f, --format [kmgKMG] format to report: Kbits, Mbits, KBytes, MBytes
-i, --interval # seconds between periodic bandwidth reports
-F, --file name xmit/recv the specified file
-A, --affinity n/n,m set CPU affinity
-B, --bind bind to a specific interface
-V, --verbose more detailed output
-J, --json output in JSON format
-d, --debug emit debugging output
-v, --version show version information and quit
-h, --help show this message and quit

Server specific:
-s, --server run in server mode
-D, --daemon run the server as a daemon

Client specific:
-c, --client run in client mode, connecting to <host>
-u, --udp use UDP rather than TCP
-b, --bandwidth #[KMG][/#] target bandwidth in bits/sec (0 for unlimited)
(default 1 Mbit/sec for UDP, unlimited for TCP)
(optional slash and packet count for burst mode)
-t, --time # time in seconds to transmit for (default 10 secs)
-n, --bytes #[KMG] number of bytes to transmit (instead of -t)
-k, --blockcount #[KMG] number of blocks (packets) to transmit (instead of -t or -n)
-l, --len #[KMG] length of buffer to read or write
(default 128 KB for TCP, 8 KB for UDP)
-P, --parallel # number of parallel client streams to run
-R, --reverse run in reverse mode (server sends, client receives)
-w, --window #[KMG] TCP window size (socket buffer size)
-C, --linux-congestion set TCP congestion control algorithm (Linux only)
-M, --set-mss # set TCP maximum segment size (MTU - 40 bytes)
-N, --nodelay set TCP no delay, disabling Nagle's Algorithm
-4, --version4 only use IPv4
-6, --version6 only use IPv6
-S, --tos N set the IP 'type of service'
-L, --flowlabel N set the IPv6 flow label (only supported on Linux)
-Z, --zerocopy use a 'zero copy' method of sending data
-O, --omit N omit the first n seconds
-T, --title str prefix every output line with this string
--get-server-output get results from server

[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-

iperf3 homepage:


SysBench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load.

The idea of this benchmark suite is to quickly get an impression about system performance without setting up complex database benchmarks or even without installing a database at all.

Current features allow you to test the following system parameters:

  • file I/O performance
  • scheduler performance
  • memory allocation and transfer speed
  • POSIX threads implementation performance
  • database server performance

The design is very simple. SysBench runs a specified number of threads and they all execute requests in parallel. The actual workload produced by requests depends on the specified test mode. You can limit either the total number of requests or the total time for the benchmark, or both.

sysbench homepage:

Debug logging output

iPerf 3 Test: protocol: TCP, 1 stream, 131072 byte blocks, 60 second test, TCP MSS: 1448

Internal Test command: $ iperf3 -c --verbose --reverse --time 60

Internet Test command: $ iperf3 -c --verbose

Command used for memory testing: $ sysbench --test=memory run

Nested e2 testing details

legacy hosted public cloud “virtual” infrastructure – slower due to overhead of binary translation

From E2 to E2 Test Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 51.5 GBytes 7.38 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 51.5 GBytes 7.38 Gbits/sec receiver
CPU Utilization: local/receiver 99.1% (2.0%u/97.0%s), remote/sender 15.7% (0.3%u/15.4%s)

From E2 to Internet Test Results:

[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 6.27 MBytes 5.26 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 5.85 MBytes 4.91 Mbits/sec receiver
CPU Utilization: local/sender 1.6% (0.6%u/1.0%s), remote/receiver 0.4% (0.0%u/0.4%s)

ubuntu@e2east:~$ sysbench --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!

Operations performed: 104857600 (292531.35 ops/sec)

102400.00 MB transferred (285.68 MB/sec)

Test execution summary:
total time: 358.4491s
total number of events: 104857600
total time taken by event execution: 271.5140
per-request statistics:
min: 0.00ms
avg: 0.00ms
max: 1.49ms
approx. 95 percentile: 0.00ms

Threads fairness:
events (avg/stddev): 104857600.0000/0.00
execution time (avg/stddev): 271.5140/0.00

Bare Metal e5 testing details

new “bare metal” environment – faster with hardware assisted nested virtualization support

From E5 to E5 Test Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 78.0 GBytes 11.2 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 78.0 GBytes 11.2 Gbits/sec receiver
CPU Utilization: local/receiver 99.3% (0.6%u/98.8%s), remote/sender 8.0% (0.2%u/7.9%s)

From E5 to Internet Test Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 542 MBytes 455 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 542 MBytes 455 Mbits/sec receiver
CPU Utilization: local/sender 4.6% (0.3%u/4.3%s), remote/receiver 1.2% (0.0%u/1.2%s)

ubuntu@e5east:~$ sysbench --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
WARNING: Operation time (0.000000) is less than minimal counted value, counting as 1.000000
WARNING: Percentile statistics will be inaccurate

Operations performed: 104857600 (1342818.26 ops/sec)

102400.00 MB transferred (1311.35 MB/sec)

Test execution summary:
total time: 78.0877s
total number of events: 104857600
total time taken by event execution: 60.6087
per-request statistics:
min: 0.00ms
avg: 0.00ms
max: 0.49ms
approx. 95 percentile: 0.00ms

Threads fairness:
events (avg/stddev): 104857600.0000/0.00
execution time (avg/stddev): 60.6087/0.00
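A quick sanity check on the sysbench figures above: since the test writes 1 KB blocks, dividing ops/sec by 1024 should reproduce the reported MB/sec numbers for both the legacy e2 and bare-metal e5 runs:

```shell
# Derive MB/sec from ops/sec at a 1 KB block size (1024 ops = 1 MB).
awk 'BEGIN {
  printf "e2: %.2f MB/sec\n", 292531.35 / 1024
  printf "e5: %.2f MB/sec\n", 1342818.26 / 1024
}'
# prints:
# e2: 285.68 MB/sec
# e5: 1311.35 MB/sec
```

These match the 285.68 MB/sec and 1311.35 MB/sec lines in the transcripts, so the roughly 4.6x memory-throughput gain is internally consistent.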

ubuntu@e5east:~$ sysbench --test=memory run --max-time=600 --debug=on --validate=on

Operations performed: 104857600 (1126890.18 ops/sec)

102400.00 MB transferred (1100.48 MB/sec)

Test execution summary:
total time: 93.0504s
total number of events: 104857600
total time taken by event execution: 60.6825
per-request statistics:
min: 0.00ms
avg: 0.00ms
max: 0.11ms
approx. 95 percentile: 0.00ms

Threads fairness:
events (avg/stddev): 104857600.0000/0.00
execution time (avg/stddev): 60.6825/0.00

DEBUG: Verbose per-thread statistics:
DEBUG: thread # 0: min: 0.0000s avg: 0.0000s max: 0.0001s events: 104857600
DEBUG: total time taken by event execution: 60.6825s

Next Steps – Pricing

I really wanted to include a detailed assessment and price comparison of using Ravello versus other hosted cloud providers. Now that Amazon has announced per-second billing, it might be worthwhile to do a cost comparison for running similar workloads with various providers. But any cost comparison should also weigh ease of use and network feature capabilities.

The blueprint for this blog has 2 VMs, each with 4 GB RAM, 4 vCPU, and 4 GB disk space, at a total cost of less than $1 per hour of use.

Other Options

In trying to build similar network test labs with other providers, I have not been able to find one that offers the same capabilities. Ravello provides the unique ability to pre-assign MAC and IP addresses, take snapshots, create blueprints, and easily share those with others using a built-in repo. Now with the improved performance and support from Oracle, it’s not just a better cost value; it’s a better solution overall.


Weigh the Iron Plate with Holes

Here’s a fun math problem for you iron workers (current and future) out there!

The formula for calculating the weight of iron is:

  • 1” x 12” x 12” = 40 lbs (that is, a square foot of 1”-thick iron weighs 40 lbs)

The dimensions of this plate shown in the picture below are:

  • 3’9” x 3’2” x 8”

The plate also has 13 holes with a diameter of 2.5”.

What is the weight of this plate?

See below for hints…
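If you’d rather check your arithmetic with a script (spoiler: running it prints the answer), here’s one way to run the numbers. It assumes the 8” dimension is the plate thickness, converts the other dimensions to inches (3’9” = 45”, 3’2” = 38”), and treats each hole as going all the way through:

```shell
# Plate: 45" x 38" x 8"; 13 through-holes of 2.5" diameter (1.25" radius).
awk 'BEGIN {
  density = 40 / 144                          # lbs per cubic inch (40 lbs per 144 in^3)
  plate   = 45 * 38 * 8 * density             # weight of the solid plate
  hole    = 3.14159265 * 1.25^2 * 8 * density # weight removed by one hole
  printf "%.0f lbs\n", plate - 13 * hole
}'
```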
