Docker Hub Security Incident

On Thursday, April 25th, 2019, we discovered unauthorized access to a single Hub database storing a subset of non-financial user data. Upon discovery, we acted quickly to intervene and secure the site.

We want to update you on what we’ve learned from our ongoing investigation, including which Hub accounts are impacted, and what actions users should take.

Here is what we’ve learned:

During a brief period of unauthorized access to a Docker Hub database, sensitive data from approximately 190,000 accounts may have been exposed (less than 5% of Hub users). This data includes usernames and hashed passwords for a small percentage of these users, as well as GitHub and Bitbucket tokens for Docker autobuilds.

Actions to Take:

◦ We are asking users to change their password on Docker Hub and any other accounts that shared this password.

◦ For users with autobuilds that may have been impacted, we have revoked GitHub tokens and access keys, and ask that you reconnect to your repositories and check security logs to see if any unexpected actions have taken place.

▪ You may view security actions on your GitHub or Bitbucket accounts to see if any unexpected access has occurred over the past 24 hours.

▪ This may affect your ongoing builds from our Automated Build service. You may need to unlink and then relink your GitHub and Bitbucket source provider.

We are enhancing our overall security processes and reviewing our policies. Additional monitoring tools are now in place.

Our investigation is still ongoing, and we will share more information as it becomes available.

Thank you,



Kent Lamb

Director of Docker Support

© 2019 Docker Inc. All rights reserved | Privacy Policy

144 Townsend Street, San Francisco, CA 94107


Microsoft IoT in Action Event (#IoTinActionMS): Vision AI Kit Demo

I was able to spend some time at the Microsoft-sponsored IoT in Action event at the Santa Clara Convention Center. There were a few cool demos on display.

Vision AI dev kit

In the video above you can see a demo of the “Vision AI Developer Kit,” available for purchase from Arrow for $249 (with free overnight shipping). The kit is manufactured by eInfochips and designed in partnership with Qualcomm using their Snapdragon Neural Processing Engine. It’s not just the hardware that makes up this dev kit: using Azure cloud services, developers can create containers with programming code and push them down to devices at the edge to enable real-time vision tasks. For example, detecting vehicles that pass through an intersection and counting cars, trucks, bicycles, etc. The images are processed locally on the edge devices, and only the metadata needs to be sent back. These devices represent the newer, more advanced “lightweight edge”: devices that use less power and less network bandwidth yet offer enhanced functionality and easier code updates. When updates are desired, the code in the container is updated and pushed out through the cloud.

Plant monitoring platform

The Naviz Analytics ThingBlu IoT Platform is being used to help plant growers optimize the yield of their crops. For a $65-per-month subscription you get a set of sensors with solar power and battery backup that constantly collect data such as temperature, humidity, soil pH, leaf canopy wetness, wind, and other factors. All this data is sent back to the cloud, where artificial intelligence drives the analysis, alerts, and recommendations.

Applications include vineyards and cannabis growers. See more in this video:

SHA-256 Checksums: Verifying Firmware Supply Chain Integrity

JULY 6, 2012
Using Cryptographic Hashes to verify file download integrity

NOTE: Don’t use MD5 (see below)

The SHA hash functions are a set of cryptographic hash functions designed by the National Security Agency (NSA) and published by NIST as a U.S. Federal Information Processing Standard. SHA stands for Secure Hash Algorithm.

Vendors provide a SHA-1 or, better, a SHA-256 hash for software downloads. This enables you to verify that your downloaded files are unaltered from the original.


To confirm file integrity, use a software tool on your computer to calculate your own hash for files downloaded from the vendor’s web site.

NOTE: It’s important that the checksums you are comparing against are publicly available and visible on the vendor’s website without having to go through a paywall or log in. This helps guard against tampering by unauthorized people and makes the patch easier to validate along the entire supply chain. How often do we have a patch downloaded by one person from the vendor, sent to a consultant on site to download to the server, tested by someone else, and then actually applied to the production system? Each of these steps in the supply chain should include checking the integrity of the file being applied against a known good checksum.

If your calculated hash matches the vendor’s published message digest, you can be confident the file was downloaded intact.
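For example, here is what that check looks like with sha256sum on Linux. The firmware.bin file and its .sha256 sidecar are stand-ins for a real vendor download and its published checksum:

```shell
# Stand-in for a vendor download; in practice you would fetch both the
# file and its published checksum from the vendor's public website
printf 'example firmware payload' > firmware.bin
sha256sum firmware.bin > firmware.bin.sha256

# Anyone later in the supply chain re-verifies the file before applying it
sha256sum -c firmware.bin.sha256
```

sha256sum -c prints "firmware.bin: OK" and exits zero only when the file matches, so each step in the supply chain can gate on the exit code.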

Tools are available for Windows, Linux, and Mac, and most UNIX installations provide a built-in command for these hashes. Very old tools may lack large-file support, so use a current version when hashing large files.

The Microsoft File Checksum Integrity Verifier (FCIV) can be used on Windows-based products to verify SHA-1 values.

Mac OS X: How to Verify a SHA-1 Digest

Instructions for checking a SHA-1 checksum on a Mac:
  1. In Finder, browse to /Applications/Utilities.
  2. Double-click the Terminal icon. A Terminal window will appear.
  3. In the Terminal window, type “openssl sha1 ” (sha1 followed by a space).
  4. Drag the downloaded file from the Finder into the Terminal window.
  5. Click in the Terminal window, press the Return key, and compare the checksum displayed on the screen to the one on the vendor’s download page.

UPDATE: You can now also run “shasum -a 256” from the macOS command line.

From TechNet

Windows Server 2008 R2 Standard, Enterprise, Datacenter, and Web (x64) – DVD (English)
File Name: en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso
Size: 2,858 (MB)
Date Published (UTC): 8/31/2009 10:22:24 AM
Last Updated (UTC): 1/11/2010 4:31:40 PM
SHA1: A548D6743129F2A02C907D2758773A1F6BB1BCD7
ISO/CRC: 8F94460B

About MD5

MD5 was designed by Ron Rivest in 1991 to replace an earlier hash function, MD4. In 1996, a flaw was found with the design of MD5. While it was not a clearly fatal weakness, cryptographers began recommending the use of other algorithms, such as SHA-1 (which has since been found also to be vulnerable). In 2004, more serious flaws were discovered, making further use of the algorithm for security purposes questionable; specifically, a group of researchers described how to create a pair of files that share the same MD5 checksum. Further advances were made in breaking MD5 in 2005, 2006, and 2007. In an attack on MD5 published in December 2008, a group of researchers used this technique to fake SSL certificate validity.

US-CERT says MD5 “should be considered cryptographically broken and unsuitable for further use,” and most U.S. government applications now require the SHA-2 family of hash functions.


Ravello Performance on Oracle Cloud Infrastructure




I’ve been a long-time network tester, either as part of my day job or as a hobby. Upon getting a new computer upgrade I’d always want to measure the performance benefits against the previous version. I guess the same goes for cars too, but that can be a little more dangerous, right? So I was excited when, earlier this year, a number of us vExperts were invited to beta test the new Ravello on Oracle Cloud Infrastructure (previously known as Ravello on Oracle Bare Metal Cloud). Among all the new features and capabilities, two caught my eye…

  1. ESXi on top of Ravello (set the preferPhysicalHost flag to true)
  2. VMs with 32 CPUs and 200 GB of RAM

It’s still the same Ravello concept under the hood but the underlying hardware is better now that they control the datacenter where the machines are run. The Ravello hypervisor runs directly on Oracle Hardware Servers without having to go through a separate nested hypervisor abstraction layer.


One of the features I love most about Ravello is the “Repo”. The Ravello Repository allows you to instantly reproduce an entire environment created by someone else. All the VMs I used in this blog post are available for you to try out in a publicly available blueprint; just click this link to make it available in your own Ravello account:

Publish these to the various zones and you can reproduce my tests below for yourself and even modify them with your own custom workloads.

There are some other great write-ups on what you can do with Ravello. Please check these out too:

Test Results

In this report I will show the results of various performance tests being run on the legacy Ravello Infrastructure versus the new “Bare Metal” option.

TL;DR – the new Ravello on Oracle Cloud Infrastructure (aka Bare Metal) offers anywhere from two to ten times the performance of the legacy environments, depending on your workload.

Blueprint Name: ubuntu virtio x 2 test-bp

VM Image:

Changes made: SSH key configured, 4 vCPU, 4 GB RAM.

US East 2 – 9.2 Gbits/sec network – 286.33 MB/sec memory – 10.6 seconds for 1000 threads


US Central 1 – 6.0 Gbits/sec network – 272.59 MB/sec memory – 11.2 seconds for 1000 threads


US East 5 – 11.2 Gbits/sec network – 1308.36 MB/sec memory – 2.1 seconds for 1000 threads


Steps followed

  1. From your admin PC…
  2. Use a web browser to create a two-node Ubuntu application
  3. Log in to each node: ssh -i ssh-key.pem ubuntu@host-name
  4. sudo add-apt-repository "ppa:patrickdk/general-lucid"; sudo apt-get -y update; sudo apt-get -y install iperf3 bonnie++ sysbench
  5. Save the application as a blueprint
  6. Publish the blueprint to various sites and then test as follows…
  7. ssh -i ssh-key.pem ubuntu@east-host-name
    1. ip a ← shows the node's IP address
    2. iperf3 -s
  8. ssh -i ssh-key.pem ubuntu@west-host-name
    1. ip a
    2. iperf3 -c <east-ip> ← starts the test from the client on west to the server on east
  9. Record the results for each site above
  10. Run the memory test: sysbench --test=memory run
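Saved as a script, the test sequence above looks roughly like the following sketch. Note that east-host-name, west-host-name, and ssh-key.pem are placeholders from the steps above, not real endpoints, so the script is written to a file and syntax-checked rather than run against live hosts:

```shell
# Sketch of the test sequence; host names and key path are placeholders
cat > run-tests.sh <<'EOF'
#!/bin/sh
KEY=ssh-key.pem
EAST=east-host-name
WEST=west-host-name

# Install the benchmark tools on both nodes
for h in "$EAST" "$WEST"; do
  ssh -i "$KEY" "ubuntu@$h" \
    'sudo apt-get -y update && sudo apt-get -y install iperf3 sysbench'
done

# Start the iperf3 server on east, then drive a 60-second test from west
ssh -i "$KEY" "ubuntu@$EAST" 'iperf3 -s -D'
ssh -i "$KEY" "ubuntu@$WEST" "iperf3 -c $EAST --time 60"

# Single-threaded memory benchmark
ssh -i "$KEY" "ubuntu@$WEST" 'sysbench --test=memory run'
EOF
sh -n run-tests.sh && echo "syntax OK"
```

Substitute your own blueprint's host names and key, and the same script can be pointed at each site in turn to collect the per-region numbers below.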

Here’s a screenshot where you can see the iPerf3 output showing the throughput difference between the same two-node Ubuntu blueprint published to the US East 2 and US East 5 sites. From my Mac laptop I was able to quickly spin up the same blueprint of 2 VMs in 2 different sites (4 VMs total) within a few minutes and run these test workloads. By default the VMs run for 2 hours and then suspend automatically, but you can extend this as long as you need. There’s also a full RESTful API available for integration with your continuous integration software development lifecycle.

Tools used


The ultimate speed test packet generator tool for TCP, UDP and SCTP
iPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers and protocols (TCP, UDP, SCTP with IPv4 and IPv6). For each test it reports the bandwidth, loss, and other parameters. This is a new implementation that shares no code with the original iPerf and also is not backwards compatible. iPerf was originally developed by NLANR/DAST. iPerf3 is principally developed by ESnet / Lawrence Berkeley National Laboratory. It is released under a three-clause BSD license. Here are just some of the features:

  • TCP and SCTP
    ◦ Measure bandwidth
    ◦ Report MSS/MTU size and observed read sizes
    ◦ Support for TCP window size via socket buffers
  • UDP
    ◦ Client can create UDP streams of specified bandwidth
    ◦ Measure packet loss
    ◦ Measure delay jitter
    ◦ Multicast capable
  • Cross-platform: Windows, Linux, Android, Mac OS X, BSD

iperf3 command options

$ iperf3 -v

iperf 3.0.7

Linux 3.16.0-31-generic #41~14.04.1-Ubuntu SMP Wed Feb 11 19:30:13 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Usage: iperf [-s|-c host] [options]
       iperf [-h|--help] [-v|--version]

Server or Client:
  -p, --port #                server port to listen on/connect to
  -f, --format [kmgKMG]       format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval #            seconds between periodic bandwidth reports
  -F, --file name             xmit/recv the specified file
  -A, --affinity n/n,m        set CPU affinity
  -B, --bind <host>           bind to a specific interface
  -V, --verbose               more detailed output
  -J, --json                  output in JSON format
  -d, --debug                 emit debugging output
  -v, --version               show version information and quit
  -h, --help                  show this message and quit

Server specific:
  -s, --server                run in server mode
  -D, --daemon                run the server as a daemon

Client specific:
  -c, --client <host>         run in client mode, connecting to <host>
  -u, --udp                   use UDP rather than TCP
  -b, --bandwidth #[KMG][/#]  target bandwidth in bits/sec (0 for unlimited)
                              (default 1 Mbit/sec for UDP, unlimited for TCP)
                              (optional slash and packet count for burst mode)
  -t, --time #                time in seconds to transmit for (default 10 secs)
  -n, --bytes #[KMG]          number of bytes to transmit (instead of -t)
  -k, --blockcount #[KMG]     number of blocks (packets) to transmit (instead of -t or -n)
  -l, --len #[KMG]            length of buffer to read or write
                              (default 128 KB for TCP, 8 KB for UDP)
  -P, --parallel #            number of parallel client streams to run
  -R, --reverse               run in reverse mode (server sends, client receives)
  -w, --window #[KMG]         TCP window size (socket buffer size)
  -C, --linux-congestion <algo>  set TCP congestion control algorithm (Linux only)
  -M, --set-mss #             set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay               set TCP no delay, disabling Nagle's Algorithm
  -4, --version4              only use IPv4
  -6, --version6              only use IPv6
  -S, --tos N                 set the IP 'type of service'
  -L, --flowlabel N           set the IPv6 flow label (only supported on Linux)
  -Z, --zerocopy              use a 'zero copy' method of sending data
  -O, --omit N                omit the first n seconds
  -T, --title str             prefix every output line with this string
      --get-server-output     get results from server

[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-

iperf3 homepage:


SysBench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load.

The idea of this benchmark suite is to quickly get an impression about system performance without setting up complex database benchmarks or even without installing a database at all.

Current features allow testing of the following system parameters:

  • file I/O performance
  • scheduler performance
  • memory allocation and transfer speed
  • POSIX threads implementation performance
  • database server performance

The design is very simple. SysBench runs a specified number of threads and they all execute requests in parallel. The actual workload produced by requests depends on the specified test mode. You can limit either the total number of requests or the total time for the benchmark, or both.

sysbench homepage:

Debug logging output

iPerf 3 Test: protocol: TCP, 1 stream, 131072 byte blocks, 60 second test, TCP MSS: 1448

Internal Test command: $ iperf3 -c --verbose --reverse --time 60

Internet Test command: $ iperf3 -c --verbose

Command used for memory testing: $ sysbench --test=memory run

Nested e2 testing details

Legacy hosted public cloud “virtual” infrastructure: slower due to the overhead of binary translation.

From E2 to E2 Test Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 51.5 GBytes 7.38 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 51.5 GBytes 7.38 Gbits/sec receiver
CPU Utilization: local/receiver 99.1% (2.0%u/97.0%s), remote/sender 15.7% (0.3%u/15.4%s)

From E2 to Internet Test Results:

[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 6.27 MBytes 5.26 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 5.85 MBytes 4.91 Mbits/sec receiver
CPU Utilization: local/sender 1.6% (0.6%u/1.0%s), remote/receiver 0.4% (0.0%u/0.4%s)

ubuntu@e2east:~$ sysbench --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!

Operations performed: 104857600 (292531.35 ops/sec)

102400.00 MB transferred (285.68 MB/sec)

Test execution summary:
total time: 358.4491s
total number of events: 104857600
total time taken by event execution: 271.5140
per-request statistics:
min: 0.00ms
avg: 0.00ms
max: 1.49ms
approx. 95 percentile: 0.00ms

Threads fairness:
events (avg/stddev): 104857600.0000/0.00
execution time (avg/stddev): 271.5140/0.00

Bare Metal e5 testing details

New “bare metal” environment: faster, with hardware-assisted nested virtualization support.

From E5 to E5 Test Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 78.0 GBytes 11.2 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 78.0 GBytes 11.2 Gbits/sec receiver
CPU Utilization: local/receiver 99.3% (0.6%u/98.8%s), remote/sender 8.0% (0.2%u/7.9%s)

From E5 to Internet Test Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 542 MBytes 455 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 542 MBytes 455 Mbits/sec receiver
CPU Utilization: local/sender 4.6% (0.3%u/4.3%s), remote/receiver 1.2% (0.0%u/1.2%s)

ubuntu@e5east:~$ sysbench --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
WARNING: Operation time (0.000000) is less than minimal counted value, counting as 1.000000
WARNING: Percentile statistics will be inaccurate

Operations performed: 104857600 (1342818.26 ops/sec)

102400.00 MB transferred (1311.35 MB/sec)

Test execution summary:
total time: 78.0877s
total number of events: 104857600
total time taken by event execution: 60.6087
per-request statistics:
min: 0.00ms
avg: 0.00ms
max: 0.49ms
approx. 95 percentile: 0.00ms

Threads fairness:
events (avg/stddev): 104857600.0000/0.00
execution time (avg/stddev): 60.6087/0.00

ubuntu@e5east:~$ sysbench --test=memory run --max-time=600 --debug=on --validate=on

Operations performed: 104857600 (1126890.18 ops/sec)

102400.00 MB transferred (1100.48 MB/sec)

Test execution summary:

total time: 93.0504s

total number of events: 104857600

total time taken by event execution: 60.6825

per-request statistics:

min: 0.00ms

avg: 0.00ms

max: 0.11ms

approx. 95 percentile: 0.00ms

Threads fairness:

events (avg/stddev): 104857600.0000/0.00

execution time (avg/stddev): 60.6825/0.00

DEBUG: Verbose per-thread statistics:

DEBUG: thread # 0: min: 0.0000s avg: 0.0000s max: 0.0001s events: 104857600

DEBUG: total time taken by event execution: 60.6825s

Next Steps – Pricing

I really wanted to include a detailed assessment and price comparison of Ravello versus other hosted cloud providers. Now that Amazon has announced per-second billing, it might be worthwhile to do a cost comparison for running similar workloads with various providers. But any cost comparison should also weigh ease of use and network feature capabilities.

The blueprint for this blog has 2 VMs, each with 4 GB RAM, 4 vCPUs, and 4 GB of disk space, at a total cost of less than $1 per hour of use.

Other Options

While trying to build similar network test labs with other providers, I have not found one that offers the same capabilities. Ravello provides the unique ability to pre-assign MAC and IP addresses, take snapshots, create blueprints, and easily share those with others using a built-in repo. Now with the improved performance and support from Oracle, it’s not just a better cost value, it’s a better solution overall.


Weigh the Iron Plate with Holes

Here’s a fun math problem for you iron workers (current and future) out there!

The formula for calculating the weight of iron is:

  • 1” x 12” x 12” = 40 Lbs

The dimensions of this plate shown in the picture below are:

  • 3’9” x 3’2” x 8”

The plate also has 13 holes with a diameter of 2.5”.

What is the weight of this plate?

See below for hints…
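As a hint on the mechanics (so you can still work it by hand), the calculation can be sketched in a few lines of awk. It takes the stated dimensions at face value: 3'9" = 45", 3'2" = 38", an 8" thickness, and 40 lbs per 1" x 12" x 12" slab as the density:

```shell
# Plate weight sketch; assumes the stated dimensions are exact
awk 'BEGIN {
  density = 40 / 144                      # lbs per cubic inch (1" x 12" x 12" = 40 lbs)
  plate   = 45 * 38 * 8                   # solid plate volume in cubic inches
  r       = 2.5 / 2                       # hole radius in inches
  holes   = 13 * 3.14159265 * r * r * 8   # material removed by the 13 holes
  printf "%.0f lbs\n", (plate - holes) * density
}'
```

The thirteen holes only shave off about 140 lbs of the multi-thousand-pound total, so the key step is converting the mixed feet-and-inches dimensions into a consistent cubic-inch volume before applying the density.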


PDCA and DevOps

What do PDCA, Toyota, and DevOps have in common?

DevOps – PDCA – Kaizen Computing – Rapid Iteration (aka The Scientific Method)

It seems like a hassle to have to watch a video and write down everything they say, but remember: if you get this right, you can hand it off so someone else can repeat these steps. Then you can go on to learn new things and solve new problems for your team.

1. Plan – Write down what you plan to do BEFORE you do it. This includes the configuration changes as well as a test plan to verify the change. Start simple and add details as you iterate. Many times someone else has already started a document showing how they would do something similar to what you want to do. This could even be a video from a blogger or vendor. Start with a copy and paste if you don’t have access to edit their procedure directly.
2. Do – Follow the steps you’ve written. If you need more detailed instructions, look up procedures and collect suggestions from colleagues (tribal knowledge).
3. Check – Things won’t always work as expected. Check the results. Experiment a little if needed.
4. Act – Finally, take action to adjust the initial plan. Wikis and issue trackers are excellent tools to help with this process.

Repeat this process over and over as you work throughout the day.

PDCA can be used by just one person implementing a new feature for an existing vendor project, but it is also useful for large team projects with many groups needing to work together. Lately many software companies work with a “fail fast, fail often” philosophy where success is measured by how many updates per day are made to the code base. For IT groups, the equivalent metric to watch would be wiki page updates.


Tim took his final flight on the evening of March 22, 2017; he passed away in his sleep surrounded by friends in La Jolla, California. Please share your memories with us here on this page. There’s also a Facebook page we can add you to.

Please consider making a donation here in Tim’s name:

Please note: If you are a Brocade Employee your donations will be matched.

Tim Braly was a frequent-flying, extremely dedicated member of Angel Flight West. Since he joined in 2012, he flew 131 flights and more than 1,000 hours to help people get the medical treatment they needed, consistently ranking among the most frequent contributors in Oregon and the other places he flew. He was known throughout the organization and by his passengers as selfless, kind, and generous. His family and friends have established this memorial page to pay tribute to his many contributions to Angel Flight West and its passengers.

From John: 

Tim’s plane, Ruby, bore the month/year of his birth and his initials: 9/72 TB

God Speed Tango Bravo

From Maria: 

May he always be remembered for his kind heart and the tremendous work he did with the Angel Flight organization… I wonder just how many flights of hope he actually flew.

Here’s an article he wrote in 2005 about his first 100 missions (pages 3-4). May we never forget that bright smile or the smiles he provided to those who needed it most.

“I often get asked why I choose to fly for AFW and donate my time and cost of flying to those in need. Selfishly, I’ll say it’s because I like to fly, so why not? But, that’s just us pilots being modest about the underlying sacrifices we accept in making a difference. It truthfully comes down to two things. First, I can’t think of a better way to donate to a cause that I can see 100% of my efforts going to make a difference in someone’s life. And second and most important, I lost my sister to breast cancer and lived first-hand the devastation and toll it takes on one’s life and the family around them. Flying for AFW has let me share in several cancer patients’ recovery efforts by reducing the stress and burden of their health care–and to that, I’m truly grateful.” Tim Braly 2015

Here’s a picture of Tim in his Army uniform (with last name Hoefling) from Feb 5, 1992 in the jungles of Panama – he was 19 years old.

From Iben: 

I could go on and on about how much Tim helped me out. Such a smart, caring, ethical person. The background image in this picture is from a briefing we attended where he wowed us with some new network automation solution he cooked up over the weekend. He interviewed me for the consulting gig I have with the Navy and has been a great mentor and inspiration to me and many others. Many people have dreams, but Tim was able to execute on his.

From Jon: Tim is and was a great man, a true friend, a kind heart, a giving soul & always generous with our most precious resource, time.

He lived an adventurous & wonderful life, and hopefully knew how many of us accepted & adored him just for being Tim.

Keep Flying my Friend, the World was a better place with you in it.