USB Hubs on BBB

Hey guys, I have been using the BBB more up here at school and found some interesting quirks worth sharing about using USB hubs on the host port of the BBB.  First, the hub and its devices have to be plugged in before power is applied, or they won’t be recognized.  For example, even if the hub was plugged in when power was applied, a flash drive plugged in later would not be recognized.  Second, I tried 3 hubs, all from Monoprice:

The first one did not work on Ubuntu 12.04, but the 2nd and 3rd did.  All three were recognized by Angstrom.

Posted in Uncategorized | Leave a comment

The 2013 Team

Exploring the Argonne Wilderness


Python in Userspace

All too often in Python, a small compile is needed to load a module, or superuser access is needed to add it to the Python path. These are the steps to set up a usable Python environment entirely within userspace. It is quite straightforward, and I am providing the steps
to save others the trouble of looking all over the place. (It is also possible to just include a module in your home directory, but because of the additional C libraries needed for netCDF I stayed away from just adding modules on.)

How to make a Python environment in userspace for a compute node:
I used /sandbox/ as my download folder and extracted there (around 10x faster than /homes). The prefix, however, should be on the NFS so you can take this Python with you.

# Make a local install folder in your home directory
# ($USER below is the username of the userspace user)
mkdir -p ~/local/
# Build Python (run in the extracted Python source directory)
./configure --prefix=$HOME/local
make install
# Build HDF5 on the NFS prefix (the --enable-hl flag is HDF5's; run in its source directory)
./configure --prefix=/nfs2/$USER/local --enable-hl --enable-shared
make check   # fails on a long-variable check; doesn't seem to hurt anything
make install


~/local/bin/pip install pycurl

# Build netCDF against the HDF5 install above (run in the netCDF source directory)
CPPFLAGS="-I/nfs2/$USER/local/include" LDFLAGS="-L/nfs2/$USER/local/lib" ./configure --prefix=/nfs2/$USER/local --enable-shared --enable-netcdf-4 --enable-dap
make install

# Now to install basemap
wget <basemap source URL>   # URL not given in the original post
~/local/bin/pip install matplotlib
# Install the GEOS library that basemap needs
cd geos-3.3.3/
./configure --prefix=/nfs2/$USER/local
make; make install
export GEOS_DIR=/nfs2/$USER/local
# Finally, install basemap from its extracted source directory
~/local/bin/python setup.py install
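Once everything is built, the shell needs to be pointed at the userspace install. A minimal sketch, assuming the same prefixes as above and that the dynamic linker also needs to see the NFS-side shared libraries (HDF5, netCDF, GEOS):

```shell
# Put the userspace Python first on PATH, and let the dynamic linker
# find the shared libraries installed under the NFS prefix.
export PATH="$HOME/local/bin:$PATH"
export LD_LIBRARY_PATH="/nfs2/$USER/local/lib:$LD_LIBRARY_PATH"

# Sanity check: 'python' should now resolve under ~/local/bin
command -v python || true
```

Putting these two exports in ~/.bashrc makes the userspace environment persist across logins.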

HS cam testing

I experimented a bit with the HS cam on Friday (people may have noticed it aimed across the aisle at the 3 colorful imaging targets). I won’t claim I learned everything, but I did find some useful things:

The image’s brightness is controlled by the “integration” variable under the camera setup menu. I successfully tested integration values up to 1024 without noise-induced degradation of image quality. I suspect the integration number is the exposure time in milliseconds, given its effect on the lines/second framerate.

I principally wanted to find out how wide we can open the aperture on the lens. I won’t guarantee that this applies equally to images focused from infinity, but there is a clear change in image quality:

Based on these results (particularly considering the readability of the ruler), I wouldn’t recommend opening the lens up past f/4 – possibly f/5.6 if brightness permits.

My table of mean pixel values from dark frames (normalized so a readout of 4096 maps to 1.0) is as follows:

Noise is essentially independent of integration time but (not surprisingly) strongly dependent on gain. It’s not quite <e_noise> = sqrt(g), but it’s close. I’ve found that while one can enter any number into the “gain” box under camera properties, only numbers from 1 to 56 do what’s expected. I’d guess that above that the internal PGAs are rolling back to g=1.

The noise contains a fairly strong nonrandom component:

The lines wander across the image in a pattern that often resembles part of a sine wave. I wouldn’t hazard a guess at the origin, and it’s moot anyway since we can’t take the camera apart to find out or do anything about it.

WARNING: There’s also a nice “trick” the imager software plays – it automatically rescales the captured image for display so that max = white, however badly underexposed it is. To actually get 12-bit fidelity, the integration variable has to be adjusted so that the max value of a white region on the spectrogram approaches 4000.


.cube binary file format.

To amuse myself this morning (and because I may end up having to write these back out), I read through the 32K .cube header that the SOC guys are kinda ‘meh’ about. It contains a good bit of useful information – enough to load and understand the hypercube without the assistance of a .hdr file:

0x0000-0x0015: 22-byte (apparently freeform ASCII) header (?)
0x0016: 4-byte, samples/column [520]
0x001c: 4-byte, columns/plane  [696]
0x0022: 4-byte, planes/cube    [128]
0x1F15: Mystery; similar but distinct per file; length ~1524 B; guessing this is the serialized form of a complex (MIDIS perhaps?) internal data structure.
0x2618: 0x00 here apparently means uint16 data, 0x01 means float32 data. I think.
0x38C3: [float 2.8446] repeats 768 times, identical per file (how pointless…)
0x4987: [int 1] [int 2] [int 3] [float 373.96] [float 4.981] [float .00259] These appear twice, defining wavelength data
0x5563: [int 1] [int 2] [int 3] [float 373.96] [float 4.981] [float .00259] They define the 0th through 2nd coefficients of a polynomial (see below)
0x8000: 32K offset, data begins
0x5863FFF: Last byte of file if the data are uint16s.
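The fixed fields are enough to sketch a reader. A minimal Python sketch, assuming little-endian byte order and the guessed meaning of the dtype flag at 0x2618:

```python
import struct

def parse_cube_header(hdr):
    """Pull the known fields out of the 32K .cube header (hdr: bytes).

    Little-endian is an assumption, and the flag at 0x2618 is the
    guessed one (0x00 -> uint16, 0x01 -> float32)."""
    samples, = struct.unpack_from('<i', hdr, 0x0016)  # samples/column [520]
    columns, = struct.unpack_from('<i', hdr, 0x001c)  # columns/plane  [696]
    planes,  = struct.unpack_from('<i', hdr, 0x0022)  # planes/cube    [128]
    dtype = 'uint16' if hdr[0x2618] == 0 else 'float32'
    return samples, columns, planes, dtype
```

As a consistency check: 520 * 696 * 128 * 2 bytes + the 0x8000 header puts the last byte at exactly 0x5863FFF, which matches the file size above.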

It looks like the second definition (0x5563) of the wavelengths is the one that actually matters (this appears in a float-point cube written by MIDIS-CalIDS).

Some other junk that has no discernible significance; none of these appear in the MIDIS-CalIDS file:
0x0C7D: write 0x4B (‘K’)
0x54C3: write “None”
0x5513: write “None”

I’m still missing the BPP (presumably it’s somewhere in the slab at 0x1F15). We know it from first principles so far, but hardcoding is bad, mmkay, and I’d like whatever reads the file to be told that this is in fact 16bpp data so I can use the full data size when storing things back. Why let bits go to waste?

The data slab is in the form of uint16s. A stride of 1 moves down (?) the current column, a stride of Ny moves to the next wavelength (ordered shortest to longest), and a stride of Ny*Nlambda advances one row to the right (?).
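With that stride ordering, NumPy can map the slab directly. A sketch, assuming little-endian uint16 data and the header dimensions above (the function and argument names are mine):

```python
import numpy as np

def load_cube_slab(path, Ny=520, Nlambda=128, Nx=696):
    """Map the data slab after the 32K header, assuming uint16 data.

    A C-order shape of (Nx, Nlambda, Ny) reproduces the strides
    described above: stride 1 walks down a column, stride Ny steps to
    the next (shortest-to-longest) wavelength, and stride Ny*Nlambda
    moves one column to the right."""
    return np.memmap(path, dtype='<u2', mode='r',
                     offset=0x8000, shape=(Nx, Nlambda, Ny))
```

cube[i, k, j] is then column i, wavelength plane k, sample j down the column.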

Regarding the chunk at 0x1F15: it’s definitely some sort of serialized structure or encoded metadata (the cube resolution appears at 0x1FE1), but it may have to do with the interface as well (115200, the serial interface’s speed, is encoded at 0x20AD).

More on 0x4987:
The three numbers give the first three coefficients in a polynomial expansion of the wavelength incident on position x of the imaging plane (the exact grating equation is m λ / d = x / sqrt(x^2 + L^2) for positive integer m, wavelength λ, grating spacing d, perpendicular distance from the optical axis x, and distance along the axis L), such that, labelling them c0-c2 and labelling the planes by integers n = 0 … 127,
λ(n) = c0 + c1*n + c2*n*(n-1)
This agrees with the .hdr files to the limit of the numbers printed therein.
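Evaluating that polynomial over the 128 planes is trivial; a sketch using the coefficients read out of the example header above:

```python
def wavelengths(c0, c1, c2, nplanes=128):
    # lambda(n) = c0 + c1*n + c2*n*(n-1)  -- note n*(n-1), not n^2
    return [c0 + c1 * n + c2 * n * (n - 1) for n in range(nplanes)]

# Coefficients from the 0x5563 block of the example file:
lams = wavelengths(373.96, 4.981, 0.00259)
```

With these numbers, plane 0 sits at 373.96 (presumably nm) and plane 127 comes out at about 1048, i.e. visible through near-IR.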

More on 0x2618:
It appears that this flag may refer to data ordering as well. When it is set to 1, data are stored with x varying fastest, then z, then y (while acquisition-system cubes are stored with y varying fastest, then z, then x).

Publishing this, if nothing else, as a reference for my future self.

– Erik


BBB Compression Benchmark

File used: I15_L0-511_13-7-2012_10.29.41.fescue.2.1.12.txt
File size: 324.5 MB
File type: .txt


Type: gzip – Lempel-Ziv coding
Size after compression: 75.397 MB
Compression Ratio: 4.3033
Time to compress: 3m 9.444s
Approx Avg. CPU Utilization: 98-99%
Size after decompress: exact same as start
Time to decompress: 0m 43.989s
Approx Avg. CPU Utilization: 40-50%

Type: bzip2 – Burrows-Wheeler
Size after compression: 46.9819 MB
Compression Ratio: 6.906
Time to compress: 4m 44.802s
Approx Avg. CPU Utilization: 98-99%
Size after decompress: exact same as start
Time to decompress: 2m 17.088s
Approx Avg. CPU Utilization: 98%
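For anyone repeating this, the runs above amount to the following. This is a sketch – both tools were left at their default compression levels, and a small synthetic file stands in here for the 324.5 MB capture:

```shell
# Generate a stand-in input (the original was a 324.5 MB .txt capture).
seq 1 200000 > sample.txt

# Compress with each tool at defaults, keeping the input (-k needs gzip >= 1.6).
time gzip  -kf sample.txt                                   # Lempel-Ziv
command -v bzip2 >/dev/null && time bzip2 -kf sample.txt    # Burrows-Wheeler

# Compression ratio = original size / compressed size.
ratio() { awk -v o="$(wc -c < sample.txt)" -v c="$(wc -c < "$1")" \
              'BEGIN { printf "%.4f\n", o / c }'; }
echo "gzip ratio:  $(ratio sample.txt.gz)"
[ -f sample.txt.bz2 ] && echo "bzip2 ratio: $(ratio sample.txt.bz2)" || true
```

The `time` output's "real" line corresponds to the compress/decompress times reported above; decompression is timed the same way with `gzip -d` / `bzip2 -d`.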

Final Notes
1. If you are going to run this kind of utilization as a sustained application, it would be smart to heat-sink the power-supply chip and the CPU, as both got noticeably hotter than normal. Further testing could determine whether a new thermal equilibrium is reached or whether the temperature keeps climbing until thermal shutdown is triggered.


Camera calibration status

I have been working on the camera calibration routines. The current code is on the repo. Once it opens, press F3 to turn console echo back on (which for some inscrutable reason I turned off by default) and enter ‘help’ to get a list of recognized commands. It can also be given a filename on the command line whose lines it will execute as if they were typed, which means it can be scripted (fair warning: any missing input parameters will make it fall back to the GUI for input and block).

The calibration program opens a window and displays any loaded hyperspectral data using 3 views; the red-outline view is an attempted “as-seen” render that integrates the spectral data weighted by the human eye response. The code also uses what is apparently called ASTM G173, which tabulates the spectral density of all radiation that reaches Earth’s surface at any meaningful intensity.

My calibrator uses this to assert that “pixel value / irradiance should be constant” once properly calibrated. It can be asked to output a chart of this whole matter (example below).
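The constancy assertion (the measured pixel value divided by the reference irradiance staying flat across wavelength) is easy to sketch. The function and argument names here are hypothetical, not the calibrator's actual API:

```python
import numpy as np

def flatness_curve(pixel_value, irradiance):
    """Hypothetical sketch: per-wavelength-plane ratio of measured
    pixel value to reference (ASTM G173-style) irradiance, normalized
    to its mean. A well-calibrated camera gives a curve of all 1.0s;
    calibration error shows up as departures from 1."""
    r = np.asarray(pixel_value, float) / np.asarray(irradiance, float)
    return r / r.mean()
```

Charting the return value against wavelength gives the kind of plot described here.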

The dip caused by O2 absorption at 760nm (referenced by the SOC engineers when we asked about landscape imaging) is visible, as are several others. There are open questions about the accuracy of the irradiance functions, and I have not read about ASTM G173 in great detail. If we decide this truly matters, we can probably arrange to photograph the card under a reference spectral source, then compare a daylight picture to see what the actual spectrum arriving in Lemont on a sunny day looks like.

In fact, using the calibration card post-calibration for this purpose may yield useful information on the arriving sunlight, beyond simply total insolation.

Currently the code does its work in simple C for-loops, and the delay on startup and when rendering eye images is very noticeable. Once I am satisfied that the code does what is wanted, I intend to port it to CUDA and to implement a 3D viewer that strobes through the whole cube, treating the plane number as a third spatial dimension. For “real” visualization along these lines, it will probably be more fruitful to export the hypercubes to ParaView format.

– Erik


Hyperspectral Camera Meeting

Interface Notes

  • The configuration is two cameras and a scanner.
  • The scanner uses USB-to-serial to control its movement, but they weren’t sure from which company.  They believed it is one of the bigger names, so no problems here.
  • They recommend using a Windows-based laptop or a single-board computer running Windows like this one:
  • They only support Windows because the cameras are from Lumenera and their drivers are written in C++/.NET.
  • The control, calibration, calculations, and GUI are all handled in the computer-side software; there isn’t any closed loop inside the camera.
  • They currently have their software written in C++ MFC, but are interested in porting the code to C#/.NET – most likely because their driver troubles stem from Lumenera writing their drivers in .NET.

What it would take to break the windows shackles

  • Implement the USB-to-serial driver for the scanner – depending on the company, this might already be done.
  • Implement a USB driver for the cameras.
    • Lumenera’s documentation talks about updating their COM object, which means they are using a standard Microsoft class for their driver.
    • Most likely it is written on the Windows Image Acquisition (WIA) interface.  Keep in mind this is an educated guess, but it is the most logical choice.
  • Replicate SOC’s control, calibration, calculation, and (optionally) GUI code – currently written for Windows – on Linux, ideally cross-platform.
  • Wrap the whole thing in a CLI and it is yours.

Additional Hyperspectral Cube Information:

  • The company does not have a standardized algorithm for data compression. They usually refer people to ENVI or Matlab.
  • Cubes of data come in one size: 100MB/cube.

Meeting Notes 7-10-13

-multiple feeds of region weather streams coming in
-100 data points arrive 1 minute after they are generated, which is every 5 minutes
-contours using matplotlib in python

-analyzing hyperspectral data
-correcting data for human eye response
-white calibration card is saturated

-added gauge view for ANL tower data
-ftp client to send data between VMs, integrates with CHEF
-create entire weather workflow in a single click with CHEF integrated in Phantom
-going to test RSYNC

-have assembly code working on the beaglebone
-writing motor driver on Programmable Real Time Unit
-have gps working
-going to be putting gigapan back together

-worked on linear dynamic models for hyperspectral data
-dimension reduction using PCA or factor analysis
-existing package can be slow for big data set
-using PCA makes it difficult to keep the connection between the data and the physical meaning

-Nitrate meter company said the hydraulic press “should” work
-blotting strip can be used to stretch a sample



A Test Drive

This is where we would put news items.  The real menus and submenus go other places.
