In the vein of the previous post, here's a list of Google Chrome extensions that are very useful:
And if you have a Kindle, this extension is excellent as well:
Or as an alternative:
Everybody has them: Firefox extensions they can't live without. At least, the one percent of the world population that has a PC, and the tiny fraction of those who care about Firefox extensions.
Without further ado, here's my list:
To install OpenOffice in a home directory on Debian, take the following steps:
Download the tarball with .deb packages from OpenOffice.org
Unpack it in your home directory:
$ tar xfz OOo_3.2.1_Linux_x86_install-deb_en-US.tar.gz
Change into the directory with the .deb packages:
$ cd OOO320_m18_native_packed-1_en-US.9502/DEBS
Unpack these to your home directory with:
$ for deb in *.deb; do dpkg -x $deb ~; done
You'll now have a new subdirectory named 'opt' in your home directory. All executables are in the ~/opt/openoffice.org3/program subdirectory. Add it to your path to easily run OpenOffice, or create custom icons in your Gnome panel (or other favorite desktop environment).
Sometimes you want to adjust your settings in .bashrc, depending on which host you are logging in from. The who command reveals the host, and then we use cut (from GNU coreutils) to get the correct field.
FROM=$(who | grep `whoami` | cut -f2 -d"(")
case $FROM in
    chara*)
        # Enter your settings here
        set -o vi
        ;;
esac
Useful for those shared accounts which no IT department seems to admit to using, but which come in mighty handy sometimes!
A colleague of mine recently went to AHS 2010, one of a series of annual conferences organized by NASA, ESA and the University of Edinburgh. Topics include on-chip learning, on-the-fly reconfigurable FPGAs, et cetera. This year, the conference took place in Anaheim, California, USA (south of LA).
Some points from my colleague's presentation:
At my dayjob, we have created an application for sensor readout and control. We are creating a software design to support Python for scripting, analysis and plotting, besides the already present combo of Perl for scripting and IDL for analysis.
The list of steps comes down to:
What we really want is that all of this happens in Python:
The tricky part is that the old situation allows plots to be configured in advance. The disadvantage is that this needs a bunch of glue code and doesn't allow for version control; the advantage is that the plots are defined graphically and don't need any scripting.
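As a rough idea of the direction, a plot that used to be clicked together in the GUI could instead live in a small, version-controlled Python script. The sketch below assumes matplotlib and a made-up readout object; nothing here is the final design.

# Sketch: a pre-configured plot as a version-controlled Python script.
# 'samples' is a hypothetical readout result with .time and .temperature
# attributes; matplotlib is assumed to be available.
import matplotlib.pyplot as plt

def plot_housekeeping(samples):
    fig, ax = plt.subplots()
    ax.plot([s.time for s in samples], [s.temperature for s in samples])
    ax.set_xlabel("time [s]")
    ax.set_ylabel("temperature [K]")
    ax.set_title("Housekeeping: sensor temperature")
    fig.savefig("housekeeping.png")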
SpaceWire (short: SpWi) is a standard for connecting the parts of a satellite for data retrieval and control. The speed ranges from 2 to 400 Mbit/s. Missions such as SWIFT and the Lunar Reconnaissance Orbiter use SpaceWire.
(Image courtesy of STAR-Dundee)
The signal is sent using LVDS, low voltage differential signalling. It's a full duplex line, with two pairs each way. The standard defines the cabling in about eight pages.
The encoding is done using Data-Strobe encoding. Tx and Rx do not share a common clock. The advantage is that you're resistant to clock skew. The disadvantage is that you now have two clock domains in your FPGA.
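To illustrate the Data-Strobe idea: exactly one of the two lines changes per bit period, so the receiver can recover the clock as D XOR S without sharing a clock with the transmitter. A minimal Python sketch, ignoring the reset state the standard prescribes:

def ds_encode(bits):
    # Data-Strobe encoding: the strobe toggles whenever the data bit
    # does NOT change, so exactly one line changes every bit period.
    data, strobe = [], []
    prev_d, s = 0, 0
    for d in bits:
        if d == prev_d:
            s ^= 1          # data unchanged -> toggle strobe
        data.append(d)
        strobe.append(s)
        prev_d = d
    return data, strobe

def ds_clock(data, strobe):
    # The receiver recovers one clock edge per bit as D XOR S.
    return [d ^ s for d, s in zip(data, strobe)]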
There are four- and ten-bit tokens where the first bit is a parity bit and the second is the indicator for the token length. The four bit tokens are for control purposes, and there are four possible tokens. Notice there is no defined packet content; nor is there a defined length. In OSI terms, SpaceWire describes the physical and datalink layer.
An active SpaceWire link is never silent: between data, the transmitter sends NULL codes. These can also be sent between the bytes of a packet. The standard also defines a time code, a special data packet for time sync purposes. Its contents are not defined in the standard. This packet gets priority over data, so you can send it anytime (yes, even right in the middle of a packet). For flow control, the receiver sends flow control tokens (FCTs) to the data sender. For each token, the sender may transmit eight characters. These tokens can be sent ahead. The FCT is one of the four control tokens. For link management, a handshake is defined. For parity errors there is no retry mechanism.
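The credit bookkeeping on the transmit side is simple enough to show in a few lines. This is only a sketch; the buffer-size limits and error handling from the standard are left out.

class TxCredit:
    # Each FCT received from the far end grants credit for eight
    # data characters; FCTs may arrive ahead of the data they cover.
    def __init__(self):
        self.credit = 0

    def fct_received(self):
        self.credit += 8

    def can_send_char(self):
        return self.credit > 0

    def char_sent(self):
        assert self.credit > 0
        self.credit -= 1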
Although SpaceWire is point to point, it's possible to create networks; you then stick the packet route (the path) in address bytes in front of the packet, and like the old bang-path addresses, these are removed by each router at each hop. Thus routing is simple and defined at a relatively low level.
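Path addressing is easy to picture: each router takes the first byte of the packet as the output port and strips it before forwarding. A toy sketch (the port numbering and payload bytes are made up):

def forward(packet):
    # Path addressing: byte 0 selects the output port and is removed
    # before the packet is passed on.
    port, rest = packet[0], packet[1:]
    return port, rest

# Example: route a packet over two hops
packet = bytes([3, 1, 0x42, 0x43])
port, packet = forward(packet)   # first router sends it out on port 3
port, packet = forward(packet)   # second router uses port 1; the payload remains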
Since there are basically two types of data you'd want to send (sensor and housekeeping data), there are two protocols. RMAP, the remote memory access protocol, is most useful for housekeeping purposes. STP (Streaming Transport Protocol) is better for sensor data. In the past, SRON used CCSDS where RMAP over SpaceWire is now used. STP is meant for bulk transfers. The packet overhead is lower than with RMAP because the stream is first set up, then explicitly closed when no longer needed.
SRON has set up a test project with the following goals: two SpWi ports, a proper Linux driver and 8 MByte/s sustained data throughput via a PCI card. We've tried boards from Aurelia and STAR-Dundee. There are also boards from 4Links and Dynamic Engineering. Neither the Linux nor the Windows drivers were able to reach the required speed.
SRON has also looked into a SpaceWire IP core, which had to be vendor independent (Actel and Xilinx), implemented with an AMBA interface (for inclusion in a LEON core) and available in VHDL (not just a netlist). And reasonably priced. ESA has this available.
In a test setup with a PCI card and an Actel board, we could get up to 6 MByte/s due to a slow Linux driver. Yes, that's 6 megabyte of data per second. A better solution was to put in an intermediary board with a LEON core that translated to Gigabit Ethernet.
There is also a SpaceWire Light IP core via the OpenCores project.
If you want to use Ubuntu on an older PC, then memory might be tight. I recently got an HP/Compaq 7100dc which has only 512 MB of memory, and it will be used for light surfing plus web-based e-mail. It does not have an attached printer.
The following command removes a number of services which are superfluous in the above situation:
$ sudo apt-get remove bluez brltty pcmciautils speech-dispatcher \
    apport cups system-config-printer-gnome evolution
Explanation: this removes support for Bluetooth, braille input devices, laptop extension cards (PCMCIA), text-to-speech, crash reporting (to Ubuntu), printing, and the memory-hungry e-mail/calendar client Evolution.
If you are knowledgeable about security, you can make the decision to remove AppArmor. More information here: AppArmor in Ubuntu.
$ sudo apt-get remove apparmor
Also, on such a machine it is wise to turn off all visual effects by going to menu System, Preferences, Appearance. Switch to the tab Visual Effects and select None, then click Close. Explanation: this switches your window manager from Compiz to the much lighter Metacity.
The above-mentioned procedure saved me 30 MB, going from 125 MB of used memory to 95 MB. To find more memory-hungry processes, use the following procedure. First, find a process:
$ ps -e -o rss,vsz,cmd --sort=rss | sort -n -r | head
Then find the path to the process:
$ whereis <processname>
If you have a path, find the corresponding package:
$ dpkg-query -S /path/to/programname
Then find out if you really need this package:
$ dpkg -s <packagename>
If you don't need it, you can remove it:
$ sudo apt-get remove <packagename>
At work, we are currently using Perl and IDL alongside our EGSE server software (written in C++). For controlling the electronics, we use Perl. For visualizing the electronics readouts, we use IDL.
For different reasons, we are looking to replace both parts with Python equivalents. This involves both a port of the software as well as a migration path. It also offers the chance to do a clean, object-oriented rewrite which could mirror the C++ libraries.
Perl basically provides scripted control/readout of sensor equipment. These scripts can be edited and run from within the EGSE, but they can also be run from the commandline.
IDL, however, is more tightly integrated with the EGSE. It is compiled along with the EGSE. It listens for data requests, and the data is analyzed, plotted and transported back to the controlling Perl script.
Besides plots made by IDL, it's also possible to create plots with the EGSE software itself. We have to look at how we want to let these co-exist with the Python plotting facilities.
We will create a design document where we look at the following items:
It's pretty interesting to dive into the situation of recovering from unexpected reboots. Our usual lab setup consists of three parts:
Any of these could suffer unexpected power loss and subsequent power restore. The basic question is: what do we handle in the way of recovery?
For lots of things, it's necessary to maintain state. An example: you are a scientist and use the above setup to configure and test your sensor. You leave the lab, and then the PC unexpectedly reboots because a system administrator mistakenly rebooted it remotely.
When the EGSE software automatically starts again, should it attempt to initialize the biasing board? Probably not -- you may be running a test and the sensor settings should not be changed.
But then again, there is also the situation of an expected power-up. If you want your electronics to always be initialized upon normal startup, you have to differentiate between the two.
Now there's complexity: both the EGSE and the Controller board will have to maintain state. Any discrepancies will have to be resolved between the two. In the end, it might be much simpler to just say that we do not support automatic initialization when the Controller board comes online.
Choices, choices...
When I asked a colleague for some criticism today, I got additional pointers and ideas on my design sketch. Some concepts were implicit, but become clearer when mentioned explicitly:
The Controller board will carry state for the equipment that's in the rack, but since a (possibly accidental) power-down of the rack would lose state, the previously mentioned discovery mechanism still has to be created.
The software will also get a lot simpler if we assume there is some intelligence in the rack. Thus, we can assume that a future rack will perform the following functions:
He also pointed out that it's worth thinking about whether we should model the rack itself, perhaps as a class called RackDriver. The slots in the rack were previously left out of the model, because they did not have any meaning for the software. This now changes, since we assume the rack has some intelligence.
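As a first sketch, such a RackDriver class might look like this in Python. The method names and the rack query are my own assumptions, not an agreed interface:

class RackDriver:
    # Hypothetical model of an intelligent rack: it knows which slots
    # are occupied and which board sits in which slot.
    def __init__(self, bus):
        self.bus = bus          # communication link to the rack
        self.slots = {}         # slot number -> board identification

    def scan_slots(self):
        # Ask the rack for its occupancy; the query itself is assumed.
        self.slots = self.bus.query_occupancy()
        return self.slots

    def board_in_slot(self, slot):
        return self.slots.get(slot)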
So far, the custom software we built at SRON assumed that users would manually configure the software when any hardware was replaced, and that there was only one electronics board present. The following is a preliminary design on the software changes required to support an array of boards that is placed in a standard format rack.
The basis is that we are developing standard electronics boards for certain tasks. For instance, a standard design for an electronics card that contains a DAC and an ADC. These cards must work in array-like structures, for instance a standard 19 inch subrack which contains a number of these standard electronics boards. This could be used for biasing a sensor, reading out a sensor, housekeeping duties and PC communication duties. Such a rack would consist of a number of slots, each of which consists of a rail that guides an inserted board to the backplane. This backplane provides power and the connection to the software. The central requirement is: the user should be able to add and remove boards without complex procedures. Any procedures such as board-specific power-up/power-down sequences should be handled automatically.
The setup thus consists of two parts:
To support the above sketched requirements, we can recognize several use cases in this setup:
The user must be able to add a board to the rack or remove one, and the software should detect this. Also, most boards must be initialized in some way. Thus there must be hooks that run a particular script when hardware changes. This also means that the hardware must actively identify itself, let the script take care of this, or give the software some uniform way of checking this. More on this later.
Replacing a board, or moving it from one slot to another, can be covered by a simple remove/add action.
Since the hardware and software can be powered on and off independently, both situations must be covered. Thus the software must have some sort of discovery mechanism when starting. The hardware must have some way of rate limiting if it actively advertises the adding or removing of a board. More on this later.
There are two possible ways in which a rack is powered down: expectedly and unexpectedly. The software does not need to be adapted either way. In the case of an expected power down, there should be a project-specific power down script. In the case of an unexpected power down, it should be determined whether the project needs a way of detecting this.
When the EGSE is powered up, it should see whether a rack is connected and if so, a discovery mechanism should see what boards are present. More on the discovery mechanism later. When the ESW is powered up, no particular actions are necessary.
There are two possible ways in which the EGSE is powered down: expectedly and unexpectedly. The software does not need to be adapted either way. In the case of an expected power down, there should be a project-specific power down script. In the case of an unexpected power down, it should be determined whether the project needs a way of detecting this.
The ESW can also be powered down, either accidental or as per request. There is no difference between the two, since the ESW functions as a pass-through and does not maintain state.
For the above use cases, the software obviously requires a constant, up-to-date register of all available boards plus their addresses. The following objects can be found in the use cases: rack, slot, board. A rack is divided into slots. A slot contains a board. Typically, racks can have shelves, but for now we assume there's only one shelf. Also, racks are contained in a cabinet, but again, there can be only one rack for now.
The current requirements do not necessitate that the software exactly knows which slots are occupied. Thus, this concept is currently not taken into account. That leaves us with the following classes:
There are two options for addressing. Currently, all boards have an address pre-programmed into the FPGA. This is fine in a situation where we can manually assign each board a unique address. The software will then simply use a discovery mechanism where a dummy request is sent to each possible address. When a reply is received, the board is added to the list of present boards. Discovery must be quick since it inhibits other usage of the bus, and it is done periodically. Thus the most logical place to run the discovery is probably the ESW.
But when using multiple off-the-shelf boards, it is much easier to let the boards actively announce that they were inserted, and let the software hand out addresses. The software still needs a discovery mechanism in case the software is brought down for some reason. This can be the same as previously mentioned.
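A discovery pass could be as simple as the following Python sketch; the bus API and the timeout value are assumptions, not an existing interface:

def discover_boards(bus, possible_addresses, timeout=0.05):
    # Send a dummy request to every possible address; every board that
    # answers is added to the list of present boards. Keep the timeout
    # short, because discovery inhibits other use of the bus.
    present = []
    for address in possible_addresses:
        try:
            bus.dummy_request(address, timeout=timeout)
            present.append(address)
        except TimeoutError:
            pass
    return present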
In the first release:
For version two, we see the following points:
For version three, we see the following points:
We got a demo from the Coverity people. We ran their tool on our code base in advance. Via a WebEx session we got an explanation of the results, but first we got an overview of the company and their projects since some of the team were new to this stuff.
It's a pretty young company, founded less than ten years ago, and their aim is to deliver products that improve the quality of your software. Clients are in the medical and aerospace sectors. Wikipedia article on Coverity. They have 1000+ customers.
From the web-based Integrity Center software, several tools can be controlled. One of them is static analysis, called the Prevent tool. The tool identifies critical problems, not the more trivial things like style compliance et cetera.
Since bugs are cheaper to fix in development rather than in the field, this gives the user time and cost savings.
The software checks the compiler calls that are made when you do a build (via make) and then works on the code in the same way. It's not a replacement for unit tests. After running, a database of the results is written and there is a web frontend where you can read out the database.
The screen shows a number of defects, with filter options at the left. When clicking on a defect, you can see the code as well as the classification of the defect. Along with the classification, there is a short explanation of this type of issue. Clicking further will also give simple examples so you understand the defect better.
Each defect can be assigned to a certain team member. We have already invested in using Traq so I'm not so sure that's useful.
We had questions about finding concurrency problems. Coverity can help with this, but they support pthreads out of the box. Since we use QThreads, we would have to make a model for that library. However, since we have the code available (Qt is open source) and it uses pthreads underneath, it's not a problem and Coverity will be able to pick it up automatically.
Besides the existing checks, it's possible to add your own checks. Perhaps you want to enforce a certain way in which you use an external library.
The software tries to be smart. For example, sometimes you write a clever construct that would usually trigger an error. Coverity uses heuristics and will not report it if the rest of the code base shows that this is not something worth reporting.
We closed off the demo with a discussion on licensing. The account manager teams up with a technical consultant and together they pretty extensively work on the requirements and resulting cost savings. From that, the price is derived. There are other licensing models however.
If you're on Debian or Ubuntu Linux and you want to send a quick e-mail from a Perl script, use the Email::Send module (this module has been superseded by the Email::Sender module, but that one doesn't seem to be present in the Debian Stable package repository yet).
First, install the appropriate packages:
$ sudo apt-get install libemail-send-perl
Then use the following snippet:
use Email::Send;
my $message = <<'__MESSAGE__';
To: bartvk@example.com
From: bartvk@example.com
Subject: This is a mail from a Perl script

This is the body of an e-mail from a perlscript
__MESSAGE__

my $sender = Email::Send->new({mailer => 'Sendmail'});
$sender->send($message);
Now go and use this wisely, my young padawan. Some observations:
We have a DT-470 temperature sensor in the cryostat of the project I'm currently working on. The problem is that our software is displaying the wrong readout. I'm trying to figure out how to display the correct value in Kelvin.
I've got the following to work with:
The software has the ability to apply a polynomial to a raw value (i.e. a value that's read out from the electronics), as well as apply a user-configurable function to a value. The latter is usually used for convenience, for example when the electronics give us a negative value and we'd rather see a positive value.
In this case, the polynomial is applied to correct the value for the way our electronics influence the raw value. Then, the user-configurable function is applied, which in this case is the polynomial that follows from the data sheet.
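In code, the two stages would look roughly like the Python sketch below. The coefficients are placeholders, not the real values; the real ones come from our electronics characterization and from a fit of the DT-470 calibration curve in the data sheet.

import numpy as np

# Placeholder coefficients -- NOT the real values.
electronics_correction = np.polynomial.Polynomial([0.0, 1.0])       # raw -> corrected voltage
datasheet_calibration = np.polynomial.Polynomial([300.0, -100.0])   # voltage -> Kelvin

def raw_to_kelvin(raw_value):
    # 1. correct for the influence of our readout electronics
    volts = electronics_correction(raw_value)
    # 2. apply the sensor calibration polynomial from the data sheet
    return datasheet_calibration(volts)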
So the steps are: