Hi Bart!
I was browsing around a bit on your weblog. That plot, is it made with Python?
By the way, you've collected a lot of useful info in your logs.
regards, Leon
-- Anonymous 2007-05-12 13:56 UTC
After running the Fiske macro for the setup part, we have a nice value for the FFO Control Line (CL) current. Funnily enough, when we use this value, the macro immediately returns, saying that setting this CL will make the FFO voltage higher than requested.
Thinking about it, this can either be a measurement error or something else. I tried to rule out measurement error by running the 'measure offsets' routine.
Later, I found out several things. Firstly, the FFO bias and CL weren't set to 0 before attempting to run the Fiske macro. This is important because otherwise with a second scan we would get different results. This turned out to be the culprit.
The next problem was that the macro reported not being able to find the FFO voltage start. After checking the output of the macro, I found this funny: the macro reported that it tried to use the previously found settings.
In other words: the previous run found a certain voltage when setting FFO bias and CL. When we try again with a slightly lower FFO bias, it fails -- while the physics tells me this shouldn't happen.
I tried upping the Find Step Up value, but this didn't help. After discussion and a cup of coffee, it turned out that I wasn't making the same settings at all: the FFO start voltage in setup step 2 is NOT the same as the FFO start voltage in setup step 1...
When that was done, I still found that the macro returned the value 1.0237 as the first result, while I had put the limit at 1.0231. That was beside the point, but I needed many more values, so I made the FFO CL scan step not 40 times bigger than the normal scan, but 10 times. This resulted in not 13 pairs (of FFO voltage/CL current) to choose from, but up to 32. This again was pushing the limits of the macro, since the macro is limited to 32 pairs and if it hits that limit, it'll return an error code. So I switched it back to 20 times, which seems enough for now.
Still I encountered the following:
FFO CL  | FFO Bias | Resulting FFO voltage
40.1725 | 33.0     | 0x058B
40.1725 | 32.9     | 0x0597
So: setting the same FFO CL but a lower FFO bias resulted in a higher FFO voltage! That's physically not possible, so something had to be going on here. After adding debugging... the problem disappeared. This is good for speed, but not so good for understanding.
The next problem seemed to be that the macro output was misinterpreted; there are different result codes for reaching the right FFO voltage: one for reaching it after stepping up, and one for reaching it without stepping up. The latter means that the first FFO bias/control line we set immediately results in a good voltage. I didn't account for that.
The macro now had the problem that after a number of successful scans with not enough points, results came up where the Vstart couldn't be found. A programming error caused the FFO bias to stay constant, when it actually should be lowered by a small step. After fixing that, it turned out that the current stepsize wasn't good enough either. This is because if you keep the FFO CL the same but lower the FFO bias, you get a slightly lower FFO voltage as a result. Thus with each FFO bias step down, the stepsize should be increased to find the FFO Vstart again.
That stepsize for the FFO CL is 0.02 mA. This amount covers a resulting range from Vzero to Vstart of roughly 0.8 mV. (The point Vzero is the FFO voltage that results from setting the last measured FFO CL.) Each decrease of the FFO bias causes a decrease in Vzero of, on average, 0.001236 mV. The area thus increases by 0.1235 percent, and the stepsize should be increased by the same percentage. Alternatively, we could increase the number of steps that the macro does -- but since that makes the routine slower, I prefer the former solution.
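Just to make the bookkeeping concrete, here's a minimal Python sketch of that scaling. The variable names are made up for illustration; the 0.1235 percent figure is the one measured above.

# Sketch: grow the FFO CL step size a little with every FFO bias decrement,
# so the sweep keeps covering the slowly widening Vzero-to-Vstart area.
CL_STEP_INITIAL_MA = 0.02        # initial FFO CL step size in mA
GROWTH_PER_BIAS_STEP = 0.001235  # +0.1235 percent per FFO bias decrement

def cl_step_after(n_bias_decrements):
    """FFO CL step size (in mA) after a number of FFO bias decrements."""
    return CL_STEP_INITIAL_MA * (1.0 + GROWTH_PER_BIAS_STEP) ** n_bias_decrements

for n in range(6):
    print("after %d bias decrements: CL step = %.6f mA" % (n, cl_step_after(n)))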
After making the changes, the routine still can't find Vstart after a couple of FFO bias decrements. Analyzing the area that's covered from Vzero to Vstart, but this time in raw values, it seems that it increases by about 3 bitsteps with every FFO bias decrement. This is a lot, since the actual area between Vstart and Vstop is only 3 bitsteps in and of itself! Since it's all too easy to make the step-up size so large that it passes over Vstop, I increase the maximum number of steps in the macro, from 32 to 128.
In the first test I run, this number turns out to be irrelevant. The procedure bumps into two Fiske steps. The first is bridged by the increasing FFO CL, the second isn't. The macro thus returns that the scan steps are too small, when in actuality we just can't cross the gap over the second Fiske step, and thus the target Vstop is never reached. Should we increase the scan step? I don't think so -- that'll make the macro give fewer results between Vstart and Vstop. Should we increase the number of scan steps? I don't think so either -- it's not a goal in itself to get to Vstop.
The answer lies in the analysis: this result of the macro should not be treated as an error. It's a situation where there's enough data and we should analyze it instead. However, it could still be a genuine error when it occurs in the first scan after the setup.
After fixing this came the case where a programming error kept causing the analysis to fail, saying that every detected point was to the left. Then another where the voltage range around the center was a factor of ten too small.
What's still happening is that with every FFO bias decrement, the Vzero-Vstart range gets larger and larger. The handling of the FFO CL needs another look; probably the previously found good value should be copied and passed to the macro.
As mentioned before, we need an algorithm to find the correct setting in a cloud of points.
I've gotten an explanation on how Andrey (the software developer of the Russian team) does this and I've tried to describe it using our macro, which should be faster in flight.
What does their routine do? It takes the FFO voltage upper and lower limit (Vmax and Vmin) as well as an FFO bias current and FFO control line (CL) current. The routine then starts. It's divided into two parts: the initial setup and the fine-tuning.
The main thing to remember for the initial setup is that when you set the FFO bias current and CL current, you don't know the resulting FFO voltage. You'll have to measure it back to know how you're doing. The main thing to remember for the fine-tuning is that we want to find an FFO bias and CL current that result in an FFO voltage that's right between Fiske steps, since there the sensor is less sensitive to temperature changes.
For the first part, the routine lowers Vmin by ten percent. The FFO bias current is then set and the FFO CL is swept, each time reading back the FFO bias voltage. If the bias voltage falls between Vmin and Vmax, or maybe slightly beyond them, the bias current has a good value. If however the voltage "jumps", i.e. a value is read back that is well outside Vmax, the sweep is stopped and has failed. These jumps occur because above a certain point on the I/V curve of that particular FFO, the same current results in a wildly different voltage.
Upon failure, which is quite likely for the first couple of sweeps, the FFO bias is set lower and a new sweep is started. When a sweep succeeds, the fine-tuning starts.
When fine-tuning starts, we know the FFO bias current to set, as well as the FFO CL lower and upper limit that result in the FFO voltages Vmin and Vmax. What is done now is to request, say, 8 points in this space and then see if the gap between any of these points is so big that we can safely say it's a Fiske step. If so, the FFO bias current is lowered and another sweep is done.
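To summarize the explanation, here is a rough Python sketch of the two phases. This is not Andrey's actual code: set_ffo_bias(), set_ffo_cl() and read_ffo_voltage() are assumed hardware wrappers that don't exist under these names, and the numbers are illustrative only.

# Phase 1: find an FFO bias for which a CL sweep stays inside the voltage window.
def initial_setup(v_min, v_max, bias_start, bias_step, cl_values):
    v_min_low = 0.9 * v_min              # the routine lowers Vmin by ten percent
    bias = bias_start
    while bias > 0:
        set_ffo_bias(bias)
        points = []
        for cl in cl_values:
            set_ffo_cl(cl)
            v = read_ffo_voltage()
            if v > v_max:                # the voltage "jumped": this sweep failed
                points = []
                break
            if v >= v_min_low:
                points.append((cl, v))
        if points:                       # sweep stayed in range: bias is good
            return bias, points
        bias -= bias_step                # lower the FFO bias and try again
    raise RuntimeError("no usable FFO bias found")

# Phase 2: request a handful of points and check for a suspiciously large gap
# (a Fiske step) between them.
def fine_tune_ok(bias, cl_low, cl_high, n_points=8, max_gap=2.5e-6):
    set_ffo_bias(bias)
    step = (cl_high - cl_low) / (n_points - 1)
    voltages = []
    for i in range(n_points):
        set_ffo_cl(cl_low + i * step)
        voltages.append(read_ffo_voltage())
    gaps = [b - a for a, b in zip(voltages, voltages[1:])]
    return max(gaps) < max_gap           # False means: lower the bias, sweep again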
Below is a screenshot (actually a photo) of Irtecon, Andrey's implementation:
Now on to our own problem: recreating the above with our macro.
The macro basically sets the FFO bias current and then sweeps over the FFO CL current. It reads out the FFO voltage with each step of the sweep. Given Vmin and Vmax, it will bail out if the FFO voltage isn't between the limits. I thought I could just pass 0x0000 and 0xFFFF as the limits, so these won't be checked and the macro would just return all values. But alas, that won't work. During the macro's setup part, it'll try to make small steps toward Vmin, and it'll bail out as well if that isn't reached. However, when Vmin is found, a sweep is made.
With the macro we can reproduce the same two phases that were explained above. We'll start with the user giving the example parameters.
For the initial setup, using the macro should be something as follows:
The macro can return nine result codes of which two are actually successful. The first few times, I got a 0 which means that the FFO CL setting is below FFO bias.
For the fine-tuning:
To debug, we need good visualization. The current visualization isn't good enough for this particular purpose, though. We can draw a plot and then redraw it, including the results of the macro. What we can't do is clear out the macro results and begin again with the original plot. So that's on the to-do list as well.
Previously, I've written about meld, a very nice graphical diff tool.
However, meld requires a graphical environment and that isn't always available. Vim, however, is pretty much always available and has a built-in diff mode.
Just start vim as follows:
$ vimdiff file1 file2
Screenshot:
Visually it's pretty self-explanatory. Red parts are not in the other file; blue parts are empty filler where the other file has a red part. What you probably want to know is how to quickly shift differences between the files.
dp | The difference is pushed to the other file. Use this when the cursor is on a red part.
do | The difference is obtained from the other file. Use this in a blue part.
There are many options, check out the vim documentation on diff mode.
In the previous post, I talked about how I was coding up a wizard-type bunch of screens, using the MVC pattern as implemented by PEAR's HTML_QuickForm_Controller class. Each screen basically has three actions: 'display screen', 'previous screen' and 'next screen'. The last screen has the action 'finish'.
You have to be careful with this approach; the number of actions probably isn't limited to these at all. Consider the following example:
Suppose the user goes back from screen three to screen two. The 'display' action is called and, using the choice from the first screen, the number of apples is recalculated. But that's not what the user wants; he just wishes to change the percentage of apples.
So what we need is an action that's derived from 'next', let's call it 'calculate'. This action then checks whether there was a previous choice, whether this differs from the current choice and if so, does a new calculation. The result is saved in the session. We then do whatever 'next' normally does.
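In Python-flavoured form (the real thing would be a small PHP action class registered with the controller; all names below are made up), the 'calculate' action boils down to:

# Sketch of what the 'calculate' action does before behaving like 'next'.
def action_calculate(session, current_choice):
    previous_choice = session.get("choice")
    if previous_choice != current_choice:
        session["choice"] = current_choice
        session["apples"] = recalculate(current_choice)  # result saved in the session
    action_next(session)  # then do whatever 'next' normally does

def recalculate(choice):
    return 100 * choice   # placeholder calculation

def action_next(session):
    pass                  # placeholder for the normal 'next' behaviour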
I'm in the middle of coding up a multi-page wizard-style bunch of PHP pages. The MVC pattern is implicit herein. It looked like it'd be useful to use the PEAR class HTML_QuickForm_Controller. In combination with HTML_QuickForm for the model, this is a pretty powerful business. As the view, the PEAR package HTML_Template_IT is used.
However, it turns out that debugging can be quite painful. Because the controller and the view part are so loosely coupled, it can be troublesome when it doesn't work.
I defined the 'cancel' action besides the default actions like 'previous' and 'next'. The related class, called when the button is pressed, clears all values from the session.
The cancel button didn't work; instead it just submitted and the controller served up the next step in the wizard. The difference turned out to be as follows:
$submit[] =& $this->createElement('submit', $this->getButtonName('cancel'), "Cancel");
$submit[] =& $this->createElement('submit', $this->getButtonName('next'), t('Next'));
$this->addGroup($submit, "submit");
That last line should be:
$this->addGroup($submit, 'submit', '', ' ', false);
It's really about that last parameter, the false boolean. This generates a button with name _qf_domregform_cancel instead of submit[_qf_domregform_cancel]. Why the controller interprets this differently, I don't know.
But I do know it took a lot of time to find the culprit. Basically what I did, was take the example code, and adapt one page step-by-step to the page that I coded for the website.
That's not my idea of debugging, but I'm not sure how else the bug could've been narrowed down.
Here's another one. In my wizard, the third step is to choose how DNS is set up. It's a radio button that lets the user choose between 'standard' and 'advanced'. My first attempt looked like this:
$dns1 = new HTML_QuickForm_radio(null, 's', 'Standard');
$dns2 = new HTML_QuickForm_radio(null, 'a', 'Advanced');
$this->addGroup(array($dns1, $dns2), 'DNS_server', "Choose setting for DNS server");
The problem with the above code is that it doesn't remember its setting when the user goes back from step four to step three. The code below will correctly do this:
$radio[] = &$this->createElement('radio', null, null, 's', 'Standard');
$radio[] = &$this->createElement('radio', null, null, 'a', 'Advanced');
$this->addGroup($radio, 'DNS_server', "Choose setting for DNS server");
Now what is the difference? It can't be seen in the HTML source, so I looked at the PHP code but I couldn't see the difference in the five minutes I checked.
My point to all this is that there is more than one way to do the job, but if it's not the correct one, it silently fails without any message.
That makes a developer's job harder.
We did some testing of the new Fiske step software yesterday. To see how the device (the SIR chip) behaves, we first ran a plot where we set the FFO bias current and read out the FFO bias voltage.
Some plots of an area with Fiske steps, where the Y axis is the FFO bias current and the X axis is the FFO voltage:
If we make a much finer scan, it looks like this:
What is basically seen, is a cloud of points that is formed by setting the bias current on the FFO and then reading out the voltage. Each line means a different current setting on the FFO control line (FFO CL). (For an explanation of the SIR including FFO control line, see entry 2006-04-24 SIRs for dummies).
Note that we've scanned for a limited number of control lines.
Now if we want to have the FFO beam at a certain frequency, we calculate which voltage we need by dividing the frequency by the Josephson constant. To make it easy to understand, say we want to find a Fiske step at 0.7 mV.
Some research was done by the Russian researchers and what came out is that the procedure to find a good Fiske step must be done by setting the FFO bias, then proceeding to increase the FFO CL. If no good Fiske step is found, the FFO bias must be lowered, and again, the FFO CL must be reset and increased again until a certain point.
So there are two loops going on; we loop the FFO CL from high to low and get a bunch of value pairs -- FFO bias voltage and FFO CL current. For each loop, we lower the FFO bias current. Basically, you get a horizontal cut from the plots seen above.
You could just follow the lines that are drawn above, each of which connects points with one FFO CL setting. If you did that, you'd get results with the same FFO CL setting. This might seem logical when looking at the plots above; however, we follow the advice of the Russian team on this point.
Let's see if we can find some numbers that a Fiske step procedure should use. I've graphically extrapolated picture 1 as follows:
The blue lines are extrapolated clouds of points. The green line is a possible combination of FFO bias current and FFO voltage. The fat green line could be a possible scan area where we want to find a good Fiske step.
What you can see is that if you start looking at 32 mA for a good Fiske step, you will keep scanning down until you hit 27 mA. If you had begun at 32.5 mA, you would immediately have hit a good point. Scans should thus cover at least 5.5 mA.
However, there's another input we must keep track of: the setting of the FFO control line. I haven't displayed the plot here, but for each milliampere change in the FFO bias, we upped the FFO CL by 1.2 mA.
Right, so how do we know when to stop the Fiske step procedure? For that we'll have to look at the second plot again and see how wide those clouds are. Roughly it looks like they're 2.5 uV (yeah, that's microvolts) wide. If we do a sweep of at least 10 settings of the FFO CL current, making sure the resulting FFO voltages cover a width of 5 uV, we can see whether the points that come out are centered around the target voltage (and thus frequency).
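A small Python sketch of that check (function and variable names are made up; the 5 uV window is the estimate from above):

# Given the FFO voltages (in uV) measured during one CL sweep, check whether
# they straddle the target voltage and stay within the 5 uV window.
def centered_on_target(voltages_uv, target_uv, window_uv=5.0):
    lo, hi = min(voltages_uv), max(voltages_uv)
    return (hi - lo) <= window_uv and lo <= target_uv <= hi

# Example: one sweep around a target of 700.0 uV (0.7 mV)
print(centered_on_target([697.8, 698.9, 700.1, 701.4, 702.2], 700.0))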
Some questions remain:
Those questions might be answered as follows:
If you want to configure the SSH daemon on a remote machine, you probably don't want to risk locking yourself out. Nowadays, properly configured machines can restart the SSH daemon while retaining the running connections. That's great, but if you don't want to rely on that, read on.
We want to start a separate, temporary SSH daemon. Dropbear is great for that. We will do just enough to run a temporary copy for the duration of configuring the regular SSH daemon installation. We won't install Dropbear permanently.
Download the latest release on the remote machine. In a user account, unpack and compile it:
remoteserver$ tar xfz dropbear-0.50.tar.gz
remoteserver$ cd dropbear-0.50
remoteserver$ ./configure
remoteserver$ make
Now generate a key for the server:
remoteserver$ ./dropbearkey -t rsa -f key.rsa
The server can be started and we'll use some high port so as not to get in the way of other services. Port 31337 is used below:
remoteserver$ sudo ./dropbear -p 31337 -r ./key.rsa
From your local machine, you should now be able to reach the server:
localmachine$ ssh -p 31337 remoteserver
Log in and configure the regularly installed SSH daemon. Restart it, do whatever you like. When you're done, exit and log in again as you'd normally do (i.e. not using the dropbear server but the regularly installed SSH server). If all is successful, kill the dropbear server and wipe out the temporarily compiled copy:
remoteserver$ sudo killall dropbear
remoteserver$ rm -rf dropbear-0.50
Note: it's not necessary to start dropbear with sudo. However, dropbear then can't read the root-only files needed for password authentication. The only authentication possible in that case is key-based, with a key in ~/.ssh/authorized_keys.
I've previously explained the SIR chip, so I'll keep it short and say that currently, we're implementing a procedure to automate the setting of the frequency with which the FFO (Flux flow oscillator) beams.
This frequency is determined by the voltage that's set on the FFO. If you multiply that voltage with the Josephson constant (483 597.9 * 10^9 Hz V^-1), you get the frequency.
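As a quick worked example (the 0.7 mV value is just an illustration):

# Convert an FFO voltage to the resulting frequency via the Josephson constant.
K_J = 483597.9e9       # Josephson constant in Hz/V

v_ffo = 0.7e-3         # an FFO voltage of 0.7 mV
print("%.1f GHz" % (v_ffo * K_J / 1e9))   # prints roughly 338.5 GHz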
But we can't set that voltage directly. We first set the FFO bias current. We then measure the resulting voltage to see if we're on the right track.
There are two circumstances here. On the one hand we have a Josephson junction (a special superconducting circuit); the SIR chip's temperature is brought down to about 2 kelvin. On the other hand, a magnetic field envelops the FFO. That is due to the control line, a conducting line which is etched below the FFO on the SIR chip. When we set a current on the control line, a magnetic field results.
When you combine these two circumstances at a certain FFO bias voltage (and thus a certain frequency), Fiske steps can occur. From what I've gathered so far, a Fiske step is a certain voltage range that cannot occur when you set a certain current and a certain magnetic flux on a circuit. 1)
So my electronics colleague created a macro, which is a list of instructions for the Telis FPGA. This procedure does the following:
Lower boundary loop:
Upper boundary loop:
We now have a set of points. These must be looked at to see whether we need to choose a new value for the FFO control line and whether the procedure must be started again.
Below is the output of the oscilloscope, where the X-axis displays time and the Y-axis displays the FFO voltage. This is a test situation where a simple resistance is used instead of the FFO.
Footnotes:
1) Problematic in this case is that there is some hysteresis. If you lower the FFO control line, other Fiske steps occur. If you raise it again to the previous level, the Fiske steps are not the same anymore. So you'll have to steadily work your way down, assessing the merits of each control line setting and stopping when you think you've reached a correct setting.
If you're using PHP, you probably use or at least know of the PEAR classes at http://pear.php.net/. It's a pretty large set of classes providing lots of standard functionality. Amongst these is the Auth class, which gives you a perfect start if you need username/password screens for your application. What this class is missing is a way to add salt to passwords. Use the simple class below to add this.
<?php
include_once 'Auth.php';
include_once 'config.php';

class MyAuth extends Auth
{
    function assignData()
    {
        // $mysalt is defined in config.php
        global $mysalt;

        parent::assignData();
        $this->password = $mysalt . $this->password;
    }
}
?>
Save the above code in a file called MyAuth.php and instead of including Auth in your login script, use MyAuth. Also create a file called config.php and add the variable $mysalt. It should contain two or three characters, something like:
$mysalt = 'wd3';
This salt should be prepended to all passwords when you save them in the database. This code is public domain.
To understand the usefulness of salt, see Wikipedia's entry on password salt.
Recently I installed RedHat AS 5 on a PowerEdge 860. For management, we use Zabbix; if you know Nagios, then think of Zabbix as a more user-friendly replacement. I figured out how to configure Zabbix to read out fan speed, board temperature, etc.
To read out IPMI sensor values with Zabbix (http://www.zabbix.org/) take the following steps:
On the zabbix server, use the web frontend (menu Configuration -> Items) to create a new item "ipmi.planar_temp" of type "ZABBIX Agent (Active)". Type of value is Numeric, unit is C for Celsius.
Go to the zabbix agent machine. Give the zabbix user sudo rights (as root, execute "visudo") to execute ipmitool as root, without a password.
Example line to add:
zabbix ALL=(ALL) NOPASSWD: /usr/bin/ipmitool sdr
Edit the /etc/zabbix/zabbix_agentd.conf file and add the following line (it's one single line):
UserParameter=ipmi.planar_temp,sudo ipmitool sdr | grep "Planar Temp" | awk '{print $4}'
Restart the agent:
# service zabbix_agentd restart
Go to the zabbix server. Restart it (I don't know if this is necessary):
# service zabbix_server restart
Go to the zabbix server web frontend, menu Monitoring -> Latest Data.
Scroll down. The following line should be shown after a minute or so:
ipmi.planar_temp 22 Jun 08:19:25 26 C
At the end of the line, there's a hyperlink to a pretty stripchart.
You can add new lines as you wish, repeating the steps above. The PE860 doesn't show a whole lot of IPMI information. For interested parties, here is my zabbix_agentd.conf (a plain text file):
[zabbix_agentd.conf]
Note that last line. Basically I count all lines that do NOT end in either 'ok' or 'ns'.
Also note that this is a test setup. The sudo construction could be tighter.
A colleague of mine did the following demo:
http://www.youtube.com/watch?v=PbMH5FBVbl4
Here is the original posting:
http://www.pouet.net/user.php?who=3857
It's all handcoded assembly. An integrated video player runs a realtime straight-line-detection algorithm on every frame. The 3D-modelled car is also "real" 3D, with a viewport.
In the previous entry, I talked about correcting offsets when measuring with the FFO board. We've also made an improvement in the measurement itself. The ADC has lots of options for measuring, amongst them one that takes more time per measurement. The ADC always takes multiple samples and then takes the mean (this might be a bit simplified). When taking more time, more samples are taken, which results in a more reliable mean. When plotted, the difference was really noticeable:
The jagged line is the fast measurement mode, the smooth line is the mode where more time is taken. It's a tradeoff naturally.
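A toy Python illustration of that tradeoff (the numbers are made up): averaging more samples makes the mean less noisy, at the cost of a longer measurement.

import random

def measure(true_value, noise=1.0, n_samples=1):
    """Simulate one ADC readout that internally averages n_samples."""
    samples = [true_value + random.gauss(0, noise) for _ in range(n_samples)]
    return sum(samples) / n_samples

fast = [measure(10.0, n_samples=1) for _ in range(5)]    # jagged line
slow = [measure(10.0, n_samples=64) for _ in range(5)]   # smooth line
print([round(v, 3) for v in fast])
print([round(v, 3) for v in slow])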
This week was a very rewarding week: we squashed a bug which seemed to elude the very best minds -- those of the Telis team.
The problem was that when measuring a voltage, we read out the wrong value. We're reading very accurately, in the microvolt (uV) scale and this is done with an electronics board which incorporates an ADC. When we made sure that no current was running on the measured circuit, we tried to measure zero but we actually got -14 uV. On this scale that isn't something to worry about; besides the ADC there are more electronic components on the board and these can all account for a slight offset. Hell, on this scale even the temperature can play a role.
However, this ADC has a lot of options and one of them is a procedure to measure an offset and store it in a register. Further reads will then take this offset into account. The electronics guy had created a script for this purpose. I had incorporated the script into a nice Perl module with a button in the user interface named 'Measure Offsets'. I've previously described this procedure in 2006-10-20 Measuring FFO offsets.
So, we ran the procedure and did a new measurement. The offset changed, but didn't disappear. Hmm, strange. Now we measured -7 uV. Weird!
First we tried the usual stuff, to make sure this faulty reading was repeatable. Turn off electronics, disconnect cables, reconnect, turn on again. Trying an older version of the software. Completely reproducible. Then it became time to start thinking.
We tried to determine the location of the problem. Is it the hardware, the software, or the hardware instructions loaded into the flash located on the electronics board?
The measurement is run from the FFO board:
Our electronics guy tried the spare FFO board. Fully reproducible faulty behavior. So, it's not the hardware. Then it must be the software, right?
We reran the old script from which the Measure Offsets Perl module was created. This script ran the offset procedure for the ADC and then did some measurements. These checked out fine, printing zero uV after the offset procedure. However, if we then walked to the main software screen and read out the value, it had the -7 uV offset again. Can we rule out the software then?
We compared the Perl module and the original script line by line. These were the same. We also checked what each line did. They were created some time ago and we wanted to make sure everything still made sense.
Then we realized that there was a difference between a readout in the original Measure Offsets script and a readout in the main software screen. The second one uses a macro, the hardware instructions loaded into the flash located on the electronics board. This macro first puts the ADC into a high-resolution mode before making the measurement.
So we changed the Measure Offsets procedure to first set the ADC into high-resolution mode before doing the offset procedure. Then we reran the measurement and waited with fingers crossed... and bingo! That was the problem. When we reran the plot, the following picture appeared:
The line on the left is the measurement before we ran the offsets procedure. The line on the right is the corrected measurement. (Note that the lines aren't as jagged as in the first plot -- that is because the ADC was set to a higher accuracy, which takes more time per measurement.)
Turns out it wasn't a hardware problem. It wasn't a software problem, either. It even wasn't really a problem in the macros. We just didn't use the offset options of the ADC in the right way. It was fully tested, but not in the exact same way measurements were taken later.
This type of bug had evaded unit testing and could only be caught with good testing in the field. Can't beat that kind of testing.
This week I had the situation where I was asked to come to another office (in Groningen) and do some testing and fixing of the software. The revision running there was revision 590, while I was in the middle of an integration effort, going up to release 605. I couldn't bring the current broken code, but some work needed to be done at the Groningen office, with revision 590.
(Note: we usually install a revision including source and build it on the spot, so the revision 590 source was present in Groningen office).
So, I went there and did some testing, fixed problems, etc. When I came back, bringing the source with me, I was in the classic situation where you've started hacking and decide afterwards that creating a new branch would've been a good idea. To do this, first create a patch of all your changes:
$ cd your/current/project/directory
$ svn diff > ~/hackwork.patch
Then find out what revision you are hacking:
$ svnversion .
590M
Now create a separate branch of the original version:
$ svn copy http://subversion.company.com/svn/telis/tuce \
      http://subversion.company.com/svn/telis/tuce-branch-gron \
      -m "Creating separate branch for work outside integration efforts"
Committed revision 606.
Go to another directory, in my case ~/workspace:
$ cd ~/workspace
$ svn co http://subversion/svn/telis/tuce-branch-gron
$ cd tuce-branch-gron
And now integrate your changes again, and commit them in the branch:
$ patch -p 0 < ~/hackwork.patch
patching file tdb/mysql-telis.sql
patching file client/python/plotffo.py
... lines removed ...
$ svn commit -m "Fixes made on colors in FFO plot, conversion housekeeping \
      macro different, conversion FFO plot corrected"
Sending client/perl/lib/FFO_sweep_macro.pm
Sending client/perl/lib/PLL_sweep_macro.pm
... lines removed ...
Sending tdb/mysql-telis.sql
Transmitting file data ...............
Committed revision 609.
Voilà, a separate branch is created.
One of the current problems in the project is the battery pack. According to the electronics man it's a small problem, but he explained it as follows: the battery pack consists of a bunch of lithium-ion non-rechargeable batteries, custom made by an American company. The batteries come with a specification that they deliver a certain voltage at a certain temperature. The spec sheet shows a curve; the lower the temperature, the lower the voltage. At room temperature the batteries give around 3.8 V, but the Telis electronics have to operate on a balloon. The balloon's trajectory will take it through parts of the atmosphere where temperatures between -19 and -70 degrees Celsius can occur.
This is a much broader range than what's usual for electronics on a satellite; once those are packed in insulation, the temperature range is quite small.
The problem now is that tests show the batteries don't deliver up to spec at temperatures around -40 degrees Celsius and possibly lower. The electronics man thought up a solution, which involves a separate battery pack. Together with a temperature sensor in a control loop, the second pack makes certain that the main battery pack (feeding the electronics) is kept at the right temperature. It must now be checked whether there is enough room above the electronics casing in the frame that's carried by the balloon.