Planet Linux Australia
MrBayes is a program for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models.
Download, extract. Note that the developers have produced a tarbomb, which requires creating a separate directory before extracting. This has been raised as a bug.
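Since the note above describes the general failure mode rather than the exact archive contents, here is a sketch of safe tarbomb handling, simulated with a toy archive (all file and directory names below are placeholders, not the real MrBayes tarball layout):

```shell
# A "tarbomb" scatters its files into the current directory instead of
# extracting into a versioned subdirectory. Simulate one, then extract
# it the safe way.
set -e
work=$(mktemp -d)
cd "$work"

# Build a toy tarbomb: its members sit at the archive root.
mkdir src
touch src/Makefile src/mb.c
tar -czf mrbayes.tar.gz -C src Makefile mb.c

# Safe extraction: create the target directory first, extract inside it.
mkdir mrbayes-extracted
tar -xzf mrbayes.tar.gz -C mrbayes-extracted
ls mrbayes-extracted
```

The same mkdir-then-`tar -C` pattern works for any tarbomb, whatever the real tarball is called.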
Note that more recent versions of MrBayes make much better use of autoconfiguration tools.
- The asylum debate we have to have: not as simple as you may think http://t.co/K81q9GFR5M #auspol 10:42:00, 2015-05-23
- A tech boom aimed at the few instead of the world http://t.co/8Q4U4vhOSv 20:32:05, 2015-05-22
- Powerwall: Solar energy storage batteries ‘set to transform Australian electricity industry’ http://t.co/uwoijLZPek 18:27:15, 2015-05-22
- We should be attacking institutionalised child abuse; instead we put people in concentration camps http://t.co/nWSKKFfq4e 10:42:00, 2015-05-22
- Australia’s digital economy is growing twice as fast as the rest of the economy, could be worth $139b by 2020 http://t.co/3lpGZqBNjO 16:33:01, 2015-05-21
- Not content with creating #climatechange, coal industry exploits Ebola for $ http://t.co/0iQgzQ33Ag 16:33:05, 2015-05-20
- Studies of more than 1.25 million children shows NO link between autism and vaccines http://t.co/ANprbaTYqJ 14:19:01, 2015-05-20
- Australia’s fictitious “budget emergency” in perspective vs other countries via a handy #infographic http://t.co/Iez984tnvH #auspol 10:42:01, 2015-05-20
- The amount Australians spend on #education has more than doubled in the last nine years: ABS http://t.co/grdf6t7WBf http://t.co/405cG315Kc 20:32:09, 2015-05-19
- Achieving a universal level of basic skills for every school child would produce massive economic gains: report http://t.co/Ptfxe6IpS8 18:27:00, 2015-05-19
lol, I wouldn’t [1].

[1] If I absolutely had to, I wouldn’t do it the same as Ryan.

WordPress isn’t (and will never be) Linux
ZYpp is the dependency solver used by OpenSUSE (a PHP port of it powers Composer); it was born of the need to solve complex dependency trees. The good news is that WordPress doesn’t have the same problem, and we shouldn’t create that problem for ourselves.
One of the most common-yet-complex issues is determining how to handle different version requirements by different packages. If My Amazing Plugin requires WP-API 1.9, but Your Wonderful Plugin requires WP-API 2.0, we have a problem. There are two ways to solve it – Windows solves it by installing multiple versions of the dependency, and loading the correct version for each package. This isn’t a particularly viable option in PHP, because trying to load two different versions of the same code in the same process is not my idea of a fun time.
The second option, which ZYpp implements, is to try to find a mutually compatible version of the dependency that each plugin can use. The biggest problem with this method is that it can’t always find a solution. If there’s no compatible way of installing the libraries, it has to throw the decision back to the user. This isn’t a viable option, as 99.999% of users wouldn’t be able to tell the difference between WP-API versions 1.9 and 2.0, and nor should they.
But there’s a third option.

Technical Debt as a Service
Code libraries are, by their nature, developer facing. A user never really needs to know that they exist, in the same way that they don’t need to know about WP_Query. In WordPress Core, we strive for (and often achieve) 100% backwards compatibility between major versions. If we were going to implement plugin dependencies, I would make it a requirement that the code libraries shoulder the same burden: don’t make a user choose between upgrades, just always keep the code backwards compatible. If you need to make architectural changes, include a backwards compatible shim to keep things working nicely.
This intentionally moves the burden of upgrading to the developer, rather than the end user.

What Version?
If we’re going to require library developers to maintain backwards compatibility, we can do away with version requirements (and thus, removing the need for a dependency solver). If a plugin needs a library, it can just specify the library slug.

Better Living Through Auto Updates
If we were to implement plugin dependencies, I think it’d be a great place to introduce auto updates enabled by default. There’s no existing architecture for us to take into account, so we can have this use the current WordPress best practices. On top of that, it’s a step towards enabling auto updates for all Core releases, and it encourages developers to create backwards compatible libraries, because their library will almost certainly be updated before a plugin using it is.

Let’s Wrap This Up
I’m still not convinced plugin dependencies is a good thing to put in Core – it introduces significant complexities to plugin updates, as well as adding another dependency on WordPress.org to Core. But it’s definitely a conversation worth having.
Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.
To check memory, I boot into memtest86+ from the grub menu and let it run overnight.
Then I check the hard drives using:

smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX

Configuration

apt-get install etckeeper git sudo vim
To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:
- turn off daily auto-commits
- turn off auto-commits before package installs
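If memory serves, those two changes correspond to the following variables in /etc/etckeeper/etckeeper.conf (treat the exact names as something to verify against your installed version):

```
# Don't commit /etc changes automatically every day.
AVOID_DAILY_AUTOCOMMITS=1

# Don't auto-commit pending changes before apt installs packages.
AVOID_COMMIT_BEFORE_INSTALL=1
```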
To get more control over the various packages I install, I change the default debconf level to medium:

dpkg-reconfigure debconf
Since I use vim for all of my configuration file editing, I make it the default editor:

update-alternatives --config editor

ssh

apt-get install openssh-server mosh fail2ban
Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.
I also ensure that the locale I use is available on the server by adding it to the list of generated locales:

dpkg-reconfigure locales
Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):

HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms email@example.com,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers firstname.lastname@example.org,aes256-ctr,aes192-ctr,aes128-ctr
MACs email@example.com,firstname.lastname@example.org,email@example.com,hmac-sha2-512,hmac-sha2-256,firstname.lastname@example.org
UsePrivilegeSeparation sandbox
AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no
AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser
or the following for wheezy servers:

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
On those servers where I need duplicity/paramiko to work, I also add the following:

KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1
Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server.
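That first-line removal can be scripted with sed; here is the idea demonstrated on a throwaway copy, since the real file (/etc/logcheck/ignore.d.server/ssh) needs root to edit and its exact contents vary by release (the sample patterns below are made up):

```shell
# Stand-in for /etc/logcheck/ignore.d.server/ssh: the first line is the
# "Accepted" pattern that hides successful logins from logcheck reports.
f=$(mktemp)
printf '%s\n' \
    'sshd\[[0-9]+\]: Accepted (publickey|password)' \
    'sshd\[[0-9]+\]: Connection closed' > "$f"

# Delete the first line so successful logins are reported again.
sed -i '1d' "$f"
cat "$f"
```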
I also create a new group and add the users that need ssh access to it:

addgroup sshuser
adduser francois sshuser
and add a timeout for root sessions by putting this in /root/.bash_profile:

TMOUT=600

Security checks

apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper
apt-get remove john john-data rpcbind tripwire
Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log
while ensuring that the apache logfiles are readable by logcheck:

chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*
and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:

create 644 root adm
I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):

INTRO=0
FQDN=0
Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:

Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd|mosh-server|ntpd'

General hardening

apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra
While the harden packages are configuration-free, AppArmor must be manually enabled:

perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub

Entropy and timekeeping

apt-get install haveged rng-tools ntp
To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.

Preventing mistakes

apt-get install molly-guard safe-rm sl
These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.
In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:

#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true

to send me a notification whenever a kernel update requires a reboot to take effect.

Handy utilities

apt-get install renameutils atool iotop sysstat lsof mtr-tiny
Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.

Apache configuration

apt-get install apache2-mpm-event
While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.
I enable these in /etc/apache2/conf.d/security:

<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off
and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default.
I also create a new /etc/apache2/conf.d/servername which contains:

ServerName machine_hostname

Mail

apt-get install postfix
Configuring mail properly is tricky but the following has worked for me.
In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.
Change the following in /etc/postfix/main.cf:

inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3
Set the following aliases in /etc/aliases:
- set francois as the destination of root emails
- set an external email address for francois
- set root as the destination for www-data emails
before running newaliases to update the aliases database.
Create a new cronjob (/etc/cron.hourly/checkmail):

#!/bin/sh
ls /var/mail
to ensure that email doesn't accumulate unmonitored on this box.
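This works because cron mails the job's owner whenever a job prints anything: ls is silent for an empty spool and prints mailbox names otherwise. A simulation against a temporary directory standing in for /var/mail:

```shell
# cron emails any output a job produces, so "ls" on the mail spool
# generates a notification exactly when mailbox files exist.
spool=$(mktemp -d)   # stand-in for /var/mail

out=$(ls "$spool")
echo "empty spool -> output: '$out' (no cron email)"

touch "$spool/root"  # a mailbox appears
out=$(ls "$spool")
echo "mailbox present -> output: '$out' (cron sends email)"
```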
Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.

Network tuning
To reduce the server's contribution to bufferbloat I change the default kernel queueing discipline (jessie or later) by putting the following in /etc/sysctl.conf:

net.core.default_qdisc=fq_codel
GAMESS (General Atomic and Molecular Electronic Structure System) is a general ab initio quantum chemistry package. You will need to agree to the license prior to download, which will provide a link to gamess-current.tar.gz.
Download and extract, then load the environment modules for atlas and gcc:
module load atlas/3.10.2
module load gcc/4.9.1
Craige McWhirter: How To Resolve a Volume is Busy Error on Cinder With Ceph Block Storage
There are a number of reasons why a volume may be reported by Ceph as busy, however the most common reason in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.
If you look at the volume in Cinder, the status is usually "available" and the record looks in order. When you check Ceph, you'll see that the volume still exists there too:

% cinder show f8867d43-bc82-404e-bcf5-6d345c32269e | grep status
| status | available |
# rbd -p my.ceph.cinder.pool ls | grep f8867d43-bc82-404e-bcf5-6d345c32269e
volume-f8867d43-bc82-404e-bcf5-6d345c32269e
Perhaps there's a lock on this volume. Let's check for locks and then remove them if we find one:

# rbd lock list my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e
If there are any locks on the volume, you can use lock remove with the id and locker from the previous command to delete the lock:

# rbd lock remove <image-name> <id> <locker>
What if there are no locks on the volume but you're still unable to delete it from either Cinder or Ceph? Let's check for snapshots:

# rbd -p my.ceph.cinder.pool snap ls volume-f8867d43-bc82-404e-bcf5-6d345c32269e
SNAPID NAME                                          SIZE
  2072 snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3 25600 MB
When you attempt to delete that snapshot, you will get the following:

# rbd snap rm my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3
rbd: snapshot 'snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3' is protected from removal.
2015-05-22 01:21:52.504966 7f864f71c880 -1 librbd: removing snapshot from header failed: (16) Device or resource busy
This reveals that it was the snapshot that was busy and locked all along.
Now we need to unprotect the snapshot:

# rbd snap unprotect my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3
You should now be able to delete the volume and its snapshot via Cinder.
JAGS is Just Another Gibbs Sampler. It is a program for analysis of Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation not wholly unlike BUGS.
tar xvf JAGS-3.4.0.tar.gz
mv JAGS-3.4.0 jags-3.4.0
The config script takes the following form:
install=$(basename $(pwd) | sed 's%-%/%')
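That substitution rewrites the first dash in the directory name into a slash, turning an app-version directory into an app/version pair suitable for a /usr/local/<app>/<version> prefix. For example:

```shell
# The sed expression from the config script, applied to the directory name:
# the first "-" becomes "/", so "jags-3.4.0" -> "jags/3.4.0".
dir="jags-3.4.0"
install=$(echo "$dir" | sed 's%-%/%')
echo "$install"   # → jags/3.4.0
```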
MuTect is a method developed at the Broad Institute for the reliable and accurate identification of somatic point mutations in next generation sequencing data of cancer genomes.
For complete details, please see the publication in Nature Biotechnology:
Cibulskis, K. et al. Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nature Biotechnology (2013). doi:10.1038/nbt.2514
Download after login.
The PROJ.4 Cartographic Projections library was originally written by Gerald Evenden then of the USGS.
Download, extract, install.
tar xvf proj-4.9.1.tar.gz
The configure step is a quick one-liner:
./configure --prefix=/usr/local/$(basename $(pwd) | sed 's#-#/#')
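Spelled out, that --prefix computation takes the current source directory's name and versions it under /usr/local. A quick simulation in a temporary path (not a real build):

```shell
# Recreate the prefix computation from the configure invocation:
# basename of the source tree, with the first "-" turned into "/".
demo=$(mktemp -d)
mkdir "$demo/proj-4.9.1"
cd "$demo/proj-4.9.1"
prefix=/usr/local/$(basename $(pwd) | sed 's#-#/#')
echo "$prefix"   # → /usr/local/proj/4.9.1
```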
GDAL (Geospatial Data Abstraction Library) is a translator library for raster and vector geospatial data formats.
Download, extract, install.
The configure step is a quick one-liner:
./configure --prefix=/usr/local/$(basename $(pwd) | sed 's#-#/#')
Rosetta is a library-based, object-oriented software suite which provides a robust system for predicting and designing protein structures, protein folding mechanisms, and protein-protein interactions.
You'll need a license.
Download, extract, load scons, and compile.
tar xvf rosetta_src_2015.19.57819_bundle.tgz
module load scons
SCons is a software construction tool (build tool, or make tool) implemented in Python, that uses Python scripts as "configuration files" for software builds.
tar xvf scons-2.3.4.tar.gz
python setup.py install --prefix=/usr/local/scons/2.3.4
Change to the appropriate modules directory, check for .desc, .version and .base, then create a symlink to .base:
ln -s .base 2.3.4
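With environment modules, a version-named symlink resolving to the shared .base modulefile is what makes the new version available; the mechanism itself is just a relative symlink, simulated here in a temporary directory with a stand-in modulefile:

```shell
# Simulate the modulefiles directory: the version name is a symlink to
# the shared .base modulefile.
d=$(mktemp -d)
cd "$d"
echo '#%Module' > .base   # minimal stand-in for the real modulefile
ln -s .base 2.3.4

readlink 2.3.4   # → .base
cat 2.3.4        # same content as .base
```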
Freesurfer is a set of tools for analysis and visualization of structural and functional brain imaging data.
Create a source directory, change to it, download, extract, discover that everything is bundled, create the application directory and move everything across.
Good news everybody!
This week I've started pulling everything together to bring both For Science! and Purser Explores The World back to the internet airwaves :)
I won't reveal what the return episode of Purser Explores The World is going to be about, but suffice to say it's going to continue the same explorations and interview style that previous episodes had.
For Science! of course is going to be the return of Mel, Mags and me doing our thing about science news and getting our rant on (well, Mags and Mel more than me, but anyway). I'm also going to be looking at either expanding the show to include a new segment or creating a smaller podcast, no more than say 15 or 20 minutes long, in which we talk to researchers around the country and find out a bit more about the work they're doing, how they got started in science and so on.
I have some other thoughts about Angry Beanie and its direction, but they are for another blog post, I think.
APM:Plane 3.3.0 released
The ardupilot development team is proud to announce the release of version 3.3.0 of APM:Plane. This is a major release with a lot of changes. Please read the release notes carefully!
The last stable release was 3 months ago, and since that time we have applied over 1200 changes to the code. It has been a period of very rapid development for ArduPilot. Explaining all of the changes that have been made would take far too long, so I've chosen some key changes to explain in detail, and listed the most important secondary changes in a short form. Please ask for details if there is a change you see listed that you want some more information on.
This is the first release of APM:Plane where ARMING_CHECK and ARMING_REQUIRE both default to enabled. That means that when you upgrade, if you didn't previously have arming enabled, you will need to learn about arming your plane.
Please see this page for more information on arming:
I know many users will be tempted to disable the arming checks, but please don't do that without careful thought. The arming checks are an important part of ensuring the aircraft is ready to fly, and a common cause of flight problems is to takeoff before ArduPilot is ready.
Re-do Accelerometer Calibration
Due to a change in the maximum accelerometer range on the Pixhawk all users must re-do their accelerometer calibration for this release. If you don't then your plane will fail to arm with a message saying that you have not calibrated the accelerometers.
Only 3D accel calibration
The old "1D" accelerometer calibration method has now been removed, so you must use the 3D accelerometer calibration method. The old method was removed because a significant number of users had poor flights due to scaling and offset errors on their accelerometers when they used the 1D method. My apologies for people with very large aircraft who find the 3D method difficult.
Note that you can do the accelerometer calibration with the autopilot outside the aircraft which can make things easier for large aircraft.
After an auto-landing the autopilot will now by default disarm after LAND_DISARMDELAY seconds (with a default of 20 seconds). This feature is to prevent the motor from spinning up unexpectedly on the ground after a landing.
It is now possible to configure your autopilot for hardware in the loop simulation without loading a special firmware. Just set the parameter HIL_MODE to 1 and this will enable HIL for any autopilot. This is designed to make it easier for users to try HIL without having to find a HIL firmware.
SITL on Windows
The SITL software in the loop simulation system has been completely rewritten for this release. A major change is to make it possible to run SITL on native Windows without needing a Linux virtual machine. There should be a release of MissionPlanner for Windows soon which will make it easy to launch a SITL instance.
The SITL changes also include new backends, including the CRRCSim flight simulator. This gives us a much wider range of aircraft we can use for SITL. See http://dev.ardupilot.com/wiki/simulation-2/ for more information.
Throttle control on takeoff
A number of users had problems with pitch control on auto-takeoff, and with the aircraft exceeding its target speed during takeoff. The auto-takeoff code has now been changed to use the normal TECS throttle control which should solve this problem.
Rudder only support
There is a new RUDDER_ONLY parameter for aircraft without ailerons, where roll is controlled by the rudder. Please see the documentation for more information on flying with a rudder only aircraft:
http://plane.ardupilot.com/wiki/ardupla ... udder_only
We have managed to keep support for the APM1 and APM2 in this release, but in order to fit it in the limited flash space we had to disable some more features when building for those boards. For this release the AP_Mount code for controlling camera mounts is disabled on APM1/APM2.
At some point soon it will become impractical to keep supporting the APM1/APM2 for planes. Please consider moving to a 32 bit autopilot soon if you are still using an APM1 or APM2.
New INS code
There have been a lot of changes to the gyro and accelerometer handling for this release. The accelerometer range on the Pixhawk has been changed to 16g from 8g to prevent clipping on high vibration aircraft, and the sampling rate on the lsm303d has been increased to 1600Hz.
An important bug has also been fixed which caused aliasing in the sampling process from the accelerometers. That bug could cause attitude errors in high vibration environments.
Numerous Landing Changes
Once again there have been a lot of improvements to the automatic landing support. Perhaps most important is the introduction of a smooth transition from landing approach to the flare, which reduces the tendency to pitch up too much on flare.
There is also a new parameter TECS_LAND_PMAX which controls the maximum pitch during landing. This defaults to 10 degrees, but for many aircraft a smaller value may be appropriate. Reduce it to 5 degrees if you find you still get too much pitch up during the flare.
Other secondary changes in this release include:
- a new SerialManager library which gives much more flexible management of serial port assignment
- changed the default FS_LONG_TIMEOUT to 5 seconds
- raised default IMAX for roll/pitch to 3000
- lowered default L1 navigation period to 20
- new BRD_SBUS_OUT parameter to enable SBUS output on Pixhawk
- large improvements to the internals of PX4Firmware/PX4NuttX for better performance
- auto-formatting of microSD cards if they can't be mounted on boot (PX4/Pixhawk only)
- a new PWM based driver for the PulsedLight Lidar to avoid issues with the I2C interface
- fixed throttle forcing to zero when disarmed
- only reset mission on disarm if not in AUTO mode
- much better handling of steep landings
- added smooth transition in landing flare
- added HIL_MODE parameter for HIL without a special firmware
- mark old ELEVON_MIXING mode as deprecated
- fixed 50Hz MAVLink support
- support DO_SET_HOME MAVLink command
- fixed larger values of TKOFF_THR_DELAY
- allow PulsedLight Lidar to be disabled at a given height
- fixed bungee launch (long throttle delay)
- fixed a bug handling entering AUTO mode before we have GPS lock
- added CLI_ENABLED parameter
- removed 1D accel calibration
- added EKF_STATUS_REPORT MAVLink message
- added INITIAL_MODE parameter
- added TRIM_RC_AT_START parameter
- added auto-disarm after landing (LAND_DISARMDELAY)
- added LOCAL_POSITION_NED MAVLink message
- avoid triggering a fence breach in final stage of landing
- rebuild glide slope if we are above it and climbing
- use TECS to control throttle on takeoff
- added RUDDER_ONLY parameter to better support planes with no ailerons
- updated Piksi RTK GPS driver
- improved support for GPS data injection (for Piksi RTK GPS)
- added NAV_LOITER_TO_ALT mission item
- fixed landing approach without an airspeed sensor
- support RTL_AUTOLAND=2 for landing without coming to home first
- disabled camera mount support on APM1/APM2
- added support for SToRM32 and Alexmos camera gimbals
- added support for Jaime's MAVLink-enabled gimbal
- improved EKF default tuning for planes
- updated support for NavIO and NavIO+ boards
- updated support for VRBrain boards
- fixes for realtime threads on Linux
- added simulated sensor lag for baro and mag in SITL
- made it possible to build SITL for native Windows
- switched to faster accel sampling on Pixhawk
- added coning corrections on Pixhawk
- set ARMING_CHECK to 1 by default
- disable NMEA and SiRF GPS on APM1/APM2
- support MPU9255 IMU on Linux
- updates to BBBMINI port for Linux
- added TECS_LAND_PMAX parameter
- switched to synthetic clock in SITL
- support CRRCSim FDM backend in SITL
- new general purpose replay parsing code
- switched to 16g accel range in Pixhawk
- added FENCE_AUTOENABLE=2 for disabling just fence floor
- added POS dataflash log message
- changed GUIDED behaviour to match copter
- added support for a 4th MAVLink channel
- support setting AHRS_TRIM in preflight calibration
- fixed a PX4 mixer out of range error
Many thanks to everyone who contributed to this release. We have a lot of new developers contributing which is really great to see! Also, apologies for those who have contributed a pull request but not yet had it incorporated (or had feedback on the change). We will be trying to get to as many PRs as we can soon.
Best wishes to all APM:Plane users from the dev team, and happy flying!
Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it ;)
Seriously though, I have a tendency to lose things sometimes, and thought that posting the recipes here would be my best chance of never losing them. Since they would be presented in public, it would also force me into writing more complete recipes rather than simply scrawling down whatever seemed pertinent at the time. (I never thought that I would be presented with opportunities through this. More on that later.)
In spite of all this, you're probably wondering why the recipes still lack a bit of detail and how I ended up with this particular style of cooking.
As you can guess from my name, I have an Asian (Vietnamese, to be more precise) background. Growing up, I learnt that our cooking was often extremely tedious: it required a lot of preparation and tasted great, but often didn't fill me. Ultimately, this meant that my family wanted me to spend less time helping in the kitchen and more time tending to my studies. To a certain extent, this family policy has served us well. Many of the kids are well educated and have done well professionally.
The problem is that if you've ever worked a standard week over any period of time, you ultimately realise that a lot of the time you don't want to spend heaps of time cooking, whether for yourself or for others (this style doesn't work long term).
This is where I radically differ from my family. Many of them see cooking as a necessary chore (who wants to die, right? :-)) and labour over it, or else they love it with such a passion that they lose sight of the fact that there are only 24 hours in a day (there are/have been some professional chefs in the family). Ultimately, they end up wearing themselves out day after day, whereas I've learnt to strip back recipes to their core flavours so that I can cook decent-tasting food in a reasonable amount of time.
Like others, I went through multiple phases from a culinary perspective. As a child I loved to eat most things thrown at me (but my family didn't want me in the kitchen). In my teenage years, I used to enjoy and revel in fast and fatty foods, but I basically grew out of it as I discovered that it wasn't all that filling and could result in poor health. Just like the protagonist of 'Super Size Me', I found out that some of my bodily functions didn't work quite as well on this particular diet.
Eating out was much the same, because restaurants often added unhealthy elements to meals (high levels of MSG, sugar, salt, etc. to boost the taste), not to mention that serving sizes could sometimes be small and prices relatively high. I basically had no choice but to learn to cook for myself. In the beginning, I tried (badly) to reproduce restaurant meals; I didn't have the repertoire to reproduce and balance flavours well enough to do a half-decent job. Over time, I spent more time exploring cheap restaurants, diners, etc. around where I studied and/or worked. I also watched, read, and in general spent more time in the grocer just trying random sauces, spices, and so on, and developed a sense of flavours and how to achieve them from base ingredients.
This is why none of the recipes contain exact amounts of ingredients (at the moment). It's also because that was the way I learnt to cook (I was taught a bit by some of my aunts); because some of the less talented members of the family had a tendency to fiddle constantly, so listing amounts was basically useless; because some people (family or not) aren't willing to share ingredients, so you just have to figure it out when and if you have to; and finally because I figured out that it was the easiest way for me to learn to cook. When you look at a recipe, you're often doing mental arithmetic in order to make it 'taste right'. By developing a better sense of taste I could mostly forgo this and not have to suffer the consequences of a mathematical screw-up (it happened enough times in the family for me to learn not to become so reliant on it).
In general, my perspectives with regard to food are the following:
- kids will eventually learn what fills them, and fast food will make them feel horrible. They will grow out of it and eat properly eventually if they are exposed to the right foods
- rely on machinery when you can. Why waste your time cutting food perfectly if you can get it done in a fraction of the time using the right equipment?
- why bother with perfection if you can achieve 95% of the taste with 50% of the apparent effort?
- I'd much rather spend time enjoying food than cooking it
- prior to marinating any piece of meat I create the core sauce/marinade seperately first and then add the meat. There's no chance of food posioning and I get to have an idea what it will taste like
- balance of flavours is more important than exact amounts over and over again. You may have a different preference from time to time also. Obviously, the converse is also true. Exact amounts give you a basis from which to work from
- don't think that more resources will make you a better chef. It's possible that the exact opposite is true at times. Think about the food of the wealthy versus that of the poor. The poor have to make the most of everything that is thrown at them, extracting every last single ounce of flavour from something small/cheap while the wealthy have the basically mix and match the very best each and every time. From a chef's perspective this means that they don't have the chance to understand flavours at a more elemental/core level
- shop from specialist butchers, fishmongers, etc... they will often be able to get you unusual cuts/meats, have better knowledge, do extra things like cutting down large bones for soup stocks and they are also often quite a bit cheaper
- don't freeze if you can avoid it (or at least avoid freezing some foods). Some people I know use it as a technique to save time. For some dishes this is true but for others it can alter the actual structure (and sometimes faste. Think about soups versus meats when they are dethawed correctly and incorrectly.) of the food involved leaving it a mess when you finally prepare and eat it
- fresh means fresh. Leave fish (and some meats) in the fridge for even a day after leaving the better/stable environment at a supermarket or fishmonger and it will begin to smell and taste slightly rank. This effect increases exponentially over time
- try everything, whether that be sauces, spices, restaurants, cultures, etc... You will find cheap opportunities if you go to the right places, and ultimately you will end up healthier (you learn that better tasting food is often healthier as well), happier (variety is the spice of life), and possibly wealthier because of it (you can save a lot by learning to cook well). The wider your vocabulary, the better your cooking will become...
- balance of flavours is key. Even if you stuff up a recipe you can rescue it if you know enough about this. Added too much sugar? Use sourness to balance it out, etc...
- don't learn from a single source. If you learn purely through celebrity chefs and books you'll realise that a lot of what they do is quite gimmicky. A lot of the ingredients they use aren't very accessible and are expensive, in spite of what they say. Use your head to strip the recipes back to core flavours to save yourself time and money (in procuring them)
- learning to cook well will take time. Have patience. It took me a long while to build a sufficient 'vocabulary' before I could make dishes that were worth staying at home for. It took me more time again to learn how to reverse engineer dishes at restaurants. Use every resource at your disposal (the Internet has heaps of free information, remember?).
I got an email last year pointing out a cosmetic issue with changelogs.debian.net. I think at the time of the email, the only problem was some bitrot in PHP's built-in server variables making some text appear incorrectly.
I duly added something to my TODO list to fix it, and it subsequently sat there for like 13 months. In the ensuing time, Debian changed some stuff, and my code started incorrectly handling a 302 as well, which actually broke it good and proper.
I finally got around to fixing it.
I also fixed a problem where sometimes there can be multiple entries in the Sources file for a package (switching to using api.ftp-master.debian.org would also address this), which sometimes caused an incorrect version of the changelog to be returned.
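The fix can be sketched roughly like this. This is a minimal illustration, not the actual changelogs.debian.net code, and the naive version comparison below ignores epochs and dpkg's '~' ordering, which a real implementation must handle properly:

```python
import re

def naive_version_key(version):
    # Split e.g. "1.10-2" into comparable numeric chunks: [[1, 10], [2]].
    # Ignores epochs and dpkg's '~' ordering; illustration only.
    return [[int(n) for n in re.findall(r"\d+", part)]
            for part in version.split("-")]

def pick_latest(stanzas):
    # stanzas: dicts parsed from the Sources file, all for one source
    # package. Return the stanza with the highest Version so the right
    # changelog is served when the package appears more than once.
    return max(stanzas, key=lambda s: naive_version_key(s["Version"]))
```

For example, given stanzas with Versions "1.9-1" and "1.10-2", pick_latest returns the "1.10-2" stanza, whereas a naive string comparison would wrongly rank "1.9-1" higher.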
In the resulting tinkering, I learned about api.ftp-master.debian.org, which is totally awesome. I could stop maintaining and parsing a local copy of sid's Sources file, and just make a call to this instead.
Finally, I added linking to CVEs, because it was a quick thing to do, and adds value.
In light of api.ftp-master.debian.org, I'm very tempted to rewrite the redirector. The code is very old and hard for present-day Andrew to maintain, and I despise PHP. I'd rather write it in Python today, with some proper test coverage. I could also potentially host it on AppEngine instead of locally, just so I get some experience with AppEngine.
It's also been suggested that I fold the changes into the changelog hosting on ftp-master.debian.org. I'm hesitant to do this, as it would require changing the output from plain text to HTML, which would mess up consumers of the plain text (like the current implementation of changelogs.debian.net).
I’m currently working on a Digital Voice (DV) mode that will work at negative SNRs. So I started thinking about where the theoretical limits are:
- Let's assume we have a really good rate 0.5 FEC code that approaches the Shannon limit, perfectly correcting random bit errors up to a channel BER of 12%
- A real-world code this good requires a FEC frame size of 1000s of bits, which will mean long latency (seconds). Let's assume that's OK.
- A large frame size with perfect error correction means we can use a really low bit rate speech codec that can analyse seconds of speech at a time and remove all sorts of redundant information (like silence). This will allow us to code more efficiently and lower the bit rate. Also, we only want speech quality just on the limits of intelligibility. So let's assume a 300 bit/s speech codec.
- Using rate 0.5 FEC that’s a bit rate over the channel of 600 bit/s.
- Let's assume QPSK on an AWGN channel. It's possible to make a fading channel behave like an AWGN channel if we use diversity, e.g. a long code with interleaving (time diversity), or spread spectrum (frequency diversity).
- QPSK at around 12% BER requires an Eb/No of -1dB, or an Es/No of Eb/No + 3 = 2dB. If the bit rate is 600 bit/s, the QPSK symbol rate is 300 symbols/s.
So we have SNR = Es/No – 10*log10(NoiseBW/SymbolRate) = 2 – 10*log10(3000/300) = -8dB. Untrained operators find SSB very hard to use beneath 6dB, however I imagine many Ham contacts (especially brief exchanges of callsigns and signal reports) are made well beneath that. DV at -8dB would be completely noise free, but of low quality (e.g. a little robotic) and high latency.
For VHF applications C/No is a more suitable measure: C/No = SNR + 10*log10(3000) = 26.7dBHz (FM is a very scratchy readability 5 at around 43dBHz). That's roughly a 16dB (40x) power improvement over FM!
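The link budget above can be checked with a few lines (numbers taken straight from the text):

```python
import math

bit_rate    = 600    # bit/s over the channel: 300 bit/s codec, rate 0.5 FEC
symbol_rate = 300    # symbols/s: QPSK carries 2 bits per symbol
noise_bw    = 3000   # Hz, conventional SSB noise bandwidth
es_no_db    = 2.0    # dB: Eb/No of -1 dB plus 10*log10(2 bits/symbol) ~= 3 dB

snr_db   = es_no_db - 10 * math.log10(noise_bw / symbol_rate)  # -> -8 dB
cno_dbhz = snr_db + 10 * math.log10(noise_bw)                  # -> ~26.8 dBHz
```

This reproduces the -8dB SNR and ~26.7dBHz C/No figures from the post.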
Update: It turns out the DA was trolling. We all now know that DrupalCon North America 2017 will be in New Orleans. I've kept this post up as I believe the information about handling unpublished nodes is relevant. I have also learned that m4032404 is enabled by default in govCMS.
When a user doesn't have access to content in Drupal, a 403 forbidden response is returned. This is the case out of the box for unpublished content. The problem with this is that sensitive information may be contained in the URL. A great example of this is the DrupalCon site.
The way to avoid this is to use the m4032404 module, which changes a 403 response to a 404. This simple module prevents your site leaking information via URLs.
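The idea behind m4032404 can be sketched framework-agnostically (a hypothetical handler for illustration, not Drupal code): treat "exists but forbidden" exactly like "does not exist", so the response never confirms that a node with a sensitive URL is present.

```python
def status_for(node, user_can_view):
    # Return the HTTP status for a node request. Answering 404 for both
    # missing and forbidden nodes means a visitor cannot distinguish
    # "doesn't exist" from "exists but unpublished", so the site never
    # confirms what a sensitive URL points at.
    if node is None or not user_can_view(node):
        return 404
    return 200
```

With a plain 403 for unpublished nodes, an anonymous visitor probing URLs learns which titles exist before publication; collapsing both cases to 404 closes that leak.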