Planet Linux Australia
I don't read as much as I should these days, but one author I always make time for is John Scalzi. This is the next book in the Old Man's War universe, and it continues from where The Human Division ended on a cliffhanger. So, let's get that out of the way -- ending a book on a cliffhanger is a dick move and John is a bad bad man. Then again I really enjoyed The Human Division, so I will probably forgive him.
I don't think this book is as good as The Human Division, but it's a solid book. I enjoyed reading it and it wasn't a chore like some books this far into a universe can be (I'm looking at you, Asimov sharecropped books). The conclusion to the story arc is sensible, and not something I would have predicted, so overall I'm going to put this book on my mental list of the very many non-terrible Scalzi books.
Tags for this post: book john_scalzi combat aliens engineered_human old_mans_war age colonization human_backup cranial_computer personal_ai
Related posts: The Last Colony ; The Human Division; Old Man's War ; The Ghost Brigades ; Old Man's War (2); The Ghost Brigades (2) Comment Recommend a book
We’ve finally had time to finalise Korora 22 and images are now available. I strongly recommend downloading with BitTorrent if you can.
We are not shipping Adobe Flash by default from 22 onwards, due to persistent security flaws. We still include the repository however, so users can install via the package manager or command line if they really want it:
sudo dnf install flash-plugin
Alternatively, install Google Chrome which includes the latest version of Flash.
Also, KDE 4 is not available for this release, so if you are not ready to move to KDE 5, then please stick to Korora 21.
The list of available wifi channels is slightly different from country to country. To ensure access to the right channels and transmit power settings, one needs to set the right regulatory domain in the wifi stack.

Linux
For most Linux-based computers, you can look up and change the current regulatory domain using these commands:

iw reg get
iw reg set CA
where CA is the two-letter country code for the country where the device is located.
On Debian and Ubuntu, you can make this setting permanent by putting the country code in /etc/default/crda.
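The file format is a single shell-style variable assignment; a minimal sketch, assuming the same CA country code as the example above:

```shell
# /etc/default/crda
REGDOMAIN=CA
```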
Finally, to see the list of channels that are available in the current config, use:

iwlist wlan0 frequency

OpenWRT
In order to persist your changes, though, you need to use the uci command:

uci set wireless.radio0.country=CA
uci set wireless.radio1.country=CA
uci commit wireless
where wireless.radio0 and wireless.radio1 are the wireless devices specific to your router. You can look them up using:

uci show wireless
To test that it worked, simply reboot the router and then look at the selected regulatory domain:

iw reg get

Scanning the local wifi environment
Once your devices are set to the right country, you should scan the local environment to pick the least congested wifi channel. You can use the Kismet spectools (free software) if you have the hardware, otherwise WifiAnalyzer (proprietary) is a good choice on Android (remember to manually set the available channels in the settings).
Excellent keynote on using Python in education. Many interesting insights into what's being done well, what needs improving and how to contribute.

Slow Down, Compose Yourself - How Composition Can Help You Write Modular, Testable Code
- Modularity and testability traits are desirable.
- Tests are the only way to ensure your code works.
- When designing a system, plan for these traits
- Fake components can be used to return hard-to-replicate conditions, e.g. a database returning "no more space".
- Ensure you have correctness at every level.
- Suggests using practices from better (functional programming) languages like Haskell with Python.
- Trosnoth is a 2D side scrolling game.
- Team strategy game
- Involve people at every step of the process.
- Have a core group of interested people and allow them to contribute and become involved to increase commitment.
- Build community.
- Get people excited
- Be prepared to be wrong.
- Encourage people to be involved.
- Start with "good enough".
- Re-writing everything blocked contributions for the period of the re-write.
- Look for ideas anywhere.
- Mistakes are a valuable learning opportunity.
- Your people and your community are what makes the project work.
by Todd Owen (http://2015.pycon-au.org/schedule/30057/view_talk?day=saturday)
- Ansible is a push model of configuration management and automation.
- Pull model of Chef and Puppet is unnecessarily complex.
- Ansible's simplicity is a great asset
- Graceful and clear playbooks.
- Simple is better than complex.
- Complex is better than complicated.
- Flat is better than nested.
- Ansible modules are organised without hierarchy.
- Ansible only uses name spaces for roles.
- Explicit is better than implicit. Implicit can become invisible to users.
- Ansible is very consistent.
- Simple by design.
- Easy to learn.
by Tim Butler
- Prefers to think of containers as application containers.
- Docker is an abstraction layer focussed on the application it delivers.
- Easy, repeatable deployments.
- Not a fad, introduced 15 years ago.
- Google launches over 2 billion containers per week.
- Docker is fast, lightweight, isolated and easy.
- Rapidly changing and improving.
- Quickly changing best practices.
- Containers are immutable.
- No mature multi-host deployment.
- Security issues addressed via stopping containers and starting a patched one (milliseconds)
- Simple and clear subcommands.
- Docker compose for orchestration.
- Docker Machine creates the Docker host for you.
- Volume plugin
- New networking system so it actually works at scale.
- Salt stack provides:
- configuration management
- loosely coupled infrastructure coordination.
- remote execution
- unittest 101
- create a test directory
- write a test
- gave an example of a "hello world" of tests.
- uses unittest.main.
- Nose extends unittest to better facilitate multiple unit tests.
- Mock allows the replacement of modules for testing.
- Keep tests small.
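A minimal version of the "hello world" test mentioned above might look like this (the test name and assertion are illustrative, not from the talk):

```python
import unittest

class TestHello(unittest.TestCase):
    def test_greeting(self):
        # The simplest possible test: exercise some code, assert on the result.
        self.assertEqual("hello " + "world", "hello world")

if __name__ == "__main__":
    # exit=False stops unittest.main() from calling sys.exit(), which is
    # convenient when running inside another harness or interpreter session.
    unittest.main(argv=["prog"], exit=False)
```

Nose (or, these days, pytest) will discover and run files like this automatically once they live in a test directory.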
- Claims Python on mobile is very viable.
- Most of the failing bugs are not services you want on mobile anyway.
- libffi problems need to be resolved.
- Compiler issues not quite resolved.
- Why not Jython? It does not compile on Android, nor will many of its dependencies.
- Jython is probably not the right solution anyway.
- Thinks you may be able to compile Python directly to Java...uses Byterun as an example.
- Kivy works now and worth using.
- His project Toga is coming along nicely.
- Admits he may be on a fool's errand but thinks this is achievable.
by Victor Palma
- The sysadmin mindset of getting things fixed as fast as possible needs to be shed.
- Take the time to step back and consider problems.
- Keep things simple, explicit and consistent.
- Do comments in reStructuredText.
- Unless you're testing, you're not really coding.
- Tools include tox and nose.
- Don't be afraid to experiment.
- Find something you are passionate about and manipulate that data.
- Gerrit does code review.
- Zuul tests things right before they merge and will only merge them if they pass.
- Only Zuul can commit to master.
- Zuul uses gearman to manage Jenkins jobs.
- Uses NNFI - nearest non-failing item
- Use jenkins-gearman instead of jenkins-gerrit to reproduce the work flow.
by Monty Taylor
- Create truth in realistic acting
- Know what problem you're trying to solve.
- Develop techniques to solve the problem.
- Don't confuse the techniques with the result.
- Willingness to change with new information.
What Monty Wants
- Provide computers and networks that work.
- Should not chase 12-factor apps.
- Kubernetes / CoreOS are already providing these frameworks
- OpenStack should provide a place for these frameworks to work.
- By default give a directly routable IP.
inaugust.com/talks/a-vision-for-the-future.html

The Future of Identity (Keystone) in OpenStack
- Moving to Fernet Tokens as the default, everywhere.
- No database requirement
- Limited token size
- Will support all the features of existing token types.
- Problems with UUID or PKI tokens:
- SQL back end
- PKI tokens are too large.
- Moving from bespoke WSGI to Flask
- Moving to a KeystoneAuth Library to remove the need for the client to be everywhere.
- Keystone V3 API...everywhere. Focus on removing technical debt.
- V2 API should die.
- Deprecating the Keystone client in favour of the openstack client.
- Paste.ini functionality being moved to core and controlled via policy.json
- Gave a great overview of OpenStack / CoreOS / Containers
- All configuration management sucks. Ansible sucks less.
- CI/CD pipelines are repeatable.
by Jamie Lennox
- SAML is the initially supported WebSSO.
- Ipsilon has SAML frontend, supports SSSD / PAM on the backend.
- Requires Keystone V3 API everywhere.
- Jamie successfully did live demo that demonstrated the work flow.
by Angus Lees
- Uses Linux kernel separation to restrict available privileges.
- Gave a brief history of rootwrap.
- Fast and safe.
- Still in beta
by Monty Taylor
- Shade's existence is a bug.
- Take OpenStack back to basics
- Keeps things simple.
Once in a while you want to start a daemon with differing parameters from the norm.
For example, the default parameters to Fedora's packaging of ladvd give too much access to unauthenticated remote network units, by allowing those units to set the port description on your interfaces. So let's use that as our example.
With systemd, unit files in /etc/systemd/system/ shadow those in /usr/lib/systemd/system/. So we could copy the ladvd.service unit file from /usr/lib/... to /etc/..., but we're old, experienced sysadmins and we know that this will lead to trouble in the long run: /usr/lib/systemd/system/ladvd.service will be updated to support some new systemd feature and we'll miss that update in the copy of the file.
What we want is an "include" command which will pull in the text of the distributor's configuration file. Then we can set about changing it. Systemd has a ".include" command. Unfortunately its parser also checks that some commands occur exactly once, so we can't modify those commands as including the file consumes that one definition.
In response, systemd allows a variable to be cleared; when the variable is set again it is counted as being set once.
Thus our modification of ladvd.service occurs by creating a new file /etc/systemd/system/ladvd.service containing:

.include /usr/lib/systemd/system/ladvd.service

[Service]
# was ExecStart=/usr/sbin/ladvd -f -a -z
# but -z allows string to be passed to kernel by unauthed external user
ExecStart=
ExecStart=/usr/sbin/ladvd -f -a
At the very least, a security issue equal to the "rude words in SSID lists" problem. At its worst, an overflow attack vector.
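On newer systemd versions, the same override can be written as a drop-in snippet instead of shadowing (or .include-ing) the whole unit, which sidesteps the parser quirks entirely. A sketch, assuming the same ladvd packaging and paths as above:

```ini
# /etc/systemd/system/ladvd.service.d/override.conf
[Service]
# Clear the distributor's ExecStart, then set ours without -z
ExecStart=
ExecStart=/usr/sbin/ladvd -f -a
```

Run systemctl daemon-reload afterwards; systemctl cat ladvd.service will show the distributor's unit with the drop-in merged in, so future packaging updates are still picked up.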
I’ve been working on the Linear Predictive Coding (LPC) modeling used in the Codec 2 700 bit/s mode to see if I can improve the speech quality. Given this mode was developed in just a few days I felt it was time to revisit it for some tuning.
LPC fits a filter to the speech spectrum. We update the LPC model every 40ms for Codec 2 at 700 bit/s (10 or 20ms for the higher rate modes).
Speech Codecs typically use a 10th order LPC model. This means the filter has 10 coefficients, and every 40ms we have to send them to the decoder over the channel. For the higher bit rate modes I use about 37 bits/frame for this information, which is the majority of the bit rate.
However I discovered I can get away with a 6th order model, if the input speech is filtered the right way. This has the potential to significantly reduce the bit rate.
Our ear perceives speech based on the frequency of peaks in the speech spectrum. When the peaks in the speech spectrum are indistinct, we have trouble understanding what is being said. The speech starts to sound muddy. With analog radio like SSB (or in a crowded room), the troughs between the peaks fill with noise as the SNR degrades, and eventually we can’t understand what’s being said.
The LPC model is pretty good at representing peaks in the speech spectrum. With a 10th order LPC model (p=10) you get 10 poles. Each pair of poles can represent one peak, so with p=10 you get up to 5 independent peaks, with p=6, just 3.
I discovered that LPC has some problems if the speech spectrum has big differences between the low and high frequency energy. To find the LPC coefficients, we use an algorithm that minimises the mean square error. It tends to “throw poles” at the highest energy part of signal (frequently near DC), while ignoring the still important, lower energy peaks at higher frequencies above 1000Hz. So there is a mismatch in the way LPC analysis works and how our ears perceive speech.
For example I found that samples like hts1a and ve9qrp code quite well, but cq_ref and kristoff struggle. The former have just 12dB between the LF and HF parts of the speech spectrum, the latter 40dB. This may be due to microphones, input filtering, or analog shaping.
Another problem with using an unconventionally low LPC order like p=6 is that the model “runs out of poles”. Some speech signals may have 4 or 5 peaks, so the poor LPC model gets all confused and tries to reach a compromise that just sounds bad.
I messed around with a bunch of band pass filters that I applied to the speech samples before LPC modeling. These filters whip the speech signal into a shape that the LPC model can work with. I ran various samples (hts1a, hts2a, cq_ref, ve9qrp_10s, kristoff, mmt1, morig, forig, x200_ext, vk5qi) through them to come up with the best compromise for the 700 bits/mode.
Even though the latter sample is band limited, it is easier to understand as the LPC model is doing a better job of clearly representing those peaks.
After some experimentation with sox I settled on two different filter types: a sox "bandpass 1000 2000" worked on some, whereas on others with more low frequency content "bandpass 1500 2000" sounded better. Some helpful discussions with Glen VK1XX had suggested that a two band AGC was common in broadcast audio pre-processing, and might be useful here.
However through a process of frustrated experimentation (I was stuck on cq_ref for a day) I found that a very sharp skirted filter between 300 and 2600Hz did a pretty good job. Like p=6 LPC, a 2600Hz cut off is quite uncommon for speech coding, but SSB users will find it strangely familiar…
Note that for the initial version of the 700 bit/s mode (currently in use in FreeDV 700) I have a different band pass filter design I chose more or less at random on the day that sounds like this with p=6 LPC. This filter now appears to be a bit too severe.
Here is a little chunk of speech from hts1a:
Below are the original (red) and p=6 LPC models (green line) without and with a sox "bandpass 1000 2000" filter applied. If the LPC model was perfect green and red would be superimposed. Open each image in a new browser tab then jump back and forth. See how the two peaks around 550 and 1100Hz are better defined with the bandpass filter? The error (purple) in the 500 – 1000 Hz region is much reduced, better defining the “twin peaks” for our long suffering ears.
Here are three spectrograms of me saying “D G R”. The dark lines represent the spectral peaks we use to perceive the speech. In the “no BPF” case you can see the spectral peaks between 2.2 and 2.3 seconds are all blurred together. That’s pretty much what it sounds like too – muddy and indistinct.
Note that compared to the original, the p=6 BPF spectrogram is missing the pitch fundamental (dark line near 0 Hz), and a high frequency peak at around 2.5kHz is indistinct. Turns out neither of these matter much for intelligibility – they just make the speech sound band limited.
OK, so over the last few weeks I’ve spent some time looking at the effects of microphone placement, and input filtering on p=6 LPC models. Now time to look at quantisation of the 700 mode parameters then try it again over the air and see if the speech quality is improved. To improve performance in the presence of bit errors I’d also like to get the trellis based decoding into a real world usable form. When the entire FreeDV 700 mode (codec, modem, error handling) is working OK compared to SSB, time to look at porting to the SM1000.
Command Line Magic
I’m working with the c2sim program, which lets me explore Codec 2 in a partially quantised or incomplete state. I pipe audio in and out between various sox stages.
Note these simulations sound a lot better than the final Codec 2 at 700 bit/s as nothing else is quantised/decimated, e.g. it’s all at a 10ms frame rate with original phases. It’s a convenient way to isolate the LPC modeling step with as much fidelity as we can.
If you want to sing along here are a couple of sample command lines. Feel free to ask me any questions:
sox -r 8000 -s -2 ../../raw/hts1a.raw -r 8000 -s -2 -t raw - bandpass 1000 2000 | ./c2sim - --lpc 6 --lpcpf -o - | play -t raw -r 8000 -s -2 -
sox -r 8000 -s -2 ../../raw/cq_ref.raw -r 8000 -s -2 -t raw - sinc 300 sinc -2600 | ./c2sim - --lpc 6 --lpcpf -o - | play -t raw -r 8000 -s -2 -
Interactive map for this route.
Tags for this post: blog pictures 20150729 photo sydney
Related posts: In Sydney!; In Sydney for the day; A further update on Robyn's health; RIP Robyn Boland; Weekend update; Bigger improvements
See more thumbnails
I also made a quick and dirty 360 degree video of the view of LA from the top of the nike control radar tower:
Interactive map for this route.
Tags for this post: blog pictures 20150727-nike_missile photo california
Related posts: First jog, and a walk to Los Altos; Did I mention it's hot here?; Summing up Santa Monica; Noisy neighbours at Central Park in Mountain View; So, how am I getting to the US?; Views from a lookout on Mulholland Drive, Bel Air
Tags for this post: blog pictures 20150727-lookout photo california
Related posts: First jog, and a walk to Los Altos; Did I mention it's hot here?; Summing up Santa Monica; Noisy neighbours at Central Park in Mountain View; So, how am I getting to the US?; VTA station for the Santa Clara Convention Center
Interactive map for this route.
Tags for this post: blog pictures 20150727 photo california bushwalk
Related posts: A walk in the San Mateo historic red woods; First jog, and a walk to Los Altos; Goodwin trig; Did I mention it's hot here?; Big Monks; Summing up Santa Monica
- Everyone will bend over backwards for you if you’re famous, but if you’re not then you’ll be left out to die http://t.co/v7EvUWXorx 10:42:13, 2015-07-26
- Now this is real news. Shame that many don’t care because most of the victims are poor. http://t.co/CtkLyfnlnb 14:19:00, 2015-07-25
- Your taste in music says a lot about how you think: Cambridge study http://t.co/0rAmN3GqY8 10:42:05, 2015-07-25
- …and so the Age of Entitlement continues unapologetically http://t.co/3P7pUd0sC6 #auspol 18:27:04, 2015-07-24
- The effects of everyday sexism on young children http://t.co/2oswzu7bJH 14:19:01, 2015-07-24
- Your Smartphone Usage is Linked to Your Level of Depression http://t.co/qngN5Nwa6Q 10:42:00, 2015-07-24
- Country kids have better work ethic than urban, affluent kids; youth disconnected from the workforce http://t.co/tL9i9K0EOJ 16:33:16, 2015-07-21
- RT @TeenMogul: 8 online classes that will make you smarter about business
http://t.co/sjz4hcQfSp http://t.co/X9xws6PBIW 19:14:49, 2015-07-20
- Australia one of the only advanced economies without dedicated youth entrepreneurship initiatives supported by govt http://t.co/TaYAJs8LMS 16:33:04, 2015-07-20
- Dropping out of education is more likely to kill you than smoking: study http://t.co/Vb1FctGmjD 14:19:05, 2015-07-20
No, I didn’t attend
- Raspberry Pi hacks
- Microservices – Why, what and how to get there
- Introduction to planning and running tech events
- Docker in production: Reality, not hype
- Continuous delivery and large microservice architectures: Reflections on Ioncannon
- Choose boring technology
- 9 Big-Picture Takeaways From OSCON as Open Source Goes Mainstream
- Heard Around the Web: OSCON 2015
- 2015 OSCON Interview collection
- Signals from OSCON 2015
This week I have been looking at the effect different speech samples have on the performance of Codec 2. One factor is microphone placement. In radio (from broadcast to two way HF/VHF) we tend to use microphones placed close to our lips. In telephony and hands-free use, more distant microphone placement has become common.
People trying FreeDV over the air have obtained poor results from using built-in laptop microphones, but good results from USB headsets.
So why does microphone placement matter?
Today I put this question to the codec2-dev and digital voice mailing lists, and received many fine ideas. I also chatted to such luminaries as Matt VK5ZM and Mark VK5QI on the morning drive time 70cm net. I’ve also been having an ongoing discussion with Glen, VK1XX, on this and other Codec 2 source audio conundrums.
A microphone is a bit like a radio front end:
We assume linearity (the microphone signal isn’t clipping).
Imagine we take exactly the same mic and try it 2cm and then 50cm away from the speaker's lips. As we move it away the signal power drops and (given the same noise figure) SNR must decrease.
Adding extra gain after the microphone doesn’t help the SNR, just like adding gain down the track in a radio receiver doesn’t help the SNR.
When we are very close to a microphone, the low frequencies tend to be boosted; this is known as the proximity effect. This is where the analogy to radio signals falls over. Oh well.
A microphone 50cm away picks up multi-path reflections from the room, laptop case, and other surfaces that start to become significant compared to the direct path. Summing a delayed version of the original signal will have an impact on the frequency response and add reverb – just like a HF or VHF radio signal. These effects may be really hard to remove.
Science in my Lounge Room 1 – Proximity Effect
I couldn’t resist – I wanted to demonstrate this model in the real world. So I dreamed up some tests using a couple of laptops, a loudspeaker, and a microphone.
To test the proximity effect I constructed a wave file with two sine waves at 100Hz and 1000Hz, and played it through the speaker. I then sampled using the microphone at different distances from a speaker. The proximity effect predicts the 100Hz tone should fall off faster than the 1000Hz tone with distance. I measured each tone power using Audacity (spectrum feature).
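Generating such a two-tone test file is straightforward with the standard library; a sketch (the sample rate, duration, and levels here are assumptions, not the values used in the original test):

```python
import math
import struct
import wave

# Two tones at 100 Hz and 1000 Hz, equal level, mono 16-bit PCM.
rate = 8000       # samples per second (assumed)
seconds = 5       # duration (assumed)
with wave.open("two_tone.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(rate)
    frames = bytearray()
    for n in range(rate * seconds):
        t = n / rate
        # 0.4 + 0.4 amplitude keeps the sum safely below full scale
        s = 0.4 * math.sin(2 * math.pi * 100 * t) \
          + 0.4 * math.sin(2 * math.pi * 1000 * t)
        frames += struct.pack("<h", int(s * 32767))
    w.writeframes(bytes(frames))
```

The resulting file can be played through the loudspeaker, and the received tone powers measured in Audacity's spectrum view as described above.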
This spreadsheet shows the results over a couple of runs (levels in dB).
So in Test 1, we can see the 100Hz tone falls off 4dB faster than the 1000Hz tone. That seems a bit small, could be experimental error. So I tried again with the mic just inside the speaker aperture (hence -1cm) and the difference increased to 8dB, just as expected. Yayyy, it worked!
Apparently this effect can be as large as 16dB for some microphones. Apparently radio announcers use this effect to add gravitas to their voice, e.g. leaning closer to the mic when they want to add drama.
In my case it means unwanted extra low frequency energy messing with Codec 2 with some closely placed microphones.
Science in my Lounge Room 2 – Multipath
So how can I test the multipath component of my model above? Can I actually see the effects of reflections? I set up my loudspeaker on a coffee table and played a 300 to 3000 Hz swept sine wave through it. I sampled close up and with the mic 25cm away.
The idea is get a reflection off the coffee table. The direct and reflected wave will be half a wavelength out of phase at some frequency, which should cause a notch in the spectrum.
Let's take a look at the frequency response close up and at 25cm:
Hmm, they are both a bit of a mess. Apparently I don’t live in an anechoic chamber. Hmmm, that might be handy for kids' parties. Anyway I can observe:
- The signal falls off a cliff at about 1000Hz. Well that will teach me to use a speaker with an active cross over for these sorts of tests. It’s part of a system that normally has two other little speakers plugged into the back.
- They both have a resonance around 500Hz.
- The close sample is about 18dB stronger. Given both have the same noise level, that’s 18dB better SNR than the other sample. Any additional gain after the microphone will increase the noise as much as the signal, so the SNR won’t improve.
OK, let's look at the reflections:
A bit of Googling reveals reflections of acoustic waves from solid surfaces are in phase (not reversed 180 degrees). Also, the angle of incidence is the same as reflection. Just like light.
Now the microphone and speaker aperture are 16cm off the table, and the mic 25cm away. Couple of right angle triangles, bit of Pythagoras, and I make the reflected path length 40.6cm. This means a path difference of 40.6 – 25 = 15.6cm. So when wavelength/2 = 15.6cm, we should get a notch in the spectrum, as the two waves will cancel. Now v = f(wavelength), and v = 340m/s, so we expect a notch at f = 340/(2 × 0.156) = 1090Hz.
Looking at a zoomed version of the 25cm spectrum:
I can see several notches: 460Hz, 1050Hz, 1120Hz, and 1300Hz. I’d like to think the 1050Hz notch is the one predicted above.
Can we explain the other notches? I looked around the room to see what else could be reflecting. The walls and ceiling are a bit far away (which means low freq notches). Hmm, what about the floor? It’s big, and it’s flat. I measured the path length directly under the table as 1.3m. This table summarises the possible notch frequencies:
Note that notches will occur at any frequency where the path difference is an odd multiple of half a wavelength: wavelength/2, 3(wavelength)/2, 5(wavelength)/2… hence we get a comb effect along the frequency axis.
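The comb of predicted notch frequencies is easy to compute; here is a small sketch (the path lengths below are the coffee-table numbers from above, and the function name is mine, not from the post):

```python
def notch_freqs(direct_m, reflected_m, v=340.0, count=3):
    """Frequencies where the direct and reflected paths differ by an
    odd number of half wavelengths, so the two waves cancel."""
    diff = reflected_m - direct_m  # path difference in metres
    return [round((2 * k - 1) * v / (2 * diff)) for k in range(1, count + 1)]

# Reflection off the coffee table: direct path 25cm, reflected path 40.6cm.
print(notch_freqs(0.25, 0.406))  # → [1090, 3269, 5449]
```

Plugging in the floor-reflection path difference instead gives the lower-frequency comb summarised in the table above.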
OK I can see the predicted notch at 486Hz, and 1133Hz, which means the 1050 Hz is probably the one off the table. I can’t explain the 1300Hz notch, and no sign of the predicted notch at 810Hz. With a little imagination we can see a notch around 1460Hz. Hey, that’s not bad at all for a first go!
If I was super keen I’d try a few variations like the height above the table and see if the 1050Hz notch moves. But it’s Friday, and nearly time to drink red wine and eat pizza with my friends. So that’s enough lounge room acoustics for now.
How to break a low bit rate speech codec
Low bit rate speech codecs make certain assumptions about the speech signal they compress. For example the time varying filter used to transmit the speech spectrum assumes the spectrum varies slowly in frequency, and doesn’t have any notches. In fact, as this filter is “all pole” (IIR), it can only model resonances (peaks) well, not zeros (notches). Codecs like mine tend to fall apart (the decoded speech sounds bad) when the input speech violates these assumptions.
This helps explain why clean speech from a nicely placed microphone is good for low bit rate speech codecs.
Now Skype and (mobile) phones do work quite well in “hands free” mode, with rather distant microphone placement. I often use Skype with my internal laptop microphone. Why is this OK?
Well the codecs used have a much higher bit rate, e.g. 10,000 bit/s rather than 1,000 bits/s. This gives them the luxury to employ codecs that can, to some extent, code arbitrary waveforms as well as speech. These employ algorithms like CELP that use a hybrid of model based (like Codec 2) and waveform based (like PCM). So they faithfully follow the crappy mic signal, and don’t fall over completely.
In Sep 2014 I had some interesting discussions around the effect of microphones, small speakers, and speech samples with Mike, OH2FCZ, who is an audio professional. Thanks Mike!
In previous years, attending the Linux Security Summit (LSS) has required full registration as a LinuxCon attendee. This year, LSS has been upgraded to a hosted event. I didn’t realize that this meant that LSS registration was available entirely standalone. To quote an email thread:
If you are only planning on attending the Linux Security Summit, there is no need to register for LinuxCon North America. That being said, you will not have access to any of the booths, keynotes, breakout sessions, or breaks that come with the LinuxCon North America registration. You will only have access to the Linux Security Summit.
Thus, if you wish to attend only LSS, then you may register for that alone, at no cost.
There may be a number of people who registered for LinuxCon but who only wanted to attend LSS. In that case, please contact the program committee at lss-pc_AT_lists.linuxfoundation.org.
Apologies for any confusion.
Today, for whatever reason, inside a venv inside a brand new Ubuntu 14.04 install, I could not see a system-wide install of pywsman (installed via sudo apt-get install python-openwsman).
mrda@host:~$ python -c 'import pywsman'
mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?
Let's try something else that's installed system-wide:

(venv)mrda@host:~$ python -c 'import six'  # Works
Why does six work, and pywsman not?

(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root 1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan 6 2014 /usr/lib/python2.7/dist-packages/six.py
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root 80590 Jun 16 2014 /usr/lib/python2.7/dist-packages/pywsman.py
-rw-r--r-- 1 root root 293680 Jun 16 2014 /usr/lib/python2.7/dist-packages/_pywsman.so
The only thing that comes to mind is that pywsman wraps a .so
A work-around is to tell venv that it should use the system-wide install of pywsman, like this:
# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv
Now start over:

mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!
Binh Nguyen: Self Replacing Secure Code, our Strange World, Mac OS X Images Online, Password Recovery Software, and Python Code Obfuscation
If you're curious, I also looked at fully automated network defense (as in the CGC (Cyber Grand Challenge)) in all of my three reports, 'Building a Cloud Computing Service', 'Convergence Effect', and 'Cloud and Internet Security' (I also looked at a lot of other concepts such as 'Active Defense' systems which involve automated network response/attack, but there are a lot of legal, ethical, technical, and other conundrums that we need to think about if we proceed further down this path...). I'll be curious to see what the final implementations will be like...
If you've ever worked in the computer security industry you'll realise that it can be incredibly frustrating at times. As I've stated previously it can sometimes be easier to get information from countries under sanction than legitimately (even in a professional setting in a 'safe environment') for study. I find it very difficult to understand this perspective especially when search engines allow independent researchers easy access to adequate samples and how you're supposed to defend against something if you (and many others around you) have little idea of how some attack system/code works.
It's interesting how the West views China and Russia via diplomatic cables (WikiLeaks). They say that China is being overly aggressive particularly with regards to economics and defense. Russia is viewed as a hybrid criminal state. When you think about it carefully the world is just shades of grey. A lot of what we do in the West is very difficult to defend when you look behind the scenes and realise that we straddle such a fine line and much of what they do we also engage in. We're just more subtle about it. If the general public were to realise that Obama once held off on seizing money from the financial system (proceeds of crime and terrorism) because there was so much locked up in US banks that it would cause the whole system to crash would they see things differently? If the world in general knew that much of southern Italy's economy was from crime would they view it in the same way as they saw Russia? If the world knew exactly how much 'economic intelligence' seems to play a role in 'national security' would we think about the role of state security differently?
If you develop across multiple platforms you'll have discovered that it is simply easier to run a copy of Mac OS X in a virtual machine than to shuffle back and forth between different physical machines. Copies of the ISO/DMG image are widely available (technically, Mac OS X is free, for those who don't know), and as many have discovered, setup is usually reasonably easy.
If you've ever lost the password to an archive, password recovery programs can save a lot of time. Most free password recovery tools, however, handle only a limited range of file types and password schemes.
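To make the idea concrete, here is a toy dictionary-attack sketch using Python's standard zipfile module. This is purely illustrative (the function name and structure are my own, not from any particular tool): the stdlib only understands the legacy ZipCrypto scheme, and real recovery tools are vastly faster and support far more formats.

```python
import zipfile

def recover_zip_password(archive_path, wordlist):
    """Try each candidate password against the first member of a ZIP archive.

    A minimal dictionary-attack sketch. Returns the first password that
    decrypts the member, or None if no candidate works.
    """
    with zipfile.ZipFile(archive_path) as zf:
        member = zf.namelist()[0]
        for candidate in wordlist:
            try:
                # read() raises RuntimeError on a bad ZipCrypto password
                zf.read(member, pwd=candidate.encode())
                return candidate
            except (RuntimeError, zipfile.BadZipFile):
                continue
    return None
```

Note that against an unencrypted archive every candidate "succeeds", since the password is simply ignored; a real tool would check the encryption flag first.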
There are some Python bytecode obfuscation utilities out there but like standard obfuscators they are of limited utility against skilled programmers.
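As an illustration of why bytecode obfuscation offers limited protection, the standard library's dis module will happily disassemble any function object; an obfuscator can mangle names and shuffle code, but the operations the interpreter actually executes remain plainly visible:

```python
import dis
import io

# A "protected" function with an obfuscated name: the logic still leaks.
def O0O0(x):
    return x * 31 + 7

# Capture the disassembly listing as a string.
buf = io.StringIO()
dis.dis(O0O0, file=buf)
listing = buf.getvalue()
print(listing)
```

The listing shows the loads, arithmetic, and return instructions in order, which is usually enough for a skilled reader to reconstruct the algorithm.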
Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and at work or school.
The competing product is EndNote. Frankly, EndNote belongs to a previous era of research methods. If you use Windows, Word and Internet Explorer, and have a spare $100, then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.
Anyway, this article isn't a Zotero versus EndNote smackdown; there are plenty of those on the web. This article shows how to configure Zotero's full text indexing on the Raspberry Pi and other Debian machines.

Installing Zotero
There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)
Zotero keeps its database as part of your Firefox profile. Now, if you're about to embark on a multi-year research project, one day you may have trouble with Firefox, someone will suggest clearing your Firefox profile, and Firefox will once again work fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry, before carefully trying to re-sync.
So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero in Firefox, press the "Z" button, press the gear icon, and select "Preferences" from the drop-down menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.
I'd suggest using a directory named "/home/vk5tu/.zotero" or "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone client uses a directory named "/home/vk5tu/.zotero" but there are advantages to not keeping years of precious data in some hidden directory.
After making the change, quit Firefox. Now move the directory in the Firefox profile to wherever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files
Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.
Too simple. Because downloading a statically-linked binary from the internet which is then run over PDFs from a huge range of sources is not the best of ideas.
The page does have instructions for manual configuration, but it lacks a worked example. Let's do that here.

Manual configuration of PDF full text indexing utilities on Debian
Install the pdftotext and pdfinfo programs:

$ sudo apt-get install poppler-utils
Find the kernel name and machine architecture:

$ uname --kernel-name --machine
Linux armv7l
In the Zotero data directory, create a symbolic link to each of the installed programs. The printed kernel name and machine are part of each link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)
Install a small helper script to alter the pdftotext parameters:

$ cd ~/.zotero
$ wget -O redirect.sh https://raw.githubusercontent.com/zotero/zotero/4.0/resource/redirect.sh
$ chmod a+x redirect.sh
Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version
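If you set this up on several machines, the symlink and version-file steps above can be scripted. Here is a hypothetical Python sketch of my own (the function name is not part of Zotero) that mirrors those manual steps for one utility:

```python
import os
import subprocess

def install_zotero_pdf_tool(tool_path, zotero_dir, kernel=None, machine=None):
    """Symlink a poppler utility into the Zotero data directory and record
    its version, mirroring the manual steps above.

    tool_path:  full path to pdftotext or pdfinfo.
    zotero_dir: the Zotero data directory (e.g. ~/.zotero).
    """
    kernel = kernel or os.uname().sysname    # e.g. "Linux"
    machine = machine or os.uname().machine  # e.g. "armv7l"
    name = os.path.basename(tool_path)
    link = os.path.join(zotero_dir, f"{name}-{kernel}-{machine}")
    if os.path.lexists(link):
        os.remove(link)
    os.symlink(tool_path, link)
    # The version is the third field of the first line of `tool -v`,
    # which poppler prints on stderr.
    result = subprocess.run([tool_path, "-v"], capture_output=True, text=True)
    first_line = (result.stderr or result.stdout).splitlines()[0]
    version = first_line.split()[2]
    with open(link + ".version", "w") as f:
        f.write(version + "\n")
    return version
```

Call it once for pdftotext and once for pdfinfo; the shell commands above remain the canonical recipe.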
Start Firefox; Zotero's gear icon, "Preferences", "Search" should then report something like:

PDF indexing
pdftotext version 0.26.5 is installed
pdfinfo version 0.26.5 is installed
Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.
Linux Users of Victoria (LUV) Announce: LUV Main August 2015 Meeting: Open Machines Building Open Hardware / VLSCI: Supercomputing for Life Sciences
200 Victoria St., Carlton VIC 3053
Link: http://luv.asn.au/meetings/map
• Jon Oxer, Open Machines Building Open Hardware
• Chris Samuel, VLSCI: Supercomputing for Life Sciences
200 Victoria St. Carlton VIC 3053 (formerly the EPA building)
Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
August 4, 2015 - 18:30