Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Erik de Castro Lopo: Benchmarking and QuickChecking readInt.

Wed, 2014-06-11 20:16

I'm currently working on converting my http-proxy library from using the Data.Enumerator package to Data.Conduit (explanation of why in my last blog post).

During this conversion, I have been studying the sources of the Warp web server because my http-proxy was originally derived from the Enumerator version of Warp. While digging through the Warp code I found the following code (and comment) which is used to parse the number provided in the Content-Length field of an HTTP header:

-- Note: This function produces garbage on invalid input. But serving an
-- invalid content-length is a bad idea, mkay?
readInt :: S.ByteString -> Integer
readInt = S.foldl' (\x w -> x * 10 + fromIntegral w - 48) 0
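The garbage is easy to demonstrate; here is a minimal, self-contained sketch (the S8 alias and the little main are mine, not from the Warp source):

import qualified Data.ByteString as S
import qualified Data.ByteString.Char8 as S8

readInt :: S.ByteString -> Integer
readInt = S.foldl' (\x w -> x * 10 + fromIntegral w - 48) 0

-- 'a' is ASCII 97, so the fold treats it as the "digit" 97 - 48 = 49:
main :: IO ()
main = print (readInt (S8.pack "12a"))   -- prints 169 rather than failing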

The comment clearly states that this function can produce garbage, specifically if the string contains anything other than ASCII digits. The comment is also correct that an invalid Content-Length is a bad idea. However, on seeing the above code, and remembering something I had seen recently in the standard library, I naively sent the Yesod project a patch replacing the above code with a version that uses the readDec function from the Numeric module:

import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as B
import qualified Numeric as N

readInt :: ByteString -> Integer
readInt s = case N.readDec (B.unpack s) of
    []         -> 0
    (x, _):_   -> x

About 3-4 hours after I submitted the patch I got an email from Michael Snoyman saying that parsing the Content-Length field is a hot spot for the performance of Warp and that I should benchmark it against the code I'm replacing to make sure there is no unacceptable performance penalty.

That's when I decided it was time to check out Bryan O'Sullivan's Criterion benchmarking library. A quick read of the docs and a bit of messing around and I was able to prove to myself that using readDec was indeed much slower than the code I wanted to replace.

The initial disappointment of finding that a more correct implementation was significantly slower than the less correct version quickly turned to joy as I experimented with a couple of other implementations and eventually settled on this:

import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as B
import qualified Data.Char as C

readIntTC :: Integral a => ByteString -> a
readIntTC bs = fromIntegral
    $ B.foldl' (\i c -> i * 10 + C.digitToInt c) 0
    $ B.takeWhile C.isDigit bs

By using the Integral type class, this function converts the given ByteString to any integer type (ie any type belonging to the Integral type class). When used, this function will be specialized by the Haskell compiler at the call site to produce code to read string values into Ints, Int64s or anything else that is a member of the Integral type class.
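The three specialisations benchmarked below are then just the generic function pinned to concrete result types, along these lines (the exact definitions are in the Gist mentioned below):

import Data.ByteString (ByteString)
import Data.Int (Int64)

-- Type-specialised wrappers around the generic readIntTC.
readInt :: ByteString -> Int
readInt = readIntTC

readInt64 :: ByteString -> Int64
readInt64 = readIntTC

readInteger :: ByteString -> Integer
readInteger = readIntTC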

For a final sanity check I decided to use QuickCheck to make sure that the various versions of the generic function were correct for values of the type they returned. To do that I wrote a very simple QuickCheck property as follows:

prop_read_show_idempotent :: Integral a => (ByteString -> a) -> a -> Bool
prop_read_show_idempotent freader x =
    let posx = abs x
    in posx == freader (B.pack $ show posx)

This QuickCheck property takes the function under test, freader, as a parameter, and QuickCheck then supplies values of the correct type. Since the function under test is designed to read Content-Length values, which are always positive, we only test using the absolute value of the value randomly generated by QuickCheck.
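Pinning the property to each of the three specialisations (and hence to the three result types) is then straightforward; a sketch of the QuickCheck driver (the complete program is linked below):

import Test.QuickCheck (quickCheck)

runQuickCheckTests :: IO ()
runQuickCheckTests = do
    quickCheck (prop_read_show_idempotent readInt)      -- at Int
    quickCheck (prop_read_show_idempotent readInt64)    -- at Int64
    quickCheck (prop_read_show_idempotent readInteger)  -- at Integer

These three calls correspond to the three "+++ OK, passed 100 tests." lines in the output further down.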

The complete test program can be found on Github in this Gist and can be compiled and run as:

ghc -Wall -O3 --make readInt.hs -o readInt && ./readInt

When run, the output of the program looks like this:

Quickcheck tests.
+++ OK, passed 100 tests.
+++ OK, passed 100 tests.
+++ OK, passed 100 tests.
Criterion tests.
warming up
estimating clock resolution...
mean is 3.109095 us (320001 iterations)
found 27331 outliers among 319999 samples (8.5%)
  4477 (1.4%) low severe
  22854 (7.1%) high severe
estimating cost of a clock call...
mean is 719.4627 ns (22 iterations)

benchmarking readIntOrig
mean: 4.653041 us, lb 4.645949 us, ub 4.663823 us, ci 0.950
std dev: 43.94805 ns, lb 31.52653 ns, ub 73.82125 ns, ci 0.950

benchmarking readDec
mean: 13.12692 us, lb 13.10881 us, ub 13.14411 us, ci 0.950
std dev: 90.63362 ns, lb 77.52619 ns, ub 112.4304 ns, ci 0.950

benchmarking readRaw
mean: 591.8697 ns, lb 590.9466 ns, ub 594.1634 ns, ci 0.950
std dev: 6.995869 ns, lb 3.557109 ns, ub 14.54708 ns, ci 0.950

benchmarking readInt
mean: 388.3835 ns, lb 387.9500 ns, ub 388.8342 ns, ci 0.950
std dev: 2.261711 ns, lb 2.003214 ns, ub 2.585137 ns, ci 0.950

benchmarking readInt64
mean: 389.4380 ns, lb 388.9864 ns, ub 389.9312 ns, ci 0.950
std dev: 2.399116 ns, lb 2.090363 ns, ub 2.865227 ns, ci 0.950

benchmarking readInteger
mean: 389.3450 ns, lb 388.8463 ns, ub 389.8626 ns, ci 0.950
std dev: 2.599062 ns, lb 2.302428 ns, ub 2.963600 ns, ci 0.950

At the top of the output is proof that all three specializations of the generic function readIntTC satisfy the QuickCheck property. From the Criterion output it's pretty obvious that the Numeric.readDec version is about 3 times slower than the original function. More importantly, all three versions of this generic function are an order of magnitude faster than the original.

That's a win! I will be submitting my new function for inclusion in Warp.

Update : 14:13

At around the same time as I submitted my latest version of readInt, Vincent Hanquez posted a comment on the Github issue suggesting I look at the GHC MagicHash extension and pointed me to an example.

Sure enough, using the MagicHash technique resulted in something significantly faster again.

Update #2 : 2012-01-29 19:46

In version 0.3.0 and later of the bytestring-lexing package there is a function readDecimal that is even faster than the MagicHash version.

Erik de Castro Lopo: From Gedit to Geany.

Wed, 2014-06-11 20:16

After effectively giving up on Nedit, my text editor of choice for the last fifteen years, I gave Gedit a serious try.

For a full two weeks, I stuck with Gedit, including the intense 2½ day hacking session of AusHac2010. Unfortunately, switching from a very full featured editor like Nedit to Gedit was painful. There were a bunch of features that I had grown used to that were just absent or inconvenient in Gedit. The problem is that Gedit aims to be a relatively full featured programmer's editor while still being the default easy-to-use editor in GNOME. As far as I am concerned, these two aims are in conflict, making Gedit an adequate simple text editor and a poor editor for advanced coders.

After butting my head against basic usability issues with Gedit I even considered either modifying it extensively using plugins or maybe even forking it and maintaining a forked version. Yes, that would be a huge pain in the neck, but fortunately that will not now be necessary.

In response to my blog post titled "R.I.P. Nedit" fellow Haskell hacker and Debian Haskell Group member Joachim Breitner suggested I have a look at the Geany text editor and IDE.

Geany is obviously a tool aimed squarely at full time, committed programmers. It's also much more than just an editor, in that it has many features of an IDE (Integrated Development Environment). In fact, when I first fired it up it looked like this:

[Screenshot: Geany's default layout with the full set of IDE panels visible]
On seeing this I initially thought Geany was not for me. Fortunately I found that the extra IDE-like features can easily be hidden, providing me with a simple-to-use, highly configurable, advanced text editor. The features I really like are:

  • High degree of configurability, including key bindings.
  • Syntax highlighting (configurable) for a huge number of languages.
  • Custom syntax highlighting (ie user definable highlighting for languages not currently supported by Geany).
  • Regex search and search/replace.
  • Search and replace within a selected area only.
  • Highlighting of matching braces and brackets.
  • Language specific editing modes and auto indentation.
  • Go to specified line number.
  • Plugins.

There are still a few little niggles, but nothing like the pain I experienced trying to use Gedit. For instance, when run from the command line, Geany will open new files in a tab of an existing Geany instance. With multiple desktop workspaces, this is suboptimal. It would be much nicer if Geany would start a new instance if there was not already an instance running on the current workspace. After a brief inspection of the Gedit sources (Gedit has the desired feature), I came up with a fix for this issue which I will be submitting to the Geany development mailing list after a couple of days of testing.

Another minor issue (shared with Gedit) is that of fonts. Nedit uses bitmap fonts while Geany (and Gedit) use TrueType fonts. When I choose light coloured fonts on a black background I find the fonts in Geany (and Gedit) a lot fuzzier than the same size fonts in Nedit. I've tried a number of different fonts, including Inconsolata, but have currently settled on DejaVu Sans Mono, although I'm not entirely satisfied.

Currently my Geany setup (editing some Haskell code) looks like this:

[Screenshot: my current Geany setup, editing Haskell code]
Light text on a black background with highlighting using a small number of colours: red for types, green for literals, yellow for keywords and so on.

Geany is a great text editor. For any committed coders currently using either Nedit or Gedit and not entirely happy, I strongly recommend that you give Geany a try.

Erik de Castro Lopo: LLVM Backend for DDC : Very Nearly Done.

Wed, 2014-06-11 20:16

The LLVM backend for DDC that I've been working on sporadically since June is basically done. When compiling via the LLVM backend, all but three of the 100+ tests in the DDC test suite pass. The tests that pass when going via the C backend but fail via the LLVM backend are of two kinds:

  1. Use DDC's foreign import construct to name a C macro to perform a type cast, where the macro is defined in one of the C header files.
  2. Use static inline functions in the C backend to do peek and poke operations on arrays of unboxed values.

In both of these cases, DDC is using features of the C language to make code generation easier. Obviously, the LLVM backend needs to do something else to get the same effect.

Fixing the type casting problem should be relatively simple. Ben is currently working on making type casts a primitive of the Core language so that both the C and LLVM backends can easily generate code for them.

The array peek and poke problem is a little more complex. I suspect that it too will require the addition of new Core language primitive operations. This is a much more complex problem than the type casting one and I've only just begun thinking about it.

Now that the backend is nearly done, it's not unreasonable to look at its performance. The following table shows the compile and run times of a couple of tests in the DDC test suite compiling via the C and the LLVM backends.



Test name                    C Build Time   LLVM Build Time   C Run Time   LLVM Run Time
93-Graphics/Circle           3.124s         3.260s            1.701s       1.536s
93-Graphics/N-Body/Boxed     6.126s         6.526s            7.649s       4.899s
93-Graphics/N-Body/UnBoxed   3.559s         4.017s            9.843s       6.162s
93-Graphics/RayTracer        12.890s        13.102s           13.465s      8.973s
93-Graphics/SquareSpin       2.699s         2.889s            1.609s       1.604s
93-Graphics/Styrene          13.685s        14.349s           11.312s      8.527s

Although there is a small increase in compile times when compiling via LLVM, the LLVM run times are significantly reduced. The conceptual complexity of the LLVM backend is also low (the line count is about 4500 lines, which will probably fall with re-factoring) and, thanks to LLVM's type checking being significantly better than C's, I think it's reasonable to be more confident in the quality of the LLVM backend than the existing C backend. Finally, implementing things like proper tail call optimisation will be far easier in the LLVM backend than in C.

All in all, I think doing this LLVM backend has been an interesting challenge and will definitely pay off in the long run.

Erik de Castro Lopo: LLVM Backend for DDC : Milestone #3.

Wed, 2014-06-11 20:16

After my last post on this topic, I ran into some problems with the AST (abstract syntax tree) that was being passed to my code for LLVM code generation. After discussing the problem with Ben, he spent some time cleaning up the AST definition, the result of which was that nearly all the stuff I already had stopped working. This was a little disheartening. That, and the fact that I was really busy, meant that I didn't touch the LLVM backend for a number of weeks.

When I finally did get back to it, I found that it wasn't as broken as I had initially thought. Although the immediate interface between Ben's code and mine had changed significantly, all the helper functions I had written were still usable. Over a week and a bit, I managed to patch everything up again and get back to where I was. I also did a lot of cleaning up and came up with a neat solution to a problem which was bugging me during my previous efforts.

The problem was that structs defined via the LLVM backend needed to have exactly the same memory layout as the structs defined via the C backend. This is a strict requirement for proper interaction between code generated via C and LLVM. This was made a little difficult by David Terei's Haskell LLVM wrapper code (see previous post), which makes all structs packed by default, while structs on the C side were not packed. Another dimension of this problem was finding an easy way to generate LLVM code to access structs in a way that was easy to read and debug in the code generator and also did not require different code paths for generating 32 and 64 bit code.

Struct layout is tricky. Consider a really simple struct like this:

struct whatever
{
    int32_t tag ;
    char * pointer ;
} ;

On a 32 bit system, that struct will take up 8 bytes; 4 bytes for the int32_t and 4 for the pointer. However, on a 64 bit system, where pointers are 8 bytes in size, the struct will take up 16 bytes. Why not 12 bytes? Well, some 64 bit CPUs (Alpha and Sparc64 are two I can think of) are not capable of unaligned memory accesses; a read from memory into a CPU register where the memory address (in bytes) is not an integer multiple of the size of the register. Other CPUs like x86_64 can read unaligned data, but reading unaligned data is usually slower than reading correctly aligned data.

In order to avoid unaligned accesses, the compiler assumes that the start address of the struct will be aligned to the correct alignment for the biggest CPU register element in the struct, in this case the pointer. It then adds 4 bytes of padding between the int32_t and the pointer to ensure that if the struct is correctly aligned then the pointer will also be correctly aligned.
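The padding rule itself is simple arithmetic; the following throwaway Haskell sketch (my own illustration, not DDC code) shows how an offset is rounded up to a field's alignment:

-- Round an offset up to the next multiple of a field's alignment.
alignTo :: Int -> Int -> Int
alignTo align offset = ((offset + align - 1) `div` align) * align

-- For the struct above on a 64 bit system: the int32_t occupies offsets
-- 0..3 and the pointer needs 8 byte alignment, so it starts at
-- alignTo 8 4 == 8, leaving 4 bytes of padding and a 16 byte struct.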

Because structs are packed in David Terei's code, the above struct would require a different definition on 32 and 64 bit systems, ie

; 32 bit version of the struct
%struct.whatever.32 = type <{ i32, i8 * }>

; 64 bit version of the struct
%struct.whatever.64 = type <{ i32, [4 * i8], i8 * }>

where the 64 bit version contains 4 padding bytes. However, the difference between these two definitions causes another problem. To access fields within a struct, LLVM code uses the getelementptr operator which addresses fields by index. Unfortunately, the index (zero based) of the pointer is 1 for the 32 bit version and 2 for the 64 bit version. That would make code generation a bit of a pain in the neck.

The solution is to allow the specification of LLVM structs in Haskell as a list of LlvmStructField elements, using

data LlvmStructField
        = AField String LlvmType    -- Field name and type.
        | APadTo2                   -- Pad next field to a 2 byte offset.
        | APadTo4                   -- Pad next field to a 4 byte offset.
        | APadTo8                   -- Pad next field to a 8 byte offset.
        | APadTo8If64               -- Pad next field to a 8 byte offset only
                                    -- for 64 bit.

Note that the AField constructor requires both a name and the LlvmType. I then provide functions to convert the LlvmStructField list into an opaque LlvmStructDesc type, along with the following functions:

-- | Turn a struct specified as an LlvmStructField list into an
--   LlvmStructDesc and give it a name. The LlvmStructDesc may
--   contain padding to make it conform to the definition.
mkLlvmStructDesc :: String -> [LlvmStructField] -> LlvmStructDesc

-- | Retrieve the struct's LlvmType from the LlvmStructDesc.
llvmTypeOfStruct :: LlvmStructDesc -> LlvmType

-- | Given an LlvmStructDesc and the name of a field within the
--   LlvmStructDesc, retrieve the field's index within the struct and its
--   LlvmType.
structFieldLookup :: LlvmStructDesc -> String -> (Int, LlvmType)

Once the LlvmStructDesc is built for a given struct, fields within the struct can be addressed in the LLVM code generator by name, making the Haskell code generator far easier to read.
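As an example of how this is used, the whatever struct from earlier could be described once, with the 64 bit padding decision left to the helper; a hypothetical sketch (the field names and the i8Ptr type here are my own, not taken from the DDC source):

-- Describe "struct whatever" once; APadTo8If64 only inserts the four
-- padding bytes when generating 64 bit code.
whateverDesc :: LlvmStructDesc
whateverDesc
 = mkLlvmStructDesc "whatever"
        [ AField "tag" i32
        , APadTo8If64
        , AField "pointer" i8Ptr ]

-- Fields are then looked up by name, independent of word size:
--   structFieldLookup whateverDesc "pointer" :: (Int, LlvmType)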

Pretty soon after I got the above working I also managed to get enough LLVM code generation working to compile a complete small program which then runs correctly. I consider that to be milestone 3.

Erik de Castro Lopo: LLVM Backend for DDC : Milestone #2.

Wed, 2014-06-11 20:16

For a couple of weeks after AusHac 2010 I didn't manage to find any time to work on DDC at all, but I'm now back on it, and late last week I reached the second milestone on the LLVM backend for DDC. The backend now has the ability to box and unbox 32 bit integers and perform simple arithmetic operations on valid combinations of them.

Disciple code that can currently be compiled correctly via LLVM includes basic stuff like:

identInt :: Int -> Int
identInt a = a

plusOneInt :: Int -> Int
plusOneInt x = x + 1

addInt :: Int -> Int -> Int
addInt a b = a + b

addInt32U :: Int32# -> Int32# -> Int32#
addInt32U a b = a + b

addMixedInt :: Int32# -> Int -> Int
addMixedInt a b = boxInt32 (a + unboxInt32 b)

cafOneInt :: Int
cafOneInt = 1

plusOne :: Int -> Int
plusOne x = x + cafOneInt

where Int32# specifies an unboxed 32 bit integer and Int32 specifies the boxed version.

While writing the Haskell code for DDC, I'm finding that it's easiest to generate LLVM code for a specific narrow case first and then generalize it as more cases come to light. I also found that the way I had been doing the LLVM code generation was tedious and ugly, involving lots of concatenation of small lists. To fix this I built myself an LlvmM monad on top of the StateT monad:

type LlvmM = StateT [[LlvmStatement]] IO
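Running an action in this monad is then just a matter of supplying an empty stack of blocks; something along these lines (my own sketch, not necessarily how DDC drives it):

import Control.Monad.State (evalStateT)

-- Run an LlvmM action starting with no accumulated blocks.
runLlvmM :: LlvmM a -> IO a
runLlvmM action = evalStateT action []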

Using this I can then generate a block of LLVM code as a list of LlvmStatements and add it to the monad using an addBlock function which basically pushes the blocks of code down onto a stack:

addBlock :: [LlvmStatement] -> LlvmM ()
addBlock code
 = do   state <- get
        put (code : state)

The addBlock function is then used as the base building block for a bunch of more specific functions like these:

unboxInt32 :: LlvmVar -> LlvmM LlvmVar
unboxInt32 objptr
 | getVarType objptr == pObj
 = do   int32 <- lift $ newUniqueReg i32
        iptr0 <- lift $ newUniqueNamedReg "iptr0" (pLift i32)
        iptr1 <- lift $ newUniqueNamedReg "iptr1" (pLift i32)
        addBlock
                [ Comment [ show int32 ++ " = unboxInt32 (" ++ show objptr ++ ")" ]
                , Assignment iptr0 (GetElemPtr True objptr [llvmWordLitVar 0, i32LitVar 0])
                , Assignment iptr1 (GetElemPtr True iptr0 [llvmWordLitVar 1])
                , Assignment int32 (Load iptr1) ]
        return int32


readSlot :: Int -> LlvmM LlvmVar
readSlot 0
 = do   dstreg <- lift $ newUniqueNamedReg "slot.0" pObj
        addBlock
                [ Comment [ show dstreg ++ " = readSlot 0" ]
                , Assignment dstreg (Load localSlotBase) ]
        return dstreg

readSlot n
 | n > 0
 = do   dstreg <- lift $ newUniqueNamedReg ("slot." ++ show n) pObj
        r0 <- lift $ newUniqueReg pObj
        addBlock
                [ Comment [ show dstreg ++ " = readSlot " ++ show n ]
                , Assignment r0 (GetElemPtr True localSlotBase [llvmWordLitVar n])
                , Assignment dstreg (Load (pVarLift r0)) ]
        return dstreg

readSlot n = panic stage $ "readSlot with slot == " ++ show n

which are finally hooked up to do things like:

llvmVarOfExp (XUnbox ty@TCon{} (XSlot v _ i))
 = do   objptr <- readSlot i
        unboxAny (toLlvmType ty) objptr

llvmVarOfExp (XUnbox ty@TCon{} (XForce (XSlot _ _ i)))
 = do   orig <- readSlot i
        forced <- forceObj orig
        unboxAny (toLlvmType ty) forced

When the code generation of a single function is complete, the list of LlvmStatement blocks is retrieved, reversed and concatenated to produce the list of LlvmStatements for the function.
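A sketch of what that last step might look like on top of the LlvmM monad (again my own illustration rather than DDC's actual code):

-- Pop every accumulated block off the stack and flatten them into the
-- statement list for the function, oldest block first.
takeFunctionCode :: LlvmM [LlvmStatement]
takeFunctionCode
 = do   blocks <- get
        put []
        return $ concat (reverse blocks)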

With the LlvmM monad in place, converting DDC's Sea AST into LLVM code is now pretty straightforward. It's just a matter of finding and implementing all the missing pieces.

Erik de Castro Lopo: LLVM Backend : Milestone #1.

Wed, 2014-06-11 20:16

About 3 weeks ago I started work on the LLVM backend for DDC and I have now reached the first milestone.

Over the weekend I attended AusHac2010 and during Friday and Saturday I managed to get DDC modified so I could compile a Main module via the existing C backend and another module via the LLVM backend to produce an executable that ran, but gave an incorrect answer.

Today, I managed to get a very simple function actually working correctly. The function is trivial:

identInt :: Int -> Int identInt a = a

and the generated LLVM code looks like this:

define external ccc %struct.Obj* @Test_identInt(%struct.Obj* %_va)
{
entry:
    ; _ENTER (1)
    %local.slotPtr = load %struct.Obj*** @_ddcSlotPtr
    %enter.1 = getelementptr inbounds %struct.Obj** %local.slotPtr, i64 1
    store %struct.Obj** %enter.1, %struct.Obj*** @_ddcSlotPtr
    %enter.2 = load %struct.Obj*** @_ddcSlotMax
    %enter.3 = icmp ult %struct.Obj** %enter.1, %enter.2
    br i1 %enter.3, label %enter.good, label %enter.panic

enter.panic:
    call ccc void ()* @_panicOutOfSlots( ) noreturn
    br label %enter.good

enter.good:
    ; ----- Slot initialization -----
    %init.target.0 = getelementptr %struct.Obj** %local.slotPtr, i64 0
    store %struct.Obj* null, %struct.Obj** %init.target.0
    ; ---------------------------------------------------------------
    %u.2 = getelementptr inbounds %struct.Obj** %local.slotPtr, i64 0
    store %struct.Obj* %_va, %struct.Obj** %u.2
    ;
    br label %_Test_identInt_start

_Test_identInt_start:
    ; alt default
    br label %_dEF1_a0

_dEF1_a0:
    ;
    br label %_dEF0_match_end

_dEF0_match_end:
    %u.3 = getelementptr inbounds %struct.Obj** %local.slotPtr, i64 0
    %_vxSS0 = load %struct.Obj** %u.3
    ; ---------------------------------------------------------------
    ; _LEAVE
    store %struct.Obj** %local.slotPtr, %struct.Obj*** @_ddcSlotPtr
    ; ---------------------------------------------------------------
    ret %struct.Obj* %_vxSS0
}

That looks like a lot of code but there are a couple of points to remember:

  • This includes code for DDC's garbage collector.
  • DDC itself is still missing a huge number of optimisations that can be added after the compiler actually works.

I have found David Terei's LLVM AST code that I pulled from the GHC sources very easy to use. Choosing this code was definitely not a mistake and I have been corresponding with David, which has resulted in a few updates to this code, including a commit with my name on it.

LLVM is also conceptually very, very sound and easy to work with. For instance, variables in LLVM code are allowed to contain the dot character, so it's easy to avoid name clashes between C function/variable names and names generated during the generation of LLVM code, by making generated names contain a dot.

Finally, I love the fact that LLVM is a typed assembly language. There would have been dozens of times over the weekend that I generated LLVM code that the LLVM compiler rejected because it wouldn't type check. Just like when programming with Haskell, once the code type checked, it actually worked correctly.

Anyway, this is a good first step. Lots more work to be done.

Erik de Castro Lopo: LLVM Backend for DDC.

Wed, 2014-06-11 20:16

With the blessing of Ben Lippmeier I have started work on a new backend for his DDC compiler. Currently, DDC has a backend that generates C code which then gets run through GNU GCC to generate executables. Once it is working, the new backend will eventually replace the C one.

The new DDC backend will target the very excellent LLVM, the Low Level Virtual Machine. Unlike C, LLVM is specifically designed as a general retargetable compiler backend. It became the obvious choice for DDC when the GHC Haskell compiler added an LLVM backend which almost immediately showed great promise. Its implementation was of relatively low complexity in comparison to the existing backends and it also provided pretty impressive performance. This GHC backend was implemented by David Terei as part of an undergraduate thesis in the Programming Languages and Systems group at UNSW.

Since DDC is written in Haskell, there are two obvious ways to implement an LLVM backend:

  1. Using the haskell LLVM bindings available on hackage.
  2. Using David Terei's code that is part of the GHC compiler.

At first glance, the former might well be the more obvious choice, but the LLVM bindings have a couple of drawbacks from the point of view of using them in DDC. In the end, the main factor in choosing which to use was Ben's interest in bootstrapping the compiler (compiling the compiler with itself) as soon as possible.

The existing LLVM bindings use a number of advanced Haskell features, that is, features beyond those of the Haskell 98 standard. If we used the LLVM bindings in DDC, that would mean DDC would have to support all the features needed by the bindings before DDC could be bootstrapped. Similarly, the LLVM bindings use GHC's Foreign Function Interface (FFI) to call out to the LLVM library. DDC currently does have some FFI support, but this was another mark against the bindings.

By way of contrast, David Terei's LLVM backend for GHC is pretty much standard Haskell code and since it generates text files containing LLVM's Intermediate Representation (IR), a high-level, typed assembly language, there is no FFI problem. The only downside of David's code is that the current version in the GHC Darcs tree uses a couple of modules that are private to GHC itself. Fortunately, it looks like these problems can be worked around with relatively little effort.

Having decided to use David's code, I started hacking on a little test project. The aim of the test project was to set up an LLVM Abstract Syntax Tree (AST) in Haskell for a simple module. The AST is then pretty printed as a textual LLVM IR file and assembled using LLVM's llc compiler to generate native assembler. Finally, the assembler code is compiled with a C module containing a main function which calls into the LLVM generated code.

After managing to get a basic handle on LLVM's IR code, the test project worked; calling from C into LLVM generated code and getting the expected result. The next step is to prepare David's code for use in DDC while making it easy to track David's upstream changes.

Binh Nguyen: Cloud and Internet Security

Wed, 2014-06-11 19:55
If you've been watching this blog you may have noticed that there hasn't been a lot of activity lately. Part of this has to do with me working on other projects. One of these includes a report that I call "Cloud and Internet Security" which is basically a follow up of "Building a Cloud Computing Service" and the "Convergence Effect". If you're curious, both documents were/have been submitted to various organisations where more good can be done with them. Moreover, I consider both works to be "WORKS IN PROGRESS" and I may make extensive alterations without reader notice. The latest versions are likely to be available here:

https://sites.google.com/site/dtbnguyen/

Cloud and Internet Security

ABSTRACT

A while back I wrote two documents called 'Building a Cloud Service' and the 'Convergence Report'. They basically documented my past experiences and detailed some of the issues that a cloud company may face as it is being built and run. Based on what has transpired since, a lot of the concepts mentioned in that particular document are becoming widely adopted and/or are trending towards being so. This is a continuation of that particular document and will attempt to analyse the issues that are faced as we move towards the cloud, especially with regards to security. Once again, we will use past experience, research, as well as current events and trends in order to write this particular report.

Personal experience indicates that keeping track of everything and updating large scale documents is difficult and, depending on the system you use, extremely cumbersome. The other thing readers have to realise is that a lot of the time, even if the writer wants to write the most detailed book ever written, it’s quite simply not possible. Several of my past works (something such as this particular document takes a few weeks to a few months to write depending on how much spare time I have) were written in my spare time and between work and getting an education. If I had done a more complete job they would have taken years to write, and by the time I had completed the work, updates in the outer world would have meant that at least some of the content would have been out of date. Dare I say it, by the time that I have completed this report itself some of the content may have come to fruition, as was the case with many of the technologies in the other documents. I very much see this document as a starting point rather than a complete reference for those who are interested in technology security.

Note that the information contained in this document is not considered to be correct nor the only way in which to do things. It’s a mere guide to how the way things are and how we can improve on them. Like my previous work, it should be considered a work in progress. Also, note that this document has gone through many revisions and drafts may have gone out over time. As such, there will be concepts that may have been picked up and adopted by some organisations while others may have simply broken cover while this document was being drafted and sent out for comment. It also has a more strategic/business slant when compared to the original document which was more technically orientated. 

No illicit activity (as far as I know and have researched) was conducted during the formulation of this particular document. All information was obtained only from publicly available resources and any information or concepts that are likely to be troubling have been redacted. Any relevant vulnerabilities or flaws that were found were reported to the relevant entities in question (months have passed).

Rusty Russell: Donation to Jupiter Broadcasting

Wed, 2014-06-11 17:28

Chris Fisher’s Jupiter Broadcasting pod/vodcasting started 8 years ago with the Linux Action Show: still their flagship show, and how I discovered them 3 years ago.  Shows like this give access to FOSS to those outside the LWN-reading crowd; community building can be a thankless task, and as a small shop Chris has had ups and downs along the way.  After listening to them for a few years, I feel a weird bond with this bunch of people I’ve never met.

I regularly listen to Techsnap for security news, Scibyte for science with my daughter, and Unfilter to get an insight into the NSA and what the US looks like from the inside.  I bugged Chris a while back to accept bitcoin donations, and when they did I subscribed to Unfilter for a year at 2 BTC.  To congratulate them on reaching the 100th Unfilter episode, I repeated that donation.

They’ve started doing new and ambitious things, like Linux HOWTO, so I know they’ll put the funds to good use!

Colin Charles: On-disk/block-level encryption for MariaDB

Wed, 2014-06-11 17:26

I don’t normally quote The Register, but I was clearing tabs and found this article: 350 DBAs stare blankly when reminded super-users can pinch data. It is an interesting read, telling you that there are many Snowdens in waiting, possibly even in your organisation.

From a MariaDB standpoint, you probably already read that column level encryption as well as block level encryption for some storage engines are likely to come to MariaDB 10.1 via a solution by Eperi. However, with some recent breaking news, Google is also likely to do this – see this thread about MariaDB encryption on maria-discuss.

Google has already developed on-disk/block-level encryption for InnoDB, Aria (for temporary tables), binary logs and temporary files. The code isn’t published yet, but that will likely happen soon – a clear benefit of open source development principles.

Elsewhere, if you’re trying to ensure good policies for users, don’t forget to start with the audit plugin and roles.

Related posts:

  1. MariaDB 5.1.44 released
  2. Tab sweep: Google and MariaDB
  3. my disk died, and i’m in intech

Colin Charles: RHEL7 now with MariaDB

Wed, 2014-06-11 16:26

Congratulations to the entire team at Red Hat, for the release of Red Hat Enterprise Linux 7 (RHEL7). The release notes have something important, under Web Servers & Services:

MariaDB 5.5

MariaDB is the default implementation of MySQL in Red Hat Enterprise Linux 7. MariaDB is a community-developed fork of the MySQL database project, and provides a replacement for MySQL. MariaDB preserves API and ABI compatibility with MySQL and adds several new features; for example, a non-blocking client API library, the Aria and XtraDB storage engines with enhanced performance, better server status variables, and enhanced replication.

Detailed information about MariaDB can be found at https://mariadb.com/kb/en/what-is-mariadb-55/.

This is a huge improvement over MySQL 5.1.73 currently shipping in RHEL6. I’m really looking forward to welcome more MariaDB users. Remember if you are looking for information, find it at the Knowledge Base. If you’ve found a bug, report it at Jira (upstream) or Bugzilla (Red Hat). If you want to chat with friendly developers and users, hop on over to #maria on irc.freenode.net. And don’t forget we have some populated mailing lists: maria-discuss and maria-developers.

Related posts:

  1. MariaDB replaces MySQL in RHEL7
  2. MariaDB in Red Hat Software Collections
  3. Some MariaDB related news from the Red Hat front

TasLUG: Hobart meeting - June 19th - (The aptosid fullstory)

Wed, 2014-06-11 16:25
Welcome to June. Yep, short days... stout beers. And source. LOTS OF SOURCE! I'm in the middle of my exam session at uni so won't have time to prepare the usual slides and news this month.



When: Thursday, June 19th, 18:00 for an 18:30 start

Where: Upstairs, Hotel Soho, 124 Davey St, Hobart.



Agenda:



18:00 - early mingle, chin wagging, discussion and install issues etc



19:00 - Trevor Walkley - aptosid fullstory




    This month's talk will be given by Trevor Walkley, an aptosid dev (bluewater on IRC), on building an ISO using the aptosid fullstory scripts, which are currently held on GitHub (and the 'how to do it' is not well known).



    A live build will take place (hopefully Debian sid will cooperate on the night) followed by a live installation of the build to the famous milk crate computer belonging to Scott (faulteh on IRC).



20:00 - Meeting end. Dinner and drinks are available at the venue during the meeting.



We will probably also get to a discussion on the Hobart LCA 2017 bid, ideas for the upcoming Software Freedom Day in September, and committee nominations and voting, so our pre-talk discussion should be packed full of jam.



Also in June:

28th - Launceston meeting

July:

11-13th - Gov Hack 2014 - There's at least a Hobart venue for this event.

17th - OpenStack 4th Birthday - RSVP here: http://taslug-openstack.eventbrite.com.au/

September:

20th - Software Freedom Day - events in Hobart and Launceston

Michael Still: LCA2015 opens its Call for Proposals

Tue, 2014-06-10 18:28
LCA2015 will be in Auckland, New Zealand next year, and the Call for Proposals has just opened! The conference is one of the best venues in Australia and New Zealand to get word out about your Open Source project, as well as to learn about the cool things that other people are doing. This is the third time the conference has been in New Zealand, and it's looking to be an excellent event.



This one call for proposals covers papers, tutorials, and mini conferences.



For more information about the CFP, check out http://lca2015.linux.org.au/cfp. Mini conference proposals should go to http://lca2015.linux.org.au/miniconf-cfp.



Tags for this post: conference lca2015 cfp

Related posts: LCA 2006: CFP closes today; Got Something to Say? The LCA 2013 CFP Opens Soon!; We all know that the LCA2014 CFP is open, right?; Call for papers opens soon




linux.conf.au News: The call for proposals for linux.conf.au 2015 is now open!

Tue, 2014-06-10 18:28

To submit your proposal, create an account, and select Submit a proposal or Submit a miniconf from your profile menu.

The conference is a meeting place for the free and open source software communities. It will be held in Auckland at the University of Auckland Business School from Monday 12 to Friday 16 January, 2015, and provides a unique opportunity for open source developers, students, users and hackers to come together, share new ideas and collaborate.

Important Dates
  • Call For Proposals
    • Call for proposals opens: 9 June 2014
    • Call for proposals closes: 13 July 2014
    • Email notifications from papers committee: September 2014
  • Call For Miniconfs
    • Miniconf CFP opens 9 June 2014 (TBC)
    • Miniconf CFP closes 13 July 2014
    • Email acceptances start Sept 2014
  • Conference dates:
    • Early bird registrations open 23 September 2014 (TBC)
    • Conference: Monday 12 January to Friday 16 January, 2015

Andrew Pollock: [life] Day 132: Kindergarten, Court and not much else

Tue, 2014-06-10 18:26

Today was the divorce hearing. I didn't need to be present, but I wanted to anyway, so I took a taxi into the city and a ferry back home afterwards.

I didn't feel like doing much after that, so I walked Zoe's enrollment paperwork into Morningside State School and had some lunch at the Hawthorne Garage and read the paper for a change.

Sarah had advised me that Zoe had woken up early and that I should probably pick her up from Kindergarten in the car, so I drove over to pick her up. She hadn't napped though.

I had some paperwork to drop off to my financial adviser's office in West End, so we drove over there and dropped it off, and then back to Megan's house for a play date, after her tennis class.

After that we went home briefly until Sarah picked up Zoe.

Peter Miller: The Not-so-gentle Answer: 12. Refractory

Tue, 2014-06-10 18:25
12. Refractory means “no idea”

Lately, when people ask me “how are you?” I have to choose between being polite and being accurate. Most people get the polite answer “I’m still standing”. It turns out I may have been too gentle. I have managed to out-live all of the predictions made by my specialists. In [...]

Michael Still: More wood turning

Tue, 2014-06-10 15:28
Just another batch of things I've worked on recently.






Tags for this post: wood turning 20140610-woodturning photo




Francois Marier: CrashPlan and non-executable /tmp directories

Tue, 2014-06-10 15:22

If your computer's /tmp is non-executable, you will run into problems with CrashPlan.

For example, the temp directory on my laptop is mounted using this line in /etc/fstab:

tmpfs /tmp tmpfs size=1024M,noexec,nosuid,nodev 0 0

This configuration leads to two serious problems with CrashPlan.

CrashPlan client not starting up

The first one is that while the daemon is running, the client doesn't start up and doesn't print anything out to the console.

You have to look in /usr/local/crashplan/log/ui_error.log to find the following error message:

Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
  Can't load library: /tmp/.cpswt/libswt-gtk-4234.so
  Can't load library: /tmp/.cpswt/libswt-gtk.so
  no swt-gtk-4234 in java.library.path
  no swt-gtk in java.library.path
  /tmp/.cpswt/libswt-gtk-4234.so: /tmp/.cpswt/libswt-gtk-4234.so: failed to map segment from shared object: Operation not permitted

    at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
    at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
    at org.eclipse.swt.internal.C.<clinit>(Unknown Source)
    at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
    at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
    at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source)
    at com.backup42.desktop.CPDesktop.<init>(CPDesktop.java:266)
    at com.backup42.desktop.CPDesktop.main(CPDesktop.java:200)

To fix this, you must tell the client to use a different directory, one that is executable and writable by users who need to use the GUI, by adding something like this to the GUI_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:

-Djava.io.tmpdir=/home/username/.crashplan-tmp

Backup waiting forever

The second problem is that once you're able to start the client, backups are stuck at "waiting for backup" and you can see the following in /usr/local/crashplan/log/engine_error.log:

Exception in thread "W87903837_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
    at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(JNAInotifyFileWatcherDriver.java:21)
    at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:393)
    at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(BackupSetsManager.java:331)
    at com.code42.backup.path.BackupSetsManager.access$1600(BackupSetsManager.java:66)
    at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(BackupSetsManager.java:1073)
    at com.code42.utils.AWorker.run(AWorker.java:158)
    at java.lang.Thread.run(Thread.java:744)

This time, you must tell the server to use a different directory, one that is executable and writable by the CrashPlan engine user (root on my machine), by adding something like this to the SRV_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:

-Djava.io.tmpdir=/var/crashplan
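Putting the two fixes together, the relevant lines of /usr/local/crashplan/bin/run.conf end up looking roughly like this (a sketch only; keep whatever other options those variables already contain, and adjust the paths to your system):

GUI_JAVA_OPTS="<existing options> -Djava.io.tmpdir=/home/username/.crashplan-tmp"
SRV_JAVA_OPTS="<existing options> -Djava.io.tmpdir=/var/crashplan"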

Lev Lafayette: Critical Issues in the Teaching of High Performance Computing to Postgraduate Scientists

Tue, 2014-06-10 13:29

Presentation to ICCS 2014 International Conference on Computational Science, Cairns, June 10, 2014

Abstract

High performance computing is in increasing demand, especially with the need to conduct parallel processing on very large datasets, whether evaluated by volume, velocity or variety. Unfortunately the necessary skills - from familiarity with the command line interface, job submission and scripting, through to parallel programming - are not commonly taught at the level required for most researchers. As a result the uptake of HPC usage remains disproportionately low, with emphasis on system metrics taking priority, leading to a situation described as 'high performance computing considered harmful'. Changing this is not a problem of computational science but rather a problem for computational science which can only be resolved through a multi-disciplinary approach. The following example addresses the main issues in such teaching and thus makes an appeal to some universality in application which may be useful for other institutions.

For the past several years the Victorian Partnership for Advanced Computing (VPAC) has conducted a range of training courses designed to bring the capabilities of postgraduate researchers to a level of competence useful for their research. These courses have developed in this time, in part through providing a significantly wider range of content for varying skillsets, but more importantly by introducing some of the key insights from the discipline of adult and tertiary education in the context of the increasing trend towards lifelong learning. This includes an andragogical orientation, providing integrated structural knowledge, encouraging learner autonomy, self-efficacy, and self-determination, utilising appropriate learning styles for the discipline, utilising modelling and scaffolding for example problems (as a contemporary version of proximal learning), and following up with a connectivist mentoring and outreach program in the context of a culturally diverse audience.

Keywords: adult and tertiary education, high performance and scientific computing


Russell Coker: Google Hardware Support

Tue, 2014-06-10 12:27

Ironically just 5 days after writing about how I choose Android devices for a long service life [1] my wife’s Nexus 5 (with 32G of RAM (sorry Flash storage) to give it a long useful life) totally died. It reported itself as being fully charged and then 15 minutes later it was off and could not be revived. No combination of pushing the power button and connecting the power cable caused the screen to light up or any sound to be emitted.

Google has a nice interactive support site for nexus devices that describes more ways of turning a phone on than any reasonable person could imagine. After trying to turn the phone on in various ways (plugged and unplugged etc) it gave me a link to get phone support. Clicking on that put me in the queue to RECEIVE a phone call and a minute later a lady who spoke English really well (which is unusual for telephone support) called me to talk me through the various options.

Receiving a phone call is a much better experience than making a call. It meant that if the queue for phone support was long then I could do other things until the phone rang. It’s impossible to be productive at other tasks while listening for hold music to stop and a person to start talking. The cost of doing this would be very tiny: while there would be some cost in hardware and software to have a web site that tells me how long I can expect to wait for a call, a more basic implementation where I just submit my number and wait for a call would be very cheap to implement. The costs of calls from the US to Australia (and most places where people can afford a high end Android phone) are quite cheap for home users and are probably cheaper if you run a call center. If the average support call cost Google $1 and 3% of phones have support calls then that would be an extra cost of $0.03 per phone. I expect that almost everyone who buys a $450 phone would be happy to pay a lot more than $0.03 to avoid the possibility of listening to hold music!

I received the phone call about a minute after requesting it, this was nice but I wonder how long I would have waited if I hadn’t requested a call at 1AM Australian time (presumably during the day in a US call center). In any case getting a 1 minute response is great for any time of the day or night, lots of call centers can’t do that.

While the phone support is much better than most phone support, it would be nice if they added some extra options. I think it would be good to have webchat and SMS as options for support for the benefit of people who don’t want to speak to strangers. This would be useful to a lot of people on the Autism Spectrum and probably others too.

The phone call wasn’t particularly productive, it merely confirmed that I had followed all the steps on the support website. Then I received an email telling me about the web site which was a waste of time as I’d covered that in the phone call.

I have just replied to their second email which asked for the IMEI of the phone to start the warranty return process. We could have saved more than 24 hours delay if this had been requested in the first email or the phone call. Google could have even requested the IMEI through the web site before starting the phone call. It would have been even easier if Google had included the device IMEI in the email they sent me to confirm the purchase as searching for old email is a lot easier than searching through my house for an old box. Another option for Google would be to just ask me for the Gmail account used for the purchase, as I only bought one Nexus 5 on that account they then have all the purchase details needed for a warranty claim.

While the first call was a great experience the email support following that has been a waste of time. I’m now wondering if they aim to delay the warranty process for a few days in the hope that the phone will just start working again.

Related posts:

  1. Support Gay Marriage in case You Become Gay A common idea among the less educated people who call...
  2. LUV Hardware Library What is it? Last month I started what I am...
  3. Wyndham Resorts is a Persistent Spammer Over the last week I have received five phone calls...