Dec 8, 2009

pefs and l2filter moved to github

I've just moved pefs and l2filter development to github. I hope it makes it easier for people to follow development.

The pefs repository (github.com/glk/pefs) can be used to compile and run pefs without applying any patches.

pefs changelog:
  • support running on msdosfs
  • enable dircache only on file systems that are known to support it
  • add man page
  • add pefs getkey command
  • initial implementation of a pefs PAM module

The l2filter repository (github.com/glk/l2filter) contains only patches. There is a fresh patch against 8-STABLE with some minor improvements compared to the 7-STABLE version. The 9-CURRENT patch is a bit outdated at the moment, as I'm waiting for Luigi Rizzo to finish the ipfw refactoring work first.

Oct 16, 2009

pefs dircache benchmark

I've recently added directory caching into pefs.

Besides being a directory listing cache (like dirhash for ufs), it also acts as an encrypted file name cache, so there is no need to decrypt the names of the same entries over and over. That was a really big issue because the directory listing has to be reread on almost every vnode lookup operation, which made operations on directories with 1000 or more files too time consuming.

The cache is updated at two points: during the vnode lookup operation and during the readdir call. The vnode generation attribute is used to monitor directory changes (the same way NFS works) and to expire the cache if it changes. There is no per-operation monitoring because that would violate the stacked filesystem nature (and also complicate the code). There are some issues regarding the handling of large directories within dircache. First of all, the results of consecutive readdir calls are considered inconsistent, i.e. the cache expires if the user-provided buffer is too small to fit the entire directory listing. And during a vnode lookup the search doesn't terminate when a matching directory entry is found; it traverses the rest of the directory to update the cache.
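
For illustration, here is a minimal sketch of the generation-based validity check described above; the pefs_dircache structure and the function below are hypothetical, not the actual pefs code:

#include <sys/param.h>
#include <sys/vnode.h>

/* Hypothetical cache descriptor; the real pefs structures differ. */
struct pefs_dircache {
	u_long	pd_gen;	/* lower directory's va_gen when the cache was filled */
	/* ... cached encrypted/decrypted name entries ... */
};

/*
 * Compare the stored generation number with the current one of the lower
 * (underlying) directory vnode.  A mismatch means the directory was
 * modified behind our back and the cache has to be treated as expired.
 */
static int
pefs_dircache_valid(struct vnode *lvp, struct pefs_dircache *dc,
    struct ucred *cred)
{
	struct vattr va;

	if (VOP_GETATTR(lvp, &va, cred) != 0)
		return (0);
	return (dc->pd_gen != 0 && dc->pd_gen == va.va_gen);
}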

There is a vfs.pefs.dircache_enable sysctl to control cache validity. Setting it to zero forces the cache to always be treated as invalid, so dircache functions only as a file name encryption cache.
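
For example, to switch dircache into this name-cache-only mode at runtime:

# sysctl vfs.pefs.dircache_enable=0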

At the moment caching is only used for name decryption, but there are operations like rm or rmdir which perform name encryption on every call to pass data to the underlying filesystem. Enabling caching for such operations is not going to be hard, but I want the code to stabilize a bit before moving further.

I've performed two types of tests: dbench and handling directories with a large number of files. I used pefs mounted on top of tmpfs to measure pefs overhead rather than disk I/O performance. The Salsa20 algorithm with a 256 bit key was chosen as the fastest one available. Before each run the underlying tmpfs filesystem was remounted. Each test was run 3 times, and the average of the results is shown in the charts (deviation was less than 2%). Also note that I used a kernel with some extra debugging compiled in (INVARIANTS, lock debugging).


[chart: dbench throughput]

dbench doesn't show much difference with dircache enabled compared to plain pefs without dircache: 143.635 MB/s against 116.746 MB/s; still, the old result is about 18% lower, which is very good IMHO. Also interesting is that the result drops only slightly after setting vfs.pefs.dircache_enable=0: 141.289 MB/s against 143.635 MB/s with dircache enabled.

Dbench uses directories with a small number of entries (usually ~20), which explains the results. Handling large directories is where dircache shines. I've used the following trivial script for testing; it creates 1000 or 2000 files, does 'ls -l' and removes them:
for i in `jot 1000`; do
touch test-$i
done
ls -Al >/dev/null
find . -name test-\* -exec rm '{}' +

[chart: 'ls -l' test on directories with 1000 and 2000 files]

The chart speaks for itself. And the per-file overhead looks much closer to the expected linear growth after running the same test for 3000 files:

Oct 1, 2009

Encrypting private directory with pefs

pefs is a kernel level cryptographic filesystem. It works transparently on top of other filesystems and doesn't require root privileges. There is no need to allocate a separate partition or take additional care with backups, resizing the partition when it fills up, etc.

After installing pefs, create a new directory to encrypt. Let it be ~/Private:

% mkdir ~/Private

And mount pefs on top of it (root privileges are necessary to mount a filesystem unless you have the vfs.usermount sysctl set to a non-zero value):

% pefs mount ~/Private ~/Private
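
If you'd rather not use root for this step, the vfs.usermount sysctl mentioned above can be enabled first (as root):

# sysctl vfs.usermount=1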

At this point ~/Private behaves like a read-only filesystem because no keys have been set up yet. To make it useful, add a new key:

% pefs addkey ~/Private

After entering a passphrase, you can check active keys:

% pefs showkeys ~/Private
Keys:
0 b0bed3f7f33e461b aes256-ctr


As you can see, the AES algorithm is used by default (in CTR mode with a 256 bit key). It can be changed with the pefs addkey -a option.

You should take into account that pefs doesn't save any metadata. That means there is no way for the filesystem to "verify" the key. To work around this, key chaining can be used (pefs showchain, setchain, delchain). I'm going to show how it works in upcoming posts.

Let's give it a try:

% echo "Hello WORLD" > ~/Private/test
% ls -Al ~/Private
total 1
-rw-r--r-- 1 gleb gleb 12 Oct 1 12:55 test
% cat ~/Private/test
Hello WORLD


Here is what it looks like at lower filesystem level:

% pefs unmount ~/Private
% ls -Al ~/Private
total 1
-rw-r--r-- 1 gleb gleb 12 Oct 1 12:55 .DU6eudxZGtO8Ry_2Z3Sl+tq2hV3O75jq
% hd ~/Private/.DU6eudxZGtO8Ry_2Z3Sl+tq2hV3O75jq
00000000 7f 1e 1b 05 fc 8a 5c 38 fc d8 2d 5f |......\8..-_|
0000000c

Your result is going to be different because pefs uses a random tweak value to encrypt files. This tweak is saved in the encrypted file name. Using the tweak also means that identical files have different encrypted content.

Sep 23, 2009

pefs crypto primitives (updated)

Supported data encryption algorithms: AES and Camellia (with 128, 192 and 256 bit key sizes). Adding another block cipher with a 128 bit block size should be trivial.

File names are always encrypted using AES-128 in CBC mode with a zero IV. An encrypted file name consists of a unique per-file tweak, a checksum and the name itself:
XBase64(checksum || E(tweak || filename))

The checksum is a VMAC of the encrypted tweak and file name:
checksum = VMAC(E(tweak || filename))

Both the checksum and the tweak are 64 bits long.

The main reason for not providing alternatives to the name encryption algorithm is to keep the design simple. Data encryption is different from name encryption here: encrypted data, unlike an encrypted file name, is not parsed in any way by pefs, and the user expects to be able to use the cipher of their choice (secure, fast, whatever fits best).

The name has this structure to work around some of CBC's shortcomings. The random tweak value is placed at the beginning of the first encrypted block. That gives us unique encrypted file names and eliminates the need to deal with an initial IV (the IV is zero and the name is padded with zeros).

An Encrypt-then-Authenticate construction is used. In addition to being the most secure variant, it allows checking whether a name was encrypted with a given key without performing decryption. VMAC was chosen because of its performance characteristics and its ability to produce a 64 bit MAC (without truncating the original result as in the HMAC case). The 64 bit size is almost mandatory here because a larger MAC would result in a much larger file name while hardly improving security. But the real reason is that no real "authentication" is performed. It's designed to be just a cryptographic checksum (sounds incorrect but I can't find better wording), so that breaking VMAC wouldn't result in breaking the encrypted data; besides, the name checksum doesn't authenticate the encrypted data. The checksum's main purpose is to make it possible to find the key a file is encrypted with.

An encrypted directory/socket/device name also contains a tweak, but it's used solely to randomize the first CBC block and keep the name structure uniform.

The idea behind the tweak is to get unique per-file ciphertext. Block ciphers (AES, Camellia) operate in XTS mode. The 64 bit tweak value concatenated with the 64 bit file offset forms the tweak used by XTS. All encryption operations are performed on 4096 byte sectors (a "block" in XTS terms). Incomplete sectors are also encrypted according to the XTS standard. But encryption of sectors smaller than 128 bits is not defined for XTS; in that situation CTR mode is used with a tweak value generated according to XTS. If a full 4096 byte sector is zero (all 4096 bytes are zero) before decryption, it is not decrypted and is treated as a hole in a sparse file.
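
As a rough sketch of the per-sector handling described above (the names, the byte order and the use of the sector's byte offset are my assumptions here, not the actual pefs code):

#include <stdint.h>
#include <string.h>

#define PEFS_SECTOR_SIZE	4096

/*
 * Build the 128-bit XTS tweak for a sector: the file's 64-bit random
 * tweak followed by the 64-bit byte offset of the sector within the file.
 */
static void
xts_sector_tweak(uint64_t file_tweak, uint64_t offset, uint8_t out[16])
{
	uint64_t sector_off = offset - (offset % PEFS_SECTOR_SIZE);

	memcpy(out, &file_tweak, sizeof(file_tweak));
	memcpy(out + 8, &sector_off, sizeof(sector_off));
}

/*
 * An all-zero sector is never decrypted; it is treated as a hole
 * in a sparse file.
 */
static int
sector_is_hole(const uint8_t *sector)
{
	size_t i;

	for (i = 0; i < PEFS_SECTOR_SIZE; i++)
		if (sector[i] != 0)
			return (0);
	return (1);
}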

4 different keys are used for cryptographic operations: one for name encryption, one for VMAC and two keys for data encryption, as required by XTS. These keys are derived from a 512 bit user-supplied key using the HKDF algorithm based on HMAC-SHA512 (IETF draft). The kernel part expects a cryptographically strong key from userspace; this key is generated from the passphrase with PBKDF2 using HMAC-SHA512.
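
For the userspace half, deriving a 512 bit key from a passphrase with PBKDF2/HMAC-SHA512 looks roughly like the sketch below (using OpenSSL; the salt and iteration count are placeholders, not pefs' actual parameters; link with -lcrypto):

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int
main(void)
{
	const char *passphrase = "secret passphrase";
	/* Placeholder salt and iteration count, for illustration only. */
	const unsigned char salt[] = "example-salt";
	unsigned char key[64];	/* 512 bit key handed over to the kernel */
	int i;

	if (PKCS5_PBKDF2_HMAC(passphrase, (int)strlen(passphrase),
	    salt, (int)sizeof(salt) - 1, 50000, EVP_sha512(),
	    (int)sizeof(key), key) != 1)
		return (1);

	for (i = 0; i < (int)sizeof(key); i++)
		printf("%02x", key[i]);
	printf("\n");
	return (0);
}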

Standard implementations of the ciphers are used, but I do not use the opencrypto framework, so no hardware acceleration is available. opencrypto is not used mainly because it lacks full support for XTS mode (the OpenBSD version is not able to encrypt incomplete sectors). opencrypto is also rather heavyweight (extra initialization and memory allocations), so using it may even worsen performance (hardware initialization costs for encrypting short chunks with different keys).

Besides that, pefs supports multiple keys, mixing files encrypted with different keys in a single directory, a transparent (unencrypted) mode, key chaining (adding a series of keys by entering just one of them) and more. I'm going to write about it soon.

Sep 16, 2009

pefs benchmark

pefs is a stacked cryptographic filesystem for FreeBSD. It started as a Google Summer of Code 2009 project.

I've just come across a performance comparison of eCryptfs against a plain ext4 filesystem on Ubuntu, a benchmark I was going to perform on my own.

I run dbench benchmarks regularly while working on pefs, but I use it mostly as a stress test tool. I haven't reached the point where I can start working on improving performance yet, but measuring pefs overhead is going to be interesting.

Unfortunately, I'm unable to interpret the dbench results from the article. They've used dbench 4, while I'm using dbench 3 from ports. Nevertheless, a result of 4-8 MB/s looks strange to me.

I've benchmarked 4 and 16 dbench clients on zfs, on pefs with Salsa20 encryption (256 bit key) on top of the same zfs partition, and on pefs with AES encryption (128 bit key, CTR mode). I ran the benchmark 3 times in each setup.

First of all, cipher throughput:
salsa20 ~205.5 MB/s
aes128  ~81.3 MB/s

Benchmark results:

[charts: dbench results for 4 and 16 clients]

In both cases (4 and 16 clients) the CPU was the limiting factor; the disks were mostly idle. This explains the divergence in the zfs results: I've actually benchmarked zfs ARC cache performance. Because of unpredictable zfs inner workings, the best aes128 result can come surprisingly close to the worst salsa20 one (salsa20 is ~2.5 times faster than aes128).

The graph comparing average values:


The conclusion is that pefs is about 2x slower. But that shouldn't be solely because of encryption. From my previous testing I conclude that it's mostly filesystem overhead:

  • The current pefs implementation avoids data caching (to prevent double caching and restrain one's paranoia). I had a version using buffer management for I/O (bread/bwrite); its performance was awful, something like 20-30 MB/s with salsa20 encryption.

  • Sparse files (add file resizing here too) are implemented very poorly: it requires an exclusive lock and fills the gap with zeros, even though this "gap" is likely to be filled by the application very soon anyway.

  • The lookup operation is very expensive: it calls readdir and decrypts the name of each directory entry.



The eCryptfs IOzone benchmark also shows a 2x difference.

Mar 24, 2009

Layer2 dummynet

I haven't posted about progress with layer2 filtering for a while. One notable improvement is the addition of ethernet address masks to dummynet.

Just configure a pipe. The new masks available are src-ether and dst-ether (plus a shortcut for specifying both of them: ether):
# ipfw pipe 1 config bw 1Mb mask ether


And use it:
# ipfw add 1100 pipe 1 src-ether 00:11:11:11:11:11 dst-ether 00:22:22:22:22:22 out via bridge0 layer2
# ipfw add 1200 pipe 1 dst-ether 00:11:11:11:11:11 src-ether 00:22:22:22:22:22 out via bridge0 layer2



# ipfw pipe show
00001: 1.000 Mbit/s 0 ms 50 sl. 2 queues (64 buckets) droptail
mask: ff:ff:ff:ff:ff:ff -> ff:ff:ff:ff:ff:ff tag: 0x0000
BKT _Source Ether Addr_ _Dest. Ether Addr__  Tag  Tot_pkt/bytes  Pkt/Byte  Drp
 40 00:11:11:11:11:11   00:22:22:22:22:22      0      2     196    0    0    0
 43 00:22:22:22:22:22   00:11:11:11:11:11      0      2     196    0    0    0

Besides that, masking packets by tag is also supported:
# ipfw add 200 pipe 1 ip from any to any tagged 1-1000 via bridge0 layer2

As several tags per packet are supported, it is necessary to specify the desired tag range, a specific tag, or any tag:
# ipfw add 200 pipe 1 ip from any to any tagged any via bridge0 layer2