Solaris 11 was released a few days ago. I was eager to upgrade, as I had been using Solaris Express 2010.11 for some time and was hitting a couple of bugs. One was a nasty IP-layer bug (BAD TRAP: type=e (#pf Page fault) rp=ffffff005c9b1040 addr=20 occurred in module "ip" due to a NULL pointer dereference) causing kernel panics – not a good thing for a storage server.
Since I was already on a version of 11, an experimental upgrade was not a problem. With the boot environments (BE) feature, you can boot into any version safely. BEs are an awesome feature. Need to install a patch? Install it into a new boot environment – if there are any problems, reboot into the old one. BEs leverage ZFS snapshots to clone your boot disk, install any required patches onto the clone, and let you switch flawlessly between the two.
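For the curious, the basic flow looks roughly like this (the BE name is just an example):

# clone the current boot environment before touching anything
beadm create solaris-backup
# if the patched system misbehaves, point the boot loader back at the old BE and reboot
beadm activate solaris-backup
# beadm list shows which BE is active now (N) and which on reboot (R)
beadm list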
The upgrade process
The upgrade was extremely easy. With the pkg manager, everything is fully automated. Simply run the update and wait. It downloads everything as needed, creates a clone, upgrades it by installing and removing packages as required, and makes it current. The next restart brings you up in the new release.
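In practice the whole thing is a couple of commands (a rough sketch; the new BE name is picked automatically):

# download, clone the current BE and upgrade the clone in one go
pkg update --accept
# verify the new BE is marked active on reboot, then restart into it
beadm list
init 6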
So I gave it a try – and it worked flawlessly. I was pleasantly surprised and happy. Of course, it did give me a scare after the first reboot: it took nearly 15 minutes (compared to the usual 2) as it had to initialize something about the packages. It even converted my /etc/hostname* network config files to the new ipadm method – which I love.
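For reference, the manual equivalent of what the upgrade generated looks roughly like this (interface name and address are made up):

# persistent interface and address objects replace the /etc/hostname.* files
ipadm create-ip net0
ipadm create-addr -T static -a 192.168.1.10/24 net0/v4
# show what the upgrade created
ipadm show-addr

The results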
Well, that IP panic bug is gone – so the server is now "stable". There were a few surprises as usual, for example "secret" new features, such as a 1 MB recordsize for ZFS filesystems. This was not documented anywhere. Your recordsize can now be set to 1 MB; the default of 128 KB has not changed. This makes ZFS very close to the first releases of ASM. This recordsize has only specific use cases, but it is very welcome nevertheless. For instance, it reduces the amount of metadata used for large caches. It also increases the IO size and block split size for large arrays. For example, with the popular 10-disk RAIDZ2, the default 128 KB record is split across the 10-2=8 data disks into 16 KB chunks per drive; with the new 1 MB recordsize, each drive gets 128 KB. This should give a nice boost in performance (to be tested soon!).

Keep in mind that sub-recordsize allocation only applies to the very first block: files of 2 or more blocks (records) are always rounded up to full records. So with a 1 MB recordsize a 4 KB file still takes 4 KB, but a 1.1 MB file takes 2 MB of space. I would imagine the use case for this new recordsize is large-file storage, and it would be especially useful to maximize compression (to be tested soon as well).
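Trying it out is a one-liner (the dataset name here is hypothetical):

# new datasets can be created with the larger recordsize...
zfs create -o recordsize=1M tank/bigfiles
# ...or an existing one changed; only newly written files get the new size
zfs set recordsize=1M tank/bigfiles
zfs get recordsize tank/bigfiles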
There were unpleasant results too, of course – the "sharesmb" property now accepts only "on" and "off". The previous "name=share_name" syntax no longer works. If you had datasets with the property already set, it keeps working, even after you upgrade your pools/filesystems, but you can no longer change it. This is quite annoying, as pool version 33 supposedly brings "33 Improved share support", yet I could not find any documentation explaining where the improvement is. I can only see a reduction in functionality.
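To illustrate what changed, roughly (dataset and share names are examples):

# Solaris Express: this used to set the SMB share name in one step
zfs set sharesmb=name=myshare tank/data
# Solaris 11: only the plain on/off switch is accepted here
zfs set sharesmb=on tank/data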
The big disappointments
Some ZFS metadata is not cached at all. The problem I have been trying to solve since Express (check my post on the OTN Forums) remains. It's a big problem: whenever I run "zfs list" or the snapshotting kicks in, there are a lot of disk reads. The zfs list command is annoyingly slow and keeps getting slower as the number of snapshots grows.
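It is easy to see for yourself (a rough illustration):

# wall-clock time grows with the snapshot count...
time zfs list -t snapshot
# ...and the disk reads are visible while it runs
zpool iostat 1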
The COMSTAR iSCSI target (server) is unstable. With both Linux and Windows clients, the disk just freezes for long periods of time, then a burst of IOs succeeds, followed by another freeze. This happens only under load, and during the freeze there is still iSCSI command traffic (nop-send/nop-receive) between the machines (observed with iscsisnoop.d), but no actual IO. The end result is that Solaris 11 is unusable as an iSCSI server. This is very disappointing, as there were no such issues with Solaris 11 Express 2010.11.
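If you want to confirm the NOP-only traffic yourself without the full iscsisnoop.d script, a rough DTrace one-liner that counts iSCSI probe firings by name does the job; during a freeze you should see mostly nop-send/nop-receive and little else:

# count iSCSI target probe firings by probe name; Ctrl-C prints the summary
dtrace -n 'iscsi*::: { @[probename] = count(); }'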
That’s my field use report so far, still exploring this release.
6 Comments
I was quite confused about the new sharesmb options too. It turns out they've separated the share options out into a dedicated "share" property of zfs. See page 154 of the ZFS documentation, available at:
https://docs.oracle.com/cd/E23824_01/pdf/821-1448.pdf
Regards,
Adam
“The block size cannot be changed after the volume has been written, so set the block size at volume creation time.” How exactly do you specify this?
Christo,
has the situation improved with the latest SRUs? As you know, I'm running Solaris 10 for my ZFS storage server. I was contemplating upgrading to 11 + SRU 11.4.
Yes, much better, quite stable. The iSCSI client on Linux is pretty reliable. Windows 7 iSCSI boot works fine too, but needs some special gateway setup (a Windows thing).
Note that if you are cloning, the MAC address is embedded in the boot code, so some hex editing is required.
Some memory management bugs were resolved in the last few SRUs.
– ZFS volumes are still not fully cached for certain operations (requires a major rewrite of the cache code)
– A kernel panic caused by Java is still not resolved
– The zfs snapshot aka "timeslider" code still has issues creating and removing certain snapshots in multi-level setups
I think it’s quite usable at this stage.
ok so I’m running Solaris 11 now ;-)
Regarding the SMB share name, not sure if that's what you meant, but the following works:
sudo zfs set share=name=share,path=/storage/share,prot=smb storage/share
When ZFS list is slow, use ‘zfs list -o name -s name’ to avoid unneeded IO.