<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="https://openzfsonosx.org/w/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://openzfsonosx.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=210.172.146.228</id>
		<title>OpenZFS on OS X - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://openzfsonosx.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=210.172.146.228"/>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Special:Contributions/210.172.146.228"/>
		<updated>2026-05-09T17:25:53Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.22.3</generator>

	<entry>
		<id>https://openzfsonosx.org/wiki/O3XWiki:Donations</id>
		<title>O3XWiki:Donations</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/O3XWiki:Donations"/>
				<updated>2016-05-30T23:38:05Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: /* Additional Thanks to Donors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Thanks ==&lt;br /&gt;
&lt;br /&gt;
The OpenZFS on OS X project would like to thank the following companies:&lt;br /&gt;
&lt;br /&gt;
'''GMO Internet''' [http://www.gmo.jp] for the hosting and rack space&lt;br /&gt;
&lt;br /&gt;
'''GlobalSign''' [http://globalsign.com] for the Open Source free SSL certificate&lt;br /&gt;
&lt;br /&gt;
'''OpenZFS''' [http://open-zfs.org], the main ZFS software collective&lt;br /&gt;
&lt;br /&gt;
== Donations ==&lt;br /&gt;
&lt;br /&gt;
The best way to show your appreciation for the OpenZFS project is to donate to the upstream project at [http://open-zfs.org http://open-zfs.org]&lt;br /&gt;
&lt;br /&gt;
If you wish to donate specifically to the OS X project, you can do so with PayPal at '''japan@lundman.net'''. But be aware that any donations will most likely be spent on beer and pizza, and on OpenZFS conferences, possibly not on the feature you wish for. :)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Additional Thanks to Donors ===&lt;br /&gt;
&lt;br /&gt;
 Tommy Lätti&lt;br /&gt;
 Daniel Bretoi&lt;br /&gt;
 Luke Lorenz&lt;br /&gt;
 Tommy Thorn&lt;br /&gt;
 Tim Henrion &lt;br /&gt;
 John Parnaby&lt;br /&gt;
 John Douglass&lt;br /&gt;
 Daniel Pearson&lt;br /&gt;
 John Wood&lt;br /&gt;
 Raoul Callaghan&lt;br /&gt;
 Josh Jordan&lt;br /&gt;
 Ottmar Klaas&lt;br /&gt;
&lt;br /&gt;
and those who wished to remain anonymous!&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Changelog</id>
		<title>Changelog</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Changelog"/>
				<updated>2015-12-16T01:11:06Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== OpenZFS_on_OS_X_1.?.?.dmg  ===&lt;br /&gt;
&lt;br /&gt;
* SPL: enhanced kmem pressure system ''(rottegift)''&lt;br /&gt;
* SPL: Rewrite TSD using AVL tree ''(Jorgen Lundman)''&lt;br /&gt;
* Cache names in getattr ''(Jorgen Lundman)''&lt;br /&gt;
* InvariantDisks serial fixes ''(cbreak)''&lt;br /&gt;
* Hardlink LinkID fixes ''(Jorgen Lundman)''&lt;br /&gt;
* ACL fixes (trivials and group) ''(Jorgen Lundman)''&lt;br /&gt;
* IOkit deadlock on export fixes ''(Jorgen Lundman)''&lt;br /&gt;
* MAF and deadlocks in zvol fixes ''(Jorgen Lundman)''&lt;br /&gt;
* 2605 want to resume interrupted zfs send ''(Matthew Ahrens)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== OpenZFS_on_OS_X_1.4.5.dmg 2015-10-19 ===&lt;br /&gt;
&lt;br /&gt;
* Remove deadlock with zil_lwb_commit ''(Jorgen Lundman)''&lt;br /&gt;
* Remove memory leak in znodes leading to beachball ''(Jorgen Lundman)''&lt;br /&gt;
* Do not call ctldir unmount ''(Jorgen Lundman)''&lt;br /&gt;
* xcode 7 compile fixes ''(ilovezfs)''&lt;br /&gt;
* Adhere to SIP in installer on EC ''(ilovezfs)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== OpenZFS_on_OS_X_1.4.2.dmg 2015-09-24 ===&lt;br /&gt;
&lt;br /&gt;
* correct kernel thread priorities ''(Jorgen Lundman)''&lt;br /&gt;
* VFS nolocks rework from 10a286 ''(Jorgen Lundman)''&lt;br /&gt;
* vnop_pageout_v2 replacement ''(Jorgen Lundman)''&lt;br /&gt;
* Permanent Storage work, incomplete ''(Jorgen Lundman)''&lt;br /&gt;
* mmapped file data written twice fix ''(Jorgen Lundman)''&lt;br /&gt;
* InvariantDisks fixes ''(ilovezfs)'' ''(cbreak)''&lt;br /&gt;
* SA corruption fixes ''(ZFSOnLinux)''&lt;br /&gt;
* SA recover status alerts when detected ''(Jorgen Lundman)''&lt;br /&gt;
* Modify-After-Free bugs and deadlock fixes ''(Jorgen Lundman)''&lt;br /&gt;
* Complete Re-port of IllumOS taskq ''(Jorgen Lundman)''&lt;br /&gt;
* Revert back to using taskq_dispatch_ent() ''(Jorgen Lundman)''&lt;br /&gt;
* Remove async unlinkeddrain ''(Jorgen Lundman)''&lt;br /&gt;
* Remove internal unused flag XATTR ''(Brendon Humphrey)''&lt;br /&gt;
* Additional ioctls from HFS ''(Brendon Humphrey)''&lt;br /&gt;
* Merge with upstream ZOL 20150520&lt;br /&gt;
* New pool feature &amp;quot;filesystem_limits&amp;quot;&lt;br /&gt;
* New pool feature &amp;quot;large_blocks&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== OpenZFS_on_OS_X_1.3.2-RC1 2015-05-02 ===&lt;br /&gt;
(Forum build)&lt;br /&gt;
* Remove serial console debug messages ''(Jorgen Lundman)''&lt;br /&gt;
* uiocopy failed to set direction ''(Jorgen Lundman)''&lt;br /&gt;
* SMAP work for Broadwell chipset ''(Jorgen Lundman)''&lt;br /&gt;
* Device removal panic fixes ''(Jorgen Lundman)''&lt;br /&gt;
* LASTUNMOUNT property was incorrect ''(Jorgen Lundman)''&lt;br /&gt;
* getxattr FinderInfo fixes ''(Jorgen Lundman)''&lt;br /&gt;
&lt;br /&gt;
=== OpenZFS_on_OS_X_1.3.1-r2.dmg  2015-04-08 ===&lt;br /&gt;
&lt;br /&gt;
* vnop_pagein to zero non-aligned trailing block causing clang to core ''(Jorgen Lundman)''&lt;br /&gt;
* ADDEDTIME should be updated when moved to a different directory ''(Jorgen Lundman)''&lt;br /&gt;
* Remove vnode throttle ''(Jorgen Lundman)''&lt;br /&gt;
* zfs create -p fix for non-root ''(ilovezfs)''&lt;br /&gt;
&lt;br /&gt;
=== OpenZFS_on_OS_X_1.3.1.dmg 2015-04-01 ===&lt;br /&gt;
&lt;br /&gt;
* readonly mounts failed to unmount ''(Jorgen Lundman)''&lt;br /&gt;
* readonly import failed to create ZVOL devnodes ''(Jorgen Lundman)''&lt;br /&gt;
* vnode_getwithvid panic race ''(Jorgen Lundman)''&lt;br /&gt;
* sa_modify_attrs SA corruption ''(Tim Chase)''&lt;br /&gt;
* zconfigd added for persistent config ''(Brendon Humphrey, ilovezfs)''&lt;br /&gt;
* Fix missing FIFO named-pipes VNOPs ''(Jorgen Lundman)''&lt;br /&gt;
* Fake HFS related IOCTLs for _kMDQueryScope filter in Spotlight ''(Jorgen Lundman)''&lt;br /&gt;
* Add new 10.10 ATTR to vnop_getattr  ''(Jorgen Lundman)''&lt;br /&gt;
* FNDRINFO and ADDEDTIME support ''(Jorgen Lundman)''&lt;br /&gt;
* InvariantDisks fixes ''(ilovezfs)''&lt;br /&gt;
* Skip optical media on zpool import and add timeout ''(Jorgen Lundman)''&lt;br /&gt;
* Initial secpolicy framework ''(ilovezfs)''&lt;br /&gt;
* zpool status -L to resolve symlinks ''(ilovezfs)''&lt;br /&gt;
* mmap pageout/pagein partial requests fix ''(Jorgen Lundman)''&lt;br /&gt;
* kstat fixes and addition ''(Jorgen Lundman)''&lt;br /&gt;
* Unmount/reboot delay fixed, direct reclaim ''(Jorgen Lundman)''&lt;br /&gt;
* rollback/suspendfs would delay waiting for reclaim ''(Jorgen Lundman)''&lt;br /&gt;
* 'com.apple.mimic_hfs' property added to identify as 'hfs' ''(Brendon Humphrey)''&lt;br /&gt;
&lt;br /&gt;
=== 1.3.1-RC5 ===&lt;br /&gt;
&lt;br /&gt;
spl [https://github.com/openzfsonosx/spl/commit/367a1108b174ee81e4ed128741b23e797afb8f16 367a1108b174ee81e4ed128741b23e797afb8f16]&lt;br /&gt;
&lt;br /&gt;
zfs [https://github.com/openzfsonosx/zfs/commit/63a9a59e7de2353a974da0fe65004f59a8bf5946 63a9a59e7de2353a974da0fe65004f59a8bf5946]&lt;br /&gt;
&lt;br /&gt;
* New daemon called &amp;quot;InvariantDisks&amp;quot; providing persistent paths to use with the zpool command: /var/run/disk/by-id, by-path, by-serial (cf. https://github.com/cbreak-black/InvariantDisks) ''(Gerhard Röthlin)''&lt;br /&gt;
* Speed up ZVOL unmap by skipping unmaps that are fully unaligned and by only using zil_commit for unmap if sync=always ''(Evan Susarret and Jorgen Lundman)''&lt;br /&gt;
* Fix lacking force-positive mount options ''(Jorgen Lundman)''&lt;br /&gt;
* Simplified zed daemonization ''(ilovezfs)''&lt;br /&gt;
* Bump Spotlight auto-enable until Sun, 01 Feb 2015 00:00:00 UTC ''(ilovezfs)''&lt;br /&gt;
* Fix Finder tags modification bug by truncating xattr before overwriting. ''(Jorgen Lundman)''&lt;br /&gt;
&lt;br /&gt;
=== 1.3.1-RC4 ===&lt;br /&gt;
&lt;br /&gt;
spl [https://github.com/openzfsonosx/spl/commit/367a1108b174ee81e4ed128741b23e797afb8f16 367a1108b174ee81e4ed128741b23e797afb8f16]&lt;br /&gt;
&lt;br /&gt;
zfs [https://github.com/openzfsonosx/zfs/commit/96c4b5c8296e7482abfb6b2f018ef932b68248cf 96c4b5c8296e7482abfb6b2f018ef932b68248cf]&lt;br /&gt;
&lt;br /&gt;
* ZFS: Return correct VA_NAME in vnop_getattr for dataset mountpoints ''(Jorgen Lundman)''&lt;br /&gt;
&lt;br /&gt;
=== 1.3.1-RC3 ===&lt;br /&gt;
&lt;br /&gt;
spl [https://github.com/openzfsonosx/spl/commit/367a1108b174ee81e4ed128741b23e797afb8f16 367a1108b174ee81e4ed128741b23e797afb8f16]&lt;br /&gt;
&lt;br /&gt;
zfs [https://github.com/openzfsonosx/zfs/commit/73ead71a49e2530ecfef8017b3552b37e11c65e4 73ead71a49e2530ecfef8017b3552b37e11c65e4]&lt;br /&gt;
&lt;br /&gt;
* ZFS: ZEVO empty SA panic fix ''(Jorgen Lundman)''&lt;br /&gt;
* Set B_NOCACHE to stop possibly double caching block data ''(Jorgen Lundman and Evan Susarret)''&lt;br /&gt;
* arcstat.pl included&lt;br /&gt;
&lt;br /&gt;
=== 1.3.1-RC2 ===&lt;br /&gt;
&lt;br /&gt;
spl [https://github.com/openzfsonosx/spl/commit/f4581407d18ea555fe5cd07e9e7912e96575ac5d f4581407d18ea555fe5cd07e9e7912e96575ac5d]&lt;br /&gt;
&lt;br /&gt;
zfs [https://github.com/openzfsonosx/zfs/commit/8bf68a82822d492ec9aae0bc8e93d2917ec79937 8bf68a82822d492ec9aae0bc8e93d2917ec79937]&lt;br /&gt;
&lt;br /&gt;
* ZFS: Release XATTRs in vnop_remove quicker ''(Jorgen Lundman)''&lt;br /&gt;
* ZFS: Early clearing of z_vnode cause NULL vp panic ''(Jorgen Lundman)''&lt;br /&gt;
* ZFS: Fix deadlock in vnop_reclaim ''(Jorgen Lundman)''&lt;br /&gt;
&lt;br /&gt;
=== 1.3.1-RC1 ===&lt;br /&gt;
&lt;br /&gt;
spl [https://github.com/openzfsonosx/spl/commit/8c89b46ca872572281ed62b506958a66a912f243 8c89b46ca872572281ed62b506958a66a912f243]&lt;br /&gt;
&lt;br /&gt;
zfs [https://github.com/openzfsonosx/zfs/commit/91b0052b9167c5447ee8c29d90126af3b621acf7 91b0052b9167c5447ee8c29d90126af3b621acf7]&lt;br /&gt;
&lt;br /&gt;
* SPL: kstat support, including tunables. ''(Brendon Humphrey)''&lt;br /&gt;
* SPL: change from mutex allocations to inline ''(Jorgen Lundman)''&lt;br /&gt;
* SPL: port of IllumOS kmem ''(Brendon Humphrey)''&lt;br /&gt;
* memory pressure sensor and memory reap support ''(Brendon Humphrey)''&lt;br /&gt;
* Improve unmount/export code ''(Jorgen Lundman)''&lt;br /&gt;
* Handle vnop_pageout() calls during vnode_create ''(Jorgen Lundman)''&lt;br /&gt;
* Fix reply to getattrlist regarding case sensitivity to fix install of Adobe software ''(Jorgen Lundman)''&lt;br /&gt;
* Fix vfs_vget() for Spotlight and SMB. Enable spotlight on mounts. ''(Jorgen Lundman)''&lt;br /&gt;
* Fix zfs.util for whole disk checks ''(ilovezfs)''&lt;br /&gt;
* Add working arcstat.pl ''(Brendon Humphrey)''&lt;br /&gt;
* Workaround for legacy mount points and unsupported versions. ''(ilovezfs)''&lt;br /&gt;
* Fix bug for fragmentation when spacemap_histogram is disabled ''(ilovezfs)''&lt;br /&gt;
* Open disks as root to fix scrub hang as user. ''(Jorgen Lundman)'' ''(ilovezfs)''&lt;br /&gt;
* Fix zfs diff ''(Jorgen Lundman)''&lt;br /&gt;
* SPL: condvar timeout, fix cache devices sometimes not being used ''(Jorgen Lundman)'' &lt;br /&gt;
* reclaim restructuring. Enable delete fast path, and actual release of xattrs ''(Jorgen Lundman)''&lt;br /&gt;
* enable userquota/groupquota accounting ''(Jorgen Lundman)''&lt;br /&gt;
* Temporary fix for missing .Trashes folder ''(Jorgen Lundman)'' ''(ilovezfs)''&lt;br /&gt;
* Automatically remove old .metadata_never_index before Dec 15th&lt;br /&gt;
* Make unlinked_drain async, and optional user disable ''(Jorgen Lundman)''&lt;br /&gt;
* Merge with ZOL upstream-20141120 ''(Jorgen Lundman)''&lt;br /&gt;
* Attempt to detect and remove invalid entries on unlinked-drain list ''(Jorgen Lundman)''&lt;br /&gt;
* Move mount default to /Volumes ''(ilovezfs)''&lt;br /&gt;
&lt;br /&gt;
Add pool features: async_destroy empty_bpobj lz4_compress spacemap_histogram enabled_txg hole_birth extensible_dataset embedded_data bookmarks&lt;br /&gt;
&lt;br /&gt;
Illumos 5138&lt;br /&gt;
Illumos 4753&lt;br /&gt;
Illumos 5116&lt;br /&gt;
Illumos 5135&lt;br /&gt;
Illumos 5139&lt;br /&gt;
Illumos 5147&lt;br /&gt;
Illumos 5161&lt;br /&gt;
Illumos 5177&lt;br /&gt;
Illumos 5174&lt;br /&gt;
Illumos 5140&lt;br /&gt;
Illumos 5117&lt;br /&gt;
Illumos 5049&lt;br /&gt;
Illumos 4873&lt;br /&gt;
Illumos 4970-4974&lt;br /&gt;
Illumos 5034&lt;br /&gt;
Illumos 4631&lt;br /&gt;
Illumos 4976-4984&lt;br /&gt;
Illumos 4914&lt;br /&gt;
Illumos 4881&lt;br /&gt;
Illumos 4897&lt;br /&gt;
Illumos 4390&lt;br /&gt;
Illumos 4757, 4913&lt;br /&gt;
Illumos 3835&lt;br /&gt;
Illumos 4754, 4755&lt;br /&gt;
Illumos 4374&lt;br /&gt;
Illumos 4368, 4369&lt;br /&gt;
Illumos 4370, 4371&lt;br /&gt;
Illumos 4171, 4172&lt;br /&gt;
Illumos 4756&lt;br /&gt;
Illumos 4730&lt;br /&gt;
Illumos 4101, 4102, 4103, 4105, 4106&lt;br /&gt;
&lt;br /&gt;
=== OpenZFS_on_OS_X_1.3.0.dmg 2014-07-24 ===&lt;br /&gt;
&lt;br /&gt;
spl [https://github.com/openzfsonosx/spl/commit/80e411aecac0716d779703ecc0f032232bdad91c 80e411aecac0716d779703ecc0f032232bdad91c]&lt;br /&gt;
&lt;br /&gt;
zfs [https://github.com/openzfsonosx/zfs/commit/b223a573025bb5ef84e6e08b74c9f24b61eacc0b b223a573025bb5ef84e6e08b74c9f24b61eacc0b]&lt;br /&gt;
&lt;br /&gt;
* Print the spl version found instead of &amp;quot;v0.01&amp;quot; ''(ilovezfs)''&lt;br /&gt;
* Only replace a pre-existing custom icon if it's the snowflake ''(ilovezfs)''&lt;br /&gt;
* Run osascript as the logged-in user so the notifications actually show up ''(ilovezfs)''&lt;br /&gt;
* Check for ZEVO either still installed or uninstalled but pre-reboot, and display error for the user ''(ilovezfs)''&lt;br /&gt;
* Fix &amp;quot;Load the module manually by running ...&amp;quot; when kexts are in /Library/Extensions on OS X 10.9+ ''(ilovezfs)''&lt;br /&gt;
* Fix mutex leaks, resulting in eventual panic in &amp;quot;mutex_enter()&amp;quot;. ''(Jorgen Lundman)''&lt;br /&gt;
* Fix spa_strdup freeing wrong size, causing kmem havok. ''(Jorgen Lundman)''&lt;br /&gt;
* Enhance bmalloc to include free size, bounds, and use-after-free checks. ''(Brendon Humphrey)''&lt;br /&gt;
* Fix zdb 'hang' waiting for reclaim_thread ''(Jorgen Lundman)''&lt;br /&gt;
* Autoimport work and fixes ''(ilovezfs)''&lt;br /&gt;
* sysctl normalization code from legacy port, default off ''(Jorgen Lundman)''&lt;br /&gt;
* Fix hang at export due to spotlight references ''(ilovezfs)''&lt;br /&gt;
* Reboot hang fix (wait for reclaim thread) ''(Jorgen Lundman)''&lt;br /&gt;
* Reboot hang fix, take 2. (zed ignoring TERM) ''(Jorgen Lundman)''&lt;br /&gt;
* Added spl_wait_interruptible functions ''(Jorgen Lundman)''&lt;br /&gt;
* Merged ZOL-0.6.3 &lt;br /&gt;
* ZVOL unmap support ''(Evan Susarret)''&lt;br /&gt;
* Better disk icon support ''(ilovezfs)''&lt;br /&gt;
* onexit fixes, clean zfs send holds ''(Jorgen Lundman)''&lt;br /&gt;
* Replace MALLOC calls to use bmalloc for performance ''(Brendon Humphrey)''&lt;br /&gt;
* OS X Yosemite 10.10 compile fixes ''(ilovezfs)''&lt;br /&gt;
* zp reclaim vs zget remodel fix deadlocks ''(Jorgen Lundman)''&lt;br /&gt;
* Support legacy mountpoints ''(ilovezfs)''&lt;br /&gt;
* Initial non-root support ''(ilovezfs)''&lt;br /&gt;
* Rewrite ioctl after upstream ''(Jorgen Lundman)''&lt;br /&gt;
* Normalized lookup panic fix ''(Jorgen Lundman)''&lt;br /&gt;
&lt;br /&gt;
and, of course, all the fixes in ZFS on Linux 0.6.3. Thanks guys!&lt;br /&gt;
&lt;br /&gt;
== OpenZFS_on_OS_X_1.2.7.dmg 2014-05-15 ==&lt;br /&gt;
&lt;br /&gt;
* Merged with ZFSOnLinux pre-0.6.3 dated Apr 8 2014 ''(6ac770b1961b9468daf0c69eae6515c608535789)''&lt;br /&gt;
* create_thread( 75%*num_cpus ) would create a literal 75 threads instead of the intended 3 threads on a quad-core machine ''(Jorgen Lundman)''&lt;br /&gt;
* VMEM allocate changed to use bmalloc (slice, SLAB, allocator on top of k_m_a) ''(Brendon Humphrey)''&lt;br /&gt;
* Add ZED (ZFS Event Daemon) to handle events (send alerts, emails) on pool issues. ''(Chris Dunlap)''&lt;br /&gt;
* name cache fixes (existing files claimed as missing, missing files claimed as existing) ''(Jorgen Lundman)''&lt;br /&gt;
* Change pool sync to remove 'idle' pool writes every 30s. ''(Jorgen Lundman)''&lt;br /&gt;
* Work around ZFS recv deadlock ''(ilovezfs)''&lt;br /&gt;
* vnop_pageout fixes for zeroed blocks beyond EOF (POSIX) ''(Jorgen Lundman)''&lt;br /&gt;
* Add autoimport, zed startup scripts ''(ilovezfs)''&lt;br /&gt;
* ctldir (.zfs) fixes and cleanup ''(Jorgen Lundman)''&lt;br /&gt;
* Finder hardlinks fixes ''(Jorgen Lundman)''&lt;br /&gt;
* Reclaim fixes, throttle and waiting on vp changes ''(Jorgen Lundman)''&lt;br /&gt;
* ZVOL upstream incompatibility fixes  ''(Evan Susarret)'' '''*1'''&lt;br /&gt;
* ZFS rollback and promote fixes ''(ilovezfs)''&lt;br /&gt;
* Rework EFI label, and wholedisk detection, Core Storage ''(Jorgen Lundman, ilovezfs)''&lt;br /&gt;
&lt;br /&gt;
This should result in greater stability and large performance enhancements, and the port is finally capable of using more of the available memory.&lt;br /&gt;
&lt;br /&gt;
'''The installer no longer contains 32-bit versions.'''&lt;br /&gt;
&lt;br /&gt;
'''*1''' Note that 1.2.0's ZFS Volumes are unintentionally incompatible with other platforms' versions of ZFS, except for volblocksize = 512.&lt;br /&gt;
&lt;br /&gt;
== 1.2.0.dmg 2014-03-13 ==&lt;br /&gt;
&lt;br /&gt;
* First release&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Panic</id>
		<title>Panic</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Panic"/>
				<updated>2015-05-12T00:53:06Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: /* Alternate symbol lookup with lldb */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Kernel panics ==&lt;br /&gt;
&lt;br /&gt;
One of the most useful settings to assist with debugging is telling the Darwin kernel to keep the symbols from kexts. This can&lt;br /&gt;
be set using the nvram command, and requires a reboot.&lt;br /&gt;
&lt;br /&gt;
First, check whether you have any special boot-args set, then add the new keepsyms instruction:&lt;br /&gt;
 # nvram boot-args=&amp;quot;keepsyms=y debug=0x144&amp;quot;&lt;br /&gt;
&lt;br /&gt;
and reboot the machine for it to take effect.&lt;br /&gt;
&lt;br /&gt;
[https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KernelProgramming/build/build.html#//apple_ref/doc/uid/TP30000905-CH221-BABCCIDH Table 20-1] in Apple's Kernel Programming Guide has a summary of the meaning of the debug options.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Panic decoding ===&lt;br /&gt;
&lt;br /&gt;
If you get a panic but you do not have symbols enabled, the addresses can be decoded using the atos command.&lt;br /&gt;
&lt;br /&gt;
In a standard panic log, you will see something like:&lt;br /&gt;
&lt;br /&gt;
 # cd /Library/Logs/DiagnosticReports/&lt;br /&gt;
 # less Kernel_2014-03-13-093629_OSX109.panic&lt;br /&gt;
 Backtrace (CPU 0), Frame : Return Address&lt;br /&gt;
 0xffffff8088843b10 : 0xffffff7f85e25759  : '''0xffffff7f80dcf760''' &lt;br /&gt;
 0xffffff8088843b40 : 0xffffff7f85e25560  : '''0xffffff7f80dcf423''' &lt;br /&gt;
 0xffffff8088843be0 : 0xffffff7f85e08f27  : '''0xffffff7f80dc491a'''&lt;br /&gt;
 &lt;br /&gt;
       Kernel Extensions in backtrace:&lt;br /&gt;
         net.lundman.spl(1.0)[7F69C13B-C730-3475-99E9-53861AC6C54E]@0xffffff7f85d2a000-&amp;gt;0xffffff7f85d36fff&lt;br /&gt;
         net.lundman.zfs(1.0)[5637421D-EE17-33F1-ACB2-8FA38BC5A5A6]@'''0xffffff7f80d54000'''-&amp;gt;0xffffff7f85f38fff&lt;br /&gt;
&lt;br /&gt;
We can then run the command&lt;br /&gt;
&lt;br /&gt;
  # xcrun '''atos''' -arch '''x86_64''' -l '''0xffffff7f80d54000''' -o ../zfs/module/zfs/zfs.kext/Contents/MacOS/zfs   '''0xffffff7f80dcf760 0xffffff7f80dcf423 0xffffff7f80dc491a'''&lt;br /&gt;
 got symbolicator for ../zfs/module/zfs/zfs.kext/Contents/MacOS/zfs, base address 0&lt;br /&gt;
 spa_taskqs_init (in zfs) (spa.c:888)&lt;br /&gt;
 spa_create_zio_taskqs (in zfs) (spa.c:972)&lt;br /&gt;
 spa_activate (in zfs) (spa.c:1094)&lt;br /&gt;
&lt;br /&gt;
This can be repeated for spl, with the spl load address, if needed.&lt;br /&gt;
&lt;br /&gt;
For kernel addresses, look for the &amp;quot;kernel slide:&amp;quot; value; 0 is assumed in this example:&lt;br /&gt;
 xcrun atos -arch x86_64 -d -o /Volumes/KernelDebugKit/mach_kernel -s 0   0xffffff8000222f79 0xffffff80002dc24e 0xffffff80002f3746 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you are not panicking, but would like to print the stack at a certain point in the kext, you can use&lt;br /&gt;
&lt;br /&gt;
 OSReportWithBacktrace(&amp;quot;I am here: vp %p\n&amp;quot;, vp);&lt;br /&gt;
&lt;br /&gt;
which takes ''printf''-style format arguments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Alternate symbol lookup with lldb ===&lt;br /&gt;
&lt;br /&gt;
Panic:&lt;br /&gt;
 panic(cpu 5 caller 0xffffff80088d1066): trying to interlock destroyed mutex (0xffffff8029196000)&lt;br /&gt;
 Backtrace (CPU 5), Frame : Return Address&lt;br /&gt;
 0xffffff81f49fba80 : 0xffffff8008822fa9 &lt;br /&gt;
 0xffffff81f49fbb00 : 0xffffff80088d1066 &lt;br /&gt;
 0xffffff81f49fbb10 : 0xffffff800889c75e &lt;br /&gt;
 0xffffff81f49fbbe0 : 0xffffff80088ae60c &lt;br /&gt;
 0xffffff81f49fbc00 : '''0xffffff7f8a4252e0'''&lt;br /&gt;
 0xffffff81f49fbdf0 : 0xffffff80089ffea9 &lt;br /&gt;
         net.lundman.zfs(1.0)[0EC79B06-3C9F-3529-8450-42222507F310]@'''0xffffff7f8a33c000'''-&amp;gt;0xffffff7f8a545fff&lt;br /&gt;
&lt;br /&gt;
Assuming you have the same build as the panic report, in this case 1.2.7:&lt;br /&gt;
 # lldb&lt;br /&gt;
 (lldb) target create --no-dependents --arch x86_64 module/zfs/zfs   #Binary before moved into zfs.kext&lt;br /&gt;
 (lldb) target modules load --file zfs __TEXT '''0xffffff7f8a33c000'''&lt;br /&gt;
 (lldb) image lookup --verbose --address '''0xffffff7f8a4252e0'''&lt;br /&gt;
 &lt;br /&gt;
      Address: zfs[0x00000000000e92e0] (zfs.__TEXT.__text + 950160)&lt;br /&gt;
      Summary: zfs`zfs_vnop_pageout + 1264 at zfs_vnops_osx.c:1236&lt;br /&gt;
       Module: file = &amp;quot;/Users/lundman/x/zfs/module/zfs/zfs&amp;quot;, arch = &amp;quot;x86_64&amp;quot;&lt;br /&gt;
  CompileUnit: id = {0x00000000}, file = &amp;quot;/Users/lundman/x/zfs/module/zfs/zfs_vnops_osx.c&amp;quot;, language = &amp;quot;c89&amp;quot;&lt;br /&gt;
    '''LineEntry''': [0xffffff7f8a4252da-0xffffff7f8a4252f0): /Users/lundman/x/zfs/module/zfs/'''zfs_vnops_osx.c:1236'''&lt;br /&gt;
&lt;br /&gt;
zfs_vnops_osx.c:1236&lt;br /&gt;
     tx = dmu_tx_create(zfsvfs-&amp;gt;z_os);&lt;br /&gt;
    dmu_tx_hold_write(tx, zp-&amp;gt;z_id, off, len);&lt;br /&gt;
    '''dmu_tx_hold_bonus(tx, zp-&amp;gt;z_id);'''&lt;br /&gt;
    err = dmu_tx_assign(tx, TXG_NOWAIT);&lt;br /&gt;
&lt;br /&gt;
Or just for the kernel&lt;br /&gt;
&lt;br /&gt;
 (lldb) target create --no-dependents --arch x86_64 mach_kernel&lt;br /&gt;
 (lldb) target modules load --file mach_kernel --slide 0x000000000b600000&lt;br /&gt;
 (lldb) image lookup -a 0xffffff800b8d6aa7&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
https://developer.apple.com/library/mac/qa/qa1264/_index.html&lt;br /&gt;
&lt;br /&gt;
https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KernelProgramming/build/build.html#//apple_ref/doc/uid/TP30000905-CH221-BABDGEGF&lt;br /&gt;
&lt;br /&gt;
https://developer.apple.com/library/mac/documentation/Darwin/Reference/Manpages/man8/kext_logging.8.html&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Development</id>
		<title>Development</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Development"/>
				<updated>2014-03-25T05:13:01Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: /* vnode_create thread */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:O3X development]]&lt;br /&gt;
== Development ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Flamegraphs ===&lt;br /&gt;
&lt;br /&gt;
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.&lt;br /&gt;
&lt;br /&gt;
dtrace the kernel while running a command:&lt;br /&gt;
&lt;br /&gt;
 dtrace -x stackframes=100 -n 'profile-997 /arg0/ {&lt;br /&gt;
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks&lt;br /&gt;
&lt;br /&gt;
It will run for 60 seconds.&lt;br /&gt;
&lt;br /&gt;
Convert it to a flamegraph:&lt;br /&gt;
&lt;br /&gt;
 ./stackcollapse.pl out.stacks &amp;gt; out.folded&lt;br /&gt;
 ./flamegraph.pl out.folded &amp;gt; out.svg&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is '''rsync -ar /usr/ /BOOM/deletea/''' running;&lt;br /&gt;
&lt;br /&gt;
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Or running '''bonnie++''' in various stages;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed-hover&amp;quot;&amp;gt;&lt;br /&gt;
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]&lt;br /&gt;
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order&lt;br /&gt;
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZVOL block size ===&lt;br /&gt;
&lt;br /&gt;
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write the 8 blocks back. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since compression ratios etc. cannot be reported correctly.&lt;br /&gt;
&lt;br /&gt;
This limitation is in specfs, which is applied to any BLK device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== vnode_create thread ===&lt;br /&gt;
&lt;br /&gt;
Currently, we have to protect the call to vnode_create() due to the possibility that it calls several vnops (fsync, pageout, reclaim), and we have a reclaim thread to deal with that. One issue is that reclaim can be called both as a separate thread (periodic reclaims) and as the ''calling thread'' of vnode_create. This makes locking tricky.&lt;br /&gt;
&lt;br /&gt;
One idea is to create a vnode_create thread (one per dataset). Then, in zfs_zget and zfs_znode_alloc, which call vnode_create, we simply place the newly allocated zp on the vnode_create thread's ''request list'' and resume execution. Once we have passed the &amp;quot;unlock&amp;quot; part of those functions, we can wait for the vnode_create thread to complete the request, so we do not resume execution without the vp attached.&lt;br /&gt;
&lt;br /&gt;
In the vnode_create thread, we pop items off the list, call vnode_create (now guaranteed to run as a separate thread) and, once completed, mark the node done and signal the process which might be waiting.&lt;br /&gt;
&lt;br /&gt;
In theory this should let us handle reclaim, fsync and pageout as in normal upstream ZFS, with no special cases required. It should alleviate the current situation where the reclaim_list grows to very large numbers (230,000 nodes observed).&lt;br /&gt;
&lt;br /&gt;
It might mean we need to be careful in any function which might end up in zfs_znode_alloc, to make sure we have a vp attached before we resume; for example, zfs_lookup and zfs_create.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
The branch '''vnode_thread''' implements just this idea: it creates a vnode_create_thread per dataset. When we need to call ''vnode_create()'', we simply add the '''zp''' to the list of requests, then signal the thread. The thread calls ''vnode_create()'' and, upon completion, sets '''zp-&amp;gt;z_vnode''', then signals back. The requester for '''zp''' sits in ''zfs_znode_wait_vnode()'' waiting for the signal back.&lt;br /&gt;
&lt;br /&gt;
This means the ZFS code base is littered with calls to ''zfs_znode_wait_vnode()'' (46, to be exact) placed at the correct locations, i.e. '''after''' all the locks are released and ''zil_commit()'' has been called. It is possible that this number could be decreased, as the calls to ''zfs_zget()'' appear not to suffer from the ''zil_commit()'' issue and could probably just block at the end of ''zfs_zget()''. However, it is the calls to ''zfs_mknode()'' that cause the issue.&lt;br /&gt;
&lt;br /&gt;
'''sysctl zfs.vnode_create_list''' tracks the number of '''zp''' nodes in the list waiting for ''vnode_create()'' to complete. Typically this is 0 or 1, and rarely higher.&lt;br /&gt;
&lt;br /&gt;
It appears to deadlock from time to time.&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when ''zfs_znode_getvnode()'' is called. This new thread calls ''_zfs_znode_getvnode()'', which functions as above: call ''vnode_create()'', then signal back. The same ''zfs_znode_wait_vnode()'' blockers exist.&lt;br /&gt;
&lt;br /&gt;
'''sysctl zfs.vnode_create_list''' tracks the number of '''vnode_create threads''' we have started. Interestingly, this also remains 0 or 1, and is rarely higher.&lt;br /&gt;
&lt;br /&gt;
It has not yet deadlocked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Conclusions:&lt;br /&gt;
&lt;br /&gt;
* It is undesirable that we have ''zfs_znode_wait_vnode()'' placed all over the source, and care needs to be taken with each one, although it does not hurt to call it in excess, as no wait will happen if '''zp-&amp;gt;z_vnode''' is already set.&lt;br /&gt;
* It is unknown whether it is OK to resume ZFS execution while '''z_vnode''' is still NULL and only block (to wait for it to be filled in) once we are close to leaving the VNOP.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* However, the fact that '''vnop_reclaim''' calls are direct and can be cleaned up immediately is very desirable. We no longer need to check for the '''zp without vp''' case in ''zfs_zget()''.&lt;br /&gt;
* We no longer need to lock protect '''vnop_fsync''', '''vnop_pageout''' in case they are called from ''vnode_create()''.&lt;br /&gt;
* We don't have to throttle the '''reclaim thread''' due to the list being massive (populating the list is much faster than cleaning up a '''zp''' node - up to 250,000 nodes in the list have been observed).&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Development</id>
		<title>Development</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Development"/>
				<updated>2014-03-25T05:12:40Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: /* vnode_create thread */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:O3X development]]&lt;br /&gt;
== Development ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Flamegraphs ===&lt;br /&gt;
&lt;br /&gt;
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.&lt;br /&gt;
&lt;br /&gt;
dtrace the kernel while running a command:&lt;br /&gt;
&lt;br /&gt;
 dtrace -x stackframes=100 -n 'profile-997 /arg0/ {&lt;br /&gt;
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks&lt;br /&gt;
&lt;br /&gt;
It will run for 60 seconds.&lt;br /&gt;
&lt;br /&gt;
Convert it to a flamegraph:&lt;br /&gt;
&lt;br /&gt;
 ./stackcollapse.pl out.stacks &amp;gt; out.folded&lt;br /&gt;
 ./flamegraph.pl out.folded &amp;gt; out.svg&lt;br /&gt;
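The two Perl scripts come from Brendan Gregg's FlameGraph repository: stackcollapse.pl folds each multi-line stack sample into a single semicolon-joined line with a count, which flamegraph.pl then renders as an SVG. A rough Python illustration of the folding step (the input layout here is a simplified stand-in for the dtrace output, not a drop-in replacement for stackcollapse.pl):

```python
from collections import Counter

def collapse(stacks_text):
    """Fold stack samples into 'frame;frame;... count' lines.

    Assumes each sample is a blank-line-separated block: stack frames
    (deepest first) followed by a sample count on the last line, which
    is roughly the shape of the `dtrace ... -o out.stacks` output.
    """
    folded = Counter()
    for block in stacks_text.strip().split("\n\n"):
        lines = [l.strip() for l in block.splitlines() if l.strip()]
        *frames, count = lines
        # flamegraph.pl expects root-first order, so reverse the stack
        folded[";".join(reversed(frames))] += int(count)
    return [f"{stack} {n}" for stack, n in sorted(folded.items())]

sample = """\
zfs`zfs_write
kernel`vnop_write
42

zfs`zfs_write
kernel`vnop_write
8
"""
# Two samples of the same stack merge into one folded line with count 50
print(collapse(sample))
```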
&lt;br /&gt;
&lt;br /&gt;
This is '''rsync -ar /usr/ /BOOM/deletea/''' running:&lt;br /&gt;
&lt;br /&gt;
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Or running '''bonnie++''' in various stages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed-hover&amp;quot;&amp;gt;&lt;br /&gt;
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]&lt;br /&gt;
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order&lt;br /&gt;
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZVOL block size ===&lt;br /&gt;
&lt;br /&gt;
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write the 8 blocks back. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since compression ratios etc. cannot be reported correctly.&lt;br /&gt;
&lt;br /&gt;
This limitation is in specfs, which is applied to any BLK device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our own implementation attached as vnops. This will let us handle any block size required.&lt;br /&gt;
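The 8x accounting error described above is simple block arithmetic. A toy model (illustrative only; the names and constants are assumptions for the sketch, not the actual specfs code):

```python
PAGE_SIZE = 4096
SECTOR = 512

def rmw_cost(write_bytes, block=SECTOR):
    """Model the specfs read-modify-write: a write is rounded up to
    whole pages, and each page costs a read of PAGE_SIZE/block blocks
    plus a write of the same, no matter how few blocks changed."""
    pages = -(-write_bytes // PAGE_SIZE)      # ceiling division
    blocks_per_page = PAGE_SIZE // block      # 8 for 512-byte blocks
    return {"blocks_read": pages * blocks_per_page,
            "blocks_written": pages * blocks_per_page}

# A single 512-byte write is accounted as 8 blocks read and 8 written,
# so ZFS-side stats (and compression ratios) are off by a factor of 8.
print(rmw_cost(512))
```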
&lt;br /&gt;
&lt;br /&gt;
=== vnode_create thread ===&lt;br /&gt;
&lt;br /&gt;
Currently, we have to protect the call to vnode_create() because it may itself call several vnops (fsync, pageout, reclaim), and we have a reclaim thread to deal with that. One issue is that reclaim can be called both from a separate thread (periodic reclaims) and as the ''calling thread'' of vnode_create. This makes locking tricky.&lt;br /&gt;
&lt;br /&gt;
One idea is to create a vnode_create thread (with each dataset). Then, in zfs_zget and zfs_znode_alloc, which call vnode_create, we simply place the newly allocated zp on the vnode_create thread's ''request list'' and resume execution. Once we have passed the &amp;quot;unlock&amp;quot; part of those functions, we can wait for the vnode_create thread to complete the request, so that we do not resume execution without the vp attached.&lt;br /&gt;
&lt;br /&gt;
In the vnode_create thread, we pop items off the list, call vnode_create (now guaranteed to run in a separate thread), and once it completes, mark the node done and signal the process which might be waiting.&lt;br /&gt;
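The request-list handshake described above can be sketched in user space. Python threading stands in for the kernel synchronization primitives here, and the zp dictionary is a placeholder for the real znode; this is a sketch of the pattern, not the kernel code:

```python
import threading
import queue

class VnodeCreateThread:
    """Worker that performs the create on behalf of callers, so the
    create (and any vnops it triggers) always runs in a separate thread."""
    def __init__(self):
        self.requests = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            zp = self.requests.get()       # pop a request off the list
            with zp["cv"]:
                zp["z_vnode"] = object()   # stand-in for vnode_create()
                zp["cv"].notify_all()      # signal back to the requester

    def getvnode(self, zp):
        self.requests.put(zp)              # queue it and resume immediately

def zfs_znode_wait_vnode(zp):
    # Called after locks are dropped: block only while z_vnode is unset
    with zp["cv"]:
        while zp["z_vnode"] is None:
            zp["cv"].wait()

vt = VnodeCreateThread()
zp = {"z_vnode": None, "cv": threading.Condition()}
vt.getvnode(zp)            # caller resumes without the vnode attached...
zfs_znode_wait_vnode(zp)   # ...and waits just before returning to the VNOP
print(zp["z_vnode"] is not None)
```

The key property is that the caller never blocks while holding its locks; it only waits once it is about to leave with the vp required.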
&lt;br /&gt;
In theory this should let us handle reclaim, fsync, and pageout as in normal upstream ZFS, with no special cases required. It should also alleviate the current situation where the reclaim_list grows very large (230,000 nodes observed). &lt;br /&gt;
&lt;br /&gt;
It might mean we need to be careful in any function which can end up in zfs_znode_alloc, to make sure we have a vp attached before we resume; zfs_lookup and zfs_create, for example.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
The branch '''vnode_thread''' implements just this idea: it creates a vnode_create_thread per dataset, and when we need to call ''vnode_create()'' we simply add the '''zp''' to the list of requests, then signal the thread. The thread calls ''vnode_create()'' and, upon completion, sets '''zp-&amp;gt;z_vnode''', then signals back. The requester for '''zp''' sits in ''zfs_znode_wait_vnode()'' waiting for that signal.&lt;br /&gt;
&lt;br /&gt;
This means the ZFS code base is littered with calls to ''zfs_znode_wait_vnode()'' (46, to be exact) placed at the correct locations, i.e. '''after''' all the locks are released and ''zil_commit()'' has been called. It is possible that this number could be decreased, as the calls to ''zfs_zget()'' appear not to suffer the ''zil_commit()'' issue and could probably just block at the end of ''zfs_zget()''; it is the calls to ''zfs_mknode()'' that cause the issue.&lt;br /&gt;
&lt;br /&gt;
'''sysctl zfs.vnode_create_list''' tracks the number of '''zp''' nodes in the list waiting for ''vnode_create()'' to complete. Typically this is 0 or 1, and rarely higher.&lt;br /&gt;
&lt;br /&gt;
Appears to deadlock from time to time. &lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when ''zfs_znode_getvnode()'' is called. This new thread calls ''_zfs_znode_getvnode()'', which functions as above: it calls ''vnode_create()'', then signals back. The same ''zfs_znode_wait_vnode()'' blockers exist.&lt;br /&gt;
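The spawn-per-call variant can be sketched the same way (again a user-space stand-in with placeholder names, not the kernel code):

```python
import threading

def zfs_znode_getvnode(zp):
    """vnode_threadX approach: spawn a short-lived thread per call
    rather than keeping a permanent per-dataset worker."""
    def _zfs_znode_getvnode():
        with zp["cv"]:
            zp["z_vnode"] = object()   # stand-in for vnode_create()
            zp["cv"].notify_all()      # signal back
    threading.Thread(target=_zfs_znode_getvnode).start()

def zfs_znode_wait_vnode(zp):
    # Same blocker as before: wait only while z_vnode is still unset
    with zp["cv"]:
        while zp["z_vnode"] is None:
            zp["cv"].wait()

zp = {"z_vnode": None, "cv": threading.Condition()}
zfs_znode_getvnode(zp)
zfs_znode_wait_vnode(zp)
print(zp["z_vnode"] is not None)
```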
&lt;br /&gt;
'''sysctl zfs.vnode_create_list''' tracks the number of '''vnode_create threads''' we have started. Interestingly, this remains at 0 or 1, and is rarely higher.&lt;br /&gt;
&lt;br /&gt;
Has not yet deadlocked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Conclusions:&lt;br /&gt;
&lt;br /&gt;
* It is undesirable that ''zfs_znode_wait_vnode()'' calls are placed all over the source, and care needs to be taken with each one. It does not hurt to call it in excess, though, as no wait will happen if '''zp-&amp;gt;z_vnode''' is already set. &lt;br /&gt;
* It is unknown whether it is safe to resume ZFS execution while '''z_vnode''' is still NULL, and to block (to wait for it to be filled in) only once we are close to leaving the VNOP.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* However, it is very desirable that '''vnop_reclaim''' calls are direct and can be cleaned up immediately. We no longer need to check for the '''zp without vp''' case in ''zfs_zget()''. &lt;br /&gt;
* We no longer need to lock protect '''vnop_fsync''', '''vnop_pageout''' in case they are called from ''vnode_create()''.&lt;br /&gt;
* We don't have to throttle the '''reclaim thread''' due to the list growing massive (populating the list is much faster than cleaning up a '''zp''' node; up to 250,000 nodes in the list have been observed).&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Development</id>
		<title>Development</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Development"/>
				<updated>2014-03-25T03:46:49Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: /* vnode_create thread */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:O3X development]]&lt;br /&gt;
== Development ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Flamegraphs ===&lt;br /&gt;
&lt;br /&gt;
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.&lt;br /&gt;
&lt;br /&gt;
dtrace the kernel while running a command:&lt;br /&gt;
&lt;br /&gt;
 dtrace -x stackframes=100 -n 'profile-997 /arg0/ {&lt;br /&gt;
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks&lt;br /&gt;
&lt;br /&gt;
It will run for 60 seconds.&lt;br /&gt;
&lt;br /&gt;
Convert it to a flamegraph:&lt;br /&gt;
&lt;br /&gt;
 ./stackcollapse.pl out.stacks &amp;gt; out.folded&lt;br /&gt;
 ./flamegraph.pl out.folded &amp;gt; out.svg&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is '''rsync -ar /usr/ /BOOM/deletea/''' running:&lt;br /&gt;
&lt;br /&gt;
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Or running '''bonnie++''' in various stages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed-hover&amp;quot;&amp;gt;&lt;br /&gt;
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]&lt;br /&gt;
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order&lt;br /&gt;
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZVOL block size ===&lt;br /&gt;
&lt;br /&gt;
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write the 8 blocks back. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since compression ratios etc. cannot be reported correctly.&lt;br /&gt;
&lt;br /&gt;
This limitation is in specfs, which is applied to any BLK device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our own implementation attached as vnops. This will let us handle any block size required.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== vnode_create thread ===&lt;br /&gt;
&lt;br /&gt;
Currently, we have to protect the call to vnode_create() because it may itself call several vnops (fsync, pageout, reclaim), and we have a reclaim thread to deal with that. One issue is that reclaim can be called both from a separate thread (periodic reclaims) and as the ''calling thread'' of vnode_create. This makes locking tricky.&lt;br /&gt;
&lt;br /&gt;
One idea is to create a vnode_create thread (with each dataset). Then, in zfs_zget and zfs_znode_alloc, which call vnode_create, we simply place the newly allocated zp on the vnode_create thread's ''request list'' and resume execution. Once we have passed the &amp;quot;unlock&amp;quot; part of those functions, we can wait for the vnode_create thread to complete the request, so that we do not resume execution without the vp attached.&lt;br /&gt;
&lt;br /&gt;
In the vnode_create thread, we pop items off the list, call vnode_create (now guaranteed to run in a separate thread), and once it completes, mark the node done and signal the process which might be waiting.&lt;br /&gt;
&lt;br /&gt;
In theory this should let us handle reclaim, fsync, and pageout as in normal upstream ZFS, with no special cases required. It should also alleviate the current situation where the reclaim_list grows very large (230,000 nodes observed). &lt;br /&gt;
&lt;br /&gt;
It might mean we need to be careful in any function which can end up in zfs_znode_alloc, to make sure we have a vp attached before we resume; zfs_lookup and zfs_create, for example.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
The branch '''vnode_thread''' implements just this idea: it creates a vnode_create_thread per dataset, and when we need to call ''vnode_create()'' we simply add the '''zp''' to the list of requests, then signal the thread. The thread calls ''vnode_create()'' and, upon completion, sets '''zp-&amp;gt;z_vnode''', then signals back. The requester for '''zp''' sits in ''zfs_znode_wait_vnode()'' waiting for that signal.&lt;br /&gt;
&lt;br /&gt;
This means the ZFS code base is littered with calls to ''zfs_znode_wait_vnode()'' (46, to be exact) placed at the correct locations, i.e. '''after''' all the locks are released and ''zil_commit()'' has been called. It is possible that this number could be decreased, as the calls to ''zfs_zget()'' appear not to suffer the ''zil_commit()'' issue and could probably just block at the end of ''zfs_zget()''; it is the calls to ''zfs_mknode()'' that cause the issue.&lt;br /&gt;
&lt;br /&gt;
'''sysctl zfs.vnode_create_list''' tracks the number of '''zp''' nodes in the list waiting for ''vnode_create()'' to complete. Typically this is 0 or 1, and rarely higher.&lt;br /&gt;
&lt;br /&gt;
Appears to deadlock from time to time. &lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when ''zfs_znode_getvnode()'' is called. This new thread calls ''_zfs_znode_getvnode()'', which functions as above: it calls ''vnode_create()'', then signals back. The same ''zfs_znode_wait_vnode()'' blockers exist.&lt;br /&gt;
&lt;br /&gt;
'''sysctl zfs.vnode_create_list''' tracks the number of '''vnode_create threads''' we have started. Interestingly, this remains at 0 or 1, and is rarely higher.&lt;br /&gt;
&lt;br /&gt;
Has not yet deadlocked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Conclusions:&lt;br /&gt;
&lt;br /&gt;
* It is undesirable that ''zfs_znode_wait_vnode()'' calls are placed all over the source, and care needs to be taken with each one. It does not hurt to call it in excess, though, as no wait will happen if '''zp-&amp;gt;z_vnode''' is already set. &lt;br /&gt;
&lt;br /&gt;
* However, it is very desirable that '''vnop_reclaim''' calls are direct and can be cleaned up immediately. We no longer need to check for the '''zp without vp''' case in ''zfs_zget()''. &lt;br /&gt;
* We no longer need to lock protect '''vnop_fsync''', '''vnop_pageout''' in case they are called from ''vnode_create()''.&lt;br /&gt;
* We don't have to throttle the '''reclaim thread''' due to the list growing massive (populating the list is much faster than cleaning up a '''zp''' node; up to 250,000 nodes in the list have been observed).&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Development</id>
		<title>Development</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Development"/>
				<updated>2014-03-18T02:43:36Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: /* Development */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Development ==&lt;br /&gt;
&lt;br /&gt;
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.&lt;br /&gt;
&lt;br /&gt;
dtrace the kernel while running a command:&lt;br /&gt;
&lt;br /&gt;
 dtrace -x stackframes=100 -n 'profile-997 /arg0/ {&lt;br /&gt;
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks&lt;br /&gt;
&lt;br /&gt;
It will run for 60 seconds.&lt;br /&gt;
&lt;br /&gt;
Convert it to a flamegraph:&lt;br /&gt;
&lt;br /&gt;
 ./stackcollapse.pl out.stacks &amp;gt; out.folded&lt;br /&gt;
 ./flamegraph.pl out.folded &amp;gt; out.svg&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is '''rsync -ar /usr/ /BOOM/deletea/''' running:&lt;br /&gt;
&lt;br /&gt;
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Or running '''bonnie++''' in various stages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed-hover&amp;quot;&amp;gt;&lt;br /&gt;
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]&lt;br /&gt;
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order&lt;br /&gt;
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Development</id>
		<title>Development</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Development"/>
				<updated>2014-03-18T02:41:55Z</updated>
		
		<summary type="html">&lt;p&gt;210.172.146.228: /* Development */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Development ==&lt;br /&gt;
&lt;br /&gt;
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.&lt;br /&gt;
&lt;br /&gt;
dtrace the kernel while running a command:&lt;br /&gt;
&lt;br /&gt;
 dtrace -x stackframes=100 -n 'profile-997 /arg0/ {&lt;br /&gt;
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks&lt;br /&gt;
&lt;br /&gt;
It will run for 60 seconds.&lt;br /&gt;
&lt;br /&gt;
Convert it to a flamegraph:&lt;br /&gt;
&lt;br /&gt;
 ./stackcollapse.pl out.stacks &amp;gt; out.folded&lt;br /&gt;
 ./flamegraph.pl out.folded &amp;gt; out.svg&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is '''rsync -ar /usr/ /BOOM/deletea/''' running:&lt;br /&gt;
&lt;br /&gt;
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Or running '''bonnie++''' in various stages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:create.svg|Create files in sequential order [[File:create.svg]]|alt=[[File:create.svg]]&lt;br /&gt;
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order&lt;br /&gt;
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;/div&gt;</summary>
		<author><name>210.172.146.228</name></author>	</entry>

	</feed>