FreeNAS® is © 2011-2015 iXsystems

FreeNAS® and the FreeNAS® logo are registered trademarks of iXsystems.

FreeBSD is a registered trademark of the FreeBSD Foundation.

Written by users of the FreeNAS® network-attached storage operating system.

Version 9.3

Copyright © 2011-2015 iXsystems

This Guide covers the installation and use of FreeNAS® 9.3.

The FreeNAS® User Guide is a work in progress and relies on the contributions of many individuals. If you are interested in helping to improve the Guide, read the instructions in the README. If you use IRC Freenode, you are welcome to join the #freenas channel, where you will find other FreeNAS® users.

The FreeNAS® User Guide is freely available for sharing and redistribution under the terms of the Creative Commons Attribution License. You may copy, distribute, and translate the Guide as long as you attribute iXsystems as the original source.

FreeNAS® and the FreeNAS® logo are registered trademarks of iXsystems.

3ware® and LSI® are registered trademarks of LSI Corporation.

Active Directory® is a registered trademark of Microsoft Corporation.

Apple, Mac, and Mac OS are registered trademarks of Apple Inc.

Chelsio® is a registered trademark of Chelsio Communications.

Cisco® is a registered trademark of Cisco Systems, Inc.

Django® is a registered trademark of the Django Software Foundation.

Facebook® is a registered trademark of Facebook, Inc.

FreeBSD and the FreeBSD logo are registered trademarks of the FreeBSD Foundation.

Fusion-io is a registered trademark of Fusion-io, Inc.

Intel, the Intel logo, Pentium Inside, and Pentium are registered trademarks of Intel Corporation.

LinkedIn® is a registered trademark of LinkedIn Corporation.

Linux® is a registered trademark of Linus Torvalds.

Marvell® is a registered trademark of Marvell.

Oracle is a registered trademark of Oracle Corporation.

Twitter is a registered trademark of Twitter, Inc.

UNIX® is a registered trademark of The Open Group.

VirtualBox® is a registered trademark of Oracle.

VMware® is a registered trademark of VMware, Inc.

Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc.

Windows® is a registered trademark of Microsoft Corporation.

Typographic Conventions

The FreeNAS® 9.3 Users Guide uses the following typographic conventions:

  • Names of graphical elements such as buttons, icons, fields, columns, and boxes are enclosed within quotes. For example: click the “Test” button.
  • Menu selections are italicized and separated by arrows. For example: System ‣ Information.
  • Commands that are mentioned within text are highlighted in bold. Command examples and command output are contained in green code blocks.
  • Volume, dataset, and file names are enclosed in a blue box, like this: /getnas/music
  • Keystrokes are formatted in a blue box. For example: press Enter.
  • bold text: used to emphasize an important point.
  • italic text: used to represent device names or text that is typed into a WebGUI field.

1. Introduction

FreeNAS® is an embedded open source network-attached storage (NAS) operating system based on FreeBSD and released under a BSD license. A NAS is an operating system that has been optimized for file sharing and storage.

FreeNAS® provides a browser-based, graphical configuration interface. Its built-in networking protocols provide storage access to many types of operating systems over the network. A built-in plugin system allows extra features to be added by installing additional software.

1.1. What's New in FreeNAS® 9.3

FreeNAS® 9.3 fixes many bugs. It is based on FreeBSD 9.3-STABLE, which adds new features, an expanded list of supported hardware, and all of the security fixes issued since FreeBSD 9.3 was released.

  • Beginning with this release, FreeNAS® is available for 64-bit hardware only.
  • FreeNAS® now supports ZFS only. The “UFS Volume Manager” has been removed and disks can no longer be formatted with UFS. For compatibility, a UFS-formatted disk can still be mounted with “Import Disk” so that its data can be copied to a ZFS pool.
  • Only one type of installation file is now provided: an .iso that can be burned to CD or written to a USB stick. This file is an installer for the new version of FreeNAS® and an installation must be performed.
  • FreeNAS® now formats the boot device with ZFS and boots the system with GRUB. Multiple boot environments are supported, making it easy to recover from a failed upgrade, update, or configuration change.
  • The new installer allows the selection of multiple boot devices, so that several devices can be combined into a mirrored boot device.
  • The WebGUI can now be accessed over IPv6.
  • NFSv4 is now supported and Kerberized NFS support has been added.
  • The system logger has been replaced by syslog-ng.
  • An initial setup wizard has been added. On a fresh installation, the wizard runs after the root password has been set and can be used to quickly create a volume and shares. If you do not wish to use the wizard, simply close it. It can be re-run at any time by selecting Wizard in the tree menu.
  • You can manage boot environments in System ‣ Boot.
  • You can manage rc.conf variables in System ‣ Tunables.
  • You can manage system updates in System ‣ Update.
  • You can import or create an internal or intermediate CA (Certificate Authority) in System ‣ CAs.
  • You can import certificates or create self-signed certificates in System ‣ Certificates. Every service that supports certificates lets you select an imported or created certificate in its configuration.
  • You can upgrade the version of a ZFS pool by clicking the “Upgrade” button in Storage ‣ Volumes ‣ View Volumes.
  • You can manage VMware snapshots in Storage ‣ VMware-Snapshot.
  • The afpusers command has been added. Similar to the macusers command, it lists all users currently connected to AFP shares.
  • istgt has been replaced by kernel iSCSI. This improves support for VMware VAAI acceleration and adds support for Microsoft ODX acceleration and Windows 2012 clustering. Zvol-based LUNs can now be grown from the GUI, and LUNs can be resized dynamically without having to disconnect initiators or stop the iSCSI service.
  • Support for the Link Layer Discovery Protocol (LLDP) has been added. This allows network devices to advertise their identity, capabilities, and neighbors on a LAN.
  • bsnmpd has been replaced by Net-SNMP as the SNMP service.
  • A MIB, /usr/local/share/snmp/mibs/FREENAS-MIB.txt, has been added for generating ZFS statistics through Net-SNMP.
  • Support for WebDAV has been added and can be configured in Services ‣ WebDAV, with support for HTTP authentication and SSL encryption.
  • The Linux jail templates have been removed. In their place, a VirtualBox template provides phpVirtualBox, a browser-based VirtualBox manager, which can be used to install Linux or any other operating system.
  • Most of the FreeBSD jail templates have been removed, leaving a single FreeBSD template, to reduce confusion when choosing a template.
  • Plugins and jails now support DHCP for both IPv4 and IPv6.
  • New plugins have been added: cruciblewds, MediaBrowser, s3cmd, SickRage, Sonarr, and Syncthing. The MiniDLNA plugin has been removed because it does not support FastCGI; refer to this post for instructions on manually installing MiniDLNA in a jail.
  • The alc(4) driver has been added, providing support for Atheros AR813x/AR815x Gigabit Ethernet adapters.

The organization of the WebGUI has changed as follows:

  • System ‣ System Information has been renamed to System ‣ Information.
  • System ‣ Settings has been split into System ‣ General, System ‣ Advanced, System ‣ Email, and System ‣ System Dataset.
  • System ‣ Sysctls and System ‣ Tunables have been merged into System ‣ Tunables. A “Type” field has been added to System ‣ Tunables so that either “Loader” or “Sysctl” can be selected.
  • The NTP Servers configuration has moved to System ‣ General.
  • System ‣ Settings ‣ SSL has moved to System ‣ General ‣ Set SSL Certificate.
  • A Tasks menu has been added. Cron Jobs, Init/Shutdown Scripts, Rsync Tasks, and S.M.A.R.T. Tests have moved to this menu.
  • A Snapshots entry has been added to Storage.
  • The iSCSI configuration has moved to Sharing ‣ Block (iSCSI).
  • Services ‣ Directory Services has become its own Directory Service menu.
  • Services ‣ Directory Services ‣ Domain Controller has moved to Services ‣ Domain Controller.
  • Services ‣ LLDP has been added.
  • The logout option has moved from the upper right corner to the left tree menu.

Fields and options have been added or removed as follows:

  • A “System Update” option has been added to the “Console setup” menu, and that menu's “Reset WebGUI login credentials” option has been renamed to “Reset Root Password”.
  • A “Certificate” drop-down menu and a “WebGUI -> HTTPS Port” field have been added to System ‣ General.
  • The “System dataset” and “Syslog” options have moved from System ‣ Advanced to System ‣ System Dataset.
  • A “Performance Test” button has been added to System ‣ Advanced.
  • “Firmware Update” has moved from System ‣ Advanced to System ‣ Update ‣ Manual Update.
  • The “Directory Services” option has been removed from System ‣ General. FreeNAS® now uses the System Security Services Daemon (SSSD), which adds support for multiple directory services.
  • The “Rebuild LDAP/AD Cache” button has been removed from System ‣ Advanced; it has been renamed to “Rebuild Directory Service Cache” and now appears in the configuration screens of the various directory services.
  • An rc.conf “Type” has been added to System ‣ Tunables.
  • An “HTTP Proxy” field has been added to Network ‣ Global Configuration.
  • A “Channel” field has been added to Network ‣ IPMI.
  • A “Run Now” button has been added to Tasks ‣ Cron Jobs ‣ View Cron Jobs.
  • An “Rsync Create” checkbox has been added to Tasks ‣ Rsync Tasks ‣ Add Rsync Task.
  • The icon buttons in Storage have been renamed: “Auto Import Volume” is now “Import Volume”, “Import Volume” is now “Import Disk”, “ZFS Volume Manager” is now “Volume Manager”, and “ZFS Scrubs” is now “Scrubs”.
  • “Apply Owner (user)”, “Apply Owner (group)”, and “Apply Mode” checkboxes have been added to the “Change Permissions” screen.
  • A “Case Sensitivity” drop-down menu has been added to Storage ‣ Volumes ‣ Create ZFS Dataset.
  • An “Upgrade” button has been added to Storage ‣ Volumes ‣ View Volumes so that a ZFS pool can be upgraded without using the command line.
  • Three “Permission Type”s have been added to “Change Permissions”: Unix, Mac, and Windows.
  • The “Volume Status” screen now shows the status of the latest ZFS scrub, the number of errors, number of repaired blocks, and the date of the last scrub.
  • The “Volume Status” screen now shows the resilvering status when a disk is replaced.
  • The “Enable High Speed Ciphers” checkbox has been replaced by the “Encryption Cipher” drop-down menu in Storage ‣ Replication Tasks ‣ Add Replication Tasks . This allows you to temporarily disable encryption for the initial replication which can significantly reduce the time needed for the initial replication.
  • The “Workgroup Name” field and “Use keytab” checkbox are deprecated and have been removed from Directory Service ‣ Active Directory . The “Enable” and “Site Name” fields and the “Idmap backend”, “Winbind NSS Info”, and “SASL wrapping” drop-down menus have been added to Directory Service ‣ Active Directory . The “Kerberos Server” and “Kerberos Password Server” fields have been replaced by the “Kerberos Realm” drop-down menu.
  • The “Encryption Mode” field has been removed from Directory Service ‣ LDAP . The “Enable” and “Samba Schema” checkboxes, “SUDO Suffix”, “LDAP timeout”, and “DNS timeout” fields, and the “Kerberos Realm”, “Kerberos Keytab”, and “Idmap backend” drop-down menus have been added.
  • The “Enable” checkbox has been added to Directory Service ‣ NIS .
  • The “Use default domain” and “Enable” checkboxes and the “Idmap backend” drop-down menu have been added to Directory Service ‣ NT4 .
  • Directory Service ‣ Kerberos Realms and Directory Service ‣ Kerberos Keytabs have been added. Added keytabs are stored in the configuration database so that they persist across reboots and system upgrades.
  • The “Database Path” field has been moved from Sharing ‣ Apple (AFP) Share ‣ Add Apple (AFP) Share to Services ‣ AFP .
  • The “Hosts Allow” and “Hosts Deny” fields have been added to Sharing ‣ Apple (AFP) Share ‣ Add Apple (AFP) Share .
  • The “Bind IP Addresses” and “Global auxiliary parameters” fields have been added to Services ‣ AFP .
  • The “Zero Device Numbers” field has been moved from Services ‣ AFP to Sharing ‣ Apple (AFP) Share ‣ Add Apple (AFP) Share .
  • The “Security” selection fields have been added to Sharing ‣ Unix (NFS) Shares ‣ Add Unix (NFS) Share .
  • The “Use as home share” checkbox and “VFS Objects” fields have been added to Sharing ‣ Windows (CIFS) Shares ‣ Add Windows (CIFS) Share .
  • Sharing ‣ Block (iSCSI) ‣ Target Global Configuration has been reduced to the configuration options used by kernel iSCSI. The “ISNS Servers” and the “Pool Available Size Threshold” fields have been added.
  • The “Available Size Threshold”, “Enable TPC”, and “Xen initiator compat mode” fields have been added to Sharing ‣ Block (iSCSI) ‣ Extents ‣ Add Extent .
  • The “Target Flags” and “Queue Depth” fields are now deprecated and have been removed from Sharing ‣ Block (iSCSI) ‣ Targets ‣ Add Target .
  • The “Domain logons”, “Obey pam restrictions”, and “Bind IP Addresses” checkboxes and the “Idmap Range Low” and “Idmap Range High” fields have been added to Services ‣ CIFS . The “Enable home directories”, “Enable home directories browsing”, “Home directories”, and “Homes auxiliary parameters” fields have been removed from Services ‣ CIFS as they have been replaced by the “Use as home share” checkbox in Sharing ‣ Windows (CIFS) Shares ‣ Add Windows (CIFS) Share .
  • Services ‣ Directory Services has been renamed to Services ‣ Domain Controller .
  • The “Kerberos Realm” drop-down menu has been added to Services ‣ Domain Controller .
  • The “IP Server” field has been added to Services ‣ Dynamic DNS .
  • The “TLS use implicit SSL” checkbox has been removed from Services ‣ FTP as this feature is deprecated. The “Certificate and private key” field has been replaced by the “Certificate” drop-down menu which is integrated into the new Certification Manager, allowing one to select their own certificates.
  • The “Enable NFSv4” checkbox has been added to Services ‣ NFS .
  • The “vanilla” option has been removed from Jails ‣ Add Jails as it was confusing.
  • The “NIC” drop-down menu has been added to Jails ‣ Add Jails so that the interface to use for jail connections can be specified.
  • The “Upload Plugin” button has been removed from the “Jails” screen. To install a plugin, use “Plugins” instead.
  • The “ZFS” tab has been added to Reporting , providing graphs for “ARC Size” and “ARC Hit Ratio”.

1.2. FreeNAS® Changes Since 9.3-RELEASE

Beginning with version 9.3, FreeNAS® uses a “rolling release” model instead of point releases. The new Update mechanism makes it easy to keep the system up-to-date with security fixes, bug fixes, and new features. Since some updates affect the WebGUI, this section lists the notable changes that have occurred since 9.3-RELEASE.

  • Samba has been updated to 4.1.17, which addresses a security vulnerability.
  • Netatalk has been updated to 3.1.7.
  • SSSD has been updated to 1.11.7.
  • A driver for the Intel X710 10GbE adapter has been added.
  • The mrsas(4) LSI MegaRAID driver has been added.
  • Support has been added for Mach Xtreme MX-ES/MXUB3 and Kingston DT100G2 USB drives.
  • Man pages have been added and can be viewed from Shell.
  • The boot pool now uses LZ4 compression to save space on the boot device.
  • Hot spare support has been added. If a spare device has been added to a pool, the system will automatically replace a failed device in that pool with the spare.
  • An installation of STABLE, as of 201501212031, now creates two boot environments. The system will boot into the default boot environment and users can make their changes and update from this version. The other boot environment, named Initial-Install can be booted into if the system needs to be returned to a pristine, non-configured version of the installation.
  • The “Create backup” and “Restore from a backup” options have been added to the FreeNAS® console setup menu shown in Figure 3a.
  • The “Microsoft Account” checkbox has been added to Account ‣ Users ‣ Add User .
  • The ability to set the boot pool scrub interval has been added to System ‣ Boot .
  • The size of and the amount of used space in the boot pool is displayed in System ‣ Boot .
  • The “Enable automatic upload of kernel crash dumps and daily telemetry” checkbox has been added to System ‣ Advanced .
  • A “Backup” button has been added to System ‣ Advanced .
  • The “Periodic Notification User” drop-down menu has been added to System ‣ Advanced .
  • The system will issue an alert if an update fails and the details of the failure will be written to /data/update.failed .
  • The “Confirm Passphrase” field has been added to System ‣ CAs ‣ Import CA and System ‣ Certificates ‣ Import Certificate .
  • The “Support” tab has been added to System ‣ Support , providing a convenient method for reporting a bug or requesting a new feature.
  • The “Rsync Create” checkbox has been renamed to “Validate Remote Path” and the “Delay Updates” checkbox has been added to Tasks ‣ Rsync Tasks ‣ Add Rsync Task .
  • A reboot is no longer required when creating Link Aggregations .
  • The “Exclude System Dataset” checkbox has been added to Storage ‣ Periodic Snapshot Tasks ‣ Add Periodic Snapshot .
  • The /usr/local/bin/test_ssh.py script has been added for testing the SSH connection for a defined replication task.
  • The “Encryption Mode” and “Certificate” drop-down menus have been added to Directory Service ‣ Active Directory .
  • A pop-up warning will appear if you go to change Directory Service ‣ Active Directory ‣ Advanced Mode -> Idmap backend as selecting the wrong backend will break Active Directory integration.
  • The “Schema” drop-down menu has been added to Directory Service ‣ LDAP .
  • The “Kerberos Settings” tab has been added to Directory Service .
  • The ability to “Online” a previously offlined disk has been added to Storage ‣ Volumes ‣ Volume Status .
  • The “Periodic Snapshot Task” drop-down menu has been added to Sharing ‣ Windows (CIFS) ‣ Add Windows (CIFS) Share .
  • All available VFS objects have been added to Sharing ‣ Windows (CIFS) ‣ Add Windows (CIFS) Share ‣ Advanced Mode ‣ VFS Objects and the “aio_pthread” and “streams_xattr” VFS objects are enabled by default.
  • The “Pool Available Size Threshold” field has been renamed to “Pool Available Space Threshold” in Sharing ‣ Block (iSCSI) ‣ Target Global Configuration .
  • The “Logical Block Size” field has been moved from Sharing ‣ Block (iSCSI) ‣ Targets ‣ Add Target to Sharing ‣ Block (iSCSI) ‣ Extents ‣ Add Extent .
  • The “Disable Physical Block Size Reporting” checkbox, “Available Space Threshold” field, and “LUN RPM” drop-down menu have been added to Sharing ‣ Block (iSCSI) ‣ Extents ‣ Add Extent .
  • The “Home share name” field has been added to Services ‣ AFP .
  • The “DNS Backend” field has been removed from Services ‣ Domain Controller as BIND is not included in FreeNAS®.
  • The “Require Kerberos for NFSv4” checkbox has been added to Services ‣ NFS .
  • The “SNMP v3 Support” checkbox and the “Username” and “Password” fields have been added to Services ‣ SNMP so that SNMPv3 can be configured.
  • The MediaBrowser Plugin has been renamed to Emby.
  • The Jails ‣ Add Jails button has been renamed to “Add Jail”.
  • A “Restart” button is now available when you click the entry for an installed jail.
  • The “Mtree” field and “Read-only” checkbox have been added to Jails ‣ Templates ‣ Add Jail Templates .
  • The “Mtree” field has been added to the “Edit” options for existing jail templates.
  • The -C, -D, and -j options have been added to freenas-debug .
  • A Support Icon has been added to the top menubar, providing a convenient method for reporting a bug or requesting a new feature.
  • The “Help” icon has been replaced by the Guide icon, providing an offline version of the FreeNAS® User Guide (this documentation).
  • A warning message now occurs if you stop the iSCSI service when initiators are connected. Type ctladm islist to determine the names of the connected initiators.
  • An alert will be generated when a new update becomes available.
  • An alert will be generated when a S.M.A.R.T. error occurs.

1.3. Hardware Recommendations

Since FreeNAS® 9.3 is based on FreeBSD 9.3, it supports the same hardware found in the FreeBSD Hardware Compatibility List . Supported processors are listed in section 2.1 amd64 . Beginning with version 9.3, FreeNAS® is only available for 64-bit (also known as amd64) processors.

Note

beginning with version 9.3, FreeNAS® boots from a GPT partition. This means that the system BIOS must be able to boot using either the legacy BIOS firmware interface or EFI.

Actual hardware requirements will vary depending upon what you are using your FreeNAS® system for. This section provides some guidelines to get you started. You can also skim through the FreeNAS® Hardware Forum for performance tips from other FreeNAS® users or to post questions regarding the hardware best suited to meet your requirements. This forum post provides some specific recommendations if you are planning on purchasing hardware. Refer to Building, Burn-In, and Testing your FreeNAS system for detailed instructions on how to test new hardware.

1.3.1. RAM

The best way to get the most out of your FreeNAS® system is to install as much RAM as possible. The recommended minimum is 8 GB of RAM. The more RAM, the better the performance, and the FreeNAS® Forums provide anecdotal evidence from users on how much performance is gained by adding more RAM.

Depending upon your use case, your system may require more RAM. Here are some general rules of thumb:

  • If you plan to use ZFS deduplication, ensure you have at least 5 GB RAM per TB of storage to be deduplicated (a worked example follows this list).
  • If you plan to use Active Directory with a lot of users, add an additional 2 GB of RAM for winbind’s internal cache.
  • If you plan on Using the phpVirtualBox Template , increase the minimum RAM size by the amount of virtual memory you configure for the virtual machines. For example, if you plan to install two virtual machines, each with 4GB of virtual memory, the system will need at least 16GB of RAM.
  • If you plan to use iSCSI, install at least 16GB of RAM, if performance is not critical, or at least 32GB of RAM if performance is a requirement.
  • If you are installing FreeNAS® on a headless system, disable the shared memory settings for the video card in the BIOS.
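
To make the deduplication rule of thumb concrete, consider a hypothetical pool that will hold 20 TB of deduplicated data: it needs at least 20 TB × 5 GB/TB = 100 GB of RAM just for the deduplication tables, in addition to the baseline recommendation, which is why deduplication is generally only practical on systems with very large amounts of RAM.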

If your system supports it and your budget allows for it, install ECC RAM. While more expensive, ECC RAM is highly recommended as it prevents in-flight corruption of data before the error-correcting properties of ZFS come into play, thus providing consistency for the checksumming and parity calculations performed by ZFS. If you consider your data to be important, use ECC RAM. This Case Study describes the risks associated with memory corruption.

If you don’t have at least 8GB of RAM, you should consider getting more powerful hardware before using FreeNAS® to store your data. Plenty of users expect FreeNAS® to function with less than these requirements, just at reduced performance. The bottom line is that these minimums are based on the feedback of many users. Users that do not meet these requirements and who ask for help in the forums or IRC will likely be ignored, because it is already well documented that FreeNAS® may not behave properly with less than 8GB of RAM.

1.3.2. Compact or USB Flash

The FreeNAS® operating system is installed to at least one device that is separate from the storage disks. The device can be a USB stick, compact flash, or SSD. Technically, it can also be installed onto a hard drive, but this is discouraged as that drive will then become unavailable for data storage.

Note

if you will be burning the installation file to a USB stick, you will need two USB slots, each with an inserted USB device, where one USB stick contains the installer and the other USB stick is selected to install into. When performing the installation, be sure to select the correct USB device to install to. In other words, you can not install FreeNAS® into the same USB stick that you boot the installer from. After installation, remove the USB stick containing the installer, and if necessary, configure the BIOS to boot from the remaining USB stick.

When determining the type and size of device to install the operating system to, keep the following points in mind:

  • the bare minimum size is 4GB. This provides room for the operating system and two boot environments. Since each update creates a boot environment, the recommended minimum is at least 8GB or 16GB as this provides room for more boot environments.
  • if you plan to make your own boot environments, budget about 1GB of storage per boot environment. Consider deleting older boot environments once you are sure that a boot environment is no longer needed. Boot environments can be created and deleted using System ‣ Boot .
  • when using a USB stick, it is recommended to use a name brand USB stick as ZFS will quickly find errors on cheap, poorly made sticks.
  • when using a USB stick, USB 3.0 support is disabled by default as it currently is not compatible with some hardware, including Haswell (Lynx Point) chipsets. If you receive a “failed with error 19” message when trying to boot FreeNAS®, make sure that xHCI/USB3 is disabled in the system BIOS. While this will downclock the USB ports to 2.0, the bootup and shutdown times will not be significantly different. To see if USB 3.0 support works with your hardware, follow the instructions in Tunables to create a “Tunable” named xhci_load , set its value to YES , and reboot the system (see the sketch after this list).
  • if a reliable boot disk is required, use two identical devices and select them both during the installation. Doing so will create a mirrored boot device.
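
A loader Tunable created in the GUI corresponds to a standard FreeBSD loader.conf(5) entry, so as a minimal sketch, the xhci_load Tunable described above is equivalent to this line (FreeNAS® writes the entry for you when the Tunable is saved):

xhci_load="YES"

After rebooting, running kldstat | grep xhci from Shell should show whether the xhci driver module was loaded.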

1.3.3. Storage Disks and Controllers

The Disk section of the FreeBSD Hardware List lists the supported disk controllers. In addition, support for 3ware 6 Gbps RAID controllers has been added, along with the CLI utility tw_cli for managing 3ware RAID controllers.

FreeNAS® supports hot pluggable drives. To use this feature, make sure that AHCI is enabled in the BIOS.

If you need reliable disk alerting and immediate reporting of a failed drive, use an HBA such as an LSI MegaRAID controller or a 3ware twa-compatible controller. More information about LSI cards and FreeNAS® can be found in this forum post .

Suggestions for testing disks before adding them to a RAID array can be found in this forum post .

This article provides a good overview of hard drives which are well suited for a NAS.

If you have some money to spend and wish to optimize your disk subsystem, consider your read/write needs, your budget, and your RAID requirements:

  • If you have steady, non-contiguous writes, use disks with low seek times. Examples are 10K or 15K SAS drives which cost about $1/GB. An example configuration would be six 600 GB 15K SAS drives in a RAID 10 which would yield 1.8 TB of usable space or eight 600 GB 15K SAS drives in a RAID 10 which would yield 2.4 TB of usable space.
  • 7200 RPM SATA disks are designed for single-user sequential I/O and are not a good choice for multi-user writes.

If you have the budget and high performance is a key requirement, consider a Fusion-io card which is optimized for massive random access. These cards are expensive and are suited for high-end systems that demand performance. A Fusion-io card can be formatted with a filesystem and used as direct storage; when used this way, it does not have the write issues typically associated with a flash device. A Fusion-io card can also be used as a cache device when your ZFS dataset size is bigger than your RAM. Due to the increased throughput, systems running these cards typically use multiple 10 GigE network interfaces.

If you will be using ZFS, Disk Space Requirements for ZFS Storage Pools recommends a minimum of 16 GB of disk space. Due to the way that ZFS creates swap, you can not format less than 3 GB of space with ZFS . However, on a drive that is below the minimum recommended size you lose a fair amount of storage space to swap: for example, on a 4 GB drive, 2 GB will be reserved for swap.

If you are new to ZFS and are purchasing hardware, read through ZFS Storage Pools Recommendations first.

ZFS uses dynamic block sizing, meaning that it is capable of striping disks of different sizes. However, if you care about performance, use disks of the same size. Further, when creating a RAIDZ*, only the capacity of the smallest disk will be used from each disk.

1.3.4. Network Interfaces

The Ethernet section of the FreeBSD Hardware Notes indicates which interfaces are supported by each driver. While many interfaces are supported, FreeNAS® users have seen the best performance from Intel and Chelsio interfaces, so consider these brands if you are purchasing a new NIC. Realtek interfaces will perform poorly under CPU load, as these chipsets do not provide their own processors.

At a minimum, a GigE interface is recommended. While GigE interfaces and switches are affordable for home use, modern disks can easily saturate 110 MB/s. If you require higher network throughput, you can bond multiple GigE cards together using the LACP type of Link Aggregations . However, the switch will need to support LACP which means you will need a more expensive managed switch.
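
In FreeNAS®, link aggregation is configured from Network ‣ Link Aggregations rather than at the command line. For readers curious what that GUI configuration amounts to, here is a rough FreeBSD-level sketch, assuming two Intel interfaces named em0 and em1 and an example address of 192.168.1.10/24:

ifconfig em0 up
ifconfig em1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 192.168.1.10/24 up

Both ports must connect to an LACP-capable switch with the corresponding switch ports grouped into a channel.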

If network performance is a requirement and you have some money to spend, use 10 GigE interfaces and a managed switch. If you are purchasing a managed switch, consider one that supports LACP and jumbo frames as both can be used to increase network throughput. Refer to the 10 Gig Networking Primer for more information.

Note

at this time the following are not supported: InfiniBand, FibreChannel over Ethernet, or wireless interfaces.

If network speed is a requirement, consider both your hardware and the type of shares that you create. On the same hardware, CIFS will be slower than FTP or NFS as Samba is single-threaded . If you will be using CIFS, use a fast CPU.

Wake on LAN (WOL) support is dependent upon the FreeBSD driver for the interface. If the driver supports WOL, it can be enabled using ifconfig(8) . To determine if WOL is supported on a particular interface, specify the interface name to the following command. In this example, the capabilities line indicates that WOL is supported for the re0 interface:

ifconfig -m re0
re0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=42098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWTSO>
capabilities=5399b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
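
If the capabilities line lists WOL support but the options line does not show it enabled, it can be turned on with ifconfig(8). A sketch, using the same example interface:

ifconfig re0 wol_magic

This enables wake on magic packet; the wol_ucast and wol_mcast options work the same way for the other WOL modes.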

If you find that WOL support is indicated but not working for a particular interface, create a bug report using the instructions in Support .

1.4. ZFS Primer

ZFS is an advanced, modern filesystem that was specifically designed to provide features not available in traditional UNIX filesystems. It was originally developed at Sun with the intent to open source the filesystem so that it could be ported to other operating systems. After the Oracle acquisition of Sun, some of the original ZFS engineers founded OpenZFS in order to provide continued, collaborative development of the open source version. To differentiate itself from Oracle ZFS version numbers, OpenZFS uses feature flags. Feature flags are used to tag features with unique names in order to provide portability between OpenZFS implementations running on different platforms, as long as all of the feature flags enabled on the ZFS pool are supported by both platforms. FreeNAS® uses OpenZFS and each new version of FreeNAS® keeps up-to-date with the latest feature flags and OpenZFS bug fixes.

Here is an overview of the features provided by ZFS:

ZFS is a transactional, Copy-On-Write (COW) filesystem. For each write request, a copy is made of the associated disk block(s) and all changes are made to the copy rather than to the original block(s). Once the write is complete, all block pointers are changed to point to the new copy. This means that ZFS always writes to free space and most writes will be sequential. When ZFS has direct access to disks, it will bundle multiple read and write requests into transactions; most filesystems can not do this as they only have access to disk blocks. A transaction either completes or fails, meaning there will never be a write-hole and a filesystem checker utility is not necessary. Because of the transactional design, as additional storage capacity is added it becomes immediately available for writes; to rebalance the data, one can copy it to re-write the existing data across all available disks. As a 128-bit filesystem, the maximum filesystem or file size is 16 exabytes.

ZFS was designed to be a self-healing filesystem . As ZFS writes data, it creates a checksum for each disk block it writes. As ZFS reads data, it validates the checksum for each disk block it reads. If ZFS identifies a disk block checksum error on a pool that is mirrored or uses RAIDZ*, ZFS will fix the corrupted data with the correct data. Since some disk blocks are rarely read, regular scrubs should be scheduled so that ZFS can read all of the data blocks in order to validate their checksums and correct any corrupted blocks. While multiple disks are required in order to provide redundancy and data correction, ZFS will still provide data corruption detection to a system with one disk. FreeNAS® automatically schedules a monthly scrub for each ZFS pool and the results of the scrub will be displayed in View Volumes . Reading the scrub results can provide an early indication of possible disk failure.
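
Although FreeNAS® schedules a monthly scrub for you, a scrub can also be started and watched from Shell. A quick sketch, assuming a pool named tank:

zpool scrub tank
zpool status tank

The status output shows the scrub's progress and any checksum errors that were found and repaired.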

Unlike traditional UNIX filesystems, you do not need to define partition sizes at filesystem creation time . Instead, you feed a certain number of disk(s) at a time (known as a vdev) to a ZFS pool and create filesystems from the pool as needed. As more capacity is needed, identical vdevs can be striped into the pool. In FreeNAS®, Volume Manager can be used to create or extend ZFS pools. Once a pool is created, it can be divided into dynamically-sized datasets or fixed-size zvols as needed. Datasets can be used to optimize storage for the type of data being stored as permissions and properties such as quotas and compression can be set on a per-dataset level. A zvol is essentially a raw, virtual block device which can be used for applications that need raw-device semantics such as iSCSI device extents.
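
The datasets and zvols created by Volume Manager map directly to ordinary ZFS commands. As a sketch, assuming a pool named tank, the first command creates a dataset with a 100 GB quota and the second creates a 10 GB zvol that could back an iSCSI device extent:

zfs create -o quota=100G tank/documents
zfs create -V 10G tank/iscsivol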

ZFS supports real-time data compression . Compression happens when a block is written to disk, but only if the written data will benefit from compression. When a compressed block is accessed, it is automatically decompressed. Since compression happens at the block level, not the file level, it is transparent to any applications accessing the compressed data. By default, ZFS pools made using FreeNAS® version 9.2.1 or later will use the recommended LZ4 compression algorithm.
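
The compression property is set per-dataset in the GUI and can be inspected or changed from Shell. For example, assuming a hypothetical dataset named tank/documents:

zfs get compression,compressratio tank/documents
zfs set compression=lz4 tank/documents

The compressratio property reports how much space compression is actually saving for that dataset.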

ZFS provides low-cost, instantaneous snapshots of the specified pool, dataset, or zvol. Due to COW, the initial size of a snapshot is 0 bytes and the size of the snapshot increases over time as changes to the files in the snapshot are written to disk. Snapshots can be used to provide a copy of data at the point in time the snapshot was created. When a file is deleted, its disk blocks are added to the free list; however, the blocks for that file in any existing snapshots are not added to the free list until all referencing snapshots are removed. This means that snapshots provide a clever way of keeping a history of files, should you need to recover an older copy of a file or a deleted file. For this reason, many administrators take snapshots often (e.g. every 15 minutes), store them for a period of time (e.g. for a month), and store them on another system. Such a strategy allows the administrator to roll the system back to a specific time or, if there is a catastrophic loss, an off-site snapshot can restore the system up to the last snapshot interval (e.g. within 15 minutes of the data loss). Snapshots are stored locally but can also be replicated to a remote ZFS pool. During replication, ZFS does not do a byte-for-byte copy but instead converts a snapshot into a stream of data. This design means that the ZFS pool on the receiving end does not need to be identical and can use a different RAIDZ level, volume size, compression settings, etc.
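
Periodic Snapshot Tasks and Replication Tasks automate this from the GUI, but the underlying operations can be sketched with plain ZFS commands, assuming a dataset named tank/documents and a pool named backup on a remote host at 10.0.0.2:

zfs snapshot tank/documents@2015-03-01
zfs send tank/documents@2015-03-01 | ssh 10.0.0.2 zfs recv backup/documents

Subsequent replications would use an incremental send (zfs send -i) between the previous and current snapshots, transferring only the changed blocks.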

ZFS boot environments provide a method for recovering from a failed upgrade . Beginning with FreeNAS® version 9.3, a snapshot of the dataset the operating system resides on is automatically taken before an upgrade or a system update. This saved boot environment is automatically added to the GRUB boot loader. Should the upgrade or configuration change fail, simply reboot and select the previous boot environment from the boot menu. Users can also create their own boot environments in System ‣ Boot as needed, for example before making configuration changes. This way, the system can be rebooted into a snapshot of the system that did not include the new configuration changes.

ZFS provides a write cache in RAM as well as a ZFS Intent Log (ZIL). The ZIL is a temporary storage area for synchronous writes until they are written asynchronously to the ZFS pool. If the system has many synchronous writes where the integrity of the write matters, such as from a database server or when using NFS over ESXi, performance can be increased by adding a dedicated log device, or slog, using Volume Manager . More detailed explanations can be found in this forum post and in this blog post . A dedicated log device will have no effect on CIFS, AFP, or iSCSI as these protocols rarely use synchronous writes. When creating a dedicated log device, it is recommended to use a fast SSD with a supercapacitor or a bank of capacitors that can handle writing the contents of the SSD’s RAM to the SSD. The zilstat utility can be run from Shell to help determine if the system would benefit from a dedicated ZIL device. See this website for usage information. If you decide to create a dedicated log device to speed up NFS writes, the SSD can be half the size of system RAM as anything larger than that is unused capacity. The log device does not need to be mirrored on a pool running ZFSv28 or feature flags as the system will revert to using the ZIL if the log device fails and only the data in the device which had not been written to the pool will be lost (typically the last few seconds of writes). You can replace the lost log device in the View Volumes ‣ Volume Status screen. Note that a dedicated log device can not be shared between ZFS pools and that the same device cannot hold both a log and a cache device.

ZFS provides a read cache in RAM, known as the ARC, to reduce read latency. FreeNAS® adds ARC stats to top(1) and includes the arc_summary.py and arcstat.py tools for monitoring the efficiency of the ARC. If an SSD is dedicated as a cache device, it is known as an L2ARC and ZFS uses it to store more reads which can increase random read performance. However, adding an L2ARC is not a substitute for insufficient RAM as L2ARC needs RAM in order to function. If you do not have enough RAM for a good sized ARC, you will not be increasing performance, and in most cases you will actually hurt performance and could potentially cause system instability. RAM is always faster than disks, so always add as much RAM as possible before determining if the system would benefit from a L2ARC device. If you have a lot of applications that do large amounts of random reads, on a dataset small enough to fit into the L2ARC, read performance may be increased by adding a dedicated cache device using Volume Manager . SSD cache devices only help if your active data is larger than system RAM, but small enough that a significant percentage of it will fit on the SSD. As a general rule of thumb, an L2ARC should not be added to a system with less than 64 GB of RAM and the size of an L2ARC should not exceed 5x the amount of RAM. In some cases, it may be more efficient to have two separate pools: one on SSDs for active data and another on hard drives for rarely used content. After adding an L2ARC, monitor its effectiveness using tools such as arcstat . If you need to increase the size of an existing L2ARC, you can stripe another cache device using Volume Manager . The GUI will always stripe L2ARC, not mirror it, as the contents of L2ARC are recreated at boot. Losing an L2ARC device will not affect the integrity of the pool, but may have an impact on read performance, depending upon the workload and the ratio of dataset size to cache size. Note that a dedicated L2ARC device can not be shared between ZFS pools.
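
Volume Manager handles adding log and cache devices from the GUI; the equivalent command-line operations are zpool add with a log or cache vdev. A sketch, assuming a pool named tank and two SSDs that appear as ada4 and ada5:

zpool add tank log ada4
zpool add tank cache ada5

Afterwards, zpool status tank lists the log and cache devices in their own sections, and arcstat.py can be used to monitor ARC and L2ARC efficiency.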

ZFS was designed to provide redundancy while addressing some of the inherent limitations of hardware RAID such as the write-hole and corrupt data written over time before the hardware controller provides an alert. ZFS provides three levels of redundancy, known as RAIDZ*, where the number after the RAIDZ indicates how many disks per vdev can be lost without losing data. ZFS also supports mirrors, with no restrictions on the number of disks in the mirror. ZFS was designed for commodity disks so no RAID controller is needed. While ZFS can also be used with a RAID controller, it is recommended that the controller be put into JBOD mode so that ZFS has full control of the disks. When determining the type of ZFS redundancy to use, consider whether your goal is to maximize disk space or performance (a command-line sketch of these layouts follows the list):

  • RAIDZ1 maximizes disk space and generally performs well when data is written and read in large chunks (128K or more).
  • RAIDZ2 offers better data availability and significantly better mean time to data loss (MTTDL) than RAIDZ1.
  • A mirror consumes more disk space but generally performs better with small random reads. For better performance, a mirror is strongly favored over any RAIDZ, particularly for large, uncacheable, random read loads.
  • Using more than 12 disks per vdev is not recommended. The recommended number of disks per vdev is between 3 and 9. If you have more disks, use multiple vdevs.
  • Some older ZFS documentation recommends that a certain number of disks is needed for each type of RAIDZ in order to achieve optimal performance. On systems using LZ4 compression, which is the default for FreeNAS® 9.2.1 and higher, this is no longer true. See ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ for details.
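
For concreteness, here is how the redundancy levels above look when a new pool is created at the command line; in FreeNAS® the same layouts are normally built with Volume Manager. A sketch with hypothetical disk names, where each command is an alternative way to create a pool named tank:

# three disks, single parity
zpool create tank raidz1 ada1 ada2 ada3
# four disks, double parity
zpool create tank raidz2 ada1 ada2 ada3 ada4
# two striped two-way mirrors
zpool create tank mirror ada1 ada2 mirror ada3 ada4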

The following resources can also help you determine the RAID configuration best suited to your storage needs:

Warning

NO RAID SOLUTION PROVIDES A REPLACEMENT FOR A RELIABLE BACKUP STRATEGY. BAD STUFF CAN STILL HAPPEN AND YOU WILL BE GLAD THAT YOU BACKED UP YOUR DATA WHEN IT DOES. See Periodic Snapshot Tasks and Replication Tasks if you would like to use replicated ZFS snapshots as part of your backup strategy.

While ZFS provides many benefits, there are some caveats to be aware of:

  • At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%. If you are using iSCSI, it is recommended to not let the pool go over 50% capacity to prevent fragmentation issues.
  • When considering the number of disks to use per vdev, consider the size of the disks and the amount of time required for resilvering, which is the process of rebuilding the vdev. The larger the size of the vdev, the longer the resilvering time. When replacing a disk in a RAIDZ*, it is possible that another disk will fail before the resilvering process completes. If the number of failed disks exceeds the number allowed per vdev for the type of RAIDZ, the data in the pool will be lost. For this reason, RAIDZ1 is not recommended for drives over 1 TB in size.
  • It is recommended to use drives of equal sizes. While ZFS can create a pool using disks of differing sizes, the capacity will be limited by the size of the smallest disk.

If you are new to ZFS, the Wikipedia entry on ZFS provides an excellent starting point to learn more about its features. These resources are also useful to bookmark and refer to as needed: