Well, I wanted to see how this was done, so I decided to play with the new release of openSUSE. Here are the details.
Hardware:
HP Pavilion, AMD Athlon™ II X4 635 CPU, 8 GB RAM, GeForce 8800GT GPU
sda - OCZ Vertex 4 128GB (at 6Gb/s)
sdb - WD Caviar Blue 500GB (at 6Gb/s)
Operating System: openSUSE 13.2 x86_64 Gnome 3.14.1
Standard install, adding the online repositories and applying updates during install. The bcache backing device and cache partitions were created (no filesystem or mount point) via the expert partitioner during install.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 512M 0 part /boot
├─sda2 8:2 0 50G 0 part /
├─sda3 8:3 0 8G 0 part [SWAP]
└─sda4 8:4 0 60.8G 0 part
sdb 8:16 0 465.8G 0 disk
└─sdb1 8:17 0 465.8G 0 part
By default the bcache tools and a testing program are not installed, so you need to add them;
zypper in bcache-tools fio
At this point you may want to test the disks with fio (See http://permalink.gmane.org/gmane.linux.kernel.bcache.devel/2456) to check before and after.
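The exact job files are in that link; just as a rough illustration (my own parameters, not the ones from the post), a non-destructive random-read baseline against the raw backing partition could look like this;
fio --name=baseline-randread --filename=/dev/sdb1 --readonly --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based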
If you're wanting to get to the nitty-gritty, then carry on.
First create the backing device (hdd);
make-bcache -B /dev/sdb1
UUID: 88ec2a38-2c2c-44d3-aaab-29e4be7cfe35
Set UUID: 0ccc0cb9-12f2-43d5-be87-8841f2a5192b
version: 1
block_size: 1
data_offset: 16
Note: if it errors out about an old filesystem (it did for me), you may need to run;
wipefs -a /dev/sdXN
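In my case the stale signature was on the backing partition, so (after double-checking the device name) that was;
wipefs -a /dev/sdb1
Then re-run the make-bcache -B command above.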
Now create the caching device (ssd);
make-bcache -C /dev/sda4
UUID: f30bfdec-d90c-471f-a305-34d3ca39a486
Set UUID: 8aba2b5a-4518-4f73-b95f-1a4ca1fce3d1
version: 0
nbuckets: 124398
block_size: 1
bucket_size: 1024
nr_in_set: 1
nr_this_dev: 0
first_bucket: 1
Now check all is good and attach the device via its UUID;
ls /sys/fs/bcache/
8aba2b5a-4518-4f73-b95f-1a4ca1fce3d1 register register_quiet
echo "8aba2b5a-4518-4f73-b95f-1a4ca1fce3d1" > /sys/block/bcache0/bcache/attach
Set the cache mode to writeback and verify it changed;
cat /sys/block/bcache0/bcache/cache_mode
[writethrough] writeback writearound none
echo writeback > /sys/block/bcache0/bcache/cache_mode
cat /sys/block/bcache0/bcache/cache_mode
writethrough [writeback] writearound none
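With writeback, writes are acknowledged once they hit the cache and flushed to the spinning disk in the background. If you're curious how much is still waiting to be flushed, the same sysfs directory has a dirty_data attribute (per the kernel bcache documentation);
cat /sys/block/bcache0/bcache/dirty_data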
Now we have the backing device and bcache all attached;
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 512M 0 part /boot
├─sda2 8:2 0 50G 0 part /
├─sda3 8:3 0 8G 0 part [SWAP]
└─sda4 8:4 0 60.8G 0 part
└─bcache0 253:0 0 465.8G 0 disk
sdb 8:16 0 465.8G 0 disk
└─sdb1 8:17 0 465.8G 0 part
└─bcache0 253:0 0 465.8G 0 disk
Create the filesystem of your choice, in my case ext4;
mkfs.ext4 /dev/bcache0
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 122096388 4k blocks and 30531584 inodes
Filesystem UUID: ed708ca3-eb49-4d61-991a-947c8a6eb9cc
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Edit /etc/fstab and create the mount point, in my case /data;
cd /
mkdir data
vi /etc/fstab
(to add)
/dev/bcache0 /data ext4 defaults 1 2
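One note of my own (not from the reference): /dev/bcacheN numbering depends on the order devices get registered, so if you add more bcache devices later it may be safer to mount by the filesystem UUID that mkfs printed above;
UUID=ed708ca3-eb49-4d61-991a-947c8a6eb9cc /data ext4 defaults 1 2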
Reboot or manually mount and enjoy
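To mount it right away without a reboot (using the fstab entry above);
mount /data
df -h /data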
Summary;
This is the reference I used (with details on how to remove the bcache device):
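For what it's worth, removal goes through the same sysfs interface. Roughly (a sketch based on the kernel bcache documentation, not the exact steps from that reference);
echo 8aba2b5a-4518-4f73-b95f-1a4ca1fce3d1 > /sys/block/bcache0/bcache/detach
echo 1 > /sys/block/bcache0/bcache/stop
echo 1 > /sys/fs/bcache/8aba2b5a-4518-4f73-b95f-1a4ca1fce3d1/unregister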
With my rotating disk it was running around 7K IOPS; with bcache it goes up to around 22K IOPS. Watching with iostat, you see the data being written (and completing) on bcache0, then see it being flushed out to the disk, all very kewl.
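If you want to watch that yourself, something like this works (extended stats every 2 seconds; the device list is just what's in this box);
iostat -x sda sdb bcache0 2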
Both disks run at SATA III via a PCIe controller; the four onboard ports are SATA II and unused. I think I will get another PCIe controller and some SATA III disks for a RAID setup.
Additional Info (Added 11/08/2014);
Saw this post on the Mailing List;
http://permalink.gmane.org/gmane.linux.kernel.bcache.devel/2739
So ran the fio.bash test script from;
http://www.ansatt.hig.no/erikh/sysadm/fio.bash
Here is the bcache device;
./fio.bash /data/scratch/
Baseline reads with hdparm
/dev/bcache0:
Timing cached reads: 4574 MB in 2.00 seconds = 2287.29 MB/sec
Timing buffered disk reads: 364 MB in 3.01 seconds = 120.80 MB/sec
Sequential read
read : io=1104.9MB, bw=112224KB/s, iops=28056, runt= 10081msec
Sequential write
write: io=1286.6MB, bw=131526KB/s, iops=32881, runt= 10016msec
Random read
read : io=18568KB, bw=1768.6KB/s, iops=442, runt= 10499msec
Random write
write: io=1150.3MB, bw=117707KB/s, iops=29426, runt= 10007msec
Mixed 70/30 random read and write with 8K block size
read : io=29144KB, bw=2742.8KB/s, iops=342, runt= 10626msec
write: io=12784KB, bw=1203.9KB/s, iops=150, runt= 10626msec
Here is the SSD;
./fio.bash /root/
Baseline reads with hdparm
/dev/sda2:
Timing cached reads: 4418 MB in 2.00 seconds = 2211.37 MB/sec
Timing buffered disk reads: 586 MB in 3.01 seconds = 194.80 MB/sec
Sequential read
read : io=1788.2MB, bw=182999KB/s, iops=45749, runt= 10006msec
Sequential write
write: io=1579.1MB, bw=161690KB/s, iops=40422, runt= 10006msec
Random read
read : io=1554.3MB, bw=159028KB/s, iops=39756, runt= 10008msec
Random write
write: io=1242.3MB, bw=126357KB/s, iops=31589, runt= 10067msec
Mixed 70/30 random read and write with 8K block size
read : io=1030.9MB, bw=105402KB/s, iops=13175, runt= 10015msec
write: io=453192KB, bw=45251KB/s, iops=5656, runt= 10015msec