Won't boot - /boot corrupted and out of space

I did an update through zypper tonight, and my /boot ran out of space due to a kernel update. Grub got thoroughly screwed up and won’t boot. Here’s my drive layout:

md0   /boot   Linux RAID mirror
md1   /       Linux RAID mirror
md2   /home   Linux RAID mirror

I was looking at this thread here: http://forums.opensuse.org/vbcms-comments/478290-article-re-install-grub2-dvd-rescue.html

I’m having problems adapting it to my /boot being on a separate partition. How can I repair GRUB2 so that I can boot from my old kernel files, which I assume are still there and in good order?
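My best guess at adapting that procedure so far (just a sketch from reading the thread, using the device names from my layout, and not yet tested) would be something like this from the rescue DVD:

mdadm --assemble --scan                  # assemble the RAID arrays
mount /dev/md1 /mnt                      # root filesystem
mount /dev/md0 /mnt/boot                 # the separate /boot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub2-install /dev/sda                   # reinstall GRUB2 to both RAID1 members
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg

Is that roughly right for this kind of layout?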

And your openSUSE version?

openSUSE 12.2 - 64-bit with GRUB2.

What exact error message do you get, and at which stage of booting?

A lot of garbage:


Booting openSUSE 12.2

error: can't find command ''
error: ELF sections outside core
error: can't find command ''
error: can't find command 'echo'
error: can't find command 'print'
error: can't find command 'search'
error: can't find command 'echo'
error: can't find command 'initrd'

Failed to boot both default and fallback entries...

Then GRUB ends up dropping, after 10 seconds, into what looks like an MBR rescue menu… I can press ‘c’ for a command prompt.

Looks like I might have another option to try… I was just able to boot the hard-disk install using “Super Grub2 Disk”… so I’m in the installed system, but I’m still not sure how to fix it, considering /boot is out of space.
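From the running system, I’m planning to try roughly this to get GRUB2 booting on its own again (a sketch, not yet run):

grub2-install /dev/sda                   # reinstall GRUB2 to the MBR of both RAID1 members
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate the GRUB2 configuration

That still leaves the problem of /boot being full, though.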

On 10/24/2012 08:36 AM, PsychoGTI wrote:
>
> openSUSE 12.2 - 64-bit with GRUB2.

Assuming that the CD you can boot has terminal mode, use


fdisk -l

That will show you what disks you have. Note which partitions have ID 83. If
your system disk is /dev/sda (most likely), then mount (in turn) each of those
ID = 83 Linux partitions on that disk and get some details of the contents using


mount /dev/sdaX /mnt
df | grep mnt
umount /mnt

Repeat the above with X of 1, 2, … for each partition with ID of 83. Post the
output of the df line.
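If it is easier, a quick loop does the same thing (adjust the partition numbers to whatever fdisk actually shows; non-mountable partitions such as swap will simply be skipped):

for X in 1 2 3 4; do
  mount /dev/sda$X /mnt && df | grep mnt
  umount /mnt 2>/dev/null
done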

What filesystem type are you using?

Here’s the output of fdisk -l:


Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002e904

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      208895      103424   fd  Linux raid autodetect
/dev/sda2          208896     8595455     4193280   82  Linux swap / Solaris
/dev/sda3         8595456    50540543    20972544   fd  Linux raid autodetect
/dev/sda4        50540544  1953519615   951489536   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002e904

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048      208895      103424   fd  Linux raid autodetect
/dev/sdb2          208896     8595455     4193280   82  Linux swap / Solaris
/dev/sdb3         8595456    50540543    20972544   fd  Linux raid autodetect
/dev/sdb4        50540544  1953519615   951489536   fd  Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b3e7e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  3907028991  1953513472   fd  Linux raid autodetect

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00011596

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  3907028991  1953513472   fd  Linux raid autodetect

Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000e856

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048  3907028991  1953513472   fd  Linux raid autodetect

Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001af81

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1            2048  3907028991  1953513472   fd  Linux raid autodetect

Disk /dev/md1: 21.5 GB, 21475811328 bytes
2 heads, 4 sectors/track, 5243118 cylinders, total 41944944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md3: 6001.2 GB, 6001192599552 bytes
2 heads, 4 sectors/track, 1465134912 cylinders, total 11721079296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 393216 bytes


Disk /dev/md2: 974.3 GB, 974325145600 bytes
2 heads, 4 sectors/track, 237872350 cylinders, total 1902978800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md0: 105 MB, 105893888 bytes
2 heads, 4 sectors/track, 25853 cylinders, total 206824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

No partitions are ID 83, as they are all Linux RAID (fd). Here’s the output of df -h:


Filesystem      Size  Used Avail Use% Mounted on
rootfs           20G   14G  5.3G  73% /
devtmpfs        7.9G  8.0K  7.9G   1% /dev
tmpfs           7.9G  700K  7.9G   1% /dev/shm
tmpfs           7.9G  816K  7.9G   1% /run
/dev/md1         20G   14G  5.3G  73% /
/dev/md0        100M  100M     0 100% /boot
/dev/md2        907G  343G  563G  38% /home
/dev/md3        5.5T  2.2T  3.3T  41% /data
tmpfs           7.9G  816K  7.9G   1% /var/lock
tmpfs           7.9G  816K  7.9G   1% /var/run

The pair of /dev/sda1 and /dev/sdb1 forms /dev/md0 (/boot), while /dev/sda3 and /dev/sdb3 make up /dev/md1 (/).
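For reference, the array membership can be confirmed with something like this (generic commands, output omitted here):

cat /proc/mdstat             # shows which sdXN members belong to each mdN array
mdadm --detail /dev/md0      # e.g. details of the /boot array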

Since I can boot the old OS using the 3rd-party boot loader, I’m going to try to re-install GRUB and see if I can get it at least booting on its own… but I still have the issue that /boot is out of space. I can’t seem to resize the partition… I was thinking of using something like Parted Magic, but it complains loudly when I attempt to resize the /boot partition.

Is there a way to abandon the current /boot (/dev/md0) mapping and instead create and use a new /boot directory on my root (/) partition, which has plenty of room?
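What I imagine (just a rough sketch of my thinking, not tested) is something like:

mkdir /boot.tmp                          # staging directory on / (name is arbitrary)
cp -a /boot/. /boot.tmp/                 # copy the current /boot contents
umount /boot                             # drop the /dev/md0 mount
cp -a /boot.tmp/. /boot/                 # populate the plain /boot directory on /
# then comment out the /boot line in /etc/fstab and reinstall the boot loader:
grub2-install /dev/sda
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg

Would that work, or am I missing something?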

On 10/24/2012 11:46 PM, PsychoGTI wrote:
>
> No partitions are ID 83, as they are all Linux RAID (fd).
>
> The pair of /dev/sda1 and /dev/sdb1 forms /dev/md0 (/boot), while
> /dev/sda3 and /dev/sdb3 make up /dev/md1 (/).
>
> Is there a way to abandon the current /boot (/dev/md0) mapping and
> instead create and use a new /boot directory on my root (/) partition,
> which has plenty of room?

I did not know you had RAID. From the system where you ran the ‘df’, use ‘cd /boot’ and try to determine why the partition is full. How many kernels are you keeping?
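Something along these lines should show what is taking the space and which kernel packages are installed (adjust as needed):

cd /boot
ls -lSh              # list the contents of /boot, largest files first
rpm -qa 'kernel*'    # list the installed kernel packages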

I’m just looking into that. :-) I’ve made some progress in the last 10 minutes! I’ve got GRUB2 reinstalled and working on boot. I reverted my kernels back to 3.4.6-2.10 (the pre-update version, before this whole mess)… it seems the system has two kernels: 3.4.6-2.10-default and 3.4.6-2.10-desktop. Here’s the current content of my /boot:


-rw-r--r-- 1 root root  2401229 Aug  4 01:21 System.map-3.4.6-2.10-default
-rw-r--r-- 1 root root  2483003 Oct 23 22:12 System.map-3.4.6-2.10-desktop
-rw------- 1 root root      512 Oct 23 22:12 backup_mbr
-rw-r--r-- 1 root root     1236 Oct 23 22:12 boot.readme
-rw-r--r-- 1 root root   131121 Aug  4 00:12 config-3.4.6-2.10-default
-rw-r--r-- 1 root root   131322 Oct 23 22:12 config-3.4.6-2.10-desktop
drwxr-xr-x 2 root root     1024 Oct 23 22:12 grub
drwxr-xr-x 6 root root     1024 Oct 24 22:09 grub2
lrwxrwxrwx 1 root root       25 Oct 24 21:48 initrd -> initrd-3.4.6-2.10-default
-rw-r--r-- 1 root root 16790492 Oct 24 21:48 initrd-3.4.6-2.10-default
-rw-r--r-- 1 root root 16722310 Oct 23 22:12 initrd-3.4.6-2.10-desktop
drwx------ 2 root root    12288 Oct 23 22:12 lost+found
-rw-r--r-- 1 root root   581632 Oct 23 22:12 message
-rw-r--r-- 1 root root   222006 Aug  4 01:40 symvers-3.4.6-2.10-default.gz
-rw-r--r-- 1 root root   221977 Oct 23 22:12 symvers-3.4.6-2.10-desktop.gz
-rw-r--r-- 1 root root      409 Aug  4 01:40 sysctl.conf-3.4.6-2.10-default
-rw-r--r-- 1 root root      520 Oct 23 22:12 sysctl.conf-3.4.6-2.10-desktop
-rw-r--r-- 1 root root  5462240 Aug  4 01:39 vmlinux-3.4.6-2.10-default.gz
-rw-r--r-- 1 root root  5711925 Oct 23 22:12 vmlinux-3.4.6-2.10-desktop.gz
lrwxrwxrwx 1 root root       26 Oct 24 21:48 vmlinuz -> vmlinuz-3.4.6-2.10-default
-rw-r--r-- 1 root root  4689872 Aug  4 01:21 vmlinuz-3.4.6-2.10-default
-rw-r--r-- 1 root root  4912784 Oct 23 22:12 vmlinuz-3.4.6-2.10-desktop

I’ve been trying to uninstall the default kernel, as I only boot and run the desktop kernel (to my knowledge, at least). However, the system won’t let me. I confirmed that I am running the desktop kernel via uname -a:

Linux 3.4.6-2.10-desktop #1 SMP PREEMPT Thu Jul 26 09:36:26 UTC 2012 (641c197) x86_64 x86_64 x86_64 GNU/Linux

I try to remove kernel-default-base (the currently installed version), but both YaST and zypper want to install kernel-default instead… even though I have installed and am running kernel-desktop. Any thoughts? :-/
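For reference, this is roughly the command I’ve been trying (from memory, the exact form may have differed):

zypper rm kernel-default-base    # zypper then wants to pull in kernel-default as a replacement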

Awesome. Solved it. I had the exact same issue as outlined in this thread: http://forums.opensuse.org/applications/430533-remove-default-kernel.html

Now I have a fair amount of room on the /boot partition:

Filesystem      Size  Used Avail Use% Mounted on
rootfs           20G   13G  6.5G  66% /
devtmpfs        7.9G  8.0K  7.9G   1% /dev
tmpfs           7.9G  700K  7.9G   1% /dev/shm
tmpfs           7.9G  804K  7.9G   1% /run
/dev/md1         20G   13G  6.5G  66% /
/dev/md0        100M   44M   51M  47% /boot
/dev/md2        907G  343G  563G  38% /home
/dev/md3        5.5T  2.2T  3.3T  41% /data
tmpfs           7.9G  804K  7.9G   1% /var/lock
tmpfs           7.9G  804K  7.9G   1% /var/run
tmpfs           7.9G     0  7.9G   0% /media

Here’s my current /boot directory:

-rw-r--r-- 1 root root  2483003 Oct 23 22:12 System.map-3.4.6-2.10-desktop
-rw------- 1 root root      512 Oct 23 22:12 backup_mbr
-rw-r--r-- 1 root root     1236 Oct 23 22:12 boot.readme
-rw-r--r-- 1 root root   131322 Oct 23 22:12 config-3.4.6-2.10-desktop
drwxr-xr-x 2 root root     1024 Oct 23 22:12 grub
drwxr-xr-x 6 root root     1024 Oct 24 22:52 grub2
lrwxrwxrwx 1 root root       25 Oct 24 22:53 initrd -> initrd-3.4.6-2.10-desktop
-rw-r--r-- 1 root root 16722310 Oct 23 22:12 initrd-3.4.6-2.10-desktop
drwx------ 2 root root    12288 Oct 23 22:12 lost+found
-rw-r--r-- 1 root root   581632 Oct 23 22:12 message
-rw-r--r-- 1 root root   221977 Oct 23 22:12 symvers-3.4.6-2.10-desktop.gz
-rw-r--r-- 1 root root      520 Oct 23 22:12 sysctl.conf-3.4.6-2.10-desktop
-rw-r--r-- 1 root root  5711925 Oct 23 22:12 vmlinux-3.4.6-2.10-desktop.gz
lrwxrwxrwx 1 root root       26 Oct 24 22:53 vmlinuz -> vmlinuz-3.4.6-2.10-desktop
-rw-r--r-- 1 root root  4912784 Oct 23 22:12 vmlinuz-3.4.6-2.10-desktop

This should be just enough room for updates… barely. Still might be nice to know how to remap the /boot to another directory in case kernel sizes keep growing…

On 10/25/2012 12:26 AM, PsychoGTI wrote:
>
> I try to remove kernel-default-base (the currently installed version),
> but both YaST and zypper want to install kernel-default instead… even
> though I have installed and am running kernel-desktop. Any thoughts?

I just did a little test and I was able to uninstall kernel-default and
kernel-default-devel without any trouble using YaST. My system did not have
kernel-default-base installed at all.

On 2012-10-25 08:06, PsychoGTI wrote:
> This should be just enough room for updates… barely. Still might be
> nice to know how to remap the /boot to another directory in case kernel
> sizes keep growing…

It means reinstalling grub.

Me, I would reinstall all with a bigger /boot partition.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” (Minas Tirith))

On 10/25/2012 08:57 AM, Carlos E. R. wrote:
> On 2012-10-25 08:06, PsychoGTI wrote:
>> This should be just enough room for updates… barely. Still might be
>> nice to know how to remap the /boot to another directory in case kernel
>> sizes keep growing…
>
> It means reinstalling grub.
>
> Me, I would reinstall all with a bigger /boot partition.

I agree. A /boot of 100MB is too small. I need to do some cleanups, but my
current /boot has 2.1 GB in it.

Yes, it has become too small. These partition sizes were first decided a while ago, when I was running openSUSE 11.2… If I re-did it, I would definitely restructure some of the partitions so as to run LVM on top of the RAID.

Here’s a good question for you guys… it has probably been discussed before, as it seems like a common question: if I went out and purchased two new SSDs to use solely as a system drive in RAID 1, how would you partition and configure them? Redoing this system will take a weekend at least, given the number of customizations and upgrades over the years… so I’d like to give some serious consideration to what I’d do differently for the drive setup if I go through all this again.

My first thoughts on this are:

  • /boot: 2 GB, ext4, a standard partition in RAID 1.
  • swap: 2 GB in RAID 1 (this machine has 16 GB of RAM, so arguably it may never use the swap space, but it may be good to have).
  • /: the remainder of the new drive pair, as LVM on RAID 1.

I would also put LVM on the two 1 TB drives where the system is now, keeping them in RAID 1, as well as on the 2 TB drives that are in RAID 5 at the moment; I’d put them into a volume group and create a 500 GB /home, with the rest as /data.

Any pluses/minuses to this setup? Open to your suggestions…
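In command form, I picture it roughly like this (just a sketch; the md numbers, SSD device names and volume-group name are made up):

mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1   # 2 GB /boot mirror
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdg2 /dev/sdh2   # 2 GB swap mirror
mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sdg3 /dev/sdh3   # rest of the SSDs
mkfs.ext4 /dev/md10                  # /boot
mkswap /dev/md11                     # swap
pvcreate /dev/md12                   # LVM on top of the RAID1 mirror
vgcreate system /dev/md12
lvcreate -l 100%FREE -n root system
mkfs.ext4 /dev/system/root           # /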

On 2012-10-25 18:06, PsychoGTI wrote:

> Any pluses/minuses to this setup? Open to your suggestions…

If you use all the space, I see no point in LVM. For LVM you should have space unassigned, so that
you can add it later on.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” (Minas Tirith))

LVM is mainly good for:

  • Resizing LVM partitions on a disk.
  • Having LVM partitions that span multiple drives (i.e. not limited to one drive).

I’m thinking of LVM mainly for the flexibility of resizing things if I ever want to expand the number of drives (i.e. go from 4 drives to 6 for the /data partition) or resize my /home and/or /data partitions in the future.
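For example, I believe growing /data after adding another drive pair would go roughly like this (volume-group and LV names are placeholders, and assuming ext4 on the LV):

mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1   # new mirror pair
pvcreate /dev/md4
vgextend datavg /dev/md4                 # add the new PV to the volume group
lvextend -l +100%FREE /dev/datavg/data   # grow the /data logical volume
resize2fs /dev/datavg/data               # grow the filesystem to match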

Thoughts?

On 2012-10-26 20:06, PsychoGTI wrote:
>
> LVM is mainly good for:
> - Resizing LVM partitions on a disk.
> - Having LVM partitions that span multiple drives (ie. not limited to
> one drive)
>
> I’m thinking LVM just for flexibility of resizing the drives if I want
> to ever expand the number of drives (ie. go from 4 drives to 6 for the
> /data partition) in the future or resize my /home and/or /data
> partitions.

Plus RAID for safety. Yes, I see now.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” (Minas Tirith))

Exactly. :-) I just want to make sure that my idea here is sound, and that I’m not leaving anything out.