
Is there a way to expand the centos-root volume on the Splunk server without harming the VM?

bullockw
New Member

I am in need of assistance. The /dev/mapper/centos-root filesystem on my Splunk server is full. Is there a way it can be expanded without causing harm to the VM? If not, are there unnecessary files that could be purged to free up space?

Thanks in advance for any information you can provide!


bullockw
New Member

Here are the outputs from my server using the commands you provided:

[root@splunk-s30 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 3 0 wz--n- <499.51g 64.00m

[root@splunk-s30 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 45G 5.4G 90% /
devtmpfs 3.0G 0 3.0G 0% /dev
tmpfs 3.0G 0 3.0G 0% /dev/shm
tmpfs 3.0G 65M 2.9G 3% /run
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/mapper/centos-home 444G 246M 444G 1% /home
/dev/sda1 497M 217M 281M 44% /boot
tmpfs 597M 0 597M 0% /run/user/0

Is there anything that can be done to expand the centos-root volume under these circumstances?


codebuilder
Influencer

Unfortunately, the only way to expand centos-root in this case is to have a new disk presented to the system.
You can then add that disk to the centos VG with vgextend and grow the root LV with lvextend.
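
For illustration, assuming the new disk comes in as /dev/sdc (a placeholder; use whatever device name fdisk actually reports), the flow would look roughly like this:

# Initialize the new disk as an LVM physical volume
pvcreate /dev/sdc
# Add the new PV to the existing centos volume group
vgextend centos /dev/sdc
# Grow the root LV by however much you need, e.g. 100G
lvextend -L+100G /dev/centos/root
# Grow the mounted XFS filesystem to fill the new space (CentOS 7 root is XFS by default)
xfs_growfs /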

That is, unless you happen to have unused disks already presented to the system.
You can check by running:
fdisk -l | grep sd
Then look for any disks that do not have a partition created. You can also run pvs to find physical volumes that are not part of any VG; those could be used as well.

Additionally, have you checked the size of the log files that Splunk is generating on your machine? Depending on how busy it is and how long it's been running, those can add up quickly, especially if you don't have logrotate configured.

Try checking those with these commands (assuming /opt/splunk is your $SPLUNK_HOME):
du -sh /opt/splunk/var/log/splunk/
du -sh /opt/splunk/var/log/introspection

ll -h /opt/splunk/var/log/splunk/
ll -h /opt/splunk/var/log/introspection
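
If the Splunk logs turn out not to be the culprit, a generic way to find what is actually consuming the root filesystem (a sketch, not Splunk-specific):

# Summarize disk usage two levels deep, staying on the root filesystem (-x),
# then sort by human-readable size and show the largest entries
du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -20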


bullockw
New Member

Thank you for the responses. I added a 160GB drive to my VM, and now my output looks like the following:

[root@splunk-s30 ~]# fdisk -l|grep sd
Disk /dev/sda: 536.9 GB, 536870912000 bytes, 1048576000 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 1048575999 523774976 8e Linux LVM
Disk /dev/sdb: 171.8 GB, 171798691840 bytes, 335544320 sectors
/dev/sdb3 2048 335544319 167771136 83 Linux

Given that the centos-root volume I am trying to extend lives on sda and the new partition (sdb3) is not a logical volume, how should I proceed? I apologize in advance for my lack of skill with these operations.

My goal is for centos-root either to reside solely on /dev/sdb3 or to be expanded to encompass /dev/sdb3 as well (combining the drives), whichever is easier and poses the least risk to the Splunk server.

Thank you so much!


codebuilder
Influencer

You won't be able to move the root filesystem to sdb, but you can extend your root VG using sdb; a sketch follows below.
Though I'm a little puzzled that a newly presented disk already has a partition on it, and one numbered 3 (sdb3) at that. My guess is that sdb was already in use and you are not yet seeing the new disk.
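
If sdb3 does turn out to be unused, extending into it would look roughly like this (an illustrative sketch only; do not run it until you have confirmed nothing is using /dev/sdb3):

# Create an LVM physical volume on the existing partition
pvcreate /dev/sdb3
# Add it to the centos volume group
vgextend centos /dev/sdb3
# Grow the root LV into all of the newly freed space, then grow the XFS filesystem
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /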

To know for certain I would need to see output from the following commands:
pvs
vgs
lvs


bullockw
New Member

Here is the output you requested:

[root@splunk-s30 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <499.51g 64.00m
[root@splunk-s30 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 3 0 wz--n- <499.51g 64.00m
[root@splunk-s30 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-ao---- 443.57g
root centos -wi-ao---- 50.00g
swap centos -wi-ao---- <5.88g

[root@splunk-s30 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 47G 3.6G 93% /
devtmpfs 3.0G 0 3.0G 0% /dev
tmpfs 3.0G 0 3.0G 0% /dev/shm
tmpfs 3.0G 8.5M 3.0G 1% /run
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/mapper/centos-home 444G 246M 444G 1% /home
/dev/sda1 497M 217M 281M 44% /boot
tmpfs 597M 0 597M 0% /run/user/0
[root@splunk-s30 ~]# fdisk -l

Disk /dev/sda: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009f8c2

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 1048575999 523774976 8e Linux LVM

Disk /dev/sdb: 171.8 GB, 171798691840 bytes, 335544320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc399cc6e

Device Boot Start End Blocks Id System
/dev/sdb3 2048 335544319 167771136 83 Linux

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 6308 MB, 6308233216 bytes, 12320768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-home: 476.3 GB, 476279996416 bytes, 930234368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Thanks!


codebuilder
Influencer

As I suspected, sdb is not showing up as a clean, unused disk: it already has a partition (sdb3) created on it. Note, though, that pvs lists only /dev/sda2, so no physical volume exists on sdb and nothing on it belongs to the centos VG.

Just a heads up: a stray partition with no PV on it may simply be a leftover, but confirm nothing is mounted on or otherwise using /dev/sdb3 before touching it.

In any case, I do not see a clean, new, unused 160GB disk of the kind you described. You likely need to scan the SCSI bus; to do that, run the following commands:

echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan

Generally on VMs you will have three SCSI hosts (host0, host1, host2), but you may have more or fewer. After running those commands you should see a new disk come in as /dev/sdc with no partition. Find it by re-running the fdisk command:

fdisk -l |grep sd
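
As an aside, a loop covers however many SCSI hosts the VM actually has, and if the sg3_utils package is installed, its rescan-scsi-bus.sh script does the same job (both are equivalent to the per-host echo commands above):

# Rescan every SCSI host present on the system
for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done

# Or, with sg3_utils installed:
rescan-scsi-bus.sh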


codebuilder
Influencer

Did this help you get the disk scanned in?


bullockw
New Member

Thank you for the suggestion, but the scan did not bring the disk in for use. Should the partition be removed and re-added to see if that changes the status?

Here is the output I am seeing:

fdisk -l |grep sd
Disk /dev/sda: 536.9 GB, 536870912000 bytes, 1048576000 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 1048575999 523774976 8e Linux LVM
Disk /dev/sdb: 171.8 GB, 171798691840 bytes, 335544320 sectors
/dev/sdb3 2048 335544319 167771136 83 Linux


codebuilder
Influencer

Not without being absolutely certain that disk is not in use.
How did you end up with /dev/sdb3? Was that partition created after you presented the new disk? A fresh disk with a single partition numbered 3 just doesn't add up.

Can you provide output from the pvs command as well?


bullockw
New Member

Here are the outputs you requested:

#pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <499.51g 64.00m

fdisk -l

Disk /dev/sdb: 171.8 GB, 171798691840 bytes, 335544320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc399cc6e

Device Boot Start End Blocks Id System
/dev/sdb3 2048 335544319 167771136 83 Linux

Disk /dev/sda: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009f8c2

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 1048575999 523774976 8e Linux LVM

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 6308 MB, 6308233216 bytes, 12320768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-home: 476.3 GB, 476279996416 bytes, 930234368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

I think sdb3 was created the first time I ran fdisk after presenting the new disk, so the partition was made on the new drive.

Thank you!


codebuilder
Influencer

If you have unallocated space available in the root VG, you can extend the root LV and the underlying filesystem.

You can check the VG allocation with the vgs command as such:

16:37:55 # vgs
VG #PV #LV #SN Attr VSize VFree
vg00 2 6 0 wz--n- 119.00g 90.00g

In the above example I have 90G free on vg00 (my root VG).

The current size of the root LV is 6GB:

16:40:16 # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-rootlv00 6.0G 2.7G 3.4G 45% /

This means I can extend my root LV by any increment up to ~90GB (the unallocated space in vg00).

If I wanted to extend it by 10GB for example, I would use the following commands:

lvextend -L+10G /dev/mapper/vg00-rootlv00
xfs_growfs /dev/mapper/vg00-rootlv00
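
Two notes on the above, for what they're worth: recent versions of lvextend can resize the filesystem in the same step via the -r (--resizefs) flag, and xfs_growfs is conventionally pointed at the mount point. Also, XFS filesystems can be grown but never shrunk, so extend conservatively.

# Grow the LV and resize the filesystem on it in one step
lvextend -r -L+10G /dev/mapper/vg00-rootlv00

# Equivalent two-step form, growing the mounted XFS filesystem by its mount point
lvextend -L+10G /dev/mapper/vg00-rootlv00
xfs_growfs /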
