Deployment Architecture

Heavy forwarder not getting data from the F5 load balancer? What parameters should be validated on the Splunk heavy forwarder server?

Hemnaath
Motivator

Hi, we are currently facing an issue in our production environment. We have two heavy forwarders placed behind an F5 load balancer that routes traffic to them. The problem is that currently the traffic is routed to only one heavy forwarder server. When we checked with the F5 team, they informed us that the other server is in an inactive state, so kindly let me know what parameters should be validated from the Splunk perspective.

We can see that the physical server is up and running and the Splunk services are running fine, but the F5 load balancer still shows the server in inactive mode. Since the entire traffic is routed to this single server, it causes space issues: /opt fills up along with swap memory utilization, which in turn leads to the Splunk process not running (the splunkd process gets killed due to out of memory). Due to this we are getting nearly 100+ incidents per month.

Kindly guide us on which parameters (configs/services) we need to check from the Splunk perspective.
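For reference, these are the kinds of basic checks we are able to run on the heavy forwarder ourselves (assuming the default /opt/splunk install; the ports below are only examples), so please point out anything we are missing:

/opt/splunk/bin/splunk status                        # is splunkd actually running?
netstat -tulnp | grep -E '9997|9998|514'             # are the expected receiving/forwarding ports in use?
/opt/splunk/bin/splunk btool inputs list --debug     # which inputs are configured, and from which app?
tail -50 /opt/splunk/var/log/splunk/splunkd.log      # any startup errors or blocked-queue warnings?
df -h /opt ; free -m                                 # disk pressure on /opt and swap usage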

thanks in advance.

0 Karma
1 Solution

lycollicott
Motivator

Yes, it is configured in an inputs.conf file. I'm going to suggest that you copy the necessary information from the working system by doing something like this:

On the working heavy forwarder.....

find /opt/splunk/etc -name inputs.conf -exec grep -l 514 {} \;

You should get results similar to this.....

lycollicott@LYCOLLICOTT1FR /cygdrive/c/splunk/etc
$ find . -name "*.conf" -exec grep -l 514 {} \;
./apps/launcher/local/inputs.conf
./system/default/sourcetypes.conf

Look at the inputs.conf file (if your search finds more than one you should look at all of them).....

[udp://514]
connection_host = ip
sourcetype = snort

Copy that information into the exact same file on the other heavy forwarder then restart splunk.
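After the restart you can confirm the input is actually there and listening (rough checks, assuming a Linux host and the default /opt/splunk path):

/opt/splunk/bin/splunk btool inputs list --debug | grep -i 514    # does the [udp://514] stanza resolve, and from which file?
netstat -ulnp | grep 514                                          # is a process bound to UDP 514 on that box?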


0 Karma

lycollicott
Motivator

There must be 10 different questions in this thread. It is so long that the whole thing doesn't even display anymore. When you have any more questions, you should open a new one, so other people can see all of the details.

Look at this: https://answers.splunk.com/answers/2887/4-1-2-upgrade-tailingprocessor-file-descriptor-cache-is-full...
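If you are hitting the file-descriptor-cache message from that thread ("File descriptor cache is full (100)"), the usual things to check are the OS open-file limit for the splunk user and the max_fd setting in limits.conf (a sketch only; check the limits.conf spec for your version before raising anything):

ulimit -n                                             # OS open-file limit for the user running splunkd
/opt/splunk/bin/splunk btool limits list inputproc    # shows max_fd, which as far as I know is the cache size in that message
grep -r "max_fd" /opt/splunk/etc/system /opt/splunk/etc/apps 2>/dev/null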

0 Karma


Hemnaath
Motivator

Thanks for your help. As you suggested, I looked into the inputs.conf and outputs.conf files on the heavy forwarder and found that in our environment we have 30 inputs.conf and 10 outputs.conf files. In none of these configuration files could I see the [udp://514] stanza mentioned in your comments; instead I see stanzas like this:

/opt/splunk/etc/apps/launcher/local/inputs.conf

[splunktcp://9998]
connection_host = ip

[splunktcp://9999]
connection_host = ip

/opt/splunk/etc/system/local/inputs.conf
[splunktcp://9998]
connection_host = ip

[splunktcp://9999]
connection_host = ip

Instead of the find command, I used the locate command and filtered for "udp|514|syslog" with the script below:
for i in `locate inputs.conf`; do echo $i; echo "================================"; egrep -i "udp|514|syslog" $i; done >> /var/tmp/aaa
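Please also let me know if btool would be a better way to check this; my understanding is that it shows only the configuration Splunk actually loads and prints which file each setting comes from, e.g.:

/opt/splunk/bin/splunk btool inputs list --debug | egrep -i "udp|514|syslog"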

Similarly for outputs.conf, I could see the stanzas below on both HFs.

/opt/splunk/etc/apps/HVY-ADMIN-all_fwd_outputs/default/outputs.conf
[tcpout]
defaultGroup = all_indexers
maxQueueSize = 1GB

[tcpout:all_indexers]
server = splunkp01.XXXX.com:9997,splunkp02.XXXX.com:9997,splunkp03.XXXX.com:9997,splunkp04.XXXX.com:9997,splunkp05.XXXX.com:9997
autoLB = true

/opt/splunk/etc/apps/HVY-ADMIN-hvy_forwarders/default/outputs.conf

[tcpout]
indexAndForward = false
forwardedindex.filter.disable = true
forceTimebasedAutoLB = true

/opt/splunk/etc/apps/all_fwd_outputs/local/outputs.conf
[tcpout]
defaultGroup = all_indexers

[tcpout:all_indexers]
server = splunkp01.XXXX.com:9997,splunkp02.XXXX.com:9997,splunkp03.XXXX.com:9997,splunkp04.XXXX.com:9997,splunkp05.XXXX.com:9997
autoLB = true

/opt/splunk/etc/system/local/outputs.conf
[tcpout]
indexAndForward = false
forwardedindex.filter.disable = true

[tcpout:all_indexers]
server = splunkp01.XXXX.com:9999,splunkp02.XXXX.com:9999,splunkp03.XXXX.com:9999,splunkp04.XXXX.com:9999,splunkp05.XXXX.com:9999
autoLB = true
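To confirm which of these outputs.conf files actually wins and whether the forwarder is connected to the indexers, I am planning to run the checks below (please correct me if this is not the right way):

/opt/splunk/bin/splunk btool outputs list tcpout --debug    # merged tcpout settings plus the file each one comes from
/opt/splunk/bin/splunk list forward-server                  # configured vs. active forward destinations (asks for admin credentials)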

As mentioned earlier, both syslog-ng and the Splunk HF instance run on the same server, and I noticed that the syslog data itself is not reaching this server from the F5 load balancer; I suspect this might be the cause of the issue.
I verified by executing this command and found no data:
[root@splunkhvy01 ~]# netstat -a | grep syslog

Whereas when executed on the other, active HF, which is capturing the logs:
[root@splunkhvy02 ~]# netstat -a | grep syslog
tcp 0 0 splunkhvy02.xxxx:shell vmzscaler01-syslog.he:ppsms ESTABLISHED
tcp 0 0 splunkhvy02.xxxx:shell vmzscalerpoc01-syslog:23999 ESTABLISHED
udp 0 0 *:syslog *:*

unix 2 [ ACC ] STREAM LISTENING 10795 /var/lib/syslog-ng/syslog-ng.ctl

Kindly guide me on how to fix this issue and also how to open the port for syslog-ng to listen for data from the F5 load balancer. Thanks in advance.

0 Karma

lycollicott
Motivator

So let me make sure I understand...

On server splunkhvy01 syslog-ng IS NOT running.

On server splunkhvy02 syslog-ng IS running.

That's not really a Splunk issue and I've never used syslog-ng, but check this documentation out. https://www.balabit.com/documents/syslog-ng-ose-latest-guides/en/syslog-ng-ose-guide-admin/html/chap...
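Very roughly, you would make sure syslog-ng has a network source listening on the port the F5 sends to, and that the service is running and enabled. A sketch only (syntax varies by syslog-ng version, so verify against the guide above):

source s_net { udp(ip(0.0.0.0) port(514)); };    # example network source in /etc/syslog-ng/syslog-ng.conf, referenced from a log{} path

service syslog-ng restart       # restart after the config change
chkconfig syslog-ng on          # make sure it also comes back after a reboot
netstat -ulnp | grep 514        # confirm something is now bound to the syslog port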

0 Karma

Hemnaath
Motivator

Thanks lycollicott, yes you are right. We could see that on server splunkhvy01 syslog-ng was NOT running and it was not listening on the port. After restarting syslog-ng, its status is running and it is listening on the port.

I restarted the syslog-ng service with the command below and verified its status:
/etc/init.d/syslog-ng restart

[root@splunk01 init.d]# service syslog-ng status
syslog-ng (pid 8892) is running...

[root@splunk01 ~]# netstat -a | grep syslog-ng
unix 2 [ ACC ] STREAM LISTENING 98712418 /var/lib/syslog-ng/syslog-ng.ctl

But I am not sure what the number 98712418 means; it looks like a mobile number.

Now we have two challenges :

1) How do we check in the Splunk portal whether the syslog data is being captured from server 01? Is there a query we can run to filter out the data being captured from this server 01?

This is the inputs.conf that is configured to monitor the syslog data on both servers.

/opt/splunk/etc/apps/HVY-ADMIN-hvy_forwarders/default/inputs.conf
[monitor:///opt/syslogs/web_access/.../Common/*.log]
[monitor:///opt/syslogs/symantec/SymantecServer/...]
[monitor:///opt/syslogs/symantec/semp/...]
[monitor:///opt/syslogs/symantec/.../ID.log]
[monitor:///opt/syslogs/proxy/...]
sourcetype = bluecoat_syslog
[monitor:///opt/syslogs/dns/.../*.log]
sourcetype = syslog
[monitor:///opt/syslogs/webops_security/.../*.log]
sourcetype = syslog
[monitor:///opt/syslogs/firewall/.../*.log]
sourcetype = syslog
[monitor:///opt/syslogs/esx/.../*.log]
sourcetype = syslog

2) How do we check the amount of data being written under /opt/syslogs/ on both servers 01 and 02 per hour, so that we can monitor and clear the logs once the /opt mount point reaches 80%?

thanks in advance ..

0 Karma

lycollicott
Motivator

98712418 is an I-Node. For example, here is some netstat -a output...

Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     13164    @ISCSIADM_ABSTRACT_NAMESPACE

1) Try this: index=_internal sourcetype=splunkd host="splunkhvy01" "<one_of_your_syslog_sourcetypes_like bluecoat_syslog_above>"
2) That is more cumbersome, but perhaps something like this: grep -i <a_string_something_like_"^Aug 1 2016 17:"_that_matches_your_date_format> /opt/syslog/one_of_your_files | wc -c
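For the disk-pressure side specifically, plain df/du in a cron job may be simpler than counting bytes per hour (adjust the path and threshold to your environment):

df -h /opt                                   # current usage of the mount point
du -sm /opt/syslogs/*/ | sort -rn | head     # which feed is eating the space, in MB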

0 Karma

Hemnaath
Motivator

Thanks for putting your effort into this. I tried to execute the query as you said, and I could see the data below in the Splunk portal.

SPL query: host=splunkhvy01* index=_internal sourcetype=splunkd

Search Result
8/5/16
2:37:16.455 PM

08-05-2016 14:37:16.455 -0400 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='/opt/syslogs/generic/txxxx25bap/8804.log'.
host = splunkhvy01.xxxx.com source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd
8/5/16
2:37:16.218 PM

08-05-2016 14:37:16.218 -0400 INFO TailReader - File descriptor cache is full (100), trimming...

But when I tried the query sourcetype=syslog source="/opt/syslogs/webops_security/*/*.log" hostname=splunkhvy01, I am not getting any results.

What I am trying to do here is find out whether the syslog data is being captured on this server (splunkhvy01). Please let me know how to check whether the syslog data is being captured on this HF instance.

And also, how do we measure the data load between the two HF servers 01 and 02?

thanks in advance.

0 Karma

lycollicott
Motivator

Do not specify source=.
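For example, something like this (substitute your own sourcetype; note that the default Splunk field is host, not hostname):

sourcetype=syslog host=splunkhvy01*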

0 Karma

Hemnaath
Motivator

Thanks lycollicott, I think data is being monitored from this server 01.
I tried this query and got this result:

host =splunk01* sourcetype=splunkd series="*" index=_internal

08-06-2016 10:20:42.391 -0400 INFO Metrics - group=per_index_thruput, series="webops_security", kbps=0.067068, eps=0.419353, kb=2.079102, ev=13, avg_age=1.153846, max_age=3
host = splunk01.xxxx.com source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd
8/6/16
10:20:42.391 AM
08-06-2016 10:20:42.391 -0400 INFO Metrics - group=per_index_thruput, series="unix_svrs", kbps=62.673401, eps=371.063295, kb=1942.881836, ev=11503, avg_age=-9247.172477, max_age=5000
host = splunk01.xxxx.com source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd
8/6/16
10:20:42.391 AM
08-06-2016 10:20:42.391 -0400 INFO Metrics - group=per_index_thruput, series="sos", kbps=0.052734, eps=0.387096, kb=1.634766, ev=12, avg_age=0.000000, max_age=0
host = splunk01.xxxx.com source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd
8/6/16
10:20:42.391 AM
08-06-2016 10:20:42.391 -0400 INFO Metrics - group=per_index_thruput, series="sec_sym_wg", kbps=0.359720, eps=2.096767, kb=11.151367, ev=65, avg_age=54499.676923, max_age=604801
host = splunk01.xxxx.com source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd
8/6/16
10:20:42.391 AM
08-06-2016 10:20:42.391 -0400 INFO Metrics - group=per_index_thruput, series="net_proxy", kbps=8.613284, eps=25.516045, kb=267.012695, ev=791, avg_age=0.716814, max_age=3
host = splunk01.xxxx.com source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd

In the field series=* all the index details are listed, but I am not sure what the purpose of the series field is. I tried researching the series field in SPL on Google but could not find any information; do you have any idea what this field is used for?

How do we monitor the amount of data being written under the path /opt/syslogs/* on both servers 01 and 02, so we can proactively monitor the disk usage of /opt?

thanks in advance.

0 Karma

lycollicott
Motivator

You're doing it wrong. Remember that you asked:

1) How do we check in the Splunk portal whether the syslog data is being captured from server 01? Is there a query we can run to filter out the data being captured from this server 01?

Do NOT do this: host =splunk01* sourcetype=splunkd series="*" index=_internal

DO this: host =splunk01* sourcetype=splunkd index=_internal "*syslog*"
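That search simply matches any splunkd event in _internal containing the string syslog, which in practice is mostly the metrics.log throughput lines. If you want only those lines, you can optionally narrow it further, e.g.:

host=splunk01* sourcetype=splunkd index=_internal group=per_source_thruput "syslog"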

0 Karma

Hemnaath
Motivator

Thanks lycollicott, we got the output below after executing the above query, and we hope the syslog data is being monitored from both servers 01 and 02.

8/9/16
10:29:31.392 AM
08-09-2016 10:29:31.392 -0400 INFO Metrics - group=per_sourcetype_thruput, series="bluecoat_syslog", kbps=18.937373, eps=61.547462, kb=587.067383, ev=1908, avg_age=2.618973, max_age=6
host = splunk01.xxxx.com source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd

8/9/16
10:29:31.392 AM
08-09-2016 10:29:31.392 -0400 INFO Metrics - group=per_source_thruput, series="/opt/syslogs/webops_security/hostname/ssg.log", kbps=4.225995, eps=21.741609, kb=131.007812, ev=674, avg_age=1.587537, max_age=3
host = splunk01.xxxx.com source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd

How do we monitor the amount of data being written under the path /opt/syslogs/* on both servers 01 and 02, to have proactive monitoring of the /opt disk size?

0 Karma

lycollicott
Motivator

host=splunk01* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series

Hemnaath
Motivator

lycollicott, once again I could see that syslog data is not being captured from server 01. When I executed host=splunk01* sourcetype=splunkd index=_internal "syslog" with the time frame set to the last 60 minutes, I got no results. I have also noticed that swap memory is almost 100% utilized by the splunkd process. Kindly help me fix this issue.

INFO from splunkd.log

08-11-2016 13:24:57.261 -0400 INFO TcpOutputProc - Closing stream for idx=x.x.x.x:9997
08-11-2016 13:24:57.261 -0400 INFO TcpOutputProc - Connected to idx=x.x.x.x:9997
08-11-2016 13:25:00.434 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:05.790 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:10.407 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:15.343 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:20.427 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:24.591 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:27.262 -0400 INFO TcpOutputProc - Closing stream for idx=x.x.x.x:9997
08-11-2016 13:25:27.263 -0400 INFO TcpOutputProc - Connected to idx=x.x.x.x:9997
08-11-2016 13:25:29.224 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:34.030 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:38.908 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:43.324 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:47.946 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:53.707 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:25:57.262 -0400 INFO TcpOutputProc - Closing stream for idx=x.x.x.x:9997
08-11-2016 13:25:57.262 -0400 INFO TcpOutputProc - Connected to idx=x.x.x.x:9997
08-11-2016 13:25:58.585 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:03.290 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:08.387 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:13.522 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:17.714 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:22.073 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:26.530 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:31.304 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 13:26:36.384 -0400 INFO TailReader - File descriptor cache is full (100), trimming...

thanks in advance .

0 Karma


Hemnaath
Motivator

Thanks lycollicott, the query worked and I got the required output.

host=splunk0* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series, host | rename sum(kb) as KB | table series, host, KB | sort host, series | appendpipe [stats sum(KB) as Total by host ] (time frame set to last 24 hours)

Today data was again not being captured on the splunk01 server for more than 3 hours; when I executed the query below with the time frame set to the last 15 minutes, I got no results.

host=splunk0* sourcetype=splunkd index=_internal "syslog"

When I verified, the Splunk status, syslog-ng service, and port were all fine, but there were still no results.
However, I noticed the info below in splunkd.log, and swap memory was 0.

08-11-2016 07:06:58.118 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_x.x.x.x_8089_splunk01.xxxx.com_splunk01.xxx.com_7xxxx1-XXXXX-XXX-XXX-XXXX
08-11-2016 07:06:58.128 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_x.x.x.x_8089_splunk01.xxxx.com_splunk01.xxx.com_7xxxx1-XXXXX-XXX-XXX-XXXX
08-11-2016 07:06:58.156 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_x.x.x.x_8089_splunk01.xxxx.com_splunk01.xxx.com_7xxxx1-XXXXX-XXX-XXX-XXXX
08-11-2016 07:07:45.496 -0400 INFO TailReader - File descriptor cache is full (100), trimming...
08-11-2016 07:07:48.220 -0400 INFO TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:07:48.220 -0400 INFO TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:08:17.406 -0400 INFO TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:08:17.406 -0400 INFO TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:08:47.566 -0400 INFO TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:08:47.566 -0400 INFO TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:08:52.863 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-nessus/bin/nessus2splunk.py" usage: nessus2splunk.py [-h] [-s SRCDIR] [-t TGTDIR]
08-11-2016 07:08:52.863 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-nessus/bin/nessus2splunk.py" nessus2splunk.py: error: argument -s/--srcdir: Invalid path specified ($SPLUNK_HOME may not be set).
08-11-2016 07:09:17.565 -0400 INFO TcpOutputProc - Closing stream for idx=X.X.X.45:9997
08-11-2016 07:09:17.565 -0400 INFO TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:09:47.859 -0400 INFO TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:09:47.958 -0400 INFO TcpOutputProc - Connected to idx=X.X.X.X:9997
08-11-2016 07:10:18.029 -0400 INFO TcpOutputProc - Closing stream for idx=X.X.X.X:9997
08-11-2016 07:10:18.029 -0400 INFO TcpOutputProc - Connected to idx=X.X.X.X:9997

So I restarted the Splunk service and it started working normally; now I am able to see results when executing the query.

thanks in advance.

0 Karma

lycollicott
Motivator
host=splunk0* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series, host | rename sum(kb) as KB | table series, host, KB | sort host, series | appendpipe [stats sum(KB) as Total by host ]
0 Karma

lycollicott
Motivator
host=splunk0* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series, host
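And if you want to watch it per hour (for the earlier /opt monitoring question), a timechart variant of the same data should work, e.g.:

host=splunk0* sourcetype=splunkd index=_internal "syslog" | timechart span=1h sum(kb) by host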
0 Karma

Hemnaath
Motivator

Thanks lycollicott. We are now getting the amount of data ingested on both servers 01 and 02. I added the addcoltotals labelfield=series command to get the total amount of syslog data per day.

host=splunk01* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series | addcoltotals labelfield=series

From server 01 the total amount of syslog data is about 15895692.557388 KB, which is roughly 15.9 GB for the last 24 hours.

From server 02 the total amount of syslog data is about 18893695.258755 KB, which is roughly 18.9 GB. It seems more traffic is routed to server 02, and the search took some time there, unlike on HF splunk01.

Hey, I tried the query below, but it was difficult to identify which server the data is being ingested from. Is it possible to show both servers 01 and 02 in separate fields or columns?

host=splunk0* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series | addcoltotals labelfield=series

thanks in advance.

0 Karma

lycollicott
Motivator

I got an email notification that you had replied, but I don't see it here.

You asked: "Instead of running two separate searches, is it possible to get both results in a single query?"

Do this:

host=splunk0* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series | addcoltotals labelfield=series
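If you also want servers 01 and 02 broken out into separate columns instead of one combined list, a chart variant should do it (untested in your environment, so treat it as a suggestion):

host=splunk0* sourcetype=splunkd index=_internal "syslog" | chart sum(kb) over series by host | addcoltotals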

Now....please accept my answer and have all of your friends and colleagues up vote it. 😉

0 Karma

Hemnaath
Motivator

Thanks lycollicott, we got the output in KB, and I added addcoltotals to get the sum of all the syslog data ingested from the two servers.

host=splunk01* sourcetype=splunkd index=_internal "syslog" | stats sum(kb) by series | addcoltotals labelfield=series (time frame: last 24 hours)

From server 01: total = 17805533.451029 KB of data, which is approximately 17.8 GB ingested.
From server 02: total = 17546902.822219 KB of data, which is approximately 17.5 GB ingested.
I guess the data is now being properly distributed by the F5 load balancer.

Instead of running two separate searches, is it possible to get both results in a single query?

thanks in advance.

0 Karma