
Delay in adding a large number of log files?

paterler
Explorer

I followed the instructions here http://docs.splunk.com/Documentation/Storm/latest/User/Setupauniversalforwarderonnix to add quite a lot of log files to monitor (on a virtual hosting platform). First: I'm still not sure whether inputs.conf should go in ../etc/apps/SplunkUniversalForwarder/local or ../etc/apps/search/local.

Anyway, I ended up putting it into ../etc/apps/SplunkUniversalForwarder/local, but nothing happened. I then tried to add the source using the command-line interface:

bin/splunk add monitor "location/to/logfiles/*_log" 

but I get the message "Cannot create another input with the name", so the config file seems to be in the right place.

Does it take some time for real-time log entries to appear in Splunk, or is something else wrong here?

This is the content of inputs.conf:

[monitor:///var/log/virtualmin/*_log]
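For reference, a monitor stanza can also carry explicit routing settings. A minimal sketch follows; the index and sourcetype values are illustrative assumptions, not names your deployment requires:

```
[monitor:///var/log/virtualmin/*_log]
# The settings below are optional; shown here with assumed values.
index = main              # target index (assumed; omit to use the default)
sourcetype = virtualmin   # assumed sourcetype label for these logs
disabled = false          # make sure the input is actually enabled
```

Remember that the forwarder must be restarted (bin/splunk restart) before edits to inputs.conf take effect.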

lguinn2
Legend

First, it is possible that the SplunkUniversalForwarder app is not enabled. You would be better off putting your config files in $SPLUNK_HOME/etc/apps/search/local. (I am using $SPLUNK_HOME to refer to the location where the Splunk forwarder is installed.)

Also, if you are already monitoring /var/log, you should not also monitor any of its subdirectories, because the inputs would overlap.

bin/splunk list monitor

will tell you which files Splunk is currently monitoring. If the connection between the forwarder and the indexer is good, you should see data within a few seconds beyond what network transport requires.
On the forwarder,

bin/splunk list forward-server

will tell you about the forwarder-to-indexer connection. On the indexer side, you can run this search to see if any forwarders have transmitted data:

index=_internal source=*metrics.log group=tcpin_connections 

Or my personal favorite, which summarizes the forwarded data metrics by hour and host:

index=_internal source=*metrics.log group=tcpin_connections 
| eval sourceHost=if(isnull(hostname), sourceHost,hostname) 
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf", "lightwt fwder",fwdType=="full", "heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| eval version=if(isnull(version),"pre 4.2",version)
| rename version as Ver  arch as MachType
| fields connectType sourceIp sourceHost destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server Ver MachType
| eval Indexer= splunk_server
| eval Hour=relative_time(_time,"@h")
| stats avg(tcp_KBps) as avg_TCP_KBps avg(tcp_eps) as avg_TCP_eps sum(kb) as total_KB by Hour connectType sourceIp sourceHost MachType destPort Indexer Ver
| eval avg_TCP_KBps=round(avg_TCP_KBps,3) | eval avg_TCP_eps=round(avg_TCP_eps,3)
| fieldformat Hour=strftime(Hour,"%x %H") | fieldformat total_KB=tostring(total_KB,"commas")



paterler
Explorer

Thanks. Your suspicion was right: the forwarder wasn't working because an outgoing firewall rule prevented it from sending data over port 9997. I didn't see this mentioned in the manual. I had Splunk Storm auto-configured and had to open "my" Splunk Storm port for sending syslog data; there was no mention of another port to open.
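For anyone hitting the same wall: before digging into Splunk itself, a plain TCP probe from the forwarder host can confirm whether the receiving port is reachable. A minimal sketch, assuming a Linux host with bash and coreutils; the indexer host name is a placeholder, and 9997 is only the conventional receiving port (your Storm endpoint and port may differ):

```shell
#!/bin/sh
# Probe the indexer's receiving port from the forwarder host.
HOST="splunk-indexer.example.com"   # placeholder -- substitute your indexer/Storm endpoint
PORT=9997                           # conventional Splunk receiving port; adjust as needed

# bash's /dev/tcp pseudo-device attempts a TCP connect; timeout caps the wait.
if timeout 3 bash -c "echo > /dev/tcp/$HOST/$PORT" 2>/dev/null; then
    echo "port $PORT on $HOST is reachable"
else
    echo "port $PORT on $HOST is blocked or unreachable"
fi
```

Where netcat is installed, `nc -z -w 3 HOST PORT` does the same job.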
