Activity Feed
- Karma Re: In sort command I want to avoid limit of 10,000 rows without using attribute "count=0". Is there any setting for this in any conf file ? for woodcock. 09-15-2021 06:55 AM
- Posted Re: how to change timezone for splunk-add-onn for google cloud platform on Getting Data In. 09-14-2021 02:27 PM
- Karma Re: how to change timezone for splunk-add-onn for google cloud platform for jjofret. 09-14-2021 02:27 PM
- Posted Re: Transaction command is not closing transactions and is hitting the max open transaction limit (maxopentxn). on Splunk Search. 09-30-2020 08:36 AM
- Karma Re: Transaction command is not closing transactions and is hitting the max open transaction limit (maxopentxn). for niketn. 09-30-2020 08:32 AM
- Got Karma for Re: Transaction command is not closing transactions and is hitting the max open transaction limit (maxopentxn).. 06-05-2020 12:51 AM
- Got Karma for Re: After upgrading to 6.4, why are our signed certs no longer accepted in server.conf?. 06-05-2020 12:48 AM
- Got Karma for Re: After upgrade to splunk-6.4.0-f2c836328108. Receiving Error Message: "WARNING: web interface does not seem to be available!". 06-05-2020 12:48 AM
09-14-2021
02:27 PM
We are in EST and all data from Google is in UTC, so all of our data was four hours off:
index=<your gcp index>
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval delta = _indextime - _time
| table sourcetype, _time, _indextime, indextime, delta
| sort indextime desc
sourcetype                  _time                    indextime            delta
google:gcp:pubsub:message   2021-09-14 21:13:45.381  2021-09-14 17:13:48  -14397.381004
google:gcp:pubsub:message   2021-09-14 21:13:47.272  2021-09-14 17:13:47  -14400.272801
google:gcp:pubsub:message   2021-09-14 21:13:46.430  2021-09-14 17:13:47  -14399.43
jjofret, is this the change you made? Edit the file /opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/local/props.conf:
[google:billing:json]
TZ = UTC
[google:billing:csv]
TZ = UTC
[google:gcp:billing:report]
TZ = UTC
[google:gcp:pubsub:message]
TZ = UTC
[google:gcp:pubsub:audit:auth]
TZ = UTC
[google:gsuite:pubsub:audit:auth]
TZ = UTC
[google:gcp:gsuite:admin:directory:users]
TZ = UTC
[google:gcp:buckets:xmldata]
TZ = UTC
[google:gcp:buckets:jsondata]
TZ = UTC
[google:gcp:buckets:*data]
TZ = UTC
[google:gcp:compute:instance]
TZ = UTC
[google:gcp:compute:vpc_flows]
TZ = UTC
After refreshing the heavy forwarder this Splunk add-on is running on, the issue seems to be resolved.
sourcetype                  _time                    indextime            delta
google:gcp:pubsub:message   2021-09-14 17:20:38.147  2021-09-14 17:20:39  0.852020
google:gcp:pubsub:message   2021-09-14 17:20:38.146  2021-09-14 17:20:39  0.853950
google:gcp:pubsub:message   2021-09-14 17:20:38.097  2021-09-14 17:20:39  0.902150
09-30-2020
08:36 AM
I totally forgot to follow up on this, thank you for commenting. Using `stats` was the recommendation from Splunk support. Not only does using `stats` result in all the entries being reflected in the results, it is also much faster.
05-27-2020
07:37 AM
1 Karma
Worked perfectly for me. I could not picture in my mind how the transaction command could be duplicated with a stats command based on the Splunk documentation; your example illustrated it for me. I now have a faster and, more importantly, functional search. Thank you.
I was able to drop the _time bin part as the session_id is unique enough to not be duplicated in a day and I was only using the maxspan to close open transactions. My search went from:
transaction session_id maxspan=30s
to
| stats count as eventcount list(mac) AS mac, list(hostname) AS hostname, list(nas_ip) AS nas_ip, list(nas_port) AS nas_port, list(nas_id) AS nas_id, list(attr_value) AS attr_value, list(profiles) AS profiles by session_id
That allowed me to keep all the rest of the search on either side the same.
05-26-2020
10:00 AM
We recently upgraded from 7.1.2 to 8.0.3 on our on-prem Splunk Enterprise. A previously working saved search is no longer returning the correct results.
| transaction session_id maxspan=30s
Looking into it, it appears the transaction command is no longer closing transactions when the maxspan (30s) value is hit. This leaves all transactions open, and the search then ends when it hits the default maxopentxn of 5000.
I need to create transactions out of 650,000 entries (two or three lines each), so needless to say this search no longer functions. I can confirm this behavior by:
- running | stats count by closed_txn, which shows all the transactions returned as closed_txn=0
- adding maxopentxn=5500 to the transaction command, which causes the number of returned results to go from 5000 to 5500
- adding maxevents=2, which only closes some of the transactions:
closed_txn  eventcount  count
0           1           1041
0           2           4458
1           2           1654
Transactions are supposed to close when the 'closed_txn' field is set to '1', which the documentation says happens if one of the following conditions is met: maxevents, maxpause, maxspan, startswith.
https://docs.splunk.com/Documentation/Splunk/8.0.3/SearchReference/Transaction -> Memory control options -> keepevicted
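For reference, the check looks roughly like this as a sketch (base search omitted; keepevicted=true is an assumption here so that evicted, unclosed transactions stay visible to the stats command):
<your base search>
| transaction session_id maxspan=30s keepevicted=true
| stats count by closed_txn eventcount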
05-19-2020
08:19 AM
As you said, this is a really old issue. inputs.conf is created if it is missing, and it is configured to the actual local host value on first start after an install or upgrade. If you set host to $decideOnStartup in your local/inputs.conf file, it should not be overwritten by installs or upgrades. A simple restart does not create the inputs.conf file.
The original question was specifically in regard to packaging the install and deploying to servers, not about imaging a server with the Splunk install on it. As you point out, you will need a different method of configuring a server that was imaged with Splunk on it. We use that method for VDI deployments. Most of our deployments of Splunk are done using Ansible to make sure the proper config is pushed to whatever server needs it. Splunk is installed and configured on the server after first boot, not during the image creation process. Renaming servers is not allowed in our environment, so we do not have problems with renames. Your environment seems more challenging, so I expect you would have to come up with different solutions. Try using $decideOnStartup and see if that reduces some of your pain points.
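For anyone following along, the relevant setting is just this in local/inputs.conf (a minimal sketch):
[default]
host = $decideOnStartup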
05-19-2020
06:43 AM
Not sure what your deal is; insults have no place in support forums.
I am sorry I was not more clear in my answer. You said:
I want to disagree about "the better solution is to not set the host in inputs.conf".
Then you explain how setting the host in inputs.conf is a bad idea. I am not sure what your point is, or if you made a mistake when reading the question or answer. I agree my sentence is confusingly written. This is the short version of my answer again:
- Do not set the host name in inputs.conf unless you are overriding a system default.
- Set a default across the server in local/server.conf (only if you want to override the Splunk default of $HOSTNAME).
- Only set per-input overrides in the local/inputs.conf file if you need them to be different from the default, for example on heavy forwarders (see the sketch at the end of this post).
If you examine the default/server.conf file on Linux (Splunk 8.0.3) you see:
[general]
serverName=$HOSTNAME
This way, out of the box, Splunk will set host to the name of the server, and a host name change will be captured by Splunk.
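And to illustrate the per-input override mentioned above, a hypothetical stanza in local/inputs.conf on a heavy forwarder (the path and host value are made up for the example):
[monitor:///opt/inboundlogs/appliance01/*.log]
host = appliance01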
07-18-2019
10:59 AM
I would get this when trying to map LDAP groups. Try adjusting your limits in the authentication.conf:
[test_directory]
network_timeout = 20
sizelimit = 50
timelimit = 15
For me, I had to reduce sizelimit; I had it at 1000 before.
02-13-2018
08:10 AM
So it looks like the problem with this hack is that it only fixes the logging in:
web_access.log
The other logs are all unchanged:
splunkd_ui_access.log
splunkd_access.log
For example:
splunkd_ui_access.log - shows the proxy
var/log/splunk/splunkd_ui_access.log:127.0.0.1 - admin [13/Feb/2018:11:00:47.110 -0500] "GET /en-US/ HTTP/1.1" 303 105 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0" - c0027a96a8de667c7653660d00411b6e 18ms
var/log/splunk/splunkd_ui_access.log:127.0.0.1 - admin [13/Feb/2018:11:00:47.183 -0500] "GET /en-US/app/launcher HTTP/1.1" 303 110 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0" - c0027a96a8de667c7653660d00411b6e 181ms
var/log/splunk/splunkd_ui_access.log:127.0.0.1 - admin [13/Feb/2018:11:00:47.516 -0500] "GET /en-US/app/launcher/home HTTP/1.1" 200 1264 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0" - c0027a96a8de667c7653660d00411b6e 161ms
web_access.log - shows the X-Forwarded-For
var/log/splunk/web_access.log:10.32.136.60 - admin [13/Feb/2018:11:00:47.111 -0500] "GET /en-US/ HTTP/1.1" 303 105 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0" - 5a830baf1c7ff401228990 17ms
var/log/splunk/web_access.log:10.32.136.60 - admin [13/Feb/2018:11:00:47.183 -0500] "GET /en-US/app/launcher HTTP/1.1" 303 110 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0" - 5a830baf2f7ff401228a10 180ms
var/log/splunk/web_access.log:10.32.136.60 - admin [13/Feb/2018:11:00:47.517 -0500] "GET /en-US/app/launcher/home HTTP/1.1" 200 1264 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0" - 5a830baf847ff4012285d0 159ms
08-29-2016
03:10 PM
After upgrading to 6.4.1, these seem to have disappeared for us.
An additional note: verify the full.pem before trying to restart. All of the certificate BEGIN and END lines must be on their own lines. For some reason, the cert END line and key BEGIN line kept getting combined onto one line in my full.pem:
-----END CERTIFICATE----------BEGIN RSA PRIVATE KEY-----
Splunk will not start with this in the full.pem. Just separate the lines:
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
and make sure you have 5 dashes at the beginning and end.
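A quick sanity check is to list the marker lines and confirm each appears on its own line (the path shown is just an example):
grep -nE "BEGIN|END" /opt/splunk/etc/auth/full.pem
If a combined END/BEGIN marker shows up on a single numbered line, Splunk will not start.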
I finally figured it out. You have to put the certs together in this order (Comodo uses two intermediates):
Certificate
Password protected Key file
Intermediate #1
Intermediate #2
CA
into a file, for example: /opt/splunk/etc/auth/full.pem
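Something along these lines builds the combined file; the input file names are placeholders for your own certificate, key, intermediates, and CA:
# order matters: certificate, key, intermediate #1, intermediate #2, CA
cat MYSERVER.crt MYSERVER.key intermediate1.crt intermediate2.crt ca.crt > /opt/splunk/etc/auth/full.pem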
Then assign that file to sslKeysfile in server.conf. No need to mess with the caCertFile setting.
[sslConfig]
sslKeysfile = full.pem
sslKeysfilePassword = XXXXXXXXXXX
This is totally different than the Splunk web settings where you leave the key in its own file and then combine the SSL certificate and the CA certs into one file.
05-17-2016
03:15 PM
1 Karma
I am having the same issue. This is not the proxy 500 error; none of those fixes will resolve this issue, I tried them all. For me, something has changed around the requirements for the SSL certificate format used for the non-Splunk Web (splunkd) SSL. Revert to the default SSL settings in the server.conf file and your instance should start up correctly again. The SSL cert settings in your web.conf should be able to remain the same. My issue is here:
https://answers.splunk.com/answers/402988/after-upgrading-to-64-why-are-our-signed-certs-no.html
After upgrading to 6.4, Splunk Web would no longer start:
Starting splunk server daemon (splunkd)
... Done [ OK ]
Waiting for web server at https://127.0.0.1:8443 to be available....
WARNING: web interface does not seem to be available!
I thought it was an issue with the web.conf, so I ripped that out and it still would not start. I then removed the SSL settings in the server.conf and the server started normally. I re-added the web.conf and it again restarted fine. It is a Comodo cert (with their weird chain). The certificate works fine in the web interface; we use the same certificate for both. I thought maybe it wanted a password on the key file, so I added one, but that did not help. Looking at splunkd.log I see:
05-17-2016 14:06:42.214 -0400 ERROR X509 - /opt/splunk/etc/auth/MYSERVER-01.key: unable to read X509 certificate file
05-17-2016 14:06:45.156 -0400 ERROR SSLCommon - Can't read key file /opt/splunk/etc/auth/MYSERVER-01.key errno=185073780 error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch.
When the certificate is configured with the web interface, it passes all the verification checks.
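For anyone chasing the same "key values mismatch" error, comparing the modulus of the certificate and the key is a quick way to confirm whether they actually pair up (the paths here are just examples):
openssl x509 -noout -modulus -in /opt/splunk/etc/auth/MYSERVER-01.crt | openssl md5
openssl rsa -noout -modulus -in /opt/splunk/etc/auth/MYSERVER-01.key | openssl md5
If the two hashes differ, the key does not belong to that certificate.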
03-08-2016
10:59 AM
I was hoping this would not be the case. I could either update rsyslog to use something like this:
http://www.rsyslog.com/doc/v8-stable/configuration/property_replacer.html
$template doublequotelastfield,"%rawmsg:R,ERE,1,ZERO:(.*)=([^=,]+$)--end%=\"%rawmsg:R,ERE,2,ZERO:(.*)=([^=,]+$)--end%\"\n"
Which normally results in this:
<143>2016-03-08 14:58:30,800 136.167.0.15 CPPM_Proc_Stats 170 1 0 id=4540039,process_id=17,cpu_usage=0,res_mem_usage=4540,virt_mem_usage=185984,timestamp="2016-03-08 14:58:08.158684-05"
There are two side effects:
It will add ="0" to the end of any line that does not have an equals sign in it. This is extremely unlikely; I am searching for this kind of event in the old data.
Any log line that is truncated abnormally, such as:
<143>2016-03-07 18:09:52,982 yyy.yyy.yyy.yyy CPPM_Proc_Stats 390387 1 0 id=4529414,process_id=17,cpu_usage=0,res_mem_usage=3888,virt_mem_usage=188044,times
would become:
<143>2016-03-07 18:09:52,982 yyy.yyy.yyy.yyy CPPM_Proc_Stats 390387 1 0 id=4529414,process_id=17,cpu_usage=0,res_mem_usage=3888,virt_mem_usage=188044,times="0"
but that is a garbage line anyway.
Or we could update the SQL that ClearPass uses to generate the syslog data. The rsyslog approach seems to be the better option, as all of the attempts to add the quotes with concat in the SQL statements failed.
03-07-2016
04:30 PM
We are ingesting Aruba ClearPass logs. The ClearPass appliances send their syslog to a syslog server that writes the logs to disk, and those log lines are then read into Splunk. The log lines look like:
<143>2016-03-07 18:04:57,504 yyy.yyy.yyy.yyy CPPM_Dashboard_Summary 35249531 1 0 session_id=R022d8e6d-04-56de08db,req_source=RADIUS,user_name=user@local.domain,service_name=WIRELESS_LOCAL,alerts_present=0,nas_ip=xxx.xxx.xxx.xxx,nas_port=0,conn_status=Unknown,login_status=ACCEPT,error_code=0,mac_address=abcdef123456,timestamp=2016-03-07 18:03:55-05,write_timestamp=2016-03-07 18:03:56.93952-05
The last key-value pair is "write_timestamp=2016-03-07 18:03:56.93952-05", but Splunk extracts it as "write_timestamp=2016-03-07". This affects other fields as well:
<143>2016-03-07 18:09:52,982 yyy.yyy.yyy.yyy CPPM_Proc_Stats 390387 1 0 id=4529414,process_id=17,cpu_usage=0,res_mem_usage=3888,virt_mem_usage=188044,timestamp=2016-03-07 18:08:07.536779-05
"timestamp=2016-03-07 18:08:07.536779-05" becomes "timestamp=2016-03-07"
02-18-2016
10:35 AM
Any resolution? We have 2200 of these "jobs".
Wouldn't it be better to change line 38 from:
atoms = {'h': remote.name or remote.ip,
To:
atoms = {'h': inheaders.get('X-Forwarded-For', '') or remote.name or remote.ip,
This will put the X-Forwarded-For header in the h atom if it exists. That way any Splunk app, dashboard, or search that is looking at these logs would not need to be updated to look for the client IP at the end of the log line.
Also, the location of the file to edit, at least in Splunk 6.3 for 64-bit Linux, is:
$SPLUNK_HOME/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/customlogmanager.py
This seems like a critical feature for search head clusters. If you have a load balancer, or if you offload your SSL, you really need the X-Forwarded-For header in order to know where users are coming from.
06-29-2015
06:26 AM
Thanks, this is very helpful.
06-26-2015
05:39 PM
So I am going to change the source to forwarder::hostname. I can easily do this by setting the default source to forwarder::hostname in the app/local/inputs.conf file and not setting a source on the individual folder monitors.
[default]
host = myserver
source = forwarder::myserver
I really would like to use the environment variable, but this works for now.
06-26-2015
03:36 PM
1 Karma
This is on a forwarder. We have two forwarders receiving syslog from some appliances. The forwarders write the syslog to disk, and the Splunk forwarder then monitors those files. The input stanza is:
[monitor:///opt/inboundlogs/10.10.10.10/*_syslog.log]
host = 10.10.10.10
disabled = false
source = $HOSTNAME 10.10.10.10
sourcetype = vm_app
index = app_foo
The file name is /opt/inboundlogs/10.10.10.10/YYYY-MM-DD-HH_10.10.10.10_syslog.log, and now our Splunk server is filling up with distinct sources.
I have set HOSTNAME in splunk-launch.conf and can see that Splunk sees it:
/opt/splunkforwarder/bin/splunk envvars
HOSTNAME=FORWARDER-01 ; export HOSTNAME ; PATH=/opt/splunkforwarder/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/splunk/bin ; export PATH ; SPLUNK_HOME=/opt/splunkforwarder ; export SPLUNK_HOME ; SPLUNK_DB=/opt/splunkforwarder/var/lib/splunk ; export SPLUNK_DB ; SPLUNK_SERVER_NAME=SplunkForwarder ; export SPLUNK_SERVER_NAME ; SPLUNK_WEB_NAME=splunkweb ; export SPLUNK_WEB_NAME ; LD_LIBRARY_PATH=/opt/splunkforwarder/lib ; export LD_LIBRARY_PATH ; OPENSSL_CONF=/opt/splunkforwarder/openssl/openssl.cnf ; export OPENSSL_CONF ; LDAPCONF=/opt/splunkforwarder/etc/openldap/ldap.conf ; export LDAPCONF
And updated the stanza to:
[monitor:///opt/inboundlogs/10.10.10.10/*_syslog.log]
host = 10.10.10.10
disabled = false
source = $HOSTNAME 10.10.10.10
sourcetype = vm_app
index = app_foo
But on the Splunk indexer, the source is "$HOSTNAME 10.10.10.10" and not "FORWARDER-01 10.10.10.10".
I am planning on rolling this config into a Splunk app for easy management of the multiple forwarders receiving and forwarding on this syslog data, so I need the app/default/inputs.conf to be general and then set server-specific settings with environment variables in splunk-launch.conf. Using the app/local/inputs.conf to set this would suck, as there are currently eight incoming syslog streams, and more to come.
06-26-2015
02:17 PM
You are on the right track with $; just use $ with the Windows environment variable name.
[general]
serverName=$COMPUTERNAME
06-26-2015
01:50 PM
1 Karma
Just use $env_var instead of %env_var%. Even though you are on a Windows server, the format of environment variables accepted in Splunk conf files follows the Linux format of $envvar. So %COMPUTERNAME% becomes $COMPUTERNAME.
The better solution is to not set the host in the inputs.conf file at all. If you leave it unset, it will default to the value in the server.conf file. Then you set it there:
[general]
serverName=$COMPUTERNAME
06-26-2015
12:45 PM
2 Karma
I have a problem with users putting just * in text search fields. I am fine with "bob*", but "*" is too much. I would like to unset the token if someone puts in *. I have changed the input token's name and then added a conditional change, but since * matches everything, I am trying to escape it. I tried \*, but that only matches when you actually input \*
<input type="text" token="tok_user_name_check" searchWhenChanged="false">
<label>The userid</label>
<default></default>
<change>
<condition value="\*">
<unset token="tok_user_name"></unset>
</condition>
<condition value="*">
<set token="tok_user_name">$tok_user_name_check$</set>
</condition>
</change>
</input>
06-11-2015
09:55 AM
1 Karma
Clearing my browser (the latest Firefox on Windows 7) cache resolved this issue for me. The "Loading..." message appeared while creating or editing objects, like event types or dashboards. The dashboard editor would never engage: the text was editable, but the color coding never turned on. After the object was saved, I would get the redirect XML code page, but the redirect would not occur. The issue started after a restart of our test Splunk server. The interface for the prod Splunk server was never affected, only the test one. The browser must have been holding onto some piece of info that changed after the restart.