All Posts



Use TERM(stalled), as that will help filter the initial data volume retrieved. The order of commands is important - you are doing the rex before the hour constraint - change the order. You may already have a field called date_hour (it is often extracted by default - check). If so, you can put that constraint in the search:

```
index=abc host=def* stalled (date_hour < 3 OR date_hour > 4)
| rex field=_raw "symbol (?<symbol>.*) /"
| timechart dc(symbol) span=15m
```

Replace the NOT with a positive constraint, i.e. check that the hour is < 3 or > 4, rather than NOT (> 2 AND < 4). You can't convert it to tstats unless symbol becomes an indexed field.
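For reference, if symbol were made an indexed field (e.g. via a custom indexed extraction - an assumption, not something the current data supports), a tstats version might look like this sketch. Note that date_hour is a search-time field, so the hour filter would also need rethinking:

```
| tstats dc(symbol) AS distinct_symbols WHERE index=abc host=def* BY _time span=15m
```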
Hi @gcusello

I found this post as I am trying to solve the same issue. I followed your suggestion and copied all the monitor stanzas from system\default\inputs.conf to my own file at system\local\inputs.conf, and added "disabled = 1" to all of them. Then I restarted Splunk. However, a network capture from my Splunk server still shows all the log entries being forwarded. Below is my inputs.conf file. Do you know what the issue could be? Thanks, Billy.

```
[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk]
disabled = 1
index = _internal

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\watchdog\watchdog.log*]
disabled = 1
index = _internal

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk\license_usage_summary.log]
disabled = 1
index = _telemetry

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunk_instrumentation_cloud.log*]
disabled = 1
index = _telemetry
sourcetype = splunk_cloud_telemetry

[monitor://C:\Program Files\SplunkUniversalForwarder\etc\splunk.version]
disabled = 1
_TCP_ROUTING = *
index = _internal
sourcetype = splunk_version

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk\configuration_change.log]
disabled = 1
index = _configtracker

[WinEventLog://Security]
disabled = 0
renderXml = 1
whitelist = 4624, 4634
```
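One way to verify which monitor stanzas are actually in effect on the forwarder (and which file each setting comes from) is btool. A sketch, run from the forwarder's bin directory on Windows:

```
splunk.exe btool inputs list monitor --debug
```

The --debug flag prefixes each line with the file it was resolved from, which makes it easy to spot a default stanza that is overriding your local one, or a stray "disable" where "disabled" was intended.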
Hi all, I am trying to join two queries but am unable to get the expected result. I am using the join command to extract a username from the base query and then look up the details of that username in the main query. I am also trying to accommodate time constraints here, e.g. only match a user in the main query if the time difference between when it was captured in the subquery and in the main query is within 120 seconds. I am also using multiple eval commands, and I have also tried appendcols.
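join in SPL is subject to subsearch limits and can often be replaced by stats over a single combined search, which also makes the time-difference check straightforward. A sketch of the 120-second idea - the index, sourcetype, and field names here are hypothetical stand-ins for your actual data:

```
(index=main sourcetype=main_logs) OR (index=main sourcetype=sub_logs)
| eval username=coalesce(main_user, sub_user)
| stats min(_time) as first_seen max(_time) as last_seen values(detail) as detail by username
| where last_seen - first_seen <= 120
```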
Not sure I understand the "single" bit of last(reserved) by host, as I assume there will be lots of different values for different hosts. However, you can do this:

```
... | stats values(*-*) as *-*
| foreach *-* [ eval <<MATCHSEG1>>=mvsort(mvdedup(mvappend(<<MATCHSEG1>>, '<<FIELD>>'))) | fields - <<FIELD>> ]
```
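To illustrate what the foreach template does (the field names below are made up): for each field whose name contains a hyphen, <<MATCHSEG1>> resolves to the part before the hyphen, so cpu-host1 and cpu-host2 both get merged, deduplicated, and sorted into a single cpu field:

```
| makeresults
| eval a="10", b="20"
| rename a as "cpu-host1", b as "cpu-host2"
| foreach *-* [ eval <<MATCHSEG1>>=mvsort(mvdedup(mvappend(<<MATCHSEG1>>, '<<FIELD>>'))) | fields - <<FIELD>> ]
| table cpu
```

This should leave one multivalue cpu field holding both values, with the original hyphenated fields removed.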
Preventing 'All time' is a good idea because it stops that time picker option from being used, which keeps less familiar users from running expensive searches. However, since 'All Time' simply sets earliest=0, an effectively equivalent search is still possible: someone can search 'last 10 years' or similar, or even earliest=10 (i.e. 10 seconds after the epoch), which is almost all time, but not quite, so those who "know" can get around it.
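If you want to actually enforce a limit rather than just hide the picker option, a role-based cap is one option. A sketch, assuming a role named role_user (verify the setting against the authorize.conf spec for your version):

```
# authorize.conf on the search head
[role_user]
# maximum time span of any search for this role, in seconds (~30 days)
srchTimeWin = 2592000
```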
Could you share the search? It could be that it is using characters that work fine in the UI search bar, but which get interpreted wrong when passed to the REST API.
The word MemoryError is concerning. Usually that means Python thinks it will run out of memory. Do you have a comfortable amount of RAM on the machine?
Hi. This should work. If you look at the spec file for props.conf, do you see SHOULD_LINEMERGE = true set for some unknown reason? It should be false in almost 100% of cases. r. Ismo
You could/should set up the Monitoring Console (MC) even on a single node, just like in a distributed environment. The previous .conf presentation shows you how to look at those logs at OS level if they are not delivered into your Splunk server.
Hi All, I need help troubleshooting metrics coming into sim_metrics, i.e. the SIM add-on. Splunk Observability is configured with a service "test". When I run the SIM command on the Splunk search head, I see there are metrics. But if I run mstats against the same metrics, it returns no results. It was pulling data a week ago, but not now. What are the troubleshooting steps when there is an issue like this? What points do I have to check? Summary: data is being pulled by the SIM add-on, so I see metrics when using the SIM command, but when I try mstats on the same metrics, it returns no results. Can anyone help me work out what the issue could be, and where to start troubleshooting? Regards, PNV
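A couple of checks that may help isolate whether metric data points are actually landing in the metrics index (assuming the index really is named sim_metrics, as in the post):

```
| mcatalog values(metric_name) WHERE index=sim_metrics

| mpreview index=sim_metrics target_per_timeseries=5
```

If mcatalog lists the metric names but mstats over the same time range returns nothing, double-check the time range of the mstats search and the exact metric_name spelling and dimensions used in the mstats query.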
Yup, you can do this. However I recommend making sure that your field is a string containing a comma-separated list of email addresses.
Note that you only have to accept the license once (per newly installed version), not every time you run Splunkd using systemctl. You can thus run the "./splunk start --accept-license=yes"  command once, then afterwards use "systemctl start Splunkd"
There does not seem to be such a package on PyPi. It seems to be only implemented in Splunk Enterprise. Is there a reason you are trying to use it outside of Splunk Enterprise?
Sure. When I remove that line, I get:

```
index=_internal
| table _time sourcetype
| head 5
| eval othertestfield="test2"
| collect index=summary testmode=true addtime=true
```

When setting "Last 30 days" with the time picker, it produces another 5 rows, but I only paste the first one for brevity:

_time: 2024-03-13T21:13:38.999+01:00
sourcetype: splunkd
_raw: 03/13/2024 21:13:38 +0100, info_min_time=1707692400.000, info_max_time=1710361482.000, info_search_time=1710361482.294, othertestfield=test2, orig_sourcetype=splunkd
othertestfield: test2

It seems the _time field is not set automatically to info_min_time, or else it should show something like 02/12/2024 in the _time part of the _raw field.
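One possible workaround, if the goal is to timestamp the summary event at the start of the search window rather than at run time, is to set _time explicitly before collect (collect uses the event's _time field when one is present). A sketch - the relative_time offset is just an assumption matching the 30-day picker:

```
index=_internal
| table _time sourcetype
| head 5
| eval othertestfield="test2"
| eval _time=relative_time(now(), "-30d@d")
| collect index=summary testmode=true
```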
I don't have one, as I didn't think I needed one for something this simple. I have just tried adding this, though, to no avail:

```
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```
Once the app is installed, are there any more steps that need to be taken to ensure it is applied to searches? Is there a common way to debug the app? It's hard to troubleshoot by simply editing props.conf and uninstalling and reinstalling over and over.
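Rather than reinstalling, you can edit the conf files in place and check what Splunk actually resolves. A sketch, assuming your sourcetype is called my_sourcetype (run this on the instance that parses the data, typically the indexer or heavy forwarder):

```
# show the winning configuration and which file each setting comes from
splunk btool props list my_sourcetype --debug
```

After editing $SPLUNK_HOME/etc/apps/<app>/local/props.conf directly, a restart picks up index-time changes; search-time settings can usually be reloaded without a restart.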
I have a question. I have a table that contains groups of people with their email addresses. I want to use this table in the recipients field when creating an alert, to notify users via email. To do this, I want to know if I can use $result.fieldname$ in the 'To' field when configuring the recipients.
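Yes, in principle: the 'To' field of the email action accepts tokens like $result.fieldname$, which resolve from the first result row of the alert search. So the search needs to produce a single field holding a comma-separated list of addresses. A sketch, where email_groups.csv and its group/email columns are hypothetical stand-ins for your actual lookup:

```
| inputlookup email_groups.csv where group="oncall"
| stats values(email) as emails
| eval recipients=mvjoin(emails, ",")
| table recipients
```

Then put $result.recipients$ in the 'To' field of the email alert action.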
Hi - we recently upgraded Splunk to version 9.1.3. I noticed that I cannot start Splunk using "./splunk start --accept-license=yes", forcing me to use "systemctl start Splunkd" to start Splunk. Could you please let me know how to pass --accept-license=yes with "systemctl start Splunkd"?
The search you provided isn't in the format where using Trellis makes sense.  Turn off Trellis for the Single Value visualization you are using and the "distinct_count" will disappear.  
The app with props.conf is separate from the app(s) you may be using on a UF to read data. Putting the app on the SH is my attempt to make it clear the app does not go on the UF.  It *can* be installed on the UF, but it won't have any effect there.  Yes, go to Apps->Manage apps->Uploaded Apps to install your app.