All Posts

I also tried adding this to the query below, but it still picked up more users from the main query, while I only want it to take into account the users I am getting from the subsearch.

| appendcols
    [ search index="a" eventName="xxx" ***other conditions here***
    | rename principal{} as User
    | where firstime > _time
    | where maxTime < _time
    | stats count by User ]
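For what it's worth, `appendcols` just pastes subsearch columns alongside the main results row by row; it does not correlate rows by user. A hedged alternative sketch (the main index name and condition placeholders are hypothetical; `index="a"`, `eventName="xxx"`, and `principal{}` are taken from the snippet above) is to use the subsearch as a filter on the main search instead:

```spl
index="main_index" <main conditions>
    [ search index="a" eventName="xxx"
    | rename principal{} as User
    | dedup User
    | fields User ]
```

The subsearch returns a list of `User` values, which Splunk expands into an OR filter on the main search, so only users found by the subsearch are considered.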
@bowesmana, any suggestions here would be great; it feels like a loop I am stuck in.
Now this is giving me results, but I want it to pick up the user from the subsearch and only fetch details from the main query where the time difference is at least 120 seconds; also, there would be multiple users.
I'm trying to write some tests, but to import the class we could not bypass `from splunk.persistconn.application import PersistantServerApplication`.
Maybe you could share your query, as there is not much anyone can suggest other than: do not use `join`, as it is not really the way to join things in Splunk.
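As an illustrative sketch of the usual join-free pattern (all index, sourcetype, and field names here are hypothetical), run both searches together and aggregate with `stats` on the common key:

```spl
(index=idx_a sourcetype=st_a) OR (index=idx_b sourcetype=st_b)
| stats values(field_from_a) as field_from_a values(field_from_b) as field_from_b by common_key
| where isnotnull(field_from_a) AND isnotnull(field_from_b)
```

The final `where` keeps only keys that appeared in both datasets, which is roughly what an inner join would give, without `join`'s subsearch limits.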
And there is a basic problem with that search anyway: you are using a field called "count", which does not exist. Your timechart will produce a field called dc(symbol). I assume that is a typo and that your real search renames dc(symbol) as count.
Use TERM(stalled), as that will help filter the initial data volume retrieved.

The order of commands is important: you are doing the rex before the hour constraint, so change the order. You may already have a field called date_hour (it is often extracted by default; check). If so, you can put that in the search:

index=abc host=def* stalled (date_hour < 3 OR date_hour > 4)
| rex field=_raw "symbol (?<symbol>.*) /"
| timechart dc(symbol) span=15m

Replace the NOT with a positive constraint, i.e. check that the hour is < 3 or > 4 rather than NOT > 2 AND < 4.

You can't convert it to tstats unless symbol becomes an indexed field.
Hi @gcusello, I found this post as I am trying to solve the same issue. I followed your suggestion and copied all the monitor stanzas from system\default\inputs.conf to my inputs file in system\local\inputs.conf, and inserted "disabled = 1" into all of them. Then I restarted Splunk. However, a network capture from my Splunk server still shows all the log entries being forwarded. Below is my inputs.conf file. Do you know what could be the issue? Thanks, Billy.

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk]
disabled = 1
index = _internal

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\watchdog\watchdog.log*]
disabled = 1
index = _internal

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk\license_usage_summary.log]
disabled = 1
index = _telemetry

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunk_instrumentation_cloud.log*]
disabled = 1
index = _telemetry
sourcetype = splunk_cloud_telemetry

[monitor://C:\Program Files\SplunkUniversalForwarder\etc\splunk.version]
disabled = 1
_TCP_ROUTING = *
index = _internal
sourcetype = splunk_version

[monitor://C:\Program Files\SplunkUniversalForwarder\var\log\splunk\configuration_change.log]
disabled = 1
index = _configtracker

[WinEventLog://Security]
disabled = 0
renderXml = 1
whitelist = 4624, 4634
Hi all, I am trying to join two queries but am unable to get the expected result. I am using the join command to extract the username from the base query and then look for the details of that username in the main query. I am also trying to accommodate time constraints here, e.g. look for a user in the main query only if the time difference between when it was captured in the subquery and in the main query is 120 seconds. I am also using multiple eval commands and have tried appendcols.
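A hedged sketch of the 120-second correlation without join (the index names, eventName values, and the assumption that the subsearch event occurs first are all guesses based on the description above; `principal{}` and the 120-second threshold are from the thread):

```spl
(index="a" eventName="xxx") OR (index="main" eventName="yyy")
| rename principal{} as User
| eval sub_time=if(eventName="xxx", _time, null())
| eventstats min(sub_time) as sub_time by User
| where eventName="yyy" AND _time - sub_time >= 120
```

`eventstats` copies each user's subsearch timestamp onto that user's main-query events, so the final `where` keeps only main-query events at least 120 seconds after the subsearch event, and it works per user when there are multiple users.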
Not sure I understand the "single" bit of last(reserved) by host, as I assume there will be lots of different values for different hosts. However, you can do this:

... | stats values(*-*) as *-*
| foreach *-*
    [ eval <<MATCHSEG1>>=mvsort(mvdedup(mvappend(<<MATCHSEG1>>, '<<FIELD>>')))
    | fields - <<FIELD>> ]
Preventing all time is a good idea because it effectively stops that time picker option from being used, so it will stop less familiar users from making poor searches. As 'All Time' sets earliest=0, if someone wants to do 'all time', it is still technically possible: you can just search 'last 10 years' or something similar, e.g. earliest=10, which is almost all time, but not quite, so those who "know" can get around it.
Could you share the search? It could be that it is using characters that work fine in the UI search bar, but which get interpreted wrong when passed to the REST API.
The word MemoryError is concerning. Usually that means Python thinks it will run out of memory. Do you have a comfortable amount of RAM on the machine?
Hi. This should work. If you look at the spec file for props.conf, do you see that SHOULD_LINEMERGE = true for some unknown reason? It should be false in almost 100% of cases. r. Ismo
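For reference, a minimal props.conf sketch with line merging disabled (the sourcetype name is a placeholder; the LINE_BREAKER shown is the usual newline pattern):

```
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```

With SHOULD_LINEMERGE = false, Splunk breaks events purely on LINE_BREAKER, which is both faster and more predictable than the line-merging heuristics.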
You could/should set up the Monitoring Console (MC) on a single node too, just like in a distributed environment. The previous .conf presentation shows you how to look at those logs at OS level if they are not delivered into your Splunk server.
Hi all, I need help troubleshooting metrics coming into sim_metrics, i.e. the SIM add-on. Splunk Observability is configured with a service "test". When I run the SIM command on the Splunk search head, I see there are metrics. But if I run the same with mstats, it does not return any result. It was pulling data a week ago, but not now. What could be the troubleshooting steps when there is an issue like this? What are the points I have to check?

Summary: data is being pulled by the SIM add-on, so I see metrics when using the SIM command. But when I try mstats on the same metrics, it returns no result. Can anyone help me with what the issue could be, and where I should start troubleshooting?

Regards, PNV
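A hedged first check, assuming the metrics land in an index named sim_metrics as the post suggests: list what metric names the metrics index actually contains, then count datapoints over time, to see whether the problem is ingestion or the mstats filter:

```spl
| mcatalog values(metric_name) WHERE index=sim_metrics
```

```spl
| mstats count(_value) WHERE index=sim_metrics AND metric_name=* span=1h
```

If mcatalog shows the metric names but mstats over your chosen time range returns nothing, the data likely stopped arriving at that point; if mcatalog shows nothing, check the index name and the add-on's ingestion.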
Yup, you can do this. However I recommend making sure that your field is a string containing a comma-separated list of email addresses.
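For instance, if the addresses arrive as a multivalue field, they can be flattened into one comma-separated string first (the field names here are hypothetical):

```spl
| eval recipients=mvjoin(recipient_list, ",")
```

mvjoin concatenates the values of a multivalue field with the given delimiter, producing the single comma-separated string described above.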
Note that you only have to accept the license once (per newly installed version), not every time you run Splunkd using systemctl. You can thus run the "./splunk start --accept-license=yes" command once, then afterwards use "systemctl start Splunkd".
There does not seem to be such a package on PyPi. It seems to be only implemented in Splunk Enterprise. Is there a reason you are trying to use it outside of Splunk Enterprise?
Sure. When I remove that line, I get:

index=_internal
| table _time sourcetype
| head 5
| eval othertestfield="test2"
| collect index=summary testmode=true addtime=true

When setting "Last 30 days" with the time picker, it produces another 5 rows, but I only paste the first one for brevity:

_time: 2024-03-13T21:13:38.999+01:00
sourcetype: splunkd
_raw: 03/13/2024 21:13:38 +0100, info_min_time=1707692400.000, info_max_time=1710361482.000, info_search_time=1710361482.294, othertestfield=test2, orig_sourcetype=splunkd
othertestfield: test2

It seems the _time field is not set automatically to info_min_time, or else it would show something like 02/12/2024 in the _time part of the _raw field.