All Posts

I have the below search and I'm trying to search for different time periods within each search clause. For example, msg="*Completed *" uses the timepicker input. I would like to search for data one hour before the timepicker selection (so this should be dynamic) for msg="*First *", but I'm not sure if this is possible. I'm comparing these two searches, and the initial log msg="*First *" can occur several minutes before the msg="*Completed *" log, so when I compare them, some of these log messages get cut off depending on what I select in my timepicker. I would like to search for those messages 1 hour before my timepicker selection. Long term this search will go into a Splunk dashboard.

(index=color name IN ("green","blue") msg="*First *" ```earliest="11/09/2023:09:00:00" latest="11/09/2023:12:59:59"```) OR (index=color name IN ("blue2","green2") msg="*Completed *")
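One hedged way to sketch this, assuming a concrete (non-All-time) timepicker range: a subsearch with addinfo can return earliest/latest fields that shift the window for the first clause only, since info_min_time/info_max_time reflect the timepicker boundaries. This pattern is an assumption on my part, not a confirmed answer from the thread:

(index=color name IN ("green","blue") msg="*First *"
    [| makeresults
    | addinfo
    ``` widen the start of this clause's window by one hour ```
    | eval earliest=info_min_time-3600, latest=info_max_time
    | table earliest latest ])
OR (index=color name IN ("blue2","green2") msg="*Completed *")

Because the subsearch inherits the outer time range, the dashboard timepicker still drives both clauses; only the first clause's earliest boundary is moved back 3600 seconds.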
When asking a data analytics question, data should be the first thing to describe. It would help a lot if you could illustrate relevant sample data (anonymized as needed, but preserving the necessary characteristics) in text to support your implied conclusion that Splunk is not giving you two rows. How did you convince yourself that dsnames0 must have the value "write" in addition to the "read" that your screenshot shows? (Pro tip: whenever possible, illustrate results in text as well.) Ultimately, what are you trying to achieve by splitting values into values0 and values1, and dsnames into dsnames0 and dsnames1? Your stats only uses values0 and dsnames0. Is it possible that only dsnames1 contains the value "write"? Again, a clear (text) illustration of the input data would clarify many of these points, and not force volunteers to read your mind.
OK. Look at this:

props.conf:

[test_sourcetype_to_recast]
# Order of transform classes is crucial! You can do the same by listing multiple transforms in one class
TRANSFORMS-0_extract_host = picklerick_extract_host
TRANSFORMS-1_extract_source = picklerick_extract_source
TRANSFORMS-2_recast_sourcetype = picklerick_recast_sourcetype
TRANSFORMS-3_drop_dead = picklerick_drop_dead

[destination_sourcetype]
TRANSFORMS-conditional_host_overwrite = conditional_host_overwrite
TRANSFORMS-cut_most_of_the_event = cut_most_of_the_event

transforms.conf:

[picklerick_drop_dead]
REGEX = sourcetype:destination_sourcetype
DEST_KEY = queue
FORMAT = nullQueue

[picklerick_recast_sourcetype]
REGEX = sourcetype:destination_sourcetype
CLONE_SOURCETYPE = destination_sourcetype

[picklerick_extract_source]
REGEX = source:(\w*)
FORMAT = source::$1
DEST_KEY = MetaData:Source
WRITE_META = true

[picklerick_extract_host]
REGEX = host:(\w*)
FORMAT = host::$1
DEST_KEY = MetaData:Host
WRITE_META = true

[conditional_host_overwrite]
REGEX = desthost=(\w*)
FORMAT = host::$1
DEST_KEY = MetaData:Host
WRITE_META = true

[cut_most_of_the_event]
REGEX = .*:([^:]*)$
FORMAT = $1
DEST_KEY = _raw
WRITE_META = true

Now if I do this:

curl -H "Authorization: Splunk my_token" http://my_host:8088/services/collector/event -d '{"index":"test1","host":"original_host","source":"original_host","sourcetype":"test_sourcetype_to_recast","event":"sourcetype:destination_sourcetype,source:destination_source,host:host1,source:source1:event:desthost=destination_host:This should be left at the end"}'

I will get this in my index. As you can see, the original values of the source and host fields posted with the event to HEC were completely discarded and were rewritten from the contents of the event. Then the event was cloned as the "destination_sourcetype" sourcetype, where the host field was again rewritten, this time with the value provided in the "desthost=" part of the event. And finally the original event was discarded and the cloned event was cut short, leaving only the part from the last colon to the end. Kinda ugly, kinda complicated. I'm not sure whether you could do a similar thing with ingest actions, by the way, but I'm no expert with those. One thing though - there is something not entirely right with the cloning transform, because it seems to be cloning all events, not just those matching the regex. I suppose that transform is lacking something. EDIT: Argh. I can point myself to my own post: https://community.splunk.com/t5/Getting-Data-In/Use-CLONE-SOURCETYPE-only-for-matching-events/m-p/667300/highlight/true#M111919 - CLONE_SOURCETYPE is always applied to all events, so you'd have to further filter cloned/non-cloned events to leave only the matching ones in the proper pipeline. It's getting more and more complicated. Maybe it's worth fighting to get the data in a reasonable format?
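A hedged sketch of the extra filtering that linked post implies, using the standard send-everything-to-nullQueue-then-rescue routing pattern on the cloned sourcetype. The stanza names here are made up, this assumes clones should only be kept when the event carries the sourcetype: marker, and these transforms would be merged with the [destination_sourcetype] settings above:

props.conf:

[destination_sourcetype]
TRANSFORMS-00_drop_all_clones = clone_setnull
TRANSFORMS-01_keep_matching_clones = clone_keep_matching

transforms.conf:

[clone_setnull]
# First route every cloned event to the nullQueue...
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[clone_keep_matching]
# ...then rescue only the events that actually match
REGEX = sourcetype:destination_sourcetype
DEST_KEY = queue
FORMAT = indexQueue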
Thanks for your response. I am using c (count) since each user may have multiple success or failure events. When I choose "Visualization" > "Pie", the pie renders as one metric. However, below the pie, I can see that the numbers for both events are correct. Somehow the pie does not reflect the numbers.
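For a pie, the visualization expects one category column and one numeric column. A minimal sketch, assuming a hypothetical field outcome holding "success"/"failure" (the actual search isn't shown in the thread, so every name here is a placeholder):

index=auth ``` placeholder base search ```
| stats c by outcome

If the existing search instead yields a single row with separate success/failure count columns, appending | transpose converts those columns into the category/value rows the pie visualization can render.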
Hi, I have two problems with a log line.

1) I have a log line that is occasionally inserted. It is a schedule, and I wish to extract the data from it. The entry has values that are eventTitle= However, Splunk is only pulling the first occurrence from the log line and ignoring the rest, so I get:

eventTitle=BooRadley

in my fields, instead of:

eventTitle=BooRadley
eventTitle=REGGAE-2
eventTitle=CHRISTIAN MISSION

I have tried using regex and | kv pairdelim="=", kvdelim="," I am unsure if a line break would work, as they are referenced to SArts - this is a field extracted via regex and changes.

2) The log line is about 9999 characters long with spaces, and not all of the log line is ingested - I think I need to create a limits.conf file?

Below is an abridged extract of the log line:

20231117154211 [18080-exec-9] INFO EventConversionService () - SArts: VUpdate(system=GRP1-VIPE, channelCode=UH, type=NextEvents, events=[Event(onAir=true, eventNumber=725538339, utcStartDateTime=2023-11-17T15:42:10.160Z, duration=00:00:05.000, eventTitle=BooRadley, contentType=Prog ), Event(onAir=false, eventNumber=725538313, utcStartDateTime=2023-11-17T15:42:15.160Z, duration=00:00:02.000, eventTitle= REGGAE-2, contentType=Bumper), Event(onAir=false, eventNumber=725538320, utcStartDateTime=2023-11-17T15:42:17.160Z, duration=00:01:30.000, eventTitle=CHRISITAN MISSION , contentType=Commercial), Event…

This is my code so far:

| rex "\-\s+(?<channel_name>.+)\:\sVUpdate"
| stats values(eventNumber) by channel_name channelCode utcStartDateTime eventTitle duration
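A hedged sketch addressing both problems. For 1), rex with max_match=0 keeps matching past the first occurrence and returns eventTitle as a multivalue field; for 2), the line-length cutoff is the TRUNCATE setting (default 10000 characters), which lives in props.conf on the indexer or heavy forwarder rather than limits.conf. The sourcetype name below is a placeholder:

| rex max_match=0 "eventTitle=\s*(?<eventTitle>[^,]+),"

props.conf:

[your_sourcetype]
# Hypothetical sourcetype name; raise the default 10000-character limit
TRUNCATE = 50000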
We are using this license: Splunk Enterprise Term License - No Enforcement 6.5. I am an administrator; when I try to create a new alert, I get "server error". Also, when I check the splunkd log, I see the following:

11-17-2023 11:03:02.381 +0000 ERROR AdminManager - Argument "app" is not supported by this handler.

I investigated all of this after seeing these warnings in the scheduler.log:

11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Proxy NGINX Errors Alert"
11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Proxy issue"
11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Failed linux logins Clone8"

I also checked the license manager; sometimes we are exceeding the quota, but as far as I investigated, this doesn't remove the alerting functionality...
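Those scheduler warnings suggest the saved searches are owned by "nobody", which the scheduler refuses to inherit a schedule for. A hedged sketch of reassigning an owner over the REST API (host, credentials, and the new owner are placeholders, and this is an assumed diagnosis, not confirmed from the thread):

curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/nobody/search/saved/searches/Proxy%20issue/acl \
    -d owner=some_admin_user \
    -d sharing=app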
I am trying to output two rows of data, "read" and "write", with both of them having the min, max, and avg of some values. Currently I am only able to display one row, and I don't know Splunk well enough to use the other set of spath variables to display the other row. This is my search and output.

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values0 path=values{0}
| spath output=values1 path=values{1}
| spath output=dsnames0 path=dsnames{0}
| spath output=dsnames1 path=dsnames{1}
| stats min(values0) as min max(values0) as max avg(values0) as avg by dsnames0
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)
Can anyone help with my request?
Turns out the required approach was different from what I had imagined, and in fact rather simpler. What I needed to do was:

1. Load my data file (in this case a sample log file).

2. Set up my index:

curl -k -u <user>:<password> https://localhost:8089/servicesNS/admin/search/data/indexes -d name=<index-name>

3. Monitor the log directory, assigning to it the required source type:

curl -k -u <user>:<password> https://localhost:8089/servicesNS/nobody/search/data/inputs/monitor -d name="/path/to/my/logs" -d index=<index-name> -d host=<host-name> -d sourcetype=<required-source-type>

All events from that source will be assigned the required source type.
Have you already managed to get it? I need to do the same for a client. Thank you in advance.
None of those. The SEDCMD setting must be on the indexer(s) and/or heavy forwarders. It should go in the stanza for the sourcetype it applies to (if that stanza is in a default directory, then put the setting in the associated local directory).
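A hedged sketch of what that looks like in props.conf on the indexer or heavy forwarder (the sourcetype name and sed expression are placeholders for illustration):

[your:sourcetype]
# Mask anything that looks like a 16-digit card number before it is indexed
SEDCMD-mask_card = s/\d{16}/################/g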
A [monitor] stanza reads a file and indexes new data written to that file. A [script] stanza runs a script and indexes the output of it.
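To make the contrast concrete, a hedged pair of inputs.conf stanzas (the log path is a placeholder; the script stanza mirrors the one shown elsewhere in this thread):

[monitor:///opt/app/logs/abc.log]
# Indexed again whenever new lines are appended to the file
index = xyz

[script://./bin/abc.sh]
# Runs the script every 500 seconds and indexes whatever it prints
interval = 500
index = xyz
sourcetype = script:abc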
I wouldn't say that at all.  One of the features of KVStore is to replace large lookup files.
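A hedged sketch of defining a KV store-backed lookup (the collection and field names are hypothetical):

collections.conf:

[assets_collection]

transforms.conf:

[assets_lookup]
external_type = kvstore
collection = assets_collection
fields_list = _key, ip, host, owner

Once defined, | lookup assets_lookup ip OUTPUT host owner behaves like a file-based lookup, and the same definition can back an automatic lookup via a LOOKUP- setting in props.conf.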
Device_ID : 1 A.txt
2021-07-06 23:30:34.2379| Started!
2021-07-06 23:30:34.6808|3333|-0.051|0.051|0.008|0.016

Device_ID : 1 E.txt
2021-07-13 18:28:26.7769|**
2021-07-13 18:28:27.1363|aa

Device_ID : 2 E.txt
2016-03-02 13:56:06.9283|**
2016-03-02 13:56:07.3333|ff

Device_ID : 2 A.txt
2020-03-02 13:42:30.0111| Started!
2020-03-02 13:42:30.0111|444|-0.051|0.051|0.008|0.016

Query:

index="xx" source="*A.txt"
| eval Device_ID=mvindex(split(source,"/"),5)
| reverse
| table Device_ID _raw
| rex field=_raw "(?<timestamp>[^|]+)\|(?<Probe_ID>[^|]+)"
| table Device_ID timestamp Probe_ID
| rex mode=sed field=timestamp "s/\\\\x00/ /g"
| table Device_ID timestamp Probe_ID
| eval time=strptime(timestamp,"%F %T.%4N")
| streamstats global=f max(time) as latest_time by Device_ID
| where time >= latest_time
| eval _time=strptime(timestamp,"%Y-%m-%d %H:%M:%S.%4N")
| table Device_ID _time Probe_ID
| join type=left Device_ID
    [ search index="xx" source="*E.txt"
    | eval Device_ID=mvindex(split(source,"/"),5)
    | reverse
    | rex field=_raw "(?<timestamp>[^|]+)"
    | stats first(timestamp) as earliesttime last(timestamp) as latesttime by Device_ID
    | table Device_ID earliesttime latesttime ]
| where _time >= strptime(earliesttime, "%Y-%m-%d %H:%M:%S.%4N") AND _time <= strptime(latesttime, "%Y-%m-%d %H:%M:%S.%4N")
| search Device_ID="1"

I am filtering events in A.txt based on the earliest timestamp in E.txt. It works for Device_ID 1 but not for Device_ID 2. Both logs have the same format, yet the search does not generate the earliest and latest timestamps for Device_ID 2. If I run the subsearch alone, it generates them.
So I changed the search with your suggestion and also added another array that it's sorting by, but it's giving me the same numbers for both read and write. I am looking to show the min, max, and avg for read and then the same for write, and they should be different. This is my current search.

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| stats min(values{}) as min max(values{}) as max avg(values{}) as avg by dsnames{}
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

This is the current output. And this is the JSON format of the events.
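A hedged sketch of one way to keep each dsnames{} entry paired with its matching values{} entry before aggregating: mvzip glues the two multivalue fields together positionally, and mvexpand splits them into one row per pair. This is an assumption about the JSON shape based on the sample shown, not a tested answer:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
``` pair each dsname ("read"/"write") with its value by position ```
| eval pairs=mvzip('dsnames{}', 'values{}')
| mvexpand pairs
| eval dsname=mvindex(split(pairs, ","), 0), value=tonumber(mvindex(split(pairs, ","), 1))
| stats min(value) as min max(value) as max avg(value) as avg by dsname
| eval min=round(min, 2), max=round(max, 2), avg=round(avg, 2)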
@AL3Z
$SPLUNK_HOME/bin/splunk btool inputs list --debug
I did not understand the difference between the two stanzas. Can you please explain?
Which one should I move to /opt/splunkforwarder/etc/system/local and edit?

/opt/splunkforwarder/etc/system/default/props.conf
/opt/splunkforwarder/etc/apps/search/default/props.conf
/opt/splunkforwarder/etc/apps/splunk_internal_metrics/default/props.conf
/opt/splunkforwarder/etc/apps/learned/local/props.conf
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_local/apps/learned/local/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/system/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/search/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/splunk_internal_metrics/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/SplunkUniversalForwarder/default/props.conf
The monitor stanza in inputs.conf looks for updates to the abc.sh file - something unlikely to happen often. To run a scripted input, use a script stanza:

[script://./bin/abc.sh]
interval = 500
index = xyz
sourcetype = script:abc
Thanks Rich! Is it a bad practice to use a KVStore for automatic lookups since they can get very large?