All Posts

I'm using the splunk-otel-collector and attempting to get multi-line Java exceptions into a standardly formatted event. Using the example, my values file contains:

    multilineConfigs:
      - namespaceName:
          value: example
        useRegexp: true
        firstEntryRegex: ^[^\s].*
        combineWith: ""

The rendered ConfigMap contains:

    - combine_field: attributes.log
      combine_with: ""
      id: example
      is_first_entry: (attributes.log) matches "^[^\\s].*"
      max_log_size: 1048576
      output: clean-up-log-record
      source_identifier: resource["com.splunk.source"]
      type: recombine

With that config, the logs continue to split. When I change the value to combineWith: "\t", the behavior changes again, as shown in the screenshots. Has anyone experienced this and worked around it?
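For comparison, here is a minimal values sketch of the shape the chart's example documents for recombining Java stack traces. This is an illustrative fragment, not a tested fix: the top-level keys assume the chart's logsCollection.containers.multilineConfigs schema, and the namespace name is a placeholder.

```yaml
# Sketch of a values.yaml fragment for the splunk-otel-collector Helm
# chart. Assumes the logsCollection.containers.multilineConfigs schema;
# the namespace and regex are illustrative.
logsCollection:
  containers:
    multilineConfigs:
      - namespaceName:
          value: example        # placeholder namespace
        useRegexp: true
        # A new event starts at any line with NO leading whitespace;
        # Java continuation lines ("at com.example...", "Caused by:")
        # are indented, so they attach to the previous line.
        firstEntryRegex: ^[^\s].*
        combineWith: ""         # join continuation lines with nothing
```

If the empty-string join is what misbehaves in your chart version, "\n" is a reasonable alternative join value to try, since it preserves the original line breaks inside the recombined event.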
Hello, I am looking to calculate how long it takes to refresh the view, using the times of the "End View Refresh" and "Start View Refresh" events, i.e. find the difference in time between these two events whenever they both occur. I have tried a number of things using streamstats and range, but none provides the desired result. Any assistance would be appreciated. Regards
The queries can be combined like this:

    index=test1 sourcetype=teams ("osversion=" OR "host=12*")
    | rex field=_raw "\s+(?<osVersion>.*?)$"
    | rex field=_raw "\w+(?<host>.*)$"
    | table "Time(utc)" "OSVersion" host

That will give you lists of OSVersions and hosts separately, but in a single table. Then compare the time values to see whether OSVersion and host appear in events with the same timestamp so they can be merged. If so, this query will do it:

    index=test1 sourcetype=teams ("osversion=" OR "host=12*")
    | rex field=_raw "\s+(?<osVersion>.*?)$"
    | rex field=_raw "\w+(?<host>.*)$"
    | stats values(*) as * by "Time(utc)"
    | table "Time(utc)" "OSVersion" host
I have two rex queries and want to know how to combine them.

Query 1:

    index=test1 sourcetype=teams
    | search "osversion="
    | rex field=_raw "\s+(?<osVersion>.*?)$"
    | table "Time(utc)" "OSVersion"

Output:

    time   osversion
    1.1    123
    1.2    1234
    1.3    12345
    1.4    123456

Query 2:

    index=test1 sourcetype=teams
    | search "host=12*"
    | rex field=_raw "\w+(?<host>.*)$"
    | table "Time(utc)" host

Output:

    time   host
    1.1    abc
    1.2    abcd
    1.3    abcde

Please help me combine the above queries so the result shows a table like this:

    time   osversion   host
    1.1    123         abc
    1.2    1234        abcd
    1.3    12345       abcde
I'm happy to hear you find it useful. Thank you for the kind words. Julio
Question with regard to "Default value change for the 'max_documents_per_batch_save' setting causes restore from KV store backups made using versions earlier than Splunk Enterprise 9.3.0 to fail". The "9.3 READ THIS FIRST" documentation says that I must restore KV store backups made using Splunk Enterprise 9.2.2 and earlier versions before upgrading to Splunk Enterprise version 9.3.0. I am new to Splunk administration and would appreciate steps (with detailed explanation) for how to accomplish this task and get to the point of upgrading Splunk from 9.2.2 to 9.3.1. This is a single-instance (one server) environment: no distributed components, no clusters. Not running ES, ITSI, or ITE Work. Thanks
    | bin span=10m _time
    | stats count by _time
    | stats count(eval(count>=10)) as count10plus, count as total
    | eval percent=100*count10plus/total
"doesnot works" (sic) is not very informative. What exactly have you tried, what are you trying to achieve, and what are you getting that does not match your expectations?
Hi all, new to Splunk and running out of ideas, please help! I have created a search:

    | bin span=10m _time
    | stats count by _time

This gives me two columns: the time interval in 10-minute bins, and the number of results within that bin. What I would like to do is expand on this search and show the percentage of bins over a time range that have >= 10 results.

Cheers
Hi, is it possible to use the same input with two different panels? It works fine with one panel, as below:

    <panel depends="$tokShowPanelB$">

But I want to use the same input with panelC too, and the following does not work:

    <panel depends="$tokShowPanelB$ , $tokShowPanelC$">

Can someone please help?
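Two Simple XML patterns worth trying here (panel contents and token names are taken from the post; the rest is an illustrative sketch, not a tested dashboard). In `depends`, tokens are comma-separated inside one attribute value, and the panel shows only when every listed token is set, so stray spaces around the `$` signs are worth removing:

```xml
<!-- Pattern 1: one panel that requires BOTH tokens to be set -->
<panel depends="$tokShowPanelB$,$tokShowPanelC$">
  <!-- panel content -->
</panel>

<!-- Pattern 2: two separate panels driven by the SAME token,
     so a single input can show/hide both -->
<panel depends="$tokShowPanelB$">
  <!-- panel B content -->
</panel>
<panel depends="$tokShowPanelB$">
  <!-- panel C content -->
</panel>
```

If the goal is "one input controls both panels," Pattern 2 is usually the simpler fit, since each panel only needs the one token the input sets.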
Hi Team, I'm trying to trigger an AutoSys job based on an alert we received in Splunk. Any idea how to achieve it?
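One common approach is a custom alert action that shells out to the AutoSys CLI. The sketch below is illustrative, not a definitive integration: the script name, job name, and dry-run behavior are assumptions, and it presumes the AutoSys `sendevent` command is installed and its environment sourced on the Splunk host.

```shell
#!/bin/sh
# Hypothetical Splunk alert-action script (e.g. trigger_autosys.sh).
# Assumes the AutoSys CLI (sendevent) is on PATH for the splunk user.

JOB_NAME="${1:-MY_AUTOSYS_JOB}"   # placeholder job name

# Build the AutoSys command that force-starts the given job.
build_sendevent() {
    printf 'sendevent -E FORCE_STARTJOB -J %s' "$1"
}

CMD=$(build_sendevent "$JOB_NAME")

# Dry run: print the command instead of firing the event.
echo "Would run: $CMD"
# Uncomment to actually trigger the job:
# $CMD
```

If shelling out from the search head is not acceptable, an alternative is to have Splunk call a webhook or middleware that in turn invokes the AutoSys API, keeping AutoSys credentials off the Splunk host.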
I can't really see anything wrong, but I dislike the following in /opt/splunk/etc/system/local/props.conf:

    KV_MODE = json

Since I see it in several of the various splunkd* stanzas, it makes me think it was set in local under a default stanza. I personally would look to remove that, but keep in mind that if this fixes the internal log extraction it will break something else that needs the JSON configuration. I've always tried to create custom apps and place any default overrides in the custom app rather than allow anything to fall into ./splunk/etc/system/local/*.conf.
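The custom-app approach mentioned above might look like this (the app name and sourcetype are hypothetical placeholders):

```
# $SPLUNK_HOME/etc/apps/my_custom_conf/local/props.conf
# (hypothetical app name)

# Scope the override to the specific sourcetype that needs it,
# rather than a bare [default] stanza in system/local, which
# would apply KV_MODE = json to everything -- including internal logs.
[your_sourcetype]
KV_MODE = json
```

Keeping overrides in a named app also makes them easy to audit and to remove later, unlike edits buried in system/local.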
The owner field is the current owner of a knowledge object and is used for enforcing permissions and capabilities. Unless the index is created via the GUI, the value is likely to default to 'system' or a similar generic term. Even if it is created via the GUI, once that user departs the organization their account should be disabled/deleted, which risks leaving the object orphaned, so the object should be migrated to a generic ID or a different user. I don't see any automated method of pulling the information you want from a REST call, given that the owner can change and the creation date is likely just the earliest event in the index, which is not reliable. Previously I would have an app just to define indexes, pushed to the indexer tier from the cluster manager. After the index stanza you can add comments recording the information you want, but you wouldn't be able to view that from a REST call.
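The commented-metadata pattern described above might look like this (the index name, paths, and comment contents are illustrative):

```
# indexes.conf in a deployment app pushed from the cluster manager
[my_app_index]
homePath   = $SPLUNK_DB/my_app_index/db
coldPath   = $SPLUNK_DB/my_app_index/colddb
thawedPath = $SPLUNK_DB/my_app_index/thaweddb
# requested-by: data-platform team
# created: 2024-03-01 (example date)
```

The comments travel with the config in version control, so "who asked for this index and when" survives even after the requesting user's account is gone, at the cost of not being queryable over REST.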
Please try removing the double quotes from TIME_FORMAT:

    TIME_FORMAT = %d/%m/%Y %H:%M:%S

If that isn't working, check the btool output for this source/host/sourcetype for any DATETIME_CONFIG setting in your props.conf. Hope this helps.
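Putting the pieces from this thread together, a corrected stanza would look roughly like this (the sourcetype name is a placeholder; strptime format strings in props.conf are written without surrounding quotes):

```
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# TIME_PREFIX is a regex, so = and " are escaped
TIME_PREFIX = time\=\"
# no quotes around the strptime pattern
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 27
```

With quotes present, the format string would have to literally match a `"` character at the start of the timestamp, which is why extraction falls back to ingest time.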
It's not clear if you are speaking about patching Splunk application servers or just other servers in your environment. Any server hosting a Splunk function will report into the DMC, and that should be your source of truth about how the Splunk application is functioning after a server patch. Other servers in your environment should be monitored based upon your own requirements and concepts of critical functions; that really lies outside the topics of this community answer board.
For the past 2 days I've been trying to figure something out. I'll try to be as clear as possible, and hopefully someone can guide me or explain why this works like this. I'm trying to index a CSV file stored in S3, but unfortunately the sourcetype aws:s3:csv is not indexing the file properly, meaning it is not extracting any fields (see the left screenshot in the attached file). I've modified the sourcetype aws:s3:csv (under the Splunk Add-on for AWS application) and configured it exactly like the default csv sourcetype (under system/default/props.conf). After doing this, if I index a file manually via "Settings/Add data" it is indexed properly (fields are extracted), but if the very same file is indexed by the Splunk Add-on for AWS, again configured with the same sourcetype, there are no extracted fields. See the attached screenshot for reference. I've also tried adding other configurations to the unmodified aws:s3:csv sourcetype, such as INDEXED_EXTRACTIONS = CSV, HEADER_FIELD_LINE_NUMBER = 1, and FIELD_NAMES = field1,field2,field3, plus various other settings in props.conf (under the Splunk Add-on for AWS), but without success. The only workaround is to use REPORT-extract_fields in props.conf for that sourcetype and configure it in transforms.conf, but this is not ideal. Additionally, I've set the sourcetype to csv (the default Splunk sourcetype) in inputs.conf, but this also does not work.
Splunk 9.2.1
Splunk Add-on for AWS 7.7.0

Similar questions without a proper answer:
https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Add-on-for-Amazon-Web-Services-How-to-get-a-CSV-file/m-p/131725
https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Add-on-for-AWS-Ingesting-csv-files-and-he-fields-are-not/td-p/656923
https://community.splunk.com/t5/All-Apps-and-Add-ons/S3-bucket-with-CSV-files-not-extracting-fields-at-index-time/m-p/458671
https://community.splunk.com/t5/Getting-Data-In/No-fields-or-timestamps-extracted-when-indexing-TSV-from-S3/m-p/660436
https://community.splunk.com/t5/Getting-Data-In/Why-is-CSV-data-not-getting-parsed-while-being-monitored-on/td-p/275515
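For reference, the indexed-extractions attempt described in the post would look roughly like this in props.conf (illustrative only; this is the configuration the poster reports did not take effect when data arrived through the add-on, and the field names are from the post):

```
# props.conf (in the Splunk Add-on for AWS app)
[aws:s3:csv]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
FIELD_NAMES = field1,field2,field3
```

One thing worth checking with INDEXED_EXTRACTIONS settings is where parsing happens: they must be present on the instance that first touches the data (the forwarder or the modular-input host), not only on the indexers, which may explain why manual upload works while the add-on path does not.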
That's what i'm finding as well.  I'm curious if there's a round-about way to do this.  Maybe using that string as a token in a dashboard?
Hi, I modified the props.conf as recommended and see no change; time is still being taken as ingest time:

    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = time\="
    TIME_FORMAT = "%d/%m/%Y %H:%M:%S"
    MAX_TIMESTAMP_LOOKAHEAD = 27
    CHARSET = UTF-8
    KV_MODE = none
    DISABLED = false

Any other ideas?
TIME_PREFIX is a regex match, and those can get touchy sometimes. I would force the = and the " to be escaped, so: TIME_PREFIX = time\=\". Then I would take advantage of MAX_TIMESTAMP_LOOKAHEAD; although it should be inherited from the default, I always like to set it in my app when I have multiple timestamps in the raw data.
Thank you for your response. I am uploading the btool output for splunkd.