All Posts

Hi @danielbb, why do you want to send uncooked data from the SH? There is no reason for this! Anyway, if you want to apply this strange thing, see https://docs.splunk.com/Documentation/Splunk/8.0.2/Forwarding/Forwarddatatothird-partysystemsd#Forward_all_data. In a few words, put this in outputs.conf:
[tcpout]
[tcpout:fastlane]
server = 10.1.1.35:6996
sendCookedData = false
Ciao. Giuseppe
It seems you have big and frequently changing files and your forwarder can't keep up with reading them. That's the problem. But the root cause can be really anywhere, depending on your infrastructure. I assume you're pushing it to the Cloud but maybe your network connection can't handle the traffic. Or your Cloud indexers can't handle the amount of data. Or you're reading the files in an inefficient way (for example - from a networked filesystem)... There can be many reasons.
I have this setting in server.conf: parallelIngestionPipelines = 2, but I am still getting the same issue.
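For reference, a minimal server.conf sketch (assuming the setting goes in the [general] stanza on the forwarder, which is where parallelIngestionPipelines is read from, and that the forwarder is restarted afterwards):

[general]
parallelIngestionPipelines = 2

Extra pipelines only help when the host has spare CPU and there are multiple files or inputs to read in parallel; a single large file is still read by a single pipeline.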
Since you are generating these events with a script, modify the script to include a real timestamp at the beginning of each event (and, if necessary, configure the sourcetype to extract it).
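As a minimal sketch (assuming a shell script and a hypothetical sourcetype name my:script:events; adjust the event text and format to whatever your script actually emits):

# in the script: prefix each line with an ISO-8601 timestamp
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) disk_usage=87%"

# props.conf for the sourcetype, so Splunk extracts that timestamp
[my:script:events]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
MAX_TIMESTAMP_LOOKAHEAD = 20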
Need a bit more detail to know exactly what you're after, but using streamstats can give you this type of output:
index=_audit
| eval is_match=if(user="your_user", 1, 0)
| streamstats reset_before="("is_match=1")" count as event_num
| where event_num<10
| table _time user action event_num is_match
| streamstats count(eval(is_match=1)) as n
| where n>0
So here it will look in the _audit index, and then line 2 sets is_match=1 if the event matches your criteria. streamstats will then count all events following the match (i.e. in earlier _time order), but reset to 1 when there is a new match. The where clause will then keep the last 10 events prior to the match, and the final streamstats is simply there to remove the initial set of events up to the first match. Not sure if this is what you're after doing, but hopefully it gives you some pointers.
I'm calling the API from BTP IS and want to get the results of an alert that I created earlier. My alert name is PRD - Daily CCS Integrations Error Report; I'm not quite sure what the correct syntax of the URL and command is to get the results.
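Not an authoritative answer, but one minimal sketch using the Splunk REST API (assuming the management port 8089, a user that can see the alert, and that the spaces in the alert name are URL-encoded; host and credentials are placeholders):

# list the triggered instances of the alert (each entry references a search job sid)
curl -k -u <user>:<password> "https://<splunk-host>:8089/services/alerts/fired_alerts/PRD%20-%20Daily%20CCS%20Integrations%20Error%20Report?output_mode=json"

# fetch the results of one of those jobs by its sid
curl -k -u <user>:<password> "https://<splunk-host>:8089/services/search/jobs/<sid>/results?output_mode=json"

From BTP IS the same two HTTP GET calls can be made with whatever HTTP adapter you are using.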
Splunk Transmit Security Version: 1.2.8, Build: 1. The inputs UI page is showing an error; any suggestions on this? Splunk Enterprise DCN server Version: 8.1.1. ERROR: This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page.
I have been experimenting further, and found the following... This is my latest test config:
[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond=(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(time_temp), "aaa" . _raw, "bbb" . _raw)
#INGEST_EVAL = _raw=if(isnull(subsecond_temp), time_temp . " " . _raw, time_temp . subsecond_temp . " " . _raw)

Both md_time and md_subsecond are in the list, and before md_fix_subsecond. If in md_fix_subsecond I check for the null-ness of either time_temp or subsecond_temp, they are both reported as null, so for some reason they are not available in the INGEST_EVAL. And as they are both null, referencing them results in an error, and so no log is output. How could we resolve this?
I think that this is the time to create a support case with Splunk.
Create a props.conf stanza for the sourcetype that tells Splunk where the timestamp is and what it looks like.
If possible, add a real timestamp to your logs, then define its position and format in props.conf. Another option is to define in props.conf that Splunk must use the current time for indexing.
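A minimal props.conf sketch of both options (the sourcetype name my:events and the ISO-8601 timestamp format are assumptions; adjust them to your data):

# Option 1: extract a real timestamp from the start of each event
[my:events]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 30

# Option 2: ignore the event content and use the current (index) time
# [my:events]
# DATETIME_CONFIG = CURRENT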
Another thing which may help you is setting parallelIngestionPipelines > 1 in server.conf. This does not help with an individual file, but if there are many files then it could help.
How can I avoid it? I need the correct timestamp on each event.
Why do you want to do this? Splunk is designed to use cooked data between its components. If you really want to break your installation, you can find instructions in the outputs.conf and inputs.conf files and in some articles and answers.
Splunk recognizes your account id as a timestamp. If you take it as an epoch number and convert it to human-readable form, the values match.
@isoutamo but if I give the same props.conf with KV_MODE=json and distribute it to both indexers and search heads, will it lead to duplication of events, or is it fine?
Hi, I am confused about what I am getting in the logs. It looks like this:
timestamp:
environment:
event: { JSON format, under that original: java.lang.throwable and a list of errors }
host:
loglevel:
log file:
message: java.lang.throwable
I am getting this type of data when I search the index. It is Logstash data coming into Splunk in JSON format. Now I am confused about what to do with this data: is the data coming in fine, or can I filter it further and get some other output out of it which would be meaningful for me? If there is any way to do this, please share.
Splunk support came back and stated this is a known issue: the 9.4.0 update has an issue with the Splunk DB Connect app. The workaround was time-consuming, but finally everything is back up and running. I had to manually go into:
/opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf
...and comment out each line with:
tail_rising_column_init_ckpt_value
checkpoint_key
Then restart Splunk, then go into each INPUT config and manually reset the checkpoint value to what was recorded in the tail_rising_column_init_ckpt_value setting. It took forever, but after doing all that and another Splunk restart, only then did all the issues go away. Also noted that the 9.4.0 update removes the legacy tail_rising_column_init_ckpt_value from the db_inputs.conf file, as it is now stored in the KV store, and since the KV store has been updated with the 9.4.0 update, that was the overall issue. Just yet another mess that Splunk updates have caused, but at least support is aware, and they are working hard to properly fix it.
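For illustration only, a hypothetical db_inputs.conf stanza after the workaround has been applied (the stanza name is made up and the other settings are omitted; only the two commented-out lines reflect the change described above):

[my_rising_column_input]
# ...other input settings left unchanged...
# checkpoint_key = <value recorded before the upgrade>
# tail_rising_column_init_ckpt_value = <value recorded before the upgrade>

After restarting Splunk, the checkpoint for each input is then reset manually in the DB Connect input UI to the value that was recorded in tail_rising_column_init_ckpt_value.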
Splunk is trying to find a timestamp in your events. Unfortunately, your account id looks like the internal representation of a date/time, i.e. the number of seconds since 1st Jan 1970, so Splunk assigns the timestamp accordingly.
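As a quick illustration (the account id below is made up), this run-anywhere search shows the conversion Splunk is effectively doing:

| makeresults
| eval account_id=1587512345
| eval interpreted_as_time=strftime(account_id, "%Y-%m-%d %H:%M:%S")

A ten-digit account id falls right in the range of plausible epoch timestamps, which is why it gets picked up.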
@sc_admin11 can you run btool and check props.conf? /opt/splunkforwarder/bin/splunk btool props list --debug