All Posts


I am using SplunkJS to display an HTML page with JavaScript. I have tried everything to get the SearchManager query to use a JavaScript variable (e.g. using splQuery, +splQuery+, etc.). If I enter the Splunk query in quotes instead of the variable, it does work.

var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: splQuery
});
Hello everyone! I am experimenting with the SC4S transforms that are posted here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/

My problem is that I am trying to reformat fields, and in one particular place I would need to ensure that a space precedes the _h= part in the transform stanza below.

[md_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = _h=$1 $0
DEST_KEY = _raw

However, if I add multiple whitespace characters in the FORMAT string, right after the equals sign in the above example, they are ignored. Should I put the whole thing between quotes? Wouldn't the quotes be included in the _raw string? What would be the right solution for this?
Hi @rahulkumar,
as I said, you have to extract metadata from the JSON using INGEST_EVAL and then convert the original log field into _raw. First you have to analyze your JSON logstash log and identify the metadata to use, then create INGEST_EVAL transformations to assign the original metadata to your metadata, e.g. something like this (do adapt it to your log format):

in props.conf:

[source::http:logstash]
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_set_sourcetype_by_regex
TRANSFORMS-02 = securelog_override_raw

The first calls the metadata assignment, the second defines the correct sourcetype, the third overrides _raw.

in transforms.conf:

[securelog_set_default_metadata]
INGEST_EVAL = host := coalesce(json_extract(_raw, "hostname"), json_extract(_raw, "host.name"), json_extract(_raw, "host.hostname"))

[securelog_set_sourcetype_by_regex]
INGEST_EVAL = sourcetype := case(match(_raw, "\"path\":\"/var/log/audit/audit.log\""), "linux_audit", match(_raw, "\"path\":\"/var/log/secure\""), "linux_secure")

[securelog_override_raw]
INGEST_EVAL = _raw := if(sourcetype LIKE "linux%", json_extract(_raw, "application_log"), _raw)

The first one extracts host from the JSON, the second one assigns the sourcetype based on information in the metadata (in the example, Linux sourcetypes), and the third one takes one field of the JSON as _raw.

It wasn't easy work and it was a very long job, so I suggest engaging Splunk PS or a Core Consultant who has already done it.

Ciao.
Giuseppe
Hi @danielbb,
why do you want to send uncooked data from the SH? There is no reason for this!
Anyway, if you want to apply this strange thing, see https://docs.splunk.com/Documentation/Splunk/8.0.2/Forwarding/Forwarddatatothird-partysystemsd#Forward_all_data

In a few words, put this in outputs.conf:

[tcpout]

[tcpout:fastlane]
server = 10.1.1.35:6996
sendCookedData = false

Ciao.
Giuseppe
It seems you have big and frequently changing files and your forwarder can't keep up with reading them. That's the problem. But the root cause can be really anywhere, depending on your infrastructure. I assume you're pushing it to the Cloud but maybe your network connection can't handle the traffic. Or your Cloud indexers can't handle the amount of data. Or you're reading the files in an inefficient way (for example - from a networked filesystem)... There can be many reasons.
I have this conf in server.conf: parallelIngestionPipelines = 2. I am still getting the same issue.
Since you are generating these events with a script, modify the script to include a real timestamp at the beginning of the event (and if necessary configure the sourcetype to extract it)
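For example, if the script writes a timestamp like 2024-05-01 12:34:56 at the start of each event, a minimal props.conf sketch could look like this (the sourcetype name and the time format here are only assumptions, adjust them to your data):

[my_script_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19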
Need a bit more detail to know exactly what you're after, but using streamstats can give you this type of output

index=_audit
| eval is_match=if(user="your_user", 1, 0)
| streamstats reset_before="("is_match=1")" count as event_num
| where event_num<10
| table _time user action event_num is_match
| streamstats count(eval(is_match=1)) as n
| where n>0

so here it will look in the _audit index and then line 2 sets is_match=1 if the event matches your criteria. streamstats will then count all events following (i.e. in earlier _time order) the match, but reset to 1 when there is a new match. The where clause will then keep the last 10 events prior to the match and then the final streamstats is simply to remove the initial set of events up to the first match. Not sure if this is what you're after doing, but hopefully it gives you some pointers.
I'm calling the API from BTP IS and want to get the result of an alert that I created before. My alert name is PRD - Daily CCS Integrations Error Report; I am not quite sure what the correct syntax of the URL and command is to get the result.
Splunk Transmit Security Version: 1.2.8, Build: 1. The inputs UI page is showing an error. Any suggestions on this? Splunk Enterprise DCN server Version: 8.1.1.

ERROR: "This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page"
I have been experimenting further, and found the following... This is my latest test config:

[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond=(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(time_temp), "aaa" . _raw, "bbb" . _raw)
#INGEST_EVAL = _raw=if(isnull(subsecond_temp), time_temp . " " . _raw, time_temp . subsecond_temp . " " . _raw)

Both md_time and md_subsecond are in the list, and before md_fix_subsecond. If in md_fix_subsecond I check for the null-ness of either time_temp or subsecond_temp, they are both reported as null. So for some reason they are not available in the INGEST_EVAL. And as they are both null, referencing them resulted in an error, and so no log was output. How could we resolve this?
I think that this time it is worth creating a support case with Splunk.
Create a props.conf stanza for the sourcetype that tells Splunk where the timestamp is and what it looks like.
If possible, add a real timestamp to your logs, then define its place and format in props.conf. Another option is to define in props.conf that Splunk must use the current time for indexing.
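For the second option, a minimal props.conf sketch could be (my_sourcetype is just a placeholder name):

[my_sourcetype]
DATETIME_CONFIG = CURRENT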
Another thing which may help you is setting parallelIngestionPipelines > 1 in server.conf. This does not help with individual files, but if there are many files it could help.
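For reference, that setting goes under the [general] stanza of server.conf on the forwarder, roughly like this (the value 2 is just an example, and a restart is needed for it to take effect):

[general]
parallelIngestionPipelines = 2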
How can I avoid it? I need the correct timestamp on each event.
Why do you want to do this? Splunk is designed to use cooked data between its components. If you really want to break your installation, you can find instructions in the outputs.conf and inputs.conf spec files and in some articles and answers.
Splunk recognizes your account id as a timestamp. When you take it as an epoch number and convert it to human-readable time, the two match.
@isoutamo but if I give the same props.conf with KV_MODE=json and distribute it to both indexers and search heads, will it lead to duplication of events, or is it fine?
Hi, I am confused about what I am getting in my logs. It looks like this:

timestamp:
environment:
event: { JSON format, under that, original: java.lang.Throwable and a list of errors }
host:
loglevel:
log file:
message: java.lang.Throwable

I am getting this type of data in the logs when I search the index. It is logstash data coming into Splunk in JSON format. Now I am confused about what to do with this data: is the data coming in fine as it is, or can I filter it further and get some other output out of it which would be meaningful for me? If there is any way to do this, please share.