All Posts


You could adjust your approach to use a time window instead of a specific number of events:

<base search>
| eval match_time=if(<match_conditions>,_time,null())
| filldown match_time
| where _time-match_time<=<time_limit>
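As a minimal sketch of what that might look like with the placeholders filled in (the sourcetype, the status=="500" condition, and the 300-second window are all hypothetical; the sort ensures results run oldest-first so filldown carries the match time forward to the events that follow it):

sourcetype=access_combined
| sort 0 _time
| eval match_time=if(status=="500", _time, null())
| filldown match_time
| where _time - match_time <= 300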
This is a problem I have been struggling with for years. I don't understand why the Splunk platform can't do this itself. It's even more complicated because the tsidx files and the raw data both have compression ratios which are individual to each index, so to do this properly you need to know not only the number of days you wish to keep and the size of that data, but also the compression ratio for each of those indexes.
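One rough way to estimate the per-index ratio is to compare raw size with size on disk using dbinspect; this is only a sketch, and the field names and units (rawSize in bytes, sizeOnDiskMB in MB) should be double-checked against your version:

| dbinspect index=*
| stats sum(rawSize) as raw_bytes sum(sizeOnDiskMB) as disk_mb by index
| eval compression_ratio=round((disk_mb*1024*1024)/raw_bytes, 3)
| table index raw_bytes disk_mb compression_ratio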
Take a look at this example, where it sets the search property outside the initial constructor: https://dev.splunk.com/enterprise/docs/developapps/visualizedata/addsearches/searchproperties i.e.

// Update the search query
mysearch.settings.set("search", "index=_internal | head 2");
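Applied to a query held in a JavaScript variable, a minimal sketch might be (the variable and id names are illustrative):

var SearchManager = require("splunkjs/mvc/searchmanager");

var splQuery = "| makeresults";

// Create the manager first, without autostarting it
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: false
});

// Push the query in from the JavaScript variable after construction
mysearch.settings.set("search", splQuery);

// Kick off the search once the query has been set
mysearch.startSearch();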
Hi @BrianLam, You can retrieve the search results using the search/v2/jobs/{search_id}/results endpoint. See https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2F.7Bsearch_id.7D.2Fresults. The search_id value is specific to the instance of the search that generated the alert. It's a simple GET request. The default output mode is XML. If you want JSON output, pass the output_mode query parameter as part of the GET request: https://splunk:8089/services/search/v2/jobs/scheduler__user__app__xxx_at_xxx_xxx/results?output_mode=json  
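For example, from the command line the request might look like this (the host, credentials, and <search_id> are placeholders):

curl -k -u admin:changeme "https://splunk.example.com:8089/services/search/v2/jobs/<search_id>/results?output_mode=json"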
Hi @wowbaggerHU, You can use INGEST_EVAL as a workaround:

# transforms.conf
[md_host]
INGEST_EVAL = _raw:=" h=\"".host."\" "._raw
I am using SplunkJS to display an HTML page with JavaScript. I have tried everything to get the SearchManager query to use a JavaScript variable (e.g. using splQuery, +splQuery+, etc.). If I enter the Splunk query in quotes instead of the variable, it does work.

var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: splQuery
});
Hello everyone! I am experimenting with the SC4S transforms that are posted here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/ My problem is that I am trying to reformat fields, and in one particular place I need to ensure that a space precedes the _h= part in the transform stanza below.

[md_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = _h=$1 $0
DEST_KEY = _raw

However, if I add multiple whitespace characters in the FORMAT string, right after the equals sign in the above example, they are ignored. Should I put the whole thing between quotes? Wouldn't the quotes then be included in the _raw string? What would be the right solution for this?
Hi @rahulkumar , as I said, you have to extract the metadata from the JSON using INGEST_EVAL and then convert the original log field into _raw. First you have to analyze your JSON Logstash log and identify the metadata to use, then you have to create INGEST_EVAL transformations to assign the original metadata to your metadata, e.g. something like this (do adapt it to your log format):

In props.conf:

[source::http:logstash]
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_set_sourcetype_by_regex
TRANSFORMS-02 = securelog_override_raw

The first calls the metadata assignment, the second defines the correct sourcetype, the third overrides _raw.

In transforms.conf:

[securelog_set_default_metadata]
INGEST_EVAL = host := coalesce( json_extract(_raw, "hostname"), json_extract(_raw, "host.name"), json_extract(_raw, "host.hostname"))

[securelog_set_sourcetype_by_regex]
INGEST_EVAL = sourcetype := case( match(_raw, "\"path\":\"/var/log/audit/audit.log\""), "linux_audit", match(_raw, "\"path\":\"/var/log/secure\""), "linux_secure")

[securelog_override_raw]
INGEST_EVAL = _raw := if( sourcetype LIKE "linux%", json_extract(_raw, "application_log"), _raw )

The first one extracts host from the JSON, the second one assigns the sourcetype based on information in the metadata (in the example, Linux sourcetypes), and the third one takes one field of the JSON as _raw. It wasn't easy work and it was a very long job, so I suggest engaging Splunk PS or a Core Consultant that has already done it. Ciao. Giuseppe
Hi @danielbb , why do you want to send uncooked data from the SH? There's no reason for this! Anyway, if you want to apply this strange thing, see https://docs.splunk.com/Documentation/Splunk/8.0.2/Forwarding/Forwarddatatothird-partysystemsd#Forward_all_data In short, put this in outputs.conf:

[tcpout]

[tcpout:fastlane]
server = 10.1.1.35:6996
sendCookedData = false

Ciao. Giuseppe
It seems you have big and frequently changing files and your forwarder can't keep up with reading them. That's the problem. But the root cause can be really anywhere, depending on your infrastructure. I assume you're pushing it to the Cloud but maybe your network connection can't handle the traffic. Or your Cloud indexers can't handle the amount of data. Or you're reading the files in an inefficient way (for example - from a networked filesystem)... There can be many reasons.
I have this setting in server.conf:

parallelIngestionPipelines = 2

but I am still getting the same issue.
Since you are generating these events with a script, modify the script to include a real timestamp at the beginning of each event (and, if necessary, configure the sourcetype to extract it).
Need a bit more detail to know exactly what you're after, but using streamstats can give you this type of output:

index=_audit
| eval is_match=if(user="your_user", 1, 0)
| streamstats reset_before="("is_match=1")" count as event_num
| where event_num<10
| table _time user action event_num is_match
| streamstats count(eval(is_match=1)) as n
| where n>0

So here it will look in the _audit index, and then line 2 sets is_match=1 if the event matches your criteria. streamstats will then count all events following the match (i.e. in earlier _time order), but reset to 1 when there is a new match. The where clause will then keep the last 10 events prior to the match, and the final streamstats is simply there to remove the initial set of events up to the first match. Not sure if this is what you're after, but hopefully it gives you some pointers.
I'm calling the API from BTP IS and want to get the result of an alert that I created before. My alert name is PRD - Daily CCS Integrations Error Report, and I'm not quite sure what the correct syntax of the URL and command is to get the result.
Splunk Transmit Security version 1.2.8, build 1: the Inputs UI page is showing an error. Any suggestions on this? Splunk Enterprise DCN server version: 8.1.1. ERROR: "This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page."
I have been experimenting further, and found the following. This is my latest test config:

[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond=(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(time_temp), "aaa" . _raw, "bbb" . _raw)
#INGEST_EVAL = _raw=if(isnull(subsecond_temp), time_temp . " " . _raw, time_temp . subsecond_temp . " " . _raw)

Both md_time and md_subsecond are in the list, and before md_fix_subsecond. If in md_fix_subsecond I check for the null-ness of either time_temp or subsecond_temp, they are both reported as null. So for some reason they are not available in the INGEST_EVAL expression. And as they are both null, referencing them resulted in an error, and so no log was output. How could we resolve this?
I think it's time to create a support case with Splunk.
Create a props.conf stanza for the sourcetype that tells Splunk where the timestamp is and what it looks like.
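A minimal sketch of such a stanza, assuming a hypothetical sourcetype name and timestamp layout (adjust the prefix and format to match your actual events):

# props.conf
[my_script_events]
TIME_PREFIX = ^ts=
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30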
If possible, add a real timestamp to your logs, then define its position and format in props.conf. Another option is to define in props.conf that Splunk must use the current time for indexing.
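The current-time option is a single setting; a sketch, again assuming a hypothetical sourcetype name:

# props.conf
[my_script_events]
DATETIME_CONFIG = CURRENT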
Another thing which may help you is setting parallelIngestionPipelines > 1 in server.conf. This does not help with individual files, but if there are many files then it could help.
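For reference, the setting goes in the [general] stanza of server.conf; the value of 2 below is just an example, and each additional pipeline set uses extra CPU and memory:

# server.conf
[general]
parallelIngestionPipelines = 2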