All Posts

Ok… this question and the answers are a bit older, but maybe my post can help other Splunkers. You need up to two kinds of services: Splunk (with Splunk Web) as a SH Cluster member and a Load Balancer (optional). "Optional" because you can also configure it so that User A has to use SHC Node 1, User B has to use SHC Node 2, and User C has to use SHC Node 3, or keep the other nodes as a kind of hot spare.

If you choose a Load Balancer (which makes sense outside of Dev or Test environments), it does not necessarily need to be an external one for a Search Head Cluster. A customer used a 3-node SH Cluster in production: on 2 of the nodes an additional Apache instance was installed as an LB and made highly available (HA) by moving a Virtual IP for the SH Cluster between them.

I just finished the Splunk Cluster Administration course. There they use just 3 virtual machines for a multisite cluster and a SH cluster with deployer and manager node.

Kind Regards
SierraX
Good idea, I tried it, but unfortunately it doesn't seem to work. I have this configured:

[md_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = \ _h=$1 $0
DEST_KEY = _raw

[md_subsecond_default]
SOURCE_KEY = _meta
REGEX = _subsecond=(\.\d+)
FORMAT = $1$0
DEST_KEY = _raw

[md_time_default]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1$0
DEST_KEY = _raw

And I get this:

0x0040: e073 339e e073 339e 3232 3738 205f 7473 .s3..s3.2278._ts
0x0050: 3d31 3733 3732 3739 3038 315c 205f 683d =1737279081\._h=
0x0060: 7370 6c75 6e6b 2d68 6620 5f69 6478 3d5f splunk-hf._idx=_

But I agree, this would have been the most elegant solution.
Well... apart from the obvious cheap shot at your "splint" (though I suppose it might have been auto-correct), there is still the question of how you would do it better. Remember that there are many factors at play here - the amount of available space, retention time requirements, different types of storage. The current bucket management machinery does allow for quite a bit of flexibility, but you can't just produce storage out of thin air.
Apart from @tscroggins 's solution, you could try escaping your initial space. It should show the config parser that there is a non-space character there, so your key-value pair in the config gets split properly; and since the space doesn't normally need escaping, it shouldn't hurt.
OK. Let me add my three cents to what the guys already covered to some extent. There are two separate things here.

One is the "index and forward" setting. By default Splunk receives and processes data from inputs, indexes it, and sends it to outputs (if any are defined). If you disable "index and forward", it will still process and send data, but it will not save the events to local indexes. So you disable this setting on any Splunk component which is not supposed to store data locally (in a well-engineered environment only an all-in-one server or an indexer stores indexes; all other components should forward their data to the indexer tier). A Heavy Forwarder is just a fancy name for a Splunk Enterprise (not Universal Forwarder!) instance which does not do local indexing and doesn't have any other roles (actually, if you were to nitpick, any other component like a SH or DS could technically be called a HF as well, since it processes at least its own logs and forwards them).

Another thing is the type of data. With Splunk there are three distinct "stages" of data.

Firstly you have the raw data. That's the data you receive on simple TCP/UDP inputs, read from files, pull in with modular inputs and so on. This is completely unprocessed data, exactly as it is returned by the source.

If raw data is processed at the UF, it gets "cooked" - the data stream is split into chunks (not single events yet!), each chunk is assigned some metadata (the default four - host, source, sourcetype, index) and that's it. This is the cooked data.

If raw or cooked data is processed at a HF or indexer, it gets parsed - Splunk applies all props and transforms applicable at index time (splits the stream into separate events, parses the timestamp out of the events, does all the fancy index-time mangling...). After this stage you get your data as "cooked and parsed" (often called just "parsed" for short).

If a UF receives cooked or parsed data, it just forwards it. If a HF/indexer receives already parsed data, it doesn't process it again; it just forwards/indexes it. So the data is cooked only once and parsed only once on its path to the destination index. There is one additional case - if you're using indexed extractions on a UF, it produces already cooked and parsed data.

Sending uncooked data is a very special case for when you're sending data to an external non-Splunk receiver. In this case you're actually "de-cooking" your data. But this is a fairly uncommon case.

So there you have it - a HF normally cooks and parses the data it receives (unless it's already parsed) and sends it to its outputs. So you don't need to do anything else by default to have your data cooked and parsed.
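To make the "index and forward" part concrete, here is a minimal outputs.conf sketch (not from the original reply) for a heavy forwarder that forwards everything and keeps nothing locally; the output group name and indexer addresses are placeholders:

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# disable local indexing so events are only forwarded, not stored here
[indexAndForward]
index = false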
You could adjust your approach to use a time window instead of a specific number of events:

<base search>
| eval match_time=if(<match_conditions>,_time,null())
| filldown match_time
| where _time-match_time<=<time_limit>
This is a problem I have been struggling with for years. I don't understand why the Splunk platform can't do this itself. It's even more complicated because the TSIDX files and the raw data both have compression ratios which are individual to each index, so to do this properly you need to know not only the number of days you wish to keep and the size of that data, but also the compression ratio for each of these indexes.
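Not part of the original post, but as a rough starting point for estimating those per-index ratios, a dbinspect search along these lines can compare raw data volume with size on disk (treat the output strictly as an estimate):

| dbinspect index=*
| stats sum(rawSize) as raw_bytes, sum(sizeOnDiskMB) as disk_mb by index
| eval raw_mb=round(raw_bytes/1024/1024, 2), disk_to_raw_ratio=round(disk_mb/raw_mb, 2)
| table index raw_mb disk_mb disk_to_raw_ratio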
Take a look at this example, where it sets the search property outside the initial constructor: https://dev.splunk.com/enterprise/docs/developapps/visualizedata/addsearches/searchproperties i.e.

// Update the search query
mysearch.settings.set("search", "index=_internal | head 2");
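Applied to the splQuery variable from the related SplunkJS question further down this page, a minimal (untested) sketch could be:

// create the manager with a placeholder query, then swap in the one held by the variable
var splQuery = "index=_internal | head 5"; // illustrative query
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: "| makeresults" // placeholder, replaced below
});
mysearch.settings.set("search", splQuery);
mysearch.startSearch();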
Hi @BrianLam, You can retrieve the search results using the search/v2/jobs/{search_id}/results endpoint. See https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2F.7Bsearch_id.7D.2Fresults. The search_id value is specific to the instance of the search that generated the alert. It's a simple GET request. The default output mode is XML. If you want JSON output, pass the output_mode query parameter as part of the GET request: https://splunk:8089/services/search/v2/jobs/scheduler__user__app__xxx_at_xxx_xxx/results?output_mode=json  
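For example, a quick way to test it from the command line is a curl request along these lines (host, credentials, and the sid are placeholders; -k skips certificate validation and should only be used for testing):

curl -k -u <user>:<password> "https://splunk.example.com:8089/services/search/v2/jobs/<sid>/results?output_mode=json"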
Hi @wowbaggerHU, You can use INGEST_EVAL as a workaround:

# transforms.conf
[md_host]
INGEST_EVAL = _raw:=" h=\"".host."\" "._raw
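One note that is not in the original reply: if you define this under a new stanza name instead of reusing the existing md_host one, it still has to be referenced from props.conf like any other transform. A minimal sketch, with an example sourcetype name:

# props.conf
[your_sourcetype]
TRANSFORMS-md_host = md_host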
I am using SplunkJS to display an HTML page with JavaScript. I have tried everything to get the SearchManager query to use a JavaScript variable (e.g. using splQuery, +splQuery+, etc.). If I enter the Splunk query in quotes instead of the variable, it does work.

var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: splQuery
});
Hello everyone! I am experimenting with the SC4S transforms that are posted here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/

My problem is that I am trying to reformat fields, and in one particular place I would need to ensure that a space precedes the _h= part in the transform stanza below.

[md_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = _h=$1 $0
DEST_KEY = _raw

However, if I add multiple whitespaces in the FORMAT string, right after the equals sign in the above example, they will be ignored. Should I put the whole thing between quotes? Wouldn't the quotes be included in the _raw string? What would be the right solution for this?
Hi @rahulkumar , as I said, you have to extract metadata from the json using INGEST_EVAL and then convert the original log field into _raw.

First you have to analyze your json logstash log and identify the metadata to use, then you have to create INGEST_EVAL transformations to assign the original metadata to your metadata, e.g. something like this (do adapt it to your log format):

In props.conf:

[source::http:logstash]
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_set_sourcetype_by_regex
TRANSFORMS-02 = securelog_override_raw

The first calls the metadata assignment, the second one defines the correct sourcetype, and the third one overrides _raw.

In transforms.conf:

[securelog_set_default_metadata]
INGEST_EVAL = host := coalesce( json_extract(_raw, "hostname"), json_extract(_raw, "host.name"), json_extract(_raw, "host.hostname"))

[securelog_set_sourcetype_by_regex]
INGEST_EVAL = sourcetype := case( match(_raw, "\"path\":\"/var/log/audit/audit.log\""), "linux_audit", match(_raw, "\"path\":\"/var/log/secure\""), "linux_secure")

[securelog_override_raw]
INGEST_EVAL = _raw := if( sourcetype LIKE "linux%", json_extract(_raw, "application_log"), _raw )

The first one extracts host from the json, the second one assigns the sourcetype based on information in the metadata (in the example, linux sourcetypes), and the third one takes one field of the json as _raw.

It wasn't an easy task and it was a very long job, so I suggest engaging Splunk PS or a Core Consultant who has already done it.

Ciao.
Giuseppe
Hi @danielbb , why do you want to send uncooked data from the SH? There's no reason for this!

Anyway, if you want to apply this strange thing, see https://docs.splunk.com/Documentation/Splunk/8.0.2/Forwarding/Forwarddatatothird-partysystemsd#Forward_all_data

In few words, put this in outputs.conf:

[tcpout]

[tcpout:fastlane]
server = 10.1.1.35:6996
sendCookedData = false

Ciao.
Giuseppe
It seems you have big and frequently changing files and your forwarder can't keep up with reading them. That's the problem. But the root cause can be really anywhere, depending on your infrastructure. I assume you're pushing it to the Cloud but maybe your network connection can't handle the traffic. Or your Cloud indexers can't handle the amount of data. Or you're reading the files in an inefficient way (for example - from a networked filesystem)... There can be many reasons.
I have this setting in server.conf:

parallelIngestionPipelines = 2

I am still getting the same issue.
Since you are generating these events with a script, modify the script to include a real timestamp at the beginning of the event (and, if necessary, configure the sourcetype to extract it).
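If the sourcetype does need explicit timestamp settings, a minimal props.conf sketch could look like this (the sourcetype name and format string are invented for illustration; adjust them to whatever your script actually writes):

# props.conf on the parsing tier (indexer or heavy forwarder)
[my_script_output]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30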
Need a bit more detail to know exactly what you're after, but using streamstats can give you this type of output:

index=_audit
| eval is_match=if(user="your_user", 1, 0)
| streamstats reset_before="("is_match=1")" count as event_num
| where event_num<10
| table _time user action event_num is_match
| streamstats count(eval(is_match=1)) as n
| where n>0

So here it will look in the _audit index, and then line 2 sets is_match=1 if the event matches your criteria. streamstats will then count all events following (i.e. in earlier _time order) the match, but reset to 1 when there is a new match. The where clause will then keep the last 10 events prior to the match, and then the final streamstats is simply to remove the initial set of events up to the first match. Not sure if this is what you're after doing, but hopefully it gives you some pointers.
I'm calling the API from BTP IS and want to get the result of an alert that I created before. My alert name is PRD - Daily CCS Integrations Error Report. I'm not quite sure what the correct syntax of the URL and the command to get the result is.
Splunk Transmit Security Version: 1.2.8, Build: 1 - the Inputs UI page is showing an error. Any suggestions on this? Splunk Enterprise DCN server Version: 8.1.1

ERROR: This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page