All Posts

Thanks for the response. I will try it out.
Yes. I'm just pointing it out because it's a common use case - finding something that is (not) followed by something else - and it's a bit unintuitive that Splunk by default returns results in reverse chronological order. So you sometimes need to manipulate the order of results so that "previous", in terms of carrying values further down the stream, means what you actually want. Otherwise you have to remember that you're returning the end of the interesting period, not its beginning. As I said, depending on the use case it can be confusing, so it's worth remembering to always double-check your result order when doing this kind of thing.
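As a minimal sketch of forcing chronological order before streamstats (the index, sourcetype and field names here are hypothetical):

index=main sourcetype=app_logs
| sort 0 + _time ``` force oldest-first; results normally arrive newest-first ```
| streamstats current=f last(event_type) as previous_event_type

With the sort in place, "previous" means the chronologically earlier event; without it, streamstats would carry values from the chronologically later one.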
Technically you can work with regexes defined in lookups by doing something like this:

| eval enabled=1
| lookup regex_list.csv enabled OUTPUT regex
| eval match=mvmap(regex, if(match(path, regex), regex, null()))

where your CSV contains two columns: the regex, and a column called enabled with a value of 1. This will pull ALL regexes into each event, and then mvmap will test the path against each of the regexes individually - for each match it adds the matching regex to the match field. After the mvmap, you will have a potentially multivalue field 'match' with one or more matches. If match is null, there were no matches, so | where isnotnull(match) will filter out non-matching paths. This is not using a lookup as a lookup, but simply using it as a repository of patterns which you "load" into each event during the pipeline. Depending on how many regexes you have, it may or may not be an option.
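To make the shape of the lookup concrete, a hypothetical regex_list.csv for this approach could look like this (the patterns are purely illustrative):

enabled,regex
1,(?i)powershell.*-enc
1,(?i)certutil.*-urlcache
1,(?i)\\cmd\.exe\s+/c

Every event then receives all of the patterns in the multivalue regex field, and the mvmap keeps only the ones that actually match that event's path.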
Yes, this one works because the connected event AFTER the disconnected one does not happen, resulting in count=1 for a disconnect - normally you'd get them in reverse, and in this case that would be the order needed. It rather trivialises the example, but without knowing the data it's hard to say whether it would work in all cases.
Can anyone help me with this, please?
Thank you for the details, this will help me with my current dashboard.
If you mean a Splunk Enterprise trial, you can configure TLS on any component. If you mean a Splunk Cloud trial - no. The inputs are not encrypted, and the web UI uses self-signed certs as far as I remember.
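As a rough sketch of what configuring TLS on a receiving input looks like on Splunk Enterprise (the port, paths and password are placeholders for your own certificates):

inputs.conf:
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <your_cert_password>

The sending forwarders then need a matching SSL configuration (clientCert and sslPassword) in their outputs.conf [tcpout] stanza.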
Thank you for your response. Since regex cannot be used in lookups, and we are now defining everything within correlation searches, which is cumbersome to update, are there any alternative solutions? Are there more efficient ways to detect suspicious command execution without relying solely on correlation searches? Your guidance on streamlining this process would be greatly appreciated.
The overall idea is OK, but if you want to check whether something happens _after_ an interesting event, you must reverse the original data stream, because you cannot run streamstats backwards. The example data, however, was in chronological order, while the default result sorting is the opposite. So it's all a bit confusing.
Hello @sainag_splunk, Thanks for sharing the information. Yes, I am currently using the Splunk Cloud Trial version for my POC work. Thanks.
@Splunk_Fabi Hello, which version of ES are you using? I have seen a similar bug in 7.3.2 (a fix might be on the future roadmap). If you are on 7.3.2, please file a ticket with Splunk Support to expedite the issue. If this helps, please upvote.
@rahusri2 This could be a bug in the REST endpoint on the backend '/servicesNS/admin/search/server/info' for that version/build. Are you using Splunk Cloud Trial? It's recommended to reach out to Splunk Support if this is happening in your production environment. If this helps, please upvote.
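If you want to inspect what that endpoint returns yourself, a quick check from the command line (the hostname and credentials are placeholders) is:

curl -k -u admin:yourpassword "https://localhost:8089/servicesNS/admin/search/server/info?output_mode=json"

Comparing the version/build fields in the response with what the UI reports can help confirm whether the endpoint itself is returning stale or wrong data.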
@s_s Hello, check out the queues on the HWF pipeline, and also see if you can apply async forwarding. https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat If this helps, please upvote.
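To see whether the heavy forwarder's queues are actually filling up, a common starting point against the internal metrics (replace the host value with your HWF) is:

index=_internal host=<your_hwf> source=*metrics.log group=queue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart avg(fill_pct) by name

A queue that sits at a persistently high fill percentage (parsing, typing, indexing, tcpout and so on) shows where the pipeline is backing up.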
@fatsug Hello! The last_validated_bundle differs from the active_bundle, which identifies the bundle that was most recently applied and is currently active across the peer nodes. Refer: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Updatepeerconfigurations#Use_the_CLI_to_validate_the_bundle_and_check_restart If this helps, please upvote.
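On the cluster manager you can compare the two from the CLI, for example:

splunk validate cluster-bundle --check-restart
splunk show cluster-bundle-status

The status output lists both the active bundle and the most recently validated one, so you can see exactly when they diverge.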
@kwangwon The Splunk Cloud trial version is a standalone system and uses self-signed certs. You can try using curl -k "https://ilove.splunkcloud.com:8088/services/collector" If this reply helps, please upvote.
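A slightly fuller sketch of posting a test event to that HEC endpoint (the token is a placeholder for your own HEC token):

curl -k "https://ilove.splunkcloud.com:8088/services/collector" \
  -H "Authorization: Splunk <your_hec_token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'

The -k flag skips certificate verification, which is what makes this work against the trial's self-signed certificate.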
@SteveBowser Check out these two settings:

inputs.conf:
host = $decideOnStartup

server.conf:
hostnameOption = [ fullyqualifiedname | clustername | shortname ]

If this reply helps, please upvote.
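As a concrete sketch of how those two settings fit together (the stanza placement is my assumption - verify against the inputs.conf and server.conf specs for your version):

inputs.conf on the instance:
[default]
host = $decideOnStartup

server.conf on the same instance:
[general]
hostnameOption = fullyqualifiedname

With host = $decideOnStartup, the host field is re-derived from the machine's hostname each time Splunk starts, and hostnameOption controls which form of that hostname is used.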
To build on what @MuS says, here's a simple example that simulates two data sets, the switch data (index A) and the devices data (index B); the stats command shows how to "join" the two. Everything up to the last two lines just sets up dummy data to model your example, and then the search/stats does roughly what you are looking for - you can copy/paste this straight into a search window:

| makeresults count=1000
| fields - _time
| eval SwitchID=printf("Switch%02d", random() % 5)
| eval Mac=printf("00-B0-D0-63-C2-%02d", random() % 10)
| eval index="A"
| append [
    | makeresults count=1000
    | fields - _time
    | eval r=random() % 10
    | eval Mac=printf("00-B0-D0-63-C2-%02d", r)
    | eval dhcp_host_name=printf("Host%02d", r)
    | eval index="B", source="/var/logs/devices.log"
    | fields - r ]
| eval r=random() % 10
| sort r
| fields - r
``` Now we have a shuffled mix of rows from index A and B ```
| search (index="A" SwitchID=switch01) OR (index="B" source="/var/logs/devices.log")
| stats count values(dhcp_host_name) as dhcp_host_name values(SwitchID) as SwitchID by Mac

Hope this helps
You can also do it with streamstats, using the last two lines of this example - note the field name Log_text, with the underscore in the middle, as the reset_after clause doesn't like spaces in field names.

| makeresults format=csv data="Row,Time,Log_text
1,7:00:00am,connected
2,7:30:50am,disconnected
3,7:31:30am,connected
4,8:00:10am,disconnected
5,8:10:30am,disconnected"
| eval _time=strptime(Time, "%I:%M:%S%p")
| sort - _time
| streamstats time_window=120s reset_after="("Log_text=\"disconnected\"")" count
| where count=1 AND Log_text="disconnected"
Every time we have to force replication on the SH nodes of a SH cluster, the inputs.conf replicates and overwrites the hostname. Is there any way to blacklist a .conf file by location to prevent it from replicating when you do a forced resync of the SH nodes?
Usually you don't keep your indexes on the same filesystem as your Splunk binaries and configuration. Try to add some more disk space (I prefer to use LVM on Linux) and start using Splunk volumes; with those, your life is much easier. There are many (or at least some) Answers posts where we have discussed this. You should also read more about volumes in the docs.
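For illustration, a minimal indexes.conf sketch of what volumes look like (paths and sizes are placeholders):

[volume:primary]
path = /splunkdata/indexes
maxVolumeDataSizeMB = 500000

[main]
homePath = volume:primary/main/db
coldPath = volume:primary/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb

The volume cap then applies across every index pointed at volume:primary, which is much easier to manage than per-index size limits on a cramped filesystem.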