All Posts

Try something like this:

index=* NOT index=_* host=*
| stats count by host
| append [| makeresults | eval host=split("host1,host2,host3....host36",",") | mvexpand host | eval count=1]
| stats sum(count) AS event_count by host
| eval status=if(event_count = 1, "Missing", "Available")
| table host status
| sort status, host
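The trick here is that the subsearch seeds one synthetic event with count=1 for every expected host, so after summing, any host left at event_count=1 had no real events. A minimal sketch of just that mechanic, using made-up hosts hostA and hostB (not taken from the thread):

| makeresults
| eval host=split("hostA,hostB",","), count=1
| mvexpand host
| append [| makeresults | eval host="hostA", count=5 ``` pretend hostA had 5 real events ```]
| stats sum(count) AS event_count by host
| eval status=if(event_count = 1, "Missing", "Available")

hostA ends up with event_count=6 and shows Available; hostB stays at 1 and shows Missing.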
Hi @Andras, if the issue isn't the Global Sharing of the Alerts, check the macros used in the dashboards; you probably have to specify the index where the Notables are located. You can do that by opening the dashboard. Ciao. Giuseppe
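As a hedged illustration of that kind of fix (the macro name below is made up; substitute whichever macro your dashboard actually references; notable is the default Enterprise Security notable index):

== macros.conf ==
[my_notable_macro]
definition = index=notable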
Hi Team, I am a newbie in Splunk. I am using a basic IN clause in a search command to find out whether the 36 Windows server integrations are available in Splunk or not.

index=* NOT index=_* host IN ("host1", "host2", "host3", ...., "host36")
| stats count by host

It gives me the servers which are available in Splunk. I want to find both the missing ones and the available ones, in this format:

host   status
host1  Available
host2  Available
host3  Missing

I am using the below query, but it only gives 6 statistics.

| makeresults count=1
| eval all_hosts="host1,host2,host3....host36"
| makemv delim="," all_hosts
| mvexpand all_hosts
| rename all_hosts AS host
| append [ search index=* NOT index=_* host=* | stats count by host ]
| stats values(count) AS event_count by host
| where host IN ("host1", "host2",....,"host36")
| eval status=if(isnull(event_count), "Missing", "Available")
| table host status
| sort status, host

Can anyone please help me figure out what I am doing wrong? Thanks in advance!!
Make sure you are using the local/default admin account and that there are no other roles associated with that account. 
Thank you @yuanliu for your insightful response! The csv file contains 3 columns: "id", "rule", and "boolean". The "id" column is just a text string that identifies which "rule" fired. The "rule" column is a Splunk rule (e.g., "/user:" AND pwd) that either contains a Boolean operator (AND, OR, NOT) or does not. The "boolean" column just says TRUE or FALSE as to whether the preceding "rule" column contains a Boolean. I agree with you that the "map" command may not be the best command for what I am trying to do. So far this search string does generate results:

index="index1" [ inputlookup rules.csv | eval search = if(boolean="FALSE","\""+rule+"\"",rule) | return 10000 $search]
| head 5
| fields _time index
| eval time_token = "_time=" + _time , index_token = "index=" + index
| stats values(time_token) AS time_token values(index_token) AS index_token
| eval time_token=mvjoin(time_token," OR ") , index_token=mvjoin(index_token," OR ")
| append [ inputlookup rules.csv | eval rule = if(boolean="FALSE","\""+rule+"\"",rule)]
| eventstats first(time_token) AS time_token first(index_token) AS index_token
| search rule=*

It shows a "time_token" and "index_token" for each time and index that contains a match to one of the rules in the csv file. My attempt with the "map" command was to then map the rule to the event in Splunk to identify which rule fired on which event. Do you have a suggestion for something that could work better?
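For readers unfamiliar with the subsearch pattern used there: | return 10000 $search hands the values of the computed search field (up to 10000 rows, ORed together) back to the outer search, so each rule in the csv becomes one branch of the outer filter. A tiny sketch with made-up rules:

| makeresults
| eval search="\"rule one\""
| append [| makeresults | eval search="fieldA AND fieldB"]
| return 10000 $search
``` used as a subsearch, this yields roughly: ( "rule one" ) OR ( fieldA AND fieldB ) ```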
Hi @secure  How about this?

| makeresults | eval GroupA = 353649273, GroupB=353648649
| append [ | makeresults | eval GroupA = 353649184, GroupB=353648566]
| append [ | makeresults | eval GroupA = 353649091, GroupB=353616829]
| append [ | makeresults | eval GroupA = 353649033, GroupB=353638941]
| append [ | makeresults | eval GroupA = 353648797]
| append [ | makeresults | eval GroupA = 353648680]
| append [ | makeresults | eval GroupA = 353648745]
| append [ | makeresults | eval GroupA = 353648730]
| append [ | makeresults | eval GroupA = 353638941]
| fields - _time
| eventstats values(GroupB) AS GroupB
| eval match=IF(match(GroupB,GroupA),1,0)
| where match=1

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
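The key step is eventstats, which copies the complete set of GroupB values onto every row, so match() can test each GroupA value against all GroupB values regardless of which row they arrived on. A stripped-down sketch of just that mechanic (values are made up):

| makeresults | eval GroupA=111, GroupB=222
| append [| makeresults | eval GroupA=222]
| eventstats values(GroupB) AS GroupB ``` every row now carries the full GroupB set ```
| eval match=IF(match(GroupB,GroupA),1,0)

One caveat worth flagging: match() treats GroupA as a regular expression and does substring matching, so equal-length IDs, as in the data above, are the safe case.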
I have the same issue. I installed it locally on my Splunk localhost, and the "home" page is empty. Any idea?
@jialiu907  Check this   
Yes, it worked perfectly, thank you. Are you able to explain the syntax of the rex, if possible?
I already have an INGEST_EVAL in place. I only need to extract the fqdn from vs_name in order to match on it there.
Hi @jialiu907  Have a look at the below. I've suggested 2 ways you can determine your Disconnect field based on that value; is this what you're after?

| makeresults
| eval _raw="<28>1 2025-02-19T15:14:00.968210+00:00 aleoweul0169x falcon-sensor-bpf 1152 - - CrowdStrike(4): SSLSocket Disconnected from Cloud."
| rex "\)\:\s(?<Disconnect>SSLSocket Disconnected from Cloud)"
| eval Disconnect2=IF(searchmatch("SSLSocket Disconnected from Cloud"),1,0)

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
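A quick breakdown of that rex syntax: \) matches the literal closing parenthesis after CrowdStrike(4, \: matches the literal colon (the backslash there is unnecessary but harmless), \s matches the single whitespace character that follows, and (?<Disconnect>...) is a named capture group, so the text it matches, here the fixed string SSLSocket Disconnected from Cloud, is stored in a field called Disconnect.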
Hi @splunklearner  This is quite complex to achieve in props/transforms but shouldn't be impossible - let's have a go. This is what it would look like as SPL - use this to tweak your eval to match your field names and config, then apply it to the transforms as below.

| makeresults
| eval _raw="something=v-jupiter-prd-cbc-us.sony-443-ipv6"
| eval hostType=replace(_raw, ".*v\-(?<hostType>[^\.]+)\.sony.*", "\1")
| eval yourIndex=json_extract(lookup("testlookup.csv",json_object("hostType",hostType), json_array(index)),"index")
``` as one line ```
| eval yourIndexNew=json_extract(lookup("testlookup.csv",json_object("hostType",replace(_raw, ".*v\-(?<hostType>[^\.]+)\.sony.*", "\1")), json_array(index)),"index")

You will also need a lookup in $SPLUNK_HOME/system/lookups - in this example it's testlookup.csv. For the purposes of testing in SPL you can create a temporary lookup with this:

| makeresults
| eval hostType="jupiter-prd-cbc-us", index="index1"
| outputlookup testlookup.csv

Props/transforms.conf:

== props.conf ==
[yourSourcetype]
TRANSFORMS-defineIndex = defineIndex

== transforms.conf ==
[defineIndex]
INGEST_EVAL = index=json_extract(lookup("testlookup.csv",json_object("hostType",replace(_raw, ".*v\-(?<hostType>[^\.]+)\.sony.*", "\1")), json_array(index)),"index")

For more info on how the lookup function works, have a look at https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/ConditionalFunctions#lookup.28.26lt.3Blookup_table.26gt.3B.2C.26lt.3Bjson_object.26gt.3B.2C.26lt.3Bjson_array.26gt.3B.29

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
Hi, I have data in two columns and am using a third column to display the matches:

| makeresults | eval GroupA = 353649273, GroupB=353648649
| append [ | makeresults | eval GroupA = 353649184, GroupB=353648566]
| append [ | makeresults | eval GroupA = 353649091, GroupB=353616829]
| append [ | makeresults | eval GroupA = 353649033, GroupB=353638941]
| append [ | makeresults | eval GroupA = 353648797]
| append [ | makeresults | eval GroupA = 353648680]
| append [ | makeresults | eval GroupA = 353648745]
| append [ | makeresults | eval GroupA = 353648730]
| append [ | makeresults | eval GroupA = 353638941]
| fields - _time
| foreach GroupA [eval match=if(GroupA=GroupB,GroupA ,NULL)]
| stats values(GroupA) values(GroupB) values(match)

However, nothing is displayed in values(match). Is there something wrong with the logic, or is there an alternate way to do it?
I am looking to extract this section of an event and have it as a field that I can manipulate. I am unfamiliar with regex and I am getting the wrong results.

Events:

<28>1 2025-02-19T15:14:00.968210+00:00 aleoweul0169x falcon-sensor-bpf 1152 - - CrowdStrike(4): SSLSocket Disconnected from Cloud.
<30>1 2025-02-19T15:14:16.104202+00:00 aleoweul0169x falcon-sensor-bpf 1152 - - CrowdStrike(4): SSLSocket connected successfully to ts01-lanner-lion.cloudsink.net:443

I am looking to have a field called Disconnect based on "SSLSocket Disconnected from Cloud".
I want to extract a value from the following field while indexing the data, to use for mapping to an index.

vs_name=v-jupiter-prd-cbc-us.sony-443-ipv6

I want to extract everything after v- up to and including sony, i.e., jupiter-prd-cbc-us.sony, as fqdn, so that this fqdn can be checked against a lookup to map the event to the correct index. Please help me with the props and transforms to extract fqdn correctly.
The reason I want to revert is this known issue:

2024-12-03 PSAAS-20901 supervisord failing to start on warm standby instance
https://docs.splunk.com/Documentation/SOARonprem/6.3.1/ReleaseNotes/KnownIssues

When SOAR needs to be restarted on our warm standby, it fails to start because supervisord can't start. The only workaround I've been able to find is disabling the warm standby so it becomes a primary, then restarting SOAR, after which I set the server back to being the warm standby.
Thanks for the reply. I understand that the error is due to there being no results, but that is exactly what I require: that it does not throw an error when there are no results. When saving my correlation search it always throws an error and never completes a search. Is there any way to avoid this?
Thank you for your reply. I need a completely different data source for the Table depending on the dropdown selection. If the value selected in the dropdown is "caddy", set the Table data source to "ds_EHYzbg0g"; if the value is "nginx", set the Table data source to "ds_8xyubP1c":

"ds_EHYzbg0g": {
    "type": "ds.search",
    "options": {
        "query": "host=\"$select_hosts$\" program=\"$select_program$\" priority=\"$select_log_leel$\" | fields host,program,sourceip"
    },
    "name": "logs_program_caddy"
}
Hi @livehybrid, I've come to find out that monitoring the search itself is all I was able to find in the logs. I cannot seem to find a trace of an API sync or an API pull. I'm sure it exists, but I can't find anything in the _internal index related to it, and looking there was also what our technical representative suggested. I'll mark "monitor the sync" as the solution as an alternative. Thanks!
Hi @gcusello! Also interesting: the alerts in the index seem good, but the loading of the events in the Events dashboard never finishes.