Hi @Ashish0405,
first of all, you don't need dedup before stats:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats count by Device_name src_ip state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F"), secondarycolor=primarycolor

Then, what do you mean by flap time? If you mean the time borders of your search, you can use the addinfo command (https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Addinfo), which adds the info_min_time and info_max_time fields containing those borders:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats count by Device_name src_ip state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F"), secondarycolor=primarycolor
| addinfo
| table Device_name src_ip state_to count primarycolor secondarycolor info_min_time info_max_time

Ciao.
Giuseppe
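If you also want those epoch values in a human-readable form on the panel, here is a minimal follow-up sketch (assuming the same query as above; addinfo and strftime are standard SPL, the format string is just an example):

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats count by Device_name src_ip state_to
| addinfo
| eval search_start=strftime(info_min_time, "%Y-%m-%d %H:%M:%S")
| eval search_end=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| table Device_name src_ip state_to count search_start search_end

strftime converts the epoch timestamps returned by addinfo into readable date/time strings, so they can be shown directly next to the count.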
OK. How did you edit the data model? The normal way, from the Web UI? Or did you fiddle with the JSON files directly?
Yes, thank you for these details. I guess that if I sort the output by time (| sort _time), the results will be rearranged by date and time, am I correct? If sorting by _time rearranges the data, then the latest result will be either Up or Down, and the aim is achieved.
As far as I remember (that was some time ago), it happened when users' roles allowed them to grant roles but had no list of grantable roles specified whatsoever. So you have to make sure that there is at least one role listed in grantableRoles for those users.
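As a rough illustration, a minimal authorize.conf sketch, assuming a custom role named user_admin that should be allowed to grant the built-in user role (the stanza and role names are just placeholders):

[role_user_admin]
grantableRoles = user

After changing authorize.conf you typically need to restart Splunk (or reload the authorization configuration) for the setting to take effect.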
Would anyone be able to help me with one more thing, please? I have a Number display dashboard which represents the BGP flap details as Device_name and BGP peer IP; however, I cannot add the time when the BGP flapped to the Number display.

Current query:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| dedup Device_name,src_ip
| stats count by Device_name,src_ip,state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F")
| eval secondarycolor=primarycolor

Is there something we can add to display the flap time in the same Number display?
I have just added a field in a data model which I need to use in my searches. This field is replicated to 2 SHs, but on 1 SH the field is not available. Cluster health is OK, and if I change anything in dashboards or correlation searches it is reflected on all SHs.
Hi @PickleRick, I am facing a similar issue. What exactly do I need to do? Shall I set grantableRoles = admin in the authorize.conf file?
This one? https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Enterprise-upgrade-to-9-1-0-1-all-users-disappeared/m-p/650560/highlight/true#M16825
REPORT and EXTRACT are search-time settings (they define what is done when Splunk fetches indexed data from the indexers), so configuring them on a HF is pointless. If you really want to use index-time extraction to create an indexed field (which might not be the best idea), you should use TRANSFORMS. BTW, you shouldn't use SHOULD_LINEMERGE=true (it's meant for very rare border cases and it incurs a big performance penalty). And your data looks as if it needs some external preprocessing step to extract the JSON object from within the log field string.
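For reference, a minimal index-time extraction sketch, assuming a sourcetype called my_sourcetype and an indexed field called request_id (all names and the regex here are placeholders, not taken from your data):

props.conf (on the HF, or wherever parsing happens first):
[my_sourcetype]
SHOULD_LINEMERGE = false
TRANSFORMS-add_request_id = add_request_id

transforms.conf:
[add_request_id]
REGEX = request_id=(\w+)
FORMAT = request_id::$1
WRITE_META = true

fields.conf (on the search heads, so the field is searched as an indexed field):
[request_id]
INDEXED = true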
I think you took a little shortcut here. Data received on the /event HEC endpoint is normally parsed and processed; it's just that line breaking is skipped (because we're explicitly receiving data already split into single events) and, by default, time parsing is skipped. Other than that, normal index-time operations are performed.
Hello Splunkers!! I have reassigned all the knowledge objects of 5 users to the admin user. After that, those users are no longer visible in the user list when I log in as the admin user. Please help me identify the root cause and fix this. Thanks in advance.
Again - it might be expected, but is it the correct result? Consider this run-anywhere example search:

| makeresults format=csv data="a,b,c
2,3,3
1,2,3
2,2,2
1,3,2"
| dedup c
| stats count by a b c

Run it and write down the results. Now run the same search, but with the mock-up input data in a different order:

| makeresults format=csv data="a,b,c
1,2,3
2,3,3
1,3,2
2,2,2"
| dedup c
| stats count by a b c

As you can see, the data you're operating on is the same, just in a different order, and the results are completely different. So you might want to rethink your search logic.
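If the goal in your case is simply to keep the most recent state per device and peer, one order-independent alternative is to drop dedup and let stats pick the chronologically latest values. A sketch, assuming the field names from your earlier query:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats latest(state_to) as state_to latest(_time) as last_flap count by Device_name src_ip
| eval last_flap=strftime(last_flap, "%Y-%m-%d %H:%M:%S")

latest() is tied to event time rather than to the order in which events happen to arrive, so the result does not change when the input order changes.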
Thank you !!! it worked !
What product/service are you talking about? Splunk Enterprise doesn't have the settings you describe. Is it Observability?
Wow, the expected result popped up!!! Thanks!!! I will do some testing.
For duration? I'm all for strftime for formatting points in time, but for longer durations you'll get strange results (a duration of "year 1971"?). Also, timezone settings can wreak havoc with the accuracy of the results.
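For durations specifically, tostring() with the "duration" option avoids both problems; a small run-anywhere sketch (the 98765-second value is just an example):

| makeresults
| eval duration_s = 98765
| eval duration_hms = tostring(duration_s, "duration")

This renders the number of seconds as days+HH:MM:SS without treating it as an epoch timestamp, so neither the 1970 epoch nor the timezone comes into play.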
Will the patched version of the MLTK work with ES 7.3.2?   https://advisory.splunk.com/advisories/SVD-2024-1102
Do you mean you want to concatenate host values from all events collectively, not just from each individual event?  If that's all you want, you can do

<your_search>
| stats values(host) AS host
| eval newfield = mvjoin(host, ",")

If you want a new field alongside other fields in events, use eventstats instead of stats:

<your_search>
| eventstats values(host) AS newfield
| eval newfield = mvjoin(newfield, ",")
Like @gcusello says, matching backslash is tricky.  This is because backslash is used as an escape character so special characters can be used as literals.  This applies to backslash itself as well.  This needs to be taken into consideration whenever an interpreter/compiler uses backslash as an escape character.

When you run rex (or any function that uses regex) in a search command, two interpreters act on the string in between double quotes: the regex engine and the SPL interpreter.  As such, to match two consecutive backslashes, you need 8 backslashes instead of 4.  Try this:

| makeresults format=csv data="myregex
C:\\\\Windows\\\\System32\\\\test\\\\
C:\\\\\\\\Windows\\\\\\\\System32\\\\\\\\test\\\\\\\\"
| eval parent = "C:\\\\Windows\\\\System32\\\\test\\\\"
| eval match_or_not = if(match(parent, myregex), "yes", "no")

The result is:

match_or_not   myregex                                   parent
no             C:\\Windows\\System32\\test\\             C:\\Windows\\System32\\test\\
yes            C:\\\\Windows\\\\System32\\\\test\\\\     C:\\Windows\\System32\\test\\

This test illustrates the same thing:

| makeresults format=csv data="parent
C:\\\\Windows\\\\System32\\\\test\\\\"
| eval match_or_not1 = if(match(parent, "C:\\\\\\\\Windows\\\\\\\\System32\\\\\\\\test\\\\\\\\"), "yes", "no")
| eval match_or_not2 = if(match(parent, "C:\\\\Windows\\\\System32\\\\test\\\\"), "yes", "no")

match_or_not1   match_or_not2   parent
yes             no              C:\\Windows\\System32\\test\\

If you look around, SPL is not the only interpreter that interprets strings in between double quotes.  For example, in order to produce your test string "C:\\Windows\\System32\\test\\" using the echo command in a shell, you use

% echo "C:\\\\\\Windows\\\\\\System32\\\\\\\test\\\\\\"   # ^6x ^6x ^7x ^6x
C:\\Windows\\System32\\test\\

I will leave it as homework to figure out why one segment needs 7 backslashes.