All Posts



@oneemailall wrote: I am still trying to figure out why your solution works.

Hi @oneemailall .. please note that in my reply I said you would need to fine-tune this further, and it's nice to know the other reply works exactly as you expect. Since you are trying to figure out why that solution works, let me try to explain:

| eval type = split(Badge, "_") ``` splitting the "Badge" field on the underscore gives you the "type" of the badge ```
| eval level = mvfind(mvappend("Novice", "Capable", "Expert"), mvindex(type, -1)) + 1 ``` mvappend and mvindex are multivalue eval functions; understanding them takes a while, so please check the docs: https://docs.splunk.com/Documentation/SCS/current/SearchReference/MultivalueEvalFunctions ```
| fillnull level
| eval type = mvindex(type, -2)
| eval expire_ts = strptime(ExpireDate, "%m/%d/%y") ``` to sort on ExpireDate, you first need to convert it to epoch time format ```
| sort - level, expire_ts, + "Last name" "First name"
| dedup Domain, "First name", "Last name", Email, type ``` with sorting and dedup done, you can table the output with the command below ```
| table Domain, "First name", "Last name", Email, Badge, ExpireDate
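To see why the sort-then-dedup sequence keeps only the highest badge per type, here is a rough Python sketch of the same logic (this is not SPL; the sample rows and email address are made up for illustration):

```python
from datetime import datetime

# Hypothetical sample rows mirroring the lookup fields
rows = [
    {"Email": "a@x.com", "Badge": "Sell_Novice",    "ExpireDate": "10/03/24"},
    {"Email": "a@x.com", "Badge": "Sell_Expert",    "ExpireDate": "09/05/24"},
    {"Email": "a@x.com", "Badge": "Deploy_Capable", "ExpireDate": "08/01/24"},
]

LEVELS = ["Novice", "Capable", "Expert"]        # mvappend("Novice", "Capable", "Expert")

for r in rows:
    parts = r["Badge"].split("_")               # eval type = split(Badge, "_")
    r["type"] = parts[-2]                       # eval type = mvindex(type, -2)
    r["level"] = LEVELS.index(parts[-1]) + 1    # mvfind(..., mvindex(type, -1)) + 1
    r["expire_ts"] = datetime.strptime(r["ExpireDate"], "%m/%d/%y").timestamp()

# sort - level, expire_ts  (both descending)
rows.sort(key=lambda r: (-r["level"], -r["expire_ts"]))

# dedup keeps the FIRST row seen for each (Email, type) pair,
# i.e. the highest level and latest expiry
seen, deduped = set(), []
for r in rows:
    key = (r["Email"], r["type"])
    if key not in seen:
        seen.add(key)
        deduped.append(r)
```

Because the rows are sorted by level (descending) before the dedup pass, the first, and only surviving, entry per badge type is the highest-level one.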
Pro tip: post data and output as text; it is much easier for volunteers. So, the fields do NOT have the value "True" as your mock code implied; they have the value "true". If you haven't come across this yet: Splunk stores most data as tokenized strings and numeric values, and string comparisons are case-sensitive. Have you tried this?

| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval('FieldA'="true")) as FieldA_True, count(eval('FieldB'="true")) as FieldB_True, count(eval('FieldC'="true")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True
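If it helps to see the stats logic outside SPL, here is a small Python sketch of the per-week counting (the events, dates, and field values are hypothetical):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical events; note the lowercase "true", matching the stored strings
events = [
    {"_time": datetime(2023, 10, 2), "FieldA": "true",  "FieldB": "false"},
    {"_time": datetime(2023, 10, 3), "FieldA": "true",  "FieldB": "true"},
    {"_time": datetime(2023, 10, 9), "FieldA": "false", "FieldB": "true"},
]

counts = defaultdict(lambda: {"FieldA_True": 0, "FieldB_True": 0})
for e in events:
    ycw = e["_time"].strftime("%Y_%U")          # eval ycw = strftime(_time, "%Y_%U")
    for f in ("FieldA", "FieldB"):
        if e[f] == "true":                      # count(eval('FieldX'="true"))
            counts[ycw][f + "_True"] += 1
```

Once you have the counts per week, a percentage is just each count divided by the week's event total.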
Hi yuanliu, Thank you for answering my query.  I am still trying to figure out why your solution works.  I was able to modify it as you suggested to get the report I needed. I hope I correctly gave you credit for the solution.  I hope you not only get karma points in this community but also good karma points in life.  Cheers. 
Hi inventsekar, I sincerely appreciate you spending time to solve my issue. I tested your suggestion, but still ended up with duplicate entries for people with the same type of badge (Sell or Deploy). For example, Brandy Duggan should have only one entry, the highest level of badge type "Sell": Brandy Duggan, Sell_Expert, 9/5/24. The results should look similar to this:

Domain, First name, Last name, Email, Badge, ExpireDate
mno.com, lisa edwards, lisa.edwards@mno.com, Sell_Expert, 12/6/23 (only show highest level badge of type "Sell")
mno.com, lisa edwards, lisa.edwards@mno.com, Deploy_Capable, 8/1/24 (only show highest level badge of type "Deploy")
abc.com, allen anderson, allen.anderson@abc.com, Sell_Novice, 10/3/24 (allen anderson renewed his badge and the expiry date is updated to reflect that)
def.com, andy braden, andy.braden@def.com, Deploy_Capable, 1/3/24
ghi.com, bill connors, bill.connors@ghi.com, Sell_Novice, 10/17/23
jkl.com, brandy duggan, brandy.duggan@jkl.com, Sell_Expert, 9/5/24

Thank you again for helping me. Cheers.
Sorry, I have now posted the correct result (please ignore the previous one).
Thanks for the hint. Attached is the result I get, but I want the total count of all TRUE cases per calendar week, both as a number and as a percentage (I don't want FALSE in the result). The attached .xls shows what I am looking for, with example numbers.
Hi all, I have combined lookup data with a field containing various values like aaa, acc, aan, and more. I'm looking to find a single value for 'aan' from the 'source' field, specifically when 'source' has ss Ann or css. Could you please help me construct the correct Splunk query for this?
Hi @vijreddy30 ... From your question, what I understood is that in Zone 1 you have an indexer and a search head on a single Windows system, and Zone 2 is the same.

>>> As per requirement need to be implement High availability servers Zone1 and Zone2.

As per my understanding, by "high availability" you mean that the UF agents should be able to send logs to both indexers or to any one of them, so you will not miss any logs at all. (Strictly speaking, this is not high availability; it is load balancing.) Please correct me if this understanding is wrong. If you could provide some more details about the requirements, we could help you better. Thanks.
Hi, I am not sure how the two where commands are working in your SPL, but the second argument to mvindex must be a number.

mvindex(<mv>, <start>, <end>)

This function returns a subset of the multivalue field using the start and end index values. Usage: the <mv> argument must be a multivalue field; the <start> and <end> indexes must be numbers; <mv> and <start> are required; <end> is optional.

https://docs.splunk.com/Documentation/SCS/current/SearchReference/MultivalueEvalFunctions
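For anyone wondering how the numeric indexes behave: mvindex works much like Python list indexing, including negative indexes. A rough Python emulation, for illustration only:

```python
def mvindex(mv, start, end=None):
    """Rough Python emulation of Splunk's mvindex(); note that in SPL the
    <end> index is inclusive, unlike Python slicing."""
    if end is None:
        try:
            return mv[start]        # a single numeric index; negatives allowed
        except IndexError:
            return None             # out-of-range returns null in SPL
    return mv[start:end + 1] or None
```

So mvindex(type, -1) picks the last value and mvindex(type, -2) the one before it; passing a string where a number is expected is what triggers the "invalid arguments" error.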
I suppose it depends on what you mean by "high availability".  In my book, Splunk doesn't do HA, but I come from a fault-tolerant computing background. The closest you'll get requires search head and indexer clusters, which is a bit more of an investment (both in servers and in management) than single instance Splunk servers.  Note that Splunk does not support HA for forwarders, Deployment Servers, or SHC Deployers.  See https://docs.splunk.com/Documentation/Splunk/9.1.1/Deploy/Useclusters and https://docs.splunk.com/Documentation/Splunk/9.1.1/Deploy/Indexercluster for more information.  
The second argument to mvindex must be an integer. I think perhaps you want something like this:

| where (mvindex(description, mvfind(description,"User login to Okta")) == 0)

or, even better:

| where (isnotnull(mvfind(description, "User login to Okta")))
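To illustrate why the isnotnull(mvfind(...)) form works as a presence test, here is a rough Python emulation of mvfind (illustrative only):

```python
import re

def mvfind(mv, pattern):
    """Return the index of the first value matching the regex, or None
    (SPL's null) when nothing matches."""
    for i, v in enumerate(mv):
        if re.search(pattern, v):
            return i
    return None

# With only one value left after mvcount(description) == 1, a non-null
# result means that remaining value is "User login to Okta"
desc = ["User login to Okta"]
```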
I am trying to create an alert that triggers if a user successfully logs in without first having been successfully authenticated via MFA. The query is below:

index="okta" sourcetype="OktaIM2:log" outcome.result=SUCCESS description="User login to Okta" OR description="Authentication of user via MFA"
| transaction maxspan=1h actor.alternateId, src_ip
| where (mvcount(description) == 1)
| where (mvindex(description, "User login to Okta") == 0)

I keep getting the error:

Error in 'where' command: The arguments to the 'mvindex' function are invalid.

Please help me correct my search and explain what I am doing wrong.
Hi All,

Currently in Zone 1 (Development) we have an HF and a single-instance Search Head + Indexer, plus a QA HF and a Deployment Server; Zone 2 has the same servers. We don't have a Cluster Master, and everything runs on Windows systems.

As per requirements, we need to implement high availability for the servers in Zone 1 and Zone 2. Please send me the steps to implement high-availability servers.

Regards, Vijay
Not completely impossible.  But before discussing workarounds, I have the same question as @PickleRick does: Why?  Are they the same events (with the same timestamp, etc.)?  Does the CSV even represent time series events?  If they are the same events but with updates, why not delete previously loaded events before upload?  I use CSV upload regularly.  Each contains different events.  Even so, I name files differently in part for peace of mind.
As @PickleRick said, Splunk does not mimic a modern spreadsheet's visualization. The forte of Splunk is turning unstructured data into relational tables. Every grid in Splunk is fully rendered, text alignment is not articulated, and cell coloring is generally unsupported. Within these constraints, you can design your own visual vocabulary to render the cells with various elements; for example, your spreadsheet visualization can be simulated with the search below. Note that your illustrated Standby count of 250 is the sum of url and cleared_log, not the difference as you formulated. I suspect that this is intended, so I added an additional visual element under breakdowns to highlight url - cleared_log.

| tstats count as App_logs where index=app-logs TERM(Application) TERM(logs) TERM(received)
| appendcols [| tstats count as Exception_logs where index=app-logs TERM(Exception) TERM(logs) TERM(received)]
| appendcols [| tstats count as Canceled_logs where index=app-logs TERM(unpassed) TERM(logs) TERM(received)]
| appendcols [| tstats count as 401_mess_logs where index=app-logs TERM(401) TERM(error) TERM(message)]
| eval mess_type = "Error count", count = App_logs + Exception_logs + Canceled_logs + '401_mess_logs'
| eval breakdowns = mvappend("App_logs: " . App_logs, "Exception_logs: " . Exception_logs, "Canceled_logs: " . Canceled_logs, "401_mess_logs: " . '401_mess_logs')
| fields - *_logs
| append [| tstats count as url where index=app-logs TERM(url) TERM(info) TERM(staged)
    | appendcols [| tstats count as cleared_log where index=app-logs TERM(Filtered) TERM(logs) TERM(arranged)]
    | eval mess_type = "Standby count", count = url + cleared_log
    | eval breakdowns = mvappend("url: " . url, "cleared_log: " . cleared_log, ":standby: " . (url - cleared_log))
    | fields - url cleared_log]
| addcoltotals labelfield=mess_type label="Total mess"
| table mess_type count breakdowns

Note: I did not change your tstats searches.
If the TERM combinations give you the correct counts, great.  If not, you may need to use index searches.  In that scenario, append and appendcols are so inefficient you will need to use other methods to get individual counts.  But the visual tweaks remain the same. Hope this helps.
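The breakdowns column above is just a multivalue list of "name: count" strings plus a computed total; in Python terms (with hypothetical counts standing in for the tstats results):

```python
# Hypothetical per-source counts standing in for the tstats results
counts = {"App_logs": 100, "Exception_logs": 100,
          "Canceled_logs": 25, "401_mess_logs": 25}

total = sum(counts.values())                                  # eval count = A + B + C + D
breakdowns = [f"{name}: {n}" for name, n in counts.items()]   # mvappend("name: " . value, ...)
```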
My suggestion would be don't do it! Questions to consider: Would these SPL searches all be run as part of one search? - Consider using the map command Would these SPL searches require separate rep... See more...
My suggestion would be: don't do it! Questions to consider:

- Would these SPL searches all be run as part of one search? (Consider using the map command.)
- Would these SPL searches require separate report outputs or dashboard panels?
- How would you expect it to behave if there was an error in one of the SPL searches?
- Would there be a fixed / known number of entries in the lookup file?
- Are the SPL entries complete searches or parts to be inserted into a larger search?
Try this way round | chart count by income | eval sort_order=case(income=="$24,000 and under",1,income=="$25,000 - $39,999",2,income=="$40,000 - $79,999",3,income=="$80,000 - $119,999",4,income=="$1... See more...
Try this way round:

| chart count by income
| eval sort_order=case(income=="$24,000 and under",1,income=="$25,000 - $39,999",2,income=="$40,000 - $79,999",3,income=="$80,000 - $119,999",4,income=="$120,000 - $199,999",5,income=="$200,000 or more",6)
| sort sort_order
| fields - sort_order
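The idea, sketched in Python with the same income brackets, is to map each label to an explicit sort key, sort on it, and then discard it:

```python
order = {
    "$24,000 and under":   1,
    "$25,000 - $39,999":   2,
    "$40,000 - $79,999":   3,
    "$80,000 - $119,999":  4,
    "$120,000 - $199,999": 5,
    "$200,000 or more":    6,
}

# Hypothetical (income, count) rows out of the chart command
rows = [("$200,000 or more", 4), ("$24,000 and under", 10), ("$40,000 - $79,999", 7)]
rows.sort(key=lambda r: order[r[0]])    # | sort sort_order; | fields - sort_order
```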
Hi @PReynoldsBitsIO, if the income field has fixed values (as it seems), you could use something like this:

<your_search>
| eval income=case(income="$24,000 and under","1$24,000 and under", income="$25,000 - $39,999","2$25,000 - $39,999", income="$40,000 - $79,999","3$40,000 - $79,999", income="$80,000 - $119,999","4$80,000 - $119,999", income="$120,000 - $199,999","5$120,000 - $199,999", income="$200,000 or more","6$200,000 or more")
| chart count by income
| rename "1$24,000 and under" AS "$24,000 and under" "2$25,000 - $39,999" AS "$25,000 - $39,999" "3$40,000 - $79,999" AS "$40,000 - $79,999" "4$80,000 - $119,999" AS "$80,000 - $119,999" "5$120,000 - $199,999" AS "$120,000 - $199,999" "6$200,000 or more" AS "$200,000 or more"

Ciao. Giuseppe
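This variant relies on a lexically sortable prefix that is stripped afterwards (the rename step); a Python sketch using three of the brackets:

```python
labels = ["$40,000 - $79,999", "$24,000 and under", "$200,000 or more"]
prefix = {"$24,000 and under": "1",
          "$40,000 - $79,999": "3",
          "$200,000 or more":  "6"}

tagged = sorted(prefix[l] + l for l in labels)  # "1$24,000 and under" sorts first
restored = [t[1:] for t in tagged]              # strip the digit, like the rename
```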
| makeresults format=csv data="group,log,count
Error,App logs,100
Error,Exception logs,100
Error,Cancelled logs,25
Error,401 mess logs,25
Stand by,url,150
Stand by,cleared log,100"
``` The previous lines set up some sample data in line with your image ```
| appendpipe [| stats sum(count) as total by group]
| sort 0 -group count log
| addcoltotals labelfield=group
| eval count=coalesce(count,total)
| eval summary=if(isnull(log),group." count",null())
| eval group=if(isnull(log),null(),group)
| reverse
| table summary total group log count
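The appendpipe + addcoltotals combination boils down to per-group subtotals plus a grand total; here is the same arithmetic on the sample data in Python:

```python
# Same sample data as the makeresults block
data = [
    ("Error", "App logs", 100), ("Error", "Exception logs", 100),
    ("Error", "Cancelled logs", 25), ("Error", "401 mess logs", 25),
    ("Stand by", "url", 150), ("Stand by", "cleared log", 100),
]

subtotals = {}
for group, _log, count in data:         # appendpipe [| stats sum(count) as total by group]
    subtotals[group] = subtotals.get(group, 0) + count

grand_total = sum(subtotals.values())   # addcoltotals
```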
Please raise a new question detailing your input events (with examples), the expected results, and the logic used to get the expected results.