Try without the penultimate search |search MOP=* It isn't necessary, as stats by MOP effectively does the same thing, i.e. you will only get stats for non-null values of MOP, which is what the search is doing as well.
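In other words, since stats ... by MOP only produces rows for non-null values of MOP, the tail of the query can safely drop the extra search. A sketch using the field names from this thread:

```
| stats sum(Total_Volume) as Total_Volume sum(Total_Value_USD) as "Total_Value(mn)" by MOP
| table MOP Total_Volume "Total_Value(mn)"
```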
Can you please let me know the TIME_PREFIX & TIME_FORMAT for the below log type?

00:0009:00000:00000:2024/04/12 12:14:02.34 kernel extended error information
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel  read returning -1 state 1
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel nrpacket: recv, Connection timed out, spid: 501, suid: 84
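For this format, a hedged sketch of what the props.conf settings might look like; the sourcetype stanza name is a placeholder, and the prefix regex and the %2N subsecond specifier are assumptions that should be verified against the actual events:

```
# props.conf (sketch -- verify against your data)
[your_sourcetype]
TIME_PREFIX = ^\d{2}:\d{4}:\d{5}:\d{5}:
TIME_FORMAT = %Y/%m/%d %H:%M:%S.%2N
MAX_TIMESTAMP_LOOKAHEAD = 25
```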
I mean to say that I am not getting the required result. You can find the same in the attached snippet.
It isn't clear what you mean here by "% of the total average". Do you mean the percentage of the total for that host that the count represents, or the percentage of the grand total for that host? Since you have also used timechart, you could also mean the percentage of the total for the time bin that the count for the host represents. It is probably best to work out what you are trying to show in your table/chart to clarify what the required calculation is.
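For example, one of those interpretations (the percentage of the grand total that each host's count represents) could be sketched like this; the field names are assumptions:

```
| stats count by host
| eventstats sum(count) as grand_total
| eval percent=round(100*count/grand_total, 2)
```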
Your makeresults isn't valid SPL, so it is still a little unclear what you are working with. Having said that, if your makeresults has two fields, a key field and an expected-results field, you could append your makeresults to your actual results, then use stats to combine the events by their key values, and then compare whether they are different.
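A minimal sketch of that append-then-stats pattern, using the item names from this thread; the index, source query, and key field are placeholders:

```
index="my_index_1" query
| stats values(actualResults) as actualResults by key
| append
    [| makeresults
     | eval key="k1", expectedResults="My Item 1"]
| stats values(actualResults) as actualResults values(expectedResults) as expectedResults by key
| eval matches=if(actualResults == expectedResults, "True", "False")
```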
That is amazing, thank you. I am new to the Splunk world, as you can see. How about a field next to each host that calculates the % of the total average per count?
| bin _time span=1m
| stats count by _time backend_service_url status_code
| eval {status_code}=count
| fields - status_code count
| stats values(*) as * by _time backend_service_url
The base search is a hard-coded list of known values using makeresults, so I could certainly add a key (and it could match the field name being returned in the query).

| makeresults
| eval expectedResults=actualResults="My Item 1", actualResults="My Item 2", actualResults="My Item 3"
| makemv delim="," expectedResults
| mvexpand expectedResults
| table expectedResults

I'm not concerned about a sort order, except maybe when I do a final presentation of the data. It's more about determining which values returned in the query match (or don't match) the values in the base list.
Hi @splunkettes, I'm a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi, I have the following fields in logs on my proxy for backend services:

_time -> timestamp
status_code -> http status code
backend_service_url -> app it is proxying

What I want to do is aggregate status codes by the minute per URL for each status code. So sample output would look like:

time   backend-service  Status code 200  Status code 201  Status code 202
10:00  app1.com         10                                2
10:01  app1.com                          10
10:01  app2.com         10

Columns would be dynamic based on the available status codes in the timeframe I am searching. I found lots of questions on aggregating all 200's into 2xx, or total counts by URL, but not this. Appreciate any suggestions on how to do this. Thanks!
You possibly need to expand on your use case. Does your "base search" return your expected results in a particular order, and do they have a key field which can be correlated against your actual results? Also, bear in mind that stats values() returns a multivalue field in deduplicated, sorted order, which may not necessarily be the same order as your base search.
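A quick way to see that behaviour, using nothing beyond makeresults and stats:

```
| makeresults
| eval x=split("b,a,c,b", ",")
| mvexpand x
| stats values(x) as x
```

This returns a single multivalue field containing a, b, c: deduplicated and lexicographically sorted, regardless of the original order.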
What do you mean by "breaking down"?
I have a dashboard where I want to report whether each value of the results of a query matches a value in a fixed list. I have a base search that produces the fixed list:

<search id="expectedResults">
  <query>
    | makeresults
    | eval expectedResults="My Item 1", "My Item 2", "My Item 3"
    | makemv delim="," expectedResults
    | mvexpand expectedResults
    | table expectedResults
  </query>
  <done>
    <set token="expectedResults">$result.expectedResults$</set>
  </done>
</search>

Then I have multiple panels that will get results from different sources, pseudo-coded here:

index="my_index_1" query
| table actualResults
| stats values(actualResults) as actualResults

Assume that the query returns "My Item 1" and "My Item 2". I am not sure how to compare the values returned from my query against the base list, to give something that reports whether it matches each value:

My Item 1  True
My Item 2  True
My Item 3  False
I am not sure I understand where the tokens are being set and being used. Can you not just remove Output=$form.output$ from the search for the panel where it isn't available?
Hi Splunkers, I am facing a weird issue with the addcoltotals command. It works perfectly fine if I open a new search tab, but once I add the same query to a dashboard it is breaking down. I am trying to run the command in Splunk DB Connect. Below is the query for reference:

index=db_connect_dev_data
| rename PROCESS_DT as Date
| table OFFICE, Date, MOP, Total_Volume, Total_Value
| search OFFICE=GB1
| eval _time=strptime(Date,"%Y-%m-%d")
| addinfo
| eval info_min_time=info_min_time-3600, info_max_time=info_max_time-3600
| where _time>=info_min_time AND _time<=info_max_time
| table Date, MOP, OFFICE, Total_Volume, Total_Value
| addcoltotals "Total_Volume" "Total_Value" label=Total_GB1 labelfield=MOP
| filldown
| eval Total_Value_USD=Total_Value/1000000
| eval Total_Value_USD=round(Total_Value_USD,5)
| stats sum(Total_Volume) as "Total_Volume", sum("Total_Value_USD") as "Total_Value(mn)" by MOP
| search MOP=*
| table MOP, Total_Volume, "Total_Value(mn)"

Let me know if anyone knows why it is happening.
You could try something like this:

| appendpipe
    [| stats avg(*) as average_*]
| addcoltotals
| foreach average_*
    [| eval <<MATCHSEG1>>=if(isnull(<<MATCHSEG1>>),<<FIELD>>,<<MATCHSEG1>>)]
| fields - average_*
Hello guys, I'm currently trying to set up Splunk Enterprise in a cluster architecture (3 search heads and 3 indexers) on Kubernetes using the official Splunk Operator and the Splunk Enterprise Helm chart. In my case, what is the recommended way to set the initial admin credentials? Do I have to access every instance, define a "user-seed.conf" file under $SPLUNK_HOME/etc/system/local, and then restart the instance, or is there an automated way to set the password across all instances by leveraging the Helm chart?
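One commonly used approach with the Splunk Operator is to pre-create the namespace-scoped global secret before deploying, so every instance picks up the same admin password. The secret name pattern and key below follow the operator's convention, but should be checked against the documentation for your operator version:

```
# Sketch: pre-create the global secret the Splunk Operator reads
kubectl create secret generic splunk-<namespace>-secret \
  --namespace <namespace> \
  --from-literal='password=<your-admin-password>'
```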
Hi Team, please help with the above question. Thanks!
Hello Team, I have a parent dashboard with 5 panels. These are linked to one child dashboard; based on the token-passing filter, the data changes. However, I notice that for one panel there is no field called Output, due to which I get "no results found". Is there a logic to remove this token passed from the code?

| search $form.app_tkn$ Category="A event" Type=$form.eventType$ Output=$form.output$
index=mainframe sourcetype=BMC:DEFENDER:RACF:bryslog host=s0900d OR host=s0700d
| timechart limit=50 count(event) BY host
| addcoltotals

I am looking to add the AVG from each 1-week total for each day.