All Posts


Hi, something like:

TIME_PREFIX = ^\d+:\d+:\d+:\d+:
TIME_FORMAT = %Y/%m/%d %H:%M:%S.%2Q

r. Ismo
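For context, a complete props.conf stanza applying these settings might look like the sketch below (the sourcetype name and the lookahead value are assumptions, not from the original post):

```
[sybase:errorlog]
TIME_PREFIX = ^\d+:\d+:\d+:\d+:
TIME_FORMAT = %Y/%m/%d %H:%M:%S.%2Q
MAX_TIMESTAMP_LOOKAHEAD = 30
```

The TIME_PREFIX regex skips the four colon-separated numeric fields at the start of each line, so timestamp extraction begins at the date itself.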
I have a cloud-based server sending events to the Indexer over my WAN link via HTTP Event Collector (HEC). We have limited bandwidth on the WAN link. I want to blacklist a number of event codes to reduce the transfer of log data over the WAN.

Q: Does a blacklist in inputs.conf for the HEC input filter the events at the indexer, or does it stop those events from being transferred at the source?

Q: If I install a Universal Forwarder, am I able to stop the blacklisted events from being sent across the WAN?
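As a rough illustration of the difference, assuming these are Windows event codes: a blacklist on a Windows Event Log input is evaluated by the Universal Forwarder itself, so matching events never cross the WAN, whereas data arriving via HEC can only be discarded on the receiving side (e.g. routed to nullQueue at index time), after it has already used WAN bandwidth. A sketch, where the stanza names and event codes are made-up examples:

```
# inputs.conf on the Universal Forwarder: drop noisy event codes at the source
[WinEventLog://Security]
blacklist1 = EventCode="4662|5156"

# props.conf on the indexer: discard matching events that arrive via HEC
[my:hec:sourcetype]
TRANSFORMS-drop_noise = drop_noisy_eventcodes

# transforms.conf on the indexer: route matches to nullQueue (never indexed)
[drop_noisy_eventcodes]
REGEX = EventCode=(4662|5156)
DEST_KEY = queue
FORMAT = nullQueue
```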
Not really. What does 4880961 (75%) mean? 4880961 isn't 75% of any of your other figures.
Splunk does not have an out-of-the-box REST input. You have to install an app for that. There are a number of apps on Splunkbase that support REST, so perhaps you can use one of them as a model for building your own app for the API in question.
This is what I am looking for (well, maybe):

Date                 S0100D           S0400D
Friday 2024-04-11    4880961 (75%)    5247555 (35%)
AVG                  34509759         4750349554

If that makes sense.
Try it without the penultimate command, | search MOP=*. It isn't necessary, as stats by MOP effectively does the same thing, i.e. you will only get stats for non-null values of MOP, which is what the search is doing as well.
Can you please let me know the TIME_PREFIX & TIME_FORMAT for the below log type?

00:0009:00000:00000:2024/04/12 12:14:02.34 kernel extended error information
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel  read returning -1 state 1
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel nrpacket: recv, Connection timed out, spid: 501, suid: 84
I mean to say that I am not getting the required result. You can find the same in the attached snippet.
It isn't clear what you mean here by "% of the total average". Do you mean the percentage of the total for that host that the count represents, or the percentage of the grand total? Since you have also used timechart, you could also mean the percentage of the total for the time bin that the host's count represents. It is probably best to work out what you are trying to show in your table/chart, to clarify what the required calculation is.
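For instance, if the goal is a percentage-of-grand-total column next to each host's count, a minimal sketch using eventstats (field names here are assumptions):

```
| stats count by host
| eventstats sum(count) as grand_total
| eval pct=round(100*count/grand_total, 1)
| fields - grand_total
```

eventstats attaches the overall sum to every row without collapsing them, which is what makes the per-row percentage possible.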
Your makeresults isn't valid SPL, so it is still a little unclear what you are working with. Having said that, if your makeresults has two fields, a key field and an expected-results field, you could append your makeresults to your actual results, then use stats to combine the events by their key values, and then compare whether they are different.
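A sketch of that append-and-compare pattern (the index name, field names, and item values are assumptions for illustration):

```
index=my_index_1
| stats count by actualResults
| rename actualResults as item
| eval source="actual"
| append
    [| makeresults
     | eval item=split("My Item 1,My Item 2,My Item 3", ",")
     | mvexpand item
     | eval source="expected"]
| stats values(source) as sources by item
| eval matched=if(mvcount(sources)=2, "True", "False")
| table item matched
```

Items present in both the query results and the fixed list get both "actual" and "expected" in sources, so mvcount(sources)=2 flags a match.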
That is amazing, thank you. I am new to the Splunk world, as you can see. How about a field next to each host calculating the % of the total average per count?
| bin _time span=1m
| stats count by _time backend_service_url status_code
| eval {status_code}=count
| fields - status_code count
| stats values(*) as * by _time backend_service_url
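To see the eval {status_code}=count step in isolation, here is a self-contained sketch with made-up sample values (all data below is invented, not from your logs):

```
| makeresults count=3
| streamstats count as n
| eval status_code=case(n=1, "200", n=2, "404", n=3, "200"), count=10*n
| eval {status_code}=count
| fields - n
```

Each event gains a field named after its own status_code value (a field called "200" or "404"), which is what lets the final stats values(*) produce one dynamic column per status code.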
The base search is a hard-coded list of known values using makeresults, so I could certainly add a key (and it could match the field name being returned in the query).

| makeresults
| eval expectedResults=actualResults="My Item 1", actualResults="My Item 2", actualResults="My Item 3"
| makemv delim="," expectedResults
| mvexpand expectedResults
| table expectedResults

I'm not concerned about a sort order, except maybe when I do a final presentation of the data. It's more about determining which values returned by the query match (or don't match) the values in the base list.
Hi @splunkettes, I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the  visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you! 
Hi, I have the following fields in logs on my proxy for backend services:

_time -> timestamp
status_code -> HTTP status code
backend_service_url -> app it is proxying

What I want to do is aggregate status codes by the minute per URL, for each status code. So sample output would look like:

time     backend-service    Status code 200    Status code 201    Status code 202
10:00    app1.com           10                                    2
10:01    app1.com                              10
10:01    app2.com           10

Columns would be dynamic based on the available status codes in the timeframe I am searching. I found a lot of questions on aggregating all 200s into 2xx, or on total counts by URL, but not this. Appreciate any suggestions on how to do this. Thanks!
You possibly need to expand on your use case. Does your "base search" return your expected results in a particular order, and do they have a key field which can be correlated against your actual results? Also, bear in mind that stats values() returns a multivalue field in deduplicated, sorted order, which may not necessarily be the same order as your base search.
What do you mean by "breaking down"?
I have a dashboard where I want to report whether each value of the results of a query matches a value in a fixed list. I have a base search that produces the fixed list:

<search id="expectedResults">
  <query>
    | makeresults
    | eval expectedResults="My Item 1", "My Item 2", "My Item 3"
    | makemv delim="," expectedResults
    | mvexpand expectedResults
    | table expectedResults
  </query>
  <done>
    <set token="expectedResults">$result.expectedResults$</set>
  </done>
</search>

Then I have multiple panels that will get results from different sources, pseudo-coded here:

index="my_index_1" query
| table actualResults
| stats values(actualResults) as actualResults

Assume that the query returns "My Item 1" and "My Item 2". I am not sure how to compare the values returned from my query against the base list, to give something that reports whether it matches each value:

My Item 1    True
My Item 2    True
My Item 3    False
I am not sure I understand where the tokens are being set and being used. Can you not just remove Output=$form.output$ from the search for the panel where it isn't available?
Hi Splunkers, I am facing a weird issue with the addcoltotals command. It works perfectly fine if I open a new search tab, but once I add the same query to a dashboard it breaks. I am trying to run the command with Splunk DB Connect. Below is the query for reference:

index=db_connect_dev_data
| rename PROCESS_DT as Date
| table OFFICE, Date, MOP, Total_Volume, Total_Value
| search OFFICE=GB1
| eval _time=strptime(Date, "%Y-%m-%d")
| addinfo
| eval info_min_time=info_min_time-3600, info_max_time=info_max_time-3600
| where _time>=info_min_time AND _time<=info_max_time
| table Date, MOP, OFFICE, Total_Volume, Total_Value
| addcoltotals "Total_Volume" "Total_Value" label=Total_GB1 labelfield=MOP
| filldown
| eval Total_Value_USD=Total_Value/1000000
| eval Total_Value_USD=round(Total_Value_USD, 5)
| stats sum(Total_Volume) as "Total_Volume", sum("Total_Value_USD") as Total_Value(mn) by MOP
| search MOP=*
| table MOP, Total_Volume, Total_Value(mn)

Let me know if anyone knows why it is happening.