All Posts


Hi @splunkettes, I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi, I have the following fields in the logs on my proxy for backend services:

_time -> timestamp
status_code -> HTTP status code
backend_service_url -> the app it is proxying

What I want to do is aggregate status codes by the minute, per URL, for each status code. Sample output would look like:

time   backend-service  Status code 200  Status code 201  Status code 202
10:00  app1.com         10                                2
10:01  app1.com                          10
10:01  app2.com         10

Columns should be dynamic based on the status codes present in the timeframe I am searching. I found a lot of questions about aggregating all 200s into 2xx, or total counts by URL, but not this. I'd appreciate any suggestions on how to do this. Thanks!
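A sketch of one possible approach, assuming placeholder index/sourcetype names and the field names from the question: bin events to the minute, count per URL and status code, then pivot each status code into its own dynamically named column.

```spl
index=proxy_logs
| bin _time span=1m
| stats count by _time backend_service_url status_code
| eval column="Status code ".status_code
| eval {column}=count
| fields - status_code column count
| stats values("Status code *") as "Status code *" by _time backend_service_url
```

Because the `{column}` fields are created from the data, columns appear only for status codes that actually occur in the searched timeframe, which matches the dynamic-columns requirement.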
You possibly need to expand on your use case. Does your "base search" return your expected results in a particular order, and do they have a key field which can be correlated against your actual results? Also, bear in mind that stats values() returns a multivalue field in deduplicated, sorted order, which may not be the same order as your base search.
What do you mean by "breaking down"?
I have a dashboard where I want to report whether each value in the results of a query matches a value in a fixed list. I have a base search that produces the fixed list:

<search id="expectedResults">
  <query>
    | makeresults
    | eval expectedResults="My Item 1,My Item 2,My Item 3"
    | makemv delim="," expectedResults
    | mvexpand expectedResults
    | table expectedResults
  </query>
  <done>
    <set token="expectedResults">$result.expectedResults$</set>
  </done>
</search>

Then I have multiple panels that get results from different sources, pseudo-coded here:

index="my_index_1" query
| table actualResults
| stats values(actualResults) as actualResults

Assume the query returns "My Item 1" and "My Item 2". I am not sure how to compare the values returned by my query against the base list, to produce something that reports whether each value matches:

My Item 1  True
My Item 2  True
My Item 3  False
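One hedged way to sketch this, reusing the index and field names from the question (the append/coalesce pattern is an assumption about how the two lists might be combined): tag each actual result, append the expected list, then collapse by item.

```spl
index="my_index_1" query
| stats values(actualResults) as item
| mvexpand item
| eval matched="True"
| append
    [| makeresults
     | eval item=split("My Item 1,My Item 2,My Item 3", ",")
     | mvexpand item ]
| stats values(matched) as matched by item
| eval matched=coalesce(matched, "False")
```

Expected items that never appear in the actual results come only from the appended subsearch, so their `matched` field is null and coalesce() reports "False". Note that any actual result outside the fixed list would also show up as an extra "True" row.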
I am not sure I understand where the tokens are being set and being used. Can you not just remove Output=$form.output$ from the search for the panel where it isn't available?
Hi Splunkers, I am facing a weird issue with the addcoltotals command. It works perfectly if I run it in a new search tab, but once I add the same query to a dashboard it breaks. I am running the query through Splunk DB Connect. Below is the query for reference:

index=db_connect_dev_data
| rename PROCESS_DT as Date
| table OFFICE, Date, MOP, Total_Volume, Total_Value
| search OFFICE=GB1
| eval _time=strptime(Date, "%Y-%m-%d")
| addinfo
| eval info_min_time=info_min_time-3600, info_max_time=info_max_time-3600
| where _time>=info_min_time AND _time<=info_max_time
| table Date, MOP, OFFICE, Total_Volume, Total_Value
| addcoltotals "Total_Volume" "Total_Value" label=Total_GB1 labelfield=MOP
| filldown
| eval Total_Value_USD=Total_Value/1000000
| eval Total_Value_USD=round(Total_Value_USD, 5)
| stats sum(Total_Volume) as "Total_Volume", sum("Total_Value_USD") as "Total_Value(mn)" by MOP
| search MOP=*
| table MOP, Total_Volume, "Total_Value(mn)"

Let me know if anyone knows why this is happening.
You could try something like this:

| appendpipe [| stats avg(*) as average_*]
| addcoltotals
| foreach average_* [| eval <<MATCHSEG1>>=if(isnull(<<MATCHSEG1>>), <<FIELD>>, <<MATCHSEG1>>)]
| fields - average_*
Hello guys, I'm currently trying to set up Splunk Enterprise in a clustered architecture (3 search heads and 3 indexers) on Kubernetes, using the official Splunk Operator and the Splunk Enterprise Helm chart. In my case, what is the recommended way to set the initial admin credentials? Do I have to access every instance, define a "user-seed.conf" file under $SPLUNK_HOME/etc/system/local, and then restart the instance, or is there an automated way to set the password across all instances by leveraging the Helm chart?
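A hedged sketch, based on my understanding of the Splunk Operator's documented behavior (verify against the operator docs for your version): the operator keeps shared credentials, including the admin password, in a single "global" Kubernetes Secret named splunk-<namespace>-secret, so you normally do not edit user-seed.conf on each pod. With mynamespace as a placeholder:

```shell
# Inspect the global secret the operator created; its keys include "password".
kubectl get secret splunk-mynamespace-secret -n mynamespace -o yaml

# Set the admin password for all instances in the namespace.
# "newpassword" is a placeholder; the value must be base64-encoded.
kubectl patch secret splunk-mynamespace-secret -n mynamespace \
  -p '{"data": {"password": "'$(echo -n "newpassword" | base64)'"}}'
```

The operator is responsible for propagating the updated secret to the search head and indexer pods, which avoids touching each instance by hand.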
Hi Team, please help with the above question. Thanks.
Hello Team, I have a parent dashboard with 5 panels. These are linked to one child dashboard; based on the tokens passed, the filtered data changes. However, I notice that one panel has no field named Output, due to which I get "no results found". Is there a way to remove this passed token from the code?

| search $form.app_tkn$ Category="A event" Type=$form.eventType$ Output=$form.output$
index=mainframe sourcetype=BMC:DEFENDER:RACF:bryslog host=s0900d OR host=s0700d
| timechart limit=50 count(event) BY host
| addcoltotals

I am looking to add the AVG from each 1-week total for each day.
Apart from all that @richgalloway already mentioned, this document shows the results of testing on particular reference hardware. It's by no means a guarantee that an input will achieve this performance. Also remember that Windows event log inputs get the logs by calling the system via winapi, whereas the file input just reads the file straight from disk (most probably using memory-mapped files, since that is the most efficient method). And last, but definitely not least, as I already pointed out: a UF typically doesn't break data into events!
Were you able to resolve the issue? I am getting the same error. 
That document is for a specific source where the event size is well-defined.  The information there cannot be generalized because the size of an "event" is unknown.  I've seen event sizes range from <100 to >100,000 bytes so it is very difficult to produce an EPS number without knowing more about the data you wish to ingest. It's possible the documentation for other TAs provides the information you seek.  Have you looked at the TAs for your data?
I know there is Splunk Add-on for AWS, but I heard there is a simpler and easier way to read the buckets directly without using that Add-on. Is that true?  
Thank you @PickleRick for your concern.

1. I have tried to embed a report on my website by following the document Embed scheduled reports, but I was not able to embed the Splunk report. It shows "Report not available", and the console shows a 401 Unauthorized status code. Please check this image and reply.
2. I will look into the app Embedded Dashboards For Splunk (EDFS).
3. Sure, I will try to use a backend (server-side) service to get Splunk data securely using the REST API. Let me explore more about the REST API for the backend service and where I have to request access.
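For the server-side approach, a minimal sketch of calling Splunk's search REST API (the host, credentials, and search string are placeholders; the /services/search/jobs/export endpoint streams results without polling for job completion):

```shell
# Hypothetical values -- replace with your Splunk host and a service account.
SPLUNK_HOST="splunk.example.com"
SEARCH='search index=web | head 5'

# The management port (8089) exposes the REST API over HTTPS.
curl -k -u svc_user:changeme \
  "https://${SPLUNK_HOST}:8089/services/search/jobs/export" \
  -d output_mode=json \
  --data-urlencode "search=${SEARCH}"
```

Running this from your backend keeps the credentials off the browser, which avoids the 401 you were seeing with direct embedding; the backend can then reshape the JSON before serving it to the website.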
Unfortunately, all the required fields are present in the raw event, and the tags were also produced correctly. In this (quite old) thread https://community.splunk.com/t5/Splunk-Search/Define-user-field-in-Security-Essentials/m-p/312738 I found an issue somewhat related to mine: their problems are also connected to the user and src_user field extraction, and my AD server is likewise in a non-English language. Has anyone found the underlying issue?
Hello, I am trying to create a custom view (via XPath) in Event Viewer and later ingest it into Splunk via a "WinEventLog" stanza, leveraging the Windows Add-on. Can it be done using "WinEventLog", or some other way in inputs.conf, as it is for Application/Security/System?

[WinEventLog://MyCustomLog]

As suggested here, I tried this configuration, but no logs were onboarded, and it returned no errors in the _internal logs either. Has anyone found a custom solution for ingesting these newly created custom views from Event Viewer into Splunk? Thanks
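For what it's worth, Event Viewer custom views are saved XPath filters over existing channels rather than channels themselves, so (as far as I can tell) a [WinEventLog://MyCustomView] stanza has no real channel to read, which would explain the silent failure. A hedged sketch of the usual workaround: monitor the underlying channel and reproduce the view's filter with whitelist/blacklist settings in inputs.conf (the event ID range and regex below are placeholders):

```ini
[WinEventLog://Application]
disabled = 0
# Keep only the event IDs the custom view selects (placeholder range).
whitelist = 1000-1999
# Or filter on event fields with key="regex" pairs (placeholder source name).
blacklist1 = SourceName="SomeNoisySource"
```

This moves the XPath logic of the custom view into the forwarder's own filtering, which is the mechanism inputs.conf actually supports for event log inputs.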
I am looking for something similar to this. https://docs.splunk.com/Documentation/WindowsAddOn/8.1.2/User/PerformancereferencefortheSplunkAdd-onforWindows