All Posts


Hi @kp_pl , sorry but I don't understand your request: perc75(tt) is one of the calculated values, so why do you want to add a new column? Could you share what results you are expecting? Ciao. Giuseppe
Hi @Silah , yes, you can create two different stanzas, one for each sender, with different indexes. The only question is: why? Usually indexes are chosen when you have different retentions or different access grants, not for different sources or technologies. Different sources are distinguished within the same index by host, and different technologies by sourcetype. Ciao. Giuseppe
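A minimal inputs.conf sketch of that two-stanza approach might look like the following (the monitor paths, index names, and sourcetypes are hypothetical placeholders, not values taken from this thread):

[monitor:///var/log/sender_a/app.log]
index = sender_a_index
sourcetype = sender_a:logs
disabled = 0

[monitor:///var/log/sender_b/app.log]
index = sender_b_index
sourcetype = sender_b:logs
disabled = 0

Each index referenced here must also already exist, either defined in indexes.conf or created through the UI.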
Hi @Shahnoor , are you sure that the numbers of events and errors in the 5-minute slices really differ? Because the search is correct. Please try these two searches and manually compare the results:

index=my_index CAT=B | timechart span=5m count(eval(RESULT="404")) AS Error_count

and

index=my_index CAT=B | timechart span=5m count

Ciao. Giuseppe
(Hard sometimes to think of a good salutation that isn't boring or awkward, so fill in what you like here.) I accidentally deleted some of the datasets in my TA data model. I have a back-up from the original app, but I want to know if there is a quick and easy way to restore these. It would also be a useful exercise to go over how the DM and its related data are stored in Splunk (from the backend perspective, not where to look for it in the GUI). Thanks beforehand for any help/guidance.
Thanks a lot Giuseppe! I sincerely appreciate your quick response. I'm getting the error percentage now. One small problem: for all the 5-minute spans throughout the last 24 hours, I'm getting exactly the same numbers for both total events and errors, so the error percentage is constant over time (Error count: 106, Event count: 1525, percentage: 6.95%). I know this is not correct; the number of events varies between peak and off-peak hours. Do you think it's calculating the same data and plotting it over different times? This is what my current search looks like:

index=my_index CAT=B | bin span=5m _time | stats count(eval(RESULT="404")) AS Error_count count BY _time | eval Error_Percentage=round(Error_count/count*100,4)
Hi @RonWonkers , in Splunk Enterprise alerts it isn't possible to define multiple fields for throttling as you can in Enterprise Security. If you're speaking of Enterprise Security you can use multiple fields; otherwise it isn't possible right now, maybe in a future version. Ciao. Giuseppe
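For reference, a minimal savedsearches.conf sketch of how field-based throttling is configured (the stanza name, suppression period, and field are hypothetical placeholders for this scenario, not settings taken from the thread):

[Employee file access alert]
alert.suppress = 1
alert.suppress.period = 24h
alert.suppress.fields = employee

alert.suppress.fields names the field whose values are used to suppress repeated triggers for the configured period.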
I extracted 2 fields called 'Resp_time' and 'Req_time'. Both of these fields are integers, and I have also converted the values to epoch time. How do I display the difference between Resp_time and Req_time?
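Assuming both fields now hold epoch seconds, a minimal sketch of the calculation could look like this (the base search is a placeholder; only the two field names come from the question):

... your base search ... | eval diff_seconds = Resp_time - Req_time | eval diff_duration = tostring(diff_seconds, "duration") | table Req_time Resp_time diff_seconds diff_duration

tostring(diff_seconds, "duration") renders the difference as HH:MM:SS instead of a raw number of seconds.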
Hi, I have an alert that triggers when an employee opens a file. This alert runs every 30 minutes so we can see these alerts fast. When employee1 opens file1 we see the alert, and we throttle based on the field "employee", because if we don't throttle then this alert keeps repeating every 30 minutes. The problem now is that when employee1 opens file2, file3, or file4 we don't see this anymore, since we have a throttle on employee. Is there a way to throttle on a combination of employee and file, so that when employee1 opens file1 we get an alert, and when he opens file2 we get a different alert, but we don't keep seeing the same alerts repeating every 30 minutes?
Hi, I have added the config details already, but data is still not coming in.
Hi @Shahnoor , you should try something like this:

index=your_index CAT=B | bin span=5m _time | stats count(eval(RESULT="404")) AS 404_count count BY _time | eval perc='404_count'/count*100

adapting it to your conditions (e.g. CAT=B). Ciao. Giuseppe
We found a lot of errors, shown below. Queue capacity seems to be getting breached after the metaspace limit is breached; however, we need debug logs from the problematic time period to confirm this. A restart resolves the issue temporarily.

PRODCUSTOMXYZ01==> [AD Thread-Metric Reporter0] 24 Jul 2024 18:51:06,818 DEBUG ManagedMonitorDelegate - Adding metric to the Queue to publish [Custom Metrics|Log Monitor|InfraDB_Logs|Search String|ORA-0.1502|Occurrences]

PRODCUSTOMXYZ01==> [extension-scheduler-pool-1] 24 Jul 2024 18:50:46,365 DEBUG SimMetricsService - Not reporting metric with name Hardware Resources|Process|ora_p00i_XYZApp|Faults|Minor Faults/sec, its value is unknown.
Data will not be indexed automatically after adding the add-on.  Inputs must be configured so the add-on knows where to find the data.  See https://docs.splunk.com/Documentation/AddOns/released/MSSecurity/Configure
Hi All, I am getting the logs from this query, but I need a query to get the deviation of the error count between two time periods:

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile"

For this I need to take the volume of logs into account as well. Depending on the deviation percentage I will decide whether to promote the deployment or stop it.
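One rough sketch of that comparison, assuming errors can be identified by the keyword ERROR in the raw events and comparing the last full hour against the hour before it (the keyword, time ranges, and rounding are assumptions to adapt, not details from the original question):

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" earliest=-2h@h latest=@h
| eval period=if(_time>=relative_time(now(),"-1h@h"),"current","previous")
| stats count AS total_events sum(eval(if(searchmatch("ERROR"),1,0))) AS error_count BY period
| eval error_rate=round(error_count/total_events*100,2)
| stats max(eval(if(period=="current",error_rate,null()))) AS current_rate max(eval(if(period=="previous",error_rate,null()))) AS previous_rate
| eval deviation_pct=round((current_rate-previous_rate)/previous_rate*100,2)

Dividing by total_events in each period accounts for the log volume, so deviation_pct reflects the change in error rate rather than in absolute error count.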
Well, that's very good news. And IMHO it's a good solution for others to find in the future: get your data in order first. Just leave the thread be.
I have a number of events in 2 categories (CAT A and CAT B). There are successful events and failed events with different RESULT values. I need to calculate the error percentage of a specific failed event (RESULT = 404) that occurs only in CAT B, so I need to exclude CAT A from the calculation. The final result should be ( count(RESULT = 404) / count(CAT B) * 100 ), plotted for every 5 minutes. Please suggest.
We recently added a TOS to our deployment. We have smart card auth enabled as well. When both are enabled at the same time and we select "OK" to agree to the terms, nothing happens; prior to getting to the TOS screen we have already authenticated using the smart card. If we disable smart card auth and just allow UN/PW sign-in, we are able to accept the terms and continue on our merry way. If we disable the TOS and just have smart card auth enabled, we are allowed to continue. What is (or is not) happening such that we can't have both enabled simultaneously?
Yes, that is me. I am sorry for using two channels for the same question; after asking in Slack I searched again about the issue on the web but could not find any previous questions, so I realized it could be better to ask here for future Splunk explorers. However, I was eventually able to resolve the issue by editing my third-party source code (not my Splunk UF) to produce validly formatted JSON messages. So the problem is solved, but not in a conventional way. For this reason I think the question should be completely deleted in order to avoid future confusion. How can I remove this question completely?
Hi All, Data is not getting indexed after adding the conf
After rebuilding the docker image of DBConnect it requires a restart of the container in order to start showing data flowing.  Is there something I'm missing that is making me need to perform a restart of the app for it to work properly?
Hello Splunkers, I've created a custom role with very basic capabilities enabled. The capability "edit_own_objects" has been disabled. For some reason the user is able to clone reports as well as save searches as reports. The user is also able to access the "New Report" button when clicking Settings->Searches,Reports,Alerts. I thought disabling the edit_own_objects capability would prevent the user from creating any objects, but that is not the case. I've made sure the user only has read access to the app as well. Any help or suggestions would be appreciated! Thanks!