All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am calculating several metrics (such as counts and rates) for the combination of time and usually at least one other dimension, with the intention of using the anomalydetection function to generate alerts on unexpected values. The issue I am running into is that the function sometimes generates an anomaly based on one of the grouping fields instead of on the metrics. When I try to use the fields-list parameter and only provide the metric columns it no longer filters down to the anomalies, instead it returns the entire dataset. It does this even if I explicitly set action=filter. I don't want to remove the grouping columns from the dataset completely because they are helpful for investigating the anomaly. The documentation doesn't include any examples of using the fields-list parameter. I was wondering if anyone has an example of successfully narrowing down to a subset of fields.
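In case it helps to sketch an answer: the anomalydetection command accepts a trailing field list that restricts which fields are modeled, alongside action=filter. A minimal, hedged sketch (the metric and grouping field names here are hypothetical, not from the original post):

```
... | stats count as metric_count, avg(rate) as metric_rate by _time, region
| anomalydetection action=filter pthresh=0.01 metric_count metric_rate
```

If the command still appears to ignore the field list, lowering pthresh (the anomaly probability threshold) is worth testing separately, since a permissive threshold can also make the output look unfiltered.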
Trying to start on-prem controller, getting this error:

[root bin]# ./controller.sh start-appserver
[INFO] Path to mysql is not defined in system environment (MYSQL_BIN). Using mysql from /data/appdynamics/controller/db/bin
[INFO] Mysql configuration file is not defined in system environment (AD_DB_CNF). Using mysql configuration file: /data/appdynamics/controller/db/db.cnf
[INFO] Mysql datadir is /data/appdynamics/controller/db/data
[#|2020-05-18T16:18:54.496-0400|WARNING|5.1.1|imq.log.Logger|_ThreadID=1;_ThreadName=main;|[S2004]: Log output channel com.sun.messaging.jmq.util.log.SysLogHandler is disabled: no imqutil in java.library.path|#]
[#|2020-05-18T16:18:54.508-0400|SEVERE|5.1.1|imq.log.Logger|_ThreadID=1;_ThreadName=main;|ERROR [B3103]: imq.persist.store=file is not jdbc type: com.sun.messaging.jmq.jmsserver.util.BrokerException: [B3103]: imq.persist.store=file is not jdbc type
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.comm.CommDBManager.initDBManagerProps(CommDBManager.java:365)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.DBManager.<init>(DBManager.java:253)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.DBManager.getDBManager(DBManager.java:181)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.DBTool.doCommand(DBTool.java:2396)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.DBTool.main(DBTool.java:2506)
|#]
Starting controller application server in default mode...
nohup: ignoring input and redirecting stderr to stdout
***** Controller application server started. Login URL is http://localhost:8090/controller *****

But the controller UI never loads.
Doing a search that has a wide range of return values... and the largest one will not display on my chart! I have 7 evals with values, but the chart only displays 6 of them! I'm going crazy.

SEARCH:
index=tomcat time_taken_ms=* sourcetype=access_common
| eval Sub1s=if(time_taken_ms>=200 AND time_taken_ms<1000,1,0)
| eval Sub2s=if(time_taken_ms>=1000 AND time_taken_ms<2000,1,0)
| eval Sub3s=if(time_taken_ms>=2000 AND time_taken_ms<3000,1,0)
| eval Sub4s=if(time_taken_ms>=3000 AND time_taken_ms<4000,1,0)
| eval Sub5s=if(time_taken_ms>=4000 AND time_taken_ms<5000,1,0)
| eval Over5s=if(time_taken_ms>=5000 AND time_taken_ms<10000,1,0)
| eval Over10s=if(time_taken_ms>=10000,1,0)
| chart sum(Sub1s) sum(Sub2s) sum(Sub3s) sum(Sub4s) sum(Sub5s) sum(Over5s) sum(Over10s)

PICTURE OF OUTPUT:
Hi Team, I have created a couple of alerts. The first one runs every 30 minutes from Monday to Friday, and the other runs every 2 hours from 00:00 to 06:00. What I want is for the 30-minute job to pause or stop while the 2-hour one is running, then have the 30-minute job resume after the 2-hour one is complete. Or, is there another way to put the 30-minute job on hold?

Here is the schedule I used for the 30-minute alert:
0,30 0-6,12-23 * * 1-5

And this one for the 2-hour alert (120-minute window):
0 2-6/2 * * *

But I still see some alerts triggered during the 120-minute window. Kindly help - it is a bit urgent.
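One hedged workaround, since the Splunk scheduler cannot natively pause one alert while another runs: adjust the 30-minute cron expression so it skips the hours in which the 2-hour alert fires (02:00, 04:00 and 06:00 in the schedule above). A sketch of a replacement schedule for the 30-minute alert:

```
# hypothetical replacement for "0,30 0-6,12-23 * * 1-5":
# same minutes, but the even hours 2, 4 and 6 are excluded
0,30 0-1,3,5,12-23 * * 1-5
```

This does not "pause" anything; it simply stops the two schedules from overlapping, which is usually the practical goal.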
Hi, I'm not exactly sure of the best way to approach this issue. I have a list of external IP addresses along with a descriptive name for each of those IP addresses. I want all Splunk Enterprise Security dashboards to show that descriptive name every time one of those known external IP addresses appears. What is the best way to associate each name with each of my known external IP addresses?

Example:
My external IP = 54.123.123.123
My descriptive name = MyExternalMailServer

Thanks, Grant L
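The common pattern for this is a CSV lookup, optionally promoted to an automatic lookup so every dashboard picks it up without query changes. A sketch, assuming a hypothetical lookup file ip_names.csv with columns ip and descriptive_name:

```
ip,descriptive_name
54.123.123.123,MyExternalMailServer
```

Used explicitly in a search (the src_ip field name is an assumption; match it to the CMN/ES field in play):

```
... | lookup ip_names.csv ip AS src_ip OUTPUT descriptive_name
```

Defining it as an automatic lookup on the relevant sourcetypes (via Settings > Lookups or props.conf) is what makes the name appear across existing ES dashboards.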
I have read other articles but haven't found an answer. I recently pushed the universal forwarder to Windows clients to upgrade from 6.5.1 to 7.2.6 and did not set the user password. The forwarder is functioning, but on start it throws a "No user configured" error. I created a user-seed.conf with a hashed password per the instructions and pushed it from Windows SCCM, with a restart after placing the file into system/local as described in the user-seed.conf.spec page. However, the forwarder does not seem to be reading it on restart, since I am still seeing the same error. Has anyone else found a reason for this? We have checked file permissions and don't see a problem. The forwarder runs as a local system service and is seen as a local administrator; the local admin has permissions to the user-seed.conf file.
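For reference, a user-seed.conf sketch per the spec (values are placeholders). One detail worth checking on an upgraded forwarder: user-seed.conf is only consumed at startup if $SPLUNK_HOME/etc/passwd does not already exist, which can make the file appear to be silently ignored:

```
# $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
HASHED_PASSWORD = <hash produced by "splunk hash-passwd <password>">
```

If etc/passwd exists from the earlier install, moving it aside before the restart is the usual way to force the seed file to be read.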
I have successfully installed the Git Version Control for Splunk app on my local environment. However, when I set it up in our prod environment I'm getting the following error: "No results found". Can someone help me resolve the error?
I need to get a report of the search windows used in historical search activity. For example, we need to determine how far back users are typically searching (last 24 hours, last 7 days, more than 90 days ago, etc.). I'm familiar with the DMC dashboards, which don't provide the search window of the searches they report on, and with the logs in the _audit index, which might be my best bet. Is there documentation explaining what the different time fields are in index=_audit action=search? Or is there another way to report how often each search window/time frame is used over a given period?
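A hedged sketch of one way to bucket search windows from the audit index, assuming the search_et / search_lt fields (epoch earliest/latest of each search) that appear on info=granted audit events; note search_et is the literal string "N/A" for all-time searches:

```
index=_audit action=search info=granted search_et=*
| eval lookback_days = (now() - tonumber(search_et)) / 86400
| eval window = case(isnull(lookback_days), "all time / unknown",
                     lookback_days <= 1,   "last 24h",
                     lookback_days <= 7,   "last 7d",
                     lookback_days <= 90,  "last 90d",
                     true(),               "older than 90d")
| stats count by window
```

tonumber() returns null for "N/A", which is what routes all-time searches into the first case.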
I have the below Splunk query to get some data from my logs:

index=myindex sourcetype=mysourcetype "search string"
| stats sum(TotalRecords) As "Total Records", sum(TotalTime_Taken) As "Total Time Taken" by Content

Below is the result of the above query:

Content_Type    Total Records    Total Time Taken
============    =============    ================
Documents       13               25
Blogs           25               120
Events          2                5

I want another column in my result, Average_time_taken, where:

Average_time_taken = Total Time Taken / Total Records

The result should be as below:

Content_Type    Total Records    Total Time Taken    Average_time_taken
============    =============    ================    ==================
Documents       13               25                  1.9230
Blogs           25               120                 4.8
Events          2                5                   2.5
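A sketch of the extra column; the one wrinkle is that field names containing spaces must be wrapped in single quotes inside eval:

```
index=myindex sourcetype=mysourcetype "search string"
| stats sum(TotalRecords) As "Total Records", sum(TotalTime_Taken) As "Total Time Taken" by Content
| eval Average_time_taken = round('Total Time Taken' / 'Total Records', 4)
```

round() is optional; without it Splunk shows the full floating-point quotient.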
I need help with the following JSON format, which is coming in via HTTP Event Collector. I want to extract Status, Severity, Id and PatchState from it:

{
  "relatedEvents": [],
  "relationships": [
    {
      "resourceId": "REDACTED",
      "resourceType": "AWS::SSM::ManagedInstanceInventory",
      "name": "Is associated with "
    }
  ],
  "configuration": {
    "AWS:ComplianceItem": {
      "SchemaVersion": "1.0",
      "Content": {
        "Patch": {
          "SomeValue": {
            "Status": "NON_COMPLIANT",
            "InstalledTime": "",
            "ExecutionType": "Command",
            "PatchSeverity": "",
            "Title": "AAAAAAAA",
            "Severity": "UNSPECIFIED",
            "ComplianceType": "Patch",
            "Classification": "",
            "DocumentVersion": "",
            "Id": "BBBBB",
            "PatchState": "Missing",
            "PatchBaselineId": "pb-xxxxxxxxxxxxxxxx",
            "DocumentName": "",
            "PatchGroup": ""
          },
          "SomeOtherValue": {
            "Status": "NON_COMPLIANT",
            "InstalledTime": "",
            "ExecutionType": "Command",
            "PatchSeverity": "",
            "Title": "CCCCCCCC",
            "Severity": "UNSPECIFIED",
            "ComplianceType": "Patch",
            "Classification": "",
            "DocumentVersion": "",
            "Id": "AAAAAAA",
            "PatchState": "Missing",
            "PatchBaselineId": "pb-xxxxxxx",
            "DocumentName": "",
            "PatchGroup": ""
          },

Please note that the key at the 4th level of nesting is variable (usually a package name), so it is hard to parse using spath, and I do not receive a fixed number of these 4th-level nested JSON objects. Please help, and thanks in advance.
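Since spath paths cannot wildcard that variable package-name key, one workaround is to pull the four fields straight out of the raw JSON with rex, using max_match=0 so each field becomes multivalue with one value per package object. A sketch, assuming the events look like the sample above:

```
... | rex max_match=0 "\"Status\":\s*\"(?<Status>[^\"]*)\""
| rex max_match=0 "\"Severity\":\s*\"(?<Severity>[^\"]*)\""
| rex max_match=0 "\"Id\":\s*\"(?<Id>[^\"]*)\""
| rex max_match=0 "\"PatchState\":\s*\"(?<PatchState>[^\"]*)\""
```

Caveat: Severity will also match PatchSeverity-like keys only if the quote anchoring is loosened, so keep the leading quote in the pattern; and if per-package rows are needed, mvzip/mvexpand can pair the parallel multivalue fields back up.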
I have set up an alert for when logging has stopped on a Windows endpoint using event code 1100, but I want to avoid results where logging has restarted soon after being stopped.

Current query:
sourcetype=WinEventLog
| where ('EventCode'="1100" OR 'EventCode'="6005")

However, events where a restart has occurred are still alerting. What I am trying to do is alert on events where logging has stopped (EventCode=1100) and NOT restarted (EventCode=6005) for the same endpoint.
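A sketch of one way to alert only when the most recent of the two event codes per endpoint is the "stopped" one:

```
sourcetype=WinEventLog (EventCode=1100 OR EventCode=6005)
| stats latest(EventCode) as last_code, latest(_time) as last_seen by host
| where last_code=1100
```

Hosts whose latest event is 6005 (service restarted) drop out, so only endpoints still in the stopped state at search time trigger the alert.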
I am trying to create a dashboard to monitor the file system for two different hosts. My ultimate aim is a timechart dashboard that shows any variations in file system usage percentage. My requirements are as follows: hosts as legends, timestamp on the x-axis, file system usage on the y-axis. Note: I have to be able to search for any required file system in the query.

Sample query I created:
index=index1 sourcetype=stype Mon=path1 OR path2 OR path3
| eval UPT=replace(UPT,"%","")
| chart values(UPT) as UPT by Mon, host

Please let me know what can be done to meet these requirements in Splunk.
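A hedged timechart sketch, assuming (as in the sample query) that Mon holds the file-system path and UPT the usage percentage; note each OR term needs its own Mon= to match as intended:

```
index=index1 sourcetype=stype (Mon=path1 OR Mon=path2 OR Mon=path3)
| eval UPT = tonumber(replace(UPT, "%", ""))
| eval series = host . ":" . Mon
| timechart span=1h max(UPT) by series
```

This gives _time on the x-axis, usage percentage on the y-axis, and one legend entry per host/file-system combination.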
SITUATION:
- I use indices "A" and "B" to answer the same question, but for different environments.
- Each index has unique field names, but those fields represent the same data in each index.
- I want one table that answers the question using both indices.
- I'm starting with SPL similar to this:

(index=A AND indexA_unqiue_field=x) OR (index=B AND sourcetype=v indexB_unique_field=y)
| rename indexA_unqiue_field AS field
| rename indexB_unique_field AS field
| table _time, field, index

PROBLEM:
- With the above SPL I am failing to answer the same question using the 2 indices. It appears to work fine for index B, but "field" does not populate for index A.

QUESTION:
- How do I get all fields to populate?
- Or how else may I achieve the goal?
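The second rename is the likely culprit: renaming a field that is absent on an event can clobber the destination field, so index A rows lose "field" again. coalesce is the usual fix, since it takes the first non-null value per event:

```
(index=A AND indexA_unqiue_field=x) OR (index=B AND sourcetype=v indexB_unique_field=y)
| eval field = coalesce(indexA_unqiue_field, indexB_unique_field)
| table _time, field, index
```

(The field names are copied from the post, including the "unqiue" spelling, since they are identifiers in the actual data.)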
Hi, Is there a simple query to calculate the average and peak daily count for the last 3 months? For example, let's say the 3 months are February, March and April; what I am looking for is the average count per day for each of those months. I mean: what is the average and peak in February, then what is the average and peak in March, and so on.

index=temp_env sourcetype=access_combined
| bucket _time span=1d
| stats count by _time
| stats avg(count) as AverageCountPerDay by date_month

The above query is not giving any results. Any ideas?
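A sketch that recreates the month label after the daily rollup; date_month no longer exists after the first stats (stats discards all fields except its outputs and group-by keys), which is likely why nothing comes back:

```
index=temp_env sourcetype=access_combined earliest=-3mon@mon latest=@mon
| bucket _time span=1d
| stats count by _time
| eval month = strftime(_time, "%Y-%m")
| stats avg(count) as AvgPerDay, max(count) as PeakDay by month
```

The earliest/latest bounds are an assumption about the intended window; adjust to taste.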
I've been looking around for how to measure and scale a Splunk HEC collector cluster, but I can't seem to find direct answers. I am trying to find a way to measure the performance of a HEC collector, and to determine when a HEC collector cluster needs to be scaled to accommodate more or fewer HEC requests. I understand that EPS (events per second) can be measured, but how does one determine if that number is too high or too low? Currently, I have two heavy forwarders acting as HEC collectors behind a load balancer. I am trying to find a good way to determine if this is enough or if another member needs to be added. Appreciate the help!
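One hedged signal for "do we need another HEC node" is queue saturation on the collectors, visible in metrics.log, rather than raw EPS. A sketch:

```
index=_internal source=*metrics.log group=queue name=parsingqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m avg(fill_pct) by host
```

Sustained fill near 100% on either heavy forwarder suggests scaling out (or up); consistently low single digits suggest there is headroom regardless of the absolute EPS number.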
I just installed the NetApp StorageGRID app for Splunk. Installation was easy but configuring it I have hit a wall. How do I point it to the audit logs and the REST API instance I want to monitor?
Hi, I built a report that lists daily maximums and averages of counts per hour over several days (difficult to put into words...). Here is the query I use:

<search string>
| bucket _time span=1h
| stats dc(Serial) as dcSerial, dc(otherserial) as dcOtherSerial by _time
| bucket _time span=1d
| eval Processedtime=strptime(_time,"%s")
| stats max(dcSerial) avg(dcSerial) max(dcOtherSerial) avg(dcOtherSerial) by Processedtime

So this query returns the maximum of the distinct counts of each field, which is good. But what I need is the value dcOtherSerial has when dcSerial is at its maximum during the day (because I don't care about the maximum of dcOtherSerial itself; I want to correlate the two). Is that understandable? Any thoughts? Thanks
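A sketch using eventstats to pick out the dcOtherSerial value from the hour where dcSerial peaks within each day (field names follow the query above; otherAtPeak is a hypothetical helper):

```
<search string>
| bucket _time span=1h
| stats dc(Serial) as dcSerial, dc(otherserial) as dcOtherSerial by _time
| bucket _time span=1d
| eventstats max(dcSerial) as dayMax by _time
| eval otherAtPeak = if(dcSerial = dayMax, dcOtherSerial, null())
| stats max(dcSerial) as maxSerial, avg(dcSerial) as avgSerial, max(otherAtPeak) as otherAtPeak by _time
```

If two hours tie for the daily maximum, this takes the larger otherAtPeak among the tied hours; values() or first() are alternatives depending on which tie-break is wanted.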
Hello, I am looking to join 2 lookups and match the field "AccountName" from lookup1 with the "user" field in lookup2. I have 269 results in lookup1 and 250 results in lookup2. When I match the fields and join the lookups, I lose the 19 results that don't have a match. How can I do this and keep the 19 results, so I can manually update them? TIA
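The lookup command behaves like a left join, so unmatched rows survive (unlike join, whose default inner type drops them). A sketch, with lookup1/lookup2 as the lookup names from the post and matched_user as a hypothetical output field:

```
| inputlookup lookup1
| lookup lookup2 user AS AccountName OUTPUT user AS matched_user
| eval matched = if(isnull(matched_user), "NO MATCH", "matched")
```

The 19 rows with no counterpart in lookup2 come back flagged "NO MATCH", ready for manual follow-up; add more OUTPUT fields as needed.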
Hi Team, I performed ./splunk restart on one of the search heads. After that, when I try to add the member back to the cluster, I am unable to. When I check the status in the CLI, it generates this message:

"search head cluster not enabled on this node. REST end point is not available."

For adding: ./splunk add shcluster-member current_member_uri="......":8089
To check status: ./splunk show shcluster-status

I have verified server.conf and I can see a stanza for the host which I restarted, like:

[shclusring]
.......
.......
mgmt_uri=(my host, with :8089)
......
....

Could you please let me know how to resolve this?
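Two hedged things to check. First, the stanza name must be exactly [shclustering]; a misspelled stanza (as quoted above) would be ignored entirely, which matches the "not enabled on this node" error. Second, if the SHC config really is gone, re-initializing the member and re-adding it is the usual path. A sketch of the CLI steps (all values are placeholders):

```
./splunk init shcluster-config -mgmt_uri https://<this_host>:8089 \
    -replication_port <port> -secret <shcluster_key> -shcluster_label <label>
./splunk restart
# then, from an existing healthy member:
./splunk add shcluster-member -new_member_uri https://<this_host>:8089
```

The secret and label must match the rest of the cluster for the add to succeed.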
Hello, I don't understand the following behaviour and am looking for a solution. The example below is somewhat simplified, but still contains the "error".

(admission-controller.cc AND pool_name="*") | stats dc(id)
=> 28,635

The above search selects some events and counts the unique ids; in this case 28,635.

(admission-controller.cc AND pool_name="*") | transaction id | stats dc(id)
=> 4,999

This search first applies transaction on id. In my understanding, as there are 28,635 different ids, the result of the second statement should be the same as the first one. But it isn't; it is less, 4,999 instead of 28,635. Could someone please explain why, and offer a solution?

Best regards, Nikolas
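For anyone hitting the same thing: transaction silently evicts open transactions once it reaches its limits.conf ceilings (e.g. maxopentxn), which undercounts dc(id) on large result sets; 4,999 sitting just under a 5,000 default is a strong hint. keepevicted is a documented transaction option that keeps the evicted ones in the output:

```
(admission-controller.cc AND pool_name="*")
| transaction id keepevicted=true
| stats dc(id)
```

If grouping by id is the real goal, `| stats ... by id` avoids transaction's limits entirely and is usually much faster.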