All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, when I run

index=_audit NOT user="splunk-system-user" | stats count by action

I find that the counts for accelerate_search and search are fairly high. So I was wondering which of these are panel searches, which are ad hoc searches, and which are alerts (scheduled searches). We need to create a report about search performance.
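The _audit events themselves carry a useful clue here: scheduled searches (alerts and scheduled reports) get a search_id prefixed with "scheduler". A minimal sketch building on that (the match patterns are assumptions to verify against your own audit data):

index=_audit action=search info=completed NOT user="splunk-system-user"
| eval search_type=case(match(search_id, "scheduler"), "scheduled (alert/report)", match(search_id, "subsearch"), "subsearch", true(), "ad hoc / panel")
| stats count by search_type

Panel searches are not labeled distinctly from ad hoc ones this way; newer Splunk versions add a provenance field to audit events (values like UI:Dashboard:<name>), which gives a cleaner split if it is present in your data.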
Hi All, I have a dashboard that fetches results from an input lookup file. I am getting results, but they are being combined (multiple field values merged into a single event). I want them listed separately, exactly as they appear in the lookup file. Lookup file data:

Storage_name   Type    Quality
Abcd           Fty     100
Efgd           Iju     2000
Ghyu           Thu     3455
Gfhbv          Uyhgt   4556

Any suggestions?
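For what it's worth, a bare inputlookup returns one row per CSV record, so merged values usually come from an aggregating command later in the panel's search. A minimal sketch, assuming a hypothetical lookup file name storage_lookup.csv:

| inputlookup storage_lookup.csv
| table Storage_name type Quality

If the rows still arrive merged, check the panel's search for stats values(), chart, or transpose, all of which combine multiple rows into multivalue fields.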
Hello. The Splunk service is restarting with an error as shown below during scheduled report execution at a specific time of day.

Search failure log:

08-17-2022 06:00:04.164 INFO SearchOperator:inputcsv [12673 phase_1] - sid:scheduler__admin__search__RMD52e8470291689a839_at_1660683600_5272 Successfully read lookup file '/opt/splunk/etc/apps/search/lookups/xxx.csv'.
08-17-2022 06:00:04.166 INFO MultiValueProcessor [12673 phase_1] - Checking max_mem_usage_mb resultsSize=100 maxHeapSize=15728640000 memoryUsage=1824925 earlyExit=0
08-17-2022 06:00:04.169 INFO MultiValueProcessor [12673 phase_1] - Checking max_mem_usage_mb resultsSize=200 maxHeapSize=15728640000 memoryUsage=6273048 earlyExit=0
08-17-2022 06:00:04.170 INFO MultiValueProcessor [12673 phase_1] - Checking max_mem_usage_mb resultsSize=300 maxHeapSize=15728640000 memoryUsage=7531940 earlyExit=0
....
08-17-2022 06:00:06.484 INFO MultiValueProcessor [12673 phase_1] - Checking max_mem_usage_mb resultsSize=25200 maxHeapSize=15728640000 memoryUsage=531030711 earlyExit=0
08-17-2022 06:00:06.485 INFO MultiValueProcessor [12673 phase_1] - Checking max_mem_usage_mb resultsSize=25300 maxHeapSize=15728640000 memoryUsage=531809607 earlyExit=0
08-17-2022 06:00:13.237 FATAL ProcessRunner [9783 ProcessRunner] - Unexpected EOF from process runner child!
08-17-2022 06:00:13.238 FATAL ProcessRunner [9783 ProcessRunner] - Helper process was killed by SIGKILL. Usually this indicates that the kernel's OOM-killer has decided to terminate the daemon process.
08-17-2022 06:00:13.238 FATAL ProcessRunner [9783 ProcessRunner] - Check the kernel log (possibly /var/log/messages) for more info
08-17-2022 06:00:13.238 ERROR ProcessRunner [9783 ProcessRunner] - helper process seems to have died (child killed by signal 9: Killed)!

Splunk config information:

/opt/splunk/etc/system/local/limit.conf
[default]
max_mem_usage_mb = 30000

/opt/splunk/etc/apps/search/local/limit.conf
[default]
max_mem_usage_mb = 10000

Even with the above settings, actual memory usage does not appear to reach the configured values. Splunk spec: 16 cores, 64 GB RAM.

If anyone knows about this issue, please share.
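A side observation: the paths above say limit.conf, while the file Splunk actually reads is limits.conf, so unless that is a transcription slip the stanzas are never applied at all. Independent of that, when the kernel OOM killer is involved it helps to confirm which search process balloons. A sketch using the resource-usage introspection data (field names as in recent Splunk versions; verify against your own _introspection events):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| rename data.search_props.sid AS sid data.mem_used AS mem_used_mb
| stats max(mem_used_mb) AS peak_mem_mb BY sid
| sort - peak_mem_mb
| head 10

Also note that, as I understand it, max_mem_usage_mb bounds specific in-memory operations (spilling to disk past the limit) rather than capping the whole search process, so host-level OOM kills can still occur below the configured number.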
I'm attempting to get an AWS EKS cluster instrumented with an AppD agent running against a SaaS instance. I have followed the instructions here: https://docs.appdynamics.com/appd/4.5.x/en/infrastructure-visibility/monitoring-kubernetes-with-the-cluster-agent/install-the-cluster-agent/deploy-the-cluster-agent-on-kubernetes as well as other guides. The containers are running, but I get this error in the cluster agent logs:

[ERROR]: 2022-08-17 01:06:57 - agentregistrationmodule.go:131 - Failed to send agent registration request: Status: 404 Not Found, Body: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>Error report</title><style type="text/css"><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 404 - Not Found</h1><hr/><p><b>type</b> Status report</p><p><b>message</b>Not Found</p><p><b>description</b>The requested resource is not available.</p><hr/></body></html>

I have available Server Visibility licenses, so that isn't the issue. What can I do to get this working?
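A 404 on the registration call usually means the agent reached a web server but not the controller endpoint it expected, so the controller URL, port, and account name in the cluster agent config are worth re-checking. A sketch of the relevant fields in cluster-agent.yaml, per the AppDynamics docs (all values are placeholders):

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "my-eks-cluster"
  controllerUrl: "https://<account>.saas.appdynamics.com"
  account: "<account-name>"

In particular, check that controllerUrl has no stray path segment and that account is the SaaS account name (not a username), since either mistake tends to surface as this kind of 404.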
I am attempting to have an indexer join a cluster, but I am met with this lovely warning:

Couldn't complete HTTP request: Connection reset by peer

Does anybody know what it means, or have a clue as to how to fix it?
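"Connection reset by peer" generally means the manager's management port answered and then dropped the connection, which is commonly an SSL mismatch on port 8089 (one side HTTPS, the other plain HTTP) or a pass4SymmKey mismatch. A sketch of the peer-side join command for re-checking the basics (flag names per recent Splunk versions; older releases use -master_uri and -mode slave):

splunk edit cluster-config -mode peer -manager_uri https://<manager-host>:8089 -replication_port 9887 -secret <pass4SymmKey>
splunk restart

Also compare the enableSplunkdSSL settings on both sides and confirm port 8089 is reachable from the peer; splunkd.log on the manager usually names the exact handshake failure.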
My search looks similar to the one below:

index=mock_index source=mock_source.log param1 param2 param3
| rex field=_raw "Latency: (?<latency>[0-9]+)"
| timechart span=5m avg(latency)

An example event:

2022-08-16 14:04:34,123 INFO [stuff] Latency: 55 [stuff]

What have I got wrong in my search that it doesn't draw a graph?
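The search itself looks valid against that sample event, so the first step is to confirm the extraction actually fires on the real data. A small debugging sketch (the \s+ is an assumption to tolerate variable whitespace after the colon):

index=mock_index source=mock_source.log param1 param2 param3
| rex field=_raw "Latency:\s+(?<latency>\d+)"
| stats count AS events count(latency) AS extracted

If extracted is 0, the real events differ from the sample. If it matches events, the search is fine and the issue is presentation: timechart output renders on the Visualization tab, not the Events tab, so also check that the chosen time range actually spans several 5-minute buckets.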
I have the following queries:

query 1: index1 .... | table _time uniqueID
query 2: index2 .... | table _time uniqueID

And I am trying to find events where the uniqueID is found in both AND the difference between the two times is greater than N milliseconds.

Ideally, the output should be something like:

uniqueID   System 1 time   System 2 time   Difference in millis
123        Time 1          Time 2          500
1234       Time 11         Time 22         60

Could you please help?
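One common pattern is to search both indexes at once and pivot with stats, which avoids subsearch result limits. A minimal sketch, assuming uniqueID carries the same field name in both indexes and using 500 as a placeholder for N:

(index=index1 ...) OR (index=index2 ...)
| stats min(eval(if(index=="index1", _time, null()))) AS time1 min(eval(if(index=="index2", _time, null()))) AS time2 BY uniqueID
| where isnotnull(time1) AND isnotnull(time2)
| eval diff_ms = abs(time2 - time1) * 1000
| where diff_ms > 500
| table uniqueID time1 time2 diff_ms

The isnotnull filter keeps only IDs seen in both systems; swap min() for latest() or values() if an ID can occur more than once per index.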
Hi folks, I'd like to know in which config file I can locate the flags from my universal forwarder, I mean flags like SERVICESTARTTYPE=auto and LAUNCHSPLUNK=1. Doc: https://docs.splunk.com/Documentation/Forwarder/9.0.0/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller#:~:text=SET_ADMIN_USER%3D0%20/quiet-,Supported%20commandline%20flags,-Command%2Dline%20flags Thanks in advance,
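As far as I can tell, those are installer-time (msiexec) flags rather than settings persisted verbatim to a .conf file, so checking them after the fact means inspecting what they configured. For SERVICESTARTTYPE, that is the Windows service start type, which can be read with the built-in service tool (SplunkForwarder is the default UF service name; adjust if yours differs):

sc qc SplunkForwarder

The START_TYPE line in the output (AUTO_START, DEMAND_START, ...) reflects what SERVICESTARTTYPE set.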
Hi, I am doing a search and noticing that I am getting 200% on the fields. I troubleshooted and used these lines at the beginning of my search:

KV_MODE = none
AUTO_KV_JSON = false

However, it instead returns no events whatsoever, even with the time range set to All time. Please help.
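Worth noting: KV_MODE and AUTO_KV_JSON are props.conf settings, not search commands, so putting them at the beginning of a search makes Splunk treat them as field filters (KV_MODE=none matches events with a field named KV_MODE, which is nothing, hence zero results). They belong in a sourcetype stanza instead. A minimal sketch ([my_sourcetype] is a placeholder for your actual sourcetype):

# props.conf on the search head
[my_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false

Fields showing 200% usually means each value is being extracted twice, for example once by automatic key/value extraction and once by an explicit extraction, so disabling one of the two is the usual fix.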
I am learning Splunk the hard way, I think, but here are my questions: if I have been able to have logs forwarded, and if I can generate information from my host (which right now is just the instance that my Splunk is running on), how can the error in the picture exist? It seems to me that if I'm getting any kind of data in, I should be getting all the data in, especially something simple like running processes.
Hello, I wanted to know if there is a definitive rule on how to structure a props.conf. I read the docs and they do not say anything about a preference for where to call what operation. I understand the search-time operation order as Extract -> Report -> FieldAlias -> Eval -> Lookup. My question is: within a stanza, do all the EXTRACTs have to appear at the top, then the REPORTs, then the EVALs? For example:

FIELDALIAS-src_ip = srcip ASNEW src_ip
FIELDALIAS-dest_ip = dstip ASNEW dest_ip
FIELDALIAS-src_port = sport ASNEW src_port
FIELDALIAS-dest_port = dport ASNEW dest_port
FIELDALIAS-authentication_protocol = protocol ASNEW authentication_protocol
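For what it's worth, .conf stanzas are key/value maps, so the physical order of settings inside a stanza has no effect: Splunk applies search-time operations in the fixed precedence above no matter how the lines are arranged. A sketch of a stanza with deliberately mixed ordering that behaves identically to a grouped one (the stanza name and field names are placeholders):

[my_sourcetype]
LOOKUP-geo = my_lookup ip AS src_ip
FIELDALIAS-src_ip = srcip ASNEW src_ip
EXTRACT-latency = Latency:\s+(?<latency>\d+)
EVAL-is_slow = if(latency > 1000, "yes", "no")

Grouping settings by operation type is purely a readability convention, though a consistent convention does make review easier.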
Hi, can someone help with how to track Splunk code in GitLab? Description: we have Splunk deployments using GitLab, and we want to be able to track and monitor every change or new piece of code deployed using GitLab. I'm not sure whether the Splunk GitHub add-on will do this work? Thanks
Hi, I've run into an issue while working with the Splunk REST API, specifically when trying to leverage extracted fields. Within the Splunk app my data lives in, I have the following regular expression as a field extraction for the sendmail QID:

^[^\\]\\n]*\\]:\\s+(?P<QID>[^:]+)

This works as expected in the GUI for myself and users of the application. However, when attempting to leverage the QID field in a REST API call with the following parameters (x-www-form-urlencoded; I'm showing this as a dict since I use Python for my calls), there is no QID field available to me.

POST to services/search/jobs:

{
    "rf": "QID",
    "adhoc_search_level": "verbose",
    "search": "search index=sec_email sourcetype=<mysourcetype> earliest=@d | fields QID, msgid | search msgid=\"<my_message_id>\""
}

I've confirmed that I receive results here, but the QID field is not available. My question is: is there a parameter I am missing to leverage pre-existing field extractions from the Splunk app, or am I going to need to use rex to re-extract? (This is what I am doing now, but it's less than ideal.) Thank you!
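One thing that often bites here: field extractions are app-scoped knowledge objects, and POSTing to /services/search/jobs runs the search outside any app context, so app-scoped extractions never apply. Running the job in the app's namespace via /servicesNS can fix it. A sketch (the host, credentials, owner, and app name my_email_app are placeholders):

import requests

# Run the search job inside the app's namespace so its field
# extractions (e.g. the QID extraction) are in scope.
resp = requests.post(
    "https://splunk.example.com:8089/servicesNS/api_user/my_email_app/search/jobs",
    auth=("api_user", "api_password"),
    data={
        "search": 'search index=sec_email sourcetype=<mysourcetype> earliest=@d | fields QID, msgid',
        "rf": "QID",
        "adhoc_search_level": "verbose",
        "output_mode": "json",
    },
    verify=False,  # lab convenience only; verify certificates in production
)
print(resp.json())

Also check the extraction's sharing level: if it is private to your user, other API identities will not see it even inside the app namespace.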
Guys, can you help me? I need to know the elapsed time between these two fields:

CREATED_TS: 20220816182818.215
CURRENT_TIMESTAMP: 20220816185516

Do you have a tip on how I can do this? Thank you. Clecimar
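A common approach is to parse both strings into epoch time with strptime and subtract. A minimal sketch, assuming the values always follow the patterns shown above (milliseconds only on CREATED_TS):

| eval created_epoch = strptime(CREATED_TS, "%Y%m%d%H%M%S.%3N")
| eval current_epoch = strptime(CURRENT_TIMESTAMP, "%Y%m%d%H%M%S")
| eval elapsed_sec = current_epoch - created_epoch
| eval elapsed_hms = tostring(elapsed_sec, "duration")

tostring with "duration" renders the difference as HH:MM:SS; drop that line if seconds are all you need.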
Hi, I'm wondering if it's possible to get an export of all triggered alerts, including the alert name, alert trigger condition(s)/alert query, and alert severity, as a table (CSV or JSON preferably)? I can access the triggered alerts from Activity > Triggered Alerts, and all configured alerts from Search & Reporting > Alerts, but I have not found a straightforward way to export everything. For the alert trigger condition(s)/query, I'm looking specifically for which index(es), field(s), and field value(s) the alert is monitoring. Thanks in advance!
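One approach is to pull the alert definitions over the REST API from inside a search, which makes the export just another search result. A sketch (the filter is a heuristic for "is an alert" and may need tuning for your setup):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search alert_type!="always" OR alert.track=1
| table title search alert_type alert.severity cron_schedule
| outputcsv alert_inventory.csv

The search column carries the full query, which is where the monitored indexes, fields, and values live. For the fired instances themselves, | rest /servicesNS/-/-/alerts/fired_alerts returns the Triggered Alerts list and can be tabled and exported the same way.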
Hi all, we have over 1000 forwarders installed. We want to know if any UF stops sending data to Splunk because the Splunk service is not running. How can I create a dashboard to check whether a UF has stopped sending data or a client is not connected? Thanks
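A common pattern is to track each forwarder's last connection to the indexing tier from the internal metrics and flag the ones that have gone quiet. A sketch (the 15-minute threshold is a placeholder; field names are as seen in recent versions of metrics.log):

index=_internal sourcetype=splunkd group=tcpin_connections
| eval fwd_host = coalesce(hostname, sourceHostname)
| stats latest(_time) AS last_seen BY fwd_host
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15
| sort - minutes_silent

Pair this with a baseline lookup of all known forwarders, so hosts that disappear entirely (and therefore have no recent events at all) still show up on the dashboard.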
Hello everyone, I currently have Splunk IT Service Manager installed, and I need to monitor the temperature of the CPU and the temperature of the power supply of the server. Can anyone help me enable those options through the app? Thank you very much. Diego.
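Splunk does not read hardware sensors by itself; it needs an OS-level collector feeding it, for example a scripted input wrapping ipmitool. A sketch under that assumption (the app path, sourcetype, index, and interval are placeholders, and ipmitool must be installed on the monitored server):

# inputs.conf in a custom app on the forwarder
[script://./bin/ipmi_temp.sh]
interval = 300
sourcetype = hardware:ipmi
index = infra

# bin/ipmi_temp.sh
#!/bin/sh
ipmitool sensor | grep -i temp

Once the readings are indexed, they can be modeled as KPIs in ITSI over that sourcetype.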
Hi, I need some insights on useful alerts to create for monitoring logs and indexing in general. We index a huge volume of logs daily. What kinds of alerts can be created to monitor this? I need some use cases. Thanks, Mala S
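A staple is alerting on a sudden drop in indexing volume per index, built from the internal metrics. A minimal sketch (the span is a placeholder to tune):

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=1h sum(kb) AS indexed_kb BY series

Set the alert condition on indexed_kb falling below the usual hourly baseline for a critical index. Other common use cases: hosts that stop reporting (| metadata type=hosts index=main, then a threshold on now() - recentTime), indexing latency (compare _indextime with _time), and skipped scheduled searches (index=_internal sourcetype=scheduler status=skipped).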
We have set up the Splunk Mobile app to be deployed via MDM (Intune). Once installed, I check the instance name we are using and then select the SSO option. Our login page comes up, but the screen is grayed out and I can't enter anything. Does anyone have any idea what we are doing wrong? Splunk Enterprise 8.2.2.1 / Secure Gateway
I am developing a query that shows stats for events with the same orderId. There is a flaw, though. When I run the query, I get results with only one event for an orderId, but when I take an orderId associated with only one event and put it in the original query, the result comes up with 2 events. Here are my queries and results:

(index=k8s_main LogType="KafkaMessageProcessedSuccess" message="OrderLineDestinationChangeRequested" Environment="PROD") OR (index=k8s_main container_name=fraud-single-proxy-listener message="Sending a message to kafka topic=order-events-avro*OrderLineDestinationChangeRequested*")
| rename contextMap.orderId AS nefiOrderId OrderNumber AS omsOrderId
| rename contextMap.requestId AS nefiRequestId NordRequestId AS omsRequestId
| rename OrderLineId as omsOrderLineId
| rex field=message "\"orderLineId\": \"(?<nefiOrderLineId>.*?)\", "
| eval orderLineId = coalesce(nefiOrderLineId, omsOrderLineId)
| eval requestId = mvappend(nefiRequestId, omsRequestId)
| eval orderId = coalesce(nefiOrderId, omsOrderId)
| stats dc(_time) AS eventCount values(_time) AS eventTime values(orderLineId) AS orderLineId values(requestId) AS requestId BY orderId
| where eventCount = 1

Second query, with the orderId in the initial search:

(index=k8s_main LogType="KafkaMessageProcessedSuccess" message="OrderLineDestinationChangeRequested" Environment="PROD" 381263531) OR (index=k8s_main container_name=fraud-single-proxy-listener message="Sending a message to kafka topic=order-events-avro*OrderLineDestinationChangeRequested*" 381263531)
| rename contextMap.orderId AS nefiOrderId OrderNumber AS omsOrderId
| rename contextMap.requestId AS nefiRequestId NordRequestId AS omsRequestId
| rename OrderLineId as omsOrderLineId
| rex field=message "\"orderLineId\": \"(?<nefiOrderLineId>.*?)\", "
| eval orderLineId = coalesce(nefiOrderLineId, omsOrderLineId)
| eval requestId = mvappend(nefiRequestId, omsRequestId)
| eval orderId = coalesce(nefiOrderId, omsOrderId)
| stats dc(_time) AS eventCount values(_time) AS eventTime values(orderLineId) AS orderLineId values(requestId) AS requestId BY orderId
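A likely explanation for the flaw: eventCount is computed with dc(_time), so two events that share an identical timestamp collapse to a distinct count of 1 and slip through the where clause. Counting events directly avoids that. A sketch of the changed tail of the pipeline (everything before stats stays as is):

| stats count AS eventCount dc(_time) AS distinctTimes values(_time) AS eventTime values(orderLineId) AS orderLineId values(requestId) AS requestId BY orderId
| where eventCount = 1

Keeping dc(_time) as distinctTimes alongside count also lets you see exactly which orderIds have colliding timestamps.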