All Topics

Hey there, is there a chart somewhere that advises what the average CPU usage for a Splunk forwarder on average hardware should be? For example, what should the CPU average be on a system with a 1.5 GHz processor, 512 MB of RAM, and 5 GB of free disk space, with only the Splunk forwarder installed and no network connection? An answer like "0.5 to 2%" is what I'm after.
My access log:

server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1 HTTP/1.1" 200 350 85

My rex:

(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)

Search query with lookup:

*some query* | rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)" | lookup abc.csv uri_path OUTPUT serviceName apiName

I am using the query above to look up values from a CSV file, but I am not getting any results. The lookup file has the fields below; apiName is the unique name in the CSV that I am trying to link to uri_path, but I am not able to do so. Is there a way to match these and produce results with both uri_path and apiName? Can anyone please help me with this?

serviceName: /google
uri_path: /page1/page1a/633243463476/googlep1
http_method: post
apiName: postusingRRR
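The lookup matching described above can be sanity-checked with a sketch like the one below, assuming the file is uploaded as abc.csv. Lookup matches are exact string comparisons and case-sensitive by default, so a stray space or case difference between the extracted uri_path and the CSV column will produce empty output; the trim and the renamed output field are illustrative additions, not part of the original query:

```spl
*some query*
| rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)"
| eval uri_path=trim(uri_path)
| lookup abc.csv uri_path OUTPUT serviceName AS lookupServiceName, apiName
| table uri_path, lookupServiceName, apiName
```

Running `| inputlookup abc.csv` on its own first confirms what the lookup actually contains and that the header is literally uri_path.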
I'm seeing errors in a search.log related to the loadjob command; artifact replication occasionally fails for a report. There are two loadjob commands used in the scheduled report: Job-1 fails when replication is needed, while Job-2 replicates just fine. I have also switched the order in which the loadjob commands are called and have the same experience. When replication isn't needed, the loadjob for Job-1 does not error out and the report runs as expected. I'm not sure what to look at next. I replaced the IDs for readability.

12-14-2022 17:00:11.945 INFO SearchOperator:loadjob [55258 phase_1] - triggering artifact replication uri=https://127.0.0.1:8089/services/search/jobs/scheduler_<ID>/proxy?output_mode=json, uri_path=/services/search/jobs/scheduler_<ID>/proxy?output_mode=json
12-14-2022 17:00:12.396 ERROR HttpClientRequest [55258 phase_1] - Caught exception while parsing HTTP reply: String value too long. valueSize=524552, maxValueSize=524288
12-14-2022 17:00:12.396 ERROR SearchOperator:loadjob [55258 phase_1] - error accessing https://127.0.0.1:8089/services/search/jobs/scheduler_<ID>/proxy?output_mode=json, statusCode=502, description=OK
Hi friends, I'm configuring the mpstat command to get each CPU core's idle value. I have configured the below in the bin folder:

cpucore_mpstat.sh:
mpstat -P ALL

inputs.conf:
# This script will collect cpu utilization per core from mpstat command
[script://./bin/cpucore_mpstat.sh]
disabled = false
interval = 120
source = server
sourcetype = cpucore_mpstat
index = pg_idx_whse_prod_events
_meta = entity_type::NIX service_name::WHSE environment::PROD

I can see the events coming in, and I want the per-core idle column displayed in a dashboard. Kindly help on how to achieve this.
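Since mpstat output is tabular, one hedged way to chart the idle column per core is multikv, which splits a table-style event into one result per row. The exact field name Splunk derives from the %idle header varies with the mpstat version and field cleaning (check with `| fieldsummary`); pct_idle below is an assumption, as is the 5-minute span:

```spl
index=pg_idx_whse_prod_events sourcetype=cpucore_mpstat
| multikv
| search CPU!=all
| timechart span=5m avg(pct_idle) by CPU
```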
Hello guys, is there any way I could remove duplicate events that have the same timestamp using the search below?

index=* (EventCode=4624 OR EventCode=4625)
| stats count(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed, count(eval(match(Keywords,"Audit Success"))) as Success, earliest(_time) as FirstAttempt, latest(_time) as LatestAttempt by Account_Name
| where Attempts>=5 AND Success>=1 AND Failed>=2
| eval FirstAttempt=strftime(FirstAttempt,"%x %X")
| eval LatestAttempt=strftime(LatestAttempt,"%x %X")

The output columns are: Account_Name, Attempts, Failed, Success, FirstAttempt, LatestAttempt.
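One possible reading of "duplicate events with the same timestamp" is multiple identical events per account and second; a hedged sketch that drops those before aggregating (the dedup field list is an assumption — adjust it to whatever actually defines a duplicate in your data):

```spl
index=* (EventCode=4624 OR EventCode=4625)
| dedup _time, Account_Name, EventCode
| stats count(Keywords) as Attempts,
    count(eval(match(Keywords,"Audit Failure"))) as Failed,
    count(eval(match(Keywords,"Audit Success"))) as Success,
    earliest(_time) as FirstAttempt, latest(_time) as LatestAttempt
    by Account_Name
| where Attempts>=5 AND Success>=1 AND Failed>=2
| eval FirstAttempt=strftime(FirstAttempt,"%x %X"), LatestAttempt=strftime(LatestAttempt,"%x %X")
```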
Splunk is triggering the System and WMI Provider processes and causing a lot of network traffic at startup. Disabling the Splunk service improved performance drastically: when Splunk is enabled, the CPU sits at 100% for 30 minutes at boot; after disabling Splunk, the CPU was at 100% for only 6 minutes and then dropped to normal usage. How can we fix this issue?
Is there a filter or can I create one so that I can quickly see all Incidents owned by me?
Hello! Last week (12/8/2022) my license usage went through the roof: one sourcetype used 24 GB. On the other hand, when looking at that sourcetype, there were no events pulled into Splunk that day (no events since 9/16). What is the cause of this issue? How can I see why our license usage went up? Also, the events pulled in that day (12/8) were the same number of events we get on an average day, yet our license usage was at 24 GB. Thank you.
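The per-sourcetype license consumption for that day can be read from the license usage log on the license master; the index, source, and field names below (b = bytes, st = sourcetype, h = host) are Splunk defaults. Note that license usage is charged when data is indexed, not by event timestamp, so data with old timestamps indexed on 12/8 would show up here without appearing as events dated 12/8:

```spl
index=_internal source=*license_usage.log type=Usage earliest=12/08/2022:00:00:00 latest=12/09/2022:00:00:00
| stats sum(b) as bytes by st, h
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB
```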
Mission Control: Can you send Incidents to another ticketing system such as ServiceNow or Jira?
Hello there, my company's web server is generating too many logs and it is overwhelming the system. I was wondering how Splunk can help me manage the logs that are being generated, so that I ingest the least volume while keeping the most important logs.
I'm trying to use where(isnotnull(mvfind(mvfield,field))) to see which records are part of a list. The fields are all strings, and some of them have parentheticals at the end. I noticed that mvfind does not seem to capture these fields. To illustrate my point, try the following search.

| makeresults count=10
| streamstats count as n
| eval n=n-1
| eval n=case(n<3,"Test (".n.")",n<6,"Test ".n,n<9,"(".n.")",1=1,n)
| eventstats list(n) as mv
| eval index=mvfind(mv,n)

When you do, you'll see that items 3-9 are captured, but 0-2 are not, even though those very values of n were used to generate the mv field. I currently have a workaround: use rex commands to substitute different strings for the parentheses, run my mvfind, and then use rex to substitute them back, but it feels a little ridiculous. Does anyone know why mvfind doesn't work here, or a cleaner way to fix it?
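A likely explanation: mvfind treats its second argument as a regular expression, so in "Test (0)" the parentheses form a capture group, and the pattern actually matches the string "Test 0" rather than "Test (0)". A hedged sketch that escapes regex metacharacters before the lookup (the metacharacter list and backslash quoting may need tweaking in your environment):

```spl
| makeresults count=10
| streamstats count as n
| eval n=n-1
| eval n=case(n<3,"Test (".n.")",n<6,"Test ".n,n<9,"(".n.")",1=1,n)
| eventstats list(n) as mv
| eval n_esc=replace(n, "([\(\)\[\]\.\*\+\?])", "\\\1")
| eval index=mvfind(mv, "^".n_esc."$")
```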
Hi, I am using this query to display the max value per host for Disk Read Time. I also need the timestamp of that max value. The search can be over 24 hours, a week, or a month, but the timestamp should be exactly when the max value occurred.

index=perfmon source="Perfmon:LogicalDisk" host="abc" object=LogicalDisk
| search NOT(instance=_Total) counter="% Disk Read Time"
| eval Idx=instance
| stats max(Value) by Idx, host
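One common pattern for carrying the timestamp of the maximum along: compute the max with eventstats, keep only the rows at the max, then collapse. A sketch built on the query above (the timestamp format is an arbitrary choice):

```spl
index=perfmon source="Perfmon:LogicalDisk" host="abc" object=LogicalDisk counter="% Disk Read Time" NOT instance=_Total
| eval Idx=instance
| eventstats max(Value) as maxValue by Idx, host
| where Value=maxValue
| stats latest(_time) as maxTime, max(Value) as maxValue by Idx, host
| eval maxTime=strftime(maxTime, "%Y-%m-%d %H:%M:%S")
```

If several events tie for the max, latest(_time) keeps the most recent occurrence; earliest(_time) would keep the first.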
I have a Splunk dashboard with a dropdown of client names: A, B, C, ALL. There are logs for each client, and I need to search for and print the count of the selected client from the logs. I am able to do that if a user selects A, B, or C, but there is no such client as ALL; if a user selects ALL, I want to take all logs for A, B, and C, sum them, and show the total on the dashboard. A log line looks like:

Client Map Details : {A=123, B=245, C=456}

If a user selects A, we show 123 and plot it on the graph.
If a user selects B, we show 245 and plot it on the graph.
If a user selects C, we show 456 and plot it on the graph.

Query for the above:

index=temp sourcetype="xyz" "Client Map Details : " "A"
| rex field=_raw "A=(?<count>\d+)"
| table _time count

But how can I change the query based on the user input "ALL" and run another Splunk query that matches all such lines, iterates over the map, sums each value (123+245+456), and gives a value to plot? How do we change a Splunk query based on user input from a dashboard?
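One hedged approach, assuming the dropdown writes its value into a token named client_tok and the All choice has the literal value ALL: extract every client's count from each line, then pick or sum inside the same query, so no second query is needed:

```spl
index=temp sourcetype="xyz" "Client Map Details : "
| rex field=_raw "A=(?<A>\d+), B=(?<B>\d+), C=(?<C>\d+)"
| eval count=case("$client_tok$"=="ALL", A+B+C,
                  "$client_tok$"=="A", A,
                  "$client_tok$"=="B", B,
                  "$client_tok$"=="C", C)
| table _time, count
```

The token is substituted as literal text before the search runs, so the case() branches compare against whatever string the dropdown selected.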
I am looking for information for a project; I need to set up a non-production environment. I am looking for information on licenses that would allow me to do that. Sadly, it seems that the test/dev and developer licenses cannot satisfy the need. Any suggestions?
Hi all, I am trying to export events in JSON format. I am able to do it, and I get events like the ones below.

{"preview":false,"result":{"_raw":"{\"tomLogs\":[{\"component\":\"tom\"}]}}}
{"preview":false,"result":{"_raw":"{\"tomLogs\":[{\"component\":\"tom\"}]}}}
{"preview":false,"result":{"_raw":"{\"tomLogs\":[{\"component\":\"tom\"}]}}}

But my expectation is to have these events in an array, comma-separated, in the format below.

[
{"preview":false,"result":{"_raw":"{\"tomLogs\":[{\"component\":\"tom\"}]}}},
{"preview":false,"result":{"_raw":"{\"tomLogs\":[{\"component\":\"tom\"}]}}},
{"preview":false,"result":{"_raw":"{\"tomLogs\":[{\"component\":\"tom\"}]}}}
]

Please provide some references that can help me export events in the expected format.
Hi team, how do I implement base search functionality to improve the loading time of a Splunk dashboard? I have multiple panels with many server types; each panel covers one type of server. Every time I change the time filter, it takes a long time to load the panels with each server's traffic data. How can I improve this loading time by implementing a base search? Please advise.
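In Simple XML, a base search is declared once with an id and referenced by the panels, which post-process its results, so the expensive search runs only once per time-range change. A minimal sketch with assumed index, field, and token names; note that post-process searches should only filter or further aggregate fields the base search already returns, so the base search should end in a transforming command like stats or table:

```xml
<dashboard>
  <search id="base">
    <query>index=network sourcetype=traffic | stats count by server_type, server</query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base">
          <query>search server_type="web" | stats sum(count) as total by server</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
```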
I got a free trial of the cloud platform on 12th Dec. Now that I am trying to access the account, it says my account has been blocked out and to please try again later or contact the administrator. The same thing happened when I tried with different email addresses. Could you please help me with this?
I want to cut the data that goes up to the fourth "|" symbol. How can I do it with | rex? Example data:

2022-12-15 15:27:38.073 - INFO | TID = 1878892572955613 | reactor-http-epoll-36 | x.x.x.x.xxxxx.xxx.xxxClient | Response from url=https://xxxxxxxx:8081/xxxx xxxx xxxxxxxx
2022-12-15 15:27:38.082 - INFO | TID = | http-xxx-8080-xxxx-100 | r.n.m.d.d.l.i.util.InfoLoggingUtil | xxxMethod xxxxxx xxxxx {Parsed: bytes=276 | xxxxxxxxMethod.xxxxxxxxMethodData="eyJ0aHJlZURTU2VydmVyVHJhbnNJRCI6ImY3YzIwZTI4LTAzMTctNDFmYS1hZTU5LTkyMzdkZmY4YmNjZCIsInRocmVlRFNNZXRob2ROb3RpZmljYXRpb25VUkwiOiJodHRwczovL3BheW1lbnRjYXJkLnlvb21vbmV5LnJ1OjQ0My8zZHMvZmluZ2VycHJpbnQvbm90aWZpY2F0aW9uLzI3OS9ZR3JmQ21pUS1MdUg1cTFHX2xQTzNLNGFHTzhaLi4wMDIuMjAyMjEyIn0=" | xxxxxxxMethod.param=""}

I want:

Response from url=https://xxxxxxxx:8081/xxxx xxxx xxxxxxxx
xxxxxxxxMethod.xxxxxxxxMethodData="eyJ0aHJlZURTU2VydmVyVHJhbnNJRCI6ImY3YzIwZTI4LTAzMTctNDFmYS1hZTU5LTkyMzdkZmY4YmNjZCIsInRocmVlRFNNZXRob2ROb3RpZmljYXRpb25VUkwiOiJodHRwczovL3BheW1lbnRjYXJkLnlvb21vbmV5LnJ1OjQ0My8zZHMvZmluZ2VycHJpbnQvbm90aWZpY2F0aW9uLzI3OS9ZR3JmQ21pUS1MdUg1cTFHX2xQTzNLNGFHTzhaLi4wMDIuMjAyMjEyIn0=" | xxxxxxxMethod.param=""}
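A hedged rex sketch for the first example line: skip four pipe-delimited segments, then capture everything after them. For lines where the wanted text starts after a different number of "|" characters (as in the second desired output above), the {4} repetition count would need adjusting:

```spl
| rex field=_raw "^(?:[^|]*\|){4}\s*(?<rest>.+)"
| table rest
```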
Hi Splunk Community, I am interested in parsing Splunk searches and I am hoping that somebody here can point me to an existing grammar of the search language that can be used with ANTLR4.  
I am facing an issue when I connect AWS Trusted Advisor to the Splunk extension (AWS Trusted Advisor aggregator). I am adding AWS credentials as input, but the Splunk AWS Trusted Advisor aggregator is not giving any output. Is it possible for Splunk to integrate with AWS Trusted Advisor?