All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, I saw that there have been previous posts about this topic, but none resolves my issue. I have created an IAM role with CloudWatchFullAccess and assigned it to the EC2 instance Splunk is running on. The role was auto-discovered, and when setting up an input for CloudWatch I was able to use that role. So far everything is peachy, but I get no metrics in the configured index. Am I missing a policy for the IAM role?

Kind regards, Mike
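A hedged troubleshooting sketch: the Splunk Add-on for AWS logs its modular input activity to the _internal index, so a search along these lines (the source wildcard is an assumption about the add-on's log file names) may surface permission, throttling, or configuration errors from the CloudWatch input:

index=_internal source=*aws_cloudwatch* (ERROR OR "AccessDenied" OR "Throttling")
| table _time host source _raw

Also worth checking, as a possibility rather than a known cause: an empty index can simply mean the chosen metric namespace, dimensions, or polling interval returns no datapoints, and some input features may need describe/tag permissions beyond CloudWatchFullAccess.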
Hi Team, I am using Splunk Enterprise with a trial account, which is valid for 60 days. How can I upgrade the trial account to something with a minimal data ingestion limit (1 GB/day) that includes the search feature and search API access? Please suggest what upgrade options are available for a trial user.
Hi! I want to turn the log below into the table below. What should I do in SPL?

[Log ex.]
[2023.01.23] TYPE : UPDATE, USER : master, [ ID : jenny, TYPE- AUTH :  AB, O, B, A]

[table]
USER    ID      TYPE-AUTH
master  jenny   AB
                O
                B
                A

I wrote the SPL below, but the dashboard comes out as follows. Help me please... T.T

[SPL]
| rex field=TYPE-AUTH max_match=0 "(?P<type_auth>\w+)"

[Result]
USER    ID      TYPE-AUTH
master  jenny   AB
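A minimal sketch, assuming the raw events consistently follow the layout above (the capture names are illustrative, and makemv leaves a leading space on some values that you may want to trim):

| rex "USER : (?<USER>\w+), \[ ID : (?<ID>\w+), TYPE- AUTH :\s*(?<TYPE_AUTH>[^\]]+)\]"
| makemv delim="," TYPE_AUTH
| table USER ID TYPE_AUTH

The multivalue field renders one value per row in a table panel, which is close to the layout shown under [table].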
Hi, I have to rearrange the columns below into this order: 31-60 Days, 61-90 Days, 91-120 Days, 151-180 Days, Over 180 Days, Total.

Query:
| inputlookup ACRONYM_Updated.csv
| stats count by ACRONYM Aging
| xyseries ACRONYM Aging count
| addtotals
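A minimal sketch, assuming the pivoted column names come out of xyseries exactly as listed (quoted because they contain spaces): add an explicit table command at the end to force the column order.

| inputlookup ACRONYM_Updated.csv
| stats count by ACRONYM Aging
| xyseries ACRONYM Aging count
| addtotals
| table ACRONYM "31-60 Days" "61-90 Days" "91-120 Days" "151-180 Days" "Over 180 Days" Total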
Hi there - trying to get a foreach statement to apply a conditional. Essentially, in the eval statement I tried a variety of if() options, like IN statements (or, alternatively but less preferably, a long OR to replace the IN statement), but frankly I'm not having any luck.

foreach Perc_In* [ eval Out_Of_Norm_For<<MATCHSTR>>=if(IN(<<MATCHSTR>>,"_Range_4","_RANGE_4_to_6"),"Consider","Ignore") ]

If <<MATCHSTR>> falls in the set of values "_Range_4" or "_RANGE_4to_6", then the new field Out_Of_Norm_For<<MATCHSTR>> should take the value Consider; otherwise it takes the value Ignore.
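A minimal sketch, assuming the intent is to test the matched part of the field name rather than the field's value: quote <<MATCHSTR>> so eval sees a string literal instead of a field reference, and use the in() eval function inside if(). The value list is copied from the post.

| foreach Perc_In*
    [ eval Out_Of_Norm_For<<MATCHSTR>>=if(in("<<MATCHSTR>>", "_Range_4", "_RANGE_4_to_6"), "Consider", "Ignore") ]

match("<<MATCHSTR>>", "_Range_4|_RANGE_4_to_6") would behave similarly if the values should be treated as patterns rather than exact strings.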
I have a query

index="main" app="student-api" "tags.path"=/enroll "response"=success

which also gives a trace_id, and then I have

index="main" app="student-api"

which gives a student_id. I want to get the latest timestamp of enrollment (by joining the results) for each student_id (stored in a CSV). The output would look like:

student_id | latest timestamp of enrollment

Please suggest the steps to follow. For the join, I tried

index="main" app="student-api" tags.student_id | join type=inner trace_id [| search index="main" app="student-api" "tags.path"="/enroll" "response"=success]

but it's not yielding the result. Also, how do I inputlookup the student_id from the CSV? Appreciate your help with this. Thanks @ITWhisperer @gcusello
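A minimal sketch that avoids the join entirely, assuming the enrollment events carry tags.student_id themselves and the CSV is a lookup named students.csv with a student_id column (both the file name and column name are assumptions):

index="main" app="student-api" "tags.path"="/enroll" "response"=success
    [| inputlookup students.csv | fields student_id | rename student_id as "tags.student_id" ]
| stats latest(_time) as latest_enrollment by tags.student_id
| eval latest_enrollment=strftime(latest_enrollment, "%Y-%m-%d %H:%M:%S")

The subsearch turns the CSV rows into an OR of tags.student_id filters. If the enrollment events only carry trace_id, a join would still be needed, but the stats latest() part stays the same.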
I need to create a correlation search that triggers an alert when it finds a match between the IPs from: | inputlookup ip_spywarelist.csv and an index (e.g. index=FW). Any step-by-step guidance?
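A hedged sketch of the search portion, assuming the lookup column is named ip and the firewall events expose a dest_ip field (both names are assumptions to adjust to your data):

index=FW
    [| inputlookup ip_spywarelist.csv | fields ip | rename ip as dest_ip ]
| stats count earliest(_time) as first_seen latest(_time) as last_seen by dest_ip

Saved on a schedule with an alert action (or, if this is Splunk Enterprise Security, saved as a correlation search under Content Management with a Notable action), it fires whenever traffic matches any IP generated by the subsearch.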
I have written a Splunk query to extract timeout logs for my functions and am creating a scheduled alert with an email alert action. For the email subject, I want the function name to appear in the subject line. I have tried using $result.fieldname$ and $job.label$ in the subject, but neither gives the desired output. For example, if my function test_func fails, I want the subject to look like 'Job Failure for test_func'. For this, I set the Subject field in the alert to 'Job Failure for $result.function_name$', but it just sends an email alert with the subject 'Job Failure for '. I have also tried other tokens like '$job.label$', but I couldn't get the desired output. Can somebody please pitch in?
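A hedged sketch of the usual fix: $result.fieldname$ is filled from the first row of the alert's final result set, so the field has to survive, by that exact name, to the end of the search. If the search ends in a raw event list or a stats that drops function_name, the token expands to nothing. Assuming function_name is extracted earlier in the search, ending it along these lines keeps the token usable:

... | stats count latest(_time) as last_seen by function_name
| table function_name count last_seen

with the subject set to: Job Failure for $result.function_name$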
I have set up the Universal Forwarder locally on my machine using this guide: https://splunk.paloaltonetworks.com/universal-forwarder.html

/opt/splunkforwarder/etc/system/local/inputs.conf

[monitor:///var/log/udp514.log]
sourcetype = pan:log
disabled = 0

/opt/splunkforwarder/etc/system/local/outputs.conf

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = andrea-xps-15-7590:9997
disabled = false

[tcpout-server://andrea-xps-15-7590:9997]

(the local IP resolves to 'andrea-xps-15-7590', same as for the web UI)

I have checked that syslog actually writes log events into the file /var/log/udp514.log, so I am sure the logs are there. Port 9997 has been enabled in the Splunk UI (Forwarding and receiving settings). However, when I do a search for source="/var/log/udp514.log", nothing shows up. Splunk also throws this message:

'The TCP output processor has paused the data flow. Forwarding to host_dest=andrea-xps-15-7590 inside output group default-autolb-group from host_src=andrea-XPS-15-7590 has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.'

I understand the data has been forwarded from host_src, but for some reason the indexer does not ingest it, so it gets blocked? Any idea where the problem is?
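A hedged check from the indexer's search bar (assuming the _internal index is searchable there): splunkd's metrics log shows which queues are blocked, which usually distinguishes "receiver not listening" from "indexing pipeline stuck":

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host name

If nothing is listening at all, index=_internal source=*splunkd.log* TcpInputProc (ERROR OR WARN) on the receiving instance may show why port 9997 isn't accepting connections. Both searches are generic troubleshooting patterns, not specific to the Palo Alto setup.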
I am feeling puzzled. I am trying to take a date, convert it to epoch time, subtract a number of seconds from that time, and then reconstruct it back to a human-readable format.

I have a field called "eventTime" that comes in looking like this: 2023-02-20T22:33:00.000Z

I am converting it to epoch time like so:
| eval eventTime=strptime(eventTime,"%Y-%m-%dT%H:%M:%S.%3QZ")

I have then converted to the server time, like so:
| eval eventTime=strftime(eventTime, "%+")

After those steps, the value in "eventTime" looks like so: Mon Feb 20 22:33:03 MST 2023

I am then attempting to convert to epoch like so:
| eval event_etime=strptime(eventTime, "%a %b %e %H:%M:%S %Z %Y")

This works, and converts it to this value: 1677034564.000000

Everything works as I would expect thus far... it is when I attempt to do any sort of math that the value turns to null. So, with this statement:
| eval event_etime=tonumber(event_etime)-25200

I am attempting to subtract 25,200 seconds off the time... but when I do this step, the value goes null. I have tried with and without the "tonumber" function... it doesn't do a thing. Any ideas on how I can subtract 25200 from the epoch time and retain a value that is not null?
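A minimal sketch of a simpler path, assuming eventTime always arrives in that ISO 8601 form: keep the value numeric after the first strptime, do the subtraction, and only call strftime once at the end. The round trip through "%+" and then "%Z" is the likely failure point, since parsing timezone abbreviations with %Z is platform-dependent, though that is an assumption.

| eval event_epoch=strptime(eventTime, "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval adjusted_epoch=event_epoch - 25200
| eval adjusted_time=strftime(adjusted_epoch, "%a %b %e %H:%M:%S %Y")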
I am running the following query:

index="ABCi" sourcetype=DEF
| timechart span=1h count
| fields - _time
| streamstats current=t diff(count) as count_diff
| stats avg(count_diff)

BUT, I am receiving the following error:

Error in 'streamstats' command: The argument 'diff(count)' is invalid.

Can you please help? Thanks
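A hedged sketch: streamstats does not support a diff() function, so one option is the delta command, which subtracts the previous value of a field from the current one:

index="ABCi" sourcetype=DEF
| timechart span=1h count
| delta count as count_diff
| stats avg(count_diff)

streamstats window=2 range(count) as count_diff would give a similar (absolute) difference if a streamstats-based approach is preferred.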
Hi Splunkers, I have a GC log like below:

[716920.165s][info][gc] GC(27612) Concurrent reset 24.051ms
[716909.883s][info][gc] GC(27611) Concurrent update references 3124.593ms
[716909.885s][info][gc] GC(27611) Pause Final Update Refs 1.336ms
[716909.885s][info][gc] GC(27611) Concurrent cleanup 79178M->58868M(153600M) 0.143ms
[716906.314s][info][gc] GC(27611) Pause Final Mark 2121.376ms
[716906.315s][info][gc] GC(27611) Concurrent cleanup 71900M->71709M(153600M) 0.240ms
[716906.757s][info][gc] GC(27611) Concurrent evacuation 441.920ms
[716906.758s][info][gc] GC(27611) Pause Init Update Refs 0.126ms

I'm trying to get statistics on the total time spent across all these fields (the values in ms at the end of each line). I mean, sum all events in ms and draw a chart or table with the total value from the last 4 hours, for instance:

19.00 - 245000ms
20.00 - 344000ms
21.00 - 345500ms
22.00 - 452000ms

I did manage to extract the time needed in ms from all fields, but when I use a query like:

timechart span=1h sum(eval(Concurrent_reset+Concurrent_Update+Pause_Final_Mark+Concurrent_cleanup+Concurrant_evacuation+Pause_Init_Update)) as total

I just receive results for the 19.00-20.00 timespan. What am I doing wrong here?

regards, Sz
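A hedged guess at the cause: each event carries only one of those extracted fields, and an eval sum is null as soon as any operand is missing, so sum(eval(...)) contributes nothing for most events and only some buckets end up with a value. A minimal sketch, reusing the field names from the post, collapses the per-event value first:

| eval total_ms=coalesce(Concurrent_reset, Concurrent_Update, Pause_Final_Mark, Concurrent_cleanup, Concurrant_evacuation, Pause_Init_Update)
| timechart span=1h sum(total_ms) as total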
Our initial install of the MS Windows AD Objects App was successful and the build searches were successful. However, recently the AD Group rebuild search is failing on the last line of the search:

| outputlookup AD_Obj_Group

While the search seems to complete, it completes with the following error:

Error in 'outputlookup' command: Could not write to collection '__AD_Obj_Group_LDAP_list_kv': An error occurred while saving to the KV Store. Look at search.log for more information.

In looking at the search job, I am also seeing the following error:

ERROR FastTyper [30728 localCollectorThread] - caught exception in eventtyper, search='(index=index1 OR index2 OR index=index3source="*:System" "Installation Failure")' in event typer: err='Comparator '=' has an invalid term on the left hand side: index=index=index3source"

For the above error I know where the issue is in the search, but I do not know where this configuration lives within Splunk Cloud so that I can correct it. Thoughts on how to correct the outputlookup error and/or the search term? Thanks. Jimmy
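For the FastTyper error, a hedged way to locate which event type carries that malformed search string (in Splunk Cloud the edit would then be made under Settings > Event types or in the owning app's UI, since there is no direct access to eventtypes.conf): the REST endpoint below is standard, but whether your role can read it is an assumption.

| rest /servicesNS/-/-/saved/eventtypes
| search search="*Installation Failure*"
| table title eai:acl.app eai:acl.owner search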
Hi, For field extractions in a clustered environment do you have to use the props.conf method or can you use the field extractor GUI on the search head?   Thanks,   Joe
I've read a few posts here related to this topic but can't find a workable solution.

I have 200+ devices, and I want to forecast Write Response Time for each device out 30 days. My initial query gathers the data from a metrics index into a lookup table. So I've tried this, based on another similar post, but I don't get any data for the predict command:

| inputlookup eg.csv
| dedupe device_name
| map maxsearches=5 search=" | inputlookup eg.csv | search device=$device_name$ | timechart span=1d avg(WriteRT) as avgWriteRT | predict avgWriteRT future_timespan=30 | eval device=$device_name$"
| table _time, WriteRT, "prediction(WriteRT)", device

I suspect it has something to do with 'search device=$device_name$' but I'm unsure what that might be. Running the inputlookup up to the predict command does return results, minus the device_name.
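A minimal sketch of the adjustments that usually matter here, assuming the CSV columns are device_name and WriteRT as in the post: the command is spelled dedup, the filter inside map should use the CSV column (device_name) with the token quoted, maxsearches needs to cover all 200+ devices, and the final table has to reference the fields predict actually emits (it names its output after the input field, here avgWriteRT). Quoting inside the map string may need tweaking for your version.

| inputlookup eg.csv
| dedup device_name
| map maxsearches=250 search="| inputlookup eg.csv | search device_name=\"$device_name$\" | timechart span=1d avg(WriteRT) as avgWriteRT | predict avgWriteRT future_timespan=30 | eval device=\"$device_name$\""
| table _time avgWriteRT "prediction(avgWriteRT)" device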
Is it possible to log user logins and things like creation of new accounts and elevation of privileges for users in Splunk On-Call? We'd like to be able to audit these sorts of changes if possible.
Hello Splunkers, I have the following search. The search works fine when run on its own, but when it's saved as a panel in a dashboard it complains with "waiting for input", because some of the field values for state have $ in them ("5-drained$"). Is there any other way to change the search to ignore it?

index=abc
| chart latest(state_sinfo) as state by node
| stats count by state
| eval {state}=count
| fields - count
| replace allocated WITH "1-allocated" IN state
| replace "allocated*" WITH "1-allocated*" IN state
| replace "allocated$" WITH "1-allocated$" IN state
| replace "completing" WITH "1-completing" IN state
| replace "planned" WITH "1-planned" IN state
| replace idle WITH "2-idle" IN state
| replace "idle*" WITH "2-idle*" IN state
| replace maint WITH "3-maint" IN state
| replace reserved WITH "4-reserved" IN state
| replace down WITH "5-down" IN state
| replace "down*" WITH "5-down*" IN state
| replace "down$" WITH "5-down$" IN state
| replace "drained*" WITH "5-drained*" IN state
| replace "drained$" WITH "5-drained$" IN state
| replace "drained" WITH "5-drained" IN state
| replace "draining" WITH "5-draining" IN state
| replace "draining@" WITH "5-draining@" IN state
| replace "reboot" WITH "5-reboot" IN state
| replace "reboot^" WITH "5-reboot^" IN state
| sort +state

Thanks in advance
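A hedged workaround: in dashboard source, a literal dollar sign has to be doubled ($$) so Simple XML doesn't read it as a token, which is why the panel sits at "waiting for input". Only the $ characters change; the search logic stays the same, for example:

| replace "allocated$$" WITH "1-allocated$$" IN state
| replace "down$$" WITH "5-down$$" IN state
| replace "drained$$" WITH "5-drained$$" IN state

The same doubling applies to every other replace line that contains a $.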
The problem: My search head is populating with an audit lookup error after upgrading from 9.0.0 to 9.0.2.

What I've found: Looking into the Windows cert MMC on my Splunk server, I saw two certs: the self-signed root CA from Splunk, and a cert named SplunkServerDefaultCert below it that is expired. I'm assuming this expired cert is causing the issue and not the actual upgrade itself. Next, I checked my KV Store status; it's reading "failed". Then I checked web.conf: enableSplunkWebSSL = true, there's a password populated in sslPassword, and I ensured privateKeyPath/serverCert/sslRootCAPath had the files in each location, as well as checked the expiration dates for each one. The PEM for serverCert is indeed expired.

What I've done so far: I renamed the server.pem file to server.pem.back, restarted Splunk, and hoped a new cert would be generated. Didn't work. All that did was prevent the web interface from working. Then I went into openssl.cnf and inserted "extendedKeyUsage = serverAuth, clientAuth" in the [v3_req] settings and uncommented "req_extensions = v3_req" in [req]. I moved on to openssl to generate a new server cert: created and signed the new server CSR, verified it, and replaced the old server cert with the new server PEM. Still didn't work. Found $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key, renamed it, restarted Splunk, found that a new key was generated, and my KV Store status still reads as "failed".

Going forward: Not sure what else I can do to fix this. Given I backed up everything, I restored it all back to square one with all the original certs and keys except openssl.cnf, where I left the changes I made as stated earlier. This is my first time working with certs and I'm not too savvy with any of it, but a lot of the things I did above have come from other questions asked on this community. I think one place I may have made a mistake was signing the server.csr I created: I signed it with the new private key that was created along with it, not the key that is currently annotated in web.conf. I don't know if that makes a difference, but I can't think of any other reason why the new server.pem didn't work.

For reference: Jeremy describes my exact issue in the below post; however, I do not have the password to the original Splunk cert in the MMC, so I cannot recreate it as he did.

Windows upgrade from 8.1.1 to 9.0: Why does it fai... - Splunk Community

Additionally, the above case is the exact issue I am having, down to the error codes.
Hi, I have an index=random_index which contains JSON data for a URL's HTTP status code, like {'availability':200,application:'random_name'}. The index gets its input from an RPA bot, which sends to the Splunk HTTP Event Collector endpoint every hour.

Example search query:

index=random_index earliest=-24h latest=now
| search availability=200
| lookup Application_details.csv application OUTPUT Service,ServiceOffering,AssignmentGroup,Priority
| stats count as avaibility_count
| eval availability_percentage=(avaibility_count/24)*100
| search availability_percentage < 95
| table availability_percentage,Service,ServiceOffering,AssignmentGroup,Priority
| appendpipe [
    | stats count
    | where count=0
    | appendcols [| eval availability_percentage=0, Service=random_service, AssignmentGroup=random_group etc
    | table availability_percentage, Service, ServiceOffering, AssignmentGroup, Priority ]]
| dedup availability_percentage, Service, ServiceOffering, AssignmentGroup, Priority
| table availability_percentage, Service, ServiceOffering, AssignmentGroup, Priority

If the percentage is less than 95%, we trigger an email and create a ServiceNow incident from the row returned by the Splunk search. But in case the index didn't receive the data each hour due to some error, how do I check for that and still return a dummy result only when no results are returned, while not returning the dummy result from the appendpipe section in the case where availability is less than 95%?
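A hedged sketch of one way to separate "no data arrived" from "availability below 95%", assuming a single application per index and leaving the lookup/static incident fields out for brevity: count all events and successful events in one stats pass, then branch on the totals instead of appending a dummy row after the filter.

index=random_index earliest=-24h latest=now
| stats count as total_events count(eval(availability=200)) as availability_count
| eval availability_percentage=(availability_count/24)*100
| where availability_percentage < 95 OR total_events=0
| eval data_status=if(total_events=0, "no data received from RPA bot", "low availability")

Because this stats always returns exactly one row, the dummy/appendpipe collision goes away: the single row either represents genuine low availability or is flagged as "no data received", and it never appears alongside another row.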