All Posts


Try without the rounding:

| timechart span=1m avg(ResponseTime) by API_Name
Hi All, I have a time field containing a time range in this format in the output of one Splunk query:

TeamWorkTimings 09:00:00-18:00:00

I want the values stored in two fields, like:

TeamStart 09:00:00
TeamEnd 18:00:00

How do I achieve this using a regex or concat expression in Splunk? Please suggest.
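A minimal rex sketch for this, assuming the combined range sits in a field named TeamWorkTimings as shown above:

| rex field=TeamWorkTimings "^(?<TeamStart>\d{2}:\d{2}:\d{2})-(?<TeamEnd>\d{2}:\d{2}:\d{2})$"

This captures the part before the hyphen into TeamStart and the part after it into TeamEnd.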
Hi Everyone, for some reason I'm getting a differently formatted CSV file when I download it versus the report generated by the scheduled report functionality.

- When I download the file from the Splunk search option I get something like:
{"timestamp: 2024-04-02T22:42:19.655Z sequence: 735 blablaclasname: com.rr.jj.eee.rrr anotherblablaclasnameName: com.rr.rr.rrrr.rrr level: ERRROR exceptionMessage: blablabc .... }

- When I receive the file by email, using the same query, I get something like:
{"timestamp: 2024-04-02T22:42:19.655Z\nsequence: 735\nblablaclasname: com.rr.jj.eee.rrr\nanotherblablaclasnameName: com.rr.rr.rrrr.rrr\nlevel: ERRROR\n\nexceptionMessage: blablabc\n....}

In the *.conf file I am seeing:
LINE_BREAKER = \}(\,?[\r\n]+)\{?

Regards
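If the difference really comes from embedded newlines being escaped as \n in the emailed CSV, one possible workaround (a sketch only, not a confirmed fix) is to flatten the newlines in the scheduled search before the export:

| eval _raw=replace(_raw, "[\r\n]+", " ")

That normalizes the raw text so the downloaded and emailed CSVs should render the same way.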
It worked, thank you so much! But I need one more bit of help. I have another dashboard with a dropdown and two line graphs showing Response-Time and Counts. The base search used for both graphs is exactly the same, but the chain searches are slightly different: one calculates the average response time and the other calculates counts for the same data. The Counts graph works perfectly, but the Response-Time graph only shows for one selection ("All" is selected in the dropdown). Please help me find the issue.
It doesn't work that way. You should do

TRANSFORMS-netlogon_send_to_nullqueue = netlogon_send_all_to_nullqueue, netlogon_keep_some

and have the netlogon_send_all_to_nullqueue transform send completely _everything_ to nullQueue:

[netlogon_send_all_to_nullqueue]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

And then keep only some of them - matching the string you want:

[netlogon_keep_some]
REGEX = NO_CLIENT_SITE
DEST_KEY = queue
FORMAT = indexQueue
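For completeness: the TRANSFORMS- line belongs in props.conf under the relevant source/sourcetype stanza, while the two named stanzas above go in transforms.conf. A sketch, where [netlogon] is a placeholder stanza name:

# props.conf ([netlogon] is a placeholder sourcetype)
[netlogon]
TRANSFORMS-netlogon_send_to_nullqueue = netlogon_send_all_to_nullqueue, netlogon_keep_some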
The dropdown has two fields, the label and the value - the label is shown to the user and the value is put in the token. When you use the token in a search, the user can find out the value if they open the search in another window. There are ways to make this more difficult to discover, but do you really need to go that far?
I want to add in the source, index=test (source="/test/log/path/qaserver1.log"), without showing these paths in the dropdown list to the user. Can we use some other method?
In what way didn't it work? Here is a runanywhere example showing it working - I have used eventstats for the final command so you can see the random values used

| makeresults count=5
| fields - _time
| eval f1=random()%2
| eval f2=random()%2
| eval f3=random()%2
| eval f4=random()%2
| eval H=round(((random() % 102)/(102)) * (104 - 100) + 100)
| foreach f1 f2 f3 f4
    [| eval <<FIELD>>=if(<<FIELD>>==1,1,null())]
| eventstats dc(H) as d1 by f1
| eventstats dc(H) as d2 by f2
| eventstats dc(H) as d3 by f3
| eventstats dc(H) as d4 by f4
| eventstats values(d*) as d*
Didn't work. One possible way was:

f1=1
| stats dc(H)
| appendcols [search f2=1 | stats dc(H)]
| appendcols [search f3=1 | stats dc(H)]
| appendcols [search f4=1 | stats dc(H)]

but it is not efficient.
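A single-pass alternative sketch, reusing the field names from this thread (f1..f4 and H), which computes all four distinct counts in one stats call instead of three appendcols subsearches:

| stats dc(eval(if(f1==1,H,null()))) as d1 dc(eval(if(f2==1,H,null()))) as d2 dc(eval(if(f3==1,H,null()))) as d3 dc(eval(if(f4==1,H,null()))) as d4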
The where command doesn't support wildcards in the same way as search. Either change where to search or change the dropdown to include the whole command line apart from the "All" option where it should be blank.
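For example (the field name and value here are illustrative), the wildcard form differs between the two commands:

| search API_Name="payment*"
| where like(API_Name, "payment%")

search expands the * itself, while where needs an explicit pattern function such as like() or match().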
My intent is that any event message without the string NO_CLIENT_SITE anywhere in it is discarded. 
Thank you for replying. I am using the token in the chain search.  
As configured, the transform will match and discard all events that do not start with NO_CLIENT_SITE. An event starting with SOMEDATA (any string that isn't NO_CLIENT_SITE) would be discarded. Was that your intent?
Try filtering before the stats command
Couldn't you just have one drop down?

Environment   Log source
Server        /test/log/path/server1.log
TEST          /test/log/path/testserver1.log
QA            /test/log/path/qaserver1.log
PROD          /test/log/path/prodserver1.log
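A Simple XML sketch of that single dropdown; the environment is the label the user sees, and the full path is the value that lands in the token (the token name is illustrative):

<input type="dropdown" token="log_source">
  <!-- token name "log_source" is illustrative -->
  <label>Environment</label>
  <choice value="/test/log/path/server1.log">Server</choice>
  <choice value="/test/log/path/testserver1.log">TEST</choice>
  <choice value="/test/log/path/qaserver1.log">QA</choice>
  <choice value="/test/log/path/prodserver1.log">PROD</choice>
</input>

The panels can then search with source="$log_source$" without listing the paths in the dropdown itself.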
I recently updated the apps on a dev search head and got this new error showing up in my _internal logs. I don't have any inputs configured currently in the add-on. Has anyone else seen this?

root@raz-spldevsh:/opt/splunk/etc/apps# tail -n5000 /opt/splunk/var/log/splunk/splunkd.log | grep -E "ERROR"
04-05-2024 11:26:08.663 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py
04-05-2024 11:26:08.686 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py
04-05-2024 11:26:08.699 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py

Splunk 9.0.3
App version: 4.5.1
E.g. in the dropdown I have Server and Cloud.

If I select Server, the path would be source="/test/log/path/server1.log"

If I select Cloud, the path varies for each region:
for TEST - source="/test/log/path/testserver1.log"
for QA - source="/test/log/path/qaserver1.log"
for PROD - source="/test/log/path/prodserver1.log"

So I kept the first dropdown list to select Server or Cloud, then an environment dropdown list, and the remaining panels load based on the Server/Cloud dropdown and the environment dropdown.
Hi Guys, thanks in advance. I need to extract limited values from fields.

Query:

index="mulesoft" applicationName="s-concur-api" environment=PRD priority timestamp
| search NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*")
| spath content.payload{}
| mvexpand content.payload{}
| stats values(content.SourceFileName) as SourceFileName values(content.JobName) as JobName values(content.loggerPayload.archiveFileName) as ArchivedFileName values(message) as message min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| rex field=message max_match=0 "Expense Extract Process started for (?<FileName>[^\n]+)"
| rex field=message max_match=0 "API: START: /v1/expense/extract/ondemand/(?<OtherRegion>[^\/]+)\/(?<OnDemandFileName>\S+)"
| eval OtherRegion=upper(OtherRegion)
| eval OnDemandFileName=rtrim(OnDemandFileName,"Job")
| eval "FileName/JobName"=coalesce(OnDemandFileName,JobName)
| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",like('message',"%API: START: /v1/expense/extract/ondemand%"),"OnDemand",like('message',"Expense Extract Process started%"),"Scheduled")
| eval Status=case(like('message',"%Concur AP/GL File/s Process Status%"),"SUCCESS",like('tracePoint',"%EXCEPTION%"),"ERROR")
| eval Region=coalesce(Region,OtherRegion)
| eval OracleRequestId=mvappend("RequestId:",RequestID,"ImpConReqid:",ImpConReqId)
| eval Response=coalesce(message,error,errorMessage)
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged,"Match","NotMatch")
| rename Logon_Time as Timestamp
| table Status JobType Response ArchivedFileName ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs priority match
| where JobType!=" "
| search Status="*"

In the Response field I want to show only the following; I don't care about the rest:

PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 376 Company Code: 200 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 375 Company Code: 209 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 374 Company Code: 210 Operating Unit: US_AB_OU

Currently I get:

Status: Success
Response:
API: START: /v1/expense/extract
After calling flow archive-ConcurExpenseFile-SubFlow
Before calling flow archive-ConcurExpenseFile-SubFlow
Calling s-ebs-api for AP Import process
Concur AP/GL File/s Process Status
Concur Ondemand Started
Expense Extract Processing Starts
Extract has no GL Lines to Import into Oracle
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 376 Company Code: 200 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 375 Company Code: 209 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 374 Company Code: 210 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL File/s Process Status - APAC Records Count Validation Passed
ArchiveFileName: EMEA_concur_expenses_
correlationId: 49cde170-e057-11ee-8125-de5fb5
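If the goal is to keep only the PRD(SUCCESS) entries in the multivalue Response field, one hedged sketch is an mvfilter placed just before the table command:

| eval Response=mvfilter(match(Response, "^PRD\(SUCCESS\)"))

mvfilter drops every Response value that does not start with PRD(SUCCESS); adjust the regex if the wanted lines vary.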
What do you currently have in your dropdown?
Hello all,

SynApp: 3.0.3
OS: RHEL8 FIPS
Splunk 9.0.x

I configured this app and changed the index IPs in the local inputs.conf, but it isn't working. Obviously there are a lot of things that could be wrong, but I am really not seeing any app logging. Is there any way to configure that? Does this app have a FIPS incompatibility? The only thing I am finding are these logs in splunkd.log:

ERROR ExecProcessor [1044046 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Synack/bin/assessment_data.py" obj, end = self.raw_decode(s, idx=_w(s, 0).end())
ERROR ExecProcessor [1044046 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Synack/bin/vuln_data.py" json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)