All Posts



Another option you could try is converting the dashboard to Classic
Does it work if you create two base searches rather than 1 base search and two chained searches?
Two things to check:
1. I've seen instances where firewall devices inject a private cert on outbound traffic, causing error messages like this. Adding an exception for the Splunk forwarder resolved the issue.
2. If you are using self-signed or internal certs, you may need to add the cert to the add-on's trust list:
   - Navigate to $SPLUNK_HOME/etc/apps/Splunk_TA_microsoft-cloudservices/lib/certifi
   - Edit the cacert.pem file
   - Append the contents of your root certificate to this file
   - Restart Splunk
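The cert-append step can be sketched as a shell fragment (the SPLUNK_HOME default and the root-certificate path are assumptions for illustration; adjust them for your environment):

```shell
# Assumed paths -- adjust for your environment.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
CACERT="$SPLUNK_HOME/etc/apps/Splunk_TA_microsoft-cloudservices/lib/certifi/cacert.pem"
ROOT_CERT="/path/to/your-root-ca.pem"   # hypothetical location of your root certificate

cp "$CACERT" "$CACERT.bak"      # back up the bundle first
cat "$ROOT_CERT" >> "$CACERT"   # append your root cert to the trust list
"$SPLUNK_HOME/bin/splunk" restart
```

Since this edits a file inside the add-on, an add-on upgrade may overwrite cacert.pem, so the append may need to be repeated afterwards.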
Try filtering like this:

index="mulesoft" applicationName="s-concur-api" environment=PRD "*(SUCCESS): Concur AP/GL Extract V.3.02 - *. Concur Batch ID: * Company Code: * Operating Unit: *" OR "*(SUCCESS): Concur AP/GL Extract V.3.02 - *. Concur Batch ID: *"
Yes, I have tried different timeframes (the "Last 15 minutes" option too) but no luck. Actually, my goal is to find the response time and the counts for the same time frame. If we are seeing the counts, then by default it should show the response time too. But when I click on the magnifying glass icon (open in search) in view mode, it gives results for other APIs too.
@ITWhisperer As mentioned, I filter before stats. In the events it is showing the values correctly, but it is not showing any table values. Query:

index="mulesoft" applicationName="s-concur-api" environment=PRD (*(SUCCESS): Concur AP/GL Extract V.3.02 - *. Concur Batch ID: * Company Code: * Operating Unit: *) OR (*(SUCCESS): Concur AP/GL Extract V.3.02 - *. Concur Batch ID: *)
| search NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*")
| spath content.payload{}
| mvexpand content.payload{}
| stats values(content.SourceFileName) as SourceFileName values(content.JobName) as JobName values(content.loggerPayload.archiveFileName) as ArchivedFileName values(content.payload{}) as response values(content.Region) as Region values(content.ConcurRunId) as ConcurRunId values(content.HeaderCount) as HeaderCount values(content.SourceFileDTLCount) as SourceFileDTLCount values(content.APRecordsCountStaged) as APRecordsCountStaged values(content.GLRecordsCountStaged) as GLRecordsCountStaged values(content.TotalAPGLRecordsCountStaged) as TotalAPGLRecordsCountStaged values(content.ErrorMsg) as errorMessage values(content.errorMsg) as error values("content.payload{}.AP Import flow processing results{}.requestID") as RequestID values("content.payload{}.GL Import flow processing results{}.impConReqId") as ImpConReqId values(message) as message min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
Not sure about that, but we are having major issues after the upgrade to 9.2.1 with both of our Deployment Servers (running on Windows Server 2019). One server is only supposed to show us Servers and the other is only supposed to show us our Workstations, but now they are commingled on both. This poses a major problem, as apps meant for Servers may end up being installed on the Workstations and vice versa. We opened a Technical Support case on this a week ago and will let you know how it goes; so far their workarounds are not fixing anything for us.
Try changing the timeframe for the search to a shorter time frame - does the graph work then?
The Message Trace input requires an additional step that isn't needed for the other inputs. Did you add the Azure AD app registration to one of the following IAM roles?
- Exchange Administrator
- Global Administrator
- Global Reader role (recommended)
https://docs.splunk.com/Documentation/AddOns/released/MSO365/Configureinputmessagetrace
Warning: "This usually indicates problems with underlying storage performance." But this warning is shown for the other graph too.
Hi All, I have one log, ABC, which is present in the sl-sfdc API, and another log, EFG, which is present in the sl-gcdm API. I want to see the properties and error code fields that are present in the EFG log, but that API also has many other logs coming from different sources. I only want the EFG log whose correlationId matches the one in ABC; only then should it check the other log. And then I will use a regular expression to get the fields, like spath. Currently I am using this query:

index=whcrm ( sourcetype=xl-sfdcapi ("Create / Update Consents for gcid" OR "Failure while Create / Update Consents for gcid" OR "Create / Update Consents done") ) OR ( sourcetype=sl-gcdm-api ("Error in sync-consent-dataFlow:") )
| rename properties.correlationId as correlationId
| rex field=_raw "correlationId: (?<correlationId>[^\s]+)"
| eval is_success=if(match(_raw, "Create / Update Consents done"), 1, 0)
| eval is_failed=if(match(_raw, "Failure while Create / Update Consents for gcid"), 1, 0)
| eval is_error=if(match(_raw, "Error in sync-consent-dataFlow:"), 1, 0)
| stats sum(is_success) as Success_Count, sum(is_failed) as Failed_Count
| eval Total_Consents = Success_Count + Failed_Count
| table Total_Consents, Success_Count, Failed_Count

The first search clause is the ABC log and the second is the EFG log. I also want to use this regular expression in between to get the details:

| rex field=message "(?<json_ext>\{[\w\W]*\})"
| spath input=json_ext

Or there can be any other way to write the query and get the counts. Please help. Thanks in advance.
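One way to think about the correlation being asked for: collect the correlationIds from the ABC events, then keep only the EFG error events whose correlationId appears in that set, and extract the embedded JSON from those. A minimal Python sketch of that logic, using made-up log lines (the messages and field names are assumptions based on the query above):

```python
import json
import re

# Made-up sample lines standing in for the two sourcetypes (assumption).
abc_logs = [
    "Create / Update Consents done correlationId: c-1",
    "Failure while Create / Update Consents for gcid correlationId: c-2",
]
efg_logs = [
    'Error in sync-consent-dataFlow: correlationId: c-1 {"properties": {"errorCode": "E42"}}',
    'Error in sync-consent-dataFlow: correlationId: c-9 {"properties": {"errorCode": "E13"}}',
]

corr_re = re.compile(r"correlationId: (\S+)")
json_re = re.compile(r"(\{[\w\W]*\})")  # same idea as the rex in the question

# correlationIds seen in the ABC (sl-sfdc) log
abc_ids = {m.group(1) for line in abc_logs if (m := corr_re.search(line))}

# keep only EFG (sl-gcdm) errors whose correlationId also appears in ABC,
# then parse the embedded JSON out of the matching lines
matched = [
    json.loads(json_re.search(line).group(1))
    for line in efg_logs
    if (m := corr_re.search(line)) and m.group(1) in abc_ids
]
```

Here only the EFG line with correlationId c-1 survives, since c-9 never appears in ABC. In SPL the analogous move is usually a stats-by-correlationId or a subsearch, but the set-membership logic is the same.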
To get Microsoft Defender XDR data into Splunk, use the Splunk Add-on for Microsoft Security => https://splunkbase.splunk.com/app/6207 All the Microsoft Defender XDR incidents, alerts, entities, evidence, etc. are collected by this add-on.
Try something like this | rex field=TeamWorkTimings "(?<TeamStart>[^-]+)-(?<TeamEnd>.*)"
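The same extraction can be checked outside Splunk; here is a small Python sketch of the equivalent pattern (rex uses PCRE-style syntax, so the expression carries over largely unchanged):

```python
import re

# Same pattern as the rex above: everything before the first dash,
# then everything after it.
pattern = re.compile(r"(?P<TeamStart>[^-]+)-(?P<TeamEnd>.*)")

fields = pattern.match("09:00:00-18:00:00").groupdict()
# fields == {"TeamStart": "09:00:00", "TeamEnd": "18:00:00"}
```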
You have an orange triangle warning symbol in the top right of your chart. What does this message say?
I have followed all the necessary guidelines. The operating system is Windows Server 2022, and I have installed it on a machine that didn't previously have UF installed. I have completely disabled the antivirus. I have performed the installation twice, once with the domain admin and once with the local admin. Each time, I encountered the same issue. The latest installable version on these machines is 9.0.1, and subsequent versions (up to 9.2.1) encounter the same error.
Hi, I have removed the round function in the chain search, but it is still showing the same graph.
Given the limited amount of information you have given, it is not possible to determine the reason for the difference. Your example data does not represent your real data closely enough. For example, do you have special characters / non-alphanumeric characters in your field names? Are your fields multi-valued or appear in your events more than once? If possible, please share a representative example of your data without showing any sensitive data.
My search isn't created with makeresults; I only put that in as an example. It doesn't work, because if I use:

search
| foreach f1 f2 f3 f4 [| eval <<FIELD>>=if(<<FIELD>>==1,1,null())]
| eventstats dc(H) as d1 by f1
| eventstats dc(H) as d2 by f2
| eventstats dc(H) as d3 by f3
| eventstats dc(H) as d4 by f4
| stats values(d*) as d*

the result for f1 is different compared with the result if I use:

search f1=1 | stats dc(H)
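For what it's worth, on simple single-valued events the two computations should agree. A small Python sketch (with made-up events, since the real data isn't shown) illustrates what `search f1=1 | stats dc(H)` computes versus a distinct count of H grouped by f1 after nulling non-1 values; if they differ on your real data, that points at the data itself (e.g. multivalue fields):

```python
from collections import defaultdict

# Made-up events standing in for the real search results (an assumption).
events = [
    {"H": "h1", "f1": 1},
    {"H": "h2", "f1": 1},
    {"H": "h2", "f1": 0},
    {"H": "h3", "f1": 0},
]

# Equivalent of `search f1=1 | stats dc(H)`: filter first, then distinct-count.
dc_filtered = len({e["H"] for e in events if e["f1"] == 1})

# Equivalent of nulling f1 values != 1 and then counting dc(H) by f1:
# events whose f1 became null drop out of the grouping entirely.
groups = defaultdict(set)
for e in events:
    f1 = e["f1"] if e["f1"] == 1 else None  # the foreach/eval step
    if f1 is not None:
        groups[f1].add(e["H"])
dc_grouped = len(groups[1])

# On single-valued data both give the same answer (2 here).
```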
Try without the rounding | timechart span=1m avg(ResponseTime) by API_Name
Hi All, I have a time field containing a time range in this format in the output of one Splunk query:

TeamWorkTimings
09:00:00-18:00:00

I want the values stored in two fields, like:

TeamStart 09:00:00
TeamEnd 18:00:00

How do I achieve this using a regex or concat expression in Splunk? Please suggest.