All Posts



This is a Splunk form, not mine and not from any third-party app (be it Splunk-supported or not). If a label on a form says "last 60 days", then the respective dashboards should be showing the last 60 days, and not less. P.S. In my _internal index there is data from much earlier than 30 days; for example, I can search data from early November 2023. I didn't try earlier. Regards, Altin
Repeating the OP does not answer my questions.  Please use different words to explain what you are looking for. Perhaps the _configtracker index has the information you seek.
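If it is configuration-change history you are after, a minimal sketch of a `_configtracker` search (the index exists by default in Splunk 9.x; inspect your raw events before relying on any specific field names):

```
index=_configtracker
| stats count by sourcetype source
```

From there, drill into the JSON payload of individual events to see which .conf files changed and when.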
Are there any possible queries to get the list of newly created use cases from ES, the fine-tuned use cases, and the non-triggered use cases for the last 7 days? I have searched the internet but unfortunately did not find anything; I only found how to list the enabled, disabled, and triggered use cases.
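If "use cases" here means ES correlation searches, they are stored as saved searches carrying an `action.correlationsearch.enabled` setting, so a hedged starting point for listing them (verify the field names against your ES version) is:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| where 'action.correlationsearch.enabled' = "1"
| table title disabled updated eai:acl.app
```

The `updated` column gives a rough proxy for recently created or recently tuned searches, though it does not distinguish the two.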
To add to my previous question: for those machine agents, the unique host ID is showing in the controller UI as <hostname>-java-MA.
What exactly do you mean by "new created use cases" and "fine tuned use cases"?  What queries have you tried?  How did those queries not meet expectations?
Mine was failing also until I added the parameter above and install went through fine.
Hi @richgalloway, Using Splunk Web, I go to 'Data inputs' > 'Local performance monitoring' and select the inputs that were created. I see the following error: Failed to fetch data: Admin handler 'win-perfmon-find-collection' not found. This error is displayed for 'Available objects', 'Counters', and 'Instances'. How can I resolve this error? Thanks
Hi @gcusello @ Splunk dashboard performance Issue

Let me explain my requirement properly. I have a message field and I need to extract multiple values from it. For that I used multiple joins, so the dashboard takes a long time to load. As you mentioned in your answer, I should use the stats command; I tried to use stats but I am not able to get the results into a table.

Real-time scenario: I am given a field with keywords. There are two job types, xxx and yyy:

Message: "concur ondemand" and "expense"  --- Started Successfully
Message: "Error"                          --- Error
Message: "Progress completed"             --- Completed

We also have a unique correlationId, and based on the correlation ID we need to find the result. I will copy-paste the search you suggested using stats; I am not good at it, so can you please help fix the issue?

index="xxx" applicationName="api" environment=DEV timestamp correlationId tracePoint message ("Concur Ondemand Started*") OR (message="Expense Extract Process started for jobName : AP/GL Extract V.3.0*") OR (trace=ERROR) OR ("Before Calling flow archive-Concur*") OR (message="Concur AP/GL File/s Process Status*")
| dedup correlationId
| rename content.SourceFileName as SourceFileName content.JobName as JobName
| eval "FileName/JobName"=coalesce(SourceFileName,JobName)
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint message as Message
| eval JobType=case(like('Message',"%Concur Ondemand Started%"), "OnDemand", like('Message',"%Expense Extract Process started for jobName : *%"), "Scheduled")
| eval Message=trim(Message,"\"")
| rename correlationId as CorrelationId tracePoint as TracePoint message as Message
| rename content.loggerPayload.archiveFileName AS ArchivedFileName
| eval Status=case(like('Message',"%Concur AP/GL File/s Process Status%"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")
| eval Response=coalesce(Response,Message)
| eval Status=if(TracePoint="ERROR","ERROR",Status)
| join CorrelationId type=left
    [ search index="xxx" applicationName="api"
    | stats earliest(timestamp) AS Timestamp values(TracePoint) AS TracePoint values(Response) AS Response values(JobType) AS JobType values(Status) AS Status values("FileName/JobName") AS "FileName/JobName" values(Message) AS Message BY CorrelationId]

Status   FileName/JobName  JobType  ArchivedFileName  CorrelationId  Timestamp
SUCCESS  karthi            xxx      test1             2essrfsf4dgs
SUCCESS  priya             yyy      test2             46dsfh68
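For comparison, a hedged sketch of the join-free shape that the stats suggestion points at: classify each event first, then let a single stats by correlationId collapse everything per transaction (field names are taken from the search above; the match strings are illustrative and should be adjusted to the real messages):

```
index="xxx" applicationName="api" environment=DEV
| eval JobType=case(like(message,"%Concur Ondemand Started%"),"OnDemand",
                    like(message,"%Expense Extract Process started for jobName%"),"Scheduled")
| eval Status=case(like(message,"%Concur AP/GL File/s Process Status%"),"SUCCESS",
                   tracePoint="ERROR" OR tracePoint="EXCEPTION","ERROR")
| stats earliest(timestamp) AS Timestamp
        values(JobType)     AS JobType
        values(Status)      AS Status
        values(message)     AS Message
        BY correlationId
```

Because every row is grouped by correlationId in one pass, no subsearch or join (with its row limits and slow execution) is needed.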
Problem solved. Many thanks for your help.
From the syntax `Exempted_Dark_Devices`, it's a macro. Look in the macro definitions and you should be able to find the expansion of this macro https://docs.splunk.com/Documentation/Splunk/9.2.0/Knowledge/Definesearchmacros
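As a concrete sketch, the macro definition can also be pulled up from the search bar via REST (assuming your role is allowed to read the macros configuration endpoint):

```
| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title="Exempted_Dark_Devices"
| table title definition eai:acl.app
```

The `eai:acl.app` column tells you which app's local/macros.conf holds the stanza.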
"encrypt" property is set to "true" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) en... See more...
"encrypt" property is set to "true" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption: Error: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target. ClientConnectionId:  Any suggestions? 
I have inherited a Splunk system and this is one of the alerts:

| metadata index=index-cc* type=hosts
| eval age = now()-lastTime
| where age > 86400
| sort age d
| convert ctime(lastTime)
| fields lastTime,host,source,age
| rename age as "Seconds Since Last Event"
| search `Exempted_Dark_Devices`

How do I find the file Exempted_Dark_Devices?

Thank you
In my keystore directory, there's only default.jks. Could you please help with what data is required in cert.jks?
Our objective is to integrate OpenTelemetry into a new project and establish a connection with Splunk. We are specifically interested in initiating the transmission of OpenTelemetry (otel) data to Splunk. OpenTelemetry is capable of generating traces, metrics, and logging data tailored for services. Currently, our focus is directed towards collecting telemetry data for a single service stack. However, if this proves successful, we are open to expanding and incorporating additional services in the future.

To facilitate this integration, we are utilizing the OpenTelemetry Collector, a crucial component of the OpenTelemetry project and a freely available open-source tool. Although Splunk offers its own version, we are presently not utilizing it. We seek confirmation that there are no associated costs for using the OpenTelemetry Collector, considering its contribution to OpenTelemetry, where vendors extend the functionality.

Furthermore, our Splunk infrastructure, including the Search Head, Cluster Master, Indexers, and License Master, is hosted in the Cloud and managed by Splunk Support. As a Splunk Administrator, I am interested in understanding how to configure and onboard OpenTelemetry logs into Splunk. However, we are seeking clarification on potential costs and efforts associated with this initiative. Is it a separate subscription or something similar? I currently lack information on this matter. Kindly assist in checking and providing an update.
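For sending collector data to a Splunk Cloud index (rather than Splunk Observability), the usual route is an HEC token plus the `splunk_hec` exporter, which ships in the contrib and Splunk distributions of the collector. A minimal sketch, assuming a placeholder stack name, token variable, and index:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "https://http-inputs-yourstack.splunkcloud.com:443/services/collector"
    index: "otel_logs"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]
```

With this shape there is no extra subscription for the collector itself; the ingested data simply counts against your existing Splunk Cloud license.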
Hi All, Our Splunk infrastructure, encompassing the Search Head, Cluster Master, Indexers, and License Master, is situated in the Cloud and managed by Splunk Support. Recently, there was a request from one of our application teams to integrate and ingest MongoDB Atlas (Host & Audit) logs into Splunk.

Following the provided documentation, the application team attempted to install the Splunk OpenTelemetry (otel) collector on a Linux VM for a Proof of Concept (POC). In the process, they requested the generation of a token, which I fulfilled by generating one from our Splunk Cloud Search Head. Unfortunately, the attempted integration did not yield the expected results. I am now seeking clarification on whether the token generated from the Splunk Search Head is adequate, or if there is a need to generate an organizational access token. If the latter is necessary, I would appreciate guidance on where and how to generate it.

As the administrator of our Splunk Cloud instances, I am curious about the role of Splunk OpenTelemetry and whether it is included with Splunk Cloud. We receive multiple requests from users wanting to send OTEL logs into Splunk. If Splunk OpenTelemetry is indeed included, I would appreciate guidance on generating the organizational token and where this process should take place.

https://docs.splunk.com/observability/en/gdi/opentelemetry/components/mongodb-atlas-receiver.html
https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-linux/install-linux.html#otel-install-linux
https://docs.splunk.com/observability/en/admin/authentication/authentication-tokens/org-tokens.html#admin-org-tokens

As I examine the documentation, it explicitly mentions Splunk Observability, and I seek confirmation that I am following the correct procedure. The user attempted the installation using the same base64-encoded access token, but unfortunately, the result was once again unsuccessful. Additionally, the user has confirmed that there is internet access from the VM. At this juncture, we require guidance on how to generate an "organizational access token" to facilitate the integration process. Can anyone kindly check and guide me on this, please?
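For context: the install docs linked above target Splunk Observability Cloud, which is a separate product from Splunk Cloud, and its installer expects an org access token (created in Observability Cloud under Settings > Access Tokens) plus a realm; a token generated on a Splunk Cloud search head will not work there. The install step from those docs looks roughly like this (realm and token are placeholders):

```
curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh -o /tmp/splunk-otel-collector.sh
sudo sh /tmp/splunk-otel-collector.sh --realm us1 -- <ORG_ACCESS_TOKEN>
```

If the goal is only to land MongoDB Atlas logs in Splunk Cloud indexes, the collector can instead be pointed at an HEC endpoint with an HEC token, with no Observability subscription involved.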
Hi Team, It seems many of the Machine Agents (around 200) are not associated with any applications. What action do we need to take for this, and why are the machine agents not mapped to an application? FYI, we are doing end-to-end monitoring with AppDynamics. Thanks!
Hi, are you telling me to write the dedup command again? Please check the screenshot below:
Hi All, Our Splunk infrastructure, including the Search Head, Cluster Master, Indexers, and License Master, is hosted in the Cloud and managed by Splunk Support. We are currently in the process of integrating the ZenGRC tool with Splunk. On the ZenGRC side, there is a Splunk connector. I have created an account using the Splunk authentication method, with admin privileges. Following the documentation, when I attempted to connect to Splunk via the Connectors section in ZenGRC, I encountered an error message: "Failed to Connect: Unknown error."

https://reciprocitylabs.atlassian.net/wiki/spaces/ZenGRCOnboardingGuide/pages/562331727/Splunk+Connector

For reference, the ZenGRC documentation on the Splunk Connector can be found above. When configuring the ZenGRC end, three pieces of information are required:

Splunk Instance API URL: https://[yourdomain].splunkcloud.com:8089
UserName/Email: xxx
Password: yyy

Upon attempting to connect, the process fails. Additionally, I have whitelisted the IPs as indicated in the Confluence documentation. Kindly provide guidance on resolving this issue.

IP Whitelisting - ZenGRC Wiki - Confluence (atlassian.net)
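Before debugging on the ZenGRC side, it may be worth verifying the API URL and credentials directly, since on Splunk Cloud the management port 8089 is typically restricted and may need Splunk Support to open it to your connector's IPs. A hedged sketch of the check (hostname and credentials are placeholders):

```
curl -k -u 'xxx:yyy' https://yourdomain.splunkcloud.com:8089/services/server/info
```

If this times out or is refused from the ZenGRC host's network, the "Unknown error" is likely a connectivity issue rather than a credentials issue.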
Hello, I'm using Splunk Cloud and I have a user who wants to export search results containing 277,500 events. He is getting a timeout (TO) since the file is too large. Is there a way to export the file without changing the limitation? I cannot run a curl command since we are using SAML authentication. Thanks
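One hedged option: SAML does not rule out the REST API, because Splunk also supports authentication tokens (Settings > Tokens) that work for SAML users, and the export endpoint streams results instead of buffering the whole file. A sketch with placeholder stack, token, and search:

```
curl -H "Authorization: Bearer <TOKEN>" \
     https://yourstack.splunkcloud.com:8089/services/search/jobs/export \
     --data-urlencode search='search index=main earliest=-24h' \
     -d output_mode=csv > results.csv
```

Note that on Splunk Cloud the 8089 management port may need to be opened by Splunk Support first; if that is not possible, exporting in smaller time slices from the UI is the fallback.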