All Topics

Hi, for some reason AppDynamics is not showing all the servers in dashboards. We can see all servers in the Tiers and Nodes section, in the Service Endpoints section, etc., but I do not know why they no longer appear in the dashboard when they used to. Any idea where to start?
Hi Team, is there a way to monitor or capture the node/metric limit of a particular controller as a metric? The reason is that we have many controllers, and it would be great to have a common dashboard where we can monitor the limits of all controllers, since it is difficult to check the limit on each controller's admin.jsp page.
Hi, I am trying to export data into Splunk using the splunkhecexporter in OpenTelemetry with TLS insecure_skip_verify=false. When I try to send data to Splunk, it shows an error message. After that I tried adding the TLS certificates to the configuration, but it still shows the same error. The following is the exporter configuration in the OpenTelemetry .yaml:

exporters:
  logging:
    verbosity: detailed
  splunk_hec:
    token: "SPLUNK HEC TOKEN"
    endpoint: "SPLUNK COLLECTOR URL"
    tls:
      ca_file: /etc/ssl/certs/cert.crt
      cert_file: /etc/ssl/certs/<Splunk_cert>.crt
      key_file: /etc/ssl/certs/<Splunk_Key>.key

The following is the error message:

2023-07-26T06:12:01.308Z info exporterhelper/queued_retry.go:433 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "splunk_hec", "error": "Post \"https://<SPLUNK_LINK>:<SPLUNK_PORT>/services/collector\": x509: certificate is not valid for any names, but wanted to match <SPLUNK DNS>", "interval": "7.402035261s"}

May I know what causes this and how to solve it, please? Thank you.
Hello Team,

We have configured a standalone Splunk Kafka Connect server with an Event Hub Kafka topic, but we are not able to fetch the logs from the Kafka topic into Splunk. Below are our configuration details.

standalone.properties
==================================================================================
bootstrap.servers=<hostname>:9093
plugin.path=/opt/app/kafka/plugins,/opt/app/kafka/kafka_2.13-3.4.0/jre1.8.0_211
# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
group.id=<group>
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.flush.interval.ms=60000
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=OAUTHBEARER
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
consumer.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Kafka deserializer class for Kafka record values; we have set our message body to be String
consumer.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
consumer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="<token>" clientSecret="<token>" scope="https://<bootstrap_server>/.default";
consumer.sasl.oauthbearer.token.endpoint.url=https://login.microsoftonline.com/<values>/oauth2/v2.0/token
consumer.sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
offset.storage.file.filename=/opt/app/kafka/run/offset_connectorservice_1
#consumer.auto.offset.reset=latest
auto.offset.reset=latest
consumer.group.id=<topic_group>
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.oauthbearer.token.endpoint.url=https://login.microsoftonline.com/<values>/oauth2/v2.0/token
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="<token>" clientSecret="<token>" scope="https://<bootstrap_server>/.default";
access.control.allow.origin=*
access.control.allow.methods=GET,OPTIONS,HEAD,POST,PUT,DELETE
=================================================================================

Logs from standalone kafka
================================================================================
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored.
Jul 27, 2023 4:37:51 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored.
Jul 27, 2023 4:37:51 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource will be ignored.
Jul 27, 2023 4:37:51 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected:
WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2023-07-27 04:37:51,213] INFO Started o.e.j.s.ServletContextHandler@252dc8c4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:921)
[2023-07-27 04:37:51,213] DEBUG STARTED @14030ms o.e.j.s.ServletContextHandler@252dc8c4{/,null,AVAILABLE} (org.eclipse.jetty.util.component.AbstractLifeCycle:191)
[2023-07-27 04:37:51,213] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:309)
[2023-07-27 04:37:51,213] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
=================================================================================

splunksinkconnector.sh
=================================================================================
#!/bin/bash
curl localhost:8083/connectors -X POST -H "Content-Type: application/json" -d '{
  "name": "splunk-asla-dev",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "1",
    "topics": "<topic>",
    "splunk.hec.uri": "https://<splunk_indexer>:8088",
    "splunk.hec.token": "<token>",
    "splunk.hec.ack.enabled": "true",
    "splunk.hec.raw": "true",
    "splunk.hec.track.data": "true",
    "splunk.hec.ssl.validate.certs": "false",
    "splunk.indexes": "index",
    "splunk.sourcetypes": "sourcetype",
    "splunk.hec.raw.line.breaker": "\n"
  }
}'
===================================================================================

Below are the error messages after running the sink connector:

================================================================================
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] sslCiphers: closed 1 metric(s). (org.apache.kafka.common.network.Selector:269)
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] clients: entering performPendingMetricsOperations (org.apache.kafka.common.network.Selector:213)
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] clients: leaving performPendingMetricsOperations (org.apache.kafka.common.network.Selector:229)
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] clients: closed 0 metric(s). (org.apache.kafka.common.network.Selector:269)
[2023-07-27 04:40:50,906] INFO [sink_connector|task-0] [Principal=:74fafaf6-a0c9-4b8b-bd8f-397ccb3c1212]: Expiring credential re-login thread has been interrupted and will exit. (org.apache.kafka.common.security.oauthbearer.internals.expiring.ExpiringCredentialRefreshingLogin:95)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] LoginManager(serviceName=kafka, publicCredentials=[SaslExtensions[extensionsMap={}]], refCount=0) released (org.apache.kafka.common.security.authenticator.LoginManager:157)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] Removed metric named MetricName [name=version, group=app-info, description=Metric indicating version, tags={client-id=connector-consumer-sink_connector-0}] (org.apache.kafka.common.metrics.Metrics:568)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] Removed metric named MetricName [name=commit-id, group=app-info, description=Metric indicating commit-id, tags={client-id=connector-consumer-sink_connector-0}] (org.apache.kafka.common.metrics.Metrics:568)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] Removed metric named MetricName [name=start-time-ms, group=app-info, description=Metric indicating start-time-ms, tags={client-id=connector-consumer-sink_connector-0}] (org.apache.kafka.common.metrics.Metrics:568)
[2023-07-27 04:40:50,907] INFO [sink_connector|task-0] App info kafka.consumer for connector-consumer-sink_connector-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2023-07-27 04:40:50,907] DEBUG [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] Kafka consumer has been closed (org.apache.kafka.clients.consumer.KafkaConsumer:2425)
===============================================================================

Please let us know what we are missing in this configuration.
Hello everyone, I have a panel with a pie chart, and I want to remove the highlighted part in the panel. It comes with the chart; it is not the title.
I am still trying to get my head around regular expressions in Splunk, and would like to use regex to parse the _raw data and create extracted fields from the contents between the square brackets. The _raw example data looks like this:

2023-07-26 15:11:16.932 [ engine1] [Error-1] INFO java.Exception: example text
2023-07-26 15:11:16.932 [ core2] [Thread-5] WARN java.Exception: example text 2
2023-07-26 15:11:16.932 [ main3] [Token-2] INFO java.Exception: example text 3
2023-07-26 15:11:16.932 [ Job4] [Thread-1] WARN java.Exception: example text 4

I need to extract a field based on the data between the first pair of square brackets, and another field based on the second pair of square brackets. So I would like the results to look like this:

Field_1    Field_2
engine1    Error-1
core2      Thread-5
main3      Token-2
Job4       Thread-1

Any feedback and help would be greatly appreciated. Thanks.
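A minimal rex sketch for this kind of extraction, assuming the sample events above are representative (the index and sourcetype names below are placeholders):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "\[\s*(?<Field_1>[^\]]+)\]\s*\[(?<Field_2>[^\]]+)\]"
| table Field_1 Field_2

The \s* after the first opening bracket absorbs the stray leading spaces, and [^\]]+ captures everything up to the next closing bracket, so the same pattern covers both bracketed fields.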
I created the identity and it was created successfully, but when I try to create a connection I get the error below:

Login failed. The login is from an untrusted domain and cannot be used with integrated authentication.
I am working on securing all of the communications to my Splunk Enterprise servers. Looking at the output of "splunk btool server list", there is a section named "distributed_leases" that has "sslVerifyServerCert=false" set (as given in etc/system/default/server.conf). On the one hand, it also has "disabled=true", so I should be able to set it however I want and see no effect. On the other hand, I prefer not to tweak settings without knowing what they belong to. Looking at the spec file revealed no more than a trivial answer about securing the connection, and searching on docs.splunk.com turned up nothing except the server.conf.spec file that I already had. Does anyone know what "distributed_leases" is used for?
Hi, I need help! I have this query:

Ticket_Encryption_Type=0x17 Account_Domain="ad.contoso.com"

but I need to pull all of the Service Names into a list. How can I do that? Thanks.

07/26/2023 12:31:30 PM
LogName=Security
EventCode=4769
EventType=0
ComputerName=a-dc1.ad.contoso.com
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=1125639551
Keywords=Audit Success
TaskCategory=Kerberos Service Ticket Operations
OpCode=Info
Message=A Kerberos service ticket was requested.

Account Information:
    Account Name:    team_explorer@ad.contoso.com
    Account Domain:  ad.contoso.com
    Logon GUID:      {XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}

Service Information:
    Service Name:    NAME-ONE$
    Service ID:      S-X-X-XX-XXXXXXXXX-XXXXXXXXXX-XXXXXXXXX-XXXXX

Network Information:
    Client Address:  ::ffff:192.168.8.49
    Client Port:     41365

Additional Information:
    Ticket Options:          0x810000
    Ticket Encryption Type:  0x17
    Failure Code:            0x0
    Transited Services:      -
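A rough sketch of one way to list the service names, assuming the Windows add-on extracts the field as Service_Name (the index name is a placeholder):

index=wineventlog EventCode=4769 Ticket_Encryption_Type=0x17 Account_Domain="ad.contoso.com"
| stats count by Service_Name
| sort 0 -count

Using stats values(Service_Name) as Service_Names instead would return the names as a single multivalue list rather than one row per name.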
I am trying to make Splunk less noisy and filter out some of our 4688 events. I have tried to use Ingest Actions on our indexer, but when I select our sourcetype "WMI:WinEventLog:Security", none of the data appears when I pull a sample. If I run a straight search query, I can see the data and filter down to the events I want to filter out. Is there something I'm missing?
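As a quick sanity check (only a sketch, not a fix), it may help to confirm the exact sourcetype string as it is actually indexed, since the Ingest Actions sample matches the literal sourcetype value:

| tstats count where index=* sourcetype="*WinEventLog*" by index, sourcetype

If the indexed sourcetype turns out to differ from the one selected in the ruleset, that would explain an empty sample.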
Hi all, I'm trying to find a solution for a blocked queue. Thanks.
Hello Splunkers, I have a simple timechart query for the average USED_SPACE of disks over the last 4 days:

index=abc sourcetype=disk_data
| timechart span=24h avg(USED_SPACE) by DISK limit=10 usenull=f useother=f

Below is the result:

TIME              DATApoint  MADOD     MA_ODP    MA_PD      MA_PI     MA_PSD     MA_PSI    MA_P_DBCI  MA_P_DBIN  MA_P_DCRED  MA_P_DEMS  MA_P_DFDB  MA_P_DFDMS  MA_P_DFDP
22/07/2023 01:00  476054.33  39856.33  20105.25  106848.78  23032.06  703847.03  74755.98  20326.13   28006.5    20959.31    21400.05   21921.19   76224.02    83074.48
23/07/2023 01:00  476054.33  39864.33  20104.81  106848.78  23032.06  703847.03  74755.98  20326.13   28006.5    20959.31    21400.05   21921.19   76224.02    83074.48
24/07/2023 01:00  476166.33  39872.33  20105.25  106855.78  23032.06  703847.03  74755.98  20326.13   28022.5    20975.31    21400.05   21937.19   76228.02    83074.48
25/07/2023 01:00  476238.33  39880.33  20105.19  106862.78  23032.06  703851.03  74757.05  20326.13   28038.5    20975.31    21400.05   21953.19   76232.02    83074.48

My question is: I understand that the DATApoint disk has the highest value, so it is the first column, but why is the MADOD column in 2nd place? It should be in the 7th or 8th column, since other disks have higher used space, and if I limit to 5 it gives me wrong information because MA_PSD should be the 1st column but it is in the 7th. Any suggestions on how I can rearrange the columns based on USED_SPACE after the timechart? I'm expecting something like the table below:

TIME              MA_PSD     DATApoint  MA_PD     MA_P_DFDP  MA_P_DFDMS  MA_PSI    MADOD     MA_P_DBIN  MA_P_DFDB  MA_P_DEMS  MA_P_DCRED  MA_P_DBCI  MA_PI     MA_ODP
22/07/2023 01:00  703847.03  476054.3   106848.8  83074.48   76224.02    74755.98  39856.33  28006.5    21921.2    21400.05   20959.31    20326.13   23032.06  20105.25
23/07/2023 01:00  703847.03  476054.3   106848.8  83074.48   76224.02    74755.98  39864.33  28006.5    21921.2    21400.05   20959.31    20326.13   23032.06  20104.81
24/07/2023 01:00  703847.03  476166.3   106855.8  83074.48   76228.02    74755.98  39872.33  28022.5    21937.2    21400.05   20975.31    20326.13   23032.06  20105.25
25/07/2023 01:00  703851.03  476238.3   106862.8  83074.48   76232.02    74757.05  39880.33  28038.5    21953.2    21400.05   20975.31    20326.13   23032.06  20105.19
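One rough way to reorder the columns by size after the timechart is to transpose the result, sort the rows, and transpose back. This is only a sketch under the assumption that ordering by the total used space across the time range is acceptable:

index=abc sourcetype=disk_data
| timechart span=24h avg(USED_SPACE) by DISK limit=10 usenull=f useother=f
| transpose 30 header_field=_time column_name=DISK
| addtotals fieldname=row_total
| sort 0 -num(row_total)
| fields - row_total
| transpose 30 header_field=DISK column_name=_time

The first transpose turns each disk into a row, addtotals plus sort orders those rows by total used space, and the second transpose turns them back into columns in that order. One caveat: the time column comes back as text rather than a true _time field, so this suits a table more than a chart.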
Hello All, I'm trying to run a query that will let me exclude events containing part of a file path inside a Windows event. The event looks like this:

alert: %PlaceStuffIsStored%\ANYTHING\STUFF\And\Things\Blah\Blah\Blah\Blah.EXE was allowed to run but would have been prevented from running if the AppLocker policy were enforced.

I was able to rex out the file path, so now I have a new field called Dir:

Dir=%PlaceStuffIsStored%\ANYTHING\STUFF\And\Things\Blah\Blah\Blah\Blah.EXE

I've tried the basics but no luck:

Search!="*ANYTHING\STUFF\*"
|mysearch Not (Dir="*ANYTHING\STUFF\*")

Thanks
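A small sketch of the exclusion, assuming the extracted field is Dir as described above (the index and sourcetype are placeholders); the key detail is that literal backslashes in a search string usually need to be doubled:

index=your_index sourcetype=your_sourcetype NOT Dir="*\\ANYTHING\\STUFF\\*"

The same doubling applies when matching against the raw event text, e.g. NOT "*\\ANYTHING\\STUFF\\*".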
Hello everyone, please assist me in solving the problem below. I'm trying to figure out how to track in Splunk when a field's position changes in the logs. Is this kind of tracing possible with SPL?

For example, when we onboard the logs into Splunk, the field is at the positions shown below. If it changes to a different position, how can I trace that with SPL? Please guide me.
Hello Splunkers!! I am facing an issue while running the search below, as you can see in the screenshot. Can anyone help me fix this issue?

Search query:

| makeresults
| addinfo
| eval earliest=max(trunc(info_min_time),info_min_time),latest=min(max(trunc(info_max_time),info_max_time+0),2000000000)
| map search="search `indextime`>=`bin($earliest$,300)` `indextime`<`bin($earliest$,300,+300)` earliest=`bin($earliest$,300,-10800)` latest=`bin($latest$,300,+300)``"
| where false()

Screenshot of the query error:
Hi Team, I have created a federated provider and the test connection was successful. What are our next steps? Is it mandatory to create a federated index? If yes, should federated indexes be created for all the indexes across the search heads?
I'm trying to set up a logarithmic scale on my y-axis and couldn't find anything relevant - the XML syntax doesn't match the dashboard editor, and I'm a bit confused. I tried doing this, copying the line from the properties of an identical search I ran with a logarithmic y-axis, but I'm not getting results. Any help would be appreciated; thanks!
I am not sure how to filter out or disable the processing of ANSI escape codes, as recommended by Splunk for the recently announced log injection vulnerability. We have a clustered environment running 9.0.5 on AWS EC2 instances running Linux. How can I implement these recommendations?
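Not the remediation itself, but a rough sketch of a search that could help gauge exposure by looking for ANSI escape sequences in already-indexed data (the index is a placeholder):

index=your_index | regex _raw="\x1b\[[0-9;]*[A-Za-z]"

This only finds existing events containing escape sequences; it does not implement the filtering or the processing change itself.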
Hello, I have a bar chart that has some dates in the legend. I need to add another value together at this name. How can I do that?