All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I was trying to find all the null values in my index, but the search below is not working as expected. Do I need to change anything?

index=vpn earliest=-7d | fieldsummary | where match(values, "^\[{\"value\":\"null\",\"count\":\d+\}\]$")

Thanks
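A minimal sketch of an alternative, assuming "null" appears as a literal string value in the events (fields that are simply absent don't produce values entries at all): let fieldsummary's distinct_count do the work instead of matching the full JSON shape of values.

index=vpn earliest=-7d
| fieldsummary
| where distinct_count=1 AND match(values, "\"value\":\"null\"")
| fields field count values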
Hi there. I've run into an issue where I can sort of guess why I'm having problems, though I have no clear idea how to solve them. In our distributed environment we have a "lookup app" on our deployer: TA_lookups/lookups/lookupfile.csv. Recently a coworker added a few new lookup files and made additions to the file in question. This is where the problem manifests. Logging onto the deployer and checking that the correct files are present in /opt/splunk/etc/shcluster/apps/TA_lookups/lookups/lookupfile.csv, everything looks great. Applying the bundle worked without any complaints or errors. All the new CSV files show up in the cluster and are accessible from the GUI. However, this one file, lookupfile.csv, is not updated. I can sort of guess that it may have something to do with the file being in use, but I am stumped as to how to go about solving it. I've tried making some additional changes to the file and checked for any weird line breaking, and nothing. I can see from the CLI that this one file has not been modified since the initial deployment. So the deployer applies the bundle, there are no complaints on either end that I can find; it just skips this one pre-existing CSV file completely and, as far as I can see, silently. What do I do here? Is there a way to "force" the push? Is the only way to solve this to manually remove the app from the SH cluster and push again? All suggestions are welcome. Best regards
Hi Team, we are currently using Splunk version 7.2. It was installed by a third party, and we don't have the login credentials that were used to download Splunk originally. If I download the latest version with a free trial and upgrade, will it keep the existing license, or do we have to download with the same login as before to get the license? Thanks and regards, Shalini S
Hi, I blacklisted C:\\Program Files\\SplunkUniversalForwarder\\bin\\splunk.exe in inputs.conf on the deployment server:

blacklist3 = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk\.exe)"

I can still see these logs being ingested into Splunk. How can we stop this ingestion?
Hello, in K8s, on a pod running a Spring Boot 3.x application (with OpenJDK 17) auto-instrumented by the cluster-agent, the Java agent fails on startup:

[AD Agent init] Wed Sep 27 22:27:38 PDT 2023[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics-java/ver22.9.0.34210]
[AD Agent init] Wed Sep 27 22:27:38 PDT 2023[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics-java/ver22.9.0.34210]
Agent logging directory set to [/opt/appdynamics-java/ver22.9.0.34210/logs]
[AD Agent init] Wed Sep 27 22:27:38 PDT 2023[INFO]: JavaAgent - Agent logging directory set to [/opt/appdynamics-java/ver22.9.0.34210/logs]
Could not start Java Agent, disabling the agent with exception java.lang.NoClassDefFoundError: Could not initialize class org.apache.logging.log4j.message.ReusableMessageFactory, Please check log files

In the pod, the jar file (log4j-api) containing ReusableMessageFactory is present (it is part of the AppDynamics java-agent):

sh-4.4$ pwd
/opt/appdynamics-java/ver22.9.0.34210/lib/tp
sh-4.4$ ls log4j*
log4j-api-2.17.1.1.9.cached.packages.txt   log4j-core-2.17.1.1.9.cached.packages.txt   log4j-jcl-2.17.1.cached.packages.txt
log4j-api-2.17.1.1.9.jar                   log4j-core-2.17.1.1.9.jar                   log4j-jcl-2.17.1.jar
log4j-api-2.17.1.1.9.jar.asc               log4j-core-2.17.1.1.9.jar.asc               log4j-jcl-2.17.1.jar.asc

From the pod manifest:

- name: JAVA_TOOL_OPTIONS
  value: ' -Dappdynamics.agent.accountAccessKey=$(APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY)
    -Dappdynamics.agent.reuse.nodeName=true
    -Dappdynamics.socket.collection.bci.enable=true
    -Dappdynamics.agent.startup.log.level=debug
    -Dappdynamics.agent.reuse.nodeName.prefix=eric-tmo-des-ms-entitlements
    -javaagent:/opt/appdynamics-java/javaagent.jar'

I tried the latest java-agent (23.9) with the same result. I don't see the problem with Spring Boot 2.7 (which, unlike 3.x, includes log4j-api). It seems the classloader can't find the class from the java-agent distribution. Has anyone encountered this? Thank you.
I'm trying to log in to Splunk with my sc_admin user through a shell script, where I want to authenticate and then fetch logs matching a string I supply, but it is failing. Could you please help me with this?

Script:

#!/bin/bash
# Splunk API endpoint (8089 is the management port)
SPLUNK_URL="https://prd-p-cbutz.splunkcloud.com:8089"

# Splunk username and password (taken from the environment)
USERNAME=$Username
PASSWORD=$Password

# Search query to retrieve error messages (modify this as needed).
# The REST API expects the query to begin with the "search" keyword.
SEARCH_QUERY="search sourcetype=error"

# Maximum number of results to retrieve
MAX_RESULTS=10

response=$(curl -k -s -v -u "$USERNAME:$PASSWORD" "$SPLUNK_URL/services/auth/login" -d "username=$USERNAME&password=$PASSWORD")
echo "Response from login endpoint: $response"

# Authenticate with Splunk and obtain a session token
#SESSION_TOKEN=$(curl -k -s -u "$USERNAME:$PASSWORD" "$SPLUNK_URL/services/auth/login" -d "username=$USERNAME&password=$PASSWORD" | xmllint --xpath "//response/sessionKey/text()" -)
SESSION_TOKEN=$(curl -k -s -u "$USERNAME:$PASSWORD" "$SPLUNK_URL/services/auth/login" -d "username=$USERNAME&password=$PASSWORD" | grep -oP '<sessionKey>\K[^<]+' | awk '{print $1}')

if [ -z "$SESSION_TOKEN" ]; then
  echo "Failed to obtain a session token. Check your credentials or Splunk URL."
  exit 1
fi

# Perform an export search and retrieve error messages.
# The session key goes in an Authorization header; output_mode=json makes
# the streamed results parseable with jq.
SEARCH_RESULTS=$(curl -k -s -H "Authorization: Splunk $SESSION_TOKEN" \
  "$SPLUNK_URL/services/search/jobs/export" \
  --data-urlencode "search=$SEARCH_QUERY" \
  -d "count=$MAX_RESULTS" \
  -d "output_mode=json")

# Check for errors in the search results
if [[ $SEARCH_RESULTS == *"ERROR"* ]]; then
  echo "Error occurred while fetching search results:"
  echo "$SEARCH_RESULTS"
  exit 1
fi

# The export endpoint streams one JSON object per line; print each result
echo "Splunk Error Messages:"
echo "$SEARCH_RESULTS" | jq -r 'select(.result != null) | .result.sourcetype + ": " + .result._raw'

# No cleanup needed: export streams results directly and leaves no
# persistent search job behind, and session keys simply expire
exit 0

I'm also not sure whether I'm using the correct port number.

Error:

$ bash abc.sh
* Trying 44.196.237.135:8089...
* connect to 44.196.237.135 port 8089 failed: Timed out
* Failed to connect to prd-p-cbutz.splunkcloud.com port 8089 after 21335 ms: Couldn't connect to server
* Closing connection 0
Response from login endpoint:
* Trying 44.196.237.135:8089...
* connect to 44.196.237.135 port 8089 failed: Timed out
* Failed to connect to prd-p-cbutz.splunkcloud.com port 8089 after 21085 ms: Couldn't connect to server
* Closing connection 0
Failed to obtain a session token. Check your credentials or Splunk URL.
Hi, I have error logs with more than 50 lines per event, but the requirement is to display only the first 10 lines. There is no common statement across events that I could anchor a regex on. Kindly help.
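A minimal sketch, assuming the lines within each event are separated by newlines in _raw (the index and sourcetype below are placeholders): split the event on newlines, keep the first ten, and rejoin them, so no anchor regex is needed.

index=your_index sourcetype=your_errors
| eval nl=urldecode("%0A")
| eval first_10_lines=mvjoin(mvindex(split(_raw, nl), 0, 9), nl)
| table _time first_10_lines

The urldecode("%0A") trick just produces a literal newline character to split and join on.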
Hi Splunkers, I'm trying to extract fields from the raw event below. Can this be done with rex or substr? Please provide examples if possible.

Sample event:

[August 28, 2023 7:22:45 PM EDT] APPLE Interface IF-abcef23fw2/31 [WAN14] Disabled (100%) Designate that a disabled port or surface is the root cause. This event can be circumvent by setting the SuppressDisabledAlerts to FALSE.

Expected new fields:

1) Fruit = APPLE
2) Test = Interface IF-abcef23fw2/31 [WAN14] Disabled (100%)
3) Timestamp = August 28, 2023 7:22:45 PM EDT
4) Message = Interface IF-abcef23fw2/31 [WAN14] Disabled (100%) Designate that a disabled port or surface is the root cause. This event can be circumvent by setting the SuppressDisabledAlerts to FALSE.

Please advise.
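A sketch with rex, assuming (from this single sample) that the timestamp is always bracketed at the start, Fruit is the first token after it, and the Test portion always ends at a percentage in parentheses:

... | rex "^\[(?<Timestamp>[^\]]+)\]\s+(?<Fruit>\S+)\s+(?<Message>(?<Test>.+?\(\d+%\)).*)$"
| table Timestamp Fruit Test Message

The nested group lets Test capture the leading slice of Message without a second rex pass.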
Dashboard XML: I am using this dashboard to schedule a PDF report, and all panels show data for 7 days. I need to show the time period at the top of the report, like "Time Period: 01-17-2023 to 01-23-2023". How can I do this?

<dashboard>
  <label>Dashboard title</label>
  <row>
    <panel>
      <title>first panel</title>
      <single>
        <search>
          <query>|tstats count as internal_logs where index=_internal</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>second panel</title>
      <single>
        <search>
          <query>|tstats count as audit_logs where index=_audit</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Third panel</title>
      <single>
        <search>
          <query>|tstats count as main_logs where index=main</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</dashboard>
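One hedged approach: add a single-value or table panel at the top of the dashboard whose query computes the same window the other panels use (-7d@d to @d, so the last full day is yesterday); the date format below is illustrative.

| makeresults
| eval from=strftime(relative_time(now(), "-7d@d"), "%m-%d-%Y")
| eval to=strftime(relative_time(now(), "-1d@d"), "%m-%d-%Y")
| eval period="Time Period: ".from." to ".to
| table period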
I've got the following query to detect that a worker instance of mine is actually doing what it's supposed to on a regular basis. If it isn't in a particular environment, the query won't return a row for that environment. I thought perhaps I could join the results with a literal dataset of environments to ensure there is a row for each environment, but despite looking over the documentation, I can't find a way to make the join work. Admittedly, I'm new to Splunk querying, so I might be missing something obvious, or there might be some other way of doing this without join.

| mstats sum(worker.my_metric) AS my_metric WHERE index="service_metrics" AND host=my-worker-* earliest=-2h BY host
| eval env = replace(host, "^my-worker-(?<env>[^-]+)$", "\1")
| stats sum(my_metric) AS my_metric BY env
| eval active = IF(my_metric > 0, "yes", "no")
| join type=right left=M right=E WHERE M.env = E.env from [{ env: "dev" }, { env: "beta" }, { env: "prod" }]
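A sketch that avoids join entirely, assuming a Splunk version whose makeresults accepts inline CSV (format=csv, 9.0+; on older versions an inputlookup listing the environments works the same way): append the literal environment list, aggregate once, and fill the gaps.

| mstats sum(worker.my_metric) AS my_metric WHERE index="service_metrics" AND host=my-worker-* earliest=-2h BY host
| eval env=replace(host, "^my-worker-([^-]+)$", "\1")
| append
    [| makeresults format=csv data="env
dev
beta
prod"]
| stats sum(my_metric) AS my_metric BY env
| fillnull value=0 my_metric
| eval active=if(my_metric > 0, "yes", "no")

Environments with no metric rows survive the stats with a null sum, which fillnull turns into 0 and active then reports as "no".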
Hello, everyone. I just ran into an issue where a stanza in apps\SplunkUniversalForwarder\local\inputs.conf on a forwarder is overriding apps\AppName\local\inputs.conf in other apps in the apps folder. I would like to disable this app, delete the \SplunkUniversalForwarder\local folder, or delete the stanza. The problem is that this has happened on multiple hosts, so I need an automated method of doing it. Does anyone have an idea, so that this default app, which I don't even want to touch, stops overriding the apps I actually use? Thanks
What's the difference between :: and = in a Splunk search? What are the benefits and drawbacks of each?
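Roughly: :: matches a value of an indexed field directly in the index (fast, but only available for fields that exist at index time, such as host, source, sourcetype, or indexed extractions, and it bypasses search-time aliases and overrides), while = is evaluated after search-time field extraction (works for any field, but can be slower). A small illustration, assuming host is indexed as usual:

index=web host::web-01
index=web host=web-01

Both typically return the same events here; they can differ when search-time configuration (field aliases, host overrides) changes what host means at search time.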
I want to get information about writing debug logs to Splunk from Salesforce Apex code. Can you provide the steps, or tell me which managed package or connector we can use for this? Thanks and regards, Kr Saket
What is the fastest way to run a query that produces an event count on a timechart per host? This is for Windows events; I want a list of how many events each device logs per month so that I can identify increases and decreases. They are all ingested into one index. A query like this takes a while when run over about a year; is there a faster way to get this data?

index=<index_name> | timechart count by Computer span=1mon

Thanks.
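A sketch using tstats, which counts from indexed metadata instead of scanning raw events and is typically much faster over long ranges; it assumes the indexed host field lines up with Computer (tstats cannot see Computer unless it is an indexed field).

| tstats count WHERE index=<index_name> BY _time span=1mon host
| xyseries _time host count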
What's the simplest regex that will match any character including newline? I want to be able to match all unknown content between two very specific capture groups. Thanks! Jonathan
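In PCRE, which Splunk's rex uses, . does not match newlines by default. Either turn on dot-all mode with the inline flag (?s), or use a character class that matches any character. A sketch with hypothetical START/END delimiters standing in for your two specific capture groups:

... | rex "(?s)START(?<body>.*?)END"

or, equivalently, without the flag:

... | rex "START(?<body>[\s\S]*?)END"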
I'm using the rex command to parse a value out of the results of a transaction command. Is there an easy way to restrict the capture to just the first or last event of the transaction? That would be much easier than doing it in the regex itself, since both blocks of text returned are very similar. Thanks! Jonathan
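A hedged sketch, assuming the transaction's member events end up newline-separated in the combined _raw (the usual behavior for single-line events; session_id and the capture pattern are placeholders): isolate the first event (or the last, with mvindex index -1) before running rex, so the regex never sees the look-alike block.

| transaction session_id
| eval nl=urldecode("%0A")
| eval first_event=mvindex(split(_raw, nl), 0)
| rex field=first_event "value=(?<value>\d+)"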
I have event logs similar to this:

{Level: Information
MessageTemplate: Received Post Method for activity: {Activity}
Properties: {
  ActionId: 533b531b-3078-448f-a054-7f54240962af
  ActionName: Pcm.ActivityLog.ActivityReceiver.Controllers.v1.ActivitiesController.Post (Pcm.ActivityLog.ActivityReceiver)
  Activity: {"ClientId":"1126","TenantCode":"BL.Activities","ActivityType":"CreateCashTransactionType","Source":"Web Entry Form","SourcePath":null,"TenantContextId":"00-9b57deb074fd41df69f90226cb03f499-353e17ffab1a6d25-01","ActivityStatus":"COMPLETE","OriginCreationTimestamp":"2023-09-28T11:39:48.4840749+00:00","Data":{"traceId":"9b57deb074fd41df69f90226cb03f499","parentSpanId":"88558259300b25e5","pcm.user_id":2,"pcm.name":"Transaction_Type_2892023143936842"}}
  Application: ActivityLogActivityReceiver
  ConnectionId: 0HMU00KGAKUBJ
  CurrentCorrelationId: 95c2f966-1110-405b-ae9a-47a024343b6c
  Environment: AWS-OB-DEV5
  OriginCorrelationId: 95c2f966-1110-405b-ae9a-47a024343b6c
  ParentCorrelationId: 95c2f966-1110-405b-ae9a-47a024343b6c
  RequestId: 0HMU00KGAKUBJ:00000003
  RequestPath: /api/activitylog/v1/activities
  SourceContext: ActivityLog.ActivityReceiver.Controllers.v1.ActivitiesController
  TenantContextId: 00-9b57deb074fd41df69f90226cb03f499-353e17ffab1a6d25-01
  XRequestId: 3ba2946fa8cc0e5d5e3e82f27f566dd4
}
}

I want to create a table from Properties.Activity with some specific fields:

"ActivityType", "Source", "OriginCreationTimestamp"
"CreateCashTransactionType", "Web Entry Form", "2023-09-28T11:39:48.4840749+00:00"

Can you help me write the query? I tried spath/mvexpand but was not able to get it working.
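A sketch, assuming the outer event is not valid JSON (its keys are unquoted) but the Activity value is, so spath needs to be pointed at just that blob; the closing "}}" boundary in the rex is inferred from this one sample.

| rex "Activity:\s+(?<activity_json>\{.*?\}\})"
| spath input=activity_json
| table ActivityType Source OriginCreationTimestamp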
To start with, I am very new to Splunk and have been stumbling my way through this with varying degrees of success. We recently upgraded Splunk from 8.2 to 9.1.2. We noticed the new SSL requirements; we had a self-signed cert, the website showed as not secure, and we wanted everything to be as secure as possible. We created an actual CA cert chain and pointed web.conf at the cert and key. I had issues with this at first because we weren't using a passphrase during cert creation, but we fixed that and it seems to be accepted. Now the web page loads, but it takes an incredibly long time. Once loaded, we should be able to log in with LDAP; that no longer works. I tried the local admin and it thinks for a while, then lands on an "Oops. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage." page. This is on the deploy server. I changed server.conf to use the cert as well, though that doesn't appear to make a difference. I added the cert to openldap's ldap.conf, but then the page wouldn't load at all (doing a splunk restart between each change). I'm not sure which logs to even look at to find the problem. I have gone through the documentation for setting up TLS, which we want for inter-server communication and for the web page; the forwarders aren't necessary right now. Can anyone give me a clue what I might be doing wrong?

EDIT: I did discover this error in splunkd.log relating to my cert. The only post I've found so far says to combine the key and pem into a single file it can use.

message="error:0906D06C:PEM routines:PEM_read_bio:no start line"

Here are my config files.

server.conf:

[general]
serverName = servername.com [changed for privacy reasons]
pass4SymmKey = [redacted]

[sslConfig]
# turns on TLS certificate host name validation
sslVerifyServerName = true
serverCert = /opt/splunk/etc/auth/servername.com.pem
#sslPassword = [redacted] #SSL No longer valid option
# sslPassword = [redacted]
# turns on TLS certificate host name validation
cliVerifyServerName = true
sslPassword = [redacted]
# Reference the file that contains all root certificate authority certificates combined together
sslRootCAPath = /opt/splunk/etc/auth/servername.com.pem
sslCommonNameList = servername.com, servername

[pythonSslClientConfig]
#sslVerifyServerCert = true
#sslVerifyServerName = true

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[lmpool:auto_generated_pool_enterprise]
description = auto_generated_pool_enterprise
quota = MAX
slaves = *
stack_id = enterprise

[license]
active_group = Enterprise

[kvstore]
storageEngineMigration = true

web.conf:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/myprivate.key
serverCert = /opt/splunk/etc/auth/servername.com.pem
sslPassword = [redacted]

authentication.conf:

[authentication]
authSettings = ldapserver.com
authType = LDAP

[roleMap_ldapserver.com]
admin = SplunkAdmins

[ldapserver.com]
SSLEnabled = 1
anonymous_referrals = 1
bindDN = CN=ServiceAccount,CN=AccountFolder,DC=SubOrg,DC=Org,DC=com
bindDNpassword = [redacted]
charset = utf8
emailAttribute = mail
enableRangeRetrieval = 0
groupBaseDN = OU=Groups,OU=Users & Computers,OU=MainFolder,DC=SubOrg,DC=Org,DC=com
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = ldapserver.SubOrg.Org.Com
nestedGroups = 0
network_timeout = 20
pagelimit = -1
port = 636
realNameAttribute = displayname
sizelimit = 1000
timelimit = 15
userBaseDN = OU=Users,OU=Users & Computers,OU=MainFolder,DC=SubOrg,DC=Org,DC=com
userNameAttribute = samaccountname

ldap.conf:

# See ldap.conf(5) for details
# This file should be world readable but not world writable.
ssl start_tls
TLS_REQCERT demand
TLS_CACERT /opt/splunk/etc/auth/ldapserver.pem
# The following provides modern TLS configuration that guarantees forward-
# secrecy and efficiency. This configuration drops support for old operating
# systems (Windows Server 2008 R2 and earlier).
# To add support for Windows Server 2008 R2 set TLS_PROTOCOL_MIN to 3.1 and
# add these ciphers to TLS_CIPHER_SUITE:
# ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:
# ECDHE-RSA-AES128-SHA
# TLS_PROTOCOL_MIN: 3.1 for TLSv1.0, 3.2 for TLSv1.1, 3.3 for TLSv1.2.
TLS_PROTOCOL_MIN 3.3
TLS_CIPHER_SUITE ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256>
#TLS_CACERT absolute path to trusted certificate of LDAP server. For example /opt/splunk/etc/openldap/certs/mycertificate.pem
#TLS_CACERTDIR absolute path to directory that contains trusted certificates of LDAP server. For example /opt/splunk/etc/openldap/certs
I need to compare the values of two fields in my Splunk data with the field values in a lookup, find the values missing from the Splunk data, and output those missing field-value pairs.

For example:

index=test sourcetype=splunk_test_data
fields: field1, field2

lookup: test_data.csv
fields: field1, field2

The output should show the pairs that are missing from the Splunk data. Any help would be appreciated. Thanks
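A sketch of one common pattern, assuming "missing" means pairs that exist in test_data.csv but never occur in the events: tag each side, combine, and keep the pairs seen only in the lookup.

index=test sourcetype=splunk_test_data
| stats count BY field1 field2
| eval source="events"
| append [| inputlookup test_data.csv | fields field1 field2 | eval source="lookup"]
| stats values(source) AS sources BY field1 field2
| where mvcount(sources)=1 AND sources="lookup"
| table field1 field2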
My query returns multiple rows, one for each environment that meets a certain condition. I would like to trigger an alert for each row (environment) that meets the condition. Is there a way to do this in Splunk?