All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


In the practical lab environment, how important is it to configure TLS on the Splunk servers during the practical lab? Do I get penalized for not securing UF-to-indexer traffic with SSL/TLS?
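For reference, a minimal sketch of what securing UF-to-indexer traffic typically looks like, assuming the out-of-the-box certificates; every path, port, and stanza name here is illustrative, not a lab requirement:

outputs.conf on the universal forwarder:

[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password

inputs.conf on the indexer:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
requireClientCert = false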
How do I configure inputs.conf for TA_tshark (Network Input for Windows) | Splunkbase?
I've created field extractions in splunkcloud.com, but they don't appear. Here are my extractions (Settings > Fields > Field extractions; App: Search & Reporting; config source: visible in app; Owner: sc_admin):

journal : EXTRACT-destip
  Type: Inline
  Pattern: dest_ip\":\"(?P<destip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\"
  Owner: sc_admin | App: search | Sharing: Global | Status: Enabled
  Object should appear in: all apps; permissions: apps r/w, sc_admin r/w

journal : EXTRACT-srcip
  Type: Inline
  Pattern: src_ip\":\"(?P<srcip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\"
  Owner: sc_admin | App: search | Sharing: App | Status: Enabled
  Object should appear in: this app only (search); permissions: sc_admin r/w

After adding data from a tar.gz file upload, in Splunk Cloud (logged in as sc_admin) > Search, the interesting fields / all fields lists don't include those fields. What am I missing? By the way, if I extract new fields with the same names, it objects because they already exist.
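A quick sanity check that often helps here is to run the same pattern as a search-time rex; a sketch, where the index name is a placeholder and the pattern is the dest_ip one from above:

index=main sourcetype=journal
| rex "dest_ip\":\"(?<destip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\""
| stats count by destip

If this populates destip but the saved extraction does not, the pattern is fine and the problem is in the extraction's app context or permissions rather than the regex.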
We have an existing Splunk 9.1.3 Enterprise environment and run Splunk Web on port 8000 using an outside-CA-signed certificate for HTTPS. A partner wants to stream syslog data to our Splunk over a secure connection, so I added the following to inputs.conf in system/local:

[tcp-ssl:6514]
sourcetype = syslog
index = syslog
disabled = 0

[SSL]
privKeyPath = /opt/splunk/etc/auth/splunkweb/2024/splprkey.key
serverCert = /opt/splunk/etc/auth/splunkweb/2024/prcert.pem
requireClientCert = false

After a restart, I used openssl to test the connection. Port 8000 worked as expected: the certificate was returned and I could see the TLS negotiation in Wireshark. The openssl connection to port 6514 did not work. A connection was made and openssl sent a Client Hello, which was visible in Wireshark, but other than an ACK the Splunk server never sent anything further. Based on an article I read, I also copied the certificate path into server.conf, but that didn't change anything. What am I missing? Is it incorrect to assume the same cert can be used on different ports? Any assistance appreciated! Thanks,
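For reference, this is the kind of handshake test being described, plus the one setting the pasted [SSL] stanza omits that is often needed; sslPassword is only an assumption here, and applies if the private key is password-protected:

# test the TLS listener from another host
openssl s_client -connect splunk.example.com:6514

# inputs.conf [SSL] stanza, with the password setting added
[SSL]
serverCert = /opt/splunk/etc/auth/splunkweb/2024/prcert.pem
privKeyPath = /opt/splunk/etc/auth/splunkweb/2024/splprkey.key
sslPassword = <key passphrase, if the key is encrypted>
requireClientCert = false

If the key is encrypted and splunkd cannot decrypt it, the listener typically accepts the TCP connection but never completes the TLS handshake, which matches the ACK-then-silence behavior described.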
Hi Team,

I am using the Splunk OTel Collector to gather logs from GKE into Splunk Cloud Platform and I see the error below:

otel-collector 2025-02-25T23:29:46.515Z error reader/reader.go:214 failed to process token {"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/lxysdsdb/istio-proxy/0.log", "error": "failed to send entry after error: remove: field does not exist: attributes.time"}

How can I resolve this? I am using the Helm template values below; can someone point out what can be changed? I am using cri and otel (not fluentd) to collect the logs.

# This is an example of using insecure configurations
clusterName: "${cluster_name}"

splunkPlatform:
  endpoint: ${endpoint}
  token: ${global_token}
  index: ${index_name}
  metricsIndex: "${index_name}_metrics"
  insecureSkipVerify: true
  logsEnabled: true
  metricsEnabled: false
  tracesEnabled: false

logsEngine: otel
cloudProvider: "gcp"
distribution: "gke"

agent:
  enabled: true
  ports:
    otlp:
      containerPort: 4317
      hostPort: 4317
      protocol: TCP
      enabled_for: [traces, metrics, logs, profiling]
    otlp-http:
      containerPort: 4318
      protocol: TCP
      enabled_for: [metrics, traces, logs, profiling]
  resources:
    limits:
      cpu: ${logging_cpu_requests}
      memory: ${logging_memory_requests}
  podLabels:
    %{ for label, value in labels ~}
    ${label}: "${value}"
    %{ endfor ~}

clusterReceiver:
  enabled: false

logsCollection:
  # Container logs collection
  containers:
    enabled: true
    # Container runtime. One of `docker`, `cri-o`, or `containerd`
    # Automatically discovered if not set.
    containerRuntime: "${log_format_type}"
    excludePaths:
      %{ for path in exclude_path ~}
      - ${path}
      %{ endfor ~}
    # Boolean for ingesting the agent's own log
    excludeAgentLogs: true
Hi, per https://docs.splunk.com/observability/en/gdi/get-data-in/rum/browser/manual-rum-browser-instrumentation.html#create-custom-spans-for-single-page-applications — how do I create custom events for a PEGA application instrumented in Splunk Observability Cloud? The PEGA application doesn't have page-wise URLs. We need to monitor a couple of transactions to calculate the response time for each one. We tried RUM URL grouping, but it didn't work since there are no page-wise URLs. So how can we create custom events to monitor the transaction metrics? Please share sample code snippets if any. Thanks.
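Not an authoritative answer, but the linked docs' custom-span pattern can be adapted to named transactions instead of pages; a sketch in which the tracer name, span name, and attribute value are placeholders:

// assumes the Splunk RUM agent (splunk-otel-web) is already initialized on the page
const tracer = SplunkRum.provider.getTracer('pega-transactions');

// start a span when the transaction begins...
const span = tracer.startSpan('case.submit', {
  attributes: { 'workflow.name': 'case.submit' }
});

// ...and end it when the transaction completes; the duration between
// startSpan and end becomes the response time, chartable per workflow.name
span.end();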
Hi everyone. I suppose this is a very simple question, but I'm new to Splunk and I've tried everything I know. The field that contains the timestamp is called "payload.eventProcessedAt". Trying to parse it with

| eval time_var=strptime(payload.eventProcessedAt, "%Y-%m-%dT%H:%M:%S.%3NZ")

doesn't work, giving me only null/empty values. The same occurs with "strftime". How can I do this?
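One likely cause, offered without seeing the events: on the right-hand side of eval, a field name containing dots must be wrapped in single quotes, otherwise eval parses it as an expression and returns null. A sketch keeping the original format string:

| eval time_var=strptime('payload.eventProcessedAt', "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval readable=strftime(time_var, "%Y-%m-%d %H:%M:%S.%3N")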
In Splunk, how do we create these CMDB fields and map them to any sourcetype when a new host is added as an asset? For example, the fields below, if we don't have them (see the sketch after this list):

CRITICITY
ENVIRONMENT
FUNCTION
OFFER
BUSINESS UNIT
CODEREF
DATACENTER
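One common pattern, sketched under the assumption that the CMDB attributes can be exported to a CSV keyed by host; every file, stanza, and sourcetype name below is illustrative:

transforms.conf:

[cmdb_assets]
filename = cmdb_assets.csv

props.conf (attach it as an automatic lookup so the fields appear on every event of the sourcetype):

[your_sourcetype]
LOOKUP-cmdb = cmdb_assets host OUTPUTNEW CRITICITY ENVIRONMENT FUNCTION OFFER BUSINESS_UNIT CODEREF DATACENTER

The CSV would carry the columns host, CRITICITY, ENVIRONMENT, FUNCTION, OFFER, BUSINESS_UNIT, CODEREF, DATACENTER; the underscore in BUSINESS_UNIT avoids a space in the field name.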
Hello, I am seeing this error: [MSE-SVSPLUNKI01] restricting search to internal indexes only (reason: [DISABLED_DUE_TO_GRACE_PERIOD,0]). How do I resolve this?
Hello Splunkers!!

We recently migrated Splunk from version 8.1.1 to 9.1.1 and encountered the following errors:

ERROR TimeParser [12568 SchedulerThread] - Invalid value "`bin" for time term 'latest'
ERROR TimeParser [12568 SchedulerThread] - Invalid value "$info_max_time_2$" for time term 'latest'

Upon reviewing the Splunk 9.1.1 release notes, I found that this issue is listed as a known bug. Has anyone observed and resolved this issue before? If you have implemented a fix, could you share the specific configuration changes or workarounds applied? Any insights on where to check (e.g., saved searches, scheduled reports, or specific configurations) would be greatly appreciated. Below is the screenshot of the known bug in 9.1.1.

Thanks in advance for your help!
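In case it helps narrow the hunt, here is a hedged sketch for listing saved searches whose time terms contain tokens or backticks; the REST endpoint and field names are standard, but the filter strings are only guesses at the offending patterns:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search dispatch.latest_time="*info_max_time*" OR dispatch.latest_time="*`*"
| table title eai:acl.app dispatch.earliest_time dispatch.latest_time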
Hello All,

I have a multivalue field that contains domain names (for this case, say it is in a field named emailDomains and contains 5 values). I have a lookup named whitelistdomains that contains 2000+ domain names. What I want is to take these multivalue domain names and check whether each one is present in my lookup. Is that possible? An example and the expected output are below. I tried doing this with mvexpand, but I sometimes end up with memory issues on Splunk Cloud, so I want to avoid it. I tried using map and mvmap to see if I could somehow pass one value at a time to the inputlookup command and get the output, but so far I haven't been able to figure it out. I did achieve this via a very dirty method: using appendpipe to get the list of values in the lookup and then eventstats to attach that list to each event for comparison. But this made the search very clunky, and I am sure there are better ways of doing this. If you can suggest a better way, that would be amazing.

emailDomains field:
test.com
sample.com
example.com

whitelistdomains lookup data:
whitelist.com
sample.com
something.com
example.com
...and so on...

Expected output:
whitelistedDomains (a new field after looking up all multivalue values against the lookup)
sample.com
example.com
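For what it's worth, the lookup command itself handles multivalue inputs: it matches each value of the field individually and returns a multivalue result, with no mvexpand needed. A sketch, assuming the lookup's column is named domain (adjust to the actual header):

| lookup whitelistdomains domain AS emailDomains OUTPUT domain AS whitelistedDomains

Each value of emailDomains is looked up separately, so whitelistedDomains should end up containing only the values that matched a row in the lookup.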
Hello All,

I have a use case where I need to compare two JSON objects and highlight their key/value differences. This is to ensure we can tell OSC only about the changes that have been made, rather than sending both the old and new JSON in an alert. Is that doable? I tried using foreach, spath, and mvexpand but could not figure out a proper working solution. Any help on this is much appreciated.

Json1:
{
  "id": "XXXXX",
  "displayName": "ANY DISPLAY NAME",
  "createdDateTime": "2021-10-05T07:01:58.275401+00:00",
  "modifiedDateTime": "2025-02-05T10:30:40.0351794+00:00",
  "state": "enabled",
  "conditions": {
    "applications": {
      "includeApplications": ["YYYYY"],
      "excludeApplications": [],
      "includeUserActions": [],
      "includeAuthenticationContextClassReferences": [],
      "applicationFilter": null
    },
    "users": {
      "includeUsers": [],
      "excludeUsers": [],
      "includeGroups": ["USERGROUP1", "USERGROUP2"],
      "excludeGroups": [],
      "includeRoles": [],
      "excludeRoles": []
    },
    "userRiskLevels": [],
    "signInRiskLevels": [],
    "clientAppTypes": ["all"],
    "servicePrincipalRiskLevels": []
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"],
    "customAuthenticationFactors": [],
    "termsOfUse": []
  },
  "sessionControls": {
    "cloudAppSecurity": { "cloudAppSecurityType": "monitor", "isEnabled": true },
    "signInFrequency": {
      "value": 1,
      "type": "hours",
      "authenticationType": "primaryAndSecondaryAuthentication",
      "frequencyInterval": "timeBased",
      "isEnabled": true
    }
  }
}

Json2:
{
  "id": "XXXXX",
  "displayName": "ANY DISPLAY NAME 1",
  "createdDateTime": "2021-10-05T07:01:58.275401+00:00",
  "modifiedDateTime": "2025-02-06T10:30:40.0351794+00:00",
  "state": "enabled",
  "conditions": {
    "applications": {
      "includeApplications": ["YYYYY"],
      "excludeApplications": [],
      "includeUserActions": [],
      "includeAuthenticationContextClassReferences": [],
      "applicationFilter": null
    },
    "users": {
      "includeUsers": [],
      "excludeUsers": [],
      "includeGroups": ["USERGROUP1", "USERGROUP2", "USERGROUP3"],
      "excludeGroups": ["USERGROUP4"],
      "includeRoles": [],
      "excludeRoles": []
    },
    "userRiskLevels": [],
    "signInRiskLevels": [],
    "clientAppTypes": ["all"],
    "servicePrincipalRiskLevels": []
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"],
    "customAuthenticationFactors": [],
    "termsOfUse": []
  },
  "sessionControls": {
    "cloudAppSecurity": { "cloudAppSecurityType": "block", "isEnabled": true },
    "signInFrequency": {
      "value": 2,
      "type": "hours",
      "authenticationType": "primaryAndSecondaryAuthentication",
      "frequencyInterval": "timeBased",
      "isEnabled": true
    }
  }
}

Expected output (based on the sample JSONs above):
KeyName, Old Value, New Value
displayName, "ANY DISPLAY NAME", "ANY DISPLAY NAME 1"
modifiedDateTime, "2025-02-05T10:30:40.0351794+00:00", "2025-02-06T10:30:40.0351794+00:00"
users."includeGroups", ["USERGROUP1","USERGROUP2"], ["USERGROUP1","USERGROUP2","USERGROUP3"]
users."excludeGroups", [], ["USERGROUP4"]
sessionControls."cloudAppSecurityType", "monitor", "block"
signInFrequency."value", 1, 2

Thanks
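A hedged sketch of a top-level diff using Splunk's eval JSON functions (available in recent versions); the field names json1 and json2 are assumptions for where the raw JSON strings live, nested differences surface at the parent key (drilling down means repeating the same walk on child objects or flattening first), and the mvexpand here is over only a handful of top-level keys, so memory should not be a concern:

| eval keys=json_array_to_mv(json_keys(json1))
| mvexpand keys
| eval old_value=json_extract(json1, keys), new_value=json_extract(json2, keys)
| where old_value != new_value
| rename keys AS KeyName
| table KeyName old_value new_value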
Hi Splunkers,

I am currently working on a development activity with the Splunk React app and need to get the list of timezones from Splunk into my app. From my research, I found that the list of timezones is located in a file called TimeZones.js at the following path:

C:\Program Files\Splunk\quarantined_files\share\splunk\search_mrsparkle\exposed\js\collections\shared\TimeZones.js

Questions:
1. How can I retrieve the full list of timezones from the TimeZones.js file?
2. Is there a way to get the timezones via a REST API?
3. Any other suggestions or thoughts on how to achieve this would be appreciated.

Thanks in advance!
Sanjai
Hi there team, what APIs are currently included in the Cisco ThousandEyes Add-on for Splunk? Is there a plan to add more APIs in the future? YP
Apart from https://community.splunk.com/t5/Splunk-Enterprise/Linear-memory-growth-with-Splunk-9-4-0-and-above/m-p/712550#M21712, some Splunk instances (UF/HF/SH/IDX) might see higher memory usage after the upgrade.

9.4.0 introduced a new active channel cache with a cache TTL of 3600 seconds:

active_eligibility_age = <integer>
* The time, in seconds, after which splunkd removes an idle input channel from the active channel cache to free up memory.
* Default: 3600

Before 9.4.0, splunkd used an inactive channel cache with a cache TTL of 330 seconds. It is not used anymore:

inactive_eligibility_age_seconds = <integer>
* Time, in seconds, after which an inactive input channel will be removed from the cache to free up memory.
* Default: 330

Because of the higher active channel cache TTL, the splunkd memory footprint might be larger on some deployments. In limits.conf, reduce the active channel cache TTL to 330 (from 9.4.2 onwards, the default is 330):

[input_channels]
active_eligibility_age = 330
Hi Team,

I want a dashboard that shows API stats.

1. No. of requests: how do I get the total count of requests made, based on the date range selected? Below is my Splunk log for index=* source IN (*):

{
  event: {
    body: null
    httpMethod: GET
    path: /data/v1/name
    queryStringParameters: {
      identifier: 106
    }
    requestContext: {
      authorizer: {
        integrationLatency: 0
        principalId: some@example.com
      }
      domainName: domain
    }
    domainName: domain
    resource: /v1/name
  }
  msg: data:invoke
}

2. Response time: how do I get the total count for response time, based on the date range selected? Below is the Splunk log format:

{
  client: Ksame@example.com
  domain: domain
  entity: name
  msg: responseTime
  queryParams: {
    identifier: 666
  }
  requestType: GET
  responseTime: 114
}

I have only the above two logs in Splunk. How do I get the stats below?

3. Requests per minute (count of requests processed by an API service per minute).
4. Passed SLA % (percentage of service requests that passed service-level agreement parameters, including response time and uptime).
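Hedged sketches of the panel searches this usually takes; the index name, the event filters, and the 500 ms SLA threshold are all assumptions to replace with real values.

Total requests and requests per minute (points 1 and 3), counting the data:invoke events; the overall count follows the dashboard's time-range picker:

index=your_index msg="data:invoke"
| timechart span=1m count AS requests_per_min

Response time and passed-SLA percentage (points 2 and 4), from the responseTime events:

index=your_index msg=responseTime
| eval passed=if(responseTime<=500, 1, 0)
| stats count AS total, avg(responseTime) AS avg_response_ms, sum(passed) AS passed
| eval passed_sla_pct=round(passed/total*100, 2)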
When I try to run "./splunk start" it says "cannot execute binary file: Exec format error". I'm in the bin directory running as the root user; I tried as the splunk fwd user, and I also tried "splunk start" in the bin directory, but I have the same issue. Does anyone know how to resolve this?
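That error usually means the binary was built for a different CPU architecture than the host (for example, an x86_64 Splunk package on an ARM machine). A quick check, offered as a sketch:

uname -m          # architecture of the host, e.g. x86_64 or aarch64
file ./splunk     # architecture the installed splunk binary was built for

If the two don't match, reinstalling the package built for the host's architecture is the usual fix.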
I am trying to create an Azure connection in my Splunk install in Azure Government using the Microsoft Cloud Services add-on. When I create a new Azure account and provide the client and tenant IDs and the client secret, as directed by the documentation, it fails with the error: In handler 'passwords': cannot parse secret JSON: Unexpected EOF. Looking closely, my suspicion is that it cannot handle the special characters in these secret keys, which include a tilde, dash, underscore, and period. I have tried keys with only one or two of those special characters and they still fail, and I am unable to create a key that doesn't have at least one of them. I am looking for guidance on how to proceed: if I can't create the account, I can't connect my Event Hub and ingest into Splunk.
Hi everyone,

We are pulling firewall data from a Storage Account containing several categories. One specific category, AZFWDnsQuery, needs to be dropped. I tested the regex in search as well as on regex101; it successfully matches only the events with this category. But once deployed, Splunk starts dropping all events from this input, including events in other categories that do not match the regex.

Sample events:

{ "time": "2025-02-27T18:46:08.307710+00:00", "resourceId": "/SUBSCRIPTIONS/xxxxxx/xxx/Path", "properties": {"SourceIp":"x.x.x.x","SourcePort":25208,"QueryId":51787,"QueryType":"A","QueryClass":"IN","QueryName":"google.com","Protocol":"udp","RequestSize":48,"DnssecOkBit":false,"EDNS0BufferSize":512,"ResponseCode":"NOERROR","ResponseFlags":"qr,rd,ra","ResponseSize":94,"RequestDurationSecs":0.007257565,"ErrorNumber":0,"ErrorMessage":""}, "category": "AZFWDnsQuery"}
{ "time": "2025-02-27T18:46:08.307329+00:00", "resourceId": "/SUBSCRIPTIONS/xxxxxx/xxx/Path", "properties": {"SourceIp":"x.x.x.x","SourcePort":62730,"QueryId":16828,"QueryType":"A","QueryClass":"IN","QueryName":"google.com","Protocol":"udp","RequestSize":35,"DnssecOkBit":false,"EDNS0BufferSize":512,"ResponseCode":"NOERROR","ResponseFlags":"qr,rd,ra","ResponseSize":68,"RequestDurationSecs":0.012227477,"ErrorNumber":0,"ErrorMessage":""}, "category": "AZFWDnsQuery"}
{ "time": "2025-02-27T18:46:08.307262+00:00", "resourceId": "/SUBSCRIPTIONS/xxxxxx/xxx/Path", "properties": {"SourceIp":"x.x.x.x","SourcePort":45452,"QueryId":25241,"QueryType":"A","QueryClass":"IN","QueryName":"google.com","Protocol":"udp","RequestSize":35,"DnssecOkBit":false,"EDNS0BufferSize":512,"ResponseCode":"NOERROR","ResponseFlags":"qr,rd,ra","ResponseSize":68,"RequestDurationSecs":0.008439891,"ErrorNumber":0,"ErrorMessage":""}, "category": "AZFWDnsQuery"}
{ "time": "2025-02-27T18:46:08.307129+00:00", "resourceId": "/SUBSCRIPTIONS/xxxxxx/xxx/Path", "properties": {"SourceIp":"x.x.x.x","SourcePort":14846,"QueryId":3916,"QueryType":"A","QueryClass":"IN","QueryName":"google.com","Protocol":"udp","RequestSize":35,"DnssecOkBit":false,"EDNS0BufferSize":512,"ResponseCode":"NOERROR","ResponseFlags":"qr,rd,ra","ResponseSize":68,"RequestDurationSecs":0.009026804,"ErrorNumber":0,"ErrorMessage":""}, "category": "AZFWDnsQuery"}

Regex:

\"category\":\s\"AZFWDnsQuery\"

Here is how props.conf and transforms.conf are configured:

[sourcetype]
TRANSFORMS-null = DropFirewallEvents

[DropFirewallEvents]
REGEX = _raw=\"category\":\s\"AZFWDnsQuery\"
DEST_KEY = queue
FORMAT = nullQueue

What could I be doing wrong here for Splunk to drop every event from this input? Thanks
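For comparison, the commonly documented shape of a nullQueue transform applies REGEX directly against _raw, with no field prefix; a sketch of that form, offered as a reference rather than a confirmed fix for the behavior described:

props.conf:

[sourcetype]
TRANSFORMS-null = DropFirewallEvents

transforms.conf:

[DropFirewallEvents]
REGEX = "category":\s*"AZFWDnsQuery"
DEST_KEY = queue
FORMAT = nullQueue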
Hello, I'm trying to pull only a specific value from the msgTxt log. In the log below, the example is 2024. This value changes and could be one digit or up to six digits.

msgTxt = xxiskServicxxxapper - MxxeNext completed in 2024 ms. (request details: environment: Production | desired services: BusixxxsOwnexxxerritory | property type: Commercial | address: x RxxxDANx DR , xxxSHFIELD , xx 02xx0)

Below is the search I'm trying to use, but it's not working. Any help would be appreciated.

| eval msgTxt=" msgTxt: xxiskServicxxxapper - MxxeNext completed in 2024 ms. (request details: environment: Production | desired services: BusixxxsOwnexxxerritory*"
| rex "in=(?<in>\w+)."
| stats count by in
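A sketch of a rex that captures just the millisecond value; the base search is a placeholder to replace with the real index/source filter, and \d{1,6} covers the one-to-six-digit range described:

index=your_index "completed in"
| rex field=msgTxt "completed in (?<duration_ms>\d{1,6}) ms"
| stats count by duration_ms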