
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Good afternoon! We have a problem in our workflow: a part of the customer's system, which is not developed by us, is unable to fill in the following fields of the business messages that go into Splunk:

"event": "Test1", "sourcetype": "testsystem-script707", "fields":

At the moment this system can produce the content intended for the "fields" field ("fields": {content_data}), but it does not populate the field itself. The payload it sends looks like this:

--data-raw '{
  "messageId": "<ED280816-E404-444A-A2D9-FFD2D171F928>",
  "srcMsgId": "<rwfsdfsfqwe121432gsgsfgdg>",
  "correlationMsgId": "<rwfsdfsfqwe135432gsgsfgdg>",
  "baseSystemId": "<SDS-IN>",
  "routeInstanceId": "<TPKSABS-SMEV>",
  "routepointID": "<1.SABS-GIS.TO.KBR.SEND>",
  "eventTime": "<1985-04-12T23:20:50>",
  "messageType": "<ED123>",
  "GISGMPResponseID": "<PS000BA780816-E404-444A-A2D9-FFD2D1712345>",
  "GISGMPRequestID": "<PS000BA780816-E404-444A-A2D9-FFD2D1712344>",
  "tid": "<ED280816-E404-444A-A2D9-FFD2D171F900>",
  "PacketGISGMPId": "<7642341379_20220512_123456789>",
  "result.code": "<400>",
  "result.desc": "<Ошибка: абвгд>"
}'

Unfortunately, this subsystem is part of our system, and while it is being finalized we also need to set up monitoring for it. Tell me, can we configure Splunk to accept and process such minimal messages?
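A minimal sketch of one possible answer, assuming the messages arrive via the HTTP Event Collector: the raw endpoint (services/collector/raw) accepts the payload as-is, and metadata such as sourcetype can be passed as query parameters or configured as defaults on the HEC token, so the sender does not have to build the "event"/"fields" envelope itself. The host name and token below are placeholders.

curl -k "https://splunk.example.com:8088/services/collector/raw?sourcetype=testsystem-script707" \
     -H "Authorization: Splunk <hec-token>" \
     --data-raw '{ "messageId": "<ED280816-E404-444A-A2D9-FFD2D171F928>", "result.code": "<400>" }'

With a search-time setting such as KV_MODE = json on that sourcetype in props.conf, the JSON keys should still be extractable even without the "fields" wrapper.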
Hi, I just wanted to ask if there are specific ways or CLI commands to check whether my log sources adhere to the CIM. I've checked the link below, but I need assistance outside of the CIM add-on. https://docs.splunk.com/Documentation/CIM/5.0.2/User/UsetheCIMtovalidateyourdata Thank you in advance. Mikhael
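A minimal sketch of one search-based approach, assuming the CIM data models are installed: compare how many of your events map into the relevant data model, and how often the expected CIM fields are actually populated. The data model, index, and field names below are examples to adjust.

| tstats count from datamodel=Authentication where sourcetype=<your_sourcetype> by sourcetype

index=<your_index> sourcetype=<your_sourcetype>
| fieldsummary
| search field IN (action, app, src, dest, user)
| table field count distinct_count

If the tstats count is far below the raw event count for the same time range, the source is probably not mapping cleanly into the model.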
Is there a systemd unit file for Splunk SOAR? The problem I'm facing is that I cannot stop and start Splunk SOAR using the start and stop bash scripts that come with the Splunk SOAR installation. PostgreSQL is giving an error: when I stop SOAR, Postgres is still running, so I'm unable to start SOAR again after stopping it. Can anyone give me an insight into why this is happening, or into the service file for Splunk SOAR?
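For reference, a rough sketch of what a unit file could look like; this is not an official Splunk SOAR unit, and the install path, script names, and run-as user below are assumptions that must be matched to your installation.

[Unit]
Description=Splunk SOAR (sketch; path, scripts, and user are assumptions)
After=network.target

[Service]
Type=forking
# Adjust to the actual SOAR install directory and its start/stop scripts
ExecStart=/opt/soar/bin/start_phantom.sh
ExecStop=/opt/soar/bin/stop_phantom.sh
User=phantom
TimeoutStartSec=600
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target

If Postgres is left running after a stop, checking for leftover postgres processes owned by the SOAR user before restarting is a reasonable first step.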
log: {"timeMillis":"1667091964927","timestamp":"2022-10-30T01:06:04.927Z","thread":"reactor-http-epoll-3","level":"INFO","logger":"com.status","message":"Business Key=null, Publishing status : [ client_req_id fd6e03ea-99ed-4f0c-b52f-db7ceca0adc8, event_date 2022-10-30T01:06:04.927Z, event_name STORE_EXIT_AUDIT_RESULT, request_source RECEIPT_AUDIT_SERVICE ] message {"receipts":[{"tc_id":"20420249182009622534","tc_date":"2022-10-30T01:06:04.927Z","tc_store_id":"92","tc_operator_number":"11","tc_register_number":"11","tc_audit_status":"SUCCESS","tc_previously_audited":false,"tc_previously_audited_date":null}],"audit_result":{"audit_date":"2022-10-30T01:06:03.846Z","audit_store_id":"92","audit_associate_id":"103956635","audit_result":"Pass","audit_failure_reason":null,"scanned_items_number":3,"items_found":[],"items_not_found":[]}}","traceId":"", "spanId":"", "parentSpanId":"", "traceparent":"", "tracestate":"", "message_code":"", "flag":"", "tenant": "", "businessKey":"", "ringId":"", "locationNumber":"", "dc":""}

I want to see the store exit result in a table with the operator number, store id, register number, audit result, scanned items number, items found, items not found, and also the audit date, tc_id, and timestamp. Can anyone please help?
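A rough sketch of one way to get that table, assuming the event is indexed as shown above: spath can pull the top-level JSON fields, but since the nested message is embedded inside the outer string, extracting the inner fields with rex is often more reliable. The field paths are taken from the sample and may need adjusting, and the index/sourcetype are placeholders.

index=<your_index> sourcetype=<your_sourcetype> event_name STORE_EXIT_AUDIT_RESULT
| spath input=_raw path=timestamp output=timestamp
| rex field=_raw "\"tc_id\":\"(?<tc_id>[^\"]+)\""
| rex field=_raw "\"tc_store_id\":\"(?<store_id>[^\"]+)\""
| rex field=_raw "\"tc_operator_number\":\"(?<operator_number>[^\"]+)\""
| rex field=_raw "\"tc_register_number\":\"(?<register_number>[^\"]+)\""
| rex field=_raw "\"audit_result\":\"(?<audit_result>[^\"]+)\""
| rex field=_raw "\"audit_date\":\"(?<audit_date>[^\"]+)\""
| rex field=_raw "\"scanned_items_number\":(?<scanned_items_number>\d+)"
| rex field=_raw "\"items_found\":\[(?<items_found>[^\]]*)\]"
| rex field=_raw "\"items_not_found\":\[(?<items_not_found>[^\]]*)\]"
| table timestamp tc_id store_id operator_number register_number audit_result audit_date scanned_items_number items_found items_not_found

One note: if the indexed events actually contain curly quotes like the paste above, the rex patterns need to match those characters instead of straight quotes.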
I am able to index files from my local C:/ drive in Splunk, but I am unable to index data from an X:/ drive (SharePoint path) folder through inputs.conf. Note: the X:/ drive contains the mounted path of the SharePoint location. Any help would be appreciated! Thanks, Praveena
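For reference, a minimal monitor stanza sketch; the paths, index, and sourcetype are placeholders. One thing worth checking: mapped drive letters exist per user session, so the account running the forwarder may not see X:/ at all, in which case monitoring the UNC path (and running splunkd as an account with access to the share) is the usual workaround. That is an assumption to verify against your setup.

[monitor://X:\path\to\folder]
index = your_index
sourcetype = your_sourcetype
disabled = 0

# If the Splunk service account cannot see the mapped drive, try the UNC path:
[monitor://\\sharepoint-server\share\path\to\folder]
index = your_index
sourcetype = your_sourcetype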
Dears, we have 3 SHC servers, and the error below appears in the health check icon on all SHC servers (you can see the screenshot).

Captain Connection
Root Cause(s): Classic bundle replication in Search Head Cluster has failed.
Unhealthy Instances: XXXXX

Please help us with this. Best Regards,
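A couple of hedged starting points for troubleshooting, assuming CLI access to the members; the credentials are placeholders. The first command shows the captain/member state, and the search surfaces recent bundle replication errors in the internal logs.

./splunk show shcluster-status -auth admin:<password>

index=_internal sourcetype=splunkd (component=DistributedBundleReplicationManager OR component=ConfReplication) log_level IN (WARN, ERROR)
| stats count latest(_time) as last_seen by host component log_level
| convert ctime(last_seen)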
I'm making a union of two searches, and now I'm trying to subtract the two variables.

| set union [search "Output MeshRequestId" | stats avg(time) as avgTimeOut ] [search "Input MeshRequestId" | stats avg(time) as avgTimeInt ] | stats count(eval(diff=avgTimeOut-avgTimeInt)) as TimeInThrottler

Everything is fine before the last stats count: I can see the two avg variables being defined. But when I do the stats count(eval(diff=avgTimeOut-avgTimeInt)) as TimeInThrottler it always returns 0. Any idea what is wrong?
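One likely explanation: after | set union, avgTimeOut and avgTimeInt live on two separate rows, so avgTimeOut-avgTimeInt is null on every row and the count stays 0. A sketch of an alternative, assuming both searches cover the same time range, is to put the two averages on one row with appendcols and subtract with eval (field names reused from the post):

"Output MeshRequestId"
| stats avg(time) as avgTimeOut
| appendcols
    [ search "Input MeshRequestId" | stats avg(time) as avgTimeInt ]
| eval TimeInThrottler = avgTimeOut - avgTimeInt
| table avgTimeOut avgTimeInt TimeInThrottler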
The log below is in string/text format; I need to separate each field and create a table with columns for operator, register, store, timestamp, associate_id, and audit_result.

log: {"timeMillis":"166665", "timestamp":"2022-10-16", "level":"INFO","logger":"com.abc", "message":"Business Key=null, Publishing status: [ client_req_id dc366, event_date 2022-10-16, event_name EXIT ], message {"receipts":[{"id":"150", "date":"2022-10-24", "store":"99", "operator":"48", "register":"48","status":"pass",}], "result": {"date":"2022-10-16","store":"99","associate_id":"92","result":"Pass","failure_reason":null,"scanned_items":1, "items_found":[],"items_not_found":[]}}"

I tried a query with spath input=, path=, output= and | table id, operator, register, store, timestamp, but it doesn't work.
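A sketch of one workaround, assuming the event looks like the sample: once the inner message is embedded inside the outer quotes the event is no longer valid JSON, so spath on the whole thing tends to fail; pulling the fields out with rex is one option (field names taken from the sample, index is a placeholder):

index=<your_index> "Publishing status"
| rex field=_raw "\"operator\":\"(?<operator>[^\"]+)\""
| rex field=_raw "\"register\":\"(?<register>[^\"]+)\""
| rex field=_raw "\"store\":\"(?<store>[^\"]+)\""
| rex field=_raw "\"timestamp\":\"(?<timestamp>[^\"]+)\""
| rex field=_raw "\"associate_id\":\"(?<associate_id>[^\"]+)\""
| rex field=_raw "\"result\":\"(?<audit_result>[^\"]+)\""
| table timestamp store operator register associate_id audit_result

If the indexed events contain curly quotes rather than straight quotes, the patterns would need to match those characters.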
For 3 Spring Boot applications (2 classic servlet and one spring cloud gateway in front) I have enabled OTEL and see the first metrics arriving successfully in my Controller. Due to whatever reasons it seems that the Business Transactions only show that the app is NodeJS & the name of the Transaction is HTTP Post. I don't see the different routes that are available via the gateway & routed to the 2 classic spring boot applications. For a short test I also reconfigured the collector to log every request instead of forwarding it to the controller, please see attached details about the request: Resource attributes: -> appdynamics.controller.account: Str(acme) -> appdynamics.controller.host: Str(acme.saas.appdynamics.com) -> appdynamics.controller.port: Int(443) -> service.name: Str(gateway) -> service.namespace: Str(acme-namespace) -> telemetry.sdk.language: Str(java) -> telemetry.sdk.name: Str(opentelemetry) -> telemetry.sdk.version: Str(1.16.0) ScopeSpans #0 ScopeSpans SchemaURL: InstrumentationScope org.springframework.cloud.sleuth Span #0 Trace ID : 5fea943a828ccd2fff3072992d4e5ab6 Parent ID : ID : 5a8b847d55a33279 Name : HTTP GET Kind : SPAN_KIND_SERVER Start time : 2022-10-29 17:19:45.595667 +0000 UTC End time : 2022-10-29 17:19:45.640038791 +0000 UTC Status code : STATUS_CODE_ERROR Status message : Attributes: -> http.host: Str(localhost:8881) -> http.method: Str(GET) -> http.path: Str(/api/acme/endpoint) -> http.route: Str(/api/acme/endpoint) -> http.scheme: Str(http) -> http.status_code: Int(500) -> http.target: Str(/api/acme/endpoint) -> http.user_agent: Str(curl/7.79.1) -> net.host.name: Str(localhost) -> net.transport: Str(http) Events: SpanEvent #0 -> Name: exception -> Timestamp: 2022-10-29 17:19:45.637675125 +0000 UTC -> DroppedAttributesCount: 0 -> Attributes:: -> exception.message: Str(Connection refused: localhost/[0:0:0:0:0:0:0:1]:8884) -> exception.stacktrace: Str(io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/[0:0:0:0:0:0:0:1]:8884 Suppressed: The stacktrace has been enhanced by Reactor, refer to additional information below: Error has been observed at the following site(s): *__checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authorization.AuthorizationWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authorization.ExceptionTranslationWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authentication.logout.LogoutWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.savedrequest.ServerRequestCacheWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.context.SecurityContextServerWebExchangeWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authentication.AuthenticationWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.context.ReactorContextWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.header.HttpHeaderWriterWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.config.web.server.ServerHttpSecurity$ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain] Original Stack Trace: Caused by: java.net.ConnectException: Connection 
refused at java.base/sun.nio.ch.Net.pollConnect(Native Method) at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:833) ) -> exception.type: Str(io.netty.channel.AbstractChannel.AnnotatedConnectException) Any idea?
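For reference, a minimal sketch of the collector change described above (switching to a logging exporter so every request is printed instead of forwarded to the controller); the exporter name and pipeline layout are assumptions to adapt to the existing collector configuration.

exporters:
  logging:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]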
Recently I upgraded our Splunk Enterprise version from 9.0.0 to 9.0.1 on all our master, search head, and indexer nodes. The order we updated was indexer - search head - master. Once the upgrade was done, we weren't able to bring the Splunk cluster back up; the indexer node keeps failing with the error below:

10-27-2022 23:02:27.083 +0000 ERROR CMSlave [91467 MainThread] - event=getActiveBundle failed with err="invalid active_bundle_id=. Check the cluster manager for bundle validation/errors or other issues." even after multiple attempts, Exiting..
10-27-2022 23:02:27.106 +0000 ERROR loader [91467 MainThread] - Failed to download bundle from the cluster manager, err="invalid active_bundle_id=. Check the cluster manager for bundle validation/errors or other issues.", Won't start splunkd.

There are no errors in the master & search head nodes' logs. Please help me fix this bundle validation error.
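A few hedged things to check on the cluster manager, since the peers are complaining that no valid active bundle id exists; the credentials are placeholders and the commands are run from $SPLUNK_HOME/bin on the manager node:

./splunk validate cluster-bundle --check-restart -auth admin:<password>
./splunk show cluster-bundle-status -auth admin:<password>
# If validation passes, pushing the bundle should create a fresh active bundle id:
./splunk apply cluster-bundle -auth admin:<password>

If validation fails, the offending app or .conf file is usually named in the status output or in the manager's splunkd.log.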
My requirement is to use the results of a sub-search together with the results of the main search, but the sourcetype/source is different for the main search and the sub-search. I'm not getting the expected results when using the format command or $field_name.

inputlookup host.csv - consists of the list of hosts to be monitored

main search:
index=abc source=cpu sourcetype=cpu CPU=all [| inputlookup host.csv ] | eval host=mvindex(split(host,"."),0) | stats avg(pctIdle) AS CPU_Idle by host | eval CPU_Idle=round(CPU_Idle,0) | eval warning=15, critical=10 | where CPU_Idle<=warning | sort CPU_Idle

sub-search:
[search index=abc source=top | dedup USER | return $USER]
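A sketch of one way to make the sub-search usable, assuming the cpu events have a field (assumed here to be user) that corresponds to USER in the top data: a subsearch only filters the outer search when it emits field=value pairs under a field name the outer events actually have, which rename/fields (or return user=USER) provide, whereas | return $USER returns bare values with no field name and rarely matches anything.

index=abc source=cpu sourcetype=cpu CPU=all
    [| inputlookup host.csv | fields host ]
    [ search index=abc source=top | dedup USER | rename USER as user | fields user ]
| eval host=mvindex(split(host,"."),0)
| stats avg(pctIdle) AS CPU_Idle by host
| eval CPU_Idle=round(CPU_Idle,0)
| eval warning=15, critical=10
| where CPU_Idle<=warning
| sort CPU_Idle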
Hey Splunkers, I have the following search, but it is not working as expected. What I am trying to achieve: if one of the conditions matches, I will table out some fields.

Condition 1: user_action="Update*", OR
Condition 2: within each 5-minute bucket, any user has accessed more than 400 destinations in the same index, index1.

But it doesn't work. How can I check both conditions in the same search? Thanks in advance!

index=index1 ``` condition 1 ``` ( user_action="Update*" ) OR ``` condition 2 ``` ( [search index=index1 NOT user IN ("system*", "nobody*") | bin _time span=5m | stats values(dest) count by _time, user | where count > 400 ] ) | table _time, user, dest
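A sketch of one way to evaluate both conditions in a single pass over index1, assuming user, dest, and user_action are present on the events; whether the 400 threshold should count events or distinct destinations is a judgment call, and dc(dest) is used here:

index=index1
| bin _time span=5m
| eventstats dc(dest) as dest_count by _time, user
| where like(user_action, "Update%") OR (dest_count > 400 AND NOT (like(user, "system%") OR like(user, "nobody%")))
| table _time, user, dest, user_action, dest_count

The eventstats attaches the per-user, per-bucket destination count to every event, so the where clause can test condition 1 and condition 2 side by side without a subsearch.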
Hello all, we are starting to integrate Splunk into our systems, and in order to make sure everything goes smoothly we want to write a PowerShell script for the installation. We use Splunk Cloud, so we are unsure whether there is a way to use a PowerShell script to install it across our systems. We would appreciate guidance where possible. Thank you
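A rough sketch of the usual pattern, assuming you are rolling out the universal forwarder and pointing it at a deployment server; the MSI file name, paths, and host names below are placeholders, and for Splunk Cloud the forwarder additionally needs the Splunk Cloud universal forwarder credentials app installed so it can send data to your stack.

# Silent install of the universal forwarder via msiexec from PowerShell (sketch)
$msi = "C:\Temp\splunkforwarder-9.x.x-x64-release.msi"
Start-Process msiexec.exe -Wait -ArgumentList @(
    "/i", "`"$msi`"",
    "AGREETOLICENSE=Yes",
    "DEPLOYMENT_SERVER=`"ds.example.com:8089`"",
    "/quiet"
)

Pushing this script out with whatever endpoint management tooling you already use (SCCM, Intune, a GPO startup script) is how it typically reaches all systems.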
Hello All, Basic questions on using table row highlighting:

1. Do I need to have an "app" to use the various JavaScript and jQuery scripts, etc.?
2. A quote from the install area: "you need to save the libraries directly into your app directory." I do not have an "app" in this path: $SPLUNK_HOME/etc/apps/[YourApp]/appserver/static/table_cell_highlighting.js

I see where all these scripts and libraries are stored in the simple_xml_examples path. I only have a dashboard which returns/shows a list of users and hosts, and whether or not the user is logged on. It is just a dashboard, not an "app". There is a "search" app in the path above ($SPLUNK_HOME/etc/apps); would I create a directory there called "my_app", or something like that, and put all the jQuery scripts, the CSS, and the JS in that directory? The install process for the dashboard examples is not really clear to me. Thanks, eholz1
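A sketch of the minimal layout, assuming you create a new app directory (the name my_app is arbitrary); dashboards saved in that app can then reference the script and stylesheet from their root element. If the dashboard currently lives in the search app, the files can also go under $SPLUNK_HOME/etc/apps/search/appserver/static/ instead.

$SPLUNK_HOME/etc/apps/my_app/
    default/
        app.conf
    appserver/
        static/
            table_cell_highlighting.js
            table_decorations.css

<dashboard script="table_cell_highlighting.js" stylesheet="table_decorations.css">
    ...
</dashboard>

A restart, or bumping the web server assets, is usually needed before Splunk Web picks up new static files.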
Hi all, I have the query below:

index=advcf request=* host=abgc host=efgh host=jhty host=hjyu host=kjnbh

I want an email alert to trigger when data is not coming in from any one of the hosts, and I want to see that host name in a table format in the email. How can I do that?
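A sketch of a common pattern for "host stopped sending" alerts, assuming the five host names from the post: the expected hosts are appended with a zero count, so any host whose total stays 0 over the alert's time range is the one that went quiet. Saving this as an alert that triggers when the number of results is greater than 0, with the results table included in the email, puts the host names in the notification.

| tstats count where index=advcf host IN ("abgc", "efgh", "jhty", "hjyu", "kjnbh") by host
| append
    [| makeresults
     | eval host=split("abgc,efgh,jhty,hjyu,kjnbh", ",")
     | mvexpand host
     | eval count=0
     | fields host count ]
| stats sum(count) as count by host
| where count=0
| table host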
I do know that you need a Deployer to send apps to a Search Head Cluster, but there is one thing I cannot find an answer to: can I send apps from the Deployment Server to the Deployer? I found this answer under "Deployment Server vs. Deployer": "A deployment server is used to deploy apps to forwarders (and technically could be used to deploy apps to other Splunk servers as well, but with a number of caveats)." Since the Deployer is a Splunk server, I guess so. Otherwise it would be hard to keep applications identical across the various servers.
Hello all, this is my first post here. I have been learning Splunk over the past few months and I am loving it. I am running into an interesting issue: I am using the transaction command to group events together. An example of the queries is below:

index::web earliest=-30m | transaction maxspan=3s

Result:
STARTED RECORD UPDATE: {"update"=>"contacts", "ordered"=>true, "updates"=>[{"q"=>{"_id"=>BSON::ObjectId('123456789')}
COMPLETED record update {"status": 200}

But if I append a table command to display the _raw field, some of the characters are automatically encoded, as shown below:

index::web earliest=-30m | transaction maxspan=3s | table _raw

Result:
STARTED RECORD UPDATE: {&quot;update&quot;=&gt;&quot;contacts&quot;, &quot;ordered&quot;=&gt;true, &quot;updates&quot;=&gt;[{&quot;q&quot;=&gt;{&quot;_id&quot;=&gt;BSON::ObjectId('123456789')}
COMPLETED record update {&quotstatus&quot: 200}

I tried recreating this behavior by using makeresults, but in that case it works as I would expect. Does anyone have an idea of why this might be happening? Thanks, Julio
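A small diagnostic sketch, assuming the same search as above: checking whether the HTML entities are actually present in _raw distinguishes "they were indexed that way" from "they are being escaped when the table renders", which narrows down where to look next.

index::web earliest=-30m
| transaction maxspan=3s
| eval entity_check=if(like(_raw, "%&quot;%"), "entities stored in event", "entities added at render time")
| table entity_check _raw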
I am trying to understand what would cause a variance between the volume used in our quota and the size of the logs it is ingesting. I have been failing to find an explanation, and am wondering if anybody else has figured out the reason for the variance.

As an example, I am sending syslog from a wifi AP to our syslog server, which is running a UF. On the syslog server, on Oct 27 I am getting 484 lines of logs, and those logs have 91,462 bytes in the log file. In Splunk, if I search for these events using this search:

index=network sourcetype=wifi | eval eventSize=len(_raw) | stats sum(eventSize) count by sourcetype

I also get 484 events, but using the len(_raw) function I get a total character length of 90,978. I would assume these numbers should be pretty close, and they are. Now when I look in the _internal index at the *license_usage.log metrics for the wifi sourcetype, using this query:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage st=wifi | stats sum(b) as bytes by st

I get a different sum of 103,794 bytes. I am trying to determine how this could be, or how it makes sense. This isn't a large difference (roughly 12%), but it's spread out over every index and sourcetype, and combined it equals a large part of our license quota that I cannot account for.

Another example is our firewall logs, which is one of our largest indexes. For Oct 27, len(_raw) = 62,899,298,079 chars in length, while *license_usage.log = 81,139,209,296. This is a difference of roughly 22% over a much larger percentage of our quota. There are other large indexes with similar variance. I only have one production cluster, and I don't have a great way to verify this would be the same result on someone else's cluster. Do other people have this issue? I am trying to find a logical reason for why this would be the case.

Things that I have tried to track down:

1) Using the License Usage dashboards in the Monitoring Console of our LM: on License Usage - Today, I see numbers that align with the len(_raw) metric. Using the Historical License Usage dashboard, if I switch to the "NoSplit" parameter, I get numbers that also align with the len(_raw) metrics. If I change the Historical License Usage parameter to "SplitByIndex", I get numbers that align with the *license_usage.log metrics.

2) I have a support case open to try to understand this difference. My SE told me that the "NoSplit" parameter (which uses the Type=RolloverSummary attribute in its base search) is the correct metric to measure license usage. My support tech has told me that this is false, and that the "SplitByIndex" metric (using type=Usage) is the true count. Based on my manual measurements of the logs on the syslog server, I have to agree with the SE, but I do not have any way to prove my LM is reporting incorrectly.

3) I have looked for duplicates using a variety of searches. Most show a couple of events, but we are talking less than 10, and it's typically isolated to log sources with verbose or debugging output like DNS.

4) I have looked for misconfigured inputs.conf or outputs.conf files. This did yield some results: I found one SH that had multiple outputs.conf files that were cloning some of the data inputs originating from that SH (a WIN!!), but for the syslog wifi and firewall sourcetypes this doesn't seem to be the case.
5) I have reviewed the character encoding that is described here: https://docs.splunk.com/Documentation/Splunk/latest/Data/Configurecharactersetencoding Every props.conf that I can find is set to CHARSET=UTF-8, which as I understand it means Splunk is encoding all ingested logs using UTF-8. I thought this might be the culprit, since higher-order characters can take up multiple bytes in UTF-8, but I do not think this is the case: the syslog logs for wifi only use the lower ASCII characters of UTF-8 (0-127), which I believe take up 1 byte per character. I am making this claim based on https://www.rfc-editor.org/rfc/rfc5424 and on just visually looking at the logs; there is not that much to them. This also aligns with my manual measurement of the characters in the shell and using len(_raw).

Am I missing something here (or not understanding the underlying process)? Are there other root causes I am overlooking? Does my cluster have some sort of issue, maybe with the configuration or architecture? Do others have this similar extra volume that is not easily explained, and this is normal behavior? Anything helps, thanks.
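A small sketch that puts both measurements side by side for one sourcetype and day, reusing the two searches from the post; comparing raw_bytes with licensed_bytes per sourcetype at least shows whether the gap is uniform or concentrated in a few sources (the wifi sourcetype and index are taken from the example above):

index=network sourcetype=wifi earliest=-1d@d latest=@d
| eval eventSize=len(_raw)
| stats sum(eventSize) as raw_bytes count as events by sourcetype
| appendcols
    [ search index=_internal sourcetype=splunkd source=*license_usage.log type=Usage st=wifi earliest=-1d@d latest=@d
      | stats sum(b) as licensed_bytes ]
| eval delta_pct=round((licensed_bytes - raw_bytes) / raw_bytes * 100, 1)

One possible contributor, offered as something to check rather than a confirmed explanation: license metering happens on the data as received at parse time, while len(_raw) measures what ends up stored, so index-time changes (truncation, SEDCMD edits, dropped segments) can make the two diverge.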
Hello. We are trying to upgrade our Splunk UFs from 8.1.9 to 9.0.1. We use a configuration management tool (Puppet) to upgrade our UFs. This has worked fine until the leap to 9.0.1. We are experiencing a timeout during a kvstore migration. This appears to be a known issue, and I employed the solution found here. The solution (or workaround) appears simple: edit server.conf and add this:

[kvstore]
disabled = true

And that works. However, I'm concerned about the consequences of this fix. I'm not familiar with the kvstore or how it works, etc. Is there any harm in using this workaround? Thank you in advance.
Hi, I have developed an app with AppBuilder and am looking to go through the validation process. However, I am getting an error when logging in. I have changed the password several times and am still having the issue; the error response from login is below. I am also unable to get to the support case portal: I've got my account admin looking at why I can't get in, and trying to force my way to support redirects me back to an error page. Is someone around here able to point me in the right direction or turn a knob somewhere?

https://www.splunk.com/404?ErrorCode=18&ErrorDescription=Invalid+account

HTTP/2 400 Bad Request
Date: Fri, 28 Oct 2022 16:55:31 GMT
Content-Type: application/json
Content-Length: 234
Server: nginx/1.20.0
X-Amzn-Requestid: 4d6957db-ac76-4df7-b0fa-bc3c480806ef
X-Amz-Apigw-Id: auZsdE5BPHcFz1Q=
X-Amzn-Trace-Id: Root=1-635c0982-2cd11e0872d8f47027637090

{"errors":"oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"The credentials provided were invalid.\"}","msg":"Failed to authenticate user","status":"error","status_code":400}