All Topics

Hi all, I have installed the latest Splunk SOAR (5.4) on my instance for testing. The default HTTPS port is set to 8443. I tried to force the port to 443 by using the --https-port 443 option during installation. However, I am not able to configure the port to 443, as the installer states that any port from 1 to 1024 requires root access, and I can't run as root on an unprivileged installation. Is there any workaround to serve traffic on 443 instead of 8443?
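A common workaround (a sketch, assuming a Linux host where a one-time sudo action is possible) is to leave SOAR listening on its unprivileged 8443 and redirect inbound 443 to it at the firewall, so the SOAR process itself never needs root:

```
# One-time rule as root: redirect inbound TCP 443 to SOAR's unprivileged 8443
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
```

Users then browse to https://host/ with no port, while SOAR keeps running unprivileged on 8443.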
Which version of the Splunk Universal Forwarder supports AIX 6.1?
Hey Splunkers, can someone please help me with the logic? How can I fine-tune the search below to detect DNS tunnelling? The one here is too noisy; alternatively, let me know if a better SPL is available that can be used instead.     | tstats `security_content_summariesonly` dc("DNS.query") as count from datamodel=Network_Resolution where DNS.message_type=QUERY by DNS.src,DNS.query | rename DNS.src as src DNS.query as message | eval length=len(message) | stats sum(length) as length by src | append [ tstats `security_content_summariesonly` dc("DNS.answer") as count from datamodel=Network_Resolution where DNS.message_type=QUERY by DNS.src,DNS.answer | rename "DNS.src" as src "DNS.answer" as message | eval message=if(message=="unknown","", message) | eval length=len(message) | stats sum(length) as length by src ] | stats sum(length) as length by src | where length > 100     Thanks!
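One way to cut the noise (a sketch, not a drop-in replacement: the 50/100 thresholds and the average-length/cardinality heuristics are illustrative assumptions) is to score average query length and distinct-query count per source, since tunnelling tends to produce many long, unique subdomains rather than a large summed length alone:

```
| tstats `security_content_summariesonly` count from datamodel=Network_Resolution
    where DNS.message_type=QUERY by DNS.src, DNS.query
| rename DNS.src as src, DNS.query as query
| eval qlen=len(query)
| stats sum(qlen) as total_len, avg(qlen) as avg_len, dc(query) as unique_queries by src
| where avg_len > 50 AND unique_queries > 100
```

Tune the two thresholds against a known-good baseline before alerting on this.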
I need to write a regular expression for the log below, and I need to extract the error code, message, and status code: {"log":"28/Oct/2022:22:23:39 +1100 [qtp2012846597-33] [correlationId=00223854-356e-4a24-bc04-4bce27407dfa] ERROR au.com.commbank.pso.payments.reportgen.util.LoggingUtil - Severity = \"ERROR\", DateTimestamp = \"28/Oct/2022 22:23:39\", Error Code = \"REPORT_GENERATION_ERR_0007\", Error Message = \"API call to IDP failed with HTTP Status Code 4XX\", HTTP Status Code = \"500\"     Thanks in advance
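A hedged sketch of an SPL rex for the sample shown (the field names error_code, error_message, and http_status_code are my own choices; the \W* atoms tolerate the escaped quotes around each value):

```
... | rex "Error Code = \W*(?<error_code>\w+)\W*, Error Message = \W*(?<error_message>[\w ]+)\W*, HTTP Status Code = \W*(?<http_status_code>\d+)"
| table error_code, error_message, http_status_code
```

Against the sample line this would yield error_code REPORT_GENERATION_ERR_0007, the full message text, and status 500; if real messages contain punctuation, widen the [\w ] class accordingly.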
I need SPL to extract a field and its value when the data looks like the attached screenshot. I need help extracting a field memUsedPct which should hold the value 82,3.
Hello, I'm having a hard time creating a download button. I have a report scheduled every day. I would like to create a button in the dashboard that downloads that report. I cannot use JS. Can you help me? I found a way to download the query's results, but that way I also display the table, and I don't want to show it, just download the result. Tks Bye Antonio
Hi All, I would like to stats count by destination email and show the result summed per domain (gmail, hotmail). Please recommend an approach.   Best Regards, CR
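A sketch of the usual pattern, assuming the destination address is in a field called recipient (swap in whatever your events actually use):

```
index=your_index sourcetype=your_email_logs
| eval domain=lower(mvindex(split(recipient, "@"), 1))
| stats count AS emails BY domain
| sort - emails
```

split on "@" plus mvindex 1 takes everything after the @, so gmail.com and hotmail.com each become one row with a total.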
Hi, I have a question for my understanding. Kindly help. You had data in the past; one fine day you see there is no data. How do you troubleshoot? Regards, Suman P.
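A typical first step (a sketch; narrow the index filter to the affected data) is to ask when each source last reported, which tells you whether ingestion stopped entirely or only for some sourcetypes:

```
| tstats latest(_time) AS last_seen WHERE index=* BY index, sourcetype
| eval last_seen=strftime(last_seen, "%F %T")
| sort last_seen
```

From there, the usual suspects are forwarder connectivity, blocked queues, and license violations, which show up in splunkd's own logs (index=_internal source=*splunkd.log* log_level=ERROR).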
Good afternoon! We have a problem in the workflow: a part of the customer's system, which is not developed by us, is not able to fill in the following fields of the business messages that arrive in Splunk: "event": "Test1", "sourcetype": "testsystem-script707", "fields": At the moment, this system is able to form the content intended for the "fields" field: "fields": {content_data}, but it does not fill in the field itself. It looks like this: --data-raw '{ "messageId": "<ED280816-E404-444A-A2D9-FFD2D171F928>", "srcMsgId": "<rwfsdfsfqwe121432gsgsfgdg>", "correlationMsgId": "<rwfsdfsfqwe135432gsgsfgdg>", "baseSystemId": "<SDS-IN>", "routeInstanceId": "<TPKSABS-SMEV>", "routepointID": "<1.SABS-GIS.TO.KBR.SEND>", "eventTime": "<1985-04-12T23:20:50>", "messageType": "<ED123>", "GISGMPResponseID": "<PS000BA780816-E404-444A-A2D9-FFD2D1712345>", "GISGMPRequestID": "<PS000BA780816-E404-444A-A2D9-FFD2D1712344>", "tid": "<ED280816-E404-444A-A2D9-FFD2D171F900>", "PacketGISGMPId": "<7642341379_20220512_123456789>", "result.code": "<400>", "result.desc": "<Error: abcde>" }' Unfortunately, this subsystem is part of our system, and while it is being finished, we also need to set up monitoring for it. Tell me, can we configure Splunk to accept and process such minimal messages?
Hi, I just wanted to ask if there are specific ways or CLI commands to check whether my log sources adhere to CIM? I've checked the link below, but I need assistance outside of the CIM add-on. https://docs.splunk.com/Documentation/CIM/5.0.2/User/UsetheCIMtovalidateyourdata Thank you in advance. Mikhael
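Outside the add-on, one search-time spot check (a sketch; Network_Resolution and your_sourcetype are placeholders for the data model and sourcetype you care about) is to pull your data through the data model and see which CIM fields actually populate:

```
| from datamodel:"Network_Resolution"
| search sourcetype=your_sourcetype
| fieldsummary
| table field count distinct_count
```

CIM fields that come back with a count of 0 (or don't appear at all) are the ones your field extractions or tags are not mapping.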
Is there a systemd unit file for Splunk SOAR? The problem I'm facing is that I cannot stop and start Splunk SOAR using the start and stop bash scripts that are available after installing Splunk SOAR. PostgreSQL is giving an error: when I stop SOAR, postgres is still running, so I'm unable to start SOAR again after stopping it. Can anyone give me insight into why this is happening, or into the service file for Splunk SOAR?
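If no unit ships with your install, a minimal sketch of one is below; the paths, user, and script names are assumptions taken from a default /opt/phantom layout, so verify them against your own install directory before using it:

```
# /etc/systemd/system/soar.service — sketch; adjust paths, user, and timeouts
[Unit]
Description=Splunk SOAR
After=network.target

[Service]
Type=forking
User=phantom
ExecStart=/opt/phantom/bin/start_phantom.sh
ExecStop=/opt/phantom/bin/stop_phantom.sh
TimeoutStartSec=600
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target
```

Wrapping the scripts in one unit also ensures systemd tracks the whole process group, which helps when a stop leaves postgres behind.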
log: {"timeMillis":"1667091964927","timestamp":"2022-10-30T01:06:04.927Z","thread":"reactor-http-epoll-3","level":"INFO","logger":"com.status","message":"Business Key=null, Publishing  status : [ client_req_id fd6e03ea-99ed-4f0c-b52f-db7ceca0adc8, event_date 2022-10-30T01:06:04.927Z, event_name STORE_EXIT_AUDIT_RESULT, request_source RECEIPT_AUDIT_SERVICE ] message {"receipts":[{"tc_id":"20420249182009622534","tc_date":"2022-10-30T01:06:04.927Z","tc_store_id":"92","tc_operator_number":"11","tc_register_number":"11","tc_audit_status":"SUCCESS","tc_previously_audited":false,"tc_previously_audited_date":null}],"audit_result":{"audit_date":"2022-10-30T01:06:03.846Z","audit_store_id":"92","audit_associate_id":"103956635","audit_result":"Pass","audit_failure_reason":null,"scanned_items_number":3,"items_found":[],"items_not_found":[]}}" ,"traceId":"", "spanId":"", "parentSpanId":"", "traceparent":"", "tracestate":"", "message_code":"", "flag":"", "tenant": "", "businessKey":"", "ringId":"", "locationNumber":"", "dc":""} I want to see the store exit result with operator number, store id, register number, audit result, scanned items number, items found, items not found, and also audit date, tc_id, and timestamp in a table with those values. Can anyone please help?
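A hedged sketch (the output field names are my own; the sed pass assumes the curly quotes from the paste really are in the indexed data, and rex is used because the inner message is not valid JSON, so spath alone won't reach those keys):

```
... | rex mode=sed field=_raw "s/[“”]/\"/g"
| rex "\"tc_id\":\"(?<tc_id>[^\"]+)\""
| rex "\"tc_store_id\":\"(?<store_id>[^\"]+)\",\"tc_operator_number\":\"(?<operator>[^\"]+)\",\"tc_register_number\":\"(?<register>[^\"]+)\""
| rex "\"audit_date\":\"(?<audit_date>[^\"]+)\""
| rex "\"audit_result\":\"(?<audit_result>[^\"]+)\""
| rex "\"scanned_items_number\":(?<scanned_items>\d+)"
| table _time, tc_id, store_id, operator, register, audit_date, audit_result, scanned_items
```

items_found / items_not_found can be pulled the same way with an extra rex on their bracketed lists.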
I am able to index my local C:/ drive files in Splunk, but unable to index X:/ drive (SharePoint path) folder data through inputs.conf. Note: the X:/ drive contains the mounted path of the SharePoint location. Any help would be appreciated! Thanks, Praveena
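Mapped drive letters are created per user session, so the splunkd service usually cannot see X:. A common route (a sketch; the UNC path, index, and sourcetype are placeholders) is to monitor the UNC path directly and run the Splunk service as an account with access to that share:

```
# inputs.conf — sketch; replace the UNC path with your actual SharePoint share
[monitor://\\sharepoint-host\share\folder]
disabled = 0
index = main
sourcetype = sharepoint_files
```

If the service runs as Local System, switch it to a domain account that can read the share, or the monitor stanza will see nothing.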
Dears, We have 3 SHC servers, and the error below is appearing in the health check icon on all SHC servers; you can see the screenshot. Captain Connection   Root Cause(s): Classic bundle replication in Search Head Cluster has failed. Unhealthy Instances: XXXXX   Please support us with this.   Best Regards,
I'm making a union of two searches, and now I'm trying to subtract the two variables.   | set union [search "Output MeshRequestId" | stats avg(time) as avgTimeOut ] [search "Input MeshRequestId" | stats avg(time) as avgTimeInt ] |stats count(eval(diff=avgTimeOut-avgTimeInt)) as TimeInThrottler   Everything is fine before the last stats; I can see the two avg variables defined. But when I do stats count(eval(diff=avgTimeOut-avgTimeInt)) as TimeInThrottler it always returns 0. Any idea what is wrong?
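The likely cause: set union keeps the two stats results as separate rows, so avgTimeOut and avgTimeInt never occupy the same row, the eval inside count sees a null operand, and the count stays 0. One sketch that puts both averages on one row (appendcols assumes each subsearch returns exactly one result row):

```
search "Output MeshRequestId"
| stats avg(time) AS avgTimeOut
| appendcols [ search "Input MeshRequestId" | stats avg(time) AS avgTimeInt ]
| eval TimeInThrottler = avgTimeOut - avgTimeInt
```

A plain eval then does the subtraction; no second stats is needed.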
The data below is in string text format; I need to separate each field and create a table with columns for operator, register, store, timestamp, associate_id, and audit_result. log: {"timeMillis":"166665", "timestamp":"2022-10-16", "level":"INFO","logger":"com.abc", "message":"Business Key=null, Publishing status: [ client_req_id dc366, event_date 2022-10-16, event_name EXIT ], message {"receipts":[{"id":"150", "date":"2022-10-24", "store":"99", "operator":"48", "register":"48","status":"pass",}], "result": {"date":"2022-10-16","store":"99","associate_id":"92","result":"Pass","failure_reason":null,"scanned_items":1, "items_found":[],"items_not_found":[]}}"   I tried a query like spath input= , path= , output= | table id, operator, register, store, timestamp but it doesn't work.
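A hedged sketch along the lines you tried: normalize the curly quotes first (assuming they really are in the indexed data), then pull the fields with rex, since the inner message is not valid JSON and spath alone will not reach it:

```
... | rex mode=sed field=_raw "s/[“”]/\"/g"
| rex "\"timestamp\":\"(?<timestamp>[^\"]+)\""
| rex "\"store\":\"(?<store>[^\"]+)\",\s*\"operator\":\"(?<operator>[^\"]+)\",\s*\"register\":\"(?<register>[^\"]+)\""
| rex "\"associate_id\":\"(?<associate_id>[^\"]+)\",\s*\"result\":\"(?<audit_result>[^\"]+)\""
| table timestamp, store, operator, register, associate_id, audit_result
```

Each rex is anchored to a short run of adjacent keys from your sample, so if the key order differs in real events, split them into one rex per field.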
For 3 Spring Boot applications (2 classic servlet and one spring cloud gateway in front) I have enabled OTEL and see the first metrics arriving successfully in my Controller. Due to whatever reasons it seems that the Business Transactions only show that the app is NodeJS & the name of the Transaction is HTTP Post. I don't see the different routes that are available via the gateway & routed to the 2 classic spring boot applications. For a short test I also reconfigured the collector to log every request instead of forwarding it to the controller, please see attached details about the request: Resource attributes: -> appdynamics.controller.account: Str(acme) -> appdynamics.controller.host: Str(acme.saas.appdynamics.com) -> appdynamics.controller.port: Int(443) -> service.name: Str(gateway) -> service.namespace: Str(acme-namespace) -> telemetry.sdk.language: Str(java) -> telemetry.sdk.name: Str(opentelemetry) -> telemetry.sdk.version: Str(1.16.0) ScopeSpans #0 ScopeSpans SchemaURL: InstrumentationScope org.springframework.cloud.sleuth Span #0 Trace ID : 5fea943a828ccd2fff3072992d4e5ab6 Parent ID : ID : 5a8b847d55a33279 Name : HTTP GET Kind : SPAN_KIND_SERVER Start time : 2022-10-29 17:19:45.595667 +0000 UTC End time : 2022-10-29 17:19:45.640038791 +0000 UTC Status code : STATUS_CODE_ERROR Status message : Attributes: -> http.host: Str(localhost:8881) -> http.method: Str(GET) -> http.path: Str(/api/acme/endpoint) -> http.route: Str(/api/acme/endpoint) -> http.scheme: Str(http) -> http.status_code: Int(500) -> http.target: Str(/api/acme/endpoint) -> http.user_agent: Str(curl/7.79.1) -> net.host.name: Str(localhost) -> net.transport: Str(http) Events: SpanEvent #0 -> Name: exception -> Timestamp: 2022-10-29 17:19:45.637675125 +0000 UTC -> DroppedAttributesCount: 0 -> Attributes:: -> exception.message: Str(Connection refused: localhost/[0:0:0:0:0:0:0:1]:8884) -> exception.stacktrace: Str(io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 
localhost/[0:0:0:0:0:0:0:1]:8884 Suppressed: The stacktrace has been enhanced by Reactor, refer to additional information below: Error has been observed at the following site(s): *__checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authorization.AuthorizationWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authorization.ExceptionTranslationWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authentication.logout.LogoutWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.savedrequest.ServerRequestCacheWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.context.SecurityContextServerWebExchangeWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.authentication.AuthenticationWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.context.ReactorContextWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.header.HttpHeaderWriterWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.config.web.server.ServerHttpSecurity$ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain] *__checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain] Original Stack Trace: Caused by: java.net.ConnectException: Connection refused at java.base/sun.nio.ch.Net.pollConnect(Native Method) at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:833) ) -> exception.type: Str(io.netty.channel.AbstractChannel.AnnotatedConnectException) Any idea?
Recently I upgraded our Splunk Enterprise version from 9.0.0 to 9.0.1 on all our master, search head & indexer nodes. The order we updated in was indexer - search head - master.  Once the upgrade was successfully done, we weren't able to bring up the Splunk cluster; the indexer node keeps failing with the error mentioned below: 10-27-2022 23:02:27.083 +0000 ERROR CMSlave [91467 MainThread] - event=getActiveBundle failed with err="invalid active_bundle_id=. Check the cluster manager for bundle validation/errors or other issues." even after multiple attempts, Exiting.. 10-27-2022 23:02:27.106 +0000 ERROR loader [91467 MainThread] - Failed to download bundle from the cluster manager, err="invalid active_bundle_id=. Check the cluster manager for bundle validation/errors or other issues.", Won't start splunkd.  There are no errors in the master & search head nodes' logs. Please help me fix this bundle validation error.
My requirement is to utilize the results of the sub-search with the results of the main search, but the sourcetype/source is different for the main search and the sub-search, and I'm not getting the expected results when using the format command or $field_name. inputlookup host.csv consists of the list of hosts to be monitored. Main search:  index=abc source=cpu sourcetype=cpu CPU=all [| inputlookup host.csv ] | eval host=mvindex(split(host,"."),0) | stats avg(pctIdle) AS CPU_Idle by host | eval CPU_Idle=round(CPU_Idle,0) | eval warning=15, critical=10 | where CPU_Idle<=warning | sort CPU_Idle Sub-search: [search index=abc source=top | dedup USER | return $USER]
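return $USER emits bare values, which only constrain the main search if its events happen to contain those exact strings somewhere searchable. Since the sourcetypes differ, the usual fix is to rename the subsearch field to whatever the cpu events actually use, so the subsearch expands to field=value terms. A sketch (the field name user on the cpu side is an assumption; rename to the real one):

```
index=abc source=cpu sourcetype=cpu CPU=all
    [| inputlookup host.csv | fields host ]
    [ search index=abc source=top | dedup USER | fields USER | rename USER AS user ]
| eval host=mvindex(split(host, "."), 0)
| stats avg(pctIdle) AS CPU_Idle BY host
| eval CPU_Idle=round(CPU_Idle, 0)
| eval warning=15, critical=10
| where CPU_Idle<=warning
| sort CPU_Idle
```

The subsearch then renders as ( user="a" OR user="b" ... ), which the main search can actually match.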
Hey Splunkers,   I have the following search, but it is not working as expected. What I am trying to achieve: if one of the conditions matches, I will table out some fields. Condition 1: user_action="Update*", OR Condition 2: within each 5-minute bucket, any user has accessed more than 400 destinations in the same index, index1. But it doesn't work. How can I check both conditions in the same search?  Thanks in advance!     index=index1 ``` condition 1 ``` ( user_action="Update*" ) OR ``` condition 2 ``` ( [search index=index1 NOT user IN ("system*", "nobody*") | bin _time span=5m | stats values(dest) count by _time, user | where count > 400 ] ) | table _time, user, dest
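One way to evaluate both conditions in a single pass is to compute the per-5-minute destination count inline with eventstats, then filter on either condition in one where clause. A sketch (dc(dest) assumes "more than 400 destinations" means distinct destinations, and condition 2's system-account exclusion is kept at the front; move it into the where clause if condition 1 should still see those users):

```
index=index1 NOT user IN ("system*", "nobody*")
| bin _time span=5m
| eventstats dc(dest) AS dest_count BY _time, user
| where like(user_action, "Update%") OR dest_count > 400
| table _time, user, dest, user_action
```

Unlike the subsearch form, eventstats keeps the original events, so the table at the end still has per-event fields to show.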