All Topics

Hi, I had been using the search syntax "rename "_raw" AS errortrace" in my custom search, but one of my app teams needs the _raw data to extract some header info. How can I still pass the _raw field data with the rename syntax in place? Thanks
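A minimal SPL sketch for this, using the errortrace field name from the post: copying the field with eval instead of renaming it keeps _raw available downstream.

... | eval errortrace=_raw

Unlike rename, eval leaves the original _raw field intact, so later commands and other consumers can still read the raw event.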
In the above screenshot, the blue-colored lines should be combined into one event, and the lines above the blue ones into another single event. Please provide line-breaking settings for these events.
Hello, does anyone know if we can use INGEST_EVAL with the lookup command in Splunk Cloud? The problem is that in Splunk Cloud we can only add configuration via a custom app on the SH. INGEST_EVAL works in general, but when I try to use the lookup command I receive an error that the lookup was not found. I guess the problem is that the lookup exists at the SH level, not at the IDX level, but maybe I'm doing something wrong.

fields.conf - ok
props.conf - ok
transforms.conf - ok for a simple INGEST_EVAL without the lookup command

Example from transforms.conf:

[test_lookup_manual2]
INGEST_EVAL = test_lookup=json_extract(lookup("test.csv",json_object("hostname_test",hostname_test), json_array(value)),"value")

The lookup has been added to the lookups directory, permissions are ok, and it is visible in Splunk from every context.
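A sketch of the layout that usually resolves "lookup not found" for ingest-time lookups, assuming a hypothetical app name my_ingest_app: because INGEST_EVAL runs at index time, the CSV must live in an app deployed to the tier that parses the data (the indexers, or a HF in front of them), not only on the search head.

my_ingest_app/lookups/test.csv
my_ingest_app/default/transforms.conf:

[test_lookup_manual2]
INGEST_EVAL = test_lookup=json_extract(lookup("test.csv", json_object("hostname_test", hostname_test), json_array("value")), "value")

Note also that the output-field list in json_array() normally takes quoted field names ("value"); the unquoted value in the original would be read as a field reference. In Splunk Cloud, getting an app onto the indexer tier typically goes through ACS or Splunk Support rather than SH self-service.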
We are currently using the "MIME Decoder Add-on for Cisco ESA" to decode email subjects. It seems that this add-on is not supported in the cloud. Is there another way to decode UTF-8 subjects?
I am getting the below error on HFs: Invalid key in stanza [setup] in "/opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf", line 20: cluster_mode_enabled (value: false). Can anybody tell us why?
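This message usually means the .conf spec shipped with your Splunk or app version no longer declares that key (common after an upgrade), and it is typically a benign warning. A hedged sketch using standard Splunk CLI to confirm where the key comes from:

$SPLUNK_HOME/bin/splunk btool securegateway list setup --debug
$SPLUNK_HOME/bin/splunk btool check --debug

The first command shows which file supplies each setting in the [setup] stanza; the second validates all .conf files against their specs.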
Hi all! We deployed a Splunk cluster on OEL 8, with the latest version, 9.2.2, currently installed. The vulnerability scanner found a vulnerability on all servers related to the compression algorithm: Secure Sockets Layer/Transport Layer Security (SSL/TLS) Compression Algorithm Information Leakage Vulnerability.

Affected objects:
port 8089/tcp over SSL
port 8191/tcp over SSL
port 8088/tcp over SSL

SOLUTION: Compression algorithms should be disabled. The method of disabling varies depending on the application you're running. If you're using a hardware device or software not listed here, you'll need to check the manual or vendor support options.

RESULTS: Compression method is DEFLATE.

What we tried: adding these settings to server.conf in the local directory:

[sslConfig]
allowSslCompression = false
useClientSSLCompression = false
useSplunkdClientSSLCompression = false

Result of the attempt: on some servers it only helped with 8089, on some it helped with 8191, and on some it didn't help at all.

Questions: Has anyone been able to solve this problem? How can I understand why I got different results with the same settings? What other solutions can you suggest? Thank you all in advance!
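One possible explanation for the mixed results, offered as a hypothesis: the three ports belong to different subsystems (8089 is splunkd management, 8088 is HEC, 8191 is the KV store's mongod), and the [sslConfig] stanza primarily governs splunkd, so a single setting may not reach all three listeners. A quick way to verify what each port actually negotiates, using a standard OpenSSL client:

openssl s_client -connect yourhost:8089 </dev/null 2>/dev/null | grep -i compression
openssl s_client -connect yourhost:8088 </dev/null 2>/dev/null | grep -i compression
openssl s_client -connect yourhost:8191 </dev/null 2>/dev/null | grep -i compression

"Compression: NONE" means the issue is mitigated on that port. Note that TLS 1.3 connections never negotiate compression, so results can also differ by protocol version.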
Hi team, we have an active/passive configuration of the DB agent for the DB collectors in the controller. Is there a query we can run against the controller database, rather than checking the controller GUI, to find which host is active and which is passive? Below is a reference snapshot from the Database Agent settings screen, where one host is active and the other passive.
Could I please get assistance on how to resolve this issue and get the AlgoSec App for Security Incident Analysis and Response (2.x) Splunk application working? No changes have been made to any application files. The steps in the AlgoSec installation documentation have been followed: Integrate ASMS with Splunk (algosec.com). Splunk version: Splunk Enterprise 9.2 (Trial License). When installing the application, this error is returned: 500 Internal Server Error

Error details: index=_internal host="*********" source=*web_service.log log_level=ERROR requestid=6694b1a1307f3b003f6d50

2024-07-15 15:20:33,402 ERROR [6694b1a1307f3b003f6d50] error:338 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 75, in wrapper
    resp = handler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/routes.py", line 422, in default
    return route.target(self, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-500>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 41, in rundecs
    return fn(*a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-498>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 119, in check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-497>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 167, in validate_ip
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-496>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 246, in preform_sso_check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-495>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 285, in check_login
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-494>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 305, in handle_exceptions
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-489>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 360, in apply_cache_headers
    response = fn(self, *a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/admin.py", line 1798, in listEntities
    app_name = eai_acl.get('app')
AttributeError: 'NoneType' object has no attribute 'get'

Thanks, Splunk Community
Hi, I am new to Splunk development. Please provide your assistance in creating a search. Thanks in advance. I'm trying to create a report where I need to fetch the requestId and proposition id based on odds and account number. Attached is a sample event where multiple requests are in a single event, which arrives in Splunk as one combined event. I have used a query like the one below, but it displays all the propositions for every request/odds combination. I want to display the propositionId only for the related request id and odds. Attaching a sample for reference.

index=abc source="data.log" "Response.errors{}.message"="cobination"
| spath "Response.errors{}.code" | search "Response.errors{}.code"=COMBINATION
| spath "Response.b{}.legs{}.propositions{}.propositionId"
| spath "Response.b{}.legs{}.odds" | rename "Response.b{}.legs{}.odds" as Odds
| spath "accountDetails.accountNumber" | dedup "accountDetails.accountNumber" | rename "accountDetails.accountNumber" as AccountNumber
| spath "Response.b{}.requestId" | rename "Response.b{}.requestId" as RequestId
| stats values("Response.errors{}.code") as ErrorCode, values("Response.b{}.legs{}.propositions{}.propositionId") as PropositionId by AccountNumber, Odds, RequestId, _time
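The cross-product happens because spath on the whole event returns every array element as one multivalue field, so stats can no longer tell which propositionId belongs to which requestId. A sketch of one common fix, assuming the field paths shown in the post: extract each request object as its own row with mvexpand, then run spath on that row's JSON.

index=abc source="data.log" "Response.errors{}.code"=COMBINATION
| spath output=AccountNumber path="accountDetails.accountNumber"
| spath output=requests path="Response.b{}"
| mvexpand requests
| spath input=requests output=RequestId path="requestId"
| spath input=requests output=Odds path="legs{}.odds"
| spath input=requests output=PropositionId path="legs{}.propositions{}.propositionId"
| stats values(PropositionId) as PropositionId by AccountNumber, Odds, RequestId, _time

After mvexpand, each result row holds exactly one request's JSON, so the propositions grouped by stats are only the ones from that request.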
My objective is to create an alert in ServiceNow whenever a failure alert is triggered in Splunk. I have installed the Splunk Add-on for ServiceNow and configured the connection setup. I was able to successfully post an incident to ServiceNow with the default fields available in the ServiceNow incident alert. However, I need to update the Description field in ServiceNow with the alert name and alert result, to identify why the alert was triggered.
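A hedged savedsearches.conf sketch for the alert, using Splunk's standard alert-action tokens ($name$ for the alert name, $result.<field>$ for values from the first result row). The exact parameter names under action.snow_incident.param.* vary by add-on version, so treat these as placeholders and check the add-on's alert_actions.conf.spec for the real ones:

action.snow_incident = 1
action.snow_incident.param.short_description = Splunk alert: $name$
action.snow_incident.param.description = Alert $name$ triggered. First result: $result.message$

Here $result.message$ assumes your alert search returns a field named message; substitute whichever field explains the failure.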
Hi Dev Team, it has been a while since the last update for this app was released (mid-2023), and it has now lost Splunk Cloud certification for compatibility. Can we please have this app revised to match Splunk Cloud compatibility requirements? Thank you. Regards.
Hello Splunkers! I have the below event and I want to parse it, but the event's timestamp is not being parsed with my time format in Splunk. Please help me fix it.

TIME_FORMAT = %dT%H:%M:%S.%3QZ
TIME_PREFIX = \<eqtext\:EventTime\>

I have used the above settings but nothing works; I still see a mismatch between the indexed time and the event time.

Below is the raw event:

<eqtext:EquipmentEvent xmlns:eqtext="http:///FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>7053</AreaID><ZoneID>33</ZoneID><EquipmentID>25</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> Welder cold</eqtext:Description><eqtext:MIS_Address>6.2</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>CAME_IN</State><eqtext:EventTime>2024-07-13T16:21:31.287Z</eqtext:EventTime><eqtext:MsgNr>7751154552301783480</eqtext:MsgNr><Severity>INFO</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent></eqtexo:EquipmentEventReport>
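A props.conf sketch for this event, assuming a placeholder sourcetype name: the timestamp 2024-07-13T16:21:31.287Z includes year and month, so TIME_FORMAT needs %Y-%m- in front of %d, which the original setting is missing.

[your_sourcetype]
TIME_PREFIX = <eqtext:EventTime>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3QZ
MAX_TIMESTAMP_LOOKAHEAD = 30

TIME_PREFIX is a regular expression, but none of these characters need escaping; MAX_TIMESTAMP_LOOKAHEAD just bounds how far past the prefix Splunk scans for the timestamp.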
Hi Team, while setting up our new remote heavy forwarder, we configured it to collect data from 20 universal forwarders and syslog devices, averaging about 30 GB daily. To control network bandwidth usage, we applied a maximum throughput limit of 1 MBps (1024 KBps) using the maxKBps setting in limits.conf on the new remote heavy forwarder. This setting is intended to cap the rate at which data is forwarded to our indexers, to prevent exceeding the specified bandwidth limit. However, according to Splunk documentation, this configuration doesn't guarantee that data transmission will always stay below the set maxKBps; it depends on factors such as the status of processing queues and doesn't directly restrict the volume of data being sent over the network. How can we ensure the remote HF never exceeds the value set in maxKBps? Regards, VK
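There is no setting that turns maxKBps into a hard network cap, but you can watch how close the HF runs to the limit from its own thruput metrics, as in this sketch (your_hf is a placeholder hostname):

index=_internal host=your_hf source=*metrics.log* group=thruput name=thruput
| timechart span=5m max(instantaneous_kbps) AS peak_kbps avg(instantaneous_kbps) AS avg_kbps

Also note that maxKBps is applied per ingestion pipeline: if parallelIngestionPipelines is greater than 1 in server.conf, the effective ceiling is maxKBps multiplied by the pipeline count. For a strict guarantee, bandwidth shaping at the OS or network layer (e.g., tc on Linux) is the usual complement.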
I have a result like this:

column, row 1
TotalHits: Create, 171
TotalHits: Health, 894
TotalHits: Search, 172
TotalHits: Update, 5
perc90(Elapsed): Create, 55
perc90(Elapsed): Health, 52
perc90(Elapsed): Search, 60
perc90(Elapsed): Update, 39

I want to convert this into:

          TotalHits   perc90(Elapsed)
Create    171         55
Update    5           52
Search    172         60
Health    894         52

What query should I use? By the way, to reach the above output I used the query below, and even then I am not sure whether it's the best way:

index=xyz | search Feature IN (Create, Update, Search, Health) | bin _time span=1m | timechart count as TotalHits, perc90(Elapsed) by Feature | stats max(*) AS * | transpose

Basically I am trying to get the MAX of the 90th percentile and total hits during a time window.
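A sketch that produces that table directly, without timechart/transpose, assuming the Feature and Elapsed field names from the post: compute the per-minute stats first, then take the maximum of each per Feature.

index=xyz Feature IN (Create, Update, Search, Health)
| bin _time span=1m
| stats count AS TotalHits perc90(Elapsed) AS p90_elapsed by _time, Feature
| stats max(TotalHits) AS TotalHits max(p90_elapsed) AS "perc90(Elapsed)" by Feature

The first stats gives one row per minute per Feature; the second collapses those to the peak values, leaving Feature as the row label and the two metrics as columns.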
Hi, I'm trying to install Splunk Enterprise on Windows Server 2022 with my domain account, but every time I install it, it keeps rolling back. I have checked online but keep seeing info that the domain account needs to have the relevant permissions. The version of Splunk Enterprise I am installing is 9.2.0.1. Can you please advise what permissions should be granted to the domain account, or whether there is anything else that may be causing the rollback?
Hi folks, I have two types of events that look like this:

Type1: TXN_ID=abcd inbound call INGRESS
Type2: TXN_ID=abcd inbound call EGRESS

I want to find out how many events of each type there are per TXN_ID. If the counts per type don't match for a TXN_ID, I want to output that TXN_ID. I know we can do stats count by TXN_ID, but how do I do that per event type in the same query? Appreciate the help. Thanks
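A minimal SPL sketch, assuming INGRESS/EGRESS appear literally in the raw event as in the samples (index=your_index is a placeholder): derive a type field, count each type per TXN_ID with eval-filtered counts, and keep only the mismatches.

index=your_index TXN_ID=* ("INGRESS" OR "EGRESS")
| eval type=case(like(_raw, "%INGRESS%"), "INGRESS", like(_raw, "%EGRESS%"), "EGRESS")
| stats count(eval(type="INGRESS")) AS ingress_count count(eval(type="EGRESS")) AS egress_count by TXN_ID
| where ingress_count != egress_count

The count(eval(...)) form is what lets one stats pass count two event types separately in the same query.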
I'm getting confused by SH clustering. Can someone help me?
Hello everyone, I'm trying to follow this manual: https://docs.splunk.com/Documentation/StreamApp/7.2.0/DeployStreamApp/InstallStreamForwarderonindependentmachine. I face the issue below once I SSH in and run the command on my Linux VM.
Hello Splunkers, I need some clarification on SmartStore data migration. As per the docs, you can still search any existing buckets that were tsidx-reduced before migration to SmartStore. For example, we have 18 months of data retention and need to keep 6 months of data in local/cache storage due to frequent audit/forensic searches that need raw data fields.

Questions:
1. Is it possible to migrate tsidx-reduced buckets to the object store without a rebuild, with the indexer cluster still searching them as normal (slower) reduced buckets? Or do we need to rebuild all buckets before initiating data migration to the object store? In our case that would mean rebuilding all the buckets from 7 to 18 months old, and we may run out of local space if we have to do this.
2. What is the performance impact of searching a reduced bucket once SmartStore is added? Since the cache manager has to fetch the bucket from the remote store and then rebuild it locally in the cache(?) before it is searchable, are the two levels of performance hit too much? Has anyone had such a situation?

Thanks for your attention, Manduki

https://docs.splunk.com/Documentation/Splunk/latest/Indexer/AboutSmartStore
Tsidx reduction: Do not set enableTsidxReduction to "true". Tsidx reduction modifies bucket contents and is not supported by SmartStore. Note: You can still search any existing buckets that were tsidx-reduced before migration to SmartStore. As with non-SmartStore deployments, such searches will likely run slowly.
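Not an answer to the migration question itself, but a sketch for sizing the problem first, assuming your version's dbinspect exposes the tsidxState field: it lists how many buckets per index are currently reduced ("mini") and would be affected by any rebuild strategy.

| dbinspect index=*
| search tsidxState=mini
| stats count AS reduced_buckets by index, state

That at least tells you how much of the 7-to-18-month range is reduced before you commit to rebuilding or migrating.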
While using Splunk ES, we noticed that correlation searches were set to an incorrect security domain on the Incident Review page. This leads to inaccurate classification of security events and affects the decision-making process. The first step was to set security domain = Access. The problem is that instead of being classified as security domain = Access, the notables are classified as Threat, so all cases end up classified as Threat. This causes a problem with the values not appearing on the Security Posture page.
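If editing through the UI doesn't stick, the security domain for a correlation search lives on its notable alert action; a hedged savedsearches.conf sketch (this parameter exists in Splunk ES's notable action, but verify the accepted values for your ES version):

action.notable.param.security_domain = access

With the domain stored as access rather than threat, Incident Review and the Security Posture panels should bucket the notables correctly.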