
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Please provide some sample (anonymised), representative raw events in a code block (this helps with understanding your data and allows us to set up tests of solutions to your question).
To expand on @PickleRick's point 1, you may actually get a double negative effect - those you call out may be less likely to respond to specific demands on their time, and those you don't call out may think you don't value their contributions (as much), so why should they bother?
The first three may be working because Splunk might not be finding the timestamp you are searching for within 520 characters, so it is finding sbt:MessageTimeStamp, which happens to be the same as the EventTime in these events. sbt:MessageTimeStamp does not exist in the failing event, so Splunk is using the ingest time for the fourth event. The fourth event is in a different format from the other three events ("eqtext:EquipmentEvent" instead of "eqtexo:EquipmentEventReport"), so it should ideally be in a different sourcetype (at least the source file names are different, so it should be relatively easy to split them off). The timestamp in the fourth event is around 627 characters in, so your lookahead should at least cover that (and, as @PickleRick said, it looks like you are dealing with variable-length data, so 627 may not be enough).
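As a rough sketch, the timestamp settings for that sourcetype might look like the following in props.conf (the stanza name and the 1024-character value are assumptions here - size the lookahead to comfortably exceed the deepest timestamp offset you actually observe in the variable-length events):

```ini
[eqtext:equipment_event]
# Anchor extraction at the element that precedes the timestamp
TIME_PREFIX = <eqtext:EventTime>
# How far into the event Splunk will search; must cover the deepest offset seen
MAX_TIMESTAMP_LOOKAHEAD = 1024
TZ = America/Glace_Bay
```

Note that when TIME_PREFIX matches, the lookahead is counted from the match, so a reliable prefix lets you keep the lookahead itself small.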
Hi all! We deployed a Splunk cluster on OEL 8. The latest version is currently installed - 9.2.2.

The vulnerability scanner found vulnerabilities on all servers related to the compression algorithm:

Secure Sockets Layer/Transport Layer Security (SSL/TLS) Compression Algorithm Information Leakage Vulnerability
Affected objects:
port 8089/tcp over SSL
port 8191/tcp over SSL
port 8088/tcp over SSL
SOLUTION: Compression algorithms should be disabled. The method of disabling it varies depending on the application you're running. If you're using a hardware device or software not listed here, you'll need to check the manual or vendor support options.
RESULTS: Compression_method_is DEFLATE

Tried to solve it by adding these settings to server.conf in the local location:

[sslConfig]
allowSslCompression = false
useClientSSLCompression = false
useSplunkdClientSSLCompression = false

Result of the attempt: on some servers it only helped with 8089, on some servers it helped with 8191, and on some servers it didn't help at all.

Questions: Has anyone been able to solve this problem? How can I understand why I got different results with the same settings? What other solutions can you suggest? Thank you all in advance!
Thanks for the quick turnaround. Expecting the results for the account like below:

    requestId    Odds    propositionId
0   126          1.75    6768
                 2.75    6685
                 1.85    6770
                 3.5     6710
                 4.25    6716
1   71           1.75    6683
                 3.75    6692
                 1.85    6705
                 4.25    6716
@PickleRick No, I am not using INDEXED_EXTRACTIONS. I am using KV_MODE=xml in my settings (props.conf). Is there any other significance to INDEXED_EXTRACTIONS?
Hi team, we have an active/passive configuration of the DB agent for the DB collectors in the controller. Is there any query we can run against the controller database to find which host is active and which is passive, rather than checking from the controller GUI? Below is a reference snapshot from the database agent settings screen, where one host is active and the other is passive.
It's a rather philosophical question. The short answer is you can't. The long answer is - depending on the definition of throughput, you can find a "lower-level" metric that you will not be able to control (for example, you can't get lower than line speed when sending a packet onto the wire). So setting throughput limits in limits.conf should get you below said limit on average, but you can still have bursts of data exceeding it. In fact, due to how networks work, the only way to put a hard cap on throughput would be to have a medium with a capped line speed.
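For reference, the average throughput cap on a forwarder lives in limits.conf; a minimal sketch (the 256 KB/s value is just an example - and again, this throttles the average rate, not instantaneous bursts):

```ini
[thruput]
# Average output limit in kilobytes per second; 0 means unlimited
maxKBps = 256
```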
Ok, regardless of your transposing issues, you have a logical flaw in your search (or I'm misunderstanding something):

index=xyz
| search Feature IN (Create, Update, Search, Health)
| bin _time span=1m
| timechart count as TotalHits, perc90(Elapsed) by Feature

This part I understand, but here:

| stats max(*) AS *

You're finding a max value separately for each column, which means that max(count) might have been during a different time period than max('perc90(Elapsed)'). Are you sure that is what you want?
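A self-contained toy search (using makeresults, with made-up fields a and b) illustrates the effect - each column's maximum can come from a different row:

```
| makeresults
| eval a=1, b=9
| append [| makeresults | eval a=5, b=2]
| stats max(*) as *
```

This returns a=5 and b=9, a combination that never occurred together in any single row.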
You might want to set it to a somewhat higher value. The timestamp is relatively late in the event, and the part before the timestamp contains dynamic data which can be of varying length, so you have to account for that. Bonus question - you're not using INDEXED_EXTRACTIONS, are you?
This is an external app (in this case - written by MS), so it's their responsibility to maintain it. You might want to use the email address from the contact tab in Splunkbase to submit feedback to the maintainers of the app.
@PickleRick According to your suggestion, my settings will be as below:

MAX_TIMESTAMP_LOOKAHEAD = 520 (the timestamp comes after 520 characters of the event)
1. Please don't call out people by name. If they have spare time, they'll probably help you. If they don't, they won't. And calling them out explicitly can actually make them less likely to want to help you.
2. It's a bit confusing - what does your single event look like? Please post a full event sample (preferably in a code block).
3. If I understand correctly, you have an array within your JSON structure and the fields of separate structures within your array get "squished", so you can't correlate between values in those fields, right? Typically for that you need to extract the array field as a whole to a multivalued field, then split the event on that field into multiple events, and then parse the JSON further. Like:

| spath path="propositions"
| mvexpand propositions
| spath input=propositions

It's going to be more complicated if you have several arrays in a single event and you have to "split" them all this way and correlate. That's more of a case of badly formatted data.
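As a rough, self-contained sketch of that pattern (the JSON below is a made-up miniature of the event, not the real data - field names are assumptions):

```
| makeresults
| eval _raw="{\"requestId\":126,\"propositions\":[{\"propositionId\":6768,\"odds\":1.75},{\"propositionId\":6685,\"odds\":2.75}]}"
| spath path=requestId
| spath path=propositions{} output=propositions
| mvexpand propositions
| spath input=propositions
| table requestId propositionId odds
```

After the mvexpand, each proposition sits in its own event, so propositionId and odds line up row by row against the requestId instead of being squished into one multivalued blob.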
Could I please get assistance on how to resolve this issue and get the AlgoSec App for Security Incident Analysis and Response (2.x) Splunk application working. No changes have been made to any application files.

The steps in the AlgoSec installation documentation have been followed: Integrate ASMS with Splunk (algosec.com)

The Splunk version being used: Splunk Enterprise 9.2 (Trial License)

When installing the application, this error is returned: 500 Internal Server Error

Error details:

index=_internal host="*********" source=*web_service.log log_level=ERROR requestid=6694b1a1307f3b003f6d50

2024-07-15 15:20:33,402 ERROR [6694b1a1307f3b003f6d50] error:338 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 75, in wrapper
    resp = handler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/routes.py", line 422, in default
    return route.target(self, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-500>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 41, in rundecs
    return fn(*a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-498>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 119, in check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-497>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 167, in validate_ip
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-496>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 246, in preform_sso_check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-495>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 285, in check_login
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-494>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 305, in handle_exceptions
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-489>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 360, in apply_cache_headers
    response = fn(self, *a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/admin.py", line 1798, in listEntities
    app_name = eai_acl.get('app')
AttributeError: 'NoneType' object has no attribute 'get'

Thanks, Splunk Community
Thought as much - it's just worth noting that things can be perceived differently from what we intended to say. Now, check the technical part of my response. Most probably, you need to increase the lookahead because you have no timestamp in the first 24 characters of your event. The architectural issue might also mean that when you fix that, you'll be doing the right thing but in the wrong place.
@PickleRick Please don't take my words otherwise. I didn't mean to say that. By the way, thanks for correcting me. I will be more careful with my words next time.
I tried to update $result.field$ in the Description of Custom Fields as in the screenshot, but it is not updating in ServiceNow.
@inventsekar I have added these three corrected settings in props.conf. I am waiting for a real event to come in; if this works, then the job will be done.

LINE_BREAKER = <\/eqtext:EquipmentEvent>()
TIME_PREFIX = ((?<!ReceiverFmInstanceName>))<eqtext:EventTime>
TZ = America/Glace_Bay
Hi, I am new to Splunk development. Please provide your assistance with creating a search. Thanks in advance.

I am trying to create a report where I need to fetch the requestId and proposition id based on odds and account number. Attached is a sample event where multiple requests are in a single event, which comes to Splunk as one combined event. I have used a query like the one below, but it displays all the propositions for every request/odds combination. I want to display the propositionId only for a particular request id and odds. Attaching a sample for reference:

index=abc source="data.log" "Response.errors{}.message"="cobination"
| spath "Response.errors{}.code"
| search "Response.errors{}.code"=COMBINATION
| spath "Response.b{}.legs{}.propositions{}.propositionId"
| spath "Response.b{}.legs{}.odds"
| rename "Response.b{}.legs{}.odds" as Odds
| spath "accountDetails.accountNumber"
| dedup "accountDetails.accountNumber"
| rename "accountDetails.accountNumber" as AccountNumber
| spath "Response.b{}.requestId"
| stats values("Response.error{}.code") as ErrorCode, values("Response.b{}.legs{}.propositions{}.propositionId") as PropositionId by AccountNumber, Odds, RequestId, _time
1. For "ASAP" you pay your friendly consultant or PS. This is a community-driven forum - people help others in their own spare time. Saying "help me ASAP" can be perceived as rude.
2. How do you ingest your data? UF->indexer? HF->indexer? UF->HF->indexer? Through which input do the events come in? Where do you have the props.conf for the sourcetype?
3. You have the timestamp relatively late in the event and - as you've shown - your MAX_TIMESTAMP_LOOKAHEAD is set to only 24.
4. When posting config excerpts or data samples, please use a code block or preformatted style. It greatly helps readability.