All Posts

@PickleRick Here I am monitoring the network files from the network folder. I am not using a UF or an HF.
Firstly, https://docs.splunk.com/Documentation/SecureGateway/3.5.15/Admin/ConfigureSecureGatewayConf states: "In Splunk Secure Gateway version 3.4.25 and higher, Splunk Secure Gateway no longer reads from the securegateway.conf file. Configure Secure Gateway using the UI in Administration > Deployment configuration > Advanced settings". Secondly, you should not need SSG on HFs.
No, no, no. Don't add it anywhere. Where are you ingesting the data from? A file on this Splunk server or by means of a remote UF?
@PickleRick I am using a standalone machine (acting as both search head and indexer). So is it good to add this attribute in props?
Yes, INDEXED_EXTRACTIONS can alter the processing path of your event. Without it the event is parsed on the first "heavy" component the event goes through - typically either the intermediate HF or the destination indexer. When you enable indexed extractions on a UF, the data is parsed directly on the originating UF and is not touched after that (apart from possible ingest actions).
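For context, a minimal sketch of what enabling indexed extractions on the originating UF can look like in props.conf - the sourcetype name and the CSV format are illustrative assumptions, not taken from this thread:

# props.conf on the universal forwarder (illustrative sourcetype and format)
[my_csv_data]
INDEXED_EXTRACTIONS = csv

With a setting like this in place the UF performs the structured parsing itself and ships already-parsed events, which is why, as described above, the data is essentially not touched again downstream.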
I am getting the below error on HFs: Invalid key in stanza [setup] in "/opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf", line 20: cluster_mode_enabled (value: false). Can anybody tell us why?
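If it helps to see where the warning comes from, one way to reproduce conf validation messages like this on the HF is a btool check (the grep filter is just an illustration):

$SPLUNK_HOME/bin/splunk btool check | grep -i securegateway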
It's a bit older but there still seems to be some confusion around this: your second example is completely different from the first one because you use double quotes, i.e.   | eval var_type = typeof("num")   Here "num" is a literal string which has nothing to do with your field! Take care when using double quotes (for string literals), single quotes (for field names, e.g. ones containing spaces, for whatever reason...) or no quotes at all (also for field names). Besides that, to me it seems that the `tostring` function is buggy. If I convert any number using `tostring(number)` it should(!) become a string, regardless of any "format" argument, and the `typeof()` function should then return "String" for that string.
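A minimal sketch illustrating the quoting difference (the field name num and the value 42 are just examples):

| makeresults
| eval num = 42
| eval type_of_field = typeof(num)
| eval type_of_literal = typeof("num")
| eval type_after_tostring = typeof(tostring(num))

typeof(num) refers to the field and returns "Number", typeof("num") refers to the four-character literal string and returns "String", and converting the field with tostring() before typeof() should also yield "String".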
@inventsekar thanks for the response. I am referring to Splunk Enterprise 9.2.0.1. When I try installing it using my Domain account it keeps rolling back. This is the issue I am facing.
Please provide some sample (anonymised), representative raw events in a code block (this helps with understanding your data and allows us to set up tests of solutions to your question).
To expand on @PickleRick point 1, you may actually get a double negative effect - those you call out may be less likely to respond to specific demands on their time, and those you don't call out may think you don't value their contributions (as much), so why should they bother?
The first three may be working because Splunk might not be finding the timestamp you are searching for within 520 characters, so it is finding the sbt:MessageTimeStamp, which happens to be the same as the EventTime in these events. sbt:MessageTimeStamp does not exist in the failing event, so Splunk is using the ingest time for the fourth event. The fourth event is a different format from the other three events ("eqtext:EquipmentEvent" instead of "eqtexo:EquipmentEventReport"), so it should ideally be in a different sourcetype (at least the source file names are different, so it should be relatively easy to split them off). The timestamp in the fourth event is around 627 characters in, so your lookahead should at least cover that (and, as @PickleRick said, it looks like you are dealing with variable-length data, so 627 may not be enough).
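A minimal inputs.conf/props.conf sketch of what splitting the two formats into their own sourcetypes and raising the lookahead could look like - the sourcetype names, monitor paths and the 1000-character value are illustrative assumptions, not values from this thread:

# inputs.conf - route each source file pattern to its own sourcetype (paths assumed)
[monitor:///data/equipment/EquipmentEventReport*.xml]
sourcetype = equipment:event_report

[monitor:///data/equipment/EquipmentEvent*.xml]
sourcetype = equipment:event

# props.conf on the parsing instance
[equipment:event_report]
MAX_TIMESTAMP_LOOKAHEAD = 1000

[equipment:event]
MAX_TIMESTAMP_LOOKAHEAD = 1000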
Hi all! We deployed a Splunk Cluster on OEL 8. The latest version is currently installed - 9.2.2.

The vulnerability scanner found a vulnerability on all servers related to the compression algorithm: Secure Sockets Layer/Transport Layer Security (SSL/TLS) Compression Algorithm Information Leakage Vulnerability.

Affected objects:
port 8089/tcp over SSL
port 8191/tcp over SSL
port 8088/tcp over SSL

SOLUTION: Compression algorithms should be disabled. The method of disabling them varies depending on the application you're running. If you're using a hardware device or software not listed here, you'll need to check the manual or vendor support options.

RESULTS: Compression method is DEFLATE.

Tried to solve: added these strings to server.conf in the local location:
[sslConfig]
allowSslCompression = false
useClientSSLCompression = false
useSplunkdClientSSLCompression = false

Result of the attempt: on some servers it only helped with 8089, on some servers it helped with 8191, and on some servers it didn't help at all.

Question: has anyone been able to solve this problem? How can I understand why I got different results with the same settings? What other solutions can you suggest? Thank you all in advance!
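Since the same settings gave different results per host, one thing worth checking is which copy of server.conf actually wins on each server; a sketch of how that might be verified (the grep filter is just an illustration):

$SPLUNK_HOME/bin/splunk btool server list sslConfig --debug | grep -i compression

The --debug flag shows which file each effective value comes from, so an app or system/default layer overriding your local settings on some hosts would show up here. Also note that the three ports are served by different components (8089 is splunkd management, 8088 is typically HEC, 8191 is the KV store), so they don't necessarily all honour the same [sslConfig] settings.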
Thanks for the quick turnaround. Expecting the results for the account like below:

   requestId   Odds   propositionid
0  126         1.75   6768
               2.75   6685
               1.85   6770
               3.5    6710
               4.25   6716
1  71          1.75   6683
               3.75   6692
               1.85   6705
               4.25   6716
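A minimal SPL sketch of one way to get that grouped shape, assuming the events already have extracted fields named requestId, Odds and propositionid (the field names are taken from the expected output, everything else is assumed):

| stats list(Odds) as Odds, list(propositionid) as propositionid by requestId

list() keeps values in event order and keeps duplicates, which seems to match the rows above; values() would sort and deduplicate them instead.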
@PickleRick No, I am not using INDEXED_EXTRACTIONS. I am using KV_MODE=xml in my settings (props). Is there any other significance of INDEXED_EXTRACTIONS?
Hi team, we have an active/passive configuration of the DB agent for the DB collectors in the controller. Is there any query we can run against the controller database (rather than checking the controller GUI) to find which host is active and which is passive? Below is a reference snap from the database agent settings screen where one is the active host and the other the passive host.
It's a rather philosophical question. The short answer is you can't. The long answer is - depending on the definition of throughput, you can find a "lower-level" metric that you will not be able to control (for example, you can't get lower than line speed when sending a packet onto the wire). So setting throughput limits in limits.conf should get you below said limit on average, but you can have bursts of data exceeding it. In fact, due to how networking works, the only way to put a hard cap on throughput would be to have a medium with a capped line speed.
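For reference, the forwarder-side throughput limit referred to above lives in limits.conf; a minimal sketch, where the 256 KB/s value is purely illustrative:

# limits.conf on the forwarder (value is an example, not a recommendation)
[thruput]
maxKBps = 256

This is an average cap enforced by the forwarder's output pipeline, so short bursts above it are still possible, which is exactly the point made above.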
Ok, regardless of your transposing issues you have a logical flaw in your search (or I'm misunderstanding something):

index=xyz
| search Feature IN (Create, Update, Search, Health)
| bin _time span=1m
| timechart count as TotalHits, perc90(Elapsed) by Feature

This part I understand, but here:

| stats max(*) AS *

You're finding a max value separately for each column, which means that max(count) might have been during a different time period than max('perc90(Elapsed)'). Are you sure that is what you want?
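If the intent is to report, per Feature, the busiest minute together with the percentile from that same minute, one possible sketch (a guess at the intent, not the original poster's search):

index=xyz
| search Feature IN (Create, Update, Search, Health)
| bin _time span=1m
| stats count as TotalHits perc90(Elapsed) as P90Elapsed by _time Feature
| sort 0 Feature -TotalHits
| dedup Feature

Because dedup keeps the first (highest-TotalHits) row per Feature after the sort, TotalHits and P90Elapsed come from the same one-minute bucket.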
You might want to set it to a somewhat higher value. The timestamp is relatively late in the event and the part before the timestamp contains dynamic data which can be of varying length, so you have to account for that. Bonus question - you're not using INDEXED_EXTRACTIONS, are you?
This is an external app (in this case - written by MS) so it's their responsibility to maintain it. You might want to use the email address from the contact tab in Splunkbase to submit feedback to the maintainers of the app.
@PickleRick According to your suggestion my settings will be as below: MAX_TIMESTAMP_LOOKAHEAD = 520 (the timestamp comes after 520 characters of the event).