All Posts

Enabling splunk_http_enabled made it work.
I cannot differentiate them except by the order in which they arrive, and yes, I can assume they are present in the same order. The only available info is _time, Receive/sent, and tradenumber. Now this is the problematic scenario: in most cases, the trade only gets created in the system and is sent to the market. Most of the time, trades are not modified or cancelled. The problem is that when another action occurs on a trade (modified or cancelled), it screws up the time range and delays for that trade, and it also screws up the alerts triggered on max(Delay) and avg(Delay).
Yes, indexer discovery is what I mean. So finally the index which is present on the indexers will be used, right? What if we don't configure indexAndForward=false on the other components? Is this false by default?
No. I mean that in a properly deployed installation all components except indexers should have the indexAndForward=false setting so they don't do any indexing on their own - everything should be forwarded to the indexers (with an exception for some internal data needed by the deployment server, which should be selectively indexed locally, but that's another story). EDIT: Wait, what? "Our HF outputs.conf is configured to CM ip address and from there HF will forward data to indexers". You mean that you use indexer discovery and the CM returns a list of indexers to which your HF forwards the data? That's OK. Still, it has nothing to do with the indexes themselves.
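For reference, a minimal outputs.conf sketch of that kind of setup on a HF (the CM address and discovery secret are placeholders, and the stanza names are made up for illustration):

[indexer_discovery:cluster1]
master_uri = https://<cm-address>:8089
pass4SymmKey = <discovery_secret>

[tcpout:discovered_indexers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = discovered_indexers
# no local indexing on the HF - everything goes out to the indexers
indexAndForward = false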
There are two important things about this question.
1. As @livehybrid already pointed out, if you're getting your data as parsed (already processed by another "full" instance of Splunk Enterprise - typically a HF, but your events might also originate at a SH, DS, CM...), they will not be parsed again, so your transforms will be "dead".
2. There is a question of what you're trying to achieve. If you overwrite your sourcetype (or source, or host) during ingestion, it will not change the ingestion pipeline - your event will still get treated as per the original sourcetype/source/host trio. Yes, at search time it will be processed according to the new sourcetype, but the ingestion process is decided at the very beginning and is not altered "mid-flight" (with a possible exception of CLONE_SOURCETYPE, but that is a relatively advanced topic).
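As a hedged illustration of point 2, an ingest-time sourcetype overwrite looks roughly like this (stanza and sourcetype names are hypothetical). The event still rides the pipeline of legacy_st; only the metadata written to the index changes:

props.conf:
[legacy_st]
TRANSFORMS-force_st = set_new_sourcetype

transforms.conf:
[set_new_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::new_st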
Yes. The transforms are applied:
1. Within a single transform class - left to right.
2. Separate transform classes are called in alphabetical order.
What is important though (and what is often overlooked by beginners; I've been guilty of it myself) is that all matching transforms are "fired". It's not like an ACL, where only the first one that matches is executed. So if you want to do something only for some events, you have to _first_ do the default action (for example, redirect everything to nullQueue as in the sketch below) and only _then_ update some events to get the special treatment (in this case - get them indexed).
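A sketch of that "drop everything first, then rescue the interesting events" pattern (sourcetype, stanza names and the ERROR pattern are made up; within the single TRANSFORMS-filter class the two transforms run left to right):

props.conf:
[my_sourcetype]
TRANSFORMS-filter = drop_all, keep_errors

transforms.conf:
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_errors]
REGEX = \bERROR\b
DEST_KEY = queue
FORMAT = indexQueue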
@PickleRick "But in a well-deployed Splunk installation HFs and SHs would not index anything locally so even though the indexes would be defined on them, they would be empty since all events would be pushed to outputs only." ---> I didn't get you. What do you mean by locally? The etc/system/local folder? Our HF outputs.conf is configured with the CM IP address and from there the HF will forward data to the indexers.
So finally, in my case, will it take the index created from the cluster master (which was pushed to the indexers) or not? I created the index as a stanza in etc/master-apps/app/local/indexes.conf and pushed it to the indexers. I have created an identical index on the HF, but with default paths, whereas on the indexers (pushed from the CM) the index is correctly configured...
OK.
1. Instead of this ping-pong we need an _example_ of the data (filter out/obfuscate the sensitive parts if needed, but leave the important things in place so we have something to work on).
2. Can you guarantee that:
2a) It's always "received" and then "sent"?
2b) There's never an overlap between two different "transactions", so that you don't have something like received, received, sent, sent?
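Assuming the answers to 2a and 2b are "yes", a rough SPL sketch of the pairing could look like this (the index name and an action field with values "received"/"sent" are assumptions based on the description above):

index=trades
| sort 0 tradenumber _time
| streamstats current=f window=1 last(_time) as received_time last(action) as prev_action by tradenumber
| where action="sent" AND prev_action="received"
| eval Delay = _time - received_time
| stats avg(Delay) as avg_delay max(Delay) as max_delay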
None of them. As I said - each instance has its own indexes.conf (or a set of separate indexes.conf files which are merged according to the precedence rules - https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles ). So the indexes.conf from the CM will get pushed to the indexers and the indexers will use their own local copies of that file; the SH has its own indexes.conf (it might have been pushed from the SH deployer) and the HF has its own one (possibly coming from an app from the DS). But in a well-deployed Splunk installation HFs and SHs would not index anything locally, so even though the indexes would be defined on them, they would be empty since all events would be pushed to outputs only.
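For completeness, a minimal indexes.conf stanza as it might be pushed from the CM (the index name is an example):

[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb

On a HF or SH that never indexes locally the paths are effectively unused, but the stanza still needs to exist for the index name to show up in the UI, so the same app is usually deployed as-is.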
As others have already pointed out - what data are you searching? Also, searching across all events from all indexes will be slooooooow. You should limit your search as early as possible. But the main issue is - what actually is your problem? You seem to have a search which doesn't actually search for anything, just lists all events. And you want to "display duplicate ids". Do you have statically defined ids? Or do you want to extract the ids from your data and find any that are duplicated? Be more verbose about your problem.
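If the goal is the second interpretation - extracting ids from the data and listing the duplicated ones - a minimal sketch would be (index, sourcetype and the id field are placeholders):

index=your_index sourcetype=your_sourcetype
| stats count by id
| where count > 1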
If you use Splunk Cloud, the issue might be related to the splunkclouduf.spl credential package. There is a workaround: https://splunk.my.site.com/customer/s/article/KV-store-status-failed-after-upgrade-to-9-4 Be aware that you will have to combine these three certs (instead of the two included in the guide) in this order for the Find more Apps page to work properly:

$SPLUNK_HOME/etc/auth/appsCA.pem
$SPLUNK_HOME/etc/auth/cacert.pem
$SPLUNK_HOME/etc/apps/100_<stackname>_splunkcloud/default/<stackname>_cacert.pem
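A hedged shell sketch of that concatenation (the output file name is just an example - put the result wherever the linked article tells you to):

cat "$SPLUNK_HOME/etc/auth/appsCA.pem" \
    "$SPLUNK_HOME/etc/auth/cacert.pem" \
    "$SPLUNK_HOME/etc/apps/100_<stackname>_splunkcloud/default/<stackname>_cacert.pem" \
    > combined_cacert.pem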
@PickleRick what if I have indexes.conf in the HF's etc/system/local and in the cluster manager's etc/master-apps/<app_name>/local... Which index will Splunk finally use when the data goes to the SH? Data is forwarded from the HF to the indexers...
This does not work for me
To add a bit to this - otherwise good and valid - answer... Every Splunk component uses its local indexes.conf instance. So if you have a search head, it uses its own indexes.conf file to see which indexes it can autofill for you; if you have a HF, only those indexes which are in indexes.conf on that HF are shown in the "create input" dialogs. If a component does not do local indexing, those entries are - apart from letting the component show those indexes - meaningless, since... well, the component only forwards events "to the outside", but it has to know about those indexes to show them. In no situation does Splunk "reach out" anywhere for a list of indexes.
OK. You seem to be struggling a bit with the regex. I hadn't read your attempts thoroughly before, but now I see that they seem to have some mistakes in one point or another. Use regex101.com to verify your regexes. They don't need any escaping in the config as long as you choose proper delimiters which do not interfere with the regex contents (so if you want to enclose your regex in quotes, your regex itself mustn't contain quotes, and so on). And I wouldn't worry about whether the group is capturing or not. It's not that important memory-wise in this case and you're not using the groups for anything anyway.
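To illustrate the escaping point - in a .conf file the regex is written as-is, with no quoting or extra escaping, and a non-capturing group behaves the same as a capturing one here (the stanza name and pattern are made up):

transforms.conf:
[drop_heartbeats]
REGEX = ^\d+\s+(?:received|sent)\s+heartbeat
DEST_KEY = queue
FORMAT = nullQueue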
Hi @Paaattt
Are you using Splunk Cloud as your destination? If so, you'll need to download the UF app download package, which will contain your certificates; if not, you'll need to gather them from your Splunk Enterprise deployment (the location may depend on your setup). Kiteworks requires separate files for the server certificate, intermediate certificate, root certificate, and private key for TLS setup. Typically for Splunk we combine these in a single PEM file, but Kiteworks needs them as distinct files.
1. Obtain your Splunk PEM certs; these will be inside the UF forwarder app if you're using Splunk Cloud.
2. Split out the certs/keys into individual files: the certificates (server, intermediate, root) and the private key each in a separate file (see the sketch below).
3. Verify that the certificates are in the correct format (PEM) and the private key is in RSA format.
Once you have these files you should be able to upload them to Kiteworks, which will then hopefully allow you to enable the output to Splunk. Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
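For step 2, one possible sketch using GNU csplit (file names are examples; BSD/macOS csplit lacks the '{*}' repetition):

csplit -z -f cert- -b '%02d.pem' combined.pem '/-----BEGIN/' '{*}'

This cuts the combined PEM at every "-----BEGIN" line, producing cert-00.pem, cert-01.pem, ... which you can then rename to the server/intermediate/root/key files Kiteworks expects.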
How to embed "https://docs.splunk.com/Documentation" in a Splunk dashboard - is it possible to do? I am getting: Refused to display 'https://docs.splunk.com/' in a frame because it set 'X-Frame-Options' to 'sameorigin'.

<row>
  <panel>
    <html>
      <iframe src="https://docs.splunk.com/Documentation" width="100%" height="300"></iframe>
    </html>
  </panel>
</row>
Hi @Karthikeya,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
Seems like the 9.3.2 version is running fine, but it's still not able to send logs to the Splunk server which is running on an EC2 instance. Below is my splunkforwarder.yml. Can you help me with this? It seems like the forward-server and the monitor are not set up in the pod with the yml below. How should I configure the inputs.conf / outputs.conf files when using the splunkforwarder image? I don't see an issue on the splunk-server side.

apiVersion: v1
kind: Pod
metadata:
  name: splunk-forwarder
spec:
  containers:
    - name: splunk
      image: splunk/universalforwarder:9.3.2
      env:
        - name: SPLUNK_START_ARGS
          value: "--accept-license"
        - name: SPLUNK_USER
          value: "root"
        - name: SPLUNK_PASSWORD
          value: "YourSplunkPassword"
        - name: SPLUNK_ADD
          value: "monitor /var/logs"
        - name: SPLUNK_SERVER
          value: "splunk-server:9997"
      volumeMounts:
        - name: log-storage
          mountPath: /var/logs
  volumes:
    - name: log-storage
      persistentVolumeClaim:
        claimName: log-pvc
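For comparison, a hand-written equivalent of what those env vars should produce on the UF might look roughly like this (a sketch, not a verified fix - the destination and monitored path are taken from the YAML above, and the index name is an assumption):

outputs.conf:
[tcpout]
defaultGroup = primary

[tcpout:primary]
server = splunk-server:9997

inputs.conf:
[monitor:///var/logs]
disabled = false
index = main

Checking $SPLUNK_HOME/etc/system/local/ inside the running container for these files would confirm whether the env vars were actually applied.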