All Posts

Did you copy all those files and directories mentioned in item 2.4 of the referenced post? I'm not sure how it works and whether there is additional work to do, since you have changed the name/URL for the new master. I prefer to use FQDNs (CNAME or A records) for all instance names, to avoid the additional issues that can arise when there are too many changes at the same time. Is your old master still available to test with? Are there any meaningful error messages in the CM's or the indexers' logs that would give more information about what the issue is?
You're supposed to check the log for this particular search, not the general logs ingested into _internal. The log for a particular search is - as far as I remember - part of the search's artifact package and gets removed once the artifact outlives its retention. So search.log is the file you get to by clicking Job -> Inspect Job, where you have a link to view the search.log. In your case it's probably an issue with permissions (you haven't exported the script itself properly from the app - I struggled with this for a long time myself; you can't do it via the GUI, exporting the lookup definition is not sufficient, you must also export the script and allow read access).
Ahhhh... You had yet another field _called_ value. I suppose we all missed that and assumed "value" meant the value of one of the title* fields, not a separate field. *facepalm* In this case, you can still avoid using eventstats:

| sort - alert_level title1
| streamstats current=t dc(alert_level) as selector by title1
| where selector=1
| stats values(title4) as title4s by title1

Don't get me wrong - eventstats is a powerful and useful command, but with some bigger datasets you might consider alternatives.
Hi @scelikok

Thanks a lot for your reply, it was most helpful, and it helped me find a solution. However, I realised that the snippet I had provided had some subtle differences from the actual data, and so I had to slightly adapt your solution. That being said, I was under the impression that your regex was not quite right either, as I ran it through regex101 first and it only matched the first xml block (I stripped the beginning of the square bracket line to emulate the line breaker in props.conf).

So, to recap, here is a more accurate example of the log:

[1][DATA]BEGIN --- - 06:03:09[012]
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <tag1>value</tag1>
  <nestedTag>
    <tag2>another value</tag2>
  </nestedTag>
</root>
[1][DATA]END --- - 06:03:09[012]
[1][DATA]BEGIN --- - 07:03:09[123]
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <tag1>some stuff</tag1>
  <nestedTag>
    <tag2>other stuff</tag2>
  </nestedTag>
</root>
[1][DATA]END --- - 07:03:09[123]
[1][DATA]BEGIN --- - 08:03:09[456]
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <tag1>some more data</tag1>
  <nestedTag>
    <tag2>fooband a bit more</tag2>
  </nestedTag>
</root>
[1][DATA]END --- - 08:03:09[456]

Here is the props.conf I ended up using (as per @scelikok's suggestion):

[my_sourcetype]
LINE_BREAKER = (\[1\]\[DATA\]BEGIN[-\s]+)
SHOULD_LINEMERGE = false
TRANSFORMS-transform2xml = transform2xml
KV_MODE = xml

And here is the corresponding transforms.conf, slightly tweaked - I ended up being a bit more explicit about the end of the event and removed some of the capturing groups:

[transform2xml]
REGEX = ^([^\[]+)\[\d+\][\r\n]+(<\?xml.*>[^\[]+)\[1\]\[DATA\]END --- - [\d:]+\[\d+\][\r\n]*
FORMAT = <time>$1</time>$2
DEST_KEY = _raw

It may not be perfect xml, but it works as expected and the xml is now automatically parsed.

Thanks again for your help @scelikok!
Hi Woodcock,

May I please double-check the nature of this setting as it stands today? Say I have the below:

[tcpout]
defaultGroup = group1, group2
blockOnCloning = [0]

[tcpout:group1]
server = server1:9997
blockOnCloning = [1]

[tcpout:group2]
server = server2:9997
blockOnCloning = [2]

Would the outcomes be as follows? I want to check whether setting it in the main [tcpout] stanza supersedes the separate groups, but I also want to make sure that if one side collapses, the other is fine.

ID | [0]   | [1]   | [2]   | Outcome if Server 1 collapses | Outcome if Server 2 collapses
1  | true  | true  | true  | Results stopped for both      | Results stopped for both
2  | true  | true  | false | Results stopped for both      | Results continue for 1
3  | true  | false | false | Results stopped for both      | Results stopped for both
4  | false | false | false | Results continue for 2        | Results continue for 1
5  | false | false | true  | Results continue for 2        | Results stopped for both
6  | false | true  | true  | Results stopped for both      | Results stopped for both
Hi @yuanliu,

I used all your solutions to arrive at this:

| eventstats max(alert_level) as max_val BY title1
| stats values(eval(if(alert_level=max_val,title4,""))) AS title4 max(alert_level) AS alert_level BY title1

Thank you all for your support.

Ciao.
Giuseppe
The savedsearch command has a method for passing variables to the search.  That should make it possible to pass different values for earliest and latest.  See the Search Reference manual for details.
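As a minimal sketch (the saved search name "My Report" and the token names et/lt are made up for illustration): if the saved search's SPL references tokens such as ... earliest=$et$ latest=$lt$ ..., you can invoke it with

| savedsearch "My Report" et="-7d@d" lt="@d"

and the key=value arguments are substituted into the $...$ placeholders before the saved search runs.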
The Splunk dev team is not here.  This is a Splunk community (user) site. The term 'search.log' is correct.  These files are not indexed, but are accessible via the Job Inspector. The cited docs link says that searches.log is no longer used.
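For reference, on disk that log lives in the search's dispatch directory, typically $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/search.log, and it is deleted together with the artifact when the job expires.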
OK. Please describe your ingestion process. Where do the events come from? How are they received/pulled? On which component? Where does the event stream go to from there? What components are involved and in what order? Where are you putting your settings?
There are generally two methods - either add metadata during ingestion or dynamically classify the sources during searching.

The latter approach is usually easiest done with a lookup, as @PaulPanther wrote. This way you don't have to touch your sources at all, you just have to keep the lookup contents up to date. The downside is that... you have to keep the lookup contents up to date, and it's desirable that you have a 1-1 mapping between an existing field (like host) and the information you're looking up. Otherwise you need some calculated fields evaluated conditionally... and it gets messy and hard to maintain.

Another approach is to add an indexed field directly at the source or at an intermediate forwarder. The IF approach of course requires you to have that intermediate HF to add a field with a different value for each group. (You can also do it directly at the receiving indexers, but then you'd have a hard time differentiating between sources - an ingest-time lookup? Ugh.)

You might as well add the _meta setting to a particular input on the source forwarder. For example:

_meta = source_env::my_lab

This will give each event ingested via an input with this setting an additional indexed field called source_env with a value of "my_lab". It's convenient and very useful if you want to do some summaries with tstats, but it also has downsides - you need to define it separately at each source (or at least in the [default] stanza for inputs as well as in the general [WinEventLog] stanza - default settings are not inherited by WinEventLog!). And it must have static value(s) defined. There is no way to define this setting dynamically, like "source_hostname::$hostname" or something like that - you must define it explicitly.

Another thing is that you can only have one _meta entry. Since there are no multiple _meta-something settings, multiple _meta definitions will overwrite each other according to normal precedence rules. So while you can define multiple indexed fields with:

_meta = app_name::testapp1 env::test_env

you can't define _meta = app_name::testapp1 in one config file and _meta = env::test_env in another. Only one of those fields would be defined (depending on the files' precedence).
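As a minimal sketch (the monitor path, index, sourcetype and field names below are made up for illustration), the source-side inputs.conf could look like this, with fields.conf on the search head so that field=value searches use the indexed values:

inputs.conf (on the source forwarder):

[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp:log
_meta = app_name::testapp1 env::test_env

fields.conf (on the search head):

[app_name]
INDEXED = true

[env]
INDEXED = true

At search time you can then filter with app_name::testapp1 (or app_name=testapp1 once fields.conf is in place) and use these fields in tstats.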
Thank you for the fast reply! This is rather unfortunate. Are there any aspirations to allow the use of variables in other actions in the future?
Unfortunately, for browser tests you can only use global variables to fill in fields on a web page with the "Fill in Field" action. Feel free to create an idea as a feature enhancement request at https://ideas.splunk.com/
Yes. The web interface is the only "standard" component (not counting any unpredictable things done by add-on developers) which behaves differently. While all other "areas of activity" (inputs, outputs, inter-splunkd connections) require certs in a single-file form (from the top: subject cert, private key, certificate chain), the web interface requires two separate files - one with the private key and another with the chained subject certificate. And TLS-protecting your web interface, while desirable as a general rule, has nothing to do with inputs and outputs.
Thank you for the idea; this is a pretty easy solution. Obviously less complicated than what I came up with =D
The documentation is not correct. You have to create two separate certificate files because the Splunk Web certificate must not contain the private key.

Web certificate format:

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the intermediate certificate) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the root certificate for the CA) ...
-----END CERTIFICATE-----

Server certificate format:

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... (server private key - passphrase protected) ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the intermediate certificate) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the certificate authority certificate) ...
-----END CERTIFICATE-----

Check out: Configure and install certificates in Splunk Enterprise for Splunk Log Observer Connect - Splunk Documentation

Final configuration must look like:

web.conf:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem
sslPassword = <priv_key_passwd>

server.conf:

[sslConfig]
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem
requireClientCert = false
sslPassword = <priv_key_passwd>
Hi @isoutamo,

I followed a similar strategy, which I'll list below (the peer-side change in step 5 is sketched after this post):

1. Set up the new CM and turn on indexer clustering.
2. Turn off the instance and copy 'etc/master-apps' and 'etc/manager-apps'.
3. Set up server.conf on the new instance with the relevant changes (pass4SymmKey in string format).
4. Start the new CM.
5. Change the CM URL on the indexers one by one.
6. Finally, change the URL on the search head after the indexers are migrated.

This had worked in the test environment, but when it was time for the production setup, the indexers failed to connect and would keep stopping after changing to the new CM.

Regards,
Pravin
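For reference, a minimal sketch of that peer-side server.conf change, assuming 9.x-style setting names (older releases use mode = slave and master_uri instead) and a made-up FQDN:

server.conf (on each indexer):

[clustering]
mode = peer
manager_uri = https://new-cm.example.com:8089
pass4SymmKey = <same pass4SymmKey as on the new CM>

Note that pass4SymmKey is stored encrypted after the first start, so it has to be re-entered in plain text on the new CM; if it doesn't match what the peers use, they will fail to join and the CM's splunkd.log will show authentication errors.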
Have you already checked the following docs? Troubleshoot the Splunk OpenTelemetry Collector — Splunk Observability Cloud documentation
You could create a lookup that groups the instances and adds information about the responsible colleagues and their mail addresses. Then you configure it as an automatic lookup; the benefit is that you only need to create one alert, because you can reference the mail addresses (e.g. $result.recipient$) in the alert action. Define an automatic lookup in Splunk Web - Splunk Documentation
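A minimal sketch of what that could look like (the lookup name, file name, field names and sourcetype are made up for illustration):

instance_owners.csv:

host,team,recipient
appserver01,platform,platform-team@example.com
dbserver01,dba,dba-team@example.com

transforms.conf:

[instance_owners]
filename = instance_owners.csv

props.conf (automatic lookup keyed on host):

[my:sourcetype]
LOOKUP-instance_owners = instance_owners host OUTPUT team recipient

With that in place, matching events get team and recipient fields at search time, and the alert's email action can use $result.recipient$ as the To address (with the trigger set to "For each result", so each row goes to the right recipient).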
This shouldn't be the case. With such a simple pattern there is not much backtracking. It would matter if there were wildcards, alternations and such; with a pretty straightforward match, that's not the issue.
The DEPTH_LIMIT parameter must be set in transforms.conf:

DEPTH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field extractions.
  For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE when running patterns that do not match.
* Use this to limit the depth of nested backtracking in an internal PCRE function, match().
  If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000

transforms.conf - Splunk Documentation
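A minimal sketch of where the setting goes (the stanza names, sourcetype and regex are made up for illustration):

transforms.conf:

[extract_request_fields]
REGEX = ^(?<client_ip>\S+)\s+\S+\s+(?<action>\w+)
DEPTH_LIMIT = 10000

props.conf:

[my:sourcetype]
REPORT-request_fields = extract_request_fields

For an inline EXTRACT-style extraction, the equivalent would be to put DEPTH_LIMIT directly in the props.conf stanza next to the EXTRACT-<name> setting.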