All Posts


In Splunk Cloud, search heads have the same list of index names as the indexers, so you can use REST without dispatching to the indexers:

| rest splunk_server=local /services/data/indexes ...
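For example, a minimal sketch of that search which just lists the index names known to the search head (the field selection and rename are my choices, not part of the original answer):

| rest splunk_server=local /services/data/indexes
| fields title
| rename title AS index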
We have a very vanilla SC4S configuration that has been working flawlessly, with a cron job that runs "service sc4s restart" every night to pick up upgrades. We just discovered that a few nights ago it did not come back from this nightly restart. When examining the journal with this command:

journalctl -b -u sc4s

we see this:

Error response from daemon: pull access denied for splunk/scs, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

This problem could happen to ANYBODY at ANY TIME, and it took us a while to completely work around it, so I am documenting the whole story here.
TargetLocation always comes up as Pending, which is not correct. I also tried changing the "for each" to "foreach sourcetype"; will that work? The sourcetype names are xml and raw_text. Please help me add the TargetLocation date.
Thanks. I see that RFC 4180 specifies the same thing, that the last line could end with or without a final CRLF: https://www.loc.gov/preservation/digital/formats/fdd/fdd000323.shtml In the Notes, General section at the end of that document: "The last record in a file may or may not end with a line break character."
I'm trying to create an alert that looks through a given list of indexes and triggers for each index showing zero results within a set timeframe. I'm trying with the following search:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| where count=0

But this doesn't work because the first line on its own only returns the indexes that are not empty; the empty ones produce nothing, not even count=0. I also tried:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| fillnull value=0 count
| where count=0

But that doesn't work either. The problem is that if "index5", for example, has no results, "| tstats count..." doesn't return anything for it, not even a null result, so "| fillnull" has no "index5" row to fill.

I have seen other solutions use

| rest /services/data/indexes ...

and join or append the searches together, but since I'm on Splunk Cloud this fails with the error "Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability".

The only working solution I have so far is to create a separate alert for each index I want to monitor with the following search:

| tstats count where index=<MY_INDEX>
| where count=0

but I would rather have a single alert with a list I can change when needed than multiple searches competing for a timeslot. I have also considered providing a lookup table with the list of indexes I want to check and comparing it against the results, but that seems too cumbersome.

Is there a way to trigger an alert for empty indexes from a single given list on Splunk Cloud?
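For completeness, one more pattern I am experimenting with (untested beyond a quick sketch; the subsearch simply fabricates a count=0 row per index in the list):

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| append
    [| makeresults
    | eval index=split("index1,index2,index3,index4,index5", ",")
    | mvexpand index
    | eval count=0
    | fields index count]
| stats sum(count) AS count BY index
| where count=0

The appended rows mean an empty index still survives the stats and can be caught by where count=0.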
Mind you, this RFC is informational and only aims to document common practices. It's by no means a standard.
Thanks.  I have submitted it as an idea at ideas.splunk.com.  https://ideas.splunk.com/ideas/APPSID-I-944
We can be more direct in our manipulation of the dashboard in newer versions of Splunk. The structure of the nested div elements for tables is as follows:

[ .splunk-view .splunk-table
    [ .shared-reportvisualizer - contains the table ]
    [ .splunk-view .splunk-paginator - contains the paginator ]
]

All we have to do is reverse the display order of the div elements in the .splunk-table container:

<row depends="$always_hide_css$">
  <panel>
    <html>
      <style>
        div[id^="topPaginatorTable"] .splunk-table {
          display: flex;
          flex-wrap: wrap;
        }
        div[id^="topPaginatorTable"] .shared-reportvisualizer {
          position: relative;
          order: 2;
        }
        div[id^="topPaginatorTable"] .splunk-paginator {
          position: relative;
          order: 1;
        }
      </style>
    </html>
  </panel>
</row>

Then add id="topPaginatorTable1", id="topPaginatorTable2", etc. to each table in the dashboard where you want to move the paginator to the top. I like this method better as it leaves no padding artifacts in the case that no paginator is required, and the rest of the HTML box model formats correctly for width-constrained panels.
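For example, a table stanza with the matching id might look like this (the search here is just a placeholder, not from the original post):

<table id="topPaginatorTable1">
  <search>
    <query>index=_internal | stats count by sourcetype</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</table>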
Using "Securing the Splunk platform with TLS" I have converted Microsoft provided certificates to pem format and verified with the "openssl verify -CAfile "CAfile.pem" "Server.pem" "  command. TLS c... See more...
Using "Securing the Splunk platform with TLS" I have converted Microsoft provided certificates to pem format and verified with the "openssl verify -CAfile "CAfile.pem" "Server.pem" "  command. TLS configuration of the web interface using web.conf is successful. TLS configuration of forwarder to indexer has failed consistently using the indexer server.conf file and the forwarder server.conf file as detailed in the doc. Our deployment is very simple; 1 indexer and a collection of windows forwarders. Has anyone been able to get TLS working between forwarder - indexer on version 9+ ? Any tips on splunkd.log entries that may point to the issue(s)?   Thanks for any help. I will be out of office next week but will return Dec 30 and check this. Thanks again.  
Hi @joewetzel63, In the error message it complains about the "/opt/splunkforwarder/var/run/splunk/tmp/unix_hardware_error_tmpfile" file. This tmp folder does not exist by default, which is why it cannot create the unix_hardware_error_tmpfile file. You can try creating the /opt/splunkforwarder/var/run/splunk/tmp folder. When I checked the add-on (v9.2.0), it uses the correct path, "$SPLUNK_HOME/var/run/splunk/unix_hardware_error_tmpfile". Can you confirm and try using the latest version of the add-on?
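A minimal sketch of that workaround (assuming the forwarder runs as the splunk user; adjust the ownership to match your install):

mkdir -p /opt/splunkforwarder/var/run/splunk/tmp
chown splunk:splunk /opt/splunkforwarder/var/run/splunk/tmp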
The Akamai dashboard is viewable for sc_admin users but not for regular users. It is the only app with this issue.
I'm trying to optimize the alerts since I'm having issues. Where I work, it's somewhat slow (1 to 3 days) to solve the problem when the alert is triggered, which causes the alert to trigger constantly during that time. I can't use throttling since my alerts do not depend on a single host or event. For example:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| dedup 1 host state_desc
| streamstats values(state_desc) as State by host
| eval Estado=case(State!="ONLINE", "Critico", State="ONLINE", "Safe")
| table Estado host State _time
| where Estado="Critico"

When the status of a host changes to critical, it triggers the alert. I cannot use throttling because, in the time span that the alert is silenced, another of the hosts may go critical and its alert would be missed completely. My idea is to create logic based on the results of the last triggered alert and compare them with the current one: if the host and status are the same as before, nothing fires; if the host and status differ from the previous trigger, it should fire. I thought about using the stored results of previous alerts, but I don't know how to search for that information. Does anyone have an idea? Any comment is greatly appreciated.
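One possible sketch of that "compare with the last run" logic, using a lookup as the state store (last_alert_state is a hypothetical lookup you would need to create once, and this is untested):

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| dedup 1 host state_desc
| streamstats values(state_desc) as State by host
| eval Estado=case(State!="ONLINE", "Critico", State="ONLINE", "Safe")
| lookup last_alert_state host OUTPUT Estado AS Estado_anterior
| eval cambio=if(isnull(Estado_anterior) OR Estado!=Estado_anterior, 1, 0)
| table host State Estado Estado_anterior cambio
| outputlookup last_alert_state
| where Estado="Critico" AND cambio=1

Each run writes the current per-host state back to the lookup and only lets through hosts whose state differs from what the previous run recorded, so a second host going critical during the quiet period would still fire.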
I didn't copy all the files and directories you mentioned, and I also didn't put the old CM in maintenance mode, but I did change the URL and FQDN for all the instances. The problem probably arose from the fact that the production instance has a lot of moving data, and not going into maintenance mode caused the issues. The test site also had moving data, but not nearly as much as production. What was surprising is that there were no logs showing any exact reason for the error. However, I used the techniques you mentioned and was able to migrate the CM to new hardware. Thanks to you @isoutamo
Hi @Sailesh6891 ,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Did you copy all the files and directories mentioned in item 2.4 of the referenced post? I'm not sure how it works, and whether there is additional stuff to do since you have changed the name/URL for the new master. I prefer to use FQDNs (CNAME or A records) as all instance names, to avoid the additional issues that can arise when there are too many changes at the same time. Is your old master still available to test with? Are there any reasonable error messages in the CM's or indexers' logs that would give more information about the issue?
You're supposed to check the log for this particular search, not the general logs ingested into _internal. The log for a particular search is, as far as I remember, part of the artifacts package from the search and gets removed after the search outlives its retention. So search.log is the thing you get to by clicking Job -> Inspect Job, where you have the link to see the search.log. And in your case it's probably an issue with permissions: you haven't exported the script itself properly from the app (I struggled with this for a long time myself; you can't do it via the GUI, exporting the lookup definition is not sufficient, you must export the script and allow reading).
Ahhhh... You had yet another field _called_ value. I suppose we all missed that and assumed "value" meant the value of one of the title* fields, not a separate field. *facepalm* In this case, you can still avoid using eventstats:

| sort - alert_level title1
| streamstats current=t dc(alert_level) as selector by title1
| where selector=1
| stats values(title4) as title4s by title1

Don't get me wrong - eventstats is a powerful and useful command, but with some bigger datasets you might consider alternatives.
Hi @scelikok

Thanks a lot for your reply, it was most helpful and it helped me find a solution. However, I realised that the snippet I had provided had some subtle differences from the actual data, so I had to slightly adapt your solution. That being said, I was under the impression that your regex was not quite right either, as I ran it through regex101 first and it only matched the first xml block (I stripped the beginning of the square bracket line to emulate the line breaker in props.conf).

So, to recap, here is a more accurate example of the log:

[1][DATA]BEGIN --- - 06:03:09[012]
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <tag1>value</tag1>
  <nestedTag>
    <tag2>another value</tag2>
  </nestedTag>
</root>
[1][DATA]END --- - 06:03:09[012]
[1][DATA]BEGIN --- - 07:03:09[123]
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <tag1>some stuff</tag1>
  <nestedTag>
    <tag2>other stuff</tag2>
  </nestedTag>
</root>
[1][DATA]END --- - 07:03:09[123]
[1][DATA]BEGIN --- - 08:03:09[456]
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <tag1>some more data</tag1>
  <nestedTag>
    <tag2>fooband a bit more</tag2>
  </nestedTag>
</root>
[1][DATA]END --- - 08:03:09[456]

Here is the props.conf I ended up using (as per @scelikok's suggestion):

[my_sourcetype]
LINE_BREAKER = (\[1\]\[DATA\]BEGIN[-\s]+)
SHOULD_LINEMERGE = false
TRANSFORMS-transform2xml = transform2xml
KV_MODE = xml

And here is the corresponding transforms.conf, slightly tweaked; I ended up being a bit more explicit about the end of the event and removed some of the capturing groups:

[transform2xml]
REGEX = ^([^\[]+)\[\d+\][\r\n]+(<\?xml.*>[^\[]+)\[1\]\[DATA\]END --- - [\d:]+\[\d+\][\r\n]*
FORMAT = <time>$1</time>$2
DEST_KEY = _raw

It may not be perfect xml, but it works as expected and the xml is now automatically parsed. Thanks again for your help @scelikok !
Hi Woodcock,

May I please double check the nature of this setting as it stands today? Say I have the below (the bracketed numbers [0], [1], [2] are just IDs so I can reference each setting in the table):

[tcpout]
defaultGroup = group1, group2
blockOnCloning = [0]

[tcpout:group1]
server = server1:9997
blockOnCloning = [1]

[tcpout:group2]
server = server2:9997
blockOnCloning = [2]

Would the outcomes be as follows? I want to check whether setting it in the main [tcpout] stanza supersedes the separate groups, but I also want to make sure that if one side collapses, the other is fine.

ID | [0]   | [1]   | [2]   | Outcome if Server 1 collapses | Outcome if Server 2 collapses
1  | true  | true  | true  | Results stopped for both      | Results stopped for both
2  | true  | true  | false | Results stopped for both      | Results continue for 1
3  | true  | false | false | Results stopped for both      | Results stopped for both
4  | false | false | false | Results continue for 2        | Results continue for 1
5  | false | false | true  | Results continue for 2        | Results stopped for both
6  | false | true  | true  | Results stopped for both      | Results stopped for both
Hi @yuanliu ,
I used all your solutions to have this:

| eventstats max(alert_level) as max_val BY title1
| stats values(eval(if(alert_level=max_val,title4,""))) AS title4 max(alert_level) AS alert_level BY title1

Thank you all for your support.
Ciao.
Giuseppe