All Posts

The splunk(-company)-wrapped syslog-ng service, "Splunk Connect for Syslog" (AKA SC4S), comes standard with a systemd unit file that reaches out to GitHub on every startup to obtain the latest container image. This had worked flawlessly since we first set up syslog inputs for the client. However, years later, somebody made a WAF change that blocked connectivity to GitHub, which included the download URL found in the unit file (specifically, ghcr.io/splunk/splunk-connect-for-syslog/container3:latest), and did not properly warn about or socialize this fact before doing so. This caused the sc4s service to be unable to restart, because the systemd unit file downloads a fresh image every time before it starts, which it could no longer do.

WARNING: if you set up SC4S the normal way, then you did so as user "root", so you will need to do all of this as user "root" as well.

The most immediate solution is to see if there is still an older image around to run, by using this command:

docker image ls

You should see something like this:

REPOSITORY                                               TAG      IMAGE ID       CREATED     SIZE
ghcr.io/splunk/splunk-connect-for-syslog/container2:2    latest   SomeImageID2   SomeDate    SomeSizeGB

If there is, you can modify the unit file by copying the "IMAGE ID" value (in this case "SomeImageID2") and changing this line:

Environment="SC4S_IMAGE=https://ghcr.io/splunk/splunk-connect-for-syslog/container2:2:latest"

To this:

Environment="SC4S_IMAGE=SomeImageID2"

And also commenting out this line, like this:

#ExecStartPre=/usr/bin/docker pull $SC4S_IMAGE

Then you need to reload systemd like this:

systemctl daemon-reload

This should allow you to start your service immediately as normal:

service sc4s start

Now you have the problem of how to get the latest image manually (now that the automatic download cannot work), which according to this link: https://splunk.github.io/splunk-connect-for-syslog/main/upgrade/ is now this:

ghcr.io/splunk/splunk-connect-for-syslog/container3:latest

The following link gave us all of what we needed, but we had to do it a few times with various options mined from the comments to get it exactly right: https://stackoverflow.com/questions/37905763/how-do-i-download-docker-images-without-using-the-pull-command

You will first have to install docker someplace that CAN get to the image URL. If you can run a browser there, just paste the URL into your browser and it should redirect to an actual page. If you only have the CLI there, just use curl to test, like this:

curl ghcr.io/splunk/splunk-connect-for-syslog/container3:latest

In our case, we just installed docker on a Windows laptop and then opened PowerShell to run these 2 commands:

docker pull ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
docker image ls

You should see something like this:

REPOSITORY                                             TAG      IMAGE ID       CREATED     SIZE
ghcr.io/splunk/splunk-connect-for-syslog/container3    latest   SomeImageID3   SomeDate    SomeSizeGB

Next you need to export the image to a file, like this:

docker save SomeImageID3 --output DockerImageSC4S.tar

Then transfer this to "/tmp" on your SC4S server host however you please, and load it like this:

docker load -i /tmp/DockerImageSC4S.tar

Then, of course, you need to re-modify the unit file using the new "SomeImageID3" value instead of "SomeImageID2".
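As a side note, the same pinning can also be expressed as a systemd drop-in instead of editing the vendor unit file directly. This is only a sketch: the unit name sc4s.service is assumed from "service sc4s start" above, and the image ID is the one found with "docker image ls".

# /etc/systemd/system/sc4s.service.d/override.conf  (create it with "systemctl edit sc4s")
[Service]
Environment="SC4S_IMAGE=SomeImageID2"
# Caution: adding an empty "ExecStartPre=" line here would reset ALL ExecStartPre
# directives from the vendor unit, so any other ExecStartPre lines it contains would
# have to be repeated after the reset. If in doubt, keep commenting out only the
# "docker pull" line in the unit file itself, as described above.

Either way, finish with "systemctl daemon-reload" before starting the service.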
Have you looked at this answer https://community.splunk.com/t5/Alerting/Throttle-alerts-based-on-field-value/m-p/172536 to see if it fulfills your needs?
If you want results by URI, then don't put anything else after by. But then the result is not the same as what you have with your current query.

.... | stats values(status) as status .... by URI

Just replace the .... before by with those other fields you want to see. But probably this is not what you are looking for? Can you tell us what you need to know, not how you are trying to do it?
Thanks for the response. There is a field called URI, and some URI values are coming up as duplicates. How can I adjust the query so that duplicate URIs won't come up?
1. You're using a very old Splunk version, judging by the screenshot.
2. Your initial search is very, very inefficient (Splunk cannot use its index of terms; it has to look through every single event to find the ones you are looking for).
3. What do you mean by "filter duplicate URL"? You're counting by a triplet - url, status and lob, whatever that is. So you'll get a separate result for each combination of those three values. If you don't want to break it down by status, don't include that field in the BY clause.
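For instance, if the goal is one row per URI with the success/failure breakdown side by side, something along these lines (field names, including the API_Staus spelling, taken from the question) may be closer to what is wanted:

"*/prescriptions/eni/api/api-cw/*" (URI != "*/prescriptions/eni/api/api-cw/legacySession/cache*")
| chart count over URI by API_Staus

chart turns each status value into its own column, so duplicates of a URI collapse into a single row.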
Trying to get success and failure status counts using the below query, but it's not filtering out the duplicate URLs. Can someone help me with this? I want the result in fewer rows, but lob, URI, API_Status and its count should show.

"*/prescriptions/eni/api/api-cw/*" (URI != "*/prescriptions/eni/api/api-cw/legacySession/cache*")
| stats count by lob,URI,API_Staus

The result is coming out as below.
The sourcetype names are xml and raw_text. That's why I mentioned that you need to adjust, as I did not know your sourcetype names. For this particular purpose, you can simply tweak the sourcetype name to represent CopyLocation and TargetLocation - or any other names you want to use in foreach.

| eval sourcetype = if(sourcetype == "raw_text", "CopyLocation", "TargetLocation")

(Without changing the sourcetype value, you can also use "foreach raw_text xml" and get the correct results, then rename the fields.)

Here is a complete emulation:

| makeresults
| eval sourcetype = "raw_text", data = mvappend("2024-12-18 17:02:50, file_name=\"XYZ.csv\", file copy success", "2024-12-18 17:02:58, file_name=\"ABC.zip\", file copy success", "2024-12-18 17:03:38, file_name=\"123.docx\", file copy success", "2024-12-18 18:06:19, file_name=\"143.docx\", file copy success")
| mvexpand data
| eval _time = strptime(replace(data, ",.+", ""), "%F %T")
| rename data AS _raw
| extract
| append
    [ makeresults
    | eval sourcetype = "xml", _raw = "2024-12-18 17:30:10 <FileTransfer status=\"success\"> <FileName>XYZ.csv</FileName> <FileName>ABC.zip</FileName> <FileName>123.docx</FileName> </FileTransfer>"
    | eval _time = strptime(replace(_raw, "<.+", ""), "%F %T")]
``` the above emulates sourcetype IN (CopyLocation, TargetLocation) ```
| eval sourcetype = if(sourcetype == "raw_text", "CopyLocation", "TargetLocation")
| eval target_log = replace(_raw, "^[^<]+", "")
| spath input=target_log
| mvexpand FileTransfer.FileName
| eval FileName = coalesce(file_name, 'FileTransfer.FileName')
| chart values(_time) over FileName by sourcetype
| sort CopyLocation
| foreach *Location
    [ eval <<FIELD>> = strftime(<<FIELD>>, "%F %T")]
| fillnull TargetLocation value=Pending
Let me first try to understand the problem: You want to find servers whose end state is offline, but whose immediately previous reported state is not offline, i.e., those whose state has newly become offline. Is this correct? In other words, given these mock events:

_time               host    state_desc
2024-12-20 18:00    host1   not online
2024-12-20 16:00    host2   not online
2024-12-20 14:00    host3   ONLINE
2024-12-20 12:00    host4   not online
2024-12-20 10:00    host0   not online
2024-12-20 08:00    host1   ONLINE
2024-12-20 06:00    host2   not online
2024-12-20 04:00    host3   not online
2024-12-20 02:00    host4   ONLINE
2024-12-20 00:00    host0   not online
2024-12-19 22:00    host1   not online
2024-12-19 20:00    host2   ONLINE
2024-12-19 18:00    host3   not online
2024-12-19 16:00    host4   not online
2024-12-19 14:00    host0   ONLINE
2024-12-19 12:00    host1   not online
2024-12-19 10:00    host2   not online
2024-12-19 08:00    host3   ONLINE
2024-12-19 06:00    host4   not online
2024-12-19 04:00    host0   not online
2024-12-19 02:00    host1   ONLINE
2024-12-19 00:00    host2   not online
2024-12-18 22:00    host3   not online
2024-12-18 20:00    host4   ONLINE
2024-12-18 18:00    host0   not online
2024-12-18 16:00    host1   not online
2024-12-18 14:00    host2   ONLINE
2024-12-18 12:00    host3   not online
2024-12-18 10:00    host4   not online
2024-12-18 08:00    host0   ONLINE
2024-12-18 06:00    host1   not online
2024-12-18 04:00    host2   not online
2024-12-18 02:00    host3   ONLINE
2024-12-18 00:00    host4   not online
2024-12-17 22:00    host0   not online

you want to alert on host1 and host4 only.

To do this with streamstats, you will need to sort events this way and that. I usually consider such sorts costly. (And I am quite fuzzy on streamstats :-) So, I consider this one of the few good uses of transaction. Something like:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| transaction host endswith="state_desc=ONLINE" keepevicted=true
| search eventcount = 1 state_desc != ONLINE

Here is an emulation of the mock data for you to play with and compare with real data:

| makeresults count=35
| streamstats count as state_desc
| eval _time = relative_time(_time - state_desc * 7200, "-0h@h")
| eval host = "host" . state_desc % 5, state_desc = if(state_desc % 3 > 0, "not online", "ONLINE")
``` the above emulates index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db ```

Output from the search is:

_time               closed_txn  duration  eventcount  field_match_sum  host    linecount  state_desc
2024-12-20 18:00    0           0         1           1                host1   1          not online
2024-12-20 12:00    0           0         1           1                host4   1          not online

The rest of your search is simply manipulation of display strings.
In Splunk Cloud, search heads have the same list of index names as the indexers, so you can use REST without sending the request to the indexers:

| rest splunk_server=local /services/data/indexes ...
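For example, a sketch along those lines (index names copied from the question; not tested on Cloud) that uses the local REST list to supply the zero rows tstats can't produce:

| tstats count where index IN (index1, index2, index3, index4, index5) by index
| append
    [| rest splunk_server=local /services/data/indexes
    | search title IN (index1, index2, index3, index4, index5)
    | rename title as index
    | eval count=0
    | fields index count]
| stats max(count) as count by index
| where count=0

The append branch contributes a count=0 row for every index on the list, so an index with no events still shows up and survives the final where.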
We have a very vanilla SC4S configuration that has been working flawlessly with a cron job to do "service sc4s restart" every night to upgrade. We just discovered that a few nights ago, it did not come back from this nightly restart.

When examining the journal with this command:

journalctl -b -u sc4s

We see this:

Error response from daemon: pull access denied for splunk/scs, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

This problem could happen to ANYBODY at ANY TIME, and it took us a while to completely work around it, so I am documenting the whole story here.
TargetLocation always comes up as Pending, which is not correct. Also, I tried changing the foreach to foreach sourcetype. Will that work? The sourcetype names are xml and raw_text. Please help me add the TargetLocation date.
Thanks. I see that RFC 4180 specifies the same thing -- that the last line could end with or without a final CRLF: https://www.loc.gov/preservation/digital/formats/fdd/fdd000323.shtml

In the Notes, General section at the end of that document: "The last record in a file may or may not end with a line break character."
I'm trying to create an alert that looks through a given list of indexes and triggers an alert for each index showing zero results within a set timeframe. I'm trying with the following search:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| where count=0

But this doesn't work, because running the first line on its own only shows the indexes that are not empty, and nothing at all, not even count=0, for the empty indexes. I also tried:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| fillnull count value=0
| where count=0

But that doesn't work either. The problem is that if "index5", for example, is showing no results, "| tstats count..." doesn't return anything, not even a null result. So something like "| fillnull" does not work at the end, because there is no "index5" row to "fillnull".

I have seen other solutions use:

| rest /services/data/indexes ...

and join or append the searches to each other, but since I'm on Splunk Cloud, that doesn't work due to the error "Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability".

The only working solution I have so far is to create an alert for each index I want to monitor, with the following search:

| tstats count where index=<MY_INDEX>
| where count=0

but I would rather have a single alert running each time, with a list that I can change if I need to, than multiple searches competing for a timeslot and all that. I have considered other solutions, like providing a lookup table with a list of indexes I want to search and using lookup to compare against the results, but that seems too cumbersome.

Is there a way to trigger an alert for empty indexes from a single given list on Splunk Cloud?
Mind you, this RFC is informational and only aims to document common practices. It is by no means a standard.
Thanks.  I have submitted it as an idea at ideas.splunk.com.  https://ideas.splunk.com/ideas/APPSID-I-944
We can be more direct in our manipulation of the dashboard in newer versions of Splunk. The structure of the nested div elements for tables is as follows:

[ .splunk-view .splunk-table
    [ .shared-reportvisualizer - contains the table ]
    [ .splunk-view .splunk-paginator - contains the paginator ]
]

All we have to do is reverse the display order of the div elements in the .splunk-table container ...

<row depends="$always_hide_css$">
  <panel>
    <html>
      <style>
        div[id^="topPaginatorTable"] .splunk-table {
          display: flex;
          flex-wrap: wrap;
        }
        div[id^="topPaginatorTable"] .shared-reportvisualizer {
          position: relative;
          order: 2;
        }
        div[id^="topPaginatorTable"] .splunk-paginator {
          position: relative;
          order: 1;
        }
      </style>
    </html>
  </panel>
</row>

Then add id="topPaginatorTable1", id="topPaginatorTable2", etc., to each table in the dashboard where you want to move the paginator to the top.

I like this method better as it leaves no padding artifacts in the case that no paginator is required, and the rest of the HTML box model formats correctly for width-constrained panels.
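For reference, a minimal sketch of a table that picks up one of those ids (the search and title here are invented for illustration):

<row>
  <panel>
    <table id="topPaginatorTable1">
      <title>Example table</title>
      <search>
        <query>index=_internal | stats count by sourcetype</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="count">10</option>
    </table>
  </panel>
</row>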
Using "Securing the Splunk platform with TLS" I have converted Microsoft provided certificates to pem format and verified with the "openssl verify -CAfile "CAfile.pem" "Server.pem" "  command. TLS c... See more...
Using "Securing the Splunk platform with TLS" I have converted Microsoft provided certificates to pem format and verified with the "openssl verify -CAfile "CAfile.pem" "Server.pem" "  command. TLS configuration of the web interface using web.conf is successful. TLS configuration of forwarder to indexer has failed consistently using the indexer server.conf file and the forwarder server.conf file as detailed in the doc. Our deployment is very simple; 1 indexer and a collection of windows forwarders. Has anyone been able to get TLS working between forwarder - indexer on version 9+ ? Any tips on splunkd.log entries that may point to the issue(s)?   Thanks for any help. I will be out of office next week but will return Dec 30 and check this. Thanks again.  
Hi @joewetzel63, In the error message it complains about the "/opt/splunkforwarder/var/run/splunk/tmp/unix_hardware_error_tmpfile" file. This tmp folder does not exist by default, which is why it cannot create the unix_hardware_error_tmpfile file. You can try creating the /opt/splunkforwarder/var/run/splunk/tmp folder. When I checked the add-on (v9.2.0), it uses the correct path, "$SPLUNK_HOME/var/run/splunk/unix_hardware_error_tmpfile". Can you confirm, and try using the latest version of the add-on?
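A minimal sketch of that workaround, assuming the forwarder is installed in the default location (adjust the path and owner if yours differs):

mkdir -p /opt/splunkforwarder/var/run/splunk/tmp
# if splunkd runs as a non-root user such as "splunk", also hand the folder over:
# chown splunk:splunk /opt/splunkforwarder/var/run/splunk/tmp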
The Akamai dashboard is viewable for sc-admins but not for regular users. It is the only app with this issue.
I'm trying to optimize the alerts since I'm having issues. Where I work, it's somewhat slow to solve the problem (1 to 3 days) when the alert is triggered. This causes the alert to constantly trigger during that time. I can't use throttling since my alerts do not depend on a single host or event. For example:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| dedup 1 host state_desc
| streamstats values(state_desc) as State by host
| eval Estado=case(State!="ONLINE", "Critico", State="ONLINE", "Safe")
| table Estado host State _time
| where Estado="Critico"

When the status of a host changes to critical, it triggers the alert. For this reason, I cannot use throttling, because in the time span that this alert is silenced, another host may change state and be missed completely. My idea is to create logic based on the results of the last triggered alert and compare them with the current alert: if the host and status are the same, nothing fires; however, if the host and status are different from the previous trigger, it should fire. I thought about using the data where it's stored, but I don't know how to search for this information. Does anyone have an idea? Any comment is greatly appreciated.
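Not part of the original question, but one sketch of that "compare with the last trigger" idea is to persist the per-host state in a CSV lookup between runs. The lookup name last_critical_state.csv is made up and would need to be seeded once (for example with | makeresults | eval host="none", Estado="Safe" | outputlookup last_critical_state.csv), and this version simply takes the latest state per host instead of the dedup/streamstats combination above:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| stats latest(state_desc) as State latest(_time) as _time by host
| eval Estado=if(State!="ONLINE", "Critico", "Safe")
``` what did we alert on last time? (hypothetical lookup) ```
| lookup last_critical_state.csv host OUTPUT Estado as PrevEstado
``` remember the current state of every host for the next run ```
| outputlookup last_critical_state.csv
``` only keep hosts that have newly become critical ```
| where Estado="Critico" AND (isnull(PrevEstado) OR PrevEstado!="Critico")
| table _time host State Estado

Because outputlookup runs before the final where, the lookup always reflects the latest state of every host, while the alert only fires for hosts whose state has changed to Critico since the previous run.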