All Topics



Which forwarder agent version includes the fix for the OpenSSL 1.0.2 < 1.0.2zk vulnerability? If there is no fix for this yet, when can we expect one, and which forwarder version will include it? Scanner findings: OpenSSL SEoL (1.0.2.x); OpenSSL 1.0.2 < 1.0.2zk Vulnerability.
Hello, we have hit an issue where the Splunk Add-on for Unix and Linux is incompatible with RHEL 9.4 (because of a scripted input). Are the add-ons used by Splunk PCI Compliance compatible with RHEL 9.4 and above? Regards
Hello Splunkers, I have a requirement to run an alert on the second Tuesday of each month at 5:30am. I came up with:

30 05 8-14 * 2

However, Splunk runs it every Tuesday, regardless of whether the date falls between the 8th and the 14th. Is this a shortcoming in Splunk, or am I doing something wrong?
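For readers hitting the same thing: this is standard cron behavior rather than a Splunk bug. When both the day-of-month and day-of-week fields are restricted, cron fires when either one matches, so 30 05 8-14 * 2 runs on the 8th through 14th and on every Tuesday. A common workaround (a sketch; the base search is a placeholder) is to schedule for the 8th-14th of every month and gate on the weekday inside the search:

Cron schedule: 30 05 8-14 * *

<your base search>
| where strftime(now(), "%A") == "Tuesday"

With that gate, the search only returns results (and the alert only fires) on the one Tuesday that falls between the 8th and the 14th.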
Hi Team, we are trying to ingest JSON data with a custom sourcetype. With the current configuration, all JSON objects are being combined into a single event in Splunk. Ideally, each JSON object should be recognized as a separate event, but the configuration is not breaking them apart as expected. I observed that each JSON object has a comma after the closing brace }, which appears to be causing the issue by preventing Splunk from treating each JSON object as a separate event. Sample data:

{ "timestamp":"1727962122", "phonenumber": "0000000" "appname": "cisco" },
{ "timestamp":"1727962123", "phonenumber": "0000000" "appname": "windows" },

Error message: JSON StreamID:0 had parsing error: Unexpected character while looking for value comma ','

Thanks in advance
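For reference, one way to attack this is event breaking in props.conf on the indexer or heavy forwarder, discarding the comma between objects. This is an untested sketch; the sourcetype name is a placeholder and the regexes assume the sample format above:

[my_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,[\r\n\s]*)\{
SEDCMD-strip_trailing_comma = s/},\s*$/}/
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %s
KV_MODE = json

LINE_BREAKER's capture group is the text that gets thrown away, so each event ends at } and the next begins at {. Note the sample also appears to be missing a comma between "phonenumber" and "appname"; if that's real and not a paste artifact, each object is still invalid JSON and needs fixing at the source.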
I sent the following SPL to the background as a job in Splunk:

| metadata type=sourcetypes | search totalCount > 0

I then deleted this search job, but when I refresh the Splunk search page (F5), the same job runs again. How can I delete this job completely? It keeps getting executed over and over.
We found the Visdom for Citrix VDI listing on Splunkbase interesting, but we don't see how to download the app to review it. Is this app still available and supported by the developer(s)?
I haven't upgraded the UF in a while, and I'm having some trouble figuring out how I should proceed with bringing it up to date. I see that the current version has changed the user from splunk to splunkfwd. I also see that updating an existing UF keeps the user as splunk (this seems to work, but not always). This means that new installations will use a different username than upgraded UFs. This is a problem for me because I use scripts to make the permission changes that give splunk access to the appropriate log files. I'm not finding a lot of guidance on how to keep this sane. How have other organizations dealt with this? I'm tempted to uninstall the UF and do a fresh install on every system. That would force me to manage Splunk servers differently than other Linux servers, but it has to be less complicated than trying to keep track of which systems use splunk and which use splunkfwd.
Hi All, the version of DB Connect that I downloaded is having issues with sending data.
Hi, I am kind of stuck and need help. I am creating a chart in a Splunk dashboard, and for the y-axis I have nearly 20 values that are shown as legend entries. After a certain number of values they are grouped as "other", which I don't want; I need them displayed as separate series. I am also willing to turn off the legend. The query used is:

index="xyz"
| rex field=group "<Instance>(?<instance>[^<]+)</Instance>"
| rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>"
| chart values(sessions) BY _time, instance

May I know which option in the chart will stop collapsing the y-axis values?
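The "other" grouping comes from the chart command's default series limit, not from the visualization. Removing the cap with limit=0 and disabling the residual bucket with useother=f keeps every instance as its own series (option placement can vary by version; see the chart command reference):

index="xyz"
| rex field=group "<Instance>(?<instance>[^<]+)</Instance>"
| rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>"
| chart values(sessions) BY _time, instance useother=f limit=0

To hide the legend in a Simple XML dashboard, the chart option charting.legend.placement can be set to none.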
I am a grad student and I recently took a quiz on Splunk. There was a true/false question. Q: Splunk Alerts can be created to monitor machine data in real-time, alerting of an event as soon as it is logged by the host. I marked it as false, because it should be "as soon as the event gets indexed by Splunk" rather than "as soon as the event gets logged by the host". I raised the question because I was not awarded marks for it, but the counter-argument was "per-result triggering helps to achieve this". Isn't it basic that Splunk can only read indexed data? Can anyone please verify whether I'm correct? Thanks in advance.
Hi, our company does not yet have Splunk Enterprise Security, but we are considering getting it. Currently, our security posture includes a stream of EDR data from Carbon Black containing the EDR events and watchlist hits. We want to correlate the watchlist hits to create incidents. Is this something Splunk Enterprise Security can do right out of the box, given access to the EDR data? If so, how do we do this in the Splunk Enterprise Security dashboard?
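At a high level, an ES correlation search is a scheduled search with a notable-event (incident) action attached, so the question is mostly whether the Carbon Black data is onboarded and CIM-mapped. A minimal sketch of the shape such a search takes (index, sourcetype, and field names below are placeholders, not the actual Carbon Black TA fields):

index=carbonblack sourcetype=carbonblack:watchlist
| stats count, values(watchlist_name) AS watchlists BY dest

Saved as a correlation search in ES with the notable adaptive-response action, each result row would open an incident on the Incident Review dashboard.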
I have XML input logs in Splunk. I have already extracted the required fields, totaling 10. I need to ensure that any other extracted fields are ignored and not indexed in Splunk. Can I set it up so that any field not in my extracted list is automatically ignored? Is this possible?
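Two points that may reframe this: fields extracted at search time are not stored in the index at all, so there is normally nothing extra being indexed. If the goal is simply to limit what a search returns, the fields command keeps only a named list (field names below are placeholders):

<your search>
| fields field1, field2, field3

Dropping content at index time is a separate exercise (props/transforms applied before the data is written).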
Hi everyone, I have started working with Splunk UBA recently and have some questions.
Anomalies: How long does it usually take to identify anomalies after receiving the logs? Can I define anomaly rules? Is there any documentation explaining what the existing anomaly categories are based on, or what they look for in the traffic?
Threats: How long does it take to trigger threats after anomalies are identified? Is there any source I can rely on for creating threat rules? I am creating and testing rules, but getting no results.
I'm using a query which returns an entire day's data:

index="index_name" source="source_name"

This search returns over 10 million events. My requirement is to receive an alert if the volume drops below 10 million. But when this alert runs, the search never finishes before the alert evaluates, because it takes a long time, so the alert triggers every time. Is there any way to trigger this alert only after the search has fully completed?
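One common pattern (a sketch built on the search above, assuming a scheduled alert) is to have the alert evaluate an aggregate rather than raw events; stats emits its single result row only once the search has finished, so the condition is judged on the final count:

index="index_name" source="source_name"
| stats count
| where count < 10000000

Set the trigger condition to "number of results is greater than 0". Since only a count flows to the result set, this is also far cheaper than returning 10 million events.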
I've imported a CSV file, and one of the fields, called "Tags", looks like this:

Tags="avd:vm, dept:support services, cm-resource-parent:/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1, manager:JohnDoe@email.com"

I'd like to split each of these tags into its own field/value, AND extract the first part of the tag as the field name. The resulting fields/values would look like this:

avd="vm"
dept="support services"
cm-resource-parent="/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1"
manager="JohnDoe@email.com"

I've looked at a lot of examples with rex, MV commands, etc., but nothing that pulls the new field name out of the original field. The format of the Tags field is always the same as above, for all events. Thank you!
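A sketch of one way to do this, assuming the Tags format is always key:value pairs separated by commas and the first colon splits the key from the value:

| rex field=Tags max_match=0 "(?<kv>[^,]+)"
| mvexpand kv
| rex field=kv "^\s*(?<key>[^:]+):(?<value>.+)"
| eval {key}=value
| fields - kv, key, value

The {key} syntax in eval creates a field named after the value of key. Note that mvexpand produces one event per tag, so if everything needs to end up back on a single event, follow this with something like stats values(*) AS * BY <some unique id>.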
Apply the following workaround in default-mode.conf. You can also push this change via a deployment server (DS) to thousands of universal forwarders. Add index_thruput to the list of disabled processors by adding the following lines, as is, to default-mode.conf:

#Turn off a processor
[pipeline:indexerPipe]
disabled_processors = index_thruput, indexer, indexandforward, latencytracker, diskusage, signing, tcp-output-generic-processor, syslog-output-generic-processor, http-output-generic-processor, stream-output-processor, s2soverhttpoutput, destination-key-processor

NOTE: PLEASE DON'T APPLY THIS ON HF/SH/IDX/CM/DS. Use a different app (not the SplunkUniversalForwarder app) to push the change.
I have an appliance that can only forward syslog via UDP. Is there a way for me to send the UDP syslog to a machine that has a heavy forwarder or UF on it, and have the forwarder relay the syslog via TLS to the server running my Splunk Enterprise instance?
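Yes, that's a standard pattern: a UDP input on the intermediate forwarder and an SSL-enabled output to the indexer. A rough sketch (port, server name, and certificate paths are placeholders, and the receiving indexer needs a matching splunktcp-ssl input configured):

inputs.conf on the forwarder:

[udp://514]
sourcetype = syslog
connection_host = ip

outputs.conf on the forwarder:

[tcpout:primary_indexers]
server = splunk.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = <password>
sslVerifyServerCert = true

Note that a UF will pass the syslog through unparsed, while a heavy forwarder can parse and route it before sending.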
I have a hostname.csv file containing these attributes:

hostname.csv
ip          mac          hostname
x.x.x.x                  abc_01
            00:00:00     def_02
x.x.x.y     00:00:11     ghi_03
                         jkl_04

I would like to search in Splunk with index=* host=* ip=* mac=*, compare my host field to the hostname column from the lookup file hostname.csv, and, if it matches, write the ip and mac values back to hostname.csv. The result would look like this:

new hostname.csv
ip          mac          hostname
x.x.x.x     00:new:mac   abc_01
x.x.y.new   00:00:00     def_02
x.x.x.y     00:00:11     ghi_03
new.ip      new:mac      jkl_04

Thank you for your help!
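A sketch of the usual pattern, assuming host in the events matches the hostname column and the latest observed ip/mac should win while existing values are kept for hosts that were not seen:

| inputlookup hostname.csv
| join type=left hostname
    [ search index=* host=* ip=* mac=*
      | stats latest(ip) AS new_ip, latest(mac) AS new_mac BY host
      | rename host AS hostname ]
| eval ip=coalesce(new_ip, ip), mac=coalesce(new_mac, mac)
| fields ip, mac, hostname
| outputlookup hostname.csv

join is subject to subsearch limits, so on large result sets an append + stats latest() pattern is safer, but the idea is the same: enrich the lookup rows, then write the file back with outputlookup.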
Hello All, I found a weird problem/defect; not sure whether you are seeing the same. Issue: I am unable to bind an IP and get the error "Oops, the server encountered an unexpected condition". I followed https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/BindSplunktoanIP. It's related to editing the correct web.conf file. The document does not say which web.conf file to edit, and there are eight web.conf files in total. If you try copying web.conf from the 'default' folder into the 'local' folder and editing mgmtHostPort (with the correct IP, port, etc.), it still does not work. Resolution: if you edit web.conf (for mgmtHostPort) in the location below, it works perfectly, i.e. you are able to launch Splunk on 'IP address:8000':

C:\Program Files\Splunk\var\run\splunk\merged\web.conf

However, if you restart Splunk Enterprise via the web console (the 'Restart' button within the application), the setting is lost and you need to do it again. If you restart via Windows services > splunkd, there is no problem. Environment used: Splunk Enterprise 9.3.1 (on-prem), OS: Windows Server 2022. Also tried on Splunk Enterprise 9.2.1 (on-prem), which has the same problem.
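For anyone landing here: everything under var\run\splunk\merged is a generated, merged view of the running configuration, which would explain why edits there don't survive a restart from the UI. The documented place for this change is under %SPLUNK_HOME%\etc (the IP below is a placeholder):

In %SPLUNK_HOME%\etc\splunk-launch.conf:

SPLUNK_BINDIP=10.1.2.3

In %SPLUNK_HOME%\etc\system\local\web.conf:

[settings]
mgmtHostPort = 10.1.2.3:8089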
Right now I have an issue with duplicate notables. I want a notable to re-generate only if new events have added to its risk score, not if no new events have happened and the risk score has remained the same. I have tried adjusting our base correlation search's throttling to throttle by risk object for 7 days, because our correlation search looks back over the last 7 days' worth of alerts to determine whether or not to trigger a notable. Which brings me to this question: do the underlying alerts (i.e., the alerts that contribute to the risk score that ultimately determines whether a notable is generated) also need to be throttled for the past 7 days? Right now those alerts are set to throttle by username for 1 day.
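For reference, the throttle settings being described live in savedsearches.conf and look roughly like this (values are placeholders matching the setup above):

alert.suppress = 1
alert.suppress.fields = risk_object
alert.suppress.period = 7d

Whether the contributing risk-rule alerts need the same window is a tuning question: their throttle only limits how often they create new risk events, which in turn is what can change the risk score.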