All Posts


Here is an old post where you can find links to scripts that can fetch old versions for you: https://community.splunk.com/t5/Installation/Need-Splunk-Universal-Forwarder-7-x/m-p/695726/highlight/true#M14117 and https://github.com/ryanadler/downloadSplunk
You can ignore the search= command. The reason I am using dedup is that there is a large number of devices; one device can have around 20-25 events.
Additions to earlier answers. Splunk has the Voice of the Customer program (https://voc.splunk.com/), where some new beta versions are available for you to test and give feedback on to Splunk. Unfortunately, there is currently nothing OIDC-related available. Then there are the Customer Advisory Programs, where customers can give direct feedback to Splunk's PMs. Every session (held quarterly or so) includes some roadmap presentations; in those you have the opportunity to tell the PMs directly what you need and why.
As @PickleRick said, this is a timezone issue. Are all those logs wrongly timed, or only some? I ask because your SC4S may be in one TZ while you are collecting syslogs from several different locations. Also, are your Splunk servers and SC4S in the same TZ? There are at least two common syslog formats: RFC 3164 (aka BSD syslog) and RFC 5424. The newer one (RFC 5424) contains TZ information in every event, but the old one carries only the date and time, with no TZ information. Check which format those sources use and, if possible, switch to the RFC 5424 version. If you cannot, then you must add TZ information to those events on the SC4S or Splunk HEC side. Here are instructions for it: https://splunk.my.site.com/customer/s/article/Splunk-Connect-for-Syslog-Events
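Before touching the configuration, it may help to quantify the skew. A minimal sketch (the index and sourcetype names are placeholders, not from this thread) that compares each event's parsed timestamp with the time it was actually indexed:

index=your_syslog_index sourcetype=your_sc4s_sourcetype
| eval lag_hours=round((_indextime - _time)/3600, 1)
| stats count by host, lag_hours

A constant multi-hour lag_hours for a given host points at a missing TZ rather than a transport delay.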
Hi everyone! I'm currently working on a Splunk SOAR on-premises deployment and evaluating its performance on an AWS EC2 t3.xlarge instance (4 vCPU, 16 GB RAM, EBS-backed storage). I'd love your input on the following:
- What would be a recommended build configuration (CPU, RAM, disk) to support this kind of usage in playbooks?
- Does allowing multiple users to run playbooks simultaneously change the sizing recommendations?
- Any experience with tuning playbook runners or autoscaling settings to handle user-driven playbook execution effectively?
Any advice or sizing tips from your deployments would be much appreciated. Thanks in advance!
1. If the events are assigned a wrong timestamp, they _are_ searchable, but the default search range ends at "now", so those events do not fall into that range. Try searching with "latest=+12h" to see if the events are "properly" indexed into the future. 2. It seems like a timezone issue. What timezone is your source in? What timezone does your SC4S run in? What timezone do your Splunk indexers (or HF, if you're sending to a HF) run in?
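For example, a minimal sketch (the index name is a placeholder) that extends the window into the future and shows how far ahead of "now" each event landed:

index=your_index earliest=-4h latest=+12h
| eval hours_ahead=round((_time - now())/3600, 1)
| stats count by hours_ahead

If the events cluster at a constant positive offset (e.g. 7 hours), that is the TZ skew.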
There are two things here which caught my attention. 1. You're doing some operations on your data (which prevent Splunk from auto-optimizing your search) and then, way down the road, you add | search DeviceName=something. If you add this condition to the initial search, you will be processing just a small subset of your events instead of the whole lot. 2. The use of dedup. Are you absolutely sure you want to use this command? It keeps just the first result for the given field(s).
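For illustration, using the search from this thread, the reordering would look roughly like this (only the base line changes; the rest of the pipeline stays as it was):

index=endpoint_defender source="AdvancedHunting-DeviceInfo" DeviceName="bie-n1690.emea.duerr.int" (DeviceType=Workstation OR DeviceType=Server)
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| ...

This way the indexers discard non-matching events before any of the rex/lookup work runs.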
@SN1 You can sort by _time and dedup on the device, or you can use stats last() as well. E.g.:

index=endpoint_defender source="AdvancedHunting-DeviceInfo" DeviceType=Workstation OR DeviceType=Server SensorHealthState IN ("active", "Inactive", "Misconfigured", "Impaired communications", "No sensor data") DeviceName="bie-n1690.emea.duerr.int"
| sort 0 - _time
| dedup DeviceName
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex(Region, 0), "4LetCode"=mvindex('4LetCode', 0)
| rename "3-Letter-Code" as CC
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
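For completeness, the stats route mentioned above would look roughly like this (a sketch, not a drop-in replacement: stats keeps only the fields you aggregate, so every column you still need must be listed; latest() is the time-ordered variant of last()):

index=endpoint_defender source="AdvancedHunting-DeviceInfo" (DeviceType=Workstation OR DeviceType=Server)
| stats latest(SensorHealthState) as SensorHealthState, latest(Timestamp) as Timestamp by DeviceName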
Hello, I have a search and I want only the latest result of this search. The problem is that for one DeviceName there are multiple SensorHealthState values; it was Inactive before, but the latest event shows that the device is active now. However, this search shows Inactive. How can I get the latest result?

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| dedup DeviceName
| search DeviceType=Workstation OR DeviceType=Server
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex('Region',0), "4LetCode"=mvindex('4LetCode',0)
| rename "3-Letter-Code" as CC
| search DeviceName="bie-n1690.emea.duerr.int"
| search SensorHealthState="active" OR SensorHealthState="Inactive" OR SensorHealthState="Misconfigured" OR SensorHealthState="Impaired communications" OR SensorHealthState="No sensor data"
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState
We are using SC4S to collect local logs from FortiAnalyzer. We've noticed an error: the timestamp within the log file does not match the event time in Splunk. This delay is causing an issue: when logs are first sent from FortiAnalyzer, they are not immediately searchable in Splunk. Instead, they only become searchable 7 hours later. The problem appears to be isolated to the FortiAnalyzer local logs. All other log sources collected via SC4S are working correctly; even the logs forwarded through FortiAnalyzer to Splunk have log timestamps and event times that match perfectly. How can we resolve this issue?
In addition to what @PickleRick already points out, you need to clearly describe your use case, as opposed to throwing complex SPL searches at volunteers, if you want to receive concrete help. What kind of data is in the indices si_error and internal_statistics? (Give a sample dataset.) What exactly do you expect as a result of append, or of any other command? (Show expected results.) What is the logic that connects your dataset to the expected results? As @SanjayReddy notes, there is no reason to expect append to be "always giving me only one section of the results." You didn't even show sample results from each search to prove that append, or any other command, should give you more than "only one section of the results." I want to dig deeper into the logic. @SanjayReddy assumes that you want two fields (columns), Scada_count and dda_count, in the same rows. However, your first search, if it has output, is dominated by | stats count as Scada_count by Area. This means that if there is more than one value of Area, this search will have more than one row of Scada_count. (If there is only ever one value of Area, why bother grouping by Area?) On the other hand, the second search can only ever produce one row of dda_count. What exactly do you expect append to achieve?
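To make the distinction concrete, here is a hypothetical sketch (only the index names come from the thread; the rest is illustrative). append stacks the subsearch's rows under the main results, so you get one row per Area plus one extra row holding only dda_count:

index=si_error | stats count as Scada_count by Area
| append [ search index=internal_statistics | stats count as dda_count ]

appendcols, by contrast, pastes the subsearch's columns alongside the existing rows, so dda_count would land in the first Area row:

index=si_error | stats count as Scada_count by Area
| appendcols [ search index=internal_statistics | stats count as dda_count ]

Neither is obviously "right" here, which is why the expected output matters.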
Hi Community, I'm trying to configure BMC Helix under Apps in SOAR for the REST API but am unable to connect; it keeps returning either a 404 or a 200 error. Can anyone let me know how I can configure the BMC app using a token instead of credentials, and configure non-SSO authentication?
Hello @livehybrid @PrewinThomas
Thanks a lot for your valuable replies. I have tried the same, but what's happening is that it fetches the value only the first time; when other values are clicked, those values don't get picked up. I'm providing my code for your reference - could you please take a look and give your guidance on how to solve this?

{
  "type": "splunk.table",
  "showProgressBar": false,
  "showLastUpdated": false,
  "dataSources": {
    "primary": "ds_GCK97kyD"
  },
  "options": {
    "backgroundColor": "> themes.defaultBackgroundColor",
    "tableFormat": {
      "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByTheme)"
    },
    "font": "monospace",
    "columnFormat": {
      "Severity": {
        "data": "> table | seriesByName(\"Severity\") | formatByType(SeverityColumnFormatEditorConfig)",
        "rowColors": "> table | seriesByName('Severity') | matchValue(SeverityRowColorsEditorConfig)"
      },
      "Sev": {
        "width": 38,
        "data": "> table | seriesByName(\"Sev\") | formatByType(SevColumnFormatEditorConfig)",
        "rowColors": "> table | seriesByName('Sev') | matchValue(SevRowColorsEditorConfig)"
      },
      "Role": { "width": 51 },
      "AlertParams": { "width": 223 },
      "EventID": { "width": 63 },
      "Server": { "width": 104 },
      "Team": { "width": 101 }
    }
  },
  "context": {
    "SeverityColumnFormatEditorConfig": {
      "number": { "thousandSeparated": false, "unitPosition": "after" }
    },
    "SeverityRowColorsEditorConfig": [
      { "match": 1, "value": "#D41F1F" },
      { "match": 2, "value": "#CBA700" },
      { "match": 3, "value": "#118832" }
    ],
    "SevColumnFormatEditorConfig": {
      "number": { "thousandSeparated": false, "unitPosition": "after" }
    },
    "SevRowColorsEditorConfig": [
      { "match": 2, "value": "#D41F1F" },
      { "match": 1, "value": "#CBA700" },
      { "match": "", "value": "#118832" }
    ]
  },
  "title": "",
  "eventHandlers": [
    {
      "options": {
        "tokens": [
          { "key": "row.event_id.value", "token": "eventid" }
        ]
      },
      "type": "drilldown.setToken"
    }
  ]
}

{
  "type": "splunk.markdown",
  "options": {
    "markdown": "selected eventid : $eventid$",
    "fontColor": "#ffffff",
    "fontSize": "custom",
    "customFontSize": 25
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

I am using this markdown just to check whether the interaction is working or not, but my actual aim is to pass the clicked event_id value from this table into the query below:

`citrix_alerts` | fields - Component,Alert_type,Country,level,provider,message,alert_time | search event_id=$eventid$
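One small thing worth trying in that target search (a sketch only, assuming the `citrix_alerts` macro and the event_id field behave as in the snippet above): apply the token filter before dropping fields, so the filter runs first and the column cleanup last:

`citrix_alerts`
| search event_id=$eventid$
| fields - Component, Alert_type, Country, level, provider, message, alert_time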
This should get you everything you need: index=_internal sourcetype=splunk_python error
This should get the failed sendmail items, but doesn't appear to get the ones dropped because the allowed email domains list doesn't include the domain. Still researching that use case.

index=_internal sourcetype=splunk_python ("Name or service not known while sending mail to" OR "Connection timed out while sending mail to")

Some | rex may be needed to make this more useful.
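If extraction is needed, a tentative sketch along these lines might pull out the destination address (the capture pattern is a guess at the message format, not verified):

index=_internal sourcetype=splunk_python ("Name or service not known while sending mail to" OR "Connection timed out while sending mail to")
| rex "sending mail to (?<recipient>\S+)"
| stats count by recipient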
I know this is an older thread, but I am also searching for a good way to get notifications for when an email fails to be sent. I did find you can see these in $SPLUNK_HOME/var/log/splunk/python.log. Specifically, my use case is around the allowed domain list not having the domain listed. If I find a good way to detect this within a standard or REST Splunk search, I will reply. Hope this helps some.
If you mean https://splunkbase.splunk.com/app/3681, it's not a Splunk-supported app, so contacting Splunk about it is pointless. The contact tab in Splunkbase lists splunk-integrations@proofpoint.com as the email for support. You can also try to raise a ticket with Proofpoint for the add-on, but I suppose that even if they don't reject it, it will get the lowest possible priority since it's not directly related to the service itself.
Hi @vis1519
Please could you check the server logs in $SPLUNK_HOME/var/log/splunk/splunkd.log - are there any specific log lines which report an error during startup? Or are there other errors displayed when you try to start the Splunk service?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am having the exact same issue with version 1.4.1 (and also with 1.4.0).
Page shows: /en-US/app/TA-Proofpoint-TAP/search
Page blank: /en-US/app/TA-Proofpoint-TAP/inputs
Page blank: /en-US/app/TA-Proofpoint-TAP/configuration
Has there been a solution to this problem? I have opened a Splunk support case, but they are not of any help, as usual. I've also sent an email to Proofpoint but have had no reply yet. Thanks for any fixes/thoughts/ideas!