All Posts


1. If the events are assigned a wrong timestamp, they _are_ searchable, but the default search range ends at "now", so those events do not fall into it. Try searching with "latest=+12h" to see whether the events are "properly" indexed into the future.
2. It seems like a timezone issue. What timezone is your source in? What timezone does your SC4S run in? What timezone do your Splunk indexers (or the HF, if you're sending through one) run in?
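One quick check for point 1 (the index and sourcetype names here are placeholders): extend the window into the future and compare event time with index time. A constant offset, e.g. 7 hours, usually points at a timezone misparse rather than a delivery delay.

```spl
index=your_index sourcetype=your_sourcetype earliest=-1h latest=+12h
| eval lag_hours = round((_time - _indextime) / 3600, 1)
| stats count by date_zone, lag_hours
```

Note that date_zone is only populated for events that went through datetime extraction; if it's empty, the lag_hours column alone still shows the offset pattern.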
There are two things here that caught my attention.
1. You're doing some operations on your data (which prevent Splunk from auto-optimizing your search) and then, way down the road, you add | search DeviceName=something. If you add this condition to the initial search, you will be processing just a small subset of your events, not the whole lot.
2. The use of dedup. Are you absolutely sure you want to use this command? It keeps just the first result with the given field(s).
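A sketch of point 1, using the index and field names from the search in question: the DeviceName condition moves into the initial search, so only matching events are processed by the rest of the pipeline (which stays as it was).

```spl
index=endpoint_defender source="AdvancedHunting-DeviceInfo"
    DeviceName="bie-n1690.emea.duerr.int"
    (DeviceType=Workstation OR DeviceType=Server)
| ...
```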
@SN1 You can sort by _time and dedup on DeviceName, or you can use stats last() as well. E.g.:

index=endpoint_defender source="AdvancedHunting-DeviceInfo" DeviceType=Workstation OR DeviceType=Server SensorHealthState IN ("active", "Inactive", "Misconfigured", "Impaired communications", "No sensor data") DeviceName="bie-n1690.emea.duerr.int"
| sort 0 - _time
| dedup DeviceName
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex(Region, 0), "4LetCode"=mvindex('4LetCode', 0)
| rename "3-Letter-Code" as CC
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving it Karma. Thanks!
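For completeness, a stats-based variant of the same idea (a sketch: latest() picks the value from the most recent event per DeviceName, and any other fields the final table needs would have to be carried through the stats as well):

```spl
index=endpoint_defender source="AdvancedHunting-DeviceInfo"
    (DeviceType=Workstation OR DeviceType=Server)
    DeviceName="bie-n1690.emea.duerr.int"
| stats latest(SensorHealthState) as SensorHealthState,
        latest(Timestamp) as Timestamp,
        latest(DeviceDynamicTags) as DeviceDynamicTags
        by DeviceName
```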
Hello, I have a search and I want only the latest result from it. The problem is that for one DeviceName there are multiple SensorHealthState values; it was Inactive earlier, but the latest event shows the device is active now. This search still shows Inactive. How can I get the latest result?

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| dedup DeviceName
| search DeviceType=Workstation OR DeviceType=Server
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex('Region',0), "4LetCode"=mvindex('4LetCode',0)
| rename "3-Letter-Code" as CC
| search DeviceName="bie-n1690.emea.duerr.int"
| search SensorHealthState="active" OR SensorHealthState="Inactive" OR SensorHealthState="Misconfigured" OR SensorHealthState="Impaired communications" OR SensorHealthState="No sensor data"
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState
We are using SC4S to collect local logs from FortiAnalyzer. We've noticed an error: the timestamp within the log file does not match the event time in Splunk. This delay is causing an issue: when logs are first sent from FortiAnalyzer, they are not immediately searchable in Splunk. Instead, they only become searchable 7 hours later. This problem appears to be isolated to the FortiAnalyzer local logs. All other log sources collected via SC4S are working correctly; even logs forwarded to FortiAnalyzer and then to Splunk have their log timestamps and event times matching perfectly. How can we resolve this issue?
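If the timestamps do turn out to be parsed in the wrong timezone, one common fix is a per-sourcetype TZ override in props.conf on the first full Splunk instance the data reaches (indexers, or HF if you use one). The stanza name below is an assumption for illustration; verify which sourcetype SC4S actually assigns to the FortiAnalyzer local logs before using it.

```
# props.conf -- sourcetype name is a placeholder, verify it first
[fortinet:fortianalyzer:log]
TZ = UTC
```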
In addition to what @PickleRick already points out, you need to clearly describe your use case, as opposed to throwing complex SPL searches at volunteers, if you want to receive concrete help.
1. What kind of data are in indices si_error and internal_statistics? (Illustrate the dataset.)
2. What exactly do you expect as a result of "append" or any other command? (Show expected results.)
3. What is the logic that connects your dataset to the expected results?
As @SanjayReddy notes, there is no reason to expect "append" to be "always giving me only one section of the results." You didn't even show sample results from each search to demonstrate that "append" or any other command gives you only "one section of the results."
I want to dig deeper into the logic. @SanjayReddy assumes that you want two fields (columns), Scada_count and dda_count, in the same rows. However, your first search, if it has output, is dominated by | stats count as Scada_count by Area. This means that if there is more than one value for Area, this search will have more than one row of Scada_count. (If there is ever only one value of Area, why bother grouping by Area?) On the other hand, the second search can only ever produce one row of dda_count. What exactly do you expect append to achieve?
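To make the behaviour concrete, an append along these lines (the searches are abbreviated with "..." since the originals weren't quoted in full) simply stacks the single dda_count row underneath the per-Area Scada_count rows; the two columns are never merged into one row.

```spl
index=si_error ...
| stats count as Scada_count by Area
| append
    [ search index=internal_statistics ...
      | stats count as dda_count ]
```

If the goal really is side-by-side columns, something like appendcols, or a shared key plus a final stats, would be needed instead; which one is right depends on the expected results that haven't been shown yet.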
Hi Community, I'm trying to configure BMC Helix under Apps in SOAR for the REST API but am unable to connect; it keeps returning either a 404 error or an unexpected 200 response. Can anyone let me know how I can configure the BMC app using a token instead of credentials, and configure non-SSO authentication?
Hello @livehybrid @PrewinThomas
Thanks a lot for your valuable replies. I have tried the same, but the token is only fetched the first time; when I click other rows, those values are not picked up. Providing my code for your reference; could you please take a look and provide your guidance on how to solve this?

{
  "type": "splunk.table",
  "showProgressBar": false,
  "showLastUpdated": false,
  "dataSources": { "primary": "ds_GCK97kyD" },
  "options": {
    "backgroundColor": "> themes.defaultBackgroundColor",
    "tableFormat": {
      "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByTheme)"
    },
    "font": "monospace",
    "columnFormat": {
      "Severity": {
        "data": "> table | seriesByName(\"Severity\") | formatByType(SeverityColumnFormatEditorConfig)",
        "rowColors": "> table | seriesByName('Severity') | matchValue(SeverityRowColorsEditorConfig)"
      },
      "Sev": {
        "width": 38,
        "data": "> table | seriesByName(\"Sev\") | formatByType(SevColumnFormatEditorConfig)",
        "rowColors": "> table | seriesByName('Sev') | matchValue(SevRowColorsEditorConfig)"
      },
      "Role": { "width": 51 },
      "AlertParams": { "width": 223 },
      "EventID": { "width": 63 },
      "Server": { "width": 104 },
      "Team": { "width": 101 }
    }
  },
  "context": {
    "SeverityColumnFormatEditorConfig": {
      "number": { "thousandSeparated": false, "unitPosition": "after" }
    },
    "SeverityRowColorsEditorConfig": [
      { "match": 1, "value": "#D41F1F" },
      { "match": 2, "value": "#CBA700" },
      { "match": 3, "value": "#118832" }
    ],
    "SevColumnFormatEditorConfig": {
      "number": { "thousandSeparated": false, "unitPosition": "after" }
    },
    "SevRowColorsEditorConfig": [
      { "match": 2, "value": "#D41F1F" },
      { "match": 1, "value": "#CBA700" },
      { "match": "", "value": "#118832" }
    ]
  },
  "title": "",
  "eventHandlers": [
    {
      "options": {
        "tokens": [
          { "key": "row.event_id.value", "token": "eventid" }
        ]
      },
      "type": "drilldown.setToken"
    }
  ]
}

{
  "type": "splunk.markdown",
  "options": {
    "markdown": "selected eventid : $eventid$",
    "fontColor": "#ffffff",
    "fontSize": "custom",
    "customFontSize": 25
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

I am using this markdown just to check whether the interaction is working or not, but my actual aim is to pass the clicked eventid value from this table into the query below:

`citrix_alerts`
| fields - Component,Alert_type,Country,level,provider,message,alert_time
| search event_id=$eventid$
This should get you everything you need: index=_internal sourcetype=splunk_python error
This should get the failed sendmail items, but it doesn't appear to get the ones dropped because the allowed email domains list doesn't include the domain. Still researching that use case.

index=_internal sourcetype=splunk_python ("Name or service not known while sending mail to" OR "Connection timed out while sending mail to")

Some | rex may be needed to make this more useful.
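A possible rex for the search above (the capture group name, and the assumption that the recipient address directly follows "sending mail to", are mine; the exact message format may vary between Splunk versions):

```spl
index=_internal sourcetype=splunk_python
    ("Name or service not known while sending mail to" OR "Connection timed out while sending mail to")
| rex "sending mail to (?<recipient>\S+)"
| stats count by recipient
```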
I know this is an older thread, but I am searching for a good way to get notifications for when an email fails to be sent as well. I did find you can see these in $SPLUNK_HOME/var/log/splunk/python.log. Specifically, my use case is around the allowed domain list not having the domain listed. If I find a good way to detect this within a standard or REST Splunk search, I will reply. Hope this helps some.
If you mean https://splunkbase.splunk.com/app/3681, it's not a Splunk-supported app, so contacting Splunk about it is pointless. The contact tab in Splunkbase lists splunk-integrations@proofpoint.com as the email for support. You can also try to raise a ticket with Proofpoint for the add-on, but I suppose that even if they don't reject it, it will get the lowest possible priority since it's not directly related to the service itself.
Hi @vis1519
Please could you check the server logs in $SPLUNK_HOME/var/log/splunk/splunkd.log - are there any specific log lines that report an error during startup? Or are there other errors displayed when you try to start the Splunk service?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am having the exact same issue with version 1.4.1 (also with 1.4.0).
Page shows: /en-US/app/TA-Proofpoint-TAP/search
Page blank: /en-US/app/TA-Proofpoint-TAP/inputs
Page blank: /en-US/app/TA-Proofpoint-TAP/configuration
Has there been a solution to the problem? I have entered a Splunk support case, but they have not been of any help, as usual. I've also sent an email to Proofpoint but have received no reply yet. Thanks for any fix/thoughts/ideas!
Ok, I have set maxHotSpanSecs to 86400. I am seeing lots of warm buckets now. With frozenTimePeriodInSecs set to 31536000, I think I am seeing the results I was hoping for. Also, with the search below, I can see data being rolled to frozen as well.

index=_internal sourcetype=splunkd log_level=INFO component=BucketMover "freeze succeeded"
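For reference, both settings described above live in indexes.conf under the index stanza (the index name here is a placeholder):

```
# indexes.conf -- hot buckets span at most one day,
# and buckets roll to frozen after one year
[your_index]
maxHotSpanSecs = 86400
frozenTimePeriodInSecs = 31536000
```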
Ok, just paste your dashboard source. It will be easier this way.
Yes I do - I get the input time range and use the earliest and latest functions 
That's an interesting case, because generally the UF should have nothing to do with how ausearch operates. It just spawns a child process and runs the script; whether ausearch succeeds or not is really its own responsibility. What I would try, in case there is a difference: dump the environment variables and compare the environment from when your script is spawned as an input with the one you get when the script is run by hand.
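One way to do that comparison (the paths and the ausearch invocation are examples, not taken from this thread): wrap the real command in a script that dumps its environment first, then diff against a dump taken from your interactive shell.

```shell
#!/bin/sh
# Dump the environment this script was started with, sorted for easy diffing.
env | sort > /tmp/uf_input_env.txt

# The original command would go here, e.g.:
# exec /usr/sbin/ausearch -ts recent

# From an interactive shell, run:
#   env | sort > /tmp/shell_env.txt
#   diff /tmp/uf_input_env.txt /tmp/shell_env.txt
```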
Obvious things first - you don't have hardcoded earliest/latest parameters in your search, do you?