All Topics



I'm new to Splunk and was wondering how to use a lookup table. What I'm trying to get is something like: index=_internal* log_level=WARN OR log_level=ERR host=XPxx9* OR host=GPxx7* OR host=fsr*, but instead of listing about 30 host names with OR arguments, what's the ideal way to do it? Can someone provide examples?
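One common pattern, as a sketch: keep the host patterns in a CSV lookup and let a subsearch expand them into the OR list. The file name hosts.csv and its host column are placeholders here, not an existing lookup.

```spl
index=_internal log_level=WARN OR log_level=ERR
    [| inputlookup hosts.csv | fields host ]
```

The subsearch expands to ( host=XPxx9* OR host=GPxx7* OR ... ), so wildcard values stored in the CSV behave the same as typing them in the search bar, and the list is maintained in the lookup file instead of the query.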
I am trying to search the Network Traffic data model, specifically blocked traffic, as follows: | tstats summariesonly=true allow_old_summaries=true count from datamodel="Network_Traffic"."All_Traffic"."Traffic_By_Action"."Blocked_Traffic" and I get the following error: Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel. Am I not chaining the child objects correctly in the search? Thanks.
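For reference, a form that avoids the "Invalid or unaccelerable root object" error is to name only the root dataset after datamodel= and push the child dataset into a nodename filter. A sketch, not tested against this particular model:

```spl
| tstats summariesonly=true allow_old_summaries=true count
    from datamodel=Network_Traffic.All_Traffic
    where nodename=All_Traffic.Traffic_By_Action.Blocked_Traffic
```

Filtering on All_Traffic.action=blocked in the where clause is an alternative way to reach the same events.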
Trying to route Windows application logs to the correct index based on event data. The scenario: I have XmlWinEventLog events coming in normally via the TA for Windows, and they go to the winos index. I would like to redirect some specific events to a different index.

props.conf
[source::XmlWinEventLog:Application]
TRANSFORMS-citrix_xa_index = citrix_xenapp_index

transforms.conf
[citrix_xenapp_index]
REGEX = Citrix
DEST_KEY = _MetaData:Index
FORMAT = citrixxa

This should route all events containing "Citrix" to the citrixxa index. My question is about the order of operations: do index redirects have to be performed before any other transforms, say the sourcetype renaming that the Windows TA does?
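On ordering, two Splunk behaviors are relevant: transforms named in a single TRANSFORMS-<class> setting run left to right, and separate classes on the same stanza run in lexicographic order of the class name. A sketch (other_transform is a placeholder for any additional transform that needs sequencing):

```ini
# props.conf
[source::XmlWinEventLog:Application]
# listed transforms execute in this order, left to right
TRANSFORMS-routing = citrix_xenapp_index, other_transform
```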
Hi all, I need help with a query. My query looks like this:

index=* ... | eventstats count as total_count | where log!=error | eventstats count as success_count ...

The issue is that the where command doesn't filter properly, and I get the same total count and success count even though there are log=error events (it doesn't remove the log=error events). I tried the search command and a match statement, but everything still gives the same total and success counts. I find this weird. Is there any way I can filter?
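One likely cause, worth checking: in where, an unquoted error is treated as a field name, so log!=error compares the log field against a (nonexistent) error field and filters nothing. Quoting the string literal makes the comparison work, as in this sketch:

```spl
index=* ...
| eventstats count as total_count
| where log!="error"
| eventstats count as success_count
```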
Splunk Version: 8.0.2007.1
Instance: Search Head
App: AIX or other apps

Problem: After updating an alert's saved search, the saved search reverts once the alert's cron schedule or other settings are updated.

Nitty gritty: This only occurs when the saved search is modified and saved in a different browser tab, and the alert is then updated in the original tab where the alert was opened. Confused? Don't worry, there is an example below.

Example: User modifies the saved search and cron schedule of an alert in two different browser tabs:
1. User opens alert-1 in the app in browser tab 1.
2. User opens the search in a second tab (right-click -> open in new tab).
3. User updates the search, runs it, and saves it under the alert-1 name.
4. User closes the search tab (tab 2) or leaves both tabs open.
5. User goes back to tab 1 to update the cron schedule of the alert (or other alert configuration).
6. User saves the alert settings.
7. User verifies that the alert's saved search is correct by opening it in a second tab (right-click -> open in new tab).
8. User finds that the search string has reverted to the original search.
You may have noticed that a dropdown input in a Simple XML dashboard with 100K or more results becomes unresponsive: it takes longer to open and is difficult to scroll through the values. The ideal fix would be to use a Simple XML JS extension and the SplunkJS stack to load partial results in an AJAX-like fashion. This question documents how to solve this using a Simple XML JS and CSS extension.
Hello good folks,

SELECT eventTimestamp FROM transactions WHERE application = "MyPROD" AND eventTimestamp BETWEEN '2020-09-20T10:28:30.186Z' AND '2020-09-20T10:28:40.186Z'

I get the data successfully, but the time is exactly 6 hours behind. What time zone or format is 2020-09-20T10:28:40.186Z? The letters 'T' and 'Z' indicate the Zulu (UTC) time format, but I can't understand the results: they are exactly 6 hours behind what I expect.
I'm attempting to use the address_in_network function to compare the results of a Splunk query against a custom list, and use matches to remove items from that query's action_results.data, so that the remaining query results are easily accessible in following blocks. I've got the logic of accessing action_results.data, the custom list, and address_in_network figured out, but I'm having a hard time working out how to either remove items directly from action_results.data, or return my list of IP addresses in a type that a filter block can use, so that later blocks can access filtered-data directly. My output variable, Build_IP_Whitelist__tofilter, is assigned a type of None in the code framework that I can't edit. I went ahead and cast it to a list and used append to build out that list, which returns without error from my custom function. The problem arises when I try to use that list for comparison in a following filter block:

Wed Sep 23 2020 11:59:11 GMT-0400 (Eastern Daylight Time): phantom.condition(): condition 1 to evaluate: LHS: Build_IP_Whitelist:custom_function:tofilter OPERATOR: != RHS: Execute_External_IP_Query:action_result.data.*.dest_ip
Wed Sep 23 2020 11:59:11 GMT-0400 (Eastern Daylight Time): phantom.condition(): ERROR: LHS of this condition statement is a list data type while RHS is not. For this expression and data types, '!=' is not a supported data operator. Use 'in' or 'not in' operators

There's got to be a better data structure for my list of whitelisted IPs, but I'm having a hard time finding it in the documentation. Any pointers on that specifically, or a better approach to the general question?
Hello,

We are using the Splunk app for Check Point to ingest Check Point logs via a heavy forwarder. The host is always reported as the management server, and we want to override that with the IPs of the actual firewalls. I created the following files in the local folders on the heavy forwarder:

props.conf
[cp_log]
TRANSFORMS-host_override = host_override

transforms.conf
[host_override]
REGEX = origin=([^|]+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

I restarted Splunk, but there's no change: the host value remains the same. btool shows that the local props and transforms files are applied, and I can even see the field transformation in the heavy forwarder UI. I've also checked that the regex works fine and extracts the correct values. Any ideas? Thank you!
Hi,

I have a scenario where our organisation is supposed to send logs from servers only to the client's indexers. We have decided to use universal forwarders (UFs) and a deployment server. We need to know the known downtimes and performance issues for UFs and deployment servers, for example whether there is any downtime during a UF upgrade or other maintenance. Are there also any limits on what a UF can forward, such as logs from certain commonly used applications that cannot be forwarded because they are in some xyz format? We need this information for certain agreements with the customer. Can anyone list a few points here?
I installed the Splunk Add-on for Microsoft Cloud Services in Splunk Cloud. I am a Splunk Cloud admin. After installation, the app keeps showing a spinning wheel as if loading but never finishes, so I cannot configure it and add inputs. What could the issue be? Can I uninstall this app and reinstall it, and if so, how?
Can you tell me the search query to find what has been downloaded? Also, I see a few discrepancies in the code.

Path for code: /apps/splunk/share/splunk/search_mrsparkle/exposed/js/views/deploymentserver/ClientsSummary.js

For phone homes in 24 hours, the 24-hour window is calculated from Date.now(), but the code for total downloads directly uses a one-hour window (3600 seconds) and does not use Date.now() at all (nor does it say which 24 hours). Is the data calculated correctly?

Phoned home now:

var numSecondsIn24Hours = 24 * 60 * 60;
if (!Date.now) {
    Date.now = function() { return new Date().getTime(); };
}
var epoch_time = parseInt(Date.now() / 1000, 10) - numSecondsIn24Hours;
var phonedHomeClients = new ClientsCollection();
var that = this;
phonedHomeClients.fetch({
    data: { minLatestPhonehomeTime: epoch_time, count: 1 },
    success: function(clients, response) {
        var numClients = 0;
        if (clients.length > 0) {
            numClients = clients.first().paging.get('total');
        }
        that.$('#numPhonedHomeClients').html(numClients);
        that.$('#phonedHomeLabel').html(i18n.ungettext("Client", "Clients", numClients));
    }
});

Total downloads:

// Get number of downloads in the last hour
var recentDownloads = new RecentDownloadsCollection();
var numSecondsInOneHour = 3600;
recentDownloads.fetch({
    data: { count: 1, maxAgeSecs: numSecondsInOneHour },
    success: function(recentDownloads, response) {
        var numDownloads = "N/A";
        if (recentDownloads.length > 0) {
            numDownloads = recentDownloads.first().entry.content.get('count');
            that.$('#downloadsLabel').html(i18n.ungettext("Total download", "Total downloads", numDownloads));
        }
        that.$('#numDownloadsInLastHour').html(numDownloads);
    }
});
Hello people, we are trying to set up a cluster using two different interfaces, where interface A handles web access and interface B handles all other communications. How can I do this? I have been trying to use SPLUNK_BINDIP with mgmtHostPort, but it doesn't work at all.
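A minimal sketch of one approach, with 10.0.1.10 and 10.0.2.10 standing in for the addresses of interfaces A and B: bind all Splunk ports to interface B via SPLUNK_BINDIP, then override only Splunk Web to listen on interface A.

```ini
# splunk-launch.conf
SPLUNK_BINDIP=10.0.2.10

# web.conf
[settings]
server.socket_host = 10.0.1.10
```

A restart is needed after the change; whether this covers every cluster port in a given topology should be verified on a test node first.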
Hi, I have a table like this: (see screenshot). I want to group by day and tried the command | bucket span=1d field_date, but without success. How can I do it?
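bucket (an alias of bin) needs a numeric or time field, so if field_date is a string it has to be parsed first. A sketch, assuming a %Y-%m-%d date format (adjust the strptime pattern to the actual field):

```spl
| eval _time=strptime(field_date, "%Y-%m-%d")
| bin _time span=1d
| stats count by _time
```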
Hi, I am trying to find unique IDs that have 3 letters followed by 6 numbers, for example bhg111111. My issue is that I don't want to count duplicates, and I would like to do this within the regex itself (not dedup by field). Is this possible? Thanks.
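A regex alone cannot suppress duplicates, but pairing the extraction with a distinct count avoids a dedup step. A sketch, assuming the IDs appear in _raw:

```spl
| rex max_match=0 "(?<id>\b[a-zA-Z]{3}\d{6}\b)"
| stats dc(id) AS unique_ids
```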
Hello everyone,

I would like to monitor an application log file using the AppDynamics log extension and publish the results below to the application team. I want to check for the lines where a POA-*.pdf file starts and where it ends. If the job ends successfully, I want to print "POA-*.pdf file is success"; if the job fails (it cannot find the POA-*.pdf ends line), print "POA-*.pdf file is failed".

Sample logs:

***************Processing of file POA-002308981.11111.pdf starts **************
11:00:13,626 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
11:00:13,627 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
11:00:13,627 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource
***************Processing of file POA-0022236.3479237.pdf ends **************

Is this monitoring something we can achieve using the AppDynamics log monitoring extension?

Thanks,
Selvan
I need a search that shows me the count of the products weekly:

products     count from week 1     count from week 2     date
product1     10                    6                     2020-09-07 to 2020-09-13
product2     20                    16                    2020-09-14 to 2020-09-21
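A sketch of one way to get weekly counts per product, assuming each event carries a product field:

```spl
| bin _time span=1w
| stats count by product, _time
| eval week=strftime(_time, "%Y-%m-%d") . " to " . strftime(_time + 6*86400, "%Y-%m-%d")
| chart sum(count) over product by week
```

The chart step pivots the weeks into columns, one per week, matching the table layout above.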
Hi all, I have a set of logs from one of my ticketing systems. I want to extract the host name of the device that caused the issue from the description.

Critical_DISK: ABC - /var is at 99 % .
Critical_DISK: DEF - /var/log is at 85 % .
Critical_DISK: GHI - /var/log is at 90 % .
Critical_DISK: JKL-MNO-PQR - /var/log is at 73 % .
Critical_DISK: JKL-MNO-PQR - /var/log is at 85 % .
Critical_DISK: JKL-MNO-PQR - /var/log is at 87 % .
Critical_DISK: hkgtelpac-sg1.hkg2.oss - /var/log is at 85 % .
[VMware vCenter - Alarm alarm.HostConnectivityAlarm] Host abcdefgh.in.reach.com in TGCN_PAD is not responding
[zenoss] AB-CD-EFG 10.111.122.33 is DOWN!
[zenoss] QWERTYU disk space threshold: 98.1% used (8.1GB free)
[zenoss] asedfrt 10.20.30.40 is DOWN!

These are some sample descriptions. I used this rex statement: | rex field=_raw "^[^\\]\\n]*\\]\\s+(?P<HostName>\\w+)" but it is not properly extracting the hostname. Can someone please help me with this?
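A sketch that handles both sample shapes, using the same HostName field as the original attempt: the first rex covers the Critical_DISK lines, the second covers the bracketed [VMware ...] and [zenoss] lines (the optional Host keyword is skipped when present). Each rex only fills HostName where its own pattern matches, so the two can be chained:

```spl
| rex field=_raw "^Critical_DISK:\s+(?<HostName>\S+)\s+-\s"
| rex field=_raw "^\[[^\]]+\]\s+(?:Host\s+)?(?<HostName>\S+)"
```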
We upgraded McAfee ePO from 5.9 to 5.10, after which the Splunk integration broke, so I checked some articles and prepared the query below. When I use the rising column in the Splunk DB query, we get the following error:

Error: "Msg 8114, Level 16, State 5, Line 1 Error converting data type varchar to bigint."

Please help me resolve this error.

DB query:

SELECT
    [EPOEvents].[ReceivedUTC] as [timestamp],
    [EPOEvents].[AutoID],
    [EPOEvents].[ThreatName] as [signature],
    [EPOEvents].[ThreatType] as [threat_type],
    [EPOEvents].[ThreatEventID] as [signature_id],
    [EPOEvents].[ThreatCategory] as [category],
    [EPOEvents].[ThreatSeverity] as [severity_id],
    [EPOEvents].[DetectedUTC] as [detected_timestamp],
    [EPOEvents].[TargetFileName] as [file_name],
    [EPOEvents].[AnalyzerDetectionMethod] as [detection_method],
    [EPOEvents].[ThreatActionTaken] as [vendor_action],
    CAST([EPOEvents].[ThreatHandled] as int) as [threat_handled],
    [EPOEvents].[TargetUserName] as [logon_user],
    [EPOComputerPropertiesMT].[UserName] as [user],
    [EPOComputerPropertiesMT].[DomainName] as [dest_nt_domain],
    [EPOEvents].[TargetHostName] as [dest_dns],
    [EPOEvents].[TargetHostName] as [dest_nt_host],
    [EPOComputerPropertiesMT].[IPHostName] as [fqdn],
    [dest_ip] = ( convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),1,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),2,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),3,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),4,1))) ),
    [EPOComputerPropertiesMT].[SubnetMask] as [dest_netmask],
    [EPOComputerPropertiesMT].[NetAddress] as [dest_mac],
    [EPOComputerPropertiesMT].[OSType] as [os],
    [EPOComputerPropertiesMT].[OSVersion] as [os_version],
    [EPOComputerPropertiesMT].[OSBuildNum] as [os_build],
    [EPOComputerPropertiesMT].[TimeZone] as [timezone],
    [EPOEvents].[SourceHostName] as [src_dns],
    [src_ip] = ( convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),1,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),2,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),3,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),4,1))) ),
    [EPOEvents].[SourceMAC] as [src_mac],
    [EPOEvents].[SourceProcessName] as [process],
    [EPOEvents].[SourceURL] as [url],
    [EPOEvents].[SourceUserName] as [source_logon_user],
    [EPOEvents].[AnalyzerName] as [product],
    [EPOEvents].[AnalyzerVersion] as [product_version],
    [EPOEvents].[AnalyzerEngineVersion] as [engine_version],
    [EPOEvents].[AnalyzerDATVersion] as [dat_version]
FROM "ePO_server"."dbo"."EPOEvents"
LEFT JOIN "ePO_server"."dbo"."EPOLeafNodeMT" ON [EPOEvents].[AgentGUID] = [EPOLeafNodeMT].[AgentGUID]
LEFT JOIN "ePO_server"."dbo"."EPOProdPropsView_VIRUSCAN" as [EPOProdPropsView_VIRUSCAN] ON [EPOLeafNodeMT].[AutoID] = [EPOProdPropsView_VIRUSCAN].[LeafNodeID]
LEFT OUTER JOIN "ePO_server"."dbo"."EPOProdPropsView_THREATPREVENTION" ON [EPOLeafNodeMT].[AutoID] = [EPOProdPropsView_THREATPREVENTION].[LeafNodeID]
LEFT JOIN "ePO_server"."dbo"."EPOComputerPropertiesMT" ON [EPOLeafNodeMT].[AutoID] = [EPOComputerPropertiesMT].[ParentID]
LEFT JOIN "ePO_server"."dbo"."EPOEventFilterDesc" ON [EPOEvents].[ThreatEventID] = [EPOEventFilterDesc].[EventId] AND ([EPOEventFilterDesc].[Language]='0409')
WHERE [EPOEvents].[AutoID] > ?
ORDER BY [EPOEvents].[AutoID] ASC;
Hi team, I am trying to onboard report data to Splunk that is available under AirWatch Workspace ONE UEM > Monitor > Reports & Analytics > List View > All Reports > "Application Details by Device". Please suggest the best way to onboard this report data to Splunk.