All Topics


Below are the log events that I have. One has a max_amount value and one has an empty value. I want to find the events that have transaction_amount > max_amount.

[Date=2022-07-29, max_amount=100, transaction_amount=120]
[Date=2022-07-29, max_amount=100, transaction_amount=90]
[Date=2022-07-29, transaction_amount=120]

I tried transaction_amount>max_amount but it is not working. I guess it is due to some records having no max_amount value.

index=<table_name> transaction_amount>max_amount
| bucket Date span=day
| fillnull value=null max_amount
| stats count by Date, max_amount, transaction_amount

How do I get record #1?
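In search terms, transaction_amount>max_amount compares the field against the literal string "max_amount" rather than against the other field, so a field-to-field comparison has to happen in a where clause. A minimal sketch (the index name is a placeholder):

index=<table_name>
| where isnotnull(max_amount) AND transaction_amount > max_amount

Events with no max_amount evaluate to null in the comparison and drop out anyway, so the isnotnull() guard is optional but makes the intent explicit.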
Hello, We have a few types of logs generated with different time zones. Is there any way Splunk can normalize the time zones associated with the log entries to a single time zone (EST), so we can map all logs to one time zone?

DS Logs:        2021-07-28 16:57:00,526 GMT
Security Logs:  2021-07-28 16:15:49,430 EST
Audit Logs:     Wed 2021 May 28, 16:58:11:430

Any recommendations would be highly appreciated. Thank you!
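Splunk stores _time internally in UTC and renders it in each viewer's configured time zone, so the usual approach is to make sure each sourcetype is parsed with the correct source zone via TZ in props.conf and let the display zone do the rest. A sketch, assuming one sourcetype per log type (stanza names are placeholders):

# props.conf on the indexers / heavy forwarder
[ds_logs]
TZ = GMT

[audit_logs]
# these events carry no zone indicator, so declare it
TZ = US/Eastern

Users who set their account time zone to Eastern will then see all three log types aligned.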
Hi Team, Can someone clarify for me how exactly the licensing calculation works in AppDynamics for monitoring Application, Database, Server, Network, Log, RUM, Synthetics, and Microservices? I have gone through the documentation but didn't get the complete details. Thanks, Schandup
I have this table, but I want to make a timechart with span=5m that has 2 columns like in the pics above.
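Since the field names aren't shown, this is only a guess at the shape: a split-by field with two values yields two columns per 5-minute bucket (everything below is a placeholder):

... | timechart span=5m count BY status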
We are trying to generate API keys so that Terraform can create dashboards. Does anyone have an idea on how to get the keys, or an example of using them? Thank you.
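If this is Splunk Enterprise, the closest equivalent of an API key is an authentication token, created under Settings > Tokens or via REST. A sketch, assuming token authentication is enabled (the host, credentials and names are examples):

curl -k -u admin:changeme -X POST https://splunk.example.com:8089/services/authorization/tokens \
    -d name=terraform_svc -d audience=terraform -d expires_on=+30d

The returned token can then be supplied to the Terraform provider in place of a username/password, assuming the provider you are using supports token auth (check its docs).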
Here is the sample data set:

ENTITY_NAME  REPLICATION_OF     VALUE
server1      BackupA            59
server2      BackupB            28
server3      backup_noenc_h1    54
server3      backup_utility_h1  96
server4      backup_noenc_h2    40
server4      backup_utility_h2  700

I want to be able to use the number display visualization to show entity_name, replication_of, and the latest value for each record. I've tried these:

| stats latest(VALUE) by REPLICATION_OF ENTITY_NAME
| chart latest(VALUE) by REPLICATION_OF ENTITY_NAME
| chart latest(VALUE) over REPLICATION_OF by ENTITY_NAME

Ultimately I want something that looks like this, but I'm not sure if you can display three data series in a number display. If this isn't possible, what would be the best way to visualize a data set like this?
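The single-value visualization renders one series, so one workaround is to collapse the two identifying fields into a single label and let trellis layout draw one number per label. A sketch, assuming the field names above:

| eval series=ENTITY_NAME." / ".REPLICATION_OF
| stats latest(VALUE) AS latest_value BY series

With trellis enabled on a single-value panel and split by series, each server/replication pair gets its own number tile.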
I've got some embedded XML in a syslog message. I have no access to get under the bonnet in an admin sense. I need to "grok" the message, ideally in stages:
1 - extract the XML
2 - parse the XML, split it up with eval or something
I have seen a bunch of stuff around props.conf, but I guess I would need to go to one of the "collector" nodes so it parses at source?
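props.conf changes do need access to the parsing tier, but this can also be done entirely at search time with no admin rights. A sketch, assuming the payload is a single well-formed XML element somewhere in _raw (the capture-field name is a placeholder):

... | rex field=_raw "(?<xml_payload><\w+.*</\w+>)"
    | spath input=xml_payload

spath turns the XML elements into fields, which eval can then work on as usual.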
I have metrics that are basically:

_time host1 monitor_count=2
_time host1 monitor_count=1

This is over different hosts and dynamic monitor_count values. What I want to do is write a query that counts the number of times monitor_count decreased over a given time range. So if host1 throttles back and forth between 2 and 1, how many times did that happen? I'm trying many variants of streamstats with window=2 earliest(monitor_count) as prev_count by host, but that doesn't seem to be working. When it drops from 2 to 1, a 1 is recorded for both previous and current for that time range.
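A window that includes the current event is one reason previous and current come out identical; current=f excludes it. A sketch, assuming you first force ascending time order (field names are those from the question):

... | sort 0 host _time
    | streamstats current=f window=1 latest(monitor_count) AS prev_count BY host
    | eval dropped=if(monitor_count < prev_count, 1, 0)
    | stats sum(dropped) AS drop_count BY host

The sort matters because search results normally arrive newest first, which silently flips what "previous" means inside streamstats.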
Hi All, Our client has sold off part of its business to another company. Here I am using "CL" for our client and "ZX" for the new company that bought it.

"CL" is worried because, as part of the migration, "ZX" will use Quest ODM to migrate M365 data (Email, OneDrive, SharePoint and Teams) from the "CL" tenant to the "ZX" tenant. A cloud-only service account, CL-ZX-QuestODM, will be created to support the ODM migration. Permissions for this service account will be limited to the specific mailboxes, OneDrive, SharePoint and Teams sites that are associated with "ZX". The expectation is that "ZX" will only be copying data associated with "ZX" that has been approved by "CL". This account should not be used to upload any data to "CL".

Does anyone have recommendations for use cases or controls related to this service account? For example, if it were copying "CL" data, uploading any data, or accessing other M365 services (i.e. security portals, etc.), could that trigger an alert?

Thanks in advance. Your answer will be very helpful for me.
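One building block would be alerting on any activity by the account outside its approved scope in the M365 unified audit log. A rough sketch, assuming the Splunk Add-on for Microsoft Office 365 is collecting management activity (the index, lookup and field usage are placeholders to adapt):

index=o365 sourcetype="o365:management:activity" UserId="CL-ZX-QuestODM"
    NOT [ | inputlookup zx_approved_sites.csv | fields SiteUrl ]
| stats count BY Operation, Workload, SiteUrl

Variants of the same search keyed on upload-type Operations (e.g. FileUploaded) or on Workloads the account should never touch would cover the upload and portal-access concerns.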
Is it necessary to put a shebang on a custom Python script that will be executed by Splunk? The reason I ask is that the shebang is #!/usr/local/bin/python, but we know that Splunk uses the one at $SPLUNK_HOME/bin/python3. Thanks in advance.
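For scripts that Splunk launches itself (scripted inputs, custom search commands, alert actions), Splunk invokes its bundled interpreter directly, so the shebang is ignored; it only matters if the script is also run from a shell. To test with the same interpreter Splunk uses (the script name is an example):

$SPLUNK_HOME/bin/splunk cmd python3 my_script.py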
So here at work we have been using Sumo for a couple of years now, but we are moving to Splunk. I have been looking for ways of moving the log/event data. Now, I know I can export a search from Sumo into a CSV and then import it. However, I'm unable to see all the indexes to import the data to on the Splunk side. There's also the issue of the host change. Maybe Splunk is just unable to maintain the original host values from the imported data, but if that's true I'd need to validate it for the boss.

So anyway, I'm asking here for advice on the proper/accepted/best way to move historical data from Sumo into Splunk. Also, what are the stipulations on which indexes show up as destinations? And lastly, why can't Splunk respect the host values of the imported records?

Thanks!!
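Splunk can keep the original host if it is told where to find it at parse time; by default the host of the machine doing the upload/forwarding is stamped on. A sketch using an index-time transform, assuming the exported CSV carries the original host in its second column and is ingested as unstructured lines rather than with INDEXED_EXTRACTIONS (structured-parsed files skip index-time transforms); the stanza names, sourcetype and column position are all assumptions:

# props.conf
[sumo:csv]
TRANSFORMS-sethost = sumo_set_host

# transforms.conf
[sumo_set_host]
REGEX = ^[^,]*,([^,]+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

On the index list: the upload UI only offers indexes your role is allowed to write to, so missing destinations are usually a role/permissions question.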
Basically, my query should search an index for an IP in the last 4 hours and return 1 event. Then it should left join on IP to a second index and search for results over the last 7 days. The IP I am searching for exists in both indexes. Why are no results being returned?

earliest=-4h latest=now() index=data1 Source_Network_Address=10.1.1.1
| head 1
| rename Source_Network Address as IP
| join type=left IP max=5
    [ search earliest=-7d latest=now() index=data2 | fields IP, DNS ]
| table index, _time, IP, DNS
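One thing stands out in the quoted query: the rename refers to Source_Network Address (with a space) while the extracted field is Source_Network_Address, so IP is never created and the join has nothing to match on. A corrected sketch, assuming those field names:

earliest=-4h latest=now index=data1 Source_Network_Address=10.1.1.1
| head 1
| rename Source_Network_Address AS IP
| join type=left IP max=5
    [ search earliest=-7d latest=now index=data2 | fields IP, DNS ]
| table index, _time, IP, DNS

(latest=now is the time-modifier form; now() is the eval function.)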
This is my example log file:

-- Daily Prod Started 7/28/2022 12:36:05 PM 0.762 sec
-- BegMo='06/01/2022' 7/28/2022 12:36:05 PM 0.049 sec
-- BegDate='06/01/2022' 7/28/2022 12:36:05 PM 0 sec
-- EndDate='07/28/2022' 7/28/2022 12:36:05 PM 0 sec
-- EndMidNight='07/29/2022' 7/28/2022 12:36:05 PM 0 sec
-- Data Collection Start=7/28/2022 12:36:05 PM 7/28/2022 12:36:05 PM 0 sec

How do I pick up the timestamp on lines 2-5, where there is a date with quotes, and on lines 1 and 6, where there is not?
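One approach is a TIME_PREFIX regex that skips past the quoted parameter dates and lands just before the real event timestamp, paired with an explicit TIME_FORMAT. A sketch (the sourcetype name is a placeholder, and the regex assumes exactly the line shapes shown above):

# props.conf
[daily_prod]
# land just before the event timestamp: after a closing quote,
# after "Started ", or after "Start="
TIME_PREFIX = (?:'\s+|Started\s+|Start=)
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 25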
I am trying to create logic to choose a value from multiple fields based on a priority I can define. I have 3 fields which may have values in them, and I want to create a 4th field to represent the best choice of the 3. I always trust field3 more than field2, and field2 more than field1. I want the logic to be:
- if field3 has a value, always use it
- if field3 has no value, use field2's value
- if field3 and field2 have no values, use field1's value
- if field3, 2 and 1 all have no values, leave it blank (or "unknown", etc.)

These are 3 examples of what this may look like and what I want field4 to be, based on the presence of values in the other fields.

Example 1
field1=<value1> field2=<value2> field3=<value3> field4=<value3>

Example 2
field1=<value1> field2=<value2> field3= field4=<value2>

Example 3
field1=<value1> field2= field3= field4=<value1>

I feel like this is probably a pretty simple eval command, but I can't seem to find an example. Thank you in advance!
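This is exactly what coalesce() does: it returns its first non-null argument. A minimal sketch:

... | eval field4=coalesce(field3, field2, field1, "unknown")

One caveat: coalesce() skips null values, not empty strings, so if "no value" shows up as field3="" rather than a missing field, null it out first, e.g. | eval field3=if(field3="", null(), field3).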
Hi All, I already have a search that gives me a result. But what I want is to get those results only if another event is NOT present for the user. So, for example, the below gives me results:

EventID=4625 earliest=-4h@h latest=-3h@h
| table User IPAddress EventID Message

The desire is to only show results if there was no 4724 for a specific period. Would I do it something like this?

EventID=4625 earliest=-4h@h latest=-3h@h
| table User IPAddress EventID Message
| append [ search NOT EventID=4724 earliest=-7d@d latest=now ]
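append only tacks extra rows on; it never filters the first result set. A NOT subsearch that excludes users who had a 4724 is the usual fit here. A sketch, assuming User is the join key:

EventID=4625 earliest=-4h@h latest=-3h@h
    NOT [ search EventID=4724 earliest=-7d@d latest=now | stats count BY User | fields User ]
| table User IPAddress EventID Message

The subsearch returns (User="a") OR (User="b") ..., and the NOT drops any 4625 events for those users.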
I have been asked to check with Splunk Support on whether we can run two different configurations of the Splunk Add-on for Microsoft Cloud Services. Can we have one connect to Azure Commercial while the other connects to Azure Government event hubs? Or is this a case in which we would need two separate Splunk servers to support that? What else could we do? E.g., could we set it up on the heavy forwarder in the FTI subscription for Government for server 1 and use the existing server for Commercial?
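In principle one instance of the add-on supports multiple accounts and multiple inputs, so the two clouds could be two separately configured accounts/inputs side by side. A very rough sketch of the idea (the stanza and parameter names are assumptions; the real names come from the add-on's UI-generated inputs.conf):

# inputs.conf (illustrative only)
[mscs_azure_event_hub://commercial_hub]
account = azure_commercial
...

[mscs_azure_event_hub://gov_hub]
account = azure_government
...

Whether the Government endpoints are reachable from the same host is a network question rather than a Splunk one.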
This is my 2nd follow-up regarding this solution: https://community.splunk.com/t5/Alerting/How-can-I-query-to-get-all-alerts-which-are-configured/m-p/...

My question now is about the search field (the one that contains the actual Splunk query behind each alert). Does this field require any special handling? If I need to use this field for filtering purposes inside a search command, would it be any different than using another field like title? Or can I simply use something like the following?

| rest /servicesNS/-/-/saved/searches
| search alert.track=1 AND title="prefix*" AND search="index=someindex*"
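Filtering on a field that happens to be named search works like any other field in the search command, since the field=value syntax disambiguates it from the command. Two details worth noting in a sketch:

| rest /servicesNS/-/-/saved/searches
| search alert.track=1 AND title="prefix*" AND search="*index=someindex*"

The leading wildcard matters because the saved query rarely starts with the literal index=... string. In eval contexts the field name needs single quotes, e.g. | where match('search', "index=someindex").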
Hi All,

We have a requirement where the end user would upload a CSV to our HF, and from there, jobs would process it. The problem with the Lookup Editor is that it gives a view of all the CSVs, which runs contrary to the restricted view we are trying to enforce.

An alternative idea we came up with is to create a custom page that has an upload button, but we are struggling with how to link the JS code that uploads files to the backend.
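One common wiring is for the page's JS to POST the file to a custom REST handler in your app through Splunk Web's splunkd proxy. A sketch, assuming you have written a handler registered at services/my_app/upload_csv (the endpoint, and everything about it, is hypothetical):

// custom dashboard JS (endpoint path is a placeholder; the locale
// prefix, e.g. /en-US, depends on your deployment)
const form = new FormData();
form.append("file", fileInput.files[0]);   // an <input type="file"> element
fetch("/en-US/splunkd/__raw/services/my_app/upload_csv", {
  method: "POST",
  body: form,                              // session auth rides on the cookie
}).then(r => console.log("upload status", r.status));

The splunkd/__raw proxy forwards the call to splunkd with the user's session, so the handler can enforce per-user restrictions server-side.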
Hi Splunkers, I have a simple drilldown on my Splunk dashboard that links to an external website. How can I get Splunk to log the URL that was clicked by the user? I would like to see a log of all the URLs clicked by each user for audit purposes. Regards.
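Clicks on an external <link> never pass through splunkd, so nothing is logged natively. One pattern is to have the drilldown also set a token and let a hidden search write that token to a summary index. A Simple XML sketch (the token, index and field names are examples):

<drilldown>
  <set token="clicked_url">http://site.com/name=$row.name$</set>
  <link target="_blank">http://site.com/name=$row.name$</link>
</drilldown>

<!-- hidden search that fires each time the token is set -->
<search>
  <query>| makeresults | eval user="$env:user$", url="$clicked_url|s$" | collect index=click_audit</query>
</search>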
Hi Splunkers, I have a simple drilldown configured that links to an external website. The link generated by the drilldown has data clearly visible in the URL, like http[:]//site.com/name=joe. Is it possible to POST data to an external website using a drilldown? I would prefer my URL to be http[:]//site.com, with name=joe set as a POST parameter. Regards.
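Drilldown <link> elements can only issue GET requests, so a POST needs custom dashboard JS intercepting the click. A minimal sketch (the URL and parameter come from the question; the click wiring is assumed to exist):

// custom dashboard JS attached to the drilldown handler
fetch("http://site.com", {
  method: "POST",
  body: new URLSearchParams({ name: "joe" }),  // form-encoded POST body
});

Note that a cross-origin POST from the browser also requires the external site to permit it via CORS.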