All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I have a field "blockedUri" which can contain two types of value (a plain string or a URL). Below is an example:

blockedUri = eval
blockedUri = https://analytics.google.com/sample.js

I need a Splunk search query that will return only the hostname of the value if it is a URL, or simply return the string if it is a plain string. The result should be as below:

eval
analytics.google.com

Thanks in advance
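Something along these lines is what I've been experimenting with — the URL test and the hostname capture regex here are just my guesses, not a confirmed solution:

```spl
| makeresults
| eval blockedUri="https://analytics.google.com/sample.js"
| eval result=if(match(blockedUri, "^[a-zA-Z]+://"),
    replace(blockedUri, "^[a-zA-Z]+://([^/]+).*$", "\1"),
    blockedUri)
```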
I am getting the following PCF metric log every 15 seconds. How should I visualize this data? I need to do a calculation based on the raw data; e.g., to get disk usage I need to calculate disk / disk_quota.
```
Nov 20 2020 02:19:44 [0] [gauge@47450 name="disk" value="1.75566848e+08" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="memory_quota" value="1.073741824e+09" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="disk_quota" value="1.073741824e+09" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="cpu" value="0.49678180215706796" unit="percentage"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="memory" value="1.43845562e+08" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="absolute_usage" value="2.4535017473e+11" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="absolute_entitlement" value="1.446439661673e+13" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="container_age" value="5.5075313206719e+13" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="spike_end" value="1.605783762e+09" unit="seconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:44 [0] [gauge@47450 name="spike_start" value="1.605783717e+09" unit="seconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="memory" value="1.43845562e+08" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="disk" value="1.75566848e+08" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="memory_quota" value="1.073741824e+09" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="disk_quota" value="1.073741824e+09" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="cpu" value="0.42428616676912556" unit="percentage"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="absolute_usage" value="2.45413034799e+11" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="absolute_entitlement" value="1.4468287599674e+13" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="container_age" value="5.5090128695397e+13" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="spike_start" value="1.605783717e+09" unit="seconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:19:59 [0] [gauge@47450 name="spike_end" value="1.605783762e+09" unit="seconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="memory_quota" value="1.073741824e+09" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="disk_quota" value="1.073741824e+09" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="cpu" value="0.4235502016391223" unit="percentage"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="memory" value="1.43845562e+08" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="disk" value="1.75566848e+08" unit="bytes"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="container_age" value="5.5105736349872e+13" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="absolute_usage" value="2.45479141051e+11" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="absolute_entitlement" value="1.4472386628647e+13" unit="nanoseconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="spike_start" value="1.605783717e+09" unit="seconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
Nov 20 2020 02:20:14 [0] [gauge@47450 name="spike_end" value="1.605783762e+09" unit="seconds"] app_id="00522f59-af6f-4b41-bdb8-afd804098ba3"
```
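To make the calculation concrete, this is roughly what I have in mind — the index name and the rex pattern are my placeholders, not a confirmed solution: extract the gauge name and value, pivot them into columns, then compute disk / disk_quota.

```spl
index=pcf_metrics
| rex "name=\"(?<gname>[^\"]+)\" value=\"(?<gvalue>[^\"]+)\""
| eval gvalue = tonumber(gvalue)
| bin _time span=15s
| stats latest(gvalue) as gvalue by _time gname
| xyseries _time gname gvalue
| eval disk_usage_pct = round(100 * disk / disk_quota, 2)
| table _time disk_usage_pct
```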
Hi all, I am trying to create a timechart that divides the data into 12-hour shifts. I have | timechart span=12h (followed by all the data). How do I make each span start at 0600 and 1800? Thanks!
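As a workaround I've been looking at `bin` with `aligntime` instead of timechart — the `@d+6h` anchor below is my guess at aligning the 12-hour buckets to 06:00:

```spl
... base search ...
| bin _time span=12h aligntime=@d+6h
| stats count by _time
```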
Hi Expert Team, we're trying to integrate AppDynamics with ServiceNow (Orlando version). I got/installed a plugin from the ServiceNow store, and in step 2 ("Download and install the Data Sync Utility") I downloaded the zip file, but when trying to run the AppDynamics-CMDB-win file it throws the error below. Thanks, Sandeep ^ edited by @Ryan.Paredez - this comment was originally posted on How do I use AppDynamics with ServiceNow?
I have a time picker in one of my dashboards and want the time picker to only display "Date Range". I have been successful in writing HTML code to remove all the other ones but cannot figure out how to get "Date & Time Range" removed as well. Here is the code I have currently:

<html>
<style>
div[data-test-panel-id="presets"] { display: none; }
div[data-test-panel-id="relative"] { display: none; }
div[data-test-panel-id="dateandtimerange"] { display: none; }
div[data-test-panel-id="realTime"] { display: none; }
div[data-test-panel-id="advanced"] { display: none; }
</style>
</html>

I am assuming my name for "Date & Time Range" is incorrect but have no clue as to how it should be written. Does anyone know what the panel id for that is?
Splunk would not automatically extract fields from my application log files that have key-value pairs (KVP) delimited with TAB characters. https://docs.rapid7.com/insightops/json-kvp-parsing Would custom parsing need to be configured on the Splunk forwarder side? Or is there some other efficient way to get automatic field extraction of KVP data? Thanks, Ram. Thanks for sharing any documentation.
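For reference, would a search-time REPORT extraction like this be the right approach? The sourcetype and stanza names below are placeholders, and I'm assuming pairs are separated by tabs with "=" between key and value — adjust to the actual layout. My understanding is that search-time extraction happens on the indexer/search head, so nothing would need to change on the forwarder:

```
# props.conf -- placeholder sourcetype name
[my_tab_kvp_sourcetype]
REPORT-kvp = tab_delimited_kvp

# transforms.conf
[tab_delimited_kvp]
DELIMS = "\t", "="
```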
Hello, I am looking for a way to reduce our license usage by eliminating duplicate events being forwarded from a Windows DC. For example, event ID 4624 (successful logon) generates a handful of events in Windows for every logon, and all of those are being sent to Splunk. Is there a regex we can use in inputs.conf to only allow one event to be logged, maybe based on the timestamp or something? Here is what we are currently using:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 0
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=false
index = wineventlog
ignoreOlderThan = 2d
current_only=1
whitelist = EventCode = "4624"
blacklist1 = EventCode="4624" Message=".*[\S\s]*Account\sName:\s+[\S+]+[\$]"

thanks!
Hi there, I've configured custom application logs to go to Splunk with a .ps1 script. The problem is, some logs are missing. After some troubleshooting I found there is something in the message property that makes them fail: if I exclude the message, all events are processed (yet useless). My guess is there is something treated as an exit character in the message that causes ingestion to fail. I have nothing set in props.conf.

Sample message that gets processed:

Feature audited: Scheduled Task
Type of Change: Edit Scheduled Task
Changed by: DOMAIN\svc_landesk
Date of change: 11/19/2020 13:56:17
Changed on machine: SERVERVLANDE01
Item name: Run After Image - 11/19/2020 1:54:40 PM
Old value:
Feature Specific Data: Data too big. See equivalent event in the database.

Sample message that fails and doesn't show up in Splunk:

Feature audited: Scheduled Task
Type of Change: Start Scheduled Task
Changed by: DOMAIN\svc_landesk
Date of change: 11/19/2020 13:56:17
Changed on machine: SERVERVLANDE01
Item name:
Old value:
Feature Specific Data: <ExportableChange xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" />
I am using the REST services within search to get information on alerts that have triggered. I am trying to piece together alert information and can find most of it. What I am unable to find (maybe because I don't know all the fields) is the trigger time. I see that fired_alerts has next_scheduled_time, but I don't see that it has the triggered time. I do not have access to _index, so I am working on getting some of this information here if possible.
Hi everyone! In my logs coming in, I log the duration for a job to complete, for several different jobs. Example of duration: 1:17:42 (this would be 1 hour, 17 minutes, 42 seconds). How can I calculate the average duration for each job? I want something like this:

Job     | Duration
--------|---------
Job 1   | 0:17:41
Job 2   | 1:41:16
Job 3   | 1:05:13
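A sketch of the direction I'm imagining — the field names `job` and `duration` are placeholders for whatever the real extractions are: convert H:MM:SS to seconds, average, then format back to a duration string.

```spl
| rex field=duration "(?<h>\d+):(?<m>\d+):(?<s>\d+)"
| eval dur_sec = h * 3600 + m * 60 + s
| stats avg(dur_sec) as avg_sec by job
| eval avg_duration = tostring(round(avg_sec), "duration")
| table job avg_duration
```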
Greetings, I am having issues connecting to my LM. The systems are on two different networks, and I have confirmed via nmap and curl that I am able to connect to the remote LM box from the LS box. I have also confirmed via netstat that the LS box is listening on :8089. I am receiving this error when attempting to connect to the LM via Splunk Web: Splunkd daemon is not responding: ('Error connecting to /services/licenser/localslave/license: The read operation timed out',) Yes, the LM system is online. Yes, the license is Enterprise. The only thing I see in splunkd.log on the LM is the following: WARN HttpListener - Socket error from LS.ip:46822 while idling: Read Timeout I have checked the pass4SymmKey as well, and they match between the LM and LS. Is it possible that the ciphers are not the same between the two and the LM is not properly decrypting the password for auth? Or would I see SSL/auth errors in the logs? Thanks
Let's say I have these events:

Index = A, Member = 1111, Cart Id = 1
Index = A, Member = 2222, Cart Id = 2

And these events DID NOT have a member ID field:

Index = A, Associate = Bill, Cart Id = 1
Index = A, Associate = Carl, Cart Id = 1
Index = A, Associate = Rick, Cart Id = 2

I want to display this:

Associate  Member
Bill       1111
Carl       1111
Rick       2222

How would I do that?
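Something like this is what I've been toying with — `eventstats` copies the Member value onto every event sharing the same Cart Id, then I keep only the associate events. Field names are as in the examples above; I'm not sure it's the right approach:

```spl
index=A
| eventstats values(Member) as Member by "Cart Id"
| where isnotnull(Associate)
| table Associate Member
```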
*I would typically use the map command for this, but it's currently broken and support is working to fix it. That being said, I'm trying to take a value from search1, pass it to search2, grab a field from that 2nd search, and also pass that to a 3rd search. Hopefully one of you lovely people can point me in the right direction. For example:

index=foo | rex field field1
index=boo field2=$field1$ | table src_ip
index=bar src_ip=$src_ip$ | stats values(domain) etc etc

Any help on this would be supremely appreciated.
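To make the intent concrete, here's the nested-subsearch shape I've been attempting in place of map — the rex is just a placeholder, and the rename bridges the differing field names between the searches:

```spl
index=bar
    [ search index=boo
        [ search index=foo
          | rex field=_raw "(?<field1>\S+)"
          | fields field1
          | rename field1 as field2 ]
      | fields src_ip
      | dedup src_ip ]
| stats values(domain) as domain
```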
Hi all, is there something that already parses syslogs from Tiesse systems (Levanto and/or Imola)? Levanto devices are switches and Imola devices are routers. Ciao and thanks. Giuseppe
Hi all, I have a strange problem: I have syslogs from an Imola Tiesse appliance (never encountered before now!). These syslogs don't contain the hostname, and Splunk assigns the IP address as the hostname, but I need the real hostname. So I tried to insert

host = tiesse_hostname

in each stanza of inputs.conf, but host remains the IP address. I tried overriding it with props.conf

[host::my_host]
TRANSFORMS-my_host = my_host

and transforms.conf

[My_host]
DEST_KEY = MetaData:Host
REGEX = .
FORMAT = host::tiesse_hostname

but the result is always the same. Can anyone give me an idea how to solve my problem? Ciao and thanks. Giuseppe
Is there a way you can export Dashboards in any format like JSON or XML? ^ Edited by @Ryan.Paredez to include the location of the original comment. Note: This comment was split off into its own post from this original post: How to export the list of all Custom Dashboards in the controller into a file?
I have the EVENT_TIMESTAMP_UTC field with values like:

2020-11-19 13:50:08.393085
2020-11-19 13:50:08.3517
2020-11-19 13:50:08.306023
2020-11-19 13:50:08.238995
2020-11-19 13:50:08.16885

I would like to create a new time field and treat the data as being in the UTC time zone.
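This is the kind of eval I've been trying — append an explicit UTC offset and parse; the subsecond specifier `%Q` is my guess and may need adjusting for the variable-length fractional seconds:

```spl
| eval event_time_utc = strptime(EVENT_TIMESTAMP_UTC . " +0000", "%Y-%m-%d %H:%M:%S.%Q %z")
| eval event_time_str = strftime(event_time_utc, "%Y-%m-%dT%H:%M:%S.%QZ")
```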
The search job completes; however, the table panel never displays on the main page under 'Data Tracking'. Any ideas where to look?
Hi guys, I have a question about a particular situation with a customer; maybe it was answered in another post, but I did not find it. The customer wants to get the VMware premium solution; however, they have a question: they want the ability to add new machines or devices to account for future growth, and they ask us whether that is possible. Does anyone know the best way to do this? Thanks for your time.
Hi Team, we asked our Linux team and they said that hyperthreading is enabled across all clustered indexers. These indexers are all physical servers. Our issue is high CPU usage on our indexers, so we want to check whether Splunk is taking advantage of hyperthreading, and how we can verify that. Is there an additional step to configure on the Splunk side to use the Linux server's hyperthreading? Any ideas or SPL queries you can share would be helpful. Thanks