All Posts


@PickleRick But how can I get the data like the image below?
This has nothing to do with the command being run from the CLI. It means that the index you're trying to dbinspect is a SmartStore-enabled index, and therefore you can only run a limited dbinspect on it. See the docs on the dbinspect command for details.
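For comparison, a full dbinspect on a regular (non-SmartStore) index returns per-bucket detail. A minimal sketch, assuming an index named main (the index name is just an example; field names are per the dbinspect docs):

| dbinspect index=main
| table bucketId state startEpoch endEpoch eventCount sizeOnDiskMB
``` per-bucket detail; on a SmartStore index only a subset of this is available ```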
As @bowesmana says, but do you really need the $env$ in the field name? Wouldn't | stats dc(hostname) by host_environment make more sense in a dashboard?
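If the dropdown just filters the environment, a sketch of that simpler shape (assuming a token $env$ and a host_environment field; the base search here is hypothetical):

index=your_index host_environment=$env$
| stats dc(hostname) AS server_count by host_environment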
First, when illustrating structured data, please post compliant raw text. In your case, compliant JSON would be:

{
  "application": "app1",
  "feature": "feature1",
  "timestamp": "01/29/2025 23:02:00 +0000",
  "users": [
    { "userhost": "client1", "username": "user1" },
    { "userhost": "client2", "username": "user2" }
  ]
}

The trick here is to reach into the JSON array to perform mvexpand, ignoring Splunk's default flattening of the array:

| spath path=users{}
| mvexpand users{}
| spath input=users{}

Your sample data will give:

application  feature   timestamp                  userhost  username  users{}
app1         feature1  01/29/2025 23:02:00 +0000  client1   user1     { "userhost": "client1", "username": "user1" }
app1         feature1  01/29/2025 23:02:00 +0000  client2   user2     { "userhost": "client2", "username": "user2" }

Here is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw = "{ \"application\": \"app1\", \"feature\": \"feature1\", \"timestamp\": \"01/29/2025 23:02:00 +0000\", \"users\": [ { \"userhost\": \"client1\", \"username\": \"user1\" }, { \"userhost\": \"client2\", \"username\": \"user2\" } ] }"
| spath
``` data emulation above ```
Your question seems to be really about extracting the user ID. In other words, given the illustrated data, you want:

_raw                                                         restoftext                              trx  user
10 1/17/2025 15:03:20 account record user100 does not exist  account record user100 does not exist  10   user100
12 1/17/2025 15:03:20 login as admin                         login as admin                          12   admin

Is this correct? The above can be achieved with regex, but you have to enumerate all those login events and write alternative expressions. For these two:

| rex "^(?<trx>\d+)\s+\S+\s+\S+\s+(?<restoftext>.+)"
| rex field=restoftext "(login as|account record) (?<user>\S+)"

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="_raw
10 1/17/2025 15:03:20 account record user100 does not exist
12 1/17/2025 15:03:20 login as admin, raising privileges"
``` data emulation above ```

Hope this helps.
I'm trying to write a Splunk search to extract the user ID and time of a login. Log sample below:

trx#  datetime             remaining text in event
10    1/17/2025 15:03:20   account record user100 does not exist
12    1/17/2025 15:03:20   login as admin, raising privileges

Both results represent a successful login. Both results have the same datetime but different trx# (10 or 12). I've tried streamstats count by _time, which generates a count for each result. The issue is: how do I isolate the first result (trx#=10) so I can extract the user ID (user100)? The streamstats command doesn't always assign the same count value (1 or 2) to the two logs.

Thanks in advance for your help.
If you want to do this at search time, create a composite field and expand that, e.g.

| eval composite_field=mvzip('users{}.userhost', 'users{}.username', "###")
| fields - users{}*
| mvexpand composite_field
| rex field=composite_field "(?<userhost>.*)###(?<username>.*)"
| fields - composite_field

Note that it will only zip correctly if there are exactly the same number of elements in each of the MV fields.
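To check that precondition first, a sketch using mvcount (field names as in your events):

| eval n_hosts=mvcount('users{}.userhost')
| eval n_users=mvcount('users{}.username')
| where n_hosts==n_users
``` keep only events where the two arrays line up ```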
Just adding the translated text here: The Splunk Enterprise upgrade procedure states that the automatic startup setting should be disabled, but why is it necessary to disable the automatic startup setting?
(Original post, in Japanese:) The Splunk Enterprise upgrade procedure says to disable the automatic startup setting. For what reason is it necessary to disable automatic startup?
Hi Aedah, you are trying to upload an xlsx file, which is not text. Export the file in Excel as CSV and upload the CSV instead. Cheers, MuS
I don't get why the uploaded data is displayed like this. I am unable to create dashboards as it is not identifying all the data available in the file.
I have data that looks something like this, coming in as JSON: time, application, feature, username, hostname. The problem is that username and hostname are nested arrays, like this:

{
  application: app1
  feature: feature1
  timestamp: 01/29/2025 23:02:00 +0000
  users: [
    { userhost: client1 username: user1 }
    { userhost: client2 username: user2 }
  ]
}

and when the event shows up in Splunk, userhost and username are converted to multi-value fields:

_time                application  feature   users{}.username  users{}.userhost
01/29/2025 23:02:00  app1         feature1  user1 user2       client1 client2

I need an SPL method to convert these into individual events for the purposes of a search, so that I can perform LDAP lookups on each hostname. mvexpand only works on one field at a time and doesn't recognize users or users{} as valid input, which loses the relationship between user1:client1 and user2:client2. How can I convert both arrays to individual events by array index, so that I preserve the relationship between username and hostname, like this:

_time                application  feature   users{}.username  users{}.userhost
01/29/2025 23:02:00  app1         feature1  user1             client1
01/29/2025 23:02:00  app1         feature1  user2             client2
Generally speaking, Splunk is not good at reporting on something that doesn't exist, so if a transaction is not in ABC nor in XYZ, then Splunk doesn't know about it and can't report that it is missing from both - unless you have a list of transactions from somewhere else.
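For illustration, the usual workaround is to seed the search with that list. A sketch, assuming a hypothetical lookup expected_transactions.csv with a transaction_id field and indexes named abc and xyz:

index=abc OR index=xyz
| stats count by transaction_id
| append [| inputlookup expected_transactions.csv | fields transaction_id | eval count=0]
| stats sum(count) AS events by transaction_id
| where events==0
``` transactions in the expected list with no events in either index ```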
| eval date_hour=strftime(_time, "%H")
| eval date_wday=strftime(_time, "%A")
| chart dc(RMI_MastIncNumb) AS incidents over date_wday by date_hour useother=f limit=0
| eval wd=lower(date_wday)
| eval sort_field=case(wd=="monday",2, wd=="tuesday",3, wd=="wednesday",4, wd=="thursday",5, wd=="friday",6, wd=="saturday",7, wd=="sunday",1)
| sort sort_field
| fields - sort_field wd
Thanks - this is what I was after! Yeah I was getting a list of every sessionId but I was trying to find a way to get a count of each unique ID.  Cheers, Ryan
If you want the Total to represent the total of the individual MINUTE in the period, then it will always be different from the max of all the minutes in the period for each LPAR. e.g.

Time,TRX,TRX2,STM,Total
00:01,4,5,6,15
00:02,1,1,16,18

When you take the max of each column over the hour it will be

00:00,4,5,16,18

and of course 18 is less than the sum of 4+5+16 - so you can't have it both ways. In your original post, you wanted the total to reflect the sum of LPAR for the biggest MINUTE, so it can't also represent the sum of the biggest DAY unless you do another addtotals to a new field which is the total for the day.
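For example, a sketch of that extra addtotals (assuming per-LPAR columns TRX, TRX2 and STM after the per-minute roll-up; names are taken from your sample):

| timechart span=1d max(TRX) AS TRX max(TRX2) AS TRX2 max(STM) AS STM max(Total) AS Total
| addtotals fieldname=DayTotal TRX TRX2 STM
``` Total keeps the biggest single minute; DayTotal sums the per-LPAR daily maxima ```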
I'm trying to get a table or heatmap of a count of incidents that occur by day and hour. My results make sense, except I'm only getting hours 8 through 20. I know incidents occur round the clock, so I should be seeing a count for each hour. Any suggestions?

| eval date_hour=strftime(_time, "%H")
| eval date_wday=strftime(_time, "%A")
| chart dc(RMI_MastIncNumb) AS incidents over date_wday by date_hour useother=f
| eval wd=lower(date_wday)
| eval sort_field=case(wd=="monday",2, wd=="tuesday",3, wd=="wednesday",4, wd=="thursday",5, wd=="friday",6, wd=="saturday",7, wd=="sunday",1)
| sort sort_field
| fields - sort_field wd
OK, so it's getting created in the stats command, so don't use that technique. Do something like

| stats dc(hostname) as server_count
| eval n=if($env|s$="*", "ALL", $env|s$)
| eval server_{n}_count=server_count
| fields - server_count

but do you really need the $env$ in the field name? Is this Dashboard Studio? You can probably assign an additional token $env_name$ based on the selected NAME of the environment rather than the token value. I'm not familiar enough with DS to say how to do this, but you can then use $env$ as the search constraint and $env_name$ as the server name.
Hi @bowesmana,  Thanks for your help. Is there a way to keep the LPAR values when the Total max is reached? As you can see in my graph, the Total curve should not be exceeded by the combination of my 3 LPARs. How can I do that? Thanks!
You have to split by index as well. Try this

| tstats count where index IN (network, proxy) by _time span=1h index
| timechart span=1h max(count) by index

The tstats will give you an index column as well as count, then the timechart will convert that to a timechart. Note that you need to use max(count) here. Note you can also do this simply with tstats using prestats and chart, i.e.

| tstats prestats=t count where index IN (network, proxy) by _time span=1h index
| chart count by _time index

This way you just use chart count and you don't need the max.