All Topics

Hello everyone, today I observed some odd behavior while using eval with a KV store lookup. Here is my case. My query is:

| eval fieldname2=if(fieldname1="abc","Starting",fieldname2)
| lookup local=true mylookup Key as fieldname2 OUTPUT fieldname3

When I run this search I get far fewer matches than when I run the following, which returns more:

| eval fieldname2=if(fieldname1="abc","Starting",fieldname2)
| lookup local=true mylookup.csv Key as fieldname2 OUTPUT fieldname3

mylookup and mylookup.csv contain the same set of data; the only difference is that the first is a KV store collection and the second is a CSV lookup file. Until we upgraded from 7.1.6 to 7.3.5 this worked without any issues, but after the upgrade we observed this odd behavior. Maybe someone can help me find what I am missing? Thanks in advance! Regards, BK
When I try to find the difference between two epoch times, (1) I get blank values when computing the range in days, and (2) I need to filter only the records where days = 0.

| eval printedA_epoch=strptime(printedtimestrampA,"%Y-%m-%dT%H:%M:%S.%Q"), printedB_epoch=strptime(printedtimestrampB,"%Y-%m-%dT%H:%M:%S.%Q")
| eval indextime=_indextime
| eval diffA=indextime-printedA_epoch, diffB=indextime-printedB_epoch
| eval daysA=round((diffA/86400),0), daysB=round((diffB/86400),0)
| table host, printedA_epoch, printedB_epoch, indextime, diffA, diffB, daysA, daysB
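The arithmetic the query above attempts (the rounded whole-day difference between an index time and a parsed timestamp, then keeping only rows where the difference is 0) can be sketched in Python. The format string mirrors the SPL one; the sample timestamp and epoch value are illustrative, not from the poster's data:

```python
from datetime import datetime, timezone

def days_between(printed: str, indextime: float) -> int:
    """Rounded whole-day difference between an index time (epoch seconds)
    and a parsed ISO-style timestamp, assumed to be UTC."""
    dt = datetime.strptime(printed, "%Y-%m-%dT%H:%M:%S.%f")
    printed_epoch = dt.replace(tzinfo=timezone.utc).timestamp()
    return round((indextime - printed_epoch) / 86400)

# A record would pass the "days = 0" filter when both times fall
# within roughly the same day:
print(days_between("2020-06-24T03:07:39.997", 1592968059.0))  # 0
```

If this returns blank values in SPL, the usual suspect is strptime failing because the format string does not match the raw field exactly, which yields null rather than an error.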
Hi, I have two different time values:

2020-06-24 03:07:39,997Z
2020-06-24 03:07:39.990Z

The first value has a comma (,) before the milliseconds and the second value has a dot (.). How can I parse both values? Is there any documentation on this?
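One common approach is to normalize the separator before parsing, so a single format string handles both variants. A minimal Python sketch of that idea (the function name is mine):

```python
from datetime import datetime

def parse_mixed(ts: str) -> datetime:
    """Parse timestamps whose millisecond separator is either ',' or '.'."""
    # Normalize the comma variant to the dot variant, drop the trailing Z,
    # then one format string covers both inputs.
    ts = ts.replace(",", ".").rstrip("Z")
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")

a = parse_mixed("2020-06-24 03:07:39,997Z")
b = parse_mixed("2020-06-24 03:07:39.990Z")
print(a.microsecond, b.microsecond)  # 997000 990000
```

In Splunk itself the analogous trick is to rewrite the field (e.g. with replace()) before calling strptime, or to configure both formats at index time.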
I'm trying to sum one field's values grouped by another field's values. For example:

Source  Remediated  Space_id
A       45          156
B       46          199
B       98          233
B       8           233
A       9           156
D       3           148

Here I want to sum the Remediated values that share the same Space_id; if Space_id is 233, I want to add 98 + 8. The result should look like:

Source  Remediated  Space_id
A       54          156
B       46          199
B       106         233
D       3           148

Is this possible? Please help me.
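In SPL this kind of grouping is usually done with something like `| stats sum(Remediated) as Remediated by Source, Space_id`. The grouping logic itself, sketched in Python with the sample rows from the post:

```python
from collections import defaultdict

# (Source, Remediated, Space_id) rows from the example above
rows = [("A", 45, 156), ("B", 46, 199), ("B", 98, 233),
        ("B", 8, 233), ("A", 9, 156), ("D", 3, 148)]

totals = defaultdict(int)   # Space_id -> summed Remediated
source_of = {}              # Space_id -> first Source seen
for source, remediated, space_id in rows:
    totals[space_id] += remediated
    source_of.setdefault(space_id, source)

for space_id in sorted(totals):
    print(source_of[space_id], totals[space_id], space_id)
```

This reproduces the expected output: 233 sums to 106, 156 to 54, and the single-row groups pass through unchanged.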
I want to see which feeds exist that are not being used in any use case, report, alert, or anything else. Kindly suggest a way to check those indexes. Thanks in advance.
I want to correlate AWS console login events with CyberArk login events. People log in to the AWS console via CyberArk, so I need to correlate the AWS login events with the CyberArk ones to determine whether the people logging in to AWS are doing so through CyberArk or not.
Hello, we have a situation: we have database servers at different locations with no network connectivity between them. We have 5 DB licenses. If we install the default database agents on these servers, only one DB agent is active at a time and the others go into passive mode. We need help either creating a customized DB agent for each location, or with the exact steps to follow to achieve monitoring of all 5 servers. Kindly help with the same. Regards, Cloud Support.
Hello, I am looking for some help on status evaluation. What I am trying to do is create an eval column that is either true or false based on FROZEN_PCT or REFRIGERATOR_PCT.

Status = False: this occurs when the capacity has reached 70% and has not yet decreased back to 50%. The value will not increase after it reaches 70% or greater (these rows were highlighted in red in my table).
Status = True: everything else, i.e. once the value is less than 0.7 and has dropped below 0.5 after going false (the rows not highlighted).

Let me know if you can help!

UPDATED_TS     Date       Hour  TIME   FROZEN_PCT  REFRIGERATOR_PCT
6/10/20 19:44  6/10/2020  19    19:44  4.70%       33.63%
6/10/20 19:35  6/10/2020  19    19:35  8.77%       33.17%
6/10/20 19:35  6/10/2020  19    19:35  8.77%       37.43%
6/10/20 19:25  6/10/2020  19    19:25  8.77%       37.66%
6/10/20 18:44  6/10/2020  18    18:44  8.77%       41.49%
6/10/20 18:43  6/10/2020  18    18:43  4.70%       37.66%
6/10/20 18:39  6/10/2020  18    18:39  4.70%       36.58%
6/10/20 18:38  6/10/2020  18    18:38  4.70%       37.28%
6/10/20 18:23  6/10/2020  18    18:23  21.44%      41.55%
6/10/20 18:22  6/10/2020  18    18:22  21.44%      49.19%
6/10/20 17:47  6/10/2020  17    17:47  21.44%      49.19%
6/10/20 17:42  6/10/2020  17    17:42  21.44%      58.00%
6/10/20 17:27  6/10/2020  17    17:27  21.44%      59.22%
6/10/20 17:25  6/10/2020  17    17:25  21.44%      61.80%
6/10/20 16:54  6/10/2020  16    16:54  21.44%      61.80%
6/10/20 16:54  6/10/2020  16    16:54  21.44%      62.29%
6/10/20 16:52  6/10/2020  16    16:52  21.44%      63.95%
6/10/20 16:50  6/10/2020  16    16:50  21.44%      69.11%
6/10/20 16:45  6/10/2020  16    16:45  21.44%      73.59%
6/10/20 16:37  6/10/2020  16    16:37  16.74%      67.68%
6/10/20 16:33  6/10/2020  16    16:33  16.74%      55.12%
6/10/20 16:12  6/10/2020  16    16:12  0.00%       51.22%
6/10/20 15:55  6/10/2020  15    15:55  0.00%       59.01%
6/10/20 15:39  6/10/2020  15    15:39  0.00%       50.19%
6/10/20 15:36  6/10/2020  15    15:36  0.00%       51.29%
6/10/20 15:30  6/10/2020  15    15:30  0.00%       49.64%
6/10/20 14:59  6/10/2020  14    14:59  0.00%       49.27%
6/10/20 14:59  6/10/2020  14    14:59  0.00%       46.02%
6/10/20 14:53  6/10/2020  14    14:53  0.00%       54.06%
6/10/20 14:18  6/10/2020  14    14:18  0.00%       46.43%
6/10/20 14:00  6/10/2020  14    14:00  0.00%       38.64%
6/10/20 13:44  6/10/2020  13    13:44  0.00%       38.64%
6/10/20 13:34  6/10/2020  13    13:34  2.56%       40.42%
6/10/20 12:25  6/10/2020  12    12:25  2.56%       39.32%
6/10/20 12:01  6/10/2020  12    12:01  2.56%       38.61%
6/10/20 11:43  6/10/2020  11    11:43  2.56%       38.61%
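The rule described above is a hysteresis threshold: the status trips to False at 70% or above and only resets to True after the value falls below 50%. A minimal Python sketch of that logic, assuming values are processed oldest first (the sample inputs are illustrative, not rows from the table):

```python
def status_series(pcts, trip=0.70, reset=0.50):
    """Return a True/False status per value: flips to False at >= trip,
    returns to True only after dropping below reset."""
    tripped = False
    out = []
    for p in pcts:
        if not tripped and p >= trip:
            tripped = True
        elif tripped and p < reset:
            tripped = False
        out.append(not tripped)
    return out

# 0.74 trips the status; it stays False through 0.60 and
# recovers only once the value drops below 0.50:
print(status_series([0.51, 0.69, 0.74, 0.60, 0.49, 0.41]))
# [True, True, False, False, True, True]
```

In SPL the equivalent usually needs streamstats or autoregress to carry the tripped/untripped state from one event to the next, since a plain eval cannot see previous rows.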
Hi, I need the index time and host time to repeat for each row of data for host, printedA_epoch & printedB_epoch. How can I achieve it? Thanks, Karuna
Hello everyone, I recently downloaded the free version of Splunk Enterprise. I am following a training course where I need to upload a log file and should then be able to see the number of events indexed in my system. However, I am not getting any data in Splunk. The data does not seem to be indexing properly, and it keeps saying "waiting for data..". What should I do to fix this?
When a multivalue field is given as the field-list for transaction, transaction does not attempt to combine the events even though they share a common multivalue field. Example query:

| makeresults count=4
| streamstats count
| eval abc="123"
| eval def=if(count!=2, "456", null())
| eval ghi=if(count!=1, "789", null())
| eval abc=mvdedup(mvappend(abc, def, ghi))
| transaction abc keeporphans=1 keepevicted=1

I'd expect all 4 events to be combined into 1, since every event shares the common value "123". However, this is not the case. Is there any way to make this happen?
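The behavior the poster expects (merge any events whose multivalue fields share at least one value, transitively) amounts to grouping events into connected components. A minimal Python sketch of that grouping semantics, not Splunk's actual transaction algorithm:

```python
def group_events(events):
    """Merge events transitively: two events land in the same group
    if their value sets share at least one element."""
    groups = []  # list of (value_set, event_indices)
    for i, values in enumerate(events):
        vals = set(values)
        overlapping = [g for g in groups if g[0] & vals]
        merged_vals, merged_idx = vals, [i]
        for g in overlapping:
            merged_vals |= g[0]
            merged_idx += g[1]
            groups.remove(g)
        groups.append((merged_vals, merged_idx))
    return [sorted(idx) for _, idx in groups]

# The four events produced by the makeresults example all share "123":
events = [{"123", "456"}, {"123", "789"},
          {"123", "456", "789"}, {"123", "456", "789"}]
print(group_events(events))  # [[0, 1, 2, 3]]
```

Since all four value sets contain "123", the expected result is a single group of all four events, which is what the sketch produces.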
Hello, I have an inputlookup table (test.csv) with a few columns, including 7 columns for the 7 days of the week, as shown below.

FILENAME  Monday  Tuesday  Wednesday  Thursday  Friday  Saturday  Sunday
abc       1       2        3          4         5       X         X
xyz       11      2        30         4         5       X         X
123       111     2        300        40        5       X         X

I need to pull the column corresponding to the execution day. For example, if I execute it on 6/24/2020 (a Wednesday), I should get something like this:

FILENAME  Count
abc       3
xyz       30
123       300

If I run this search on 6/27/2020, a Saturday, I should get something like this:

FILENAME  Count
abc       X
xyz       X
123       X

I tried something like this, but it isn't working:

| inputlookup test.csv
| eval wkday = strftime(now(),"%A")
| eval Count = {wkday}

Any help would be greatly appreciated.
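The underlying operation is "select the column whose name matches the run date's weekday". A Python sketch of that idea, using a trimmed version of the lookup rows above (in SPL, a common workaround is a case() expression over the seven day-name fields, since {wkday} indirection inside eval does not dereference field values this way):

```python
from datetime import date

# Trimmed rows from test.csv (only the columns needed for the demo)
rows = [
    {"FILENAME": "abc", "Monday": "1", "Wednesday": "3", "Saturday": "X"},
    {"FILENAME": "xyz", "Monday": "11", "Wednesday": "30", "Saturday": "X"},
]

def counts_for(run_date, rows):
    """Pick the column named after the run date's weekday."""
    wkday = run_date.strftime("%A")  # e.g. "Wednesday"
    return [(r["FILENAME"], r[wkday]) for r in rows]

print(counts_for(date(2020, 6, 24), rows))  # [('abc', '3'), ('xyz', '30')]
print(counts_for(date(2020, 6, 27), rows))  # [('abc', 'X'), ('xyz', 'X')]
```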
So I have this search:

index="sense_power_monitor"
| where 'usage_info.solar_w'>=0
| bin _time span=1h
| stats count as samples sum(usage_info.solar_w) as watt_sum by _time
| eval kW_Sum=watt_sum/1000
| eval avg_kWh=kW_Sum/samples
| stats sum(avg_kWh)

which returns 47.56. And I have this search:

index="sense_power_monitor"
| where 'usage_info.d_w'>=0
| bin _time span=1h
| stats count as samples sum(usage_info.d_w) as watt_sum by _time
| eval kW_Sum=watt_sum/1000
| eval avg_kWh=kW_Sum/samples
| stats sum(avg_kWh)

which returns 74.73. I know I can get the percentage between these two results as 47.56/74.73*100 = 63.64%. How can I do one search that gives me that final percentage?
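The two searches differ only in which field they sum, so they can typically be merged into one search using conditional sums (e.g. sum(eval(...)) over both fields) before dividing. The final arithmetic itself, using the figures from the post:

```python
solar_kwh = 47.56    # result of the first search (usage_info.solar_w)
demand_kwh = 74.73   # result of the second search (usage_info.d_w)

pct = solar_kwh / demand_kwh * 100
print(round(pct, 2))  # 63.64
```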
When I run Get Users against the group named G-SomeGroup, it returns just 1 result, but the group contains 3 members. I can see from PowerShell's Get-ADGroupMember cmdlet that the group contains 3 users. I'm running PowerShell as the same AD user I've configured the LDAP asset in Phantom to use. The users in G-SomeGroup are direct members, not members via nesting. If I query G-SomeOtherGroup, I see hundreds of members. Any suggestions? Or logs to check?
Hello, I'm looking for help showing the uptime/downtime percentage for my universal forwarders over the past 7 days. I've seen many people trying to solve a similar use case on Answers but haven't quite seen what I'm looking for yet. I've been testing the query below. My thinking was to calculate the difference in minutes between a host's timestamp for the eval field Action = "Splunkd Shutdown" and Action = "Splunkd Starting", then sum the totals in minutes and divide by the total minutes in one week (10080) to get the uptime. There are problems with this logic, though: if the last time a host shut down is not within your search window, you won't get an accurate metric. I'm open to a discussion on how this can be monitored most accurately.

This query returns the host and timestamp for when splunkd shut down, and another event with the timestamp when splunkd started:

index=_internal source="*SplunkUniversalForwarder*\\splunkd.log" (event_message="*Splunkd starting*" OR event_message="*Shutting down splunkd*")
| eval Action = case(like(event_message, "%Splunkd starting%"), "Splunkd Starting", like(event_message, "%Shutting down splunkd%"), "Splunkd Shutdown")
| stats count by host, _time, Action
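The pairing-and-summing step described above can be sketched in Python. This assumes events for a single host sorted oldest first, with timestamps already in minutes; it deliberately ignores the edge case the poster raises (a start or shutdown falling outside the search window), and the event data is hypothetical:

```python
# Hypothetical (host, epoch_minutes, action) events, oldest first
events = [
    ("hostA", 0, "Splunkd Starting"),
    ("hostA", 6000, "Splunkd Shutdown"),
    ("hostA", 6030, "Splunkd Starting"),
    ("hostA", 10080, "Splunkd Shutdown"),
]

def uptime_pct(events, window_minutes=10080):
    """Sum (shutdown - start) intervals and express them as a
    percentage of the 7-day window."""
    up_minutes = 0
    started = None
    for _, t, action in events:
        if action == "Splunkd Starting":
            started = t
        elif action == "Splunkd Shutdown" and started is not None:
            up_minutes += t - started
            started = None
    return up_minutes / window_minutes * 100

print(round(uptime_pct(events), 2))  # 99.7
```

Handling the edge case would mean treating a window that opens mid-uptime as "started at window start", and an uptime still open at the window end as "shut down at window end".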
Hi team, I have tried everything I can think of to extract the data from an index whose field values match a lookup table. The requirement is to pull the existing fields in index=xxxxxx sourcetype=yyyy. I can see many fields, but the one I care about is path: /vol/xxxxxx/xxxxxxxx-lun0_xxxxxxxx/uswilo60-00.lun. We have a large number of events, but we only need the 300 required LUN IDs along with some other fields; like the highlighted part, there are many, but we need to pull the data for only the 300 required LUNs. I have created a lookup table for those 300 LUNs, but how do I filter based on only these 300? We should pull path, volume, host, name, and the other fields that exist in the index, while the lookup has only one column, the LUN. Could anyone help with this?
I have a dataset and I'm looking for a timechart with span=1d for the data I have from a location:

| table _time Cases Hospitalizations ICO Recoveries Deaths

I want to display all of these in one chart, on one y-axis, over the past 7 days.
Hi all, we are a Splunk partner. Where can I find pre-sales engineer courses and exams? I need to follow them for our partner program. Thank you, Pieter
I'm trying to clone events that originate from Splunk Connect for Kubernetes, using the following configuration in props.conf and transforms.conf:

props.conf:

[(?::){0}*]
TRANSFORMS-logs_java = clone_trace

[trace]
TRANSFORMS-indice = indice_trace

transforms.conf:

[clone_trace]
REGEX = .
CLONE_SOURCETYPE = trace

[indice_trace]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = jaeger_trace

For some reason, some events are not being cloned. Any ideas?