All Posts

Using dedup on _raw is always going to give you problems! Imagine what is going on in the server: it has to hold all that data so it can determine whether the next event is a duplicate of one seen before. The transaction command is also going to give you false information on that dataset - it has memory limitations, and you will not know that it is failing; it just won't give you correct results. Let's wind back - what do you need to know about the duplicates, and what do you want to show as a result? You could do something like

| eval sha=sha256(_raw)
| fields - _raw
| stats count by _time sha
| where count > 1

where you turn _raw into a hash and then run stats on the hash to find the duplicate counts. You could instead use

| eval sha=sha256(_raw)
| stats count values(_raw) as rawVals by _time sha
| where count > 1

so that you can also see the raw values. Try this on a small dataset before you suddenly jump to 1.3 TB of data.
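The hash-then-count idea above can be sketched outside Splunk. This is a minimal Python illustration of why it is memory-friendly (the sample events are invented), not the SPL itself:

```python
import hashlib
from collections import Counter

def duplicate_counts(events):
    """Count events by SHA-256 digest so only fixed-size hashes
    are held in memory, never the raw events themselves."""
    counts = Counter(
        hashlib.sha256(raw.encode("utf-8")).hexdigest() for raw in events
    )
    # Mirror "| where count > 1": keep only hashes seen more than once
    return {h: c for h, c in counts.items() if c > 1}

sample = ["alpha", "beta", "alpha", "alpha", "gamma"]
print(duplicate_counts(sample))  # one hash (for "alpha") with count 3
```

Each event costs a constant 32 bytes of state regardless of its raw size, which is the same trade the sha256-based SPL makes.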
It looks to me like lines are being wrapped by the terminal rather than newlines being added. In that case, no special props.conf settings are needed. You should have success with

[vsi_esxi_syslog]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>
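The two regexes in that stanza can be sanity-checked in isolation. A quick Python check against a hypothetical ESXi syslog stream (the sample lines and priority values are invented):

```python
import re

# The two regexes from the stanza above
LINE_BREAKER = r"([\r\n]+)"   # events are separated by runs of CR/LF
TIME_PREFIX = r"^<\d+>"       # the timestamp follows the syslog <PRI> header

# Hypothetical wrapped stream: two events separated by CRLF
stream = ("<166>2023-10-25T13:34:05Z esxi01 Hostd: info A\r\n"
          "<167>2023-10-25T13:34:06Z esxi01 Hostd: info B")

# re.split keeps the captured separator, so take every other element
events = re.split(LINE_BREAKER, stream)[::2]

# TIME_PREFIX must match at the start of every event
assert all(re.match(TIME_PREFIX, e) for e in events)
print(events)  # two events, each starting with its <PRI> header
```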
Need help with a Splunk API curl query; I am seeing the below error.

curl -k -u apiuser:password "https://10.236.141.0:8089/services/search/jobs/export" -d search="search index=address-validation earliest=-15m latest=now source=\"eventhub://sams-jupiter-prod-scus-logs-premium-1.servicebus.windows.net/list-service;\" | stats dc(kubernetes.pod_name) as pod_count"

<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="ERROR">Unbalanced quotes.</msg>
</messages>
</response>

Sometimes I don't see the result either:

curl -k -u user:password https://10.236.141.0:8089/services/search/jobs/export -d search="search index=address-validation earliest=-15m latest=now source=eventhub://sams-jupiter-prod-wus-logs-premium-1.servicebus.windows.net/address-validation; | stats dc(kubernetes.pod_name) as pod_count"

<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
<meta>
<fieldOrder />
</meta>
<messages>
<msg type="INFO">Your timerange was substituted based on your search string</msg>
</messages>
</results>
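An "Unbalanced quotes" error here is usually the shell consuming the escaped quotes before curl sends the body; curl's --data-urlencode flag is one fix, and building the POST body programmatically is another. A hedged standard-library Python sketch (it only constructs the body; the host, credentials, and transport are left out):

```python
from urllib.parse import urlencode

# The SPL string, with its inner quotes written naturally in Python
search = ('search index=address-validation earliest=-15m latest=now '
          'source="eventhub://sams-jupiter-prod-scus-logs-premium-1'
          '.servicebus.windows.net/list-service;" '
          '| stats dc(kubernetes.pod_name) as pod_count')

# urlencode percent-escapes the quotes, pipe, and semicolon for us,
# so no shell quoting can mangle them
body = urlencode({"search": search})
print(body[:40])
# POST this string as the request body to
#   https://<host>:8089/services/search/jobs/export
# e.g. with urllib.request, or via  curl ... --data "$body"
```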
I created a dashboard with a query that looks like this:

index=cbclogs sourcetype=cbc_cc_performance source="/var/log/ccccenter/performancelog.log" Company IN ($company_filter$) LossType IN ($losstype_filter$) QuickClaimType IN ($QCT_filter$) | eval minsElapsed=round(secondsElapsed/60,0) | timechart median(minsElapsed) by LOB

Suppose LOB has string values like "A", "B", "C", "D", "E", "F", "G", "H". Currently, all values are shown on the right side. How can I combine "A", "B", "C" as "A", "D", "E", "F" as "E", and "G", "H" as "G", so that the right side shows only three values without affecting the correctness of the dashboard? Actually, I am not sure whether I should call this right-side colourful column a Y axis. Thanks a lot!
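One way to get three legend values is to remap LOB before the timechart, e.g. | eval LOB=case(LOB IN ("A","B","C"),"A", LOB IN ("D","E","F"),"E", LOB IN ("G","H"),"G") inserted ahead of the timechart command. The remapping logic itself, sketched in Python with the buckets from the question:

```python
# Map each LOB value onto its display bucket
GROUPS = {"A": "A", "B": "A", "C": "A",
          "D": "E", "E": "E", "F": "E",
          "G": "G", "H": "G"}

def remap(lob):
    # Unknown values pass through unchanged
    return GROUPS.get(lob, lob)

lobs = ["A", "B", "C", "D", "E", "F", "G", "H"]
print(sorted(set(map(remap, lobs))))  # ['A', 'E', 'G']
```

Because the remap happens before the median is computed per bucket, the chart stays correct: each bucket's line is the median over all of its member LOB values.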
Hi @scout29, good for you - see you next time! Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated
Hello @_splunkker,  Have you figured out how to do this?  
Can you give more details?
Missing titleband from the search. It seems like it's a subsearch of an LDAP query or something; if I do Get-ADUser in PowerShell it is missing from there too. Here is what we had from the event logs:

| table titleband adminDescription cn co company dcName department description displayName division eventtype georegion givenName host locationCode mail mailNickname sAMAccountName title userAccountControl userAccountPropertyFlag userPrincipalName
We want to implement MFA for login to our Splunk Enterprise servers. Currently we are using the LDAP authentication method. Is it possible to implement MFA on top of LDAP?
If I am using preset time frames (Ex: Last 60 minutes), then getting --> 2023-10-25T13:34:05.040-04:00
Relative (Ex: 30 Minutes Ago) --> 2023-10-25T13:37:03.206-04:00
Real time (Ex: 30 Minutes Ago) --> No search results returned
Date Range (Ex: since 10/24/2023) --> No search results returned
Date & Time Range (Ex: 10/24/2023 12AM - 10/24/2023 1AM) --> No search results returned
Advanced (Ex: Earliest=-1h@h Latest=@h) --> 2023-10-25T12:59:59.762-04:00

Here is the sample userActions{} object:

userActions: [
  {
    apdexCategory: TOLERATING
    application: xxxx
    cdnBusyTime: null
    cdnResources: 0
    cumulativeLayoutShift: 0.0535
    customErrorCount: 0
    dateProperties: [ ... ]
    documentInteractiveTime: 4208
    domCompleteTime: 4585
    domContentLoadedTime: 4492
    domain: xxxx
    doubleProperties: [ ... ]
    duration: 4589
    endTime: 1698253596232
    firstInputDelay: 1
    firstPartyBusyTime: 1618
    firstPartyResources: 46
    frontendTime: 1387
    internalApplicationId: APPLICATION-99C2CEC2F57DD796
    javascriptErrorCount: 0
    keyUserAction: false
    largestContentfulPaint: 3926
    loadEventEnd: 4589
    loadEventStart: 4588
    longProperties: [ ... ]
    matchingConversionGoals: [ ... ]
    name: xxxx.aspx
    navigationStart: 1698253591643
    networkTime: 1235
    requestErrorCount: 0
    requestStart: 775
    responseEnd: 3202
    responseStart: 2742
    serverTime: 1967
    speedIndex: 3956
    startTime: 1698253591643
    stringProperties: [ ... ]
    targetUrl: xxxx.aspx
    thirdPartyBusyTime: null
    thirdPartyResources: 0
    totalBlockingTime: null
    type: Load
    userActionPropertyCount: 0
    visuallyCompleteTime: 4166
  }
]
Hello,
I didn't get any hits on this issue, so I'm starting a new thread; I also didn't find any previously reported defect on this possible issue. We are running Splunk Enterprise 9.5.0.

There is a need to get an accurate count of the exact number of duplicate events indexing to Splunk, and I ended up having to use the transaction command. To do this, I ran a search over a 24-hour period (it completed in about 16 minutes) to get the total number of events and the data set size. Then I ran the same search with the dedup command to remove all the duplicate events:

| dedup _time _raw

The problem is that dedup proceeds to use up all the available memory until Splunk kills the search with the message "Your search has been terminated. This is most likely due to an out of memory condition."

The data set for the 24-hour search period is 13 billion+ events, and the data set size is 1.6 TB. The 3 search heads each have 374 GB of memory and usually sit at 14%-16% memory usage. Using the higher number of 16%, that leaves 314 GB of memory available when the search starts. The search with dedup consumes that 314 GB over about 1 hour and 40 minutes until it is all used up, and Splunk kills the search when memory is near 100% utilized.

So these are the 2 reasons dedup could be using all remaining available memory:
1. The dedup command is designed this way, using more and more memory as the data set grows in size / number of events.
2. There is a defect with the dedup command in Splunk Enterprise 9.5.0.

Can someone explain which of the 2 reasons it is?
Hi, why am I not seeing the IDS_Attacks.sourcetype field in the datamodel?
Ideally the value is updated by another program
Hi, I need the below search converted into a Web datamodel search:

index=es_web action=blocked host=* sourcetype=* | stats count by category | sort 5 -count

Thanks
Hi @scout29, please try something like this:

| tstats count where index=abc BY host
| append [ | inputlookup hosts.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
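The append/stats pattern above effectively seeds every host from the lookup with a zero count, sums in the observed counts, and keeps hosts whose total stays zero. The same set logic in a standalone Python sketch (host names invented):

```python
from collections import Counter

# Hypothetical inputs: counts seen in index=abc, and the full host list
indexed = Counter({"web01": 120, "web02": 87})
expected_hosts = ["web01", "web02", "db01", "db02"]   # rows of hosts.csv

# Seed every expected host with 0 (the "| eval count=0" branch),
# then keep hosts whose summed total is still zero
totals = {h: indexed.get(h, 0) for h in expected_hosts}
silent = sorted(h for h, total in totals.items() if total == 0)
print(silent)  # hosts that sent no events
```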
https://community.splunk.com/t5/Splunk-Search/How-can-I-add-metadata-to-events-at-the-forwarder/m-p/666062/highlight/false#M228500 For the metadata add Thanks @rphillips_splk 
Hi everyone, Do you know a way to change the value of a metadata for a universal forwader ? I add my own metadata with _meta = id::Mik. Added in fields, I can see it in my events. Now I'de like to... See more...
Hi everyone, Do you know a way to change the value of a metadata for a universal forwader ? I add my own metadata with _meta = id::Mik. Added in fields, I can see it in my events. Now I'de like to change the value Mik with another value. Can I use python Api ? Splunk Cli ? Rest api ? It seem's to be a lot of solutions but for the moment I can't find one. Thanks
Hi @ivan123357, I'd try something like this:

index=custom (evt_id=1 OR evt_id=2) earliest=-7m latest=-5m
| stats last(evt_id) AS evt_id earliest(_time) AS earliest latest(_time) AS latest BY user_id
| where evt_id=1 OR (latest-earliest>300)

Ciao. Giuseppe
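The per-user aggregation and filter in that search can be sketched in plain Python. The events, timestamps, and 300-second threshold below are invented for illustration, and "last" is taken to mean the most recent event per user:

```python
# Hypothetical events: (epoch_seconds, user_id, evt_id), in time order
events = [
    (1000, "u1", 2), (1100, "u1", 1),   # last evt_id is 1      -> kept
    (1000, "u2", 2), (1400, "u2", 2),   # span 400 s > 300      -> kept
    (1000, "u3", 2), (1050, "u3", 2),   # short span, last is 2 -> dropped
]

# Mirror: stats earliest(_time) latest(_time) last(evt_id) BY user_id
by_user = {}
for t, user, evt in events:
    first, last, _ = by_user.get(user, (t, t, evt))
    by_user[user] = (min(first, t), max(last, t), evt)

# Mirror: where evt_id=1 OR (latest-earliest>300)
kept = sorted(u for u, (first, last, evt) in by_user.items()
              if evt == 1 or (last - first) > 300)
print(kept)  # ['u1', 'u2']
```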
For anyone in the future, this is the final query I went with. I was trying to group any event in a certain index and sourcetype.

index=test sourcetype=test2 source=*
| rex field=test_city "(?<city>[A-Za-z]+)_$"
| eval has_true_port = case( port_123="True" OR port_139="True" OR port_21="True" OR port_22="True" OR port_25="True" OR port_3389="True" OR port_443="True" OR port_445="True" OR port_53="True" OR port_554="True" OR port_80="True", "Yes", true(), "No" )
| where has_true_port = "Yes"
| stats values(port_123) as port_123, values(port_139) as port_139, values(port_21) as port_21, values(port_22) as port_22, values(port_25) as port_25, values(port_3389) as port_3389, values(port_443) as port_443, values(port_445) as port_445, values(port_53) as port_53, values(port_554) as port_554, values(port_80) as port_80, values(city) as City by destination, test_src_ip
| eval open_ports = if(port_123="True", "123,", "") . if(port_139="True", "139,", "") . if(port_21="True", "21,", "") . if(port_22="True", "22,", "") . if(port_25="True", "25,", "") . if(port_3389="True", "3389,", "") . if(port_443="True", "443,", "") . if(port_445="True", "445,", "") . if(port_53="True", "53,", "") . if(port_554="True", "554,", "") . if(port_80="True", "80,", "")
| eval open_ports = rtrim(open_ports, ",")
| table destination, test_src_ip, City, open_ports

Basically, this combines each open port into one row while also sorting by destination IP and source IP.
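The chain of if()-concatenations plus the trailing rtrim boils down to "join the port numbers whose flag is the string True". A standalone Python sketch of that row-collapsing step (the sample row is invented; field names follow the post):

```python
# Port list taken from the query above, in the same order
PORTS = ["123", "139", "21", "22", "25", "3389",
         "443", "445", "53", "554", "80"]

def open_ports(row):
    """Join the ports whose port_<n> flag is the string "True",
    mirroring the if()-concatenation chain plus the rtrim."""
    return ",".join(p for p in PORTS if row.get("port_" + p) == "True")

# Hypothetical stats output row for one destination/source pair
row = {"port_22": "True", "port_443": "True", "port_80": "False"}
print(open_ports(row))  # 22,443
```

Using join instead of concatenate-then-trim avoids the dangling-comma problem entirely.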