Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

We are also facing the same issue and had to revert back to 2022.5. We haven't been able to find any updates on Alamofire compatibility in the past few months either.
Hello bowesmana, The transaction command worked. Memory was at 16% when the search was started, and the search ran for 72 hours with the transaction command, but memory utilization stayed at 16% every time it was checked. So the transaction command doesn't have the huge memory requirement issue that the dedup command has. The overall count needed was all the events in that 24-hour period, and then all the events in that same 24-hour period minus exact duplicate events. As mentioned, I was able to get the count of all the events, minus the duplicates, using the transaction command. So all is good. The overall reason for this post was to find out whether the dedup command possibly had a defect in SE 9.5.0, and you answered that the dedup command is designed that way. That said, it means the dedup command is basically useless with larger data sets.
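For reference, a minimal sketch of what a transaction-based de-duplicated count can look like (the index name, time range, and maxspan are placeholders; this is only an illustration of the approach, not necessarily the exact search that was run):

index=your_index earliest=-24h@h latest=@h ```placeholder index and time range```
| transaction _raw maxspan=1s ```collapse exact duplicate raw events seen close together```
| stats count AS deduplicated_events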
Thanks for your reply. I added this eval statement into the search, but the result is different. It is supposed to combine the different LOBs' results into one result, but the max value of the blue column at Oct 10 is a lot less than the green one (33) in the previous screenshot. The green column's value should be included in the blue column now, so the max should be the same. Not sure why the result is different now.
Have you tried something like this (assuming ServiceDown is a string)?

index=foo (trap=ServiceDown OR trap=Good) earliest=-6m
| dedup ```add a field that contains device name```
| where (trap="ServiceDown" AND _time <= relative_time(now(), "-5m"))
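A filled-in version of the same idea, assuming the device name lives in a field called host (swap in whichever field actually identifies the device in your traps):

index=foo (trap=ServiceDown OR trap=Good) earliest=-6m
| dedup host ```assumption: host holds the device name```
| where (trap="ServiceDown" AND _time <= relative_time(now(), "-5m"))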
I don't know why you're not seeing the sourcetype field.  Every event should have that field.
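If it helps to see what the data model is actually reporting, something like this will show which sourcetypes are populating the dataset (a sketch that assumes the CIM Intrusion_Detection data model and its IDS_Attacks dataset; adjust the names to match your environment):

| tstats count from datamodel=Intrusion_Detection where nodename=IDS_Attacks by IDS_Attacks.sourcetype ```data model and dataset names are assumptions```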
index=cbclogs sourcetype=cbc_cc_performance source="/var/log/ccccenter/performancelog.log" Company IN ($company_filter$) LossType IN ($losstype_filter$) QuickClaimType IN ($QCT_filter$)
| eval minsElapsed=round(secondsElapsed/60,0)
| eval LOB=case(in(LOB,"A","B","C"),"A",in(LOB,"D","E","F"),"E",in(LOB,"G","H","I"),"G")
| timechart median(minsElapsed) by LOB

That's a literal interpretation of your example; hopefully you can work it out from there.
Using dedup on _raw is always going to give you problems! Imagine what is going on in the server trying to hold all that data, so it can determine whether the next event is a duplicate of one seen before. The transaction command is also going to give you false information working on that dataset - it has memory limitations and you will not know that it is failing, it just won't give you correct results.

Let's wind back - what information do you need to know about duplicates and what do you want to show as a result of duplicates?

You could do something like

| eval sha=sha256(_raw)
| fields - _raw
| stats count by _time sha
| where count > 1

where you turn _raw into a hash and then stats on the hash to find the duplicate count of hashes. You could include

| eval sha=sha256(_raw)
| stats count values(_raw) as rawVals by _time sha
| where count > 1

so that you can see the raw values. Do this on a small dataset before you suddenly jump to 1.3TB of data.
It looks to me like the lines are wrapped by the terminal rather than newlines being added. In that case, no special props.conf settings are needed. You should have success with

[vsi_esxi_syslog]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>
Need help with the Splunk API curl query; I am seeing the below error.

curl -k -u apiuser:password "https://10.236.141.0:8089/services/search/jobs/export" -d search="search index=address-validation earliest=-15m latest=now source=\"eventhub://sams-jupiter-prod-scus-logs-premium-1.servicebus.windows.net/list-service;\" | stats dc(kubernetes.pod_name) as pod_count"

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Unbalanced quotes.</msg>
  </messages>
</response>

Sometimes I don't see the result either:

curl -k -u user:password https://10.236.141.0:8089/services/search/jobs/export -d search="search index=address-validation earliest=-15m latest=now source=eventhub://sams-jupiter-prod-wus-logs-premium-1.servicebus.windows.net/address-validation; | stats dc(kubernetes.pod_name) as pod_count"

<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
  <meta>
    <fieldOrder />
  </meta>
  <messages>
    <msg type="INFO">Your timerange was substituted based on your search string</msg>
  </messages>
</results>
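One thing that may be worth trying (a sketch, not a confirmed fix): let curl URL-encode the search body with --data-urlencode and wrap it in single quotes, so the pipe and the inner double quotes around the source value reach Splunk unchanged:

# --data-urlencode is standard curl; the search itself is copied from the post above
curl -k -u apiuser:password "https://10.236.141.0:8089/services/search/jobs/export" \
  --data-urlencode 'search=search index=address-validation earliest=-15m latest=now source="eventhub://sams-jupiter-prod-scus-logs-premium-1.servicebus.windows.net/list-service;" | stats dc(kubernetes.pod_name) as pod_count'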
I created a dashboard with a query that looks like this:

index=cbclogs sourcetype=cbc_cc_performance source="/var/log/ccccenter/performancelog.log" Company IN ($company_filter$) LossType IN ($losstype_filter$) QuickClaimType IN ($QCT_filter$)
| eval minsElapsed=round(secondsElapsed/60,0)
| timechart median(minsElapsed) by LOB

Suppose LOB has string values like "A", "B", "C", "D", "E", "F", "G", "H". Currently, all values are shown on the Y axis on the right side. How can I combine "A","B","C" as "A", "D","E","F" as "E", and "G","H" as "G", so that the right-side Y axis has only three values, without affecting the correctness of the dashboard? Actually, I am not sure whether I should call this colourful column on the right side a Y axis. Thanks a lot!
Hi @scout29  good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hello @_splunkker,  Have you figured out how to do this?  
Can you give more details?
Missing titleband from the search. It seems like it's a subsearch of an LDAP query or something. If I do Get-ADUser in PowerShell, it's missing from there too. Here is what we had from the event logs: | table titleband adminDescription cn co company dcName department description displayName division eventtype georegion givenName host locationCode mail mailNickname sAMAccountName title userAccountControl userAccountPropertyFlag userPrincipalName
We want to implement MFA for login to our Splunk Enterprise servers. Currently we are using the LDAP authentication method. Is it possible to implement MFA on top of LDAP?
If I am using preset time frames (Ex: last 60 minutes), then I get --> 2023-10-25T13:34:05.040-04:00
Relative (Ex: 30 Minutes Ago) --> 2023-10-25T13:37:03.206-04:00
Real time (Ex: 30 Minutes Ago) --> No search results returned
Date Range (Ex: since 10/24/2023) --> No search results returned
Date & Time Range (Ex: 10/24/2023 12AM - 10/24/2023 1AM) --> No search results returned
Advanced (Ex: Earliest=-1h@h Latest=@h) --> 2023-10-25T12:59:59.762-04:00

Here is the sample userActions{} object:

userActions: [
  {
    apdexCategory: TOLERATING
    application: xxxx
    cdnBusyTime: null
    cdnResources: 0
    cumulativeLayoutShift: 0.0535
    customErrorCount: 0
    dateProperties: [ ... ]
    documentInteractiveTime: 4208
    domCompleteTime: 4585
    domContentLoadedTime: 4492
    domain: xxxx
    doubleProperties: [ ... ]
    duration: 4589
    endTime: 1698253596232
    firstInputDelay: 1
    firstPartyBusyTime: 1618
    firstPartyResources: 46
    frontendTime: 1387
    internalApplicationId: APPLICATION-99C2CEC2F57DD796
    javascriptErrorCount: 0
    keyUserAction: false
    largestContentfulPaint: 3926
    loadEventEnd: 4589
    loadEventStart: 4588
    longProperties: [ ... ]
    matchingConversionGoals: [ ... ]
    name: xxxx.aspx
    navigationStart: 1698253591643
    networkTime: 1235
    requestErrorCount: 0
    requestStart: 775
    responseEnd: 3202
    responseStart: 2742
    serverTime: 1967
    speedIndex: 3956
    startTime: 1698253591643
    stringProperties: [ ... ]
    targetUrl: xxxx.aspx
    thirdPartyBusyTime: null
    thirdPartyResources: 0
    totalBlockingTime: null
    type: Load
    userActionPropertyCount: 0
    visuallyCompleteTime: 4166
  }
]
Hello, I didn't get any hits on this issue, so I'm starting a new thread, and I didn't find any previously reported defect on this possible issue. We are running SE 9.5.0. There is a need to get an accurate count of the exact number of duplicate events being indexed into Splunk, and we ended up having to use the transaction command.

To do this, I ran a search for a 24-hour period, which completed in about 16 minutes, to get the total number of events and the data set size. Then I ran the same search with the dedup command to reduce out all the duplicate events:

| dedup _time _raw

The problem is that the dedup command commences to use up all the available memory until Splunk kills the search with the message "Your search has been terminated. This is most likely due to an out of memory condition."

The data set for the 24-hour search period is 13 billion+ events, and the data set size is 1.6 TB. The 3 SHs each have 374 GB of memory and are usually at 14%-16% memory usage. Using the higher number of 16%, that leaves 314 GB of memory available when the search starts. The search with dedup commences to use that 314 GB of available memory over about a 1 hour and 40 minute period until it is all used up, and Splunk kills the search when memory is near 100% utilized.

So these are the 2 reasons that dedup could be using all remaining available memory:

1. The dedup command is designed this way, to use a larger and larger amount of memory as the data set increases in size / number of events.
2. There is a defect with the dedup command in SE 9.5.0.

Can someone explain which of the 2 reasons it is?
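For readers following along, a rough sketch of the pair of searches being described (index name and time range are placeholders; not necessarily the exact searches that were run):

index=your_index earliest=-24h@h latest=@h ```placeholder index and time range```
| stats count AS total_events

index=your_index earliest=-24h@h latest=@h
| dedup _time _raw
| stats count AS deduplicated_events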
Hi, why am I not seeing the IDS_Attacks.sourcetype field in the data model?
Ideally the value is updated by another program