
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @splunkerarijit
This is a known issue with the latest version of ES; it has already been reported to Splunk and a workaround has been provided. Please refer to the Known Issues doc for more info: https://docs.splunk.com/Documentation/ES/7.3.2/RN/KnownIssues
If this helps, please upvote or accept the solution.
@souha
Splunk SOAR (On-premises) supports these operating systems and versions:
Red Hat Enterprise Linux 7.6 through 7.9
Red Hat Enterprise Linux 8.0 and any of the minor versions of 8. You can use the most recent minor release of RHEL 8 that is available at the time of the Splunk SOAR (On-premises) release.
Amazon Linux 2
Oracle Linux 8
If you are unable to use any of these, you should raise a support case to see if they can help. I think you could edit the install script to allow another *nix OS, but then you would be outside any support entitlement.
I was trying to install Splunk SOAR on a CentOS 9 machine, but I'm getting this error: Unable to read CentOS/RHEL version from /etc/redhat-release. I think it is due to the end of life of CentOS 7 and 8, since the provided Splunk SOAR installer only supports those versions. What should I do?
Hi Splunkers,
How can I create a single value field based on multiple fields? Also, let's assume that the field names can range from sample_1_country_1_name to sample_99_country_1_name and from sample_1_country_1_name to sample_1_country_99_name.
Example:
sample_1_country = Denmark
sample_2_country = Chile
sample_99_country = Thailand
sample_37_country = Croatia
Result:
sample_country_name = Denmark, Chile, Thailand, Croatia
Thanks!
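A minimal SPL sketch of one way to do this, assuming all of the source fields match the wildcard pattern sample_*_country* and sit on the same event (the base search here is a placeholder):
index=your_index sourcetype=your_sourcetype
| foreach sample_*_country*
    [ eval sample_country_name = mvappend(sample_country_name, '<<FIELD>>') ] ``` collect every matching field value into one multivalue field ```
| eval sample_country_name = mvjoin(sample_country_name, ", ") ``` flatten to a single comma-separated value ```
| table sample_country_name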
Hi @nabeel652,
as far as I know, you can schedule a search with cron to run on Tuesdays, but not specifically on the second Tuesday. To do this, the only way is to add a constraint to the search itself.
Ciao.
Giuseppe
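A minimal sketch of that constraint approach, assuming the saved search is already scheduled with a Tuesday-only cron expression (e.g. 0 6 * * 2) and the base search is a placeholder; the extra condition keeps results only when the run falls on the second Tuesday, which is always day 8 through 14 of the month:
index=your_index your_search_terms
| eval dom = tonumber(strftime(now(), "%d")) ``` day of month at run time ```
| where dom >= 8 AND dom <= 14 ``` the second Tuesday always falls on day 8-14, so other Tuesdays return nothing ```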
Unfortunately it is not a fixed term or field. It is just a random term for a search, similar to searching a 10,000-page document in MS Word for "FOO". Now I am trying to figure out how to make that useful as a result in the table. I tried an input file this morning but I'm not familiar with working with that.
Table desired:
Environment  userid  option
abc          defgh   THE TERM
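A minimal SPL sketch of one way to carry the searched term into the table, assuming the term is typed once more in an eval and that Environment and userid already exist as fields (the index name is a placeholder); in a dashboard the same value could come from a text input token instead:
index=your_index "FOO"
| eval option = "FOO" ``` repeat the searched term so it shows up as a column ```
| table Environment userid option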
Hello
This really sums it all up for me.
index="_internal" source="*metrics.lo*" group=tcpin_connections fwdType=uf
| stats latest(_time) as lastSeen by hostname, sourceIp, fwdType, guid, version, build, os, arch
| eval lastSeenFormatted = strftime(lastSeen, "%Y-%m-%d %H:%M:%S")
| eval timeDifferenceSec = now() - lastSeen
| eval timeSinceLastSeen = tostring(floor(timeDifferenceSec / 3600)) . "h " . tostring(round((timeDifferenceSec % 3600) / 60)) . "m"
| table hostname, sourceIp, fwdType, guid, version, build, os, arch, lastSeenFormatted, timeSinceLastSeen
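If the goal is to surface only forwarders that have gone quiet, a filter can be slotted in just before the final table command; a sketch assuming an arbitrary one-hour threshold:
| where timeDifferenceSec > 3600 ``` keep only forwarders silent for more than an hour ```
| sort - timeDifferenceSec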
Hi @Yim
Are you trying to extract it from a field or from the raw data? Please send me some sample data and elaborate on what you are trying to achieve as an output.
Start diagnosis with this:
| tstats count where index=* by index
Is "myindex" in the list?
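If the index does appear in that list but the original search still returns nothing, a follow-up sketch like this can show whether events exist at all and in which time range (the index name is a placeholder):
| tstats count min(_time) as first_event max(_time) as last_event where index=your_index by sourcetype
| eval first_event = strftime(first_event, "%F %T"), last_event = strftime(last_event, "%F %T") ``` events outside the searched time range are a common culprit ```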
The problem here is an unclear requirement: what is the logic to collapse the three rows after dedup into that single row?
1. As @gcusello speculates, the three rows have common values of identity. Is this correct? Such should be stated explicitly.
2. The mock data also shows identical first and last for the three rows. Is this always true? Such should be stated explicitly, too.
3. More intricately, the mock data contains different values of extensionAttribute11 and extensionAttribute10. What are the criteria for choosing one or another of these differing values in the collapsed table? Volunteers here cannot read minds.
4. extensionAttribute10 in one of the three rows is blank; in the other rows it is the same value. One can reasonably speculate that you want the non-blank value to be used in the collapsed table. But is this speculation correct? Are all non-blank values identical? Again, do not make volunteers read your mind.
5. Additionally, what is the logic to determine which value remains with field name email, and which goes to email2, email3, etc.?
In the following example, I'll take an arbitrary selection among emails (5), take every value of extensionAttribute11 (3), and take the affirmative in (4). You get:
email = user@domain.com
extensionAttribute10 = user@domain.com
extensionAttribute11 = user@consultant.com, user@domain.com
first = User
last = Surname
identity = USurname
email2 = userT0@domain.com
email3 = userT1@domain.com
This is the search:
index=collect_identities sourcetype=ldap:query user
| stats values(*) as * by first last identity
| eval idx = mvrange(1, mvcount(email))
| eval json = json_object()
| foreach idx mode=multivalue
    [eval ordinal = <<ITEM>> + 1, json = json_set(json, "email" . ordinal, mvindex(email, <<ITEM>>))]
| spath input=json
| eval email = mvindex(email, 0)
| table email extension* first last identity email*
(Of course, you can reduce extensionAttribute11 to one value if you know the logic.) Here is an emulation. Play with it and compare with real data.
| makeresults format=csv data="email, extensionAttribute10, extensionAttribute11, first, last, identity
user@domain.com, , user@consultant.com, User, Surname, USurname
userT1@domain.com, user@domain.com, user@domain.com, User, Surname, USurname
userT0@domain.com, user@domain.com, user@domain.com, User, Surname, USurname"
``` the above emulates index=collect_identities sourcetype=ldap:query user ```
If you mean sending to two output groups from a single forwarder - that works until one of them gets blocked. Then both stop. It's by design.
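For reference, a minimal outputs.conf sketch of that cloning setup on the forwarder (group names and server addresses are placeholders). The dropEventsOnQueueFull setting shown is one way to trade the blocking behaviour described above for data loss on the blocked group only; treat it as an assumption to verify against the outputs.conf spec for your version:
[tcpout]
defaultGroup = clusterA, clusterB

[tcpout:clusterA]
server = idxA1.example.com:9997, idxA2.example.com:9997

[tcpout:clusterB]
server = idxB1.example.com:9997, idxB2.example.com:9997
# assumption: drop events for this group after it has been blocked this many seconds,
# rather than stalling both groups (default -1 means never drop)
dropEventsOnQueueFull = 300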
Is this table what you are looking for?
sn_vul_detection  sn_vul_vulnerable_item
2233              2000
Here is a quick cheat:
| rex mode=sed "s/:\s*(\d+)\n/=\1\n/g"
| extract
| stats sum(sn_vul_*) as sn_vul_*
If you must have that colon-separated notation, add
| foreach * [eval notation = mvappend(notation, "<<FIELD>>: " . <<FIELD>>)]
Here is an emulation of your sample data. Play with it and compare with real data.
| makeresults
| eval data = mvappend("2024-10-29 20:14:49 (715) worker.6 worker.6 txid=XXXX JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1167 Total records archived: 2167 Total related records archived: 1167", "2024-10-29 20:13:17 (337) worker.0 worker.0 txid=YYYY JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1066 Total records archived: 2066 Total related records archived: 1066")
| mvexpand data
| rename data as _raw
| eval _time = strptime(replace(_raw, "^(\S+ \S+).*", "\1"), "%F %T")
``` data emulation above ```
After testing UF output cloning, it was found that it is impossible to achieve truly identical data distribution across multiple clusters! Is there any good solution for dual writing? Most urgent!
I was able to achieve this using
return $search_ticket
Thanks.
Hello, is there any good solution to the problem of cloning to multiple groups so that each indexer group receives a copy of the data, even if not with exact precision?
Hi Mario,
Yes, mvn was not installed. We were able to successfully install the extension after installing mvn. But then we faced another issue: the metrics were not populating in AppD. We raised a support ticket for this. As per the current update on the case, the EC2 instance on which the extension is installed uses IMDSv2. The extension does not support IMDSv2, and that is the likely reason the metrics are missing. This detail was not mentioned anywhere in the AppDynamics documentation, and we ran into this roadblock. We are working with the support team to get a workaround.
Regards
Fadil
So, you are indirectly confirming that location information does not exist in index data.  Have you tried the search I gave above?
Hi, how did you extract the extension? Did you use Maven to create the build that outputs the zip file for you to extract?
Hello everyone,
I'm currently collecting logs from a Fortigate WAF using Syslog, but I've encountered an issue where, after running smoothly for a while, the Splunk Heavy Forwarder (HF) suddenly stops receiving and forwarding the logs. The only way to resolve this is by restarting the HF, after which everything works fine again, but the problem eventually recurs.
Could anyone advise on:
- Possible causes for this intermittent log collection issue
- Any specific configurations to keep the Syslog input stable
- Troubleshooting steps or recommended best practices to prevent having to restart the HF frequently
Any insights or similar experiences would be much appreciated! Thank you!
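One diagnostic that often helps with this kind of stall is checking whether queues on the HF block shortly before the input stops; a minimal SPL sketch, assuming the HF host name is substituted for the placeholder:
index=_internal host=your_hf_host source=*metrics.log* group=queue blocked=true
| timechart span=5m count as blocked_events by name ``` a sustained non-zero count for a queue such as parsingqueue or indexqueue points at a downstream bottleneck rather than the syslog input itself ```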