All Posts


It's an interesting thought, though the same issue is occurring on 9.0.1 for me, but on Server 2022.
The event you have chosen to show does not match "Message=.*", so apiName won't be extracted and your chart will therefore return no results (at least for this event). Your lookup appears to use "Client" as a field name, whereas your event appears to use "client" - field names are case-sensitive, so these are two different fields. I hope this helps you resolve your issue.
One thing - 9.1 introduced the wec_event_format parameter for Windows event inputs. If misconfigured, it can cause your events to not be ingested at all, but maybe it can cause other problems as well. You can fiddle with the forwarded events format in the subscription settings and adjust this parameter accordingly.
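For anyone checking this on their own collector, a minimal hedged sketch of where the parameter lives; the stanza name is the usual one for WEC-forwarded events, and the value below is only a placeholder, so confirm the valid options against the 9.1+ inputs.conf spec and match them to the "Event format" chosen in your WEF subscription:

# inputs.conf on the host receiving the forwarded events (hedged sketch)
[WinEventLog://ForwardedEvents]
# placeholder value - check inputs.conf.spec (9.1+) for the valid options and
# match the "Event format" setting of your WEF subscription
wec_event_format = <format matching your subscription>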
Hi @dwraesner, open a non-technical ticket with Splunk Support. Ciao. Giuseppe
Great solution. Chipping in my 10 cents: if you want to keep the last selection on the tabs highlighted to make it more visible:

#input_link_split_by.input-link button[aria-checked="true"] {
    background-color: green !important;
}
As I always say, which search is the best data analytics solution depends on the data.  While this is as true as what @PickleRick explained in general, it is even more true with an ambiguous case like yours.  A discussion last year showed me possibilities that I hadn't known before.  But whether it will help your use case depends on a lot of things.  So, let me first put out some qualifiers that immediately come to mind.  There can be many others.

- Does every field of interest appear in every event in which sourcetype_1_primary, sourcetype_2_primary, or sourcetype_3_primary is present?
- Are sourcetype_1_primary, sourcetype_2_primary, and sourcetype_3_primary already extracted at search time, i.e., your <initial search> does not have to extract any of them?
- Gain from such optimization also depends on how many calculations are to be performed between index search and stats.

This is not to say that failing these qualifiers will preclude potential benefits from similar strategies, but the following is based on them. The idea is to limit search intervals using subsearches.  For this to work, of course, the subsearches employed must be extremely light.  Hence tstats.  Here is a little demonstration.

original:

index=_introspection component=* earliest=-4h
| stats latest(*) as * by component

with time filters:

index=_introspection component=* earliest=-4h
    [tstats max(_time) as latest where index=_introspection earliest=-4h by component index
    | eval earliest = latest - 0.1, latest = latest + 0.1]
| stats latest(*) as * by component

I tested them on a standalone instance on my laptop.  That is to say, there are few events (only 10 components); instead of 0.1s shifts, I use 1s. Even so, the baseline is extremely unstable, ranging from 0.76s to 1.8s.  The biggest gain I saw was from 1.8s to 0.6s.  Smaller gains were more like from 0.75s to 0.68s.

Back to your correlation search.  Assuming your <initial search> is a combined search, try something like this:

(sourcetype=sourcetype_1 sourcetype_1_primary=*
    [tstats max(_time) as latest where sourcetype=sourcetype_1 by sourcetype_1_primary
    | eval earliest = latest - 0.1, latest = latest + 0.1])
OR (sourcetype=sourcetype_2 sourcetype_2_primary=*
    [tstats max(_time) as latest where sourcetype=sourcetype_2 by sourcetype_2_primary
    | eval earliest = latest - 0.1, latest = latest + 0.1])
OR (sourcetype=sourcetype_3 sourcetype_3_primary=*
    [tstats max(_time) as latest where sourcetype=sourcetype_3 by sourcetype_3_primary
    | eval earliest = latest - 0.1, latest = latest + 0.1])
| fields _time, xxx, xxx, <pick your required fields>
| eval coalesced_primary_key=coalesce(sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary)
| stats latest(*) AS * by coalesced_primary_key
Hi @verizonrap2017,

The command output should match the information you were provided and be otherwise self-explanatory. A warm bucket with data integrity enabled should have the following files in rawdata:

- journal.zst (if zstd compression is used)
- l1Hashes_0_<instance_guid>.dat
- l2Hash_0_<instance_guid>.dat
- slicemin.dat
- slicesv2.dat

Calling check-integrity against an unmodified zstd rawdata journal:

$ /opt/splunk/bin/splunk check-integrity -bucketPath /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
...
Operating on: idx= bucket='/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0'
Integrity check succeeded on bucket with path=/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
Total buckets checked=1, succeeded=1, failed=0
...

Calling check-integrity against a recompressed zstd rawdata journal:

$ cp journal.zst journal.zst.backup
$ zstd -d journal.zst
$ zstd journal
$ /opt/splunk/bin/splunk check-integrity -bucketPath /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
...
Operating on: idx= bucket='/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0'
Error reading compressed journal /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0/rawdata/journal.zst while streaming: single-segment zstd compressed block in frame was 352493 bytes long (max should be 131072)
Error parsing rawdata inside bucket path="/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0": msg="Error reading compressed journal /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0/rawdata/journal.zst while streaming: single-segment zstd compressed block in frame was 352493 bytes long (max should be 131072)"
Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0, Reason=Journal has no hashes.
Total buckets checked=1, succeeded=0, failed=1
...

Calling check-integrity against a recompressed zstd streamed rawdata journal:

$ cp journal.zst journal.zst.backup
$ zstd -d journal.zst
$ cat journal | zstd --no-check - -o journal.zst
$ /opt/splunk/bin/splunk check-integrity -bucketPath /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
...
Operating on: idx= bucket='/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0'
Integrity check failed for bucket with path=/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0, Reason=Hash of journal slice# 1 did not match the expected value in l1Hashes_0_<instance_guid>.dat
Total buckets checked=1, succeeded=0, failed=1
...

Irrespective of how the rawdata journal or hashes are modified, if the calculated hashes do not match the saved hashes, the integrity check fails. If your rawdata journal and hashes are stored together, I wouldn't trust them for evidence of compromise. While a failed integrity check does indicate a problem with either the rawdata journal or hashes, a successful integrity check only confirms that the current rawdata journal and hashes are in agreement. If both were compromised, you would have no way of knowing using only the integrity check.
The only workaround I have been able to find is to clone the scheduled searches into another app (a custom app) and run them from there. This avoids the tag visibility issue, and the dashboards display properly. Unfortunately, I never heard back from the developer, so that's it for now.
According to Splunk on our case, version 9.2.2 will have a fix for this, and it'll be released on 5/24. They also have a custom build available that solves it, which we're going to try next week.
I have moved the lookup statement to the end, after chart. Here is the latest query that I used. After I move the lookup to the end, I see no data.

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments*
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| chart count over client by apiName
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region

Here is a sample event that I am working with, if that helps:

4/27/24 5:30:37.182 AM
{
  "client":"ClientA",
  "msgtype":"WebService",
  "priority":2,
  "interactionid":"1DD6AA27-6517-4D62-84C1-C58CA124516C",
  "seq":15831,
  "threadid":23,
  "message":"TimeMarker: WebService: Sending result @110ms. (retrievePaymentsXY - ID1:123 ID2:ClientId|1 ID3:01/27/2024-04/27/2024)",
  "userid":"Unknown"
}

The My_Client_Mapping lookup table has details of my clients like:

Client   ClientId  ClientName  Region
ClientA  1         Client A    Eastern
ClientB  2         Client B    Eastern
ClientC  3         Client C    Western
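For reference, a hedged rework of the query above, applying the advice from the answer earlier in this thread: the sample message does not contain "Message=", so that rex never extracts apiName, and the lookup field is "Client" while the event field is "client". Field and lookup names are taken from this thread; adjust if yours differ.

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments*
| rex field=message "\((?<apiName>\w+?) -"
| lookup My_Client_Mapping Client AS client OUTPUT ClientId ClientName Region
| chart count over ClientName by apiName

Placing the lookup before chart keeps the enriched fields available for the aggregation; a lookup placed after chart only works if one of the remaining columns exactly matches a lookup field.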
Hi @R_Ramanan, Can you provide a small set of sample data? If a, b, c, ..., g are only related to par2, par3, par4, ..., par12 by par1, then par1 is likely your only filterable parameter.
Greetings fellow Splunkers, My App was archived by the time I got around to updating the content, passing the AppInspect check, and uploading the App's new release. After that, I received an email from the AppInspect email group at Splunk stating the App passed and has qualified for Splunk Cloud compatibility. It has now been 3 weeks since the App passed, but it has not been un-archived/reinstated. I found some instructions that say to click the "Reinstate App" button under your App profile's "Manage App", but I do not see that button available. Can anyone post how to get an App unarchived/reinstated?
Hi @hettervik,

The Incident Review Notables table is driven by the "Incident Review - Main" saved search. The search is invoked using parameters/filters from the dashboard:

| savedsearch "Incident Review - Main" time_filter="" event_id_filter="" source_filter="" security_domain_filter="" status_filter="" owner_filter="" urgency_filter="" tag_filter="" type_filter="" disposition_filter=""

I don't believe this is directly documented, but all Splunk ES components are shipped in a source-readable form (saved searches, dashboards, Python modular inputs, etc.). The searches may be discussed in the latest revision of the Administering Splunk Enterprise Security course; my last training course was circa Splunk ES 6.1.

As a starting point, I would expand the time range and review Notable Event Suppressions under Configure > Incident Management > Notable Event Suppressions. See https://docs.splunk.com/Documentation/ES/latest/Admin/Customizenotables#Create_and_manage_notable_event_suppressions for more information.

Following that, I would verify the get_notable_index macro value and permissions haven't been modified. The macro is defined in the SA-ThreatIntelligence app:

[get_notable_index]
definition = index=notable

with the following default export settings:

[]
access = read : [ * ], write : [ admin ]
export = system
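If it helps while reviewing suppressions, here is a small, hedged SPL sketch for listing them from the search bar; it assumes suppressions are stored as eventtypes prefixed notable_suppression- (their usual form in SA-ThreatIntelligence):

| rest /services/saved/eventtypes splunk_server=local
| search title="notable_suppression-*"
| table title, search, disabled, eai:acl.app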
Hi @Jasmine,

You can assign the field value to a temporary field first, and then use the rex command to extract the value you want:

index="aaa" (source="/test/log/testing.log") host IN (host1) c=*
| eval DB=if(c=="I", 'attr.namespace', 'attr.ns')
| rex field=DB "(?<DB>[^\.]*)"
| table DB
| dedup DB
@Sumi Kindly go through the link below for background on the .pid file. The absence of the splunkd.pid file in the /opt/splunkforwarder/var/run/splunk directory can indeed cause issues with Splunk startup. https://community.splunk.com/t5/Getting-Data-In/Splunk-is-not-starting-due-to-presence-of-PID-file-Why/m-p/152053
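As a rough, hedged sequence for the situation described below (a default /opt/splunkforwarder install is assumed): confirm whether splunkd is actually running, then restart the forwarder; splunkd recreates splunkd.pid on a clean startup.

/opt/splunkforwarder/bin/splunk status      # does the forwarder think it is running?
ps -ef | grep splunkd                       # check for an orphaned splunkd process
/opt/splunkforwarder/bin/splunk restart     # restart; the pid file is rewritten at startup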
The splunkd.pid file is completely missing from the /opt/splunkforwarder/var/run/splunk path; kindly suggest how this can be resolved.
In the below query, if c=I, the rex expression is | rex field=attr.namespace "(?<DB>[^\.]*)"; if c is other than "I", then the rex would be | rex field=attr.ns "(?<DB>[^\.]*)".

index="aaa" (source="/test/log/testing.log") host IN(host1) c=N
| rex field=attr.ns "(?<DB>[^\.]*)"
| table DB
| dedup DB

How can I adjust the query?
Hi @splunky_diamond, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer (perhaps your last one) for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I checked; it does not apply to Security Posture, but I found something: we can add a time range to that dashboard. I just need to figure out how to bind it to my specific dashboard, and it should work!
Hi @splunky_diamond, see [Incident Management > Incident Review Settings]. As I said, in this form you can configure the default Time Picker for the Incident Review dashboard; check (I'm not sure!) whether the same setting is also applied to Security Posture. Ciao. Giuseppe