All Posts


Hello, there is this old system where we want to upgrade Splunk to the newest version. First we want to upgrade the forwarders on 3 test servers. The current universal forwarder version is 7.0.3.0, and we want to raise it to 9.2.1. Would that version work for the time being with Splunk Enterprise 7.3.1? I know it would be better to upgrade Splunk Enterprise first, since best practice is to keep indexers at versions equal to or higher than the forwarders (but there is hesitation to upgrade the indexers first, as they also handle production data). Would it be possible to do the forwarders first?

Edit: The upgrade was successful.
Check for the existence of a field with the isnotnull() function:

eval Error=case(isnotnull('attr.error'), 'attr.error', isnotnull('attr.error.errmsg'), 'attr.error.errmsg')

Or use the coalesce() function, which does the tests for you and selects the first listed field that is not null:

eval Error=coalesce('attr.error','attr.error.errmsg')
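If it helps to see the behaviour, here is a tiny throwaway illustration on mock data (the field names simply mirror the ones in the question, and the value is made up):

| makeresults
| eval "attr.error.errmsg"="connection timeout"
| eval Error=coalesce('attr.error','attr.error.errmsg')
| table Error

Because attr.error does not exist in this mock event, coalesce() falls through and Error takes the value of attr.error.errmsg.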
If attr.error exists, then Error should be attr.error. If attr.error does not exist and attr.error.errmsg exists, then Error should be attr.error.errmsg. I have tried the code below, but only one case works and the other case fails. Please advise.

eval Error=case(NOT attr.error =="*", 'attr.error', NOT attr.error.errmsg =="*", 'attr.error.errmsg')
Hello Giuseppe, Thanks, and will do this Monday.  Best regards, Dennis
Assuming all your "dynamic" fields follow the naming convention, try this:

| foreach rw*
    [| eval maxelements=if(isnull(maxelements),mvcount('<<FIELD>>'),if(maxelements<mvcount('<<FIELD>>'),mvcount('<<FIELD>>'),maxelements))]
| eval row=mvrange(0,maxelements)
| mvexpand row
| foreach rw*
    [| eval "<<FIELD>>"=mvindex('<<FIELD>>',row)]
| fields - maxelements row
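If you want to try it end to end before touching real data, here is a small self-contained mock run (the values and path are made up from the question, so treat it as a sketch only):

| makeresults
| eval ds_file_path="\\\\swmfs\\orca_db_january_2024\\topo\\raster.ds"
| eval rwws01=split("0.98,0.99,0.56", ",")
| eval rwmini01=split("5.99,3.56,4.78,9.08,2.98,5.88", ",")
| foreach rw* [| eval maxelements=if(isnull(maxelements),mvcount('<<FIELD>>'),if(maxelements<mvcount('<<FIELD>>'),mvcount('<<FIELD>>'),maxelements))]
| eval row=mvrange(0,maxelements)
| mvexpand row
| foreach rw* [| eval "<<FIELD>>"=mvindex('<<FIELD>>',row)]
| table ds_file_path rwws01 rwmini01

The first foreach finds the length of the longest multivalue field, mvexpand creates one row per index, and the second foreach picks the value at that index for each rw* field (null where a shorter field runs out of values).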
Hi @sanjai, if your original values come from separate events, then a simple table may be all you need:

| table ds_file_path rwws01 rwmini01

However, the x-axis is a bit wordy. Can you provide a mock sample of your original data and a drawing of your target visualization?
Hello Splunk Community, I'm encountering challenges while converting multivalue fields to single-value fields for effective visualization in a line chart. Here's the situation. Current output (rwws01 and rwmini01 are multivalue):

ds_file_path                                   rwws01            rwmini01
\\swmfs\orca_db_january_2024\topo\raster.ds    0.56 0.98 0.99    5.99 9.04 8.05 5.09 5.66 7.99 8.99

In this output chart table, the fields rwws01 and rwmini01 are dynamic, so hardcoding them isn't feasible. The current output format is causing challenges in visualizing the data in a line chart. My requirement is to get this output:

ds_file_path                                   rwws01         rwmini01
\\swmfs\orca_db_january_2024\topo\raster.ds    0.98           5.99
\\swmfs\orca_db_january_2024\topo\raster.ds    0.99           3.56
\\swmfs\orca_db_january_2024\topo\raster.ds    0.56           4.78
\\swmfs\orca_db_january_2024\topo\raster.ds    NULL (or 0)    9.08
\\swmfs\orca_db_january_2024\topo\raster.ds    NULL (or 0)    2.98
\\swmfs\orca_db_january_2024\topo\raster.ds    NULL (or 0)    5.88

I tried different commands and functions, but nothing gave me the desired output. I'm seeking suggestions on how to achieve this single-value field format, or alternative functions and commands to achieve this output and create a line chart effectively. Your insights and guidance would be greatly appreciated! Thank you.
It's an interesting thought, though the same issue is occurring on 9.0.1 for me, but on Server 2022.
The event you have chosen to show does not match "Message=.*", so apiName won't be extracted and your chart will return no results (at least for this event). Also, your lookup appears to use "Client" as a field name, whereas your event uses "client"; field names are case-sensitive, so these are two different fields. I hope this helps you resolve your issue.
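As a rough, untested sketch of how both fixes might be combined (the relaxed rex pattern and the Client rename are my assumptions, so adapt them to your actual message format and lookup definition):

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments*
| rex field=message "\((?<apiName>\w+) -"
| eval Client=client
| lookup My_Client_Mapping Client OUTPUT ClientID ClientName Region
| chart count over ClientName by apiName

The eval simply copies the event field into a field whose name matches the lookup's key column; alternatively, you could rename the column in the lookup definition instead.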
One thing - 9.1 introduced the wec_event_format parameter for Windows event inputs. If misconfigured, it can cause your events not to be ingested at all, but maybe it can cause other problems too. You can fiddle with the forwarded events format in the subscription settings and adjust this parameter accordingly.
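For illustration only - this is not from the original reply, and the valid values depend on your version, so check inputs.conf.spec - the setting would sit in the Windows event input stanza on the host receiving the WEC-forwarded events, for example:

# Illustrative inputs.conf stanza; the stanza name assumes the default
# ForwardedEvents channel, and the wec_event_format value is a placeholder.
# It must match how the WEC subscription renders events.
[WinEventLog://ForwardedEvents]
disabled = 0
wec_event_format = <format matching your WEC subscription>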
Hi @dwraesner, open a non-technical ticket with Splunk Support. Ciao. Giuseppe
Great solution. Chipping in my 10 cents: if you want to keep the last selection on the tabs highlighted to make it more visible:

#input_link_split_by.input-link button[aria-checked="true"] {
    background-color: green !important;
}
As I always say, which search is the best data analytics solution depends on the data. While this is as true as what @PickleRick explained in general, it is even more true with an ambiguous case like yours. A discussion last year showed me possibilities that I hadn't known before. But whether it will help your use case depends on a lot of things. So, let me first put out some qualifiers that immediately come to mind; there can be many others.

- Does every field of interest appear in every event in which sourcetype_1_primary, sourcetype_2_primary, or sourcetype_3_primary is present?
- Are sourcetype_1_primary, sourcetype_2_primary, and sourcetype_3_primary already extracted at search time, i.e., your <initial search> does not have to extract any of them?
- Gain from such optimization also depends on how many calculations are to be performed between the index search and stats.

This is not to say that failing these qualifiers will preclude potential benefits from similar strategies, but the following is based on them.

The idea is to limit search intervals using subsearches. For this to work, of course, the employed subsearches must be extremely light; hence tstats. Here is a little demonstration.

Original:

index=_introspection component=* earliest=-4h
| stats latest(*) as * by component

With time filters:

index=_introspection component=* earliest=-4h
    [| tstats max(_time) as latest where index=_introspection earliest=-4h by component index
    | eval earliest = latest - 0.1, latest = latest + 0.1]
| stats latest(*) as * by component

I tested them on a standalone instance on my laptop. That is to say, there are few events (only 10 components); instead of 0.1s shifts, I used 1s. Even so, the baseline is extremely unstable, ranging from 0.76s to 1.8s. The biggest gain I saw was from 1.8s to 0.6s. Smaller gains were more like 0.75s to 0.68s.

Back to your correlation search. Assuming your <initial search> is a combined search, try something like this:

(sourcetype=sourcetype_1 sourcetype_1_primary=*
    [| tstats max(_time) as latest where sourcetype=sourcetype_1 by sourcetype_1_primary
    | eval earliest = latest - 0.1, latest = latest + 0.1])
OR (sourcetype=sourcetype_2 sourcetype_2_primary=*
    [| tstats max(_time) as latest where sourcetype=sourcetype_2 by sourcetype_2_primary
    | eval earliest = latest - 0.1, latest = latest + 0.1])
OR (sourcetype=sourcetype_3 sourcetype_3_primary=*
    [| tstats max(_time) as latest where sourcetype=sourcetype_3 by sourcetype_3_primary
    | eval earliest = latest - 0.1, latest = latest + 0.1])
| fields _time, xxx, xxx, <pick your required fields>
| eval coalesced_primary_key=coalesce(sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary)
| stats latest(*) AS * by coalesced_primary_key
Hi @verizonrap2017, The command output should match the information you were provided and be otherwise self-explanatory. A warm bucket with data integrity enabled should have the following files in rawdata:

- journal.zst (if zstd compression is used)
- l1Hashes_0_<instance_guid>.dat
- l2Hash_0_<instance_guid>.dat
- slicemin.dat
- slicesv2.dat

Calling check-integrity against an unmodified zstd rawdata journal:

$ /opt/splunk/bin/splunk check-integrity -bucketPath /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
...
Operating on: idx= bucket='/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0'
Integrity check succeeded on bucket with path=/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
Total buckets checked=1, succeeded=1, failed=0
...

Calling check-integrity against a recompressed zstd rawdata journal:

$ cp journal.zst journal.zst.backup
$ zstd -d journal.zst
$ zstd journal
$ /opt/splunk/bin/splunk check-integrity -bucketPath /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
...
Operating on: idx= bucket='/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0'
Error reading compressed journal /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0/rawdata/journal.zst while streaming: single-segment zstd compressed block in frame was 352493 bytes long (max should be 131072)
Error parsing rawdata inside bucket path="/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0": msg="Error reading compressed journal /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0/rawdata/journal.zst while streaming: single-segment zstd compressed block in frame was 352493 bytes long (max should be 131072)"
Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0, Reason=Journal has no hashes.
Total buckets checked=1, succeeded=0, failed=1
...

Calling check-integrity against a recompressed zstd streamed rawdata journal:

$ cp journal.zst journal.zst.backup
$ zstd -d journal.zst
$ cat journal | zstd --no-check - -o journal.zst
$ /opt/splunk/bin/splunk check-integrity -bucketPath /opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0
...
Operating on: idx= bucket='/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0'
Integrity check failed for bucket with path=/opt/splunk/var/lib/splunk/checkme/db/db_1715457510_1715457464_0, Reason=Hash of journal slice# 1 did not match the expected value in l1Hashes_0_<instance_guid>.dat
Total buckets checked=1, succeeded=0, failed=1
...

Irrespective of how the rawdata journal or hashes are modified, if the calculated hashes do not match the saved hashes, the integrity check fails. If your rawdata journal and hashes are stored together, I wouldn't trust them for evidence of compromise. While a failed integrity check does indicate a problem with either the rawdata journal or hashes, a successful integrity check only confirms that the current rawdata journal and hashes are in agreement. If both were compromised, you would have no way of knowing using only the integrity check.
The only workaround I have been able to find is to clone the scheduled searches into another app (custom app) and run them from there. This avoids the tag visibility issue and the dashboards display properly. Unfortunately I never heard from the developer so that's it for now. 
According to Splunk on our support case, version 9.2.2 will have a fix for this, and it'll be released on 5/24. They also have a custom build available that solves it, which we're going to try next week.
I have moved the lookup statement to the end, after chart. Here is the latest query that I used. After I move the lookup to the end, I see no data.

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments*
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| chart count over client by apiName
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region

Here is a sample event that I am working with, if that helps:

4/27/24 5:30:37.182 AM
{
  "client": "ClientA",
  "msgtype": "WebService",
  "priority": 2,
  "interactionid": "1DD6AA27-6517-4D62-84C1-C58CA124516C",
  "seq": 15831,
  "threadid": 23,
  "message": "TimeMarker: WebService: Sending result @110ms. (retrievePaymentsXY - ID1:123 ID2:ClientId|1 ID3:01/27/2024-04/27/2024)",
  "userid": "Unknown"
}

The My_Client_Mapping lookup table has details of my clients, like:

Client     ClientId    ClientName    Region
ClientA    1           Client A      Eastern
ClientB    2           Client B      Eastern
ClientC    3           Client C      Western
Hi @R_Ramanan, Can you provide a small set of sample data? If a, b, c, ..., g are only related to par2, par3, par4, ..., par12 by par1, then par1 is likely your only filterable parameter.
Greetings fellow Splunkers, My App was archived by the time I got around to updating the content, passing the AppInspect check, and uploading the App's new release. After that, I received an email from the AppInspect email group at Splunk stating the App passed and has qualified for Splunk Cloud compatibility. It has now been 3 weeks since the App passed, but it has not been un-archived/reinstated. I found some instructions that say to click the "Reinstate App" button under your App profile's "Manage App", but I do not see that button available. Can anyone post how to get an App unarchived/reinstated?
Hi @hettervik, The Incident Review Notables table is driven by the "Incident Review - Main" saved search. The search is invoked using parameters/filters from the dashboard:

| savedsearch "Incident Review - Main" time_filter="" event_id_filter="" source_filter="" security_domain_filter="" status_filter="" owner_filter="" urgency_filter="" tag_filter="" type_filter="" disposition_filter=""

I don't believe this is directly documented, but all Splunk ES components are shipped in a source-readable form (saved searches, dashboards, Python modular inputs, etc.). The searches may be discussed in the latest revision of the Administering Splunk Enterprise Security course; my last training course was circa Splunk ES 6.1.

As a starting point, I would expand the time range and review Notable Event Suppressions under Configure > Incident Management > Notable Event Suppressions. See https://docs.splunk.com/Documentation/ES/latest/Admin/Customizenotables#Create_and_manage_notable_event_suppressions for more information.

Following that, I would verify the get_notable_index macro value and permissions haven't been modified. The macro is defined in the SA-ThreatIntelligence app:

[get_notable_index]
definition = index=notable

with the following default export settings:

[]
access = read : [ * ], write : [ admin ]
export = system
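Not part of the original answer, but as a quick sanity check (assuming the macro still expands to index=notable and you have read access to it), you could confirm that notables exist and are visible to your role with something like:

`get_notable_index` earliest=-7d
| stats count by source

If this returns counts per correlation search while Incident Review stays empty, the problem is more likely with suppressions or dashboard filters than with the notable index itself.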