All Posts


A lot of that query wasn't cleaned up from previous exploration queries, but thanks for the response. It looks like your suggestion is almost working for me, except that this statement errors on the columns whose names contain multiple words:

| foreach * [eval <<FIELD>> = if(<<FIELD>> > 0, <<FIELD>>, null())]
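If the failing columns are the ones with spaces in their names, the usual fix is to wrap the <<FIELD>> token in single quotes inside the eval; single quotes tell eval to treat the text as a field name rather than a string literal. A minimal sketch of the quoted form:

| foreach * [eval '<<FIELD>>' = if('<<FIELD>>' > 0, '<<FIELD>>', null())]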
Regarding "when we combined these lookups as shown in the query you shared, the results only reflected matches from the second lookup, meaning only the IP addresses were being compared": my mistake again. When using the same output name, the second lookup overrides the first. Use OUTPUTNEW in the second.

index=A sourcetype="Any"
| fields "IP address" Hostname OS
| dedup "IP address" Hostname OS
| eval Hostname = lower(Hostname)
| lookup inventory.csv Reporting_Host as Hostname OUTPUT Reporting_Host as match
| lookup inventory.csv Reporting_Host as "IP address" OUTPUTNEW Reporting_Host as match
| eval match = if(isnull(match), "missing", "ok")
| table Hostname "IP address" OS match
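For anyone skimming, the difference between the two options: OUTPUT always overwrites the destination field, writing null when the lookup finds no match, while OUTPUTNEW writes only where the field is still null. A minimal sketch of the behavior, assuming the same inventory.csv lookup with a Reporting_Host field:

| makeresults
| eval Hostname="web01"
| lookup inventory.csv Reporting_Host as Hostname OUTPUT Reporting_Host as match
``` match is overwritten here, even to null on a miss ```
| lookup inventory.csv Reporting_Host as Hostname OUTPUTNEW Reporting_Host as match
``` match is only filled here if the first lookup left it null ```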
Can you explain the physical significance of "last valid count"? If you fill those zero values with any given non-zero value, where do you set the boundary? The very first zero following a previous non-zero? Why is that a valid representation of your data? If you mean to simply connect non-zero values with a line, just set those 0 to null.

index="jsm_issues"
| dedup _time key
| timechart count by fields.status.name
| foreach * [eval <<FIELD>> = if(<<FIELD>> > 0, <<FIELD>>, null())]

(Two pointers: when using timechart, there is no need to sort by _time. Also, I don't see the point of count(fields.status.name) when the group-by field is the field itself.)

Then, in Visualization -> Format, set Null values to "Connect".

Here is an emulation:

index=_internal sourcetype=splunkd thread_name=* earliest=-1h@h latest=-0h@h-30m
| timechart count by thread_name
| foreach * [eval <<FIELD>> = if(<<FIELD>> > 100, sqrt(<<FIELD>>), 0)]
``` the above emulates
index="jsm_issues" | dedup _time key | timechart count by fields.status.name ```

[Screenshots: without setting 0 to null; 0 set to null without connecting dots; connecting the dots]
Hi @yuanliu

Thank you for your feedback! I will definitely look into the performance issue and plan for further improvements. Regarding the query, I tried it out, and here's how it's working:

Index A contains around 70k assets and serves as our asset inventory. Some hosts in this index have multiple IP addresses assigned to them. Index B has just the hostname, but this can include a mix of IP addresses, FQDNs, and hostnames.

When I ran the query, the first lookup compared the Reporting_Host with the hostnames in Index A and determined whether there was a match. The second lookup compared the Reporting_Host against the IP addresses in Index A to check for matches. However, when we combined these lookups as shown in the query you shared, the results only reflected matches from the second lookup, meaning only the IP addresses were being compared. Additionally, since a host in Index A has multiple IP addresses, the query gives a match for the corresponding IP address but shows the remaining IP addresses associated with that host as missing.
You can still use a token in that where clause. In fact, where in an inputlookup uses the same syntax as a search term, unlike the where command, which requires an eval expression.
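For example, a minimal sketch, assuming a lookup file inventory.csv with a field host and a dashboard token host_tok:

| inputlookup inventory.csv where host="$host_tok$"

Because the where clause takes search-term syntax, wildcards such as host="$host_tok$*" also work here.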
I think my understanding fits your description. The idea behind my suggested search is:

1. Search between the two hours. Find all records that have pv_number. (You can restrict pv_number to a given value, but my search assumes that you want to group by pv_number, which is stated in the OP.)
2. Look for the earliest value of state, and the latest.
3. Compare the earliest value and the latest value. Only print those where the two are equal.

Have you tried my search? Also play with my emulation (which should run in any instance), and examine the output with and without that where filter. As my code indicates, I use thread_name to fake pv_number and date_minute to fake state. They may have different values from your real data, but the principle is the same.
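For readers landing here without the rest of the thread, the emulation described would look roughly like this; it is a sketch rather than the exact search from the thread, with thread_name standing in for pv_number and date_minute standing in for state:

index=_internal sourcetype=splunkd thread_name=* earliest=-2h@h latest=now
| stats earliest(date_minute) as previous_state latest(date_minute) as current_state by thread_name
| where previous_state == current_state

Dropping the final where shows all groups; with it, only those whose state is unchanged between the two ends of the window remain.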
This worked for me! I'm kind of surprised how close my pseudo search was to the right answer! I did modify this a little to use `search` instead of `where` so that I could add a dashboard token to this query as well.
Issue after upgrading HF Splunk 9.2.1 to 9.2.2. OS is RedHat 8.10, latest kernel version. Tried to change/give permissions on the splunk folder. Tried to set sestatus to permissive mode.

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk start --accept-license --answer-yes
Error calling execve(): Permission denied
Error launching systemctl show command: Permission denied
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.

You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:

If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.

Perform migration and upgrade without previewing configuration changes? [y/n] y
Can't run "btool server list clustering --no-log": Permission denied

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk btool server list clustering --no-log
execve: Permission denied while running command /mnt/splunk/splunk/bin/btool
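Not part of the original post, but two common causes of "execve: Permission denied" on RHEL that fit these symptoms are a noexec mount option on the filesystem holding Splunk and stale SELinux file contexts after the upgrade. A few generic diagnostics, assuming Splunk home is /mnt/splunk/splunk:

# Check whether the filesystem is mounted with noexec
findmnt -T /mnt/splunk/splunk

# Inspect SELinux labels on the binaries that fail to exec
ls -lZ /mnt/splunk/splunk/bin/splunk /mnt/splunk/splunk/bin/btool

# Restore default SELinux contexts for the whole tree
sudo restorecon -Rv /mnt/splunk/splunk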
So inline searches would not work in this scenario.
@sainag_splunk

This method did work until I found out that the users who are viewing the dashboard are not able to see the results, and it's due to not having access to the index. The users viewing this dashboard are third parties, people that we do not want to have access to the index (for example, users outside of the org), hence the reason the dashboard used saved reports, where the data is viewable. But, as I mentioned, we faced the issue of changing the time range picker, since the saved reports display a static range, where we wish to make it change as we specify a time range with the input.
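For context on the time range part: in Simple XML a panel can reference a saved report and override its time range with tokens from a time input. A minimal sketch, assuming a report named "My Report" and a token time_tok; note that overriding the time range makes Splunk run the search anew, so this alone may reintroduce the index-permission problem described above:

<input type="time" token="time_tok">
  <label>Time Range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>
<panel>
  <table>
    <search ref="My Report">
      <earliest>$time_tok.earliest$</earliest>
      <latest>$time_tok.latest$</latest>
    </search>
  </table>
</panel>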
Hi yuanliu

Firstly, thanks for looking into it and helping with the SPL query. It was pleasing to see someone responding; I felt like I should buy you a coffee. I apologize for my mistake of mentioning streamstats. I think I did not put my original request properly, let me try again.

When the search is executed (now), we need data from two points in time: from now and from two hours ago.

If I'm running a search at 16:05:02, the first set will have data values of pv_number (example: ext034) and the "state" value (6) at that point in time (from two hours ago, so 14:05:02). In the second set of data values, if pv_number still exists at this point in time (@ 16:05:02) AND still has "state" value (6), then I want to see a table showing pv_number and both times, along with the previous and current state.

Hope it helps.
I hope this search query proves useful to you.

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler* st IN ([ search index=<your_index> | stats count by sourcetype | fields sourcetype ])
| stats sum(b) by st
| eval GB = round('sum(b)'/1073741824,2)
| fields st, GB

------
If you find this solution helpful, please consider accepting it and awarding karma points!!
I am trying to track a set of service desk ticket statuses across time. The data input is a series of ticket updates that come in as changes occur. Here is a snapshot:

[Screenshot: sample ticket update events]

What I'd like to do with this is get a timechart with the status at each time point; however, I have an issue of the "blank" time events being filled in with zeros, whereas I need the last valid value instead. My naive query is:

index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count(fields.status.name) by fields.status.name

Which gives me:

[Screenshot: timechart with zero-filled gaps]

How can I query to get these zeros filled in with the last valid count of ticket statuses? Some things I've tried with no success:

- some filldown kludges
- usenull=f on the timechart
- a million other suggestions on this forum that usually involve a simpler query

Any suggestions? Thanks!
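One approach that matches "last valid count" literally (a sketch, not necessarily the thread's accepted answer): turn the zero-filled cells back into nulls, then carry the last non-null value forward with filldown:

index="jsm_issues"
| dedup _time key
| timechart count by fields.status.name
| foreach * [eval '<<FIELD>>' = if('<<FIELD>>' = 0, null(), '<<FIELD>>')]
| filldown *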
Hello to everyone!

Today I noticed strange messages in the daily warnings and errors report:

10-04-2024 16:55:01.935 +0300 WARN UserManagerPro [5280 indexerPipe_0] - Unable to get roles for user= because: Could not get info for non-existent user=""
10-04-2024 16:55:01.935 +0300 ERROR UserManagerPro [5280 indexerPipe_0] - user="" had no roles

I checked that this couple first appeared 5 days ago, but this fact can't help me because I don't remember what I changed on that exact day. I also tried to find some helpful "nearby" events that could help me understand the root cause, but didn't observe anything interesting.

What ways do I have to investigate this case? Maybe I can raise the log policy to DEBUG level? If I can, what should I change, and where?

A little more information: I have a search head cluster with LDAP authorization, and an indexer cluster with only local users.
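If raising the log level turns out to be the way to go, the knob lives in $SPLUNK_HOME/etc/log.cfg (better, log-local.cfg, so it survives upgrades) on the instances emitting the messages. A sketch, assuming the log channel name matches the component shown in the events:

# $SPLUNK_HOME/etc/log-local.cfg (create it if absent) on the affected instances
category.UserManagerPro=DEBUG

A restart picks up the file change; alternatively, Settings -> Server settings -> Server logging lets you change a channel's level on the fly.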
Hi @sverdhan,

if you have more than 1200 sourcetypes, there's probably an error in your system design, because 1200 sourcetypes aren't manageable, and my hint is to analyze and redesign your data structure!

Anyway, if you want a table with the volume of all the sourcetypes, the only way is to filter your search by selecting sourcetypes in an input, so that you work with only a subset of your sourcetypes at a time. Otherwise, is there a rule to group your sourcetypes (e.g. part of the name, or source, or index)?

Ciao.

Giuseppe
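To illustrate the "part of the name" idea, here is a sketch that groups sourcetypes by their prefix before the first underscore or colon; the replace() pattern is an assumption about the naming convention, and the base search is borrowed from the question:

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler*
| eval st_group = replace(st, "[:_].*$", "")
| stats sum(b) as bytes by st_group
| eval GB = round(bytes/1073741824, 2)
| fields st_group GB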
I am looking for an example of using Bearer Authentication within Python using helper.send_http_request in the Splunk Add-on Builder. All the examples I have found so far have "headers=None".

Python helper functions: https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/PythonHelperFunctions
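For what it's worth, bearer authentication is just an Authorization header, so passing a dict where the examples have headers=None should work. A minimal sketch; the URL, the api_token setup parameter, and the surrounding collect_events scaffolding are assumptions, while the send_http_request keyword arguments follow the helper-function docs linked above:

# Inside the modular input's collect_events(helper, ew)
token = helper.get_arg('api_token')  # hypothetical setup parameter
headers = {
    "Authorization": "Bearer {}".format(token),
    "Accept": "application/json",
}
response = helper.send_http_request(
    "https://api.example.com/v1/events",  # placeholder endpoint
    "GET",
    parameters=None,
    payload=None,
    headers=headers,
    verify=True,
    timeout=30,
)
response.raise_for_status()  # send_http_request returns a requests-style response
events = response.json()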
I have a query that will calculate the volume of data ingested for a sourcetype:

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler* st=<your sourcetype here>
| stats sum(b)
| eval GB = round('sum(b)'/1073741824,2)
| fields GB

The issue is I have a list of 1200 sourcetypes. Please suggest how I can adjust the entire list into this query.
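One common pattern (a sketch, assuming you save the 1200 sourcetypes in a lookup file named my_sourcetypes.csv with a single column named st): a subsearch over the lookup expands into an OR of st=... terms, and splitting the stats by st keeps one row per sourcetype:

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler*
    [| inputlookup my_sourcetypes.csv | fields st ]
| stats sum(b) as bytes by st
| eval GB = round(bytes/1073741824, 2)
| fields st GB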
Worked for me. Beautiful solution. Thanks a lot!
Oh, wait--I see, you are using the TA. splunk-otel-collector.conf is probably not the place for this scenario. I'll have to look into this more.
This is exactly what I was looking for; I can do my difference operations this way. Thank you for your help. Sincerely, Rajaion