All Posts

You can still use a token in that where clause. In fact, the where clause of inputlookup uses the same syntax as a search term, unlike the where command, which requires an eval expression.
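For illustration, a minimal sketch of the difference, assuming a hypothetical lookup `my_lookup`, a field `host`, and a dashboard token `$host_tok$`. The inputlookup where clause takes search-term syntax, wildcards and tokens included:

```
| inputlookup my_lookup where host="$host_tok$" OR host="web*"
```

The where command would need an eval expression for the same filter:

```
| inputlookup my_lookup
| where host="$host_tok$" OR like(host, "web%")
```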
I think my understanding fits your description. The idea behind my suggested search is:
- Search between the two hours.
- Find all records that have pv_number. (You can restrict pv_number to a given value, but my search assumes that you want to group by pv_number, which is stated in the OP.)
- Look for the earliest value of state, and the latest.
- Compare the earliest value and the latest value. Only print those where the two are equal.

Have you tried my search? Also play with my emulation (which should run in any instance), and examine the output with and without that where filter. As my code indicates, I use thread_name to fake pv_number and date_minute to fake state. They may have different values from your real data, but the principle is the same.
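Since the emulation above isn't quoted in this digest, here is a minimal sketch of that approach against real data; the index name and the exact field names are assumptions based on the thread:

```
index=my_index earliest=-2h latest=now pv_number=*
| stats earliest(state) AS previous_state, latest(state) AS current_state,
        earliest(_time) AS first_seen, latest(_time) AS last_seen by pv_number
| where previous_state = current_state
| convert ctime(first_seen) ctime(last_seen)
```

The stats earliest/latest pair captures the state at each end of the window per pv_number, and the where filter keeps only the rows where the two match.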
This worked for me! I'm kind of surprised how close my pseudo search was to the right answer! I did modify this a little to use `search` instead of `where` so that I could add a dashboard token to this query as well.
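A sketch of the kind of change described (the exact query isn't shown in this digest; `pv_tok` is a hypothetical dashboard token, and state value 6 comes from earlier in the thread):

```
| stats earliest(state) AS previous_state latest(state) AS current_state by pv_number
| search previous_state=6 current_state=6 pv_number="$pv_tok$"
```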
Issue after upgrading a HF from Splunk 9.2.1 to 9.2.2. OS: Red Hat 8.10, latest kernel version. I tried changing/giving permissions on the splunk folder, and tried setting SELinux (sestatus) to permissive mode.

```
[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk start --accept-license --answer-yes
Error calling execve(): Permission denied
Error launching systemctl show command: Permission denied
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine.
To finish upgrading to the new version, Splunk's installer will automatically
update and alter your current configuration files. Deprecated configuration
files will be renamed with a .deprecated extension.

You can choose to preview the changes that will be made to your configuration
files before proceeding with the migration and upgrade:

If you want to migrate and upgrade without previewing the changes that will be
made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the
upgrade, choose 'n'.

Perform migration and upgrade without previewing configuration changes? [y/n] y

Can't run "btool server list clustering --no-log": Permission denied

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk btool server list clustering --no-log
execve: Permission denied while running command /mnt/splunk/splunk/bin/btool
```
So inline searches would not work in this scenario.
@sainag_splunk This method did work, until I found out that the users who are viewing the dashboard are not able to see the results, due to not having access to the index. The users viewing this dashboard are third parties, people we do not want to have access to the index (for example, users outside the org), hence the reason the dashboard used saved reports, where the data is viewable. But, as I mentioned, we faced the issue of the time range picker, since the saved reports show a static range, where we wish to make it change as we specify a time range with the input.
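For reference, a minimal Simple XML sketch of one way to point a time input at a referenced report; the report name and token here are hypothetical, and it is worth verifying that the report still runs with the owner's permissions once its time range is overridden:

```
<input type="time" token="time_tok">
  <label>Time range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>
<panel>
  <table>
    <!-- reference the saved report, but override its static time range -->
    <search ref="My Saved Report">
      <earliest>$time_tok.earliest$</earliest>
      <latest>$time_tok.latest$</latest>
    </search>
  </table>
</panel>
```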
Hi yuanliu, Firstly, thanks for looking into it and helping with the SPL query. It was pleasing to see someone responding; I felt like I should buy you a coffee. I apologize for my mistake of mentioning streamstats. I think I did not put my original request properly, so let me try again. When the search is executed (now), we need data from two points in time: now and two hours ago. If I'm running a search at 16:05:02, the first set will have the value of pv_number (for example ext034) and the "state" value (6) at that point in time (from two hours ago, so 14:05:02). In the second set of data values, if pv_number still exists at this point in time (16:05:02) AND still has "state" value (6), then I want to see a table showing pv_number and both times, along with the previous and current state. Hope it helps.
I hope this search query proves useful to you.

```
index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler*
    [ search index=<your_index> | stats count by sourcetype | rename sourcetype AS st | fields st ]
| stats sum(b) by st
| eval GB = round('sum(b)'/1073741824,2)
| fields st, GB
```

(The subsearch returns the sourcetypes renamed to st, so the outer search is implicitly filtered to st="..." OR st="..." without needing an IN clause.)

If you find this solution helpful, please consider accepting it and awarding karma points!
I am trying to track a set of service desk ticket statuses across time. The data input is a series of ticket updates that come in as changes occur. Here is a snapshot: [screenshot of ticket update events]

What I'd like to do with this is get a timechart with the status at each time point; however, I have an issue of the "blank" time events being filled in with zeros, whereas I need the last valid value instead. My naive query is:

```
index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count(fields.status.name) by fields.status.name
```

Which gives me: [screenshot of timechart with zero-filled gaps]

How can I query to get these zeros filled in with the last valid count of ticket statuses? Some things I've tried with no success:
- Some filldown kludges
- usenull=f on the timechart
- A million other suggestions on this forum that usually involve a simpler query

Any suggestions? Thanks!
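One possible approach (a sketch, not taken from this thread): since timechart emits 0 rather than null for empty buckets, filldown alone has nothing to fill; converting the zeros to null first lets filldown carry the last valid value forward:

```
index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count(fields.status.name) by fields.status.name
| foreach * [ eval "<<FIELD>>" = if('<<FIELD>>'==0, null(), '<<FIELD>>') ]
| filldown
```

foreach * skips _time (wildcards don't match leading underscores), so only the per-status columns are rewritten.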
Hello to everyone! Today I noticed strange messages in the daily warnings and errors report:

```
10-04-2024 16:55:01.935 +0300 WARN UserManagerPro [5280 indexerPipe_0] - Unable to get roles for user= because: Could not get info for non-existent user=""
10-04-2024 16:55:01.935 +0300 ERROR UserManagerPro [5280 indexerPipe_0] - user="" had no roles
```

I checked that this couple first appeared 5 days ago, but this fact can't help me because I don't remember what I changed on that exact day. I also tried to find some helpful "nearby" events that could help me understand the root cause, but didn't observe anything interesting. What ways do I have to investigate this? Maybe I can raise the log policy to DEBUG level? If I can, what should I change, and where?

A little more information:
- I have a search head cluster with LDAP authorization
- And an indexer cluster with only local users
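For reference, a sketch of one way to raise a single component's log level via the standard Splunk CLI; the channel name UserManagerPro is taken from the messages above, and the same change can be made in the UI under Settings > Server settings > Server logging:

```
/opt/splunk/bin/splunk set log-level UserManagerPro -level DEBUG
```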
Hi @sverdhan, if you have more than 1200 sourcetypes, there's probably an error in your system design, because 1200 sourcetypes aren't manageable, and my hint is to analyze and redesign your data structure! Anyway, if you want a table with the volume of all the sourcetypes, the only way is to filter your search by selecting sourcetypes in an input, so you work with only a subset of your sourcetypes at a time. Otherwise, is there a rule to group your sourcetypes (e.g. part of the name, or source, or index)? Ciao. Giuseppe
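As an illustration of the grouping idea, a sketch that assumes sourcetype names share a vendor prefix before a colon (e.g. cisco:asa, cisco:ios); adjust the split to whatever rule your names actually follow:

```
index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler*
| eval st_group = mvindex(split(st, ":"), 0)
| stats sum(b) AS bytes by st_group
| eval GB = round(bytes/1073741824, 2)
| fields st_group, GB
```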
I am looking for an example of using Bearer authentication within Python using helper.send_http_request in the Splunk Add-on Builder. All the examples I have found so far have "headers=None".

Python helper functions: https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/PythonHelperFunctions
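Not an official example, but a minimal sketch of passing a Bearer token through the headers argument; the endpoint URL and the api_token input argument are hypothetical, and send_http_request returns a requests.Response object per the linked docs:

```python
def collect_events(helper, ew):
    # Hypothetical add-on input argument holding the token
    token = helper.get_arg("api_token")

    # Build the Authorization header instead of passing headers=None
    headers = {
        "Authorization": "Bearer {}".format(token),
        "Accept": "application/json",
    }

    response = helper.send_http_request(
        "https://api.example.com/v1/events",  # hypothetical endpoint
        "GET",
        headers=headers,
        verify=True,
        timeout=30,
    )
    response.raise_for_status()

    # Write each returned item as an event (field names are assumptions)
    for item in response.json().get("events", []):
        ew.write_event(helper.new_event(data=str(item)))
```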
I have a query that will calculate the volume of data ingested for a sourcetype:

```
index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler* st=<your sourcetype here>
| stats sum(b)
| eval GB = round('sum(b)'/1073741824,2)
| fields GB
```

The issue is I have a list of 1200 sourcetypes. Please suggest how I can adjust this query to cover the entire list.
Worked for me, beautiful solution. Thanks a lot!
Oh, wait, I see: you are using the TA. splunk-otel-collector.conf is probably not the place for this scenario. I'll have to look into this more.
This is exactly what I was looking for; I can do my difference operations this way. Thank you for your help. Sincerely, Rajaion
Hi, I think you can set variables like HTTP_PROXY, HTTPS_PROXY, and NO_PROXY in your splunk-otel-collector.conf file so they only apply to the collector and not the entire server.
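For illustration, a sketch of what that might look like, assuming the systemd EnvironmentFile format that the collector package's splunk-otel-collector.conf uses on Linux; the proxy host and port are placeholders:

```
# /etc/otel/collector/splunk-otel-collector.conf
# (environment file read by the collector's systemd unit, not by other services)
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=localhost,127.0.0.1
```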
The usual consultant's answer: "it depends". The most often used field extractions, the search-time extractions, are defined on the search head tier because... tada! they happen during search time (actually their definitions are replicated internally to indexers so that searches can run properly, but these are internal intricacies you don't have to concern yourself with at this point ;-)). But if you want to create so-called "indexed fields" (which isn't often done, but the possibility is there), you have to define them at ingest time, which means either on the indexers or on any other "heavy" component your events go through first.
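To make the two cases concrete, a sketch of the standard .conf stanzas, with hypothetical sourcetype and field names:

```
# Search-time extraction: props.conf on the search head tier
[my_sourcetype]
EXTRACT-user = user=(?<user>\S+)

# Indexed field: props.conf + transforms.conf on the indexer or heavy forwarder
# props.conf
[my_sourcetype]
TRANSFORMS-userid = add_indexed_userid

# transforms.conf
[add_indexed_userid]
REGEX = user=(\S+)
FORMAT = userid::$1
WRITE_META = true

# fields.conf on the search head, so the field is treated as an indexed term
[userid]
INDEXED = true
```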
OK, we're getting somewhere. Assuming you had a typo and that's indeed valid JSON, you can extract values from the log.message field. The issue I still have with your data is that it's "half-pregnant": it seems to have some structure to it, but the structure isn't kept strictly (I have the same problem with CEF, for example). You have some header, then some key=value pairs. There are several issues with those key=value pairs: What if a value contains an equals sign? What if a value contains a space? It seems that comma-space is a multivalue field separator, but is it?
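A sketch of that extraction under those assumptions (the path log.message comes from the post above; the delimiters are guesses, which is exactly the problem being described):

```
| spath path=log.message output=msg
| eval _raw = msg
| extract pairdelim=",", kvdelim="="
```

Values containing "=" or unquoted spaces will still split incorrectly, which is why stricter delimiters, or a rex per field, would be needed if the format can't be tightened at the source.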
I ran into the same issue. Waiting for a resolution as well.