All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

So Inline searches would not work in this scenario
@sainag_splunk This method did work until I found out that the users viewing the dashboard are not able to see the results, because they do not have access to the index. The users viewing this dashboard are third parties that we do not want to have access to the index (for example, users outside the org), hence the reason the dashboard used saved reports, where the results are viewable. But as I mentioned, we then faced the issue of changing the time range picker, since the saved reports display a static time range, whereas we want the results to change when we specify a time range with the input.
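For what it's worth, a sketch of the Simple XML pattern sometimes used for this: reference the saved report by name and override its stored time range with the picker's tokens. The report name and token names below are placeholders, and whether your third-party users can run it still depends on how the report's permissions are shared:

```
<form>
  <fieldset>
    <input type="time" token="tr">
      <label>Time range</label>
      <default><earliest>-24h@h</earliest><latest>now</latest></default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search ref="my_saved_report">
          <!-- override the report's stored time range with the picker tokens -->
          <earliest>$tr.earliest$</earliest>
          <latest>$tr.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```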
Hi yuanliu, firstly, thanks for looking into it and helping with the SPL query. It was pleasing to see someone responding; I felt like I should buy you a coffee. I apologize for my mistake of mentioning streamstats. I think I did not put my original request properly, so let me try again. When the search is executed (now), we need data from two points in time: now, and two hours ago. If I'm running a search at 16:05:02, the first set will have the values of pv_number (for example ext034) and "state" (6) as of that point in time two hours ago, so 14:05:02. In the second set, if that pv_number still exists at this point in time (16:05:02) AND still has "state" value 6, then I want to see a table showing pv_number, both times, and the previous and current state. Hope this helps.
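A rough sketch of how such a two-points-in-time comparison might look in SPL (the index name and the exact window boundaries are placeholders, and this has not been tested against this data):

```
| multisearch
    [ search index=my_index state=6 earliest=-125m@m latest=-115m@m | eval window="previous" ]
    [ search index=my_index state=6 earliest=-10m@m latest=now | eval window="current" ]
| stats values(window) as windows, min(_time) as previous_time, max(_time) as current_time, values(state) as state by pv_number
| where mvcount(windows) = 2
| fieldformat previous_time = strftime(previous_time, "%F %T")
| fieldformat current_time = strftime(current_time, "%F %T")
```

The `where mvcount(windows) = 2` line keeps only the pv_number values that appear in both windows with state 6.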
I hope this search query proves useful to you.

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler* st IN ([ search index=<your_index> | stats count by sourcetype | fields sourcetype ])
| stats sum(b) by st
| eval GB = round('sum(b)'/1073741824,2)
| fields st, GB

------ If you find this solution helpful, please consider accepting it and awarding karma points!
I am trying to track a set of service desk ticket statuses across time. The data input is a series of ticket updates that come in as changes occur. Here is a snapshot:

What I'd like to do with this is get a timechart with the status at each time point. However, I have an issue: the "blank" time buckets are filled in with zeros, whereas I need the last valid value instead. My naive query is:

index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count(fields.status.name) by fields.status.name

Which gives me:

How can I query to get these zeros filled in with the last valid ticket status counts? Some things I've tried with no success:
- some filldown kludges
- usenull=f on the timechart
- a million other suggestions on this forum that usually involve a simpler query

Any suggestions? Thanks!
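One pattern that might be worth trying here (a sketch, untested against this data, and it assumes a zero bucket never legitimately means the count really dropped to zero): turn the zeros back into nulls, then filldown, since filldown only fills null cells:

```
index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count(fields.status.name) by fields.status.name
| foreach * [ eval <<FIELD>> = if('<<FIELD>>' = 0, null(), '<<FIELD>>') ]
| filldown *
```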
Hello everyone! Today I noticed strange messages in the daily warnings and errors report:

10-04-2024 16:55:01.935 +0300 WARN UserManagerPro [5280 indexerPipe_0] - Unable to get roles for user= because: Could not get info for non-existent user=""
10-04-2024 16:55:01.935 +0300 ERROR UserManagerPro [5280 indexerPipe_0] - user="" had no roles

I checked that this pair first appeared 5 days ago, but that fact doesn't help me because I don't remember what I changed on that exact day. I also tried to find some helpful "nearby" events that could help me understand the root cause, but didn't observe anything interesting. What options do I have to investigate this? Maybe I can raise the logging policy to DEBUG level? If I can, what should I change and where? A little more information: I have a search head cluster with LDAP authorization, and an indexer cluster with local users only.
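If raising the log level turns out to be useful: assuming a reasonably recent Splunk Enterprise, the channel that appears in the messages can usually be raised at runtime with the CLI (or under Settings > Server settings > Server logging); verify the exact channel name exists on your version first:

```
# on the indexer that writes the messages (runtime change, not persisted across restart)
$SPLUNK_HOME/bin/splunk set log-level UserManagerPro -level DEBUG
# when done, set it back
$SPLUNK_HOME/bin/splunk set log-level UserManagerPro -level WARN
```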
Hi @sverdhan, if you have more than 1200 sourcetypes, there's probably an error in your system design, because 1200 sourcetypes aren't manageable; my hint is to analyze and redesign your data structure! Anyway, if you want a table with the volume of all the sourcetypes, the only way is to filter your search by selecting sourcetypes in an input, so that you work with only a subset of your sourcetypes at a time. Otherwise, is there a rule to group your sourcetypes (e.g. part of the name, or source, or index)? Ciao. Giuseppe
I am looking for an example of using Bearer authentication in Python with helper.send_http_request in the Splunk Add-on Builder. All the examples I have found so far have "headers=None".

Python helper functions: https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/PythonHelperFunctions
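Not from the Add-on Builder docs, just a sketch of the generic Bearer pattern: build a headers dict and pass it as the headers argument (this assumes send_http_request forwards a headers dict the way the requests library does; the URL and token below are placeholders):

```python
# Placeholder token; in a real add-on, read it from a setup parameter
# instead of hard-coding it.
token = "YOUR-API-TOKEN"
headers = {
    "Authorization": "Bearer " + token,
    "Content-Type": "application/json",
}

# Inside Add-on Builder code, the call would then look like (sketch):
#   response = helper.send_http_request(
#       url="https://api.example.com/v1/events",  # placeholder URL
#       method="GET",
#       headers=headers,
#       verify=True,
#       timeout=30,
#   )
```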
I have a query that will calculate the volume of data ingested for a sourcetype:

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler* st=<your sourcetype here>
| stats sum(b)
| eval GB = round('sum(b)'/1073741824,2)
| fields GB

The issue is that I have a list of 1200 sourcetypes. Please suggest how I can fit the entire list into this query.
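If the 1200 sourcetypes are available as a lookup (a hypothetical file sourcetypes.csv with a column named st), one common approach is to feed them in through an inputlookup subsearch and report them all in one pass, rather than running the query once per sourcetype:

```
index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler*
    [ | inputlookup sourcetypes.csv | fields st ]
| stats sum(b) as bytes by st
| eval GB = round(bytes/1073741824, 2)
| fields st, GB
```

The subsearch expands to ( st="..." OR st="..." ... ); the default subsearch result limit (10,000 rows) comfortably covers 1200 values.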
worked for me. beautiful solution. thanks a lot
Oh, wait, I see: you are using the TA. splunk-otel-collector.conf is probably not the place for this scenario. I'll have to look into this more.
This is exactly what I was looking for, I can do my difference operations this way. Thank you for your help. Sincerely, Rajaion  
Hi, I think you can set variables like HTTP_PROXY, HTTPS_PROXY, and NO_PROXY in your splunk-otel-collector.conf file so they only apply to the collector and not the entire server.
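For the systemd-packaged collector on Linux (not the TA), the environment-style settings usually live in /etc/otel/collector/splunk-otel-collector.conf; proxy entries there would look something like this (host, port, and exclusion list are placeholders):

```
# /etc/otel/collector/splunk-otel-collector.conf (example values)
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=localhost,127.0.0.1,.internal.example.com
```

Restart the splunk-otel-collector service after editing so the new environment is picked up.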
The usual consultant's answer: "it depends". The most often used field extractions, the search-time extractions, are defined on the search head tier because... tada! they happen at search time (actually their definitions are replicated internally to the indexers so that searches can run properly, but these are internal intricacies you don't have to concern yourself with at this point ;-)). But if you want to create so-called "indexed fields" (which isn't done often, but the possibility is there), you have to define them at ingest time, which means either on the indexers or on whatever other "heavy" component your events go through first.
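A hedged illustration of the difference (the sourcetype and field names are made up): a search-time extraction is just a props.conf stanza on the search head tier, while an indexed field needs props.conf plus a transforms.conf stanza with WRITE_META on the ingest tier, and a fields.conf entry on the search side so the field is treated as indexed:

```
# Search-time extraction -- props.conf on the search head tier
[my_sourcetype]
EXTRACT-session = session_id=(?<session_id>\S+)

# Indexed field -- props.conf on the indexer / heavy forwarder
[my_sourcetype]
TRANSFORMS-session = session_indexed

# transforms.conf (same ingest tier)
[session_indexed]
REGEX = session_id=(\S+)
FORMAT = session_id::$1
WRITE_META = true

# fields.conf on the search head tier
[session_id]
INDEXED = true
```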
OK. We're getting somewhere. Assuming you had a typo and that's indeed valid JSON, you can extract values from the log.message field. The issue I still have with your data is that it's "half-pregnant": it seems to have some structure to it, but the structure isn't kept strictly (I have the same problem with CEF, for example). You have some header, then some key=value pairs. There are several issues with those key=value pairs. What if a value contains an equals sign? What if a value contains a space? It seems that comma-space is the multivalue field separator, but is it?
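To make the ambiguity concrete, here is a sketch of what the extraction could look like if the comma/equals assumptions hold (and which will break in exactly the cases listed above, e.g. values containing "=" or a comma):

```
... | spath path=log.message output=msg
| rename msg as _raw
| extract pairdelim="," kvdelim="="
```

The rename is only there because the extract (kv) command works on _raw; in a real search you would want to preserve the original _raw first.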
I ran into the same issue. Waiting for a resolution as well.
index=oncall_prod originOnCall="Prod" incidentNumber=497764
| sort _time desc
| rex field=entityDisplayName "(?<Priorité>..) - (?<Titre>.*)"
| eval startAlert = if(alertType == "CRITICAL", _time, "")
| eval startAlert = strftime(startAlert,"%Y-%m-%d %H:%M:%S ")
| eval ackAlert = if(alertType == "ACKNOWLEDGEMENT", _time, "")
| eval ackAlert = strftime(ackAlert,"%Y-%m-%d %H:%M:%S ")
| eval endAlert = if(alertType == "RECOVERY", _time, "")
| eval endAlert = strftime(endAlert,"%Y-%m-%d %H:%M:%S ")
| eventstats values(startAlert) as startAlert, values(ackAlert) as ackAlert, values(endAlert) as endAlert, values(ticket_EV) as ticket_EV by incidentNumber
Hello @ITWhisperer, thank you for your help. I tried to add your line, but it aggregates all the rows together, and while in absolute terms I can see everything on a single line, I can no longer manipulate the data (for example, to put a message when there has been no acknowledgment). Example:

| eval ticket_EV = if(alertType == "RECOVERY" AND (isnull(ackAlert)), "No ticket", ticket_EV)

Sincerely, Rajaion
You can convert/upgrade in place; Red Hat has a utility (Convert2RHEL) that allows you to upgrade CentOS 7 to RHEL 8. We've done this across thousands of CentOS servers with various configurations and apps and had no issues.
Thanks @isoutamo. So, did I understand it correctly? We do not need to restore the backup: as soon as we add the detached node back to the cluster, all configuration and data will be resynced as they were? Correct me if my understanding is wrong. For me, it's a bit confusing how the configuration files will be restored without restoring the backup. Also, data might be replicated as soon as the sync starts, but it may take ages to complete considering 4 TB of data. What do you think?

Thanks