All Posts



Hi @darkhorse91, you could use the join command, but I don't suggest it because you'll get a very slow search. Otherwise, you could run something like this:

(index=retrospective earliest=-30d latest=now) OR (index=current earliest=-24h latest=now)
| stats values(field_retrospective_1) AS field_retrospective_1 values(field_retrospective_2) AS field_retrospective_2 values(field_retrospective_3) AS field_retrospective_3 values(field_current_1) AS field_current_1 values(field_current_2) AS field_current_2 BY my_field

If you also want to add the condition that my_field must be present in both indexes, you could run:

(index=retrospective earliest=-30d latest=now) OR (index=current earliest=-24h latest=now)
| stats values(field_retrospective_1) AS field_retrospective_1 values(field_retrospective_2) AS field_retrospective_2 values(field_retrospective_3) AS field_retrospective_3 values(field_current_1) AS field_current_1 values(field_current_2) AS field_current_2 dc(index) AS index_count BY my_field
| where index_count=2

Ciao. Giuseppe
We have a scheduled alert configured in Splunk which is working fine as per the events from the user logs, but it is delayed in sending the email alert notification.
Splunk doesn't care what OS it runs on as long as the kernel version is at least 3. See https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/SystemRequirements#Unix_operating_systems . Look in older versions of that document for the kernel requirements of earlier Splunk releases.
Or is there any way to create a single token for two indexes?
We are using the Splunk metrics-toolkit app to check the logs. We created two indexes, 1. metrics and 2. platform_benefits, and one token for the metrics index. In the metrics-toolkit app.dev file we are using that one token. As a result it is logging only metrics index data in Splunk, although we have both metrics and platform_benefits dashboards. Is there any way to configure two tokens inside the app.dev YAML file to get logs for both indexes? https://github.com/mulesoft-catalyst/metrics-toolkit/blob/main/src/main/resources/properties/secure/_template.yaml
Hi, I have two queries that share a common field, someField. One helps me find inconsistencies: sourcetype="my_source" someLog inconsistencies. The other helps me find consistencies: sourcetype="my_source" someLog consistencies. This gives me both consistencies and inconsistencies: sourcetype="my_source" someLog. Note that someLog is just text used as an identifier that's common to both queries. If someField was logged as inconsistent, it can still be logged as consistent in the future. How can I find those values of someField that are truly inconsistent in a given time frame, retrospectively? I.e., if values are currently inconsistent, I want to be able to search (in the past or future relative to the current search) for those values that are truly inconsistent, meaning not part of the consistent results in that time frame.
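A minimal sketch of one way to express this, assuming the names from the question (someField, someLog) and reading "truly inconsistent" as "never appears in the consistent results within the same time frame"; note the usual subsearch result limits apply:

```
sourcetype="my_source" someLog inconsistencies
    NOT [ search sourcetype="my_source" someLog consistencies
        | dedup someField
        | fields someField ]
| stats count BY someField
```

The subsearch collects the someField values seen as consistent, and the NOT excludes them from the inconsistent results.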
@Ryan.Paredez  can you help me 
Hi @gcusello. Amazing, this works, thanks. I have another query: how can I print those field values from the subsearch that are not in the main search? In this case the results of the main search are a superset of the subsearch's.
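One hedged sketch of this kind of set difference, assuming both searches can be combined into one and share the field my_field (the index names here are illustrative placeholders, not from the thread):

```
(index=main_data) OR (index=sub_data)
| stats values(index) AS indexes dc(index) AS index_count BY my_field
| where index_count=1 AND indexes="sub_data"
```

Grouping by the common field and counting distinct indexes leaves only the values that appear exclusively in the subsearch's data.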
Darn! I clicked on the Ask a question button here: https://community.splunk.com/t5/c-oqeym24965/Radius+Technical+Add-on/pd-p/4547 and it removed the info about the app. I thought it would've been sent to the developer... OK, so the add-on is the RADIUS Technology Add-On, developed by one Brian Daniel Potter. The only info Splunkbase has about it (and I'm surprised it was published with just that) is: "I built this radius TA against a very large dataset that was geographically diverse, with different software (freeradius, radiusd, etc), different versions, etc. This should work for most *nix *Radius implementations. If you have data that fails to parse with this add-on please send me a note and a sample log and I will expand the add-on scope." My understanding would then be that it's supposed to monitor some RADIUS configs... but really it's not explicit.
Yes, but that isn't working. So here is a solution that I came up with:

Step 1 - First write your results to a lookup file:

<your query> | outputlookup yourlookup.csv

Step 2 - Use that lookup in the query as shown below:

<your query> | append [| inputlookup yourlookup.csv | outputlookup yourlookup.csv override_if_empty=false create_empty=false]

The above query writes the results and stores them in yourlookup.csv over the wider time range, and we rewrite the stored results to the same lookup file. In the last line, the override_if_empty and create_empty options make sure it will not give empty results. Note: add | dedup at the end if you see any duplicate results.

Step 3 - Create a saved search with this query and schedule it according to your requirement.
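As a sketch of an equivalent variant of the steps above (keeping the <your query> placeholder; the dedup keys here are illustrative assumptions, pick the ones that identify your rows), the scheduled saved search could also merge and rewrite the lookup at the end of the pipeline:

```
<your query>
| append
    [| inputlookup yourlookup.csv ]
| dedup _time host
| outputlookup yourlookup.csv override_if_empty=false create_empty=false
```

Each scheduled run appends the previously accumulated rows, removes duplicates, and writes the merged result back to the same file.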
What is the latest version of Splunk Enterprise supported on RHEL 7.x?
I will do validations, but I think it works. Thanks!
Hi, it's a very useful query!

| rest splunk_server=local /servicesNS/-/-/saved/searches | where alert_type!="always" | table title,author,description,"eai:acl.owner","next_scheduled_time","action.email.to"

I need the alert results, and the second query doesn't work for me. I have already created an alert, I can see it under the "Alerts" tab, and it is scheduled for today. What do I need to change in the second query to get results? Maybe something in the alert settings? Or a different index?
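If the goal is to list alerts that have actually fired, rather than the scheduled savedsearch definitions, one hedged option is the fired-alerts REST endpoint; the exact output fields may vary by Splunk version, so treat the table columns below as assumptions to adjust:

```
| rest splunk_server=local /servicesNS/-/-/alerts/fired_alerts
| table title triggered_alert_count
```

This lists each alerting savedsearch with a count of its triggered instances, independently of the saved/searches endpoint used above.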
| eval _time=strptime(TimeStamp, "%F %T") | timechart span=12h count(Name) AS CountEvents by machine cont=t usenull=f useother=f | untable _time machine count | where count == 0
Hi, instead of passing the username and password in plain-text format, I was trying the basicauth extension for authentication while monitoring Oracle DB, and I require some assistance: after adding the details below to agent_config.yml, the Splunk OTel collector is not starting up and I am seeing an error. Kindly help.

In agent_config.yml:

extensions:
  basicauth:
    htpasswd:
      file: /etc/otel/collector/.htpasswd

receivers:
  oracledb/demo:
    protocols:
      http:
        auth:
          authenticator: basicauth
    endpoint: <hostname:port>
    service: <DBname>

service:
  metrics:
    receivers: [oracledb/demo]
Thanks! I don't get all the cells=0; there are no results when using the where clause (if I remove the where, I can see that cells == 0 do exist). I found a ticket: https://community.splunk.com/t5/Splunk-Search/How-to-show-only-fields-over-0/m-p/164589 — maybe I can't do it with timechart?

| eval _time=strptime(TimeStamp, "%F %T") | timechart span=12h count(Name) AS CountEvents by machine cont=t usenull=f useother=f | where CountEvents=0
@Bisho-Fouad - Why do you want to create the input on all heavy forwarders? I think this will duplicate the data. There is no necessity to create the input on all heavy forwarders, only on the one where you are configuring it.
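A minimal sketch of what that looks like in practice: the stanza lives in inputs.conf on only the one heavy forwarder that receives the feed (the port, index, and sourcetype below are illustrative assumptions, not from the thread):

```
# inputs.conf on the single heavy forwarder that receives this feed
[tcp://5514]
index = network
sourcetype = syslog
disabled = false
```

Because no other heavy forwarder has this stanza, each event is ingested exactly once.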
Hi @darkhorse91, you have to use a subsearch, with the limitation that you cannot have more than 50,000 results from the subsearch. If the current search is on index=current and runs over the last day, the retrospective search runs on index=retrospective over the last 30 days, and the common field is my_field with the same name in both searches, you could try something like this:

index=retrospective earliest=-30d latest=now [ search index=current earliest=-24h latest=now | dedup my_field | fields my_field ]

You have to adapt my approach to your searches. Ciao. Giuseppe
Hi @ITWhisperer - thanks a lot, this worked like a charm.