All Posts


@bowesmana Unfortunately it's not working. The only issue, I guess, is that the custom field "Severity" is causing the problem here. I tried a lot of different searches, but to no avail.
Thanks for the approach. Could you also please help me understand how to calculate working hours for the incident end/resolved date (we might need to consider whether the incident is closed on a weekend), as well as the number of intermediate days excluding holidays and weekends? Kindly help me with the Splunk query.
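[Editor's sketch, not a tested answer: one common SPL pattern is to enumerate the calendar days between the start and resolve times with mvrange and then drop Saturdays and Sundays. The field names incident_start and incident_resolved are assumptions; holidays would still need to be subtracted via a lookup of holiday dates.]

```
| eval day_list=mvrange(relative_time(incident_start,"@d"), relative_time(incident_resolved,"@d")+86400, 86400)
| eval business_days=mvcount(mvfilter(NOT match(strftime(day_list,"%u"),"[67]")))
```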
Hi @gcusello, thank you so much, the query you provided worked. But when I try to add time it's not working. Please find the query below; can you please help with this?

| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval date=strftime(_time,"%Y-%m-%d %H:%M")
| search NOT [ | inputlookup calendsr.csv WHERE type="holyday" | fields date ]

The CSV file is as below:

date type
2024-03-08 12:00 normal
2024-03-09 10:00 holyday
2024-03-09 12:00 holyday
2024-03-09 18:00 holyday
2024-03-09 23:00 holyday
2024-03-10 14:00 holyday
2024-03-10 18:00 holyday
2024-03-10 22:00 holyday
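[Editor's note, a hedged guess at the failure mode: the lookup's date values carry a time-of-day, while the event side is formatted to the minute of the latest event, so the two strings rarely match exactly. If the intent is to drop whole holiday days, a sketch that compares at day granularity (field and file names taken from the post) might look like:]

```
| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval date=strftime(_time,"%Y-%m-%d")
| search NOT
    [ | inputlookup calendsr.csv WHERE type="holyday"
      | eval date=strftime(strptime(date,"%Y-%m-%d %H:%M"),"%Y-%m-%d")
      | dedup date
      | fields date ]
```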
That's the way rejects works.  When the token has a value, the element is hidden.  To make the element always visible, remove the rejects option.
Yeah, the tokens are comma separated, but the only thing is that when I use the rejects condition the rows are hidden. How do I fix that? @bowesmana
Note that depends and rejects take a comma separated list of tokens, not a space separated list. 
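[Editor's sketch for illustration; the panel contents and token names here are invented, not from the thread. In Simple XML both attributes take a comma separated token list:]

```
<panel depends="$show_details$,$selected_host$" rejects="$hide_panel$">
  <table>
    <search>
      <query>index=_internal | stats count by sourcetype</query>
    </search>
  </table>
</panel>
```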
When using where with equals, the right-hand side is treated as a field name unless it is numeric. So if you do

| where severity=$eventid$

that will translate to

| where severity=informational

which means it's trying to compare the severity field to a field called informational, which is of course not what you want. You should write your where clause as

| where strftime(_time, "%F %T")=$eventid|s$ OR EventID=$eventid|s$ OR Server=$eventid|s$ OR Message=$eventid|s$ OR Severity=$eventid|s$

The $eventid|s$ causes the token value to be correctly quoted, so it will become

| where severity="Informational"

The reason I used strftime(_time, "%F %T") is that _time is an epoch, so unless you specify the exact epoch time in seconds it will not match. This allows you to enter an ISO 8601 date format, YYYY-MM-DD HH:MM:SS.

Note that the where clause does not support wildcards. You could change this to a "search" clause rather than a where clause; then you could use wildcards in your search text box.
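[Editor's sketch of the search-based variant mentioned at the end, using the same token name as the thread. Note that search cannot call strftime, so the _time comparison is dropped here; this is an untested illustration of wildcard matching, not the author's answer:]

```
| search EventID="*$eventid$*" OR Server="*$eventid$*" OR Message="*$eventid$*" OR Severity="*$eventid$*"
```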
@sanjai If you haven't already found it, you can use allowCustomValues in the dropdown XML to let a user enter a custom text value as well as choose from the dropdown. See the dropdown section here: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#input_.28form.29
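[Editor's sketch of what that could look like, under the assumption that the dropdown is populated from the EventID field used elsewhere in the thread; the label and populating search are invented for illustration:]

```
<input type="dropdown" token="eventid">
  <label>Event ID</label>
  <allowCustomValues>true</allowCustomValues>
  <fieldForLabel>EventID</fieldForLabel>
  <fieldForValue>EventID</fieldForValue>
  <search>
    <query>index=foo | stats count by EventID</query>
  </search>
</input>
```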
Thank you so much @tscroggins . It's working fine.
Hi, AlwaysOn Profiling should work in the trial and with Java. Can you please share which Java version and distribution you're using? It would also be helpful to know what you have configured as your "-javaagent" Java argument and any environment variables you may have set.
Hi, you should be able to use AlwaysOn Profiling in the trial. If you're not seeing any profiling data, there are many possible causes, but I would start by checking the requirements and then checking the instrumentation. What language (and language version) are you using? Here is the page for basic profiling troubleshooting: https://docs.splunk.com/observability/en/apm/profiling/profiling-troubleshooting.html
Hi, Can you confirm you're using a token with "INGEST" capability? Note, the "default" token will have "INGEST" and "API" capabilities, so you should be fine if you use the default token.
1. If you can get the events into your Splunk in XML, you can just use the default XML Windows event format from TA_windows. Unfortunately it's not that easy with third-party tools (there are some that are supposed to be able to do it, but I've never tested them).
2. If you use WEF, why not use a UF on the collector host?
3. Using regex on structured data is not the best idea.
The thing is that the file is opened and held open in case it gets truncated and rewritten with textual contents, so the 100-descriptor limit is exhausted quickly. About the order: I suppose either /bin comes first (which on my Fedora is just a symlink to /usr/bin), or the order is the on-disk order, not the alphabetical one.
This can be done by adding a props and transforms config file on the indexer machines. As an example, you could push an app to your indexers with /<appname>/local/props.conf containing:

[collectedevents]
# change_host and changehostfield are arbitrary values. Change how you like
TRANSFORMS-change_host = changehostfield

and /<appname>/local/transforms.conf containing:

# Stanza name must match whatever you set the "changehostfield" value to in props.conf
[changehostfield]
DEST_KEY = MetaData:Host
# Add your regex below
REGEX = <Computer>([^<]+)<\/Computer>
FORMAT = host::$1

Then the indexers should replace the host field with the value in the XML <Computer> element.
@ChocolateRocket, the latitude and longitude fields are generated by the iplocation command and they are used to plot the data points on the map. You could remove them but then that would break the visualization. Good luck, we're all counting on you.
@ITWhisperer I tried the search below; it's not working at all.

| where _time=$eventid$ OR EventID=$eventid$ OR Server=$eventid$ OR Message=$eventid$ OR Severity=$eventid$

When I keep this search in the panel it gives all the desired results. But when I search in the textbox for values of Severity (Critical, Warning, or Information) it's not working, while searching for values of EventID, Server, or Message does work. I think it's because Severity is a custom field; is that right? The EventID, Name (as Message), and host (as Server) fields come from _raw.

index=foo host=foo "$search$" OR Severity="$search$"
| eval Severity=case(EventID="1068", "Warning", EventID="1", "Information", EventID="1021", "Warning", EventID="7011", "Warning", EventID="6006", "Warning", EventID="4227", "Warning", EventID="4231", "Warning", EventID="1069", "Critical", EventID="1205", "Critical", EventID="1254", "Critical", EventID="1282", "Critical")
| rename Name as Message, host as Server
| table _time EventID Server Message Severity

Any suggestions?
WARN .... Reason: binary. That should be right, since no binary access is granted in props, and by default binaries are not ingested. Descriptors set to 100 is the default, and that's OK, but it should progress anyway, and in splunkd.log I can't see any WARN about descriptors. Now, why are the ASCII system files under /etc not ingested, since it comes before /usr?
We were able to address the issue. It seems there's a conflict between Dynatrace RUM (real user monitoring) and the JS library used by the SOAR UI. Basically, by lowering the intensity of the Dynatrace Ruxit agent, it no longer overloads the browser stack, and all the callbacks now work properly.
@tatdat171 I have also recently opened a case with Splunk support; it's in the queue, not acknowledged yet. Please let me know if you have any updates/findings. Thank you.