All Posts


Thanks! Working as expected.
I have two counter streams, and I would like to display them as a percentage, B/(B+C), in the chart, but it always gives me an error.

B = data('prod.metrics.biz.l2_cache_miss', rollup='rate', extrapolation='zero').publish(label='B')
C = data('prod.metrics.biz.l2_cache_hit', rollup='rate', extrapolation='zero').publish(label='C')

How can I create a new metric out of these two to find either the cache hit or miss percentage?
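A minimal SignalFlow sketch of one way this could be done, assuming stream arithmetic on B and C is what's wanted (the miss_pct name is illustrative):

B = data('prod.metrics.biz.l2_cache_miss', rollup='rate', extrapolation='zero')
C = data('prod.metrics.biz.l2_cache_hit', rollup='rate', extrapolation='zero')
# Derived stream: miss percentage; publish only the computed value
miss_pct = (B / (B + C) * 100).publish(label='miss_pct')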
Hi All, I am attempting to use the lookup table "is_windows_system_file" in the following SPL, where Processes.process_name needs to match the filename from the lookup table. Once these results are obtained, I then want to see processes that are not running from C:\Windows\System32 or C:\Windows\SysWOW64.

| tstats `summariesonly` count from datamodel=Endpoint.Processes where Processes.process_name=*
    by Processes.aid Processes.dest Processes.process_name Processes.process _time
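A sketch of one way to wire this together, assuming the lookup has a filename column (as described above) and that Processes.process holds the full process path:

| tstats `summariesonly` count from datamodel=Endpoint.Processes where Processes.process_name=*
    [| inputlookup is_windows_system_file | fields filename | rename filename as Processes.process_name]
    by Processes.aid Processes.dest Processes.process_name Processes.process _time
| search NOT (Processes.process="C:\\Windows\\System32\\*" OR Processes.process="C:\\Windows\\SysWOW64\\*")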
I think that's the right analysis. Maybe, just as a test, I'll try playing with "max_fd" in limits.conf to see how the system behaves. As said in the first post, this is only a test to stress the system and better understand the UF. I just saw how dangerous it can sometimes be to introduce "..." or "*" or any other wildcard in input paths, since the UF can go crazy 🤷‍ (like crcSalt, which can ingest 2x/3x/4x the data if not properly blacklisted; think about log rotation, perhaps with gz/zip/bz extensions 🤷‍). Anyway, there's something more going on than just fds. In a stable environment, the UF should release the file (and drop its fd) after "time_before_close" (5 by default), so it can process other files in the queue. Another strange thing: I can't see any WARN about fds in splunkd.log, whereas in other situations I have seen the log explicitly say that max_fd was reached. Now it doesn't! Strange! Maybe this behaviour differs across distros, since it should be a system problem, not directly related to how the UF works 🤷‍
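For reference, a sketch of where the two settings mentioned above live (values and the monitor path are examples, not recommendations):

# limits.conf
[inputproc]
max_fd = 256

# inputs.conf (hypothetical monitor stanza)
[monitor:///some/path]
time_before_close = 3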
@bowesmana @gcusello @ITWhisperer Thanks for your ideas and for helping me. I finally did it with the addition below, and it gives the results I wanted.

| rename host as Server, Name as Message
| eval Severity=case(
    EventID="1068", "Warning",
    EventID="1", "Information",
    EventID="1021", "Warning",
    EventID="7011", "Warning",
    EventID="6006", "Warning",
    EventID="4227", "Warning",
    EventID="4231", "Warning",
    EventID="1069", "Critical",
    EventID="1205", "Critical",
    EventID="1254", "Critical",
    EventID="1282", "Critical")
| fields Server, EventID, Message, Severity
| search Severity="*$search$*" OR EventID="*$search$*" OR Server="*$search$*" OR Message="*$search$*"
| table _time, Server, EventID, Message, Severity
Has anyone tried this integration? I am facing issues while integrating it using this app: https://splunkbase.splunk.com/app/6535. The add-on only pulls the activity from our TFS server once and does not pull it continuously at the configured interval. No errors are observed in the internal logs. Has anyone used this add-on for this integration? Azure DevOps (Git Activity) - Technical Add-On
@bowesmana Unfortunately it's not working. I guess the only issue is the custom field "Severity" causing the problem here. I tried a lot of different searches, but with no luck.
Thanks for the approach. Could you also please help me understand how to calculate the working hours up to the incident end/resolved date (we might need to consider whether the incident is closed on a weekend) and the number of days in between, excluding holidays and weekends? Kindly help me with the Splunk query.
Hi @gcusello, Thank you so much, the query you provided worked. But when I try to add time it's not working; please find the query below. Can you please help with this?

| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval date=strftime(_time,"%Y-%m-%d %H:%M")
| search NOT [ | inputlookup calendsr.csv WHERE type="holyday" | fields date ]

The csv file is as below:

date              type
2024-03-08 12:00  normal
2024-03-09 10:00  holyday
2024-03-09 12:00  holyday
2024-03-09 18:00  holyday
2024-03-09 23:00  holyday
2024-03-10 14:00  holyday
2024-03-10 18:00  holyday
2024-03-10 22:00  holyday
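One possible cause, as an assumption since the intended match isn't spelled out: date is built to minute precision, so the NOT only fires when the host's latest event falls on exactly the same minute as a holiday row. A sketch that compares at hour granularity instead, normalizing both sides:

| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval date=strftime(_time,"%Y-%m-%d %H:00")
| search NOT [ | inputlookup calendsr.csv WHERE type="holyday"
    | eval date=strftime(strptime(date,"%Y-%m-%d %H:%M"),"%Y-%m-%d %H:00")
    | fields date ]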
That's the way rejects works.  When the token has a value, the element is hidden.  To make the element always visible, remove the rejects option.
Yeah, the tokens are comma-separated, but the thing is that when I use the rejects condition the rows are hidden. How do I fix that? @bowesmana
Note that depends and rejects take a comma-separated list of tokens, not a space-separated list.
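For illustration, a minimal Simple XML sketch (token names hypothetical): the first panel shows only when both tokens are set, the second hides whenever $tok1$ is set:

<row>
  <!-- Visible only when both tokens have values -->
  <panel depends="$tok1$,$tok2$">
    <title>Shown when tok1 AND tok2 are set</title>
  </panel>
  <!-- Hidden whenever tok1 has a value -->
  <panel rejects="$tok1$">
    <title>Hidden when tok1 is set</title>
  </panel>
</row>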
When using where and equals, the right-hand side is treated as a field name unless it is numeric, so if you do

| where severity=$eventid$

that will translate to

| where severity=informational

which means it's trying to compare the severity field to an informational field, which is of course not what you want. You should do this with your where clause:

| where strftime(_time, "%F %T")=$eventid|s$ OR EventID=$eventid|s$ OR Server=$eventid|s$ OR Message=$eventid|s$ OR Severity=$eventid|s$

The $eventid|s$ will cause the token value to be correctly quoted, so it will become

| where severity="Informational"

The reason I have used strftime(_time, "%F %T") is that _time is an epoch, so unless you specify the exact time in epoch seconds it will not match. This allows you to enter an ISO8601 date format, YYYY-MM-DD HH:MM:SS. Note that the where clause will not support wildcards. You could change this to a search clause rather than a where clause; then you could use wildcards in your search text box.
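A sketch of the search-clause variant mentioned above, which does allow wildcards (token name as in the thread; the _time comparison is dropped because search matches field values, not eval expressions):

| search Severity="*$eventid$*" OR EventID="*$eventid$*" OR Server="*$eventid$*" OR Message="*$eventid$*"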
@sanjai If you haven't already found it, you can use allowCustomValues in the dropdown XML to allow a user to enter a custom text value as well as choosing from a dropdown. See the dropdown section here: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#input_.28form.29
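A minimal sketch (token name and choices hypothetical):

<input type="dropdown" token="eventid">
  <label>Filter</label>
  <!-- Lets the user type a value that is not in the choice list -->
  <allowCustomValues>true</allowCustomValues>
  <choice value="Warning">Warning</choice>
  <choice value="Critical">Critical</choice>
</input>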
Thank you so much @tscroggins. It's working fine.
Hi, AlwaysOn Profiling should work in the trial and with Java. Can you please share what version and distribution of Java you're using? It would also be helpful to know what you have configured as your Java arguments for "-javaagent" and any environment variables you may have set.
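For context, a typical launch looks something like this (agent path, service name, and jar are placeholders; SPLUNK_PROFILER_ENABLED is the switch for AlwaysOn Profiling in the Splunk OpenTelemetry Java agent):

# Hypothetical launch command for a Java service
export SPLUNK_PROFILER_ENABLED=true
export OTEL_SERVICE_NAME=my-service
java -javaagent:/path/to/splunk-otel-javaagent.jar -jar app.jar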
Hi, You should be able to use AlwaysOn Profiling in the trial. If you're not seeing any profiling data, it could be one of many things, but I would start by checking the requirements and then checking the instrumentation. What language (and language version) are you using? Here is the page for basic profiling troubleshooting: https://docs.splunk.com/observability/en/apm/profiling/profiling-troubleshooting.html
Hi, Can you confirm you're using a token with "INGEST" capability? Note, the "default" token will have "INGEST" and "API" capabilities, so you should be fine if you use the default token.
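For a quick check, a sketch of sending a datapoint straight to the ingest endpoint (realm, token, and metric name are placeholders):

curl -X POST "https://ingest.YOUR_REALM.signalfx.com/v2/datapoint" \
  -H "Content-Type: application/json" \
  -H "X-SF-Token: YOUR_INGEST_TOKEN" \
  -d '{"gauge": [{"metric": "test.metric", "value": 1}]}'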
1. If you can get the events into your Splunk in XML, you can just use the default XML windows event format from TA_windows. Unfortunately it's not that easy with third-party tools (there are some which are supposed to be able to do it, but I've never tested them).
2. If you use WEF, why not use a UF on the collector host?
3. Using regex on structured data is not the best idea.
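If a UF on the collector host is an option, the native Windows input can render events as XML directly; a sketch (index name is hypothetical, and ForwardedEvents is the usual WEF channel):

# inputs.conf on the WEF collector host
[WinEventLog://ForwardedEvents]
renderXml = true
sourcetype = XmlWinEventLog
index = wineventlog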
The thing is that the file is opened and held open in case it gets truncated and rewritten with textual content. So the 100-fd limit is exhausted quickly. About the order: I suppose either /bin comes first (which in the case of my Fedora is just a symlink to /usr/bin), or the order is the on-disk order, not the alphabetical one.
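To check the fd side of this, one illustrative way on Linux (the process match may need adjusting for your setup) is to count the descriptors the forwarder currently holds:

# Count open file descriptors for the oldest splunkd process
ls /proc/"$(pgrep -o splunkd)"/fd | wc -l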