All Posts

The requirement is to create a time delta field that holds the difference between two time fields. Basically, the difference between start time and receive time should populate a new field called timediff. I have built the eval logic below; can anyone help me with a props config based on that?

index=XXX sourcetype IN (xx:xxx, xxx:xxxxx)
| eval indextime=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval it = strptime(start_time, "%Y/%m/%d %H:%M:%S")
| eval ot = strptime(receive_time, "%Y/%m/%d %H:%M:%S")
| eval diff = tostring((ot - it), "duration")
| table start_time, receive_time, indextime, _time, diff
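This eval can be shipped as a search-time calculated field in props.conf. A minimal sketch, assuming one of the sourcetypes is xx:xxx and the timestamp formats match the search above (the stanza name and formats are placeholders to adapt):

[xx:xxx]
EVAL-timediff = tostring(strptime(receive_time, "%Y/%m/%d %H:%M:%S") - strptime(start_time, "%Y/%m/%d %H:%M:%S"), "duration")

With this in place, timediff is available automatically in any search against that sourcetype, without repeating the eval pipeline.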
This can't be done with this search because there is no field called max_time - please clarify your search
| appendpipe
    [| eval Completed=if(Name="Grand Total:", 100*Completed/(Completed + Remaining), null())
     | eval Remaining=null()
     | eval Name=if(Name="Grand Total:", "Completion%", null())
     | where isnotnull(Name)]
Hey, I know this question has been asked many times, but I still haven't found a relevant answer that works for me. I have a table and I want to color a column based on a different field:

| stats values(interfaceName) as importer
| eval importer_in_csv=if(isnull(max_time), 0, 1)

I want to color the importer column if importer_in_csv = 0. How do I do it in XML? Thanks!
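In Simple XML this is done with a <format> element on the table. A minimal sketch, assuming the importer_in_csv column is kept in the table (the hex colors are arbitrary):

<format type="color" field="importer_in_csv">
  <colorPalette type="expression">if(value == 0, "#DC4E41", "#53A051")</colorPalette>
</format>

One caveat: the expression only sees the cell's own value, so coloring the importer column based on a different field (importer_in_csv) generally needs a JavaScript table cell renderer rather than plain XML.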
1. There is a time range picker on the dashboard. If I select any range, e.g. the whole day 05.12.2023, I would like that range to be the x-axis of the area chart.
2. In this case,

| eval _time=case(row=0, strptime(StartTime,"%Y-%m-%d %H:%M:%S"), row=1, strptime(StartTime,"%Y-%m-%d %H:%M:%S"), row=2, strptime(EndTime,"%Y-%m-%d %H:%M:%S"), row=3, strptime(EndTime,"%Y-%m-%d %H:%M:%S"))
| eval value=case(row=0, 0, row=1, 1, row=2, 1, row=3, 0)

the x-axis of the area chart only spans from the first StartTime (05:30) to the last EndTime (13:30).
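One way to stretch the x-axis to the full picker range is to append zero-value rows at the edges of the selected time window. A sketch using addinfo, which exposes the window boundaries as info_min_time and info_max_time (the value field name matches the search above):

| appendpipe
    [| addinfo
     | head 1
     | eval _time=info_min_time, value=0]
| appendpipe
    [| addinfo
     | head 1
     | eval _time=info_max_time, value=0]

With data points at both ends of the window, the area chart's x-axis spans the whole picked range instead of just the first StartTime to the last EndTime.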
Hey everyone, I'm here with a query regarding the support provided by Red Hat [ https://www.lenovo.com/de/de/servers-storage/solutions/redhat/ ] for integrating Splunk into its ecosystem. Specifically, I'm seeking clarification on whether Red Hat officially supports, or provides compatibility for, integrating Splunk within its systems. We need to establish a smooth and reliable connection between Splunk and Red Hat systems without encountering compatibility issues or unexpected limitations, so understanding Red Hat's official stance is essential for our integration plans. If anyone within the community has insights or experiences related to integrating Splunk with Red Hat, I'd greatly appreciate hearing about them: any potential roadblocks, compatibility concerns, or success stories would greatly assist in planning and executing a successful integration strategy. Thank you all in advance for your time and contributions.
@meshorer I would really need to see the app code to give any informed advice. The tutorial should have the information you need, and within the IDE itself you should be able to see the process output when testing the action. Even if the data is being saved, to see it in a playbook as an output datapath you need to map the output datapaths in the app's JSON file. I would look at other apps and try to spot any difference in your code when adding results, and also in the JSON structure for the action outputs. Happy SOARing!
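For reference, output datapaths are declared per action in the app's JSON metadata. A minimal sketch of the shape (the fields under action_result.data.* are hypothetical placeholders for whatever your action actually returns):

"output": [
    {
        "data_path": "action_result.data.*.id",
        "data_type": "string"
    },
    {
        "data_path": "action_result.status",
        "data_type": "string"
    }
]

Datapaths that are not declared here will not appear as selectable output datapaths in the playbook editor, even if the action result actually contains the data.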
Try something like this

<change>
  <eval token="starttime">relative_time($timepicker.earliest$, "-1h")</eval>
  <eval token="finishtime">relative_time($timepicker.latest$, "-1h")</eval>
</change>
Hi Splunk Gurus, @ITWhisperer I need your expertise in solving a rather simple issue which is taking far more hours than it should. I'm trying to create a table in Splunk which should display a Grand Total row under the data and also display Completion% under the "Completed" field. I'm able to achieve the grand total using addcoltotals; however, I'm unable to display the Completion% under the "Completed" field. Here is how the table should look:

Name           Remaining   Completed
Alice          25          18
Bob            63          42
Claire         10          7
David          45          30
Emma           80          65
Grand Total:   223         162
Completion%                42.07%

Percentage calculation = 162/(223+162)*100. I tried using an eval to calculate the percentage, but it calculates for each row in a new field. Can you please help me out? Many thanks. Much appreciated!
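Combining addcoltotals with the appendpipe approach from the answer above gives the full tail of the search. A sketch, assuming the base search already produces the Name, Remaining and Completed columns:

| addcoltotals labelfield=Name label="Grand Total:"
| appendpipe
    [| eval Completed=if(Name="Grand Total:", round(100*Completed/(Completed + Remaining), 2), null())
     | eval Remaining=null()
     | eval Name=if(Name="Grand Total:", "Completion%", null())
     | where isnotnull(Name)]

The appendpipe subsearch sees the table including the totals row, computes the percentage from it, and appends it as a final Completion% row.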
@splunk4days I believe that by using the phantom.vault_add() API the file is "moved" from the tmp dir into the relevant file location on the platform where the vault storage is, rather than copied. I have not tested this, but I have also never had to clear the /tmp dir when using it for vault_add() API calls.
I am not sure how your solution works, since you are not setting _time when row=3, and it is not clear what "restricted" _time is, nor what your expected result should look like.
Try this

| rex "(?ms)DETAILS: (?<details>\[.*\])"
| spath input=details
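A runnable way to test the extraction against a made-up event (the sample _raw is hypothetical, shaped to match the pattern):

| makeresults
| eval _raw="2023-12-05 10:00:00 DETAILS: [{\"name\":\"alpha\",\"status\":\"ok\"}]"
| rex "(?ms)DETAILS: (?<details>\[.*\])"
| spath input=details

The rex captures the bracketed JSON after "DETAILS: " into a details field, and spath then parses that JSON into separate fields.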
I am loading the search from a datamodel, so I cannot do  | datamodel earliest=$<time_token>$-1h
I'm searching in verbose mode. Yes, I tried searching for a field and value, and the events are filtering.
But why exactly do you want to change the token itself? Isn't it enough to skew the timerange for the resulting search?
OK, first things first - are you searching in fast or verbose mode? Did you try to search for a value (even any value like something=*) in any of those fields?
That is kinda strange. If you check it on regex101 - https://regex101.com/r/Bavlui/1 (I have no idea how long the saved regexes are kept) - it seems to work. As you can see, group 1 is properly matched to the space between events. So there might be something not 100% copy-pasteable, and your events might actually look a bit different (maybe they have some hanging spaces/tabs or something like that). In general, your LINE_BREAKER should match the place at which you want to break the stream into separate events, and it must contain a capturing group matching the part which separates one event from another. That group is discarded as the "spacer" between events.
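To illustrate the pattern, a minimal props.conf sketch (the sourcetype name and the leading date pattern are placeholders; adapt them to what your events actually start with):

[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}

The capturing group ([\r\n]+) matches the newlines separating events and is discarded as the spacer, while the date pattern that follows is kept as the beginning of the next event.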
Hi @Mr_Adate,
the solution is the same as in my previous answer: to compare values from the two lookups, you have to rename the fields so that both lookups share the same field name:

| inputlookup ABC.csv
| eval lookup="ABC.csv"
| fields lookup Firewall_Name
| append
    [| inputlookup XYZ.csv
     | eval lookup="XYZ.csv"
     | rename Firewall_Hostname AS Firewall_Name
     | fields lookup Firewall_Name]
| chart count OVER lookup BY Firewall_Name

Ciao. Giuseppe
Transform your lookup so that every productID has its own row. Then you can use the lookup in its native way. It will lead to large lookup files, but the lookup itself is still very performant. Every workaround with map, subsearches etc. will be slow.
I do not recommend using map. It is an extremely slow command. As far as I know, Splunk unfortunately does not support range lookups. We also had this issue, and in the end we transformed our lookup file so that every value of the range is a single row. It leads to large lookup files, but performing the lookup is still much more performant than map or similar commands.
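That expansion can be done inside Splunk itself. A sketch, assuming the original lookup (ranges.csv, a hypothetical name) has numeric range_start and range_end columns alongside the payload fields:

| inputlookup ranges.csv
| eval productID=mvrange(range_start, range_end + 1)
| mvexpand productID
| fields - range_start range_end
| outputlookup products_expanded.csv

mvrange(start, end) generates the integers from start up to, but not including, end as a multivalue field; mvexpand then turns each value into its own row, so products_expanded.csv can be used as a plain per-productID lookup.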