All Posts

@meshorer I would really need to see the app code to give any informed advice. The tutorial should have the information you need, and within the IDE itself you should be able to see the process output when testing the action in the IDE. Even if the data is being saved, to see it in a playbook as an output datapath you need to map the output datapaths in the app's JSON file. I would look at other apps and try to spot any difference in your code when adding results, and also in the JSON structure for the action outputs. Happy SOARing!
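For reference, a minimal sketch of the kind of `output` block an action needs in the app JSON so its results show up as datapaths in playbooks — the action name and datapaths here are purely illustrative, not taken from the original app:

```json
"actions": [
  {
    "action": "lookup user",
    "output": [
      { "data_path": "action_result.data.*.username", "data_type": "string" },
      { "data_path": "action_result.status", "data_type": "string" }
    ]
  }
]
```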
Try something like this:
<change>
  <eval token="starttime">relative_time($timepicker.earliest$, "-1h")</eval>
  <eval token="finishtime">relative_time($timepicker.latest$, "-1h")</eval>
</change>
Hi Splunk Gurus, @ITWhisperer I need your expertise in solving a rather simple issue which is taking a lot more hours for me to solve than it should. I'm trying to create a table in Splunk which should display a Grand Total row under the data and also display a Completion% under the "Completed" field. I'm able to achieve the grand total using addcoltotals. However, I'm unable to display the Completion% under the "Completed" field. Here is how the table should look:

Name          Remaining   Completed
Alice         25          18
Bob           63          42
Claire        10          7
David         45          30
Emma          80          65
Grand Total:  223         162
Completion%               42.07%

Percentage calculation = 162/(223+162)*100. I tried using the eval function to calculate the percentage, but it calculates for each row in a new field. Can you please help me out? Many thanks. Much appreciated!
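A hedged sketch of one way to get both extra rows with appendpipe (untested; it assumes the base search already yields Name, Remaining and Completed, and the subsearch contents are illustrative):

```
... | table Name Remaining Completed
| appendpipe
    [ stats sum(Remaining) AS Remaining sum(Completed) AS Completed
      | eval Name="Grand Total:" ]
| appendpipe
    [ where Name="Grand Total:"
      | eval Name="Completion%", Completed=round(Completed/(Remaining+Completed)*100, 2)."%"
      | fields Name Completed ]
```

The second appendpipe reuses the totals row, so the percentage is computed from the grand totals rather than per event.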
@splunk4days I believe that by using the phantom.vault_add() API the file is "moved" from the tmp dir into the relevant file location on the platform where the vault storage is, rather than copied. I have not tested this, but I have also never had to clear the /tmp dir when using it for vault_add() API calls.
I am not sure how your solution works, since you are not setting _time when row=3, and it is not clear what "restricted" _time is, nor what your expected result should look like.
Try this | rex "(?ms)DETAILS: (?<details>\[.*\])" | spath input=details
I am loading the search from a datamodel, so I can not do  | datamodel earliest=$<time_token>$-1h
I'm searching in verbose mode. Yes, I tried searching for a field and value; the events are filtering.
But why exactly do you want to change the token itself? Isn't it enough to skew the timerange for the resulting search?
OK, first things first - are you searching in fast or verbose mode? Did you try to search for a value (even any value like something=*) in any of those fields?
That is kinda strange. If you check it on regex101 - https://regex101.com/r/Bavlui/1 (I have no idea how long the saved regexes are kept) - it seems to work. As you can see, the group 1 is properly matched to the space between events. So there might be something not 100% copy-pasteable and your events might actually look a bit different (maybe have some hanging spaces/tabs or something like that). In general, your LINE_BREAKER should match the place on which you want to break the stream into separate events and must contain a capturing group which will match the part which separates one event from another. That group will be discarded as the "spacer" between events.
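For illustration, a hedged props.conf sketch of that shape (the lookahead here assumes events start with an ISO-style date, which may not match your actual data):

```
[my_sourcetype]
SHOULD_LINEMERGE = false
# capturing group 1 = the spacer that is discarded between events
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
```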
Hi @Mr_Adate, the solution is the same as in my previous answer: you have to rename the fields so the two lookups share the same field name, in order to compare values from the two lookups: | inputlookup ABC.csv | eval lookup="ABC.csv" | fields lookup Firewall_Name | append [ | inputlookup XYZ.csv | eval lookup="XYZ.csv" | rename Firewall_Hostname AS Firewall_Name | fields lookup Firewall_Name ] | chart count OVER lookup BY Firewall_Name Ciao. Giuseppe
Transform your lookup so that every productID has its own row. Then you can use the lookup in its native way. It will lead to large lookup files, but the lookup itself is still very performant. Every workaround with map, subsearches, etc. will be slow and inefficient.
I do not recommend using map. It is an extremely slow command. As far as I know, Splunk unfortunately does not support range lookups. We also had this issue, and in the end we transformed our lookup file so that every value of the range is a single row. It leads to large lookup files, but performing the lookup is still much more performant than map or similar commands.
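As a hedged sketch, a range lookup with columns range_start, range_end and label (illustrative names, not from the original post) could be expanded into one row per value with a one-off search like:

```
| inputlookup ranges.csv
| eval productID=mvrange(range_start, range_end + 1)
| mvexpand productID
| fields productID label
| outputlookup ranges_expanded.csv
```

mvrange is exclusive of its upper bound, hence the + 1 to keep range_end in the expansion.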
Hello Friends, I need your help to find matching field values and their total count by comparing two different lookup files. | inputlookup ABC.csv | fields Firewall_Name | stats count | inputlookup XYZ.csv | fields Firewall_Hostname | stats count My goal is to compare the two lookup files, matching the field Firewall_Name against Firewall_Hostname, and get the count of matching field values. For example, if in ABC.csv the field Firewall_Name has a total count of 1000, and in the second lookup file XYZ.csv the field Firewall_Hostname has a total count of 850, then my result should display all matched values with their count, so I can confirm that all values in XYZ.csv match values in ABC.csv and all firewalls are up and running, with a total matched firewall count of 850.
Hi @umithchada, please try this: | rex field=ELAPSED "(?<dd>\d*)\-?(?<hh>\d*)\:?(?<mm>\d*)\:?(?<ss>\d*)$" | eval elapsed_secs=(dd * 86400) + (hh * 3600) + (mm * 60) + (ss * 1) | table ELAPSED elapsed_secs _time You can test the regex at https://regex101.com/r/VfyG4S/1 Ciao. Giuseppe
Hi @Tyrian01, it's a very slow search, but try: index=nessus source="*2019_04_17_CRIT_HIGH.csv" If you still have these logs, you should be able to find them. The problem could be retention: how long do you keep logs in your system? Ciao. Giuseppe
thanks @bowesmana - Unfortunately, I could not accept 2 answers but this helped. Thank you.
@kymkin I'm not exactly sure where the install is failing for you, but I can tell you the additional parameters I've successfully used for my install script:
- The directory of the forwarder program file location (i.e., C:\ or D:\ drive before the .msi file name).
- The INSTALLDIR parameter (determines the install location of the UF program).
- I add the license agreement parameter prior to the log collection parameters. Not sure if this actually changes the install process or not.
- SPLUNKUSERNAME/SPLUNKPASSWORD parameters to set your own admin credentials.
- The /passive end flag (instead of /quiet). This is essentially a quiet installation with a progress display.
Hope this helps.
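Put together, the install command might look roughly like this (drive, paths, and credentials are illustrative placeholders, and you would add your own log collection parameters):

```
msiexec.exe /i "D:\installers\splunkforwarder-x64.msi" ^
  AGREETOLICENSE=Yes ^
  INSTALLDIR="C:\Program Files\SplunkUniversalForwarder" ^
  SPLUNKUSERNAME=admin SPLUNKPASSWORD=YourStrongPassword ^
  /passive
```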
I have one question. The solution shows the time range as a restricted _time. Is it possible to expand it to show the selected time range defined in the time range picker, i.e. the range from addinfo's info_min_time to info_max_time?