All Posts

Hi @pagillar, Please see here for instructions on how to troubleshoot further: https://community.splunk.com/t5/Installation/Install-issue-on-Server-2016/m-p/540173/highlight/true#.... Cheers, - Jo.
Hello @gcusello, As per the subject: the sequence number differs in every transaction's logs, so how can we write a search when the values are all dynamic (not the same numbers in every transaction)? Each transaction's logs contain a sequence from 1 to n (the last number); if any number between 1 and N is missing, I need to detect it. Can you help with this? Thanks in advance.
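Outside of SPL, the gap check this question describes can be sketched in Python; the function name and the sample sequence values below are made up for illustration:

```python
# Illustrative sketch (names and sample values are hypothetical): given the
# sequence numbers extracted from one transaction's logs, report any numbers
# missing between the first and the last.
def find_missing(seq_strings):
    nums = sorted(int(s) for s in seq_strings)
    full = set(range(nums[0], nums[-1] + 1))
    return sorted(full - set(nums))

print(find_missing(["872510", "872511", "872513", "872515"]))  # [872512, 872514]
```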
Hi @Kingsly007, please, next time, open a new question, even if it's on the same topic: you'll get a faster and probably better answer! In addition, at the end of the analysis, you can accept the answer and give more information to the other people of the Community. Anyway, could you describe better what you mean by "dynamic"? If you have comma-separated values, the number of them isn't relevant. Could you share a sample of your logs? Ciao. Giuseppe
Hello @gcusello, Thanks for your approach, I appreciate it. I have another question: if the numbers are dynamic, how can we split the comma-separated values and display them individually in a table?
Use CSS to set the font size to zero. (You will need to give your panel an id.) Try something like this:

  <panel depends="$stayhidden$">
    <html>
      <style>
        #hiddentext td {
          font-size: 0 !important;
        }
      </style>
    </html>
  </panel>
  <panel>
    <table id="hiddentext">
We've looked a bit more into this case. The error is coming from the script "identity_manager.py" in the app "SA-IdentityManagement". The error is generated in the following "for" loop:

  for url, path, size, last_updated in update_times:
      if path and last_updated:
          lookup[url] = last_updated
      else:
          logger.error('status="Lookup file error, unknown path or update time" name=%s', url)

The "update_times" array comes from the method "get_lookup_table_file_update_times", which in turn ultimately relies on the Python function "importlib.util.spec_from_file_location". We were thinking that this error might come from that package and not from Splunk per se, but when we look at the actual lookup CSV file in the Linux OS, it is there and has its last-modified time set, so that is not the cause either. So, we still haven't figured this out.
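For anyone debugging the same symptom, a minimal sketch of the OS-side check described above (the path is hypothetical): verify the lookup CSV exists and has a last-modified timestamp, mirroring the "path and last_updated" test in the loop.

```python
import os

# Illustrative sketch (the path you pass is hypothetical): verify from the OS
# side that a lookup CSV exists and report its last-modified time, mirroring
# the "path and last_updated" check in the loop above.
def lookup_update_time(path):
    if os.path.isfile(path):
        return os.path.getmtime(path)  # seconds since the epoch
    return None
```

If this returns a valid timestamp while the Splunk script still logs the error, the problem is more likely in how update_times is assembled than in the file itself.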
Hi @Naga1, please try the approach of my example:

  | makeresults
  | eval my_field="00000000000000872510,00000000000000872511,00000000000000872512,00000000000000872513,00000000000000872514,00000000000000872515,00000000000000872516,00000000000000872517,00000000000000872518,00000000000000872519,00000000000000872520,00000000000000872521,00000000000000872522,00000000000000872523,00000000000000872524,00000000000000872525,00000000000000872526,00000000000000872527,00000000000000872528,00000000000000872529,00000000000000872530,00000000000000872531,00000000000000872532,00000000000000872533"
  | makemv delim="," my_field
  | fields - _time
  | mvexpand my_field

Ciao. Giuseppe
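For readers less familiar with SPL, a rough Python analogue of the search above (illustrative only, with a shortened sample value): makemv splits the field on the comma delimiter, and mvexpand emits one row per value.

```python
# Rough Python analogue (illustrative only) of the SPL above: makemv splits
# the field on the comma delimiter, and mvexpand emits one row per value.
my_field = "00000000000000872510,00000000000000872511,00000000000000872512"
values = my_field.split(",")   # makemv delim=","
for value in values:           # mvexpand: one row per value
    print(value)
```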
I have a list of comma-separated numbers in a single Splunk event field, and many events like the one below. How can I split these comma-separated values and display them in the table format shown below? Any suggestions?

Sequence Numbers processed during this transaction : 00000000000000872510,00000000000000872511,00000000000000872512,00000000000000872513,00000000000000872514,00000000000000872515,00000000000000872516,00000000000000872517,00000000000000872518,00000000000000872519,00000000000000872520,00000000000000872521,00000000000000872522,00000000000000872523,00000000000000872524,00000000000000872525,00000000000000872526,00000000000000872527,00000000000000872528,00000000000000872529,00000000000000872530,00000000000000872531,00000000000000872532,00000000000000872533

How can I split these comma-separated values and display them individually in a table like:

00000000000000872510
00000000000000872511
00000000000000872512
00000000000000872513
00000000000000872514
00000000000000872515
00000000000000872516
...and so on, up to 00000000000000872533
Hi @samsign, I suppose that you are trying to add the index on a Heavy Forwarder and not on an Indexer. If this is your situation, it's normal, because indexes aren't local on the HF. You have two solutions: manually modify the inputs.conf file via SSH, or create an empty local index on the HF that you use only for this configuration. Ciao. Giuseppe
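As an illustration of the first option, the index is chosen per input stanza in inputs.conf; the monitor path, index name, and sourcetype below are hypothetical:

```ini
# Hypothetical example stanza: route this file's events to a specific index.
[monitor:///var/log/myapp/app.log]
index = my_custom_index
sourcetype = myapp:log
```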
Hi @Netza, I did the same process hinted at by @richgalloway some years ago. In addition, you could open a non-technical case with Splunk Support. Ciao. Giuseppe
Hi @mikefg, as @richgalloway said, it's a best practice to disable the KV Store on all Splunk servers except Search Heads, to free the resources for other purposes. Note, however, that some add-ons that must be installed on HFs or IDXs use the KV Store, and disabling it will give you error messages from them. Anyway, you can disable the KV Store by adding the following stanza to server.conf:

  [kvstore]
  disabled = true

Ciao. Giuseppe
Hi @anooshac, I don't think that's possible using the Classic Dashboard interface, except maybe by modifying the dashboard CSS, but I'm not sure. It should be possible in Dashboard Studio. Ciao. Giuseppe
Hi @innoce, as @bowesmana said, you have to extract the second value from the second field. Are you sure about the position of the second value in the second field? If it's always after "A=" and always at the beginning of the field, you could use the following regex:

  <your_search>
  | rex field=b "^A\=(?<A>[^,]*)"
  | where a=A

which you can test at https://regex101.com/r/9hePOP/1. Otherwise you have to modify the regex, using the same approach. Ciao. Giuseppe
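The same pattern idea used in the rex command can be checked quickly in Python; the sample value of b below is made up:

```python
import re

# Illustrative check of the same pattern used in the rex command above:
# capture the value after "A=" at the start of field b, up to the first
# comma. The sample value of b is made up.
b = "A=foo,B=bar,C=baz"
m = re.match(r"^A=(?P<A>[^,]*)", b)
print(m.group("A"))  # foo
```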
Hi @mninansplunk, the colour is assigned to a cell based on its value, and you can choose to colour the text or the background, but not both of them. If you only want an indicator of a status without a value, you could look at the "Table Icon Set (Rangemap)" example in the Splunk Dashboard Examples app (https://splunkbase.splunk.com/app/1603), which uses an icon as a status indicator without displaying any value, even though I don't understand why you don't want to display one. If you don't want to display a raw value but a fixed label instead (e.g. High, Medium, Low), you could use eval to assign these labels to the field and define colours based on these new values, still using the GUI (choosing ranges instead of values). Ciao. Giuseppe
Hi @aditsss, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Good morning. For example, I have the following numbers:

+140871771234, +140871771245, +140871771286
+171522334321, +171522334325, +171522334329
+151688325297, +151688325258, +151688325239

and the ranges:

+1408717712XX, site code A
+1715223343XX, site code B
+1516883252XX, site code C

When a number falls in one of these ranges, how do I assign the site code? Thank you.
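One way to read the question: each range like +1408717712XX is a fixed 11-character prefix, so the mapping can be sketched as a prefix lookup (the dictionary below just restates the ranges from the question; in Splunk this would typically be done with an eval substr() plus a lookup table):

```python
# Illustrative sketch: each range such as +1408717712XX is an 11-character
# prefix (including the "+"), so a number maps to a site code by its prefix.
PREFIX_TO_SITE = {
    "+1408717712": "A",
    "+1715223343": "B",
    "+1516883252": "C",
}

def site_code(number):
    return PREFIX_TO_SITE.get(number[:11])

print(site_code("+140871771234"))  # A
```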
So, the answer from Splunk Support: You should not remove the file libcrypto.so.1.0.0; it is part of the libraries. This file exists in a fresh 9.1.0.2 Splunk installation too, so it is not left over from an old upgrade. Splunk version 9.1.0.2 uses OpenSSL 1.0.2zg. The topic about the CVE-2023-3446 vulnerability was sent to the developer team. In the meantime, Tenable apparently found out that they'd been a bit premature: OpenSSL disappeared from their scans.
I found a solution. I just needed to write:

  case(
      like('operating-system',"Microsoft Windows Server%"), "Windows Server",
      like('operating-system',"Microsoft Windows%"), "Windows OS",
      like('operating-system',"%Linux%"), "Linux",
      like('operating-system',"%CentOS%"), "Linux",
      like('operating-system',"%Debian%"), "Linux"
  )
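For comparison, a rough Python restatement of the case()/like() logic above (illustrative only). As in SPL's case(), the first matching branch wins, which is why the "Windows Server" pattern must come before the plain "Windows" one:

```python
# Illustrative restatement of the SPL case()/like() logic: patterns are
# checked in order and the first match wins.
def classify_os(os_name):
    if os_name.startswith("Microsoft Windows Server"):
        return "Windows Server"
    if os_name.startswith("Microsoft Windows"):
        return "Windows OS"
    if any(kw in os_name for kw in ("Linux", "CentOS", "Debian")):
        return "Linux"
    return None

print(classify_os("Microsoft Windows Server 2019"))  # Windows Server
```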