All Posts

If you need parallelism, then you must use KV Store-based lookups, not CSV-based ones.
Hi, the easiest way is to use separate virtual machines, but you could also do it e.g. on a single Linux box in a lab environment. In production this is not the recommended way to do it. On Linux you can just install the indexer first and start it. Then install the UF and start it with different ports for management and probably some others. You just have to check the correct parameters in the docs (I cannot find them now). But if I recall correctly, the UF will tell you that the normal ports are reserved and ask you to use some other ports. r. Ismo
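For example, the second instance's management port can be moved away from the default 8089. A hedged sketch (the stanza and setting are standard, but verify the exact file and a free port for your version in the docs; port 8090 here is just an arbitrary choice):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf on the Universal Forwarder
[settings]
mgmtHostPort = 127.0.0.1:8090
```

Alternatively, the CLI command `splunk set splunkd-port 8090` should achieve the same thing; restart the instance after changing it.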
Hi @PickleRick, thanks for the response. I tried something similar to this: I fetched %userprofile%, saved it to a variable, and then called the variable as part of another command, but it didn't help. Can you give an example?
It seems that Splunk doesn't support using a normal (custom) command's return data as a value for eval. I suppose you would have to rewrite your custom command as a function to use it with eval. What is the actual issue you are trying to solve this way? Maybe there is some other way to do it; otherwise you would have to create an additional custom function or something similar.
Hello! I am trying to upgrade to the latest version of Splunk Enterprise 9.3 on a RHEL 8 server, but I am getting this error message after accepting the license. Has anyone seen this error? I have checked the permissions, and they are all fine. Thanks!

Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 846, in exec_module
  File "<frozen importlib._bootstrap_external>", line 982, in get_code
  File "<frozen importlib._bootstrap_external>", line 1039, in get_data
PermissionError: [Errno 1] Operation not permitted: '/opt/splunk/lib/python3.9/encodings/__init__.py'
Hi @sherwin_r, if you cannot follow the solution from @isoutamo, the only way is my first solution: modify apps.conf on the Deployer and push the app to the Search Head Cluster. Ciao. Giuseppe
Hi @AL3Z, in this case you have to configure two connected virtual machines: one with Splunk Enterprise and one with the Splunk Universal Forwarder. Ciao. Giuseppe
Hi @chimuru84, in this case you have to add some fields to the stats command, but the approach is always the same:

index=...... earliest=-2y latest=-h
    [ search index=...... earliest=-h latest=now
    | dedup id
    | fields id ]
| eval Period=if(_time>now()-31536000, "last Year", "Previous Year")
| stats dc(Period) AS Period_count values(Period) AS Period earliest(_time) AS first_date latest(_time) AS last_date BY id
| where Period_count=1 AND Period!="Previous Year"
| eval nr_of_days=last_date-first_date, first_date=strftime(first_date,"%Y-%m-%d %H:%M:%S"), last_date=strftime(last_date,"%Y-%m-%d %H:%M:%S")
| table id nr_of_days first_date last_date

Ciao. Giuseppe
Unfortunately I don't think you have any other option. That was the reason why @gjanders made this app...
Hi @Taruchit, in addition, you could use a KV Store-based lookup, so you'll have a key that guarantees the uniqueness of the data. Just one question: you seem to be trying to use Splunk as a database, and Splunk isn't a database. Are you sure you're using the best solution for your requirements? Ciao. Giuseppe
I will give it a try, thanks for the help.
The lookup file is written by the search head in one go.  There is no parallelism.
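To see why concurrent whole-file rewrites are risky, here is a minimal Python sketch (not Splunk code — the dicts merely stand in for CSV lookup contents) of the classic lost-update problem that keyed KV Store upserts avoid:

```python
# Simulate two reports that each read the whole CSV lookup,
# add their own rows, and write the whole file back.
lookup = {"hostA": "index1"}     # current lookup contents

snap1 = dict(lookup)             # report 1 reads the file
snap2 = dict(lookup)             # report 2 reads it before report 1 writes
snap1["hostB"] = "index2"        # report 1 adds its host
snap2["hostC"] = "index3"        # report 2 adds its host

lookup = snap1                   # report 1 writes the whole file back
lookup = snap2                   # report 2 overwrites it: hostB is lost

print("hostB" in lookup)         # → False: report 1's update vanished

# A keyed upsert (what a KV Store lookup does) touches only its own key,
# so both updates survive regardless of ordering:
kv = {"hostA": "index1"}
kv["hostB"] = "index2"           # report 1 upserts its key
kv["hostC"] = "index3"           # report 2 upserts its key
print(sorted(kv))                # → ['hostA', 'hostB', 'hostC']
```

This is why the answers above steer parallel updaters toward KV Store lookups: the per-key writes compose, while whole-file CSV rewrites silently drop whichever writer finishes first.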
Hi @Skins, I ran into the same limitation - the inability to use an ingest-time lookup on hw. Did you manage to solve this issue?
If your problem is resolved, then please click an "Accept as Solution" button to help future readers.
Hello again @gcusello! Sorry again. I want to return id, nr_of_days (the difference between last_date and first_date), the login of last_date (which could be today, yesterday, 1 month ago, etc.), and the login of first_date (where first_date is 365 days ago or more).
I looked at the events for the component you mentioned and found that there is only one type of log entry. I also tried it for the "last 7 days" time range.  
How can you say your script is executing fine if it is not doing what you expect? Try running the search without the collect command and see what happens. Try running the search with different time chunks (if possible) to see what happens. Try running parts of the search to see whether data goes missing at any point. Try other ways to diagnose your issue, as you haven't given us any useful information (so far) to help determine what might be going on.
Hi, I have a Splunk dashboard created in Dashboard Studio. The dashboard has 3 tables, and all the values in these tables are either left- or right-aligned, but I want them to be center-aligned. I tried finding solutions, but all the solutions mentioned in other posts are for Classic dashboards, which are written in XML. How can we do this in a JSON-defined dashboard? Thanks, Viral
Hello All, I have a lookup file which stores data about hosts across multiple indexes. I have reports which fetch information about the hosts from each index and update the records in the lookup file. Can I run a parallel search for the hosts related to each index and thus update the same lookup file in parallel? Or is there a risk to performance or to the consistency of the data? Thank you, Taruchit
@ITWhisperer My script is executing fine, but it is filling in no data for 5th August in the summary index.