All Posts

Hi @hettervik

How much data are we talking here? Is this GB/TB? Ultimately, the best approach depends on the amount of data you need to extract/re-index. The collect approach might still be viable, but it should be scripted to run smaller increments continuously until you've extracted what you need. Alternatively, you could take a similar incremental approach and export blocks of the data using the Splunk REST API endpoints; see https://help.splunk.com/en/splunk-enterprise/search/search-manual/9.3/export-search-results/export-data-using-the-splunk-rest-api for more info. You can then re-ingest the exported data using a UF/HF.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
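Following up on the scripted approach above, here is a minimal sketch that exports one day at a time via the REST export endpoint. It assumes a plain "index=main" search, default management port 8089, admin credentials, and GNU date; all of those are placeholders to adapt to your environment:

#!/bin/bash
# Sketch: export data in daily increments via /services/search/jobs/export,
# writing each day's results to its own CSV for later re-ingestion.
HOST="https://localhost:8089"
START="2024-01-01T00:00:00"
for day in $(seq 0 30); do
  earliest=$(date -d "$START + $day days" +%Y-%m-%dT%H:%M:%S)
  latest=$(date -d "$START + $((day+1)) days" +%Y-%m-%dT%H:%M:%S)
  curl -sk -u admin:changeme "$HOST/services/search/jobs/export" \
    --data-urlencode search="search index=main" \
    -d output_mode=csv \
    -d earliest_time="$earliest" \
    -d latest_time="$latest" > "export_day_${day}.csv"
done

Smaller windows keep each export job short, which avoids the timeouts you would hit exporting TBs in a single job.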
Hello @tej57, Thanks for your response. I can confirm that my dashboard is already built as a Classic Dashboard, not Dashboard Studio. However, even in Classic mode, the $click.value$ token is not populated when clicking a node. The only token that works is $click.name$.
This led me to the solution! I tried searching for eventtype=account_locked and got nothing. It turns out my eventtypes were not global. Not only that, but I needed to put a copy of my tags.conf in the search app instead of leaving it local to the [logstream] app.
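For anyone landing here later, a sketch of the metadata change that achieves the same thing without copying files, assuming the app is [logstream] as above (the stanzas are illustrative):

# $SPLUNK_HOME/etc/apps/logstream/metadata/local.meta
# Share this app's eventtypes and tags globally,
# instead of maintaining a copy of tags.conf in the search app.
[eventtypes]
export = system

[tags]
export = system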
Are you having a problem with your SHC? If so, please post a new thread describing it in detail. If not, what are those settings supposed to accomplish? And why these particular values?
I added the following to server.conf on the SHC members:

[httpServer]
maxThreads = 8000
maxSockets = 8000
Hi @livehybrid, I'm not trying to build a custom visualization; I'm trying to visualize a 3D object in a dashboard panel.
Hey @atme, It would be complex to extract all of these fields at index time, and the computational load would also increase. I would prefer going for search-time extractions. However, if you still wish to extract fields at index time, it would be great if you could share what you've configured so far in props and transforms. Since the number of fields in the _raw event also varies, we need to define a regex-based pattern or key-value pair extraction. Thanks, Tejas.
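To make the search-time route concrete, here is a minimal sketch; the sourcetype and transform names are placeholders, and the regex assumes the events are roughly key=value formatted:

# props.conf (on the search head)
[my:custom:sourcetype]
# If events are simple key=value pairs, automatic KV extraction may be enough:
KV_MODE = auto
# Otherwise, point at a regex-based transform:
REPORT-extract_pairs = extract_kv_pairs

# transforms.conf
[extract_kv_pairs]
REGEX = (\w+)=("[^"]*"|\S+)
FORMAT = $1::$2

Because this runs at search time, a varying number of fields per event is fine; the regex simply matches however many pairs are present.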
Hey @luispulido, As the warning suggests, it is possible that drilldown might not function properly in Dashboard Studio. However, upon checking Splunkbase, it seems the app works well with Classic Dashboards. If you can switch your dashboard back to Classic, I suppose you'll be able to have a fully functional visualization, and you can then utilize the tokens as per the use case as well. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated!
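For reference, a Classic (Simple XML) sketch of wiring a drilldown token; the viz type and query are placeholders. Note that custom visualizations only populate the drilldown tokens they explicitly emit, which is why $click.name$ may work while $click.value$ stays empty:

<panel>
  <viz type="my_app.my_custom_viz">
    <search>
      <query>index=main | stats count by node</query>
    </search>
    <drilldown>
      <set token="selected_node">$click.name$</set>
    </drilldown>
  </viz>
</panel>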
Still a known issue in Splunk 10.0: SPL-226019.
Hi @unclemoose

Firstly, setting tags = authentication, failure, account_locked in your eventtypes.conf is deprecated, so you should probably remove it in case it's causing an issue.

Secondly, I wanted to check what search mode you are using. Are you using Fast mode? If so, you probably won't see the eventtype/tag fields come back. Try running in Smart or Verbose mode; do you see the tags returned then?

Lastly, where are these files within your environment? Are they in a custom/specific app? Are you running the search from that same app? Are the configurations shared globally with the system, or only within their app?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
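For completeness, here is what replacing the deprecated key looks like; the search string is a placeholder, and the tag names are taken from your post:

# eventtypes.conf -- define the eventtype only, no "tags =" key
[account_locked]
search = index=security "account locked"

# tags.conf -- tags live here, keyed by the eventtype
[eventtype=account_locked]
authentication = enabled
failure = enabled
account_locked = enabled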
Apart from what has already been said about permissions, the question is: what is your architecture (all-in-one, separate indexing and search-head layers, any pre-parsing HFs)? And where did you put those props and transforms? You can verify which settings Splunk actually picks up with btool, as sketched below. Also, don't use indexed extractions unless there is absolutely no other way (not related to the problem at hand, but worth remembering).
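A quick way to settle the "where did you put them" question is btool, which prints the file each effective setting comes from; the sourcetype and stanza names below are placeholders:

# Run on the instance that should apply the settings
# (indexer/HF for index-time, search head for search-time):
$SPLUNK_HOME/bin/splunk btool props list my:custom:sourcetype --debug
$SPLUNK_HOME/bin/splunk btool transforms list --debug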
OK, I'll try. But was there something I did wrong when deploying the app? Why did I end up in this state, and how can I prevent it in the future?
Hello Splunk Community,

I'm working in the Behavioral Profiling app to create an Anomaly Scoring Rule. In the Define Indicator Source step, I have successfully selected my Behavioral Indicator (e.g., "Amount Transaction"), but the Scoring Field dropdown is disabled / showing a red mark, and I'm unable to select any value.

Details:
- Behavioral Indicator: Amount Transaction
- Data is visible when I run the same SPL in Search & Reporting.
- Time Range: Last Day (also tried other ranges)
- Using the default fields from my dataset (contains account, amount, _time).
- The Scoring Field dropdown does not show any options.

What I have tried:
- Verified the field exists in my data.
- Changed the Time Range to ensure data is available.
- Recreated the Behavioral Indicator.

Questions: What specific requirements or field types does the Scoring Field expect? Do I need to modify the Behavioral Indicator definition or SPL so that this dropdown is populated? Any guidance or examples would be greatly appreciated. Thanks in advance!

The data that I have provided for profiling is as follows:

timestamp,account,amount
2025-08-11 11:25:56,ACC1001,2500
2025-08-11 11:25:56,ACC1001,3000
2025-08-11 11:25:56,ACC1001,5000
2025-08-11 11:25:56,ACC1002,1500
2025-08-11 11:25:56,ACC1002,2000
2025-08-11 11:25:56,ACC1003,8000
2025-08-11 11:25:56,ACC1003,4000
2025-08-11 11:25:56,ACC1004,12000
2025-08-11 11:25:56,ACC1005,600
2025-08-11 11:25:56,ACC1005,750
2025-08-11 11:25:56,ACC1006,5000
2025-08-11 11:25:56,ACC1006,7000
Hello @JykkeDaMan

This can be addressed by following the steps below:

1. Stop the deployer service to begin the resolution process.
2. Stop the Splunk service on each cluster node before proceeding.
3. On each cluster node, remove the keystore and password files:
   keystore/default.jks
   certs/keystore_password.dat
4. Delete the secret data from the Splunk storage collections by running the following command on each cluster node:
   curl -k -u username -X DELETE https://<host>:<management-port>/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/secret
5. Repeat steps 2 through 4 on all nodes in the cluster to ensure consistency.
6. After completing the above steps on all nodes, start the Splunk service again.

Raising a support case can also make such issues easier to work through.
Hi @uagraw01, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @splunkville, yes, it is correct, but what's your issue? Ciao. Giuseppe
Hi @unclemoose,

Where are you running the eventtype search: in the same app where it was created, or in another one? Check whether your eventtype is also visible outside the app where it was created; probably you shared your eventtype at app level and not at global level. Check the permissions of the eventtype.

Ciao. Giuseppe
Thanks mate, this is exactly what I was looking for.
Just use a couple of stats: first count by user, then create a new field combining the user and its count, then re-stats with the values, e.g.

| makeresults format=csv data="_time,user,src_ip
2025-08-11,ronald,192.168.2.5
2025-08-11,jasmine,192.168.2.5
2025-08-11,tim,192.168.2.6
2025-08-11,ronald,192.168.2.5"
``` Like this ```
| stats count by user src_ip
| eval user_count=user.":".count
| stats values(user*) as values_user* by src_ip