All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I have this search:
| rest splunk_server=MSE-SVSPLUNKI01 /services/server/status/resource-usage/hostwide | eval cpu_usage = cpu_system_pct + cpu_user_pct | where cpu_usage > 10
I want this search to produce a graph visualization of total cpu_usage every 4 hours.
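One thing to keep in mind: | rest only returns a point-in-time snapshot, so it cannot chart usage over time by itself. A common alternative, assuming introspection data is enabled on that host, is to read the _introspection index and let timechart do the 4-hour bucketing (a sketch, not a definitive answer; verify the host and field names in your environment):

```spl
index=_introspection sourcetype=splunkd host=MSE-SVSPLUNKI01 component=Hostwide
| eval cpu_usage = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=4h avg(cpu_usage) AS avg_cpu_usage
```

If introspection data is not available, another option is to schedule the | rest search every few minutes into a summary index and run timechart over that.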
Try using eventstats instead of join to keep both sent and received transactions. coalesce helps handle null values. This approach avoids lookup and maintains full data visibility while ensuring the correct filtering of accounts.
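A minimal sketch of the eventstats + coalesce pattern (the index and field names sender_account, receiver_account, sent_amount, and received_amount are hypothetical placeholders):

```spl
index=transactions
| eval account = coalesce(sender_account, receiver_account)
| eventstats sum(sent_amount) AS total_sent, sum(received_amount) AS total_received BY account
| where total_sent > 0 OR total_received > 0
```

Unlike join, eventstats leaves every original event in the pipeline and just annotates each one with the per-account totals, so no transactions are dropped.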
Hi there, I finally found the solution! To hide the Splunk bar in the React app, you just need to pass some parameters. In my case, I added them in index.jsx (where I render all my components), and it worked for me: { hideChrome: true, pageTitle: "Splunk React app", theme, hideSplunkBar: true }
It added a table like this:
info_max_time = +Infinity
info_min_time = 0.000
info_search_time = 17398492392.991
info_sid = 123123412132323
Is it because min_time = 0 and max_time = +Infinity? And what would be the solution?
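info_min_time = 0.000 and info_max_time = +Infinity typically mean the search ran with an "All time" range; these info_* fields are produced by the addinfo command (and added automatically by some commands). A quick way to confirm, as a sketch:

```spl
index=_internal earliest=-15m latest=now
| addinfo
| stats values(info_min_time) AS info_min_time, values(info_max_time) AS info_max_time
```

With an explicit time range, both fields come back as finite epoch timestamps instead of 0 and +Infinity, so setting a concrete earliest/latest on the original search is usually the fix.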
What is the definition of large? Is it measured in total bytes? Number of records? And in either case how much?
Thanks for your reply. Since I don't have the privilege to see that, I will follow up on this issue first. If it gets solved, I will give you an upvote/karma points.  Danke  Zake
@zksvc  Verify that the new user is replicated across all search heads in the cluster. You can use the splunk show shcluster-status command to check the status of your search head cluster and ensure all members are in sync. Use the Monitoring Console to view the status of your search head cluster and identify any issues with job execution. Please check these:
Solved: Why is a Search Head Cluster Member not replicatin... - Splunk Community
Use the monitoring console to view search head cluster status and troubleshoot issues - Splunk Documentation
Solved: Trying to run a search, why are we getting a "Queu... - Splunk Community
limits.conf - Splunk Documentation
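You can also query cluster state with a REST search run from any member, as a sketch (requires appropriate capabilities; the exact returned field names can vary by Splunk version):

```spl
| rest splunk_server=local /services/shcluster/status
| fields captain.label captain.elected_captain captain.service_ready_flag
```

If the captain fields look healthy but jobs still queue, the limits.conf search concurrency settings referenced above are the next place to look.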
@msatish  To update the changes successfully in the Qualys TA for Splunk, please follow these steps:
1) From Settings > Data Inputs, disable the TA inputs.
2) Delete the passwords.conf file.
3) Restart the Splunk instance.
4) Go to the TA configuration in Splunk Web and enter the credentials again.
5) Check that the passwords.conf file was created.
6) Enable the TA inputs from Data Inputs.
Hi Everyone, In my Splunk environment I have about 15 users, and the one responsible for creating correlation searches is a single account, let's say account 7. I plan to delete that account, so before deleting it I created another account with id 13 and moved every correlation search, saved search, and dashboard created by account 7 over to account 13, so that ownership transfers completely and account 7 can be deleted immediately.
My current problem is that when I switch to account 13, it gets the notification "Waiting for queued job to start. Manage Jobs", which means I cannot search at all. Even though account 13's role has been made identical to account 7's, and both the role search job limit and the user search job limit have been raised, the jobs are strangely still queued. What's even stranger is that account 13 searches fewer than about 5000 events/day, while other users search more than that with no problems. I attach a picture: in it, account 13 is in 4th place (the brown chart), account 7 is in 5th place, and the analyst accounts are in 1st, 2nd, and 3rd.
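To see exactly which of account 13's jobs are queued (and compare against the other users), a sketch against the search jobs REST endpoint may help; replace account13_username with the actual username:

```spl
| rest splunk_server=local /services/search/jobs
| search author="account13_username"
| table sid, author, dispatchState, runDuration, label
```

Jobs stuck with dispatchState=QUEUED alongside very few RUNNING jobs for that user would point at a per-user quota still being applied, e.g. an old role assignment cached on one search head.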
My bad. In this environment my friend set up a different inputs.conf, and the data comes from .evtx files, which are not readable in Splunk without some additional configuration. Sorry, guys.
@msatish  You have to either re-enter the credentials and delete the old ones, or reinstall the app. Check this documentation: https://community.splunk.com/t5/Getting-Data-In/Having-trouble-setting-up-TA-QualysCloudPlatform-App/m-p/566066 
Thank you for your response, but your query provides a list of dashboards that were accessed; I am looking for the number of dashboards that are unused/not accessed by anyone. 
The password of the Splunk user account in Qualys expired. We have reset the password, and the new credentials work fine in the GUI (https://qualysguard.qg2.apps.qualys.com/fo/login.php). However, the Splunk add-on (TA-QualysCloudPlatform) is still not accepting the new credentials, and logs are not flowing into Splunk. What might be the issue?   Steps followed: updated the new password in TA-QualysCloudPlatform and restarted Splunk.
I've given up. I don't know if it's a network issue on my side or what, but I'm just going to use standard REST API libraries. All the samples around splunk-sdk that I could find seem out of date, and I'm concerned about long-term support.
Hi @harishsplunk7  I don't think your search covers Dashboard Studio dashboards, only Simple XML dashboards (but I could be wrong). Have a go with the following search and let me know how you get on! index=_internal sourcetype=splunkd_ui_access earliest=-90d@d uri="*/data/ui/views/*" | rex field=uri "/servicesNS/(?<user>[^/]+)/(?<app>[^/]+)/data/ui/views/(?<dashboard>[^\.?/\s]+)" | search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports") | stats count as accessed by app, dashboard | append [| rest splunk_server=local "/servicesNS/-/-/data/ui/views" | rename title as dashboard, eai:acl.app as app | fields dashboard app | eval isDashboard=1] | stats sum(accessed) as accessed, values(isDashboard) as isDashboard by app, dashboard | search isDashboard=1 accessed>0
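Since the original ask was for unused dashboards, a variant of the same search flips the final filter to keep only dashboards with zero recorded accesses in the last 90 days (a sketch; the exclusion list and lookback are carried over unchanged):

```spl
index=_internal sourcetype=splunkd_ui_access earliest=-90d@d uri="*/data/ui/views/*"
| rex field=uri "/servicesNS/(?<user>[^/]+)/(?<app>[^/]+)/data/ui/views/(?<dashboard>[^\.?/\s]+)"
| search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports")
| stats count as accessed by app, dashboard
| append
    [| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
     | rename title as dashboard, eai:acl.app as app
     | fields dashboard app
     | eval isDashboard=1]
| stats sum(accessed) as accessed, values(isDashboard) as isDashboard by app, dashboard
| fillnull value=0 accessed
| where isDashboard=1 AND accessed=0
```

The rest inventory supplies every dashboard that exists; any row whose summed access count stays null (filled to 0) was never hit in the _internal access logs.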
Try something like this | spath resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attributes | mvexpand attributes | spath input=attributes | eval X_{key}=coalesce('value.doubleValue', 'value... See more...
Try something like this | spath resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attributes | mvexpand attributes | spath input=attributes | eval X_{key}=coalesce('value.doubleValue', 'value.stringValue') | stats values(X_*) as * by _raw
Build a query to show the history of alert management, including analyst name, status, and time in the analyst's queue. Hello, we are trying to pinpoint, with a report or a simple query, how long each analyst retains an alert in their queue. It will help us manage alerts more efficiently and determine bottlenecks in our process. It should be displayable in a table if possible. Thank you in advance.
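Assuming this is Splunk Enterprise Security, incident review status changes are recorded in the incident_review lookup; a sketch of time-per-status per analyst follows (verify the lookup and field names, such as rule_id, owner, status, and time, in your ES version, as they can differ):

```spl
| inputlookup incident_review
| sort rule_id time
| streamstats current=f last(time) AS prev_time BY rule_id
| eval time_in_status = time - prev_time
| stats sum(time_in_status) AS total_seconds BY owner, status
| eval total_hours = round(total_seconds / 3600, 2)
| table owner, status, total_hours
```

The idea is that the gap between consecutive status-change records for the same notable (rule_id) approximates how long it sat in the previous state with the previous owner.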
Created an answer with workaround for the xpath and prolog header line issue here:  https://community.splunk.com/t5/Splunk-Search/The-xpath-command-does-not-work-with-XML-prolog-header-lines-e-g/td-p/711425
splunkd.log has errors about BTree. I get about 10 messages a second logged in splunkd.log:
ERROR BTree [1001653 IndexerTPoolWorker-3] - 0th child has invalid offset: indexsize=67942584 recordsize=166182200, (Internal)
ERROR BTreeCP [1001653 IndexerTPoolWorker-3] - addUpdate CheckValidException caught: BTree::Exception: Validation failed in checkpoint
I have noticed that btree_index.dat and btree_records.dat are re-created every few seconds and appear to be copied into the corrupt directory. I have tried shutting down Splunk and copying snapshot files over, but when I restart Splunk they are overwritten and the whole loop of files being created and then copied to corrupt starts again.
I ran btprobe on the splunk_private_db fishbucket and the output was:
no root in /opt/splunk/data/fishbucket/splunk_private_db/btree_index.dat with non-empty recordFile /opt/splunk/data/fishbucket/splunk_private_db/btree_records.dat
recovered key: 0xd3e9c1eb89bdbf3e | sptr=1207
Exception thrown: BTree::Exception: called debug on btree that isn't open!
It is entirely possible there is corruption somewhere; we did have a filesystem issue a while back, and after running fsck there were a few files I had to remove. As far as data goes, I can't pinpoint where the problem might be. In Splunk search I appear to have incomplete data in the _internal index, and the Licensing and Data Quality views are empty with no data.
Any ideas on where to look next? Currently the LM, indexer, SH, and DS are all on the same host. I'm running Splunk Enterprise Version 9.4.0, Build 6b4ebe426ca6.
To work around this issue, remove the XML prolog header lines from the event before calling the xpath command, or use the spath command instead.  Here is a run-anywhere example. | makeresults | eval _raw="<?xml version=\"1.0\"?> <Event> <System> <Provider Name='ABC'/> </System> </Event> <!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\"> <Event> <System> <Provider Name='EFG'/> </System> </Event> <?xml version=\"1.0\"?> <!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\"> <Event> <System> <Provider Name='HIJ'/> </System> </Event>" | eval xml=replace(_raw, "<(\?xml|!DOCTYPE).+?>[\r\n]*", "") | xpath field=_raw outfield=raw_provider_name_attr "//Provider/@Name" | xpath field=xml outfield=xml_provider_name_attr "//Provider/@Name" | spath output=spath_provider_name_attr Event.System{2}.Provider{@Name} | table _raw raw_provider_name_attr xml* spath*