All Posts


@SN1  You can modify your search to aggregate cpu_usage over 4-hour intervals and visualize it.   
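The suggestion above can be sketched as follows. Note that `| rest .../resource-usage/hostwide` only returns a point-in-time snapshot, so to chart usage over time most deployments query the `_introspection` index instead; the index, `component`, and `data.*` field names below are the standard platform-instrumentation ones, but worth verifying in your environment:

```
index=_introspection host=MSE-SVSPLUNKI01 sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_usage = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=4h avg(cpu_usage) as avg_cpu_usage
```

Rendered as a line or area chart, this gives one data point per 4-hour bucket.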
@anglewwb35  The deployment server must run on a dedicated instance. In other words, you have to use it only to manage clients; whether you disable the other roles is not the point (dedicated means exactly that: don't use the machine for any additional role, including forwarding!)
@anglewwb35  For a heavy forwarder (HF), you should set up one of the following options:
1) Make the HF a slave of a license master. This will give the HF all of the enterprise capabilities, and the HF will consume no license as long as it does not index data.
2) Install the forwarder license. This will give the HF many enterprise capabilities, but not all. The HF will be able to parse and forward data. However, it will not be permitted to index, and it will not be able to act as a deployment server (as an example). This is the option I would usually choose. (Note that the Universal Forwarder has the forwarder license pre-installed.)
I strongly discourage using either the trial license or the free license on a production forwarder.
Licenses and distributed deployments - Splunk Documentation
I would like to know if it is possible to use the same machine for both a Deployment Server and a Heavy Forwarder. If so, would I need two different licenses for this machine? Or can I simply use the Forwarder license while using the functions of both the Deployment Server and the Heavy Forwarder? Thank you so much.
Hello, I have this search: | rest splunk_server=MSE-SVSPLUNKI01 /services/server/status/resource-usage/hostwide | eval cpu_usage = cpu_system_pct + cpu_user_pct | where cpu_usage > 10 I want this search to give a graph visualization of total cpu_usage every 4 hours.
Try using eventstats instead of join to keep both sent and received transactions; coalesce helps handle null values. This approach avoids the lookup and maintains full data visibility while ensuring correct filtering of accounts.
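A minimal sketch of the eventstats/coalesce pattern described above; the index and field names (`sender_account`, `receiver_account`, `direction`) are hypothetical stand-ins for your own data:

```
index=transactions
| eval account=coalesce(sender_account, receiver_account)
| eventstats count(eval(direction="sent")) as sent_count, count(eval(direction="received")) as received_count by account
| where sent_count > 0 AND received_count > 0
```

Unlike join, eventstats keeps every original event while attaching the per-account aggregates, so both sides of each transaction remain visible for filtering.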
Hi there, I finally found the solution! To hide the Splunk bar in the React app, you just need to pass some parameters. In my case, I added them in index.jsx (where I render all my components), and it worked for me:
{ hideChrome: true, pageTitle: "Splunk React app", theme, hideSplunkBar: true }
It added a table like this:
info_max_time    info_min_time    info_search_time    info_sid
+Infinity        0.000            17398492392.991     123123412132323
Is it because min_time = 0 and max_time = +Infinity? And what would be the solution?
What is the definition of large? Is it measured in total bytes? Number of records? And in either case how much?
Thanks for your reply. Since I don't have the privilege to see that, I will follow up on this issue first. If it gets solved, I will give you upvote/karma points. Danke, Zake
@zksvc  Verify that the new user is replicated across all search heads in the cluster. You can use the splunk show shcluster-status command to check the status of your search head cluster and ensure all members are in sync.  Use the Monitoring Console to view the status of your search head cluster and identify any issues with job execution.  Please check this: Solved: Why is a Search Head Cluster Member not replicatin... - Splunk Community Use the monitoring console to view search head cluster status and troubleshoot issues - Splunk Documentation Solved: Trying to run a search, why are we getting a "Queu... - Splunk Community limits.conf - Splunk Documentation
@msatish  In order to apply the changes successfully in the Qualys TA for Splunk, please follow the steps below:
1) From Settings > Data Inputs, disable the TA inputs
2) Delete the passwords.conf file
3) Restart the Splunk instance
4) Go to the TA config in the Splunk UI and enter the credentials again
5) Check whether the passwords.conf file was created
6) Enable the TA inputs from Data Inputs
Hi Everyone, In my Splunk environment I have about 15 users, but the one responsible for creating correlation searches is a single account, let's say account 7. I plan to delete that account; before deleting it, I created another account with id 13 and moved all the correlation searches / saved searches / dashboards created by account 7 to account 13, so that ownership of everything moves to account 13 and account 7 can be deleted immediately.
Currently, my problem is that when I switch to account 13, it gets the notification "Waiting for queued job to start. Manage Jobs", which means I cannot search. Even though account 13 has been given the same role as account 7, and both the role search job limit and the user search job limit have been raised for that role, it is strangely still queued. What's even stranger, account 13 only searches below about 5000 data/day, while other users search more than 5000 data without any problems. I attach a picture: in this case account 13 is in 4th place (the brown chart), account 7 is in 5th place, and the analyst accounts are in 1st, 2nd and 3rd.
My bad, in this environment my friend set up a different inputs.conf, and the data comes from .evtx files, which aren't readable in Splunk without some extra configuration. Sorry guys.
@msatish  You have to either re-enter the credentials and delete the old ones, or reinstall the app. Check this post: https://community.splunk.com/t5/Getting-Data-In/Having-trouble-setting-up-TA-QualysCloudPlatform-App/m-p/566066
Thank you for your response, but your query provides a list of the dashboards that were accessed, and I am looking for the number of dashboards that are unused / not accessed by anyone.
The password of the Splunk user account in Qualys expired. We have reset the password, and the new credentials work fine in the GUI (https://qualysguard.qg2.apps.qualys.com/fo/login.php). However, the Splunk add-on (TA-QualysCloudPlatform) is still not accepting the new credentials, and logs are not flowing into Splunk. What might be the issue?
Steps followed: updated the new password in TA-QualysCloudPlatform and restarted Splunk.
I've given up. I don't know if it's a network issue on my side or what, but I'm just going to use standard Restful API libraries. All the samples around splunk-sdk that I could find seem out of date and I'm concerned about long-term support.
Hi @harishsplunk7  I don't think your search covers Dashboard Studio dashboards, only Simple XML dashboards (but I could be wrong). Have a go with the following search and let me know how you get on!
index=_internal sourcetype=splunkd_ui_access earliest=-90d@d uri="*/data/ui/views/*"
| rex field=uri "/servicesNS/(?<user>[^/]+)/(?<app>[^/]+)/data/ui/views/(?<dashboard>[^\.?/\s]+)"
| search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports")
| stats count as accessed by app, dashboard
| append
    [| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
    | rename title as dashboard, eai:acl.app as app
    | fields dashboard app
    | eval isDashboard=1]
| stats sum(accessed) as accessed, values(isDashboard) as isDashboard by app, dashboard
| search isDashboard=1 accessed>0
Try something like this
| spath resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attributes
| mvexpand attributes
| spath input=attributes
| eval X_{key}=coalesce('value.doubleValue', 'value.stringValue')
| stats values(X_*) as * by _raw