All Topics

Hi everyone. In my Splunk environment I have about 15 users, and the one responsible for creating correlation searches works from a single account, let's say account 7. I plan to delete that account, so before deleting it I created another account with ID 13 and reassigned every correlation search, saved search, and dashboard owned by account 7 to account 13, so that account 7 can be removed immediately afterwards. My current problem is that after the move, account 13 gets the notification "Waiting for queued job to start. Manage Jobs" and cannot search. Account 13 has been given the same role as account 7, and I have also raised both the role search job limit and the user search job limit on that role, yet its searches are strangely still queued. Stranger still, account 13 searches fewer than 5,000 events per day, while other users search more than 5,000 with no problems. I have attached a picture: account 13 is in fourth place (the brown series), account 7 is in fifth, and the analyst accounts are in first through third.
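A quick way to see which jobs are queued and under which owner is the search jobs REST endpoint; a minimal diagnostic sketch, assuming dispatchState and eai:acl.owner are populated as usual on job entries:

| rest /services/search/jobs splunk_server=local
| search dispatchState="QUEUED"
| table eai:acl.owner title dispatchState
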
The password of the Splunk user account in Qualys expired. We have reset it, and the new credentials work fine in the GUI (https://qualysguard.qg2.apps.qualys.com/fo/login.php). However, the Splunk add-on (TA-QualysCloudPlatform) still does not accept the new credentials, and logs are not flowing into Splunk. What might be the issue?

Steps followed: updated the new password in TA-QualysCloudPlatform and restarted Splunk.
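The add-on's own internal logs usually show authentication failures; a hedged diagnostic sketch, assuming the TA writes its logs into _internal with a source name containing "qualys" (adjust the source pattern to what your instance actually shows):

index=_internal source=*qualys* (ERROR OR "401" OR "Unauthorized" OR "authentication")
| table _time source _raw
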
Build a query to show the history of alert management, including analyst name, status, and time in the analyst's queue. Hello, we are trying to pinpoint, with a report or a simple query, how long each analyst retains an alert in their queue. It will help us manage alerts more efficiently and determine bottlenecks in our process. It should be displayable in a table if possible. Thank you in advance.
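One generic approach is to compute, per alert, the gap between consecutive status-change events. A minimal sketch with hypothetical index, sourcetype, and field names (alert_id, analyst, status); substitute whatever your alert-management data actually uses:

index=alerts_audit sourcetype=alert_status_change
| sort 0 alert_id -_time
| streamstats current=f window=1 last(_time) as next_time by alert_id
| eval seconds_in_queue=coalesce(next_time, now()) - _time
| stats sum(seconds_in_queue) as total_seconds by analyst status
| eval time_in_queue=tostring(total_seconds, "duration")
| table analyst status time_in_queue
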
splunkd.log has errors about BTree. I get about 10 messages a second logged in splunkd.log:

ERROR BTree [1001653 IndexerTPoolWorker-3] - 0th child has invalid offset: indexsize=67942584 recordsize=166182200, (Internal)
ERROR BTreeCP [1001653 IndexerTPoolWorker-3] - addUpdate CheckValidException caught: BTree::Exception: Validation failed in checkpoint

I have noticed that btree_index.dat and btree_records.dat are re-created every few seconds, and they appear to be copied into the corrupt directory. I have tried shutting Splunk down and copying snapshot files over, but when I restart Splunk they are overwritten and the whole loop of files being created and then copied to corrupt starts again.

I ran btprobe on the splunk_private_db fishbucket and the output was:

no root in /opt/splunk/data/fishbucket/splunk_private_db/btree_index.dat with non-empty recordFile /opt/splunk/data/fishbucket/splunk_private_db/btree_records.dat
recovered key: 0xd3e9c1eb89bdbf3e | sptr=1207
Exception thrown: BTree::Exception: called debug on btree that isn't open!

It is entirely possible there is corruption somewhere; we had a filesystem issue a while back, and I had to run fsck and remove a few files. As far as the data goes, I can't pin down where the problem is. In Splunk search I appear to have incomplete data in the _internal index, and the Licensing and Data Quality views are empty.

Any ideas on where to look next? Currently the LM, indexer, SH, and DS are all on the same host. I'm using Splunk Enterprise Version 9.4.0, Build 6b4ebe426ca6.
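A common last resort for a corrupt fishbucket is to move it aside so Splunk rebuilds it on startup. A cautious sketch only, using the paths from the post; note that resetting the fishbucket discards file-tracking state, so monitored files may be re-indexed:

# stop Splunk, move the corrupt btree directory aside, restart
/opt/splunk/bin/splunk stop
mv /opt/splunk/data/fishbucket/splunk_private_db /opt/splunk/data/fishbucket/splunk_private_db.bak
/opt/splunk/bin/splunk start
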
The xpath command does not work if the XML event contains a valid prolog header line (https://www.w3schools.com/xml/xml_syntax.asp). For example, this works:

| makeresults
| eval _raw="<Event> <System> <Provider Name='ABC'/> </System> </Event>"
| xpath field=_raw outfield=raw_provider_name_attr "//Provider/@Name"
| table _raw raw_provider_name_attr

but add a prolog header and it no longer works:

| makeresults
| eval _raw="<?xml version=\"1.0\"?> <Event> <System> <Provider Name='ABC'/> </System> </Event>"
| xpath field=_raw outfield=raw_provider_name_attr "//Provider/@Name"
| table _raw raw_provider_name_attr

I've raised a support case with Splunk about this.
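In the meantime, one possible workaround is to strip the prolog before calling xpath; a minimal sketch using eval's replace():

| eval _raw=replace(_raw, "^<\?xml[^>]*\?>\s*", "")
| xpath field=_raw outfield=raw_provider_name_attr "//Provider/@Name"
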
I want to get the list of dashboards that have not been used by anyone for more than 90 days. I have tried the query below, but it didn't work well.

| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
| rename title as dashboard
| fields dashboard
| eval accessed=0
| search NOT [ search index=_internal sourcetype=splunkd_ui_access earliest=-90d@d
    | rex field=uri "/app/[^/]+/(?<dashboard>[^?/\s]+)"
    | search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports")
    | stats count as accessed by dashboard
    | fields dashboard, accessed ]
| stats sum(accessed) as total_accessed by dashboard
| where total_accessed=0
| table dashboard
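One likely culprit: the subsearch returns both dashboard and accessed, so the generated NOT clause requires both fields to match and ends up filtering nothing. A hedged fix is to return only the dashboard field from the subsearch:

| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
| rename title as dashboard
| fields dashboard
| search NOT [ search index=_internal sourcetype=splunkd_ui_access earliest=-90d@d
    | rex field=uri "/app/[^/]+/(?<dashboard>[^?/\s]+)"
    | search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports")
    | stats count by dashboard
    | fields dashboard ]
| table dashboard
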
Splunk's xpath documentation does not show any examples of how to use the xpath command when the XML contains namespace declarations, e.g. <event xmlns='mynamespace'> or <prefix:Event xmlns:prefix='mynamespace'>. The xpath command will not extract any results unless the event is modified and the namespace declaration(s) removed first. Probably the most common workaround is to use the spath command instead. However, after some googling about XPath path syntax, you find there is a special local-name() notation that can be used so namespace declarations are ignored during parsing.
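A minimal sketch of the local-name() approach against a namespaced event:

| makeresults
| eval _raw="<Event xmlns='mynamespace'> <System> <Provider Name='ABC'/> </System> </Event>"
| xpath field=_raw outfield=raw_provider_name_attr "//*[local-name()='Provider']/@Name"
| table _raw raw_provider_name_attr
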
When I run this query for the last 24 hours, it takes hours to complete. I would like to run it for, say, 30 days, but the time it would take is unreasonable.

index=firewall sourcetype=cp_log:syslog source=checkpoint:firewall dest="172.24.245.210"
| fields dest, src
| dedup dest, src
| table dest, src

I am looking to identify any front-end application server that connects to this 172.24.245.210 server.
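A hedged suggestion: stats is usually far cheaper than dedup for reducing to unique pairs, and if src/dest are indexed fields or covered by an accelerated data model, tstats would be faster still:

index=firewall sourcetype=cp_log:syslog source=checkpoint:firewall dest="172.24.245.210"
| stats count by dest, src
| table dest, src
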
Hi. I am looking to extract some key-value pairs for each event. I have data that always has resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key, but it might have resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.doubleValue or resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue. I want to run stats commands on them, so I was looking to extract each key with its doubleValue or stringValue and then use those. This is some of the data I have; we can see that doubleValue and stringValue are mixed and can pop up at any time. I have tried the following, but there is an issue:

source="trace_Marketing_Bench_31032016_17_cff762901d1eff01766119738a9218e2.jsonl" host="TEST1" index="murex_logs" sourcetype="Market_Risk_DT" "**strategy**" 920e1021406277a9
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue"
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.doubleValue"
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key"
| eval output=mvzip('resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue','resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key')
| table output

The order does not come out correctly. In the rows marked in red in the attached screenshot, we can see that WARNING is paired with mr_batch_status, not mr_batch_compute_cpu_time: the values and the keys are each extracted independently, so they are not kept in sync with each other. How do I extract them so they stay aligned? Some raw data:

{"resourceSpans":[{"resource":{"attributes":[{"key":"telemetry.sdk.version","value":{"stringValue":"1.12.0"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.language","value":{"stringValue":"cpp"}},{"key":"service.instance.id","value":{"stringValue":"00vptl2h"}},{"key":"service.namespace","value":{"stringValue":"MXMARKETRISK.SERVICE"}},{"key":"service.name","value":{"stringValue":"MXMARKETRISK.ENGINE.MX"}}]},"scopeSpans":[{"scope":{"name":"murex::tracing_backend::otel","version":"v1"},"spans":[{"traceId":"cff762901d1eff01766119738a9218e2","spanId":"71d94e8ebb30a3d5","parentSpanId":"920e1021406277a9","name":"fullreval_task","kind":"SPAN_KIND_INTERNAL","startTimeUnixNano":"1716379123221825454","endTimeUnixNano":"1716379155367858727","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","value":{"stringValue":"440"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":"imccBucket#ALL_10_Reduced"}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"Marketing_Bench | 31/03/2016 | 17"}},{"key":"mr_strategy","value":{"stringValue":"typo_Bond"}},{"key":"mr_uuid","value":{"stringValue":"b1ed4d3a-0e4d-4afa-ad39-7cf6a07c36a9"}},{"key":"mrb_batch_affinity","value":{"stringValue":"Marketing_Bench_run_Batch|Marketing_Bench|2016/03/31|17_FullReval0_00029"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":31.586568}},{"key":"mr_batch_compute_time","value":{"doubleValue":31.777}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":0.0}},{"key":"mr_batch_load_time","value":{"doubleValue":0.0}},{"key":"mr_batch_status","value":{"stringValue":"WARNING"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":31.912966}},{"key":"mr_batch_total_time","value":{"doubleValue":32.14}}],"status":{}}]}]}]}
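One way to keep each key aligned with its value is to expand the attributes array itself and read each element with the spath() eval function, coalescing the two value types. A minimal sketch against the data above (append after the base search):

| spath output=attrs path="resourceSpans{}.scopeSpans{}.spans{}.attributes{}"
| mvexpand attrs
| eval key=spath(attrs, "key")
| eval value=coalesce(spath(attrs, "value.stringValue"), spath(attrs, "value.doubleValue"))
| table key value
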
Hello, I'm looking to modify this search that I found and am using. I like the result set, but I would like to limit the host count to just five for each index reported. The .csv export of the original search is really messy and just unusable. My SPL skills are limited at the moment, so any help is much appreciated.

| tstats values(host) as host where index=* by index
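A hedged sketch: rank hosts per index by event count, keep the top five, then re-collapse into a list per index:

| tstats count where index=* by index, host
| sort 0 index -count
| streamstats count as rank by index
| where rank <= 5
| stats list(host) as host by index
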
I have the following values that will go in a field titled StatusMsg:

"Task threw an uncaught and unrecoverable exception"
"Ignoring await stop request for non-present connector"
"Graceful stop of task"
"Failed to start connector"
"Error while starting connector"
"Ignoring error closing connection"
"failed to publish monitoring message"
"Ignoring error closing connection"
"restart failed"
"disconnected"
"Communications link failure during rollback"
"Exception occurred while closing reporter"
"Connection to node"
"Unexpected exception sending HTTP Request"
"Ignoring stop request for unowned task"
"failed on invocation of onPartitionsAssigned for partitions"
"Ignoring stop request for unowned connector"
"Ignoring await stop request for non-present connector"
"Connection refused"

I am not certain how to do this. This is the base search:

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN

I want to create the field on the fly and have it pick up the appropriate CASE value. I would then put it in a table with host, connName, and StatusMsg. Any assistance would be greatly appreciated.
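A minimal sketch using eval case() with match() against _raw, shown for a few of the messages above; extend the pattern for the rest, with true() as the catch-all:

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" (ERROR=ERROR OR ERROR=WARN)
| eval StatusMsg=case(
    match(_raw, "Task threw an uncaught and unrecoverable exception"), "Task threw an uncaught and unrecoverable exception",
    match(_raw, "Failed to start connector"), "Failed to start connector",
    match(_raw, "Connection refused"), "Connection refused",
    true(), "other")
| table host connName StatusMsg
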
In the investigation panel for an incident in Splunk SOAR, there is a comment or command field under Activity. If you copy and paste multiple lines of text that include blank lines between sections into the comment field, all formatting is lost and the text is bunched together. However, if you select an incident from the queue, click the Edit button, and paste the same lines of text into the "Add comment" field, the formatting is preserved. Is there any way to add a newline character or line break to the text to maintain the blank lines or prevent the text from bunching up?
So we are starting a new project soon, and my boss is personally sending me an index (not an internal one) to investigate, specifically its usage. We are trying to optimize the environment and cut what's not being used, or check what is being overused: knowledge objects, data intake, etc. Any good practices, processes, or tips you can lend? This would be the most perfect learning opportunity. I'm excited, but nervous.
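A hedged starting point for the intake side: the license usage log records daily indexed volume per index, and the audit index shows who actually searches it. Two sketches, with "your_index" as a placeholder:

index=_internal source=*license_usage.log* type="Usage" idx="your_index" earliest=-30d@d
| timechart span=1d sum(b) as bytes_indexed

index=_audit action=search info=granted search="*your_index*" earliest=-30d@d
| stats count by user
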
How much has the syntax changed from splunklib (which ran on Python 2.x) to splunk-sdk (which runs on Python 3.x)? It just seems like a lot of the tutorials and info on the Splunk API are super outdated. Is nobody doing this anymore? I'm currently mainly interested in running a search and getting the results into pandas using Python, and also in breaking a search up into multiple smaller time spans if the time period is too long and/or the returned data set is too large. I have old code from the splunklib Python 2.x days, but I'm basically starting over and using it as a reference.
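For the search-to-pandas part, a minimal sketch with the current splunk-sdk; host, credentials, and the search string are placeholders:

import pandas as pd
import splunklib.client as client
import splunklib.results as results

# Connect to the management port and run a blocking search job
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")
job = service.jobs.create("search index=_internal | head 100",
                          exec_mode="blocking",
                          earliest_time="-1h", latest_time="now")

# Stream the results as JSON; result rows are dicts, diagnostic
# messages are Message objects, so filter on type
reader = results.JSONResultsReader(job.results(output_mode="json"))
rows = [row for row in reader if isinstance(row, dict)]
df = pd.DataFrame(rows)

For the time-chunking part, looping this with shifted earliest_time/latest_time pairs and concatenating the frames is a reasonable pattern.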
Recently we migrated a server from a virtual machine to a physical server. We use LDAP authentication for user access to Splunk. Users were able to log in, but did not have the same privileges after the move from the VM to the physical server. I am able to log in to Splunk Web, but as an admin I am not able to view things with admin privileges. So I tried to run the command below on the search head:

./splunk reload auth

I got this error:

Authorization Failed: b'<?xml version="1.0" encoding="UTF-8"?>\n<response>\n  <messages>\n    <msg type="ERROR">You (user=88888888) do not have permission to perform this operation (requires capability: change_authentication).</msg>\n  </messages>\n</response>\n'
Client is not authorized to perform requested action
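A hedged note: the CLI authenticates as a Splunk user too, so the command can be retried with explicit credentials for an account that still holds the change_authentication capability (placeholder credentials shown):

./splunk reload auth -auth admin:changeme
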
Hi guys, I am looking to build a query/dashboard that would monitor the status of the connection from the Splunk API to the MISP42 instance. I am unsure how to go about this; I can't find anything interesting in the _internal index to fetch or look at, or a heartbeat that would indicate a successful handshake. To my understanding, a search is run every X days (we set it up to run once a day) to write the data we have in our MISP instance to lookups. Those lookups are then used for threat intelligence and mapped. Maybe I should monitor the search to see whether it wrote any updates? I am trying to get notified, or to build a query that would let me know, when there is an issue with the feed. Thanks,
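A hedged sketch for the "monitor the search" idea: the scheduler logs in _internal record every scheduled-search run, so filtering on the saved search name shows when it last ran and whether it succeeded (the "*misp*" wildcard is a placeholder; use your actual search name):

index=_internal sourcetype=scheduler savedsearch_name="*misp*"
| stats latest(_time) as last_run latest(status) as last_status by savedsearch_name
| eval last_run=strftime(last_run, "%F %T")
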
Hi everyone, we are currently in a migration project and want to process NetFlow data within Splunk. For this purpose we are using Splunk Stream and the associated apps (Add-On for Stream Forwarders / Add-On for Stream Wire Data), and we are receiving a lot of data from the respective system. Unfortunately, many fields in this data remain empty, even though they can be read from the same system using our current NetFlow tool. We selected every possible field in the configuration GUI and changed the NetFlow version from 5 to 9 and to IPFIX, without any positive outcome. The fields that are interesting for us are the following:

interface name
app
app_desc
protocol (tcp or udp)

Are there any additional configuration options available, or has anyone else experienced this issue? Thanks in advance.
Is there any particular reason for using the Python splunk-sdk over standard RESTful API libraries or tools (such as the Python requests library)? Using standard Python, you should be able to import data into pandas with three lines:

response = requests.get(url)
data = response.json()
pd.DataFrame(data)

What does splunk-sdk have that Python requests does not? Thanks!
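For comparison, a hedged sketch of what the raw-requests route actually looks like against Splunk's export endpoint (local instance and placeholder credentials assumed); the SDK mostly saves you this session, job, and pagination plumbing:

import json
import pandas as pd
import requests

# One-shot streaming search against the management port; the export
# endpoint returns newline-delimited JSON objects, one per result
resp = requests.post(
    "https://localhost:8089/services/search/jobs/export",
    auth=("admin", "changeme"),
    data={"search": "search index=_internal | head 100",
          "output_mode": "json"},
    verify=False,  # self-signed certs are common on port 8089
    stream=True,
)
rows = [json.loads(line)["result"] for line in resp.iter_lines()
        if line and b'"result"' in line]
df = pd.DataFrame(rows)
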
I'm having some issues with my on-prem deployment of Splunk SOAR 6.3.1 and would like to revert to 6.2.2. Should I just follow the steps for upgrading even though I'm reverting to a previous version? https://docs.splunk.com/Documentation/SOARonprem/6.2.2/Install/UpgradeSOARInstance
Hi team, is there a way to get an immediate Splunk Developer/Trial license? I was using the developer license and it has expired, and I need it for some more time today. Is there a way to get it? Can somebody please provide the Splunk Developer/Trial license? #Splunk Trial account