All Posts

Thanks for the earlier support. Just wanted to post a quick update in case anyone else runs into a similar situation.

Issue: I was unable to access my D:\ drive via File Explorer due to what seemed like permission issues, but I could still access the drive through the CLI (PowerShell). Running Get-Acl on the drive showed that the owner and permissions looked fine, yet Explorer still denied access.

Solution: It turns out the permissions were either incomplete or not properly recognized by File Explorer. I resolved the issue with the following PowerShell command:

    icacls D:\ /grant "YourUsernameHere:(OI)(CI)F" /T

Replace YourUsernameHere with your actual Windows username. This grants Full Control (F), with object and container inheritance (OI)(CI), applied recursively (/T) to all files and folders inside D:\, ensuring proper access. After running this, I was able to access the drive in File Explorer without any problems.

Note: To audit how the permissions may have changed, you can enable Object Access Auditing using auditpol and review Event ID 4670 (permissions on an object were changed) in Event Viewer > Windows Logs > Security.

Possible correlation with Splunk Enterprise installation: After reflecting on the issue, I suspect this might have occurred right after installing Splunk Enterprise. While I could be mistaken, it would be great if the Splunk team could check on this to ensure it doesn't affect others in the future.

I hope this helps anyone facing a similar issue, and I look forward to any additional insights or suggestions from the community.
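As a sketch of the auditing setup mentioned in the note above (run from an elevated PowerShell prompt; the subcategory name assumes an English-language Windows install):

    # Enable success and failure auditing for file-system object access
    auditpol /set /subcategory:"File System" /success:enable /failure:enable

    # Confirm the policy took effect
    auditpol /get /subcategory:"File System"

Keep in mind that Event ID 4670 is typically only generated for objects that also carry an auditing entry (SACL), so you may need to add one on D:\ itself.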
In addition to the problems @PickleRick points out, the SPL ignores a fundamental design in the dataset. Using stats without grouping by mr_batchId raises the question: what is the logic that decides which values should form ONE row as opposed to another? In fact, whenever you find yourself "needing" to use stats by _time in Splunk, you ought to suspect that some logic is wrong.

The first troublesome command is | spath resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attributes. Here, you bypass several levels of nested arrays to focus only on resourceSpans{}.scopeSpans{}.spans{}.attributes{}. Unless you are absolutely certain about the uniqueness of this path, the prudent strategy is to fully handle each array. In your case, the next command, | dedup attributes, indicates that there is no such certainty. But uniqueness is not the big problem here. The real problem is that the path resourceSpans{}.scopeSpans{}.spans{} is the key to your developer/vendor's data design. Each value of resourceSpans{}.scopeSpans{}.spans{} contains a unique mr_batchId that is the key to distinguishing datasets. If you want to perform stats, perform stats against resourceSpans{}.scopeSpans{}.spans{}.

So, step one is to fully mvexpand into this path:

    host="MARKET_RISK_PDT_V2" index="murex_logs" sourcetype="Market_Risk_DT" "**mr_strategy**" "typo_Collar" "resourceSpans{}.resource.attributes{}.value.stringValue"="*"
    | fields - resourceSpans{}.*
    | spath path=resourceSpans{}
    | mvexpand resourceSpans{}
    | spath input=resourceSpans{} path=scopeSpans{}
    | fields - resourceSpans{}
    | mvexpand scopeSpans{}
    | spath input=scopeSpans{} path=spans{}
    | fields - scopeSpans{}
    | mvexpand spans{}

The above does not address the efficiency problem with **mr_strategy**, but it folds the filter "resourceSpans{}.resource.attributes{}.value.stringValue"="*" into the index search, which also improves efficiency. Using your sample data, the above gives 96 spans{} values for a single event. Among the 96, only two are relevant to your final results.
So, I would recommend adding

    | where match('spans{}', "mr_batchId")

This gives two rows like

    spans{}
    {"traceId":"e0d25217dd28e57d2db07e06d690428f","spanId":"d6c133764c7891c3","parentSpanId":"dbd5a3ed4854e73f","name":"fullreval_task","kind":1,"startTimeUnixNano":"1744296121513194653","endTimeUnixNano":"1744296126583212823","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","value":{"stringValue":"37"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":""}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"CONSO_ABAQ | 31/03/2016 | 12"}},{"key":"mr_strategy","value":{"stringValue":"typo_Collar Cap"}},{"key":"mr_uuid","value":{"stringValue":"4405ed87-fbc0-4751-b5b2-41836f1181cc"}},{"key":"mrb_batch_affinity","value":{"stringValue":"CONSO_ABAQ_run_Batch|CONSO_ABAQ|2016/03/31|12_FullReval0_00037"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":2.042433}},{"key":"mr_batch_compute_time","value":{"doubleValue":2.138}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":2.154398}},{"key":"mr_batch_load_time","value":{"doubleValue":2.852}},{"key":"mr_batch_status","value":{"stringValue":"WARNING"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":4.265003}},{"key":"mr_batch_total_time","value":{"doubleValue":5.069}}],"status":{}}
    {"traceId":"e0d25217dd28e57d2db07e06d690428f","spanId":"4c8da45757b1ea2a","parentSpanId":"dbd5a3ed4854e73f","name":"fullreval_task","kind":1,"startTimeUnixNano":"1744296126596384480","endTimeUnixNano":"1744296130515095708","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","value":{"stringValue":"58"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":""}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"CONSO_ABAQ | 31/03/2016 | 12"}},{"key":"mr_strategy","value":{"stringValue":"typo_Non Deliv. Xccy Swap"}},{"key":"mr_uuid","value":{"stringValue":"f6035cef-e661-49bd-8b4c-d8d09da06822"}},{"key":"mrb_batch_affinity","value":{"stringValue":"CONSO_ABAQ_run_Batch|CONSO_ABAQ|2016/03/31|12_FullReval0_00058"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":0.8687239999999999}},{"key":"mr_batch_compute_time","value":{"doubleValue":0.907}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":2.257638}},{"key":"mr_batch_load_time","value":{"doubleValue":2.955}},{"key":"mr_batch_status","value":{"stringValue":"OK"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":3.1801}},{"key":"mr_batch_total_time","value":{"doubleValue":3.917}}],"status":{}}

But for flexibility, I consider this optional.

From here, there are many ways to get to your desired output. Given that you only need mr_batch_compute_cpu_time, mr_batch_compute_time, mr_batch_load_cpu_time, mr_batch_load_time, and mr_strategy, I recommend extracting them directly; however, I strongly recommend adding mr_batchId to the list, because it is a critical piece of information for corroborating the data and validating your calculations.
    ``` the following line is optional - improves efficiency if these are the only attributes of interest ```
    | where match('spans{}', "mr_batchId")
    | spath input=spans{} path=attributes{} output=attributes
    | foreach mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time mr_batch_load_cpu_time mr_batch_load_time mr_strategy
        [eval <<FIELD>> = mvappend(<<FIELD>>, mvmap(attributes, if(spath(attributes, "key") != "<<FIELD>>", null(), spath(attributes, "value")))),
         <<FIELD>> = coalesce(spath(<<FIELD>>, "doubleValue"), spath(<<FIELD>>, "stringValue"))]
    | dedup _time mr_batchId
    ``` the above is key logic. If there is any doubt, you can also use | dedup _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time ```
    | table _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time mr_batch_load_cpu_time mr_batch_load_time mr_strategy

With this, the output will be

    _time                mr_batchId  mr_batch_compute_cpu_time  mr_batch_compute_time  mr_batch_load_cpu_time  mr_batch_load_time  mr_strategy
    2025-04-12 23:55:21  37          2.042433                   2.138                  2.154398                2.852               typo_Collar Cap
    2025-04-12 23:55:21  58          0.8687239999999999         0.907                  2.257638                2.955               typo_Non Deliv. Xccy Swap

There is no need to perform stats against _time.
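If you would rather aggregate with an explicit stats instead of dedup (closer to the earlier advice of performing stats against spans{}), a minimal sketch built on the fields extracted above - untested, and assuming mr_batchId does not recur across distinct batches (if it can, add mr_jobId or _time to the by clause):

    | stats earliest(_time) as _time
            values(mr_batch_compute_cpu_time) as mr_batch_compute_cpu_time
            values(mr_batch_compute_time) as mr_batch_compute_time
            values(mr_batch_load_cpu_time) as mr_batch_load_cpu_time
            values(mr_batch_load_time) as mr_batch_load_time
            values(mr_strategy) as mr_strategy
            by mr_batchId

This would replace the | dedup _time mr_batchId and | table ... steps, grouping by mr_batchId exactly as the dataset design suggests.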
Hello friends, while debugging action number one of a playbook, I want to run a query like the following and get the result using the Splunk app, but I encounter the following error:

    Apr 13, 11:33:40 : phantom.collect2(): Error: Cannot fetch results from the database without a block name
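Not from the original post, but for context: this error usually means a phantom.collect2() datapath does not name the action block whose results it should read. A minimal sketch of the usual pattern, where run_query_1 and usernames are hypothetical block and field names:

    # Inside a Splunk SOAR playbook callback; 'container' and 'results' are the
    # standard arguments SOAR passes in. The "run_query_1:" prefix tells collect2
    # which action block's results to read - omitting it triggers the
    # "Cannot fetch results from the database without a block name" error.
    results_data = phantom.collect2(
        container=container,
        datapath=["run_query_1:action_result.data.*.usernames"],
        action_results=results,
    )
    usernames = [row[0] for row in results_data if row[0]]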
Once again - way too little information to help. How are we supposed to know what your environment looks like? We have just one IP. We don't know if it's in the same network you're trying to reach the machine from, or in another one with your traffic routed via gateway(s). We have no idea what you installed, where, and why.

Did you do any troubleshooting at all? Did you check whether the Splunk process is running? Did you check if it is listening on the port? Did you check whether the traffic is reaching your server? Are you trying to run your Splunk instance on the same host, in a container, in a VM?

If you want people to help you, you need to let them, and show that you've put some effort into this.
The index is not created on the CM. It is defined in an app which is pushed to indexers and the index is created there.
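For illustration, a minimal sketch of the indexes.conf such an app might carry (app layout and index name are hypothetical; repFactor = auto is what makes the index replicate across the cluster):

    # my_indexes_app/default/indexes.conf - pushed from the CM via the
    # configuration bundle, so every peer creates the index locally
    [otx]
    homePath   = $SPLUNK_DB/otx/db
    coldPath   = $SPLUNK_DB/otx/colddb
    thawedPath = $SPLUNK_DB/otx/thaweddb
    repFactor  = auto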
I have only been able to confirm that 192.168.10.10 is working by pinging it from my target machine (Windows 10, also on my homelab). How do I check if Splunk is running on port 8000? I guess if I can show that, it would mean my Splunk is the problem?
Hi @nellyma

Have you been able to confirm that Splunk is running on port 8000? Are you able to evidence this with the logs?

I presume 192.168.10.10 is your local machine? Are you able to access Splunk using 127.0.0.1?
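Not part of the original reply, but one way to check this from the Windows side is a sketch like the following (assuming the Splunk host is 192.168.10.10 and Splunk Web is on its default port 8000):

    # From the Windows 10 machine: can we reach Splunk Web at all?
    Test-NetConnection -ComputerName 192.168.10.10 -Port 8000

    # On the Splunk host itself (if it is also Windows): is anything listening on 8000?
    Get-NetTCPConnection -LocalPort 8000 -State Listen -ErrorAction SilentlyContinue

If the Splunk host runs Linux, the equivalent checks would be something like ss -tlnp | grep 8000, or $SPLUNK_HOME/bin/splunk status on the host itself.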
The problem was that the index had not been created on the master. Once the index defined on the master server was pushed to all indexers, the OTX data was pulled and written to the index. Thank you for your help and feedback.
What is your network/server topology? Are other routes open to your server? What else have you tried?
What does AD have to do with anything? As for the rest of the question - too little information and too chaotic. We don't even know what OS you're using.
I'm trying to build Active Directory in my homelab and I configured Splunk to the IP address 198.162.10.10, but it refuses to respond on the web on port 8000. I thought I had misconfigured something, so I deleted everything and did it all over, with the same response. I have disabled all my firewalls and followed numerous YouTube tutorials, and still no luck. This is my last resort. Can someone please help me? What could be my problem? It says ERR_CONNECTION_TIMED_OUT.
Hi

To pass the usernames field from your Splunk action block to a decision or utility block in SOAR, use the custom output paths from the action result. In the decision block, reference the field as action_result.data.*.usernames.

The Splunk action block returns results as a list of dictionaries under action_result.data. The .*. wildcard iterates over each result, accessing the usernames field from each row. Field names are case-sensitive and must match exactly what your SPL returns. If your SPL returns multiple rows, the path will return a list of values.

The following docs pages may also be useful:
https://docs.splunk.com/Documentation/SOAR/current/Playbook/SpecifyData
https://docs.splunk.com/Documentation/SOAR/current/DevelopApps/DataPath
https://docs.splunk.com/Documentation/Phantom/4.10.7/PlaybookAPI/Datapaths
Hi @Gururaj1

The Splunk UF should not have Python site-packages, as the UF installation does not include Python.

Please can you confirm what manifest file you have in /opt/splunkforwarder? If you are using the 64-bit version it should be called splunkforwarder-9.4.1-e3bdab203ac8-linux-amd64-manifest. This manifest file contains no reference to the Python libraries.

Please can you also confirm the filename of the Splunk installation file you used to install the UF?

@kiran_panchavat please can you let me know the community posts you're referring to where people with Ubuntu are having issues with UF installs, so I can see if there is an underlying common issue.
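As a quick sketch of how to check this on the forwarder host (paths assume the default /opt/splunkforwarder install):

    # List the manifest file(s) shipped with this UF build
    ls /opt/splunkforwarder/*-manifest

    # A UF manifest should contain no Python site-packages entries (expect 0)
    grep -c "site-packages" /opt/splunkforwarder/splunkforwarder-*-manifest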
Hi

you should install it only on the SH side. Personally I install it on the MC (as it has connections to all other nodes) on-prem, and on the SH in SCP. It works on both SCP and Enterprise.

You must remember that there is no separate app in the side column; you just call that SPL command in any of your other apps. This is an extremely useful app, and currently it's one of the apps I install in all environments for admin purposes.

r. Ismo
While @ITWhisperer 's solution is a neat trick, I'd rethink the search.

1. You're searching for "**mrstrategy**". That's gonna be slow.
2. First you're using automatic json extraction, then you call spath.
3. Always be vigilant around the "dedup" command. You're deduping on attributes, then statsing on attributes and _time. You will only ever have one _time per attributes value. And dedup moves processing to the SH tier.
4. I have a hunch that you have some duplicate data which you want to get rid of at search time. Maybe it's worth reworking your ingestion process so you're not wasting license?
5. Unfortunately, as you must have noticed already, this is a very ugly data format (from Splunk's point of view). This whole "keyname=key1,value=something" schema is very inconvenient for searching and processing, since you first have to read, parse, and interpret all events just to get to the "contents". So now you're bending over backwards to do something that should be as easy as writing a simple filter condition. Are you sure you don't have someone in your org to sit down with and have a chat about the data format? Or about the ingestion process - maybe it's worth setting up something that will transform your data into something more reasonable?
Well, that's not gonna be easy. With this many results, not only can you not use append, but eventstats is not a good idea either.

Unfortunately, the less precise you are about your use case, the higher the probability that you will get a "no can do" answer. Maybe you should work on your extractions and/or initial filtering; maybe it's one of the rare cases where adding an indexed field would help... we don't know. We are not aware of what problem you're trying to solve.
@Gururaj1 The error you're encountering while trying to install or validate the Splunk Universal Forwarder (version 9.4.1) on Ubuntu 20.04 LTS (Focal Fossa, amd64) suggests issues with the installation process, specifically missing Python site-packages directories and possibly a corrupted or incomplete download/installation.

The error about missing /opt/splunkforwarder/lib/python3.7/site-packages and python3.9/site-packages suggests that Splunk's bundled Python environment is either not installed correctly or not being detected. Splunk Universal Forwarder typically bundles its own Python environment, so this issue is likely due to a corrupted installation rather than a system Python issue.

https://docs.splunk.com/Documentation/Forwarder/9.1.1/Forwarder/Installanixuniversalforwarder

Even though I have read of many problems with Ubuntu reported by Community members, I don't have special recommendations: just follow all the installation steps documented at
https://docs.splunk.com/Documentation/Splunk/9.0.5/Installation/InstallonLinux
https://docs.splunk.com/Documentation/Forwarder/9.0.5/Forwarder/Installanixuniversalforwarder
It's a very simple procedure.
The only reason we created the accelerated model is so that we can "return" 1 million events in a few seconds. Therefore, I'm not sure append fits this.

Original non-working scenario, due to huge index=a:

    index=a OR index=b
    stats (where matched in 2 indexes) by FieldA

What I need to make work:

    index=b OR | tstats values.....
    stats (where matched in 2 indexes??) by FieldA
Here is my original query. I have to mask a lot of code and evals, sorry. You can probably ignore that I do eventstats and then stats; I'm doing a lookup (not shown) and getting columns there.

The question is: if index=A is now an accelerated model, how can I join the results of the index query with the tstats results without using sub-searches or anything that would limit it?

    index=a OR (index=b DistinguishedName IN ("ou=a" "ou=b"))
    | eventstats values(src_ip) as SourceIP dc(index) as idx values(OU) as OU by Account_Name
    | search idx=2
    | where index="a"
    | stats dc(src_ip) as IP earliest(T) as FirstOccurance latest(T) as LatestOccurance values(OU) as Location count by Account_Name
wow. just. wow. took me a week to find this