Hello, I'm planning to install and use the Splunk App for Data Science and Deep Learning (DSDL) in a closed network environment. I'm considering use cases involving deep learning and an LLM-RAG architecture. Could you please share the minimum server specifications for testing, as well as the recommended specifications for production?
Dear Team, we have obtained the ITSI installation package "splunk-it-service-intelligence-4193.spl" and installed it according to the installation guide on the official website: https://docs.splunk.com/Documentation/ITSI/4.20.0/Install/Install. In the end, the Splunk Enterprise platform only has the ITEM app. What is the reason for this? Please provide technical support. Thank you.
Hello, I set up an Amazon Linux 2 virtual machine in VirtualBox and successfully installed Splunk SOAR. I am trying to log into the web interface. The documentation says to go to the IP address that I assigned to the Splunk SOAR using the custom HTTPS port. I know that I am using the correct port. When I run ifconfig, I see two IP addresses. I tried both with the port I chose for Splunk, but neither is working, and my browser says that the site cannot be reached. Any help would be appreciated.
The HTTP Event Collector won't do load balancing itself, so you will need to set up a load balancer in front of the indexers.

One way you could set up the HEC token is to take a Splunk server with a web interface (probably not the indexers), go to Settings->Data inputs->HTTP Event Collector, then click the "New Token" button. Go through the menu specifying your desired input name, sourcetype, index, etc. This will generate an inputs.conf stanza for the HTTP input. You can then open the inputs.conf file and copy this stanza to each of your indexers to ensure they have the same token. (The remaining instructions assume your indexers are running Linux.)

For me, the inputs.conf file was generated in /opt/splunk/etc/apps/launcher/local, because I went to the HTTP Event Collector web interface from the main Splunk Enterprise screen. The stanza will look like this (with different values, of course):

[http://inputname]
disabled = 0
host = yourhostname
index = main
indexes = main
source = inputsourcetype
token = fe2cfed6-664a-4d75-a79d-41dc0548b9de

Of course, you should change the host value for each indexer, or remove the host line so that the host value is decided on startup.

Then, create a new file on each indexer at /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf containing this text:

[http]
disabled = 0

This will enable the HTTP Event Collector on the indexers. You can check that the HTTP event listener is opening the port on the indexer by using netstat:

netstat -apn | grep 8088
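Once the token is in place, one quick sanity check is to send a test event with curl and confirm it lands in the index. This is only a sketch: hec.example.com stands in for your load balancer (or an individual indexer) and the token is a placeholder you would replace with your own:

curl -k https://hec.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from HEC test", "sourcetype": "manual_test", "index": "main"}'

A healthy endpoint replies with {"text":"Success","code":0}; a 403 usually points at a token mismatch between indexers.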
Assuming that you are able to edit the inputs.conf file, and that you have a definite value for env, service, and custom for each input stanza, then you could add meta tags to the input stanzas:

_meta = env::<env value> service::<service value> custom::<custom value>

I don't know if this works the same way with OTEL collectors.
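As an illustration, a monitor stanza carrying those keys might look like the sketch below (the path, index, sourcetype, and values are made-up placeholders, not anything from your environment):

[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp:log
_meta = env::production service::payments custom::team-a

The indexed fields env, service, and custom should then show up on every event from that input.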
When you don't include the UID, are there any differences in the field values? What pattern do you see in how it adds artifacts to containers? E.g. are there specific fields which determine the container that the artifact gets added to, or does it add artifacts to the most recently created container? Depending on how you would like it to behave, you could throttle the creation of new artifacts by using an outputlookup and a NOT [| inputlookup] clause in the saved search you use to forward events to SOAR, then use a time field to make sure the artifacts and containers are different.
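A rough sketch of that throttling pattern, assuming a hypothetical lookup file soar_forwarded.csv and an event_id field that uniquely identifies each event (both names are placeholders; adapt them to your data):

``` only keep events whose event_id has not already been forwarded ```
index=my_index sourcetype=my_sourcetype NOT [| inputlookup soar_forwarded.csv | fields event_id ]
``` record what is being forwarded so the next run excludes it ```
| eval forwarded_at=now()
| outputlookup append=true create_empty=true soar_forwarded.csv

Note that outputlookup writes the events to the lookup and still passes them downstream unchanged, so the results forwarded to SOAR are unaffected; you may want to trim the lookup periodically so it does not grow without bound.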
This usually means that something in your playbook is referencing a term that does not exist, like a misnamed block or a nonexistent datapath. If you are certain that the error originates from this Splunk app block, then you could try setting all of the inputs to be formatted text (as you did with the query input) so that SOAR does not think it could be a datapath.
The first thing to check is the splunkd.log on the problematic (sending) machine. It should tell you whether the connection is established at all, or if it's being actively rejected, or anything else.
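If it helps, on a Linux forwarder a quick way to pull the output-related messages is something like the following (the path assumes a default /opt/splunk install; a Universal Forwarder would live under /opt/splunkforwarder):

grep -iE "TcpOutputProc|AutoLoadBalanced|blocked" /opt/splunk/var/log/splunk/splunkd.log | tail -n 50

Look for lines about failed connections, certificate errors, or the output queue being blocked.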
@ejose Check these:
https://community.splunk.com/t5/Getting-Data-In/How-to-fix-Heavy-Forwarder-to-Splunk-Cloud-logs-forward-error/td-p/645998
https://community.splunk.com/t5/Getting-Data-In/How-to-fix-TCPOutAutoLB-0-error/m-p/613119
Just for the sake of completeness - stats by _time is fairly useful if you manipulate your timestamps (usually by means of bin/bucket). With raw, untouched _time it can be useful if you have several events emitted at exactly the same time (and you can be 100% sure about that) and you have no other unique identifier to mark them by. But this is rather unlikely, since separate events, even those regarding the same "physical event", usually come from separate sources and are slightly offset in _time.
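For illustration, the bin/bucket variant looks roughly like this (the index, sourcetype, and duration field are placeholders):

index=my_index sourcetype=my_sourcetype
| bin _time span=5m
| stats count avg(duration) AS avg_duration BY _time

Here each row represents a 5-minute bucket rather than a single raw timestamp, which is usually what people actually want from "stats by _time".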
Just for clarification - this is a community-driven forum, and while there are some Splunk employees lurking here, it's highly unlikely (unless maybe there is a grave error destroying your indexes or such) that someone will invest company time on this without a support ticket. And of course the support portal, where you can raise tickets, is three blocks south from here.
Thanks for the earlier support. Just wanted to post a quick update in case anyone else runs into a similar situation.

Issue: I was unable to access my D:\ drive via File Explorer due to what seemed like permission issues, but I was able to access the drive through the CLI (PowerShell). Running Get-Acl on the drive showed that the owner and permissions seemed okay, but Explorer still denied access.

Solution: Turns out, the permissions were either incomplete or not properly recognized by File Explorer. I resolved the issue using the following PowerShell command:

icacls D:\ /grant "YourUsernameHere:(OI)(CI)F" /T

Replace YourUsernameHere with your actual Windows username. This command grants Full Control recursively (/T) for all files and folders inside D:\, ensuring proper access. After running this, I was able to access the drive in File Explorer without any problems.

Note: To audit how the permissions may have changed, you can enable Object Access Auditing using auditpol and review Event ID 4670 in Event Viewer > Windows Logs > Security.

Possible Correlation with Splunk Enterprise Installation
After reflecting on the issue, I suspect that this might have occurred right after installing Splunk Enterprise. While I could be mistaken, it would be great if the Splunk team could check on this to ensure it doesn't affect others in the future.

I hope this helps anyone facing a similar issue, and I look forward to any additional insights or suggestions from the community.
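As a follow-up to the auditing note above, a minimal PowerShell sketch (run from an elevated prompt; the subcategory name assumes an English-language Windows install, and the object also needs an auditing/SACL entry before 4670 events are generated):

# enable success/failure auditing for file-system object access
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# after reproducing the change, list recent "permissions on an object were changed" events
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4670 } -MaxEvents 20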
In addition to the problems @PickleRick points out, the SPL ignores a fundamental design element of the dataset. The use of stats without grouping by mr_batchId begs the question: what is the logic that says which values should form ONE line as opposed to another? In fact, whenever you find yourself "needing" to use stats by _time in Splunk, you ought to tell yourself that some logic is probably wrong.

The first troublesome command is | spath resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attributes. Here, you bypass several arrays of arrays to focus only on resourceSpans{}.scopeSpans{}.spans{}.attributes{}. Unless you are absolutely certain about the uniqueness of this path, the prudent strategy is to fully handle each array. In your case, the next command, | dedup attributes, indicates that there is no such certainty. But uniqueness is not the big problem here. The real problem is: the path resourceSpans{}.scopeSpans{}.spans{} is the key to your developer/vendor's data design. Each value of resourceSpans{}.scopeSpans{}.spans{} contains a unique mr_batchId that is the key to distinguishing the datasets. If you want to perform stats, perform stats against resourceSpans{}.scopeSpans{}.spans{}.

So, step one is to fully mvexpand into this path:

host="MARKET_RISK_PDT_V2" index="murex_logs" sourcetype="Market_Risk_DT" "**mr_strategy**" "typo_Collar" "resourceSpans{}.resource.attributes{}.value.stringValue"="*"
| fields - resourceSpans{}.*
| spath path=resourceSpans{}
| mvexpand resourceSpans{}
| spath input=resourceSpans{} path=scopeSpans{}
| fields - resourceSpans{}
| mvexpand scopeSpans{}
| spath input=scopeSpans{} path=spans{}
| fields - scopeSpans{}
| mvexpand spans{}

The above does not address the efficiency problem with **mr_strategy**, but it collapses the search for "resourceSpans{}.resource.attributes{}.value.stringValue"="*" into the index search, which also improves efficiency. Using your sample data, the above will give 96 spans{} values for a single event. Among the 96, only two are relevant to your final results.
So, I would recommend adding

| where match('spans{}', "mr_batchId")

This would give two rows like

spans{}
{"traceId":"e0d25217dd28e57d2db07e06d690428f","spanId":"d6c133764c7891c3","parentSpanId":"dbd5a3ed4854e73f","name":"fullreval_task","kind":1,"startTimeUnixNano":"1744296121513194653","endTimeUnixNano":"1744296126583212823","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","value":{"stringValue":"37"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":""}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"CONSO_ABAQ | 31/03/2016 | 12"}},{"key":"mr_strategy","value":{"stringValue":"typo_Collar Cap"}},{"key":"mr_uuid","value":{"stringValue":"4405ed87-fbc0-4751-b5b2-41836f1181cc"}},{"key":"mrb_batch_affinity","value":{"stringValue":"CONSO_ABAQ_run_Batch|CONSO_ABAQ|2016/03/31|12_FullReval0_00037"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":2.042433}},{"key":"mr_batch_compute_time","value":{"doubleValue":2.138}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":2.154398}},{"key":"mr_batch_load_time","value":{"doubleValue":2.852}},{"key":"mr_batch_status","value":{"stringValue":"WARNING"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":4.265003}},{"key":"mr_batch_total_time","value":{"doubleValue":5.069}}],"status":{}}
{"traceId":"e0d25217dd28e57d2db07e06d690428f","spanId":"4c8da45757b1ea2a","parentSpanId":"dbd5a3ed4854e73f","name":"fullreval_task","kind":1,"startTimeUnixNano":"1744296126596384480","endTimeUnixNano":"1744296130515095708","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","value":{"stringValue":"58"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":""}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"CONSO_ABAQ | 31/03/2016 | 12"}},{"key":"mr_strategy","value":{"stringValue":"typo_Non Deliv. Xccy Swap"}},{"key":"mr_uuid","value":{"stringValue":"f6035cef-e661-49bd-8b4c-d8d09da06822"}},{"key":"mrb_batch_affinity","value":{"stringValue":"CONSO_ABAQ_run_Batch|CONSO_ABAQ|2016/03/31|12_FullReval0_00058"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":0.8687239999999999}},{"key":"mr_batch_compute_time","value":{"doubleValue":0.907}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":2.257638}},{"key":"mr_batch_load_time","value":{"doubleValue":2.955}},{"key":"mr_batch_status","value":{"stringValue":"OK"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":3.1801}},{"key":"mr_batch_total_time","value":{"doubleValue":3.917}}],"status":{}}

But for flexibility, I consider this optional. From here, there are many ways to get to your desired output. Given that you only need mr_batch_compute_cpu_time, mr_batch_compute_time, mr_batch_load_cpu_time, mr_batch_load_time, and mr_strategy, I recommend extracting them directly; however, I strongly recommend adding mr_batchId to the list because that's a critical piece of information for you to corroborate data and validate your calculations.
``` the following line is optional - improves efficiency if these are the only attributes of interest
| where match('spans{}', "mr_batchId") ```
| spath input=spans{} path=attributes{} output=attributes
| foreach mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time mr_batch_load_cpu_time mr_batch_load_time mr_strategy
    [eval <<FIELD>> = mvappend(<<FIELD>>, mvmap(attributes, if(spath(attributes, "key") != "<<FIELD>>", null(), spath(attributes, "value")))),
    <<FIELD>> = coalesce(spath(<<FIELD>>, "doubleValue"), spath(<<FIELD>>, "stringValue"))]
| dedup _time mr_batchId
``` the above is key logic. If there is any doubt, you can also use
| dedup _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time ```
| table _time mr_batchId mr_batch_compute_cpu_time mr_batch_compute_time mr_batch_load_cpu_time mr_batch_load_time mr_strategy

With this, the output will be

_time                mr_batchId  mr_batch_compute_cpu_time  mr_batch_compute_time  mr_batch_load_cpu_time  mr_batch_load_time  mr_strategy
2025-04-12 23:55:21  37          2.042433                   2.138                  2.154398                2.852               typo_Collar Cap
2025-04-12 23:55:21  58          0.8687239999999999         0.907                  2.257638                2.955               typo_Non Deliv. Xccy Swap

There is no need to perform stats against _time.
Hello friends, while debugging action number one of a playbook, I want to run a query like the following and get the result using the Splunk app, but I encounter the following error:

Apr 13, 11:33:40 : phantom.collect2(): Error: Cannot fetch results from the database without a block name
Once again - way too little information to help. How are we supposed to know what your environment looks like? We have just one IP. We don't know if it's in the same network you're trying to reach the machine from, or in another one with your traffic routed via gateway(s). We have no idea what you installed where and why. Did you do any troubleshooting at all? Did you check whether the Splunk process is running? Did you check if it is listening on the port? Did you check whether the traffic is reaching your server? Are you trying to run your Splunk instance on the same host, in a container, in a VM? If you want people to help you, you need to let them and show that you've put some effort into this.
The index is not created on the CM. It is defined in an app which is pushed to indexers and the index is created there.
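As a rough sketch of what that looks like on the cluster manager (the app and index names here are only examples; newer versions use manager-apps, older ones master-apps):

# $SPLUNK_HOME/etc/manager-apps/all_indexes/local/indexes.conf
[otx_alerts]
homePath   = $SPLUNK_DB/otx_alerts/db
coldPath   = $SPLUNK_DB/otx_alerts/colddb
thawedPath = $SPLUNK_DB/otx_alerts/thaweddb

# then push the configuration bundle to the peers
$SPLUNK_HOME/bin/splunk apply cluster-bundle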
I have only been able to confirm that 192.168.10.10 is working by pinging it from my target machine, a Windows 10 box which is on my homelab too. How do I check if Splunk is running on port 8000? I guess if I can show this, it would mean my Splunk install is the problem?
Hi @nellyma

Have you been able to confirm that Splunk is running on port 8000? Are you able to evidence this with the logs? I presume 192.168.10.10 is your local machine? Are you able to access Splunk using 127.0.0.1?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
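In case it helps with those checks, here is one way to confirm them, assuming a default Splunk Enterprise install under /opt/splunk on a Linux host (adjust paths and commands if Splunk runs on Windows):

# does Splunk itself report that it is running?
/opt/splunk/bin/splunk status

# is anything listening on the web port?
sudo ss -tlnp | grep 8000

# check Splunk Web's own log for startup errors
tail -n 50 /opt/splunk/var/log/splunk/web_service.log

# from the Splunk host itself, does the UI answer locally?
curl -I http://127.0.0.1:8000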
The problem was that the index was not created on the master. Once the index definition created on the master server was pushed to all indexers, the OTX data was pulled and written to the index. Thank you for your help and feedback.