All Posts


@livehybrid  Yes, this is an internally developed app. I tried installing cmath with:

sudo -H ./splunk cmd python3 -m pip install cmath -t /opt/splunk/etc/apps/stormwatch/bin/site-packages

but I am getting this error:

WARNING: The directory '/root/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
ERROR: Could not find a version that satisfies the requirement cmath (from versions: none)
ERROR: No matching distribution found for cmath
Hi @Keith_NZ

I don't have an Ingest Processor instance available at the moment to test, but would a custom function work for you here? Something like this?

function my_rex($source, $field, $rexStr: string="(?<all>.*)") {
    return | rex field=$field $rexStr
}

FROM main | my_rex host "(?<hostname>.*\.mydomain\.com)"

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
NO logs on Search head 
Hi. I'm assuming you have a Splunk trial rather than the Free license? The Free license doesn't include most of the features you are trying to use! The easiest way to check why those files are not accessible is to sudo/su to your Splunk UF user and check whether it can access them or not. If it can't, then add permissions as @livehybrid already suggested. If it can access them, then start debugging with the logs and, for example, with splunk list inputstatus etc. You can find quite a few posts here where this issue has already been discussed and solved. r. Ismo
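A minimal sketch of that check, assuming the UF runs as a service account called splunkfwd and is installed under /opt/splunkforwarder (both names and the monitored path are examples, substitute your own):

    # Try to read the monitored directory and file as the UF's own user
    sudo -u splunkfwd ls -ld /path/to/monitored/
    sudo -u splunkfwd head -n 1 /path/to/monitored/app.log
    # If both succeed, ask the UF which inputs it can actually read
    sudo -u splunkfwd /opt/splunkforwarder/bin/splunk list inputstatus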
This depends on those apps. You must first check which versions of each app work with which Splunk versions. It's quite probable that you will need to update the apps step by step as well, since it's quite possible that the same app version doesn't work on both 7.x and 9.3. It's also possible that some apps no longer work at all in 9.3. Some may also need OS-level updates, such as a newer OS version, Java, or Python. Depending on your data and integrations, you should even consider whether it's possible to set up a totally new node with a fresh install and the newest apps. That could be a much easier way to do the version update. Of course, it probably requires keeping the old node up and running until its data has expired. You must also transfer the license to the new server and add the old one as a license client of it.
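If it helps, one quick way to inventory the installed apps and the versions they declare before planning the upgrade (assuming a default /opt/splunk install path):

    # List each app's declared version from its app.conf
    grep -H '^version' /opt/splunk/etc/apps/*/default/app.conf
    # Cross-check each app/version against its Splunkbase page for 9.x compatibility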
As already said, please define what you mean by the word "integrate"! Here is one .conf presentation about Splunk and Power BI: https://conf.splunk.com/files/2022/slides/PLA1122B.pdf
Hi @dardar, it's complicated. In essence, yes, there is an API, as everything you can see in the controller UI has a restui API behind it. However, while the restui APIs are used (e.g. Dexter uses them, my rapport app uses them), they are not documented and are subject to change.
Hi @zmanaf

Please can you confirm if you are trying to pull Splunk data into Power BI, or pull Power BI data into Splunk?

While there isn't an official guide from Splunk specifically for integrating with Power BI, there are several approaches you can take to achieve this integration. Here are some common methods, with the most popular being the Splunk ODBC Driver:

1. Splunk ODBC Driver: Official docs: https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/AboutSplunkODBCDriver. Splunk provides an ODBC driver that allows you to connect to Splunk from various BI tools, including Power BI. You can download the Splunk ODBC driver from Splunkbase (https://splunkbase.splunk.com/app/1606). Once installed, configure the ODBC driver to connect to your Splunk instance, then in Power BI use the ODBC connector to connect to Splunk and import data for visualization.

2. Splunk REST API: You can use the Splunk REST API to query data from Splunk and then import it into Power BI. Create a custom connector in Power BI using the Power Query M language to call the Splunk REST API. Use the API to fetch the data you need and transform it as required in Power BI (see the sketch after this post).

3. Export Data from Splunk: You can export data from Splunk to a CSV file and then import the CSV file into Power BI. This method is more manual but can be useful for one-time or periodic data imports.

4. Third-Party Connectors: There are third-party connectors available that can facilitate the integration between Splunk and Power BI. These connectors can simplify the process of fetching data from Splunk and visualizing it in Power BI.

5. Scheduled Data Exports: Set up scheduled searches in Splunk to export data to a location accessible by Power BI, such as a shared folder or a cloud storage service. Use Power BI to connect to the exported data files and refresh the data on a schedule.

If you want to go down the Splunk ODBC Driver route, these are the steps you will need to go through:

1. Download and install the Splunk ODBC Driver: Download the ODBC driver for your operating system and follow the installation instructions at https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/AboutSplunkODBCDriver and https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/PowerBI.

2. Configure the ODBC data source: Open the ODBC Data Source Administrator on your machine / Power BI, add a new data source, select the Splunk ODBC driver, and configure the connection settings, including the Splunk server address, port, and authentication details.

3. Connect Power BI to Splunk via ODBC: Go to Get Data > ODBC, select the ODBC data source you configured for Splunk, then import the data and start building your reports and dashboards.

If you are looking to query Power BI from Splunk (which is less common), there are various APIs available from Power BI; you will need to create an application in Azure Entra to provide you with credentials to connect to the Power BI API. Let me know if you need more information on this and I will try to find examples, although it isn't something I have done myself.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
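To illustrate the REST API / CSV export options above, here is a minimal sketch of pulling search results out of Splunk as CSV via the search export endpoint; the host, credentials, and search string are placeholders you would replace with your own:

    # Stream search results from Splunk's REST API as CSV (placeholder host/credentials/search)
    curl -k -u admin:changeme \
      "https://splunk.example.com:8089/services/search/jobs/export" \
      --data-urlencode search="search index=_internal | head 100" \
      -d output_mode=csv \
      -o splunk_export.csv

Power BI (or a Power Query custom connector) can then load splunk_export.csv, or call the same endpoint directly on a schedule.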
Command used to run the Docker image:

docker run -d -p 9997:9997 -p 8080:8080 -p 8089:8089 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=test12345" --name uf splunk/universalforwarder:latest

Seeing the below error when the Splunk forwarder image is starting up in Docker:

2025-03-05 14:47:58 included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/check_for_required_restarts.yml for localhost
2025-03-05 14:47:58 Wednesday 05 March 2025 09:17:58 +0000 (0:00:00.044) 0:00:30.316 *******
2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (5 retries left).
2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (4 retries left).
2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (3 retries left).
2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (2 retries left).
2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (1 retries left).
2025-03-05 14:48:31
2025-03-05 14:48:31 TASK [splunk_universal_forwarder : Check for required restarts] ****************
2025-03-05 14:48:31 fatal: [localhost]: FAILED! => {
2025-03-05 14:48:31     "attempts": 5,
2025-03-05 14:48:31     "changed": false,
2025-03-05 14:48:31     "changed_when_result": "The conditional check 'restart_required.status == 200' failed. The error was: error while evaluating conditional (restart_required.status == 200): 'dict object' has no attribute 'status'. 'dict object' has no attribute 'status'"
2025-03-05 14:48:31 }
2025-03-05 14:48:31
2025-03-05 14:48:31 MSG:
2025-03-05 14:48:31
2025-03-05 14:48:31 GET/services/messages/restart_required?output_mode=jsonadmin********8089NoneNoneNone[200, 404];;; failed with NO RESPONSE and EXCEP_STR as Not supported URL scheme http+unix

splunkd is running fine and the ports are open as well. I also tried to curl http://localhost:8089/services/messages/restart_required?output_mode=json
Hi @livehybrid  Thanks for sharing the below. I see that's how you add colour, but how does that link to the underlying dashboard? If I add the colour to the "EazyBI" block, how would it know to change colour dependent on the values on the underlying dashboard? I'm struggling with making the connection between the top level and the underlying dashboard. 
Ok, I will create a new index in the CM and push it to the indexers. How do I tell the HF to forward all Akamai logs to this new index? Where do I configure this? Please help, I am confused.
There are a couple of ways to do this but it depends on the context. For example, are you creating a dashboard? Where does the regex come from? Is it static? What is your use case? The more information you can provide, the more likely we will be able to give you useful suggestions.
@Karthikeya You need to create a new index on the indexers. If you have a cluster master, you can create the index there and push it to the indexers. Additionally, if you create an index on the Heavy Forwarder (HF), you just need to add the index name in the data input configuration within the add-on. Note: When you create an index on the HF, it does not store the data unless explicitly configured in the backend to do so. The HF will only collect the data and forward it to the indexers.
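A rough sketch of what that looks like in the config, assuming the new index is called akamai; the stanza names below are illustrative and the real input stanza name comes from the Akamai add-on itself:

    # On the cluster manager, e.g. manager-apps/_cluster/local/indexes.conf
    # (older versions use master-apps), then push with: splunk apply cluster-bundle
    [akamai]
    homePath   = $SPLUNK_DB/akamai/db
    coldPath   = $SPLUNK_DB/akamai/colddb
    thawedPath = $SPLUNK_DB/akamai/thaweddb

    # On the HF, in the add-on's local/inputs.conf (stanza name is hypothetical),
    # point the data input at the new index:
    [akamai_siem://my_akamai_feed]
    index = akamai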
OK. YMMV, but with 256G of RAM I would definitely _not_ want any swap at all. I know that:
1) Many Linux installers create swap space by default whether it's needed or not.
2) There are still some myths from... the eighties(?) circulating around that "you should have twice as much swap as RAM". In your case that would be 0.5TB of swap which - as you will surely admit - would be completely ridiculous.
But every use case is different, so in Splunk's case I think it's better to fail early and restart than to let your load go sky high and wait for the crash anyway.
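For reference, the standard Linux commands to see what swap a box currently has and to switch it off (also remove or comment the swap entry in /etc/fstab if you want it to stay off after a reboot):

    # Show configured swap devices/files and current usage
    swapon --show
    free -h
    # Disable all swap on the running system
    sudo swapoff -a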
@kiran_panchavat  Where should I create a new index so that all these Akamai logs flow into it? Consider that we are configuring the data inputs on the HF...
@zmanaf  Another method is to connect Splunk data to Power BI using the REST API. This approach requires some configuration and an understanding of your Splunk data. You can find a general guide on how to do this in this Microsoft Fabric community thread: https://community.fabric.microsoft.com/t5/Desktop/How-to-Connect-Splunk-data-using-rest-api/td-p/3445195

CData Power BI Connector: CData offers a Power BI Connector for Splunk, which simplifies the process of connecting and visualizing Splunk data in Power BI. You can follow their complete guide at https://www.cdata.com/kb/tech/splunk-powerbi-gateway.rst
If you're using Ingest Processor and SPL2, you can split the flowTuples into individual events. Here's the pipeline config to do so. I have re-used the field names referenced in the other answers to make the migration easier.

Steps:
1. Onboard data like before with the MSCS add-on.
2. Create the following pipeline with partitioning set to sourcetype == mscs:nsg:flow2 to avoid conflicting with the INDEXED_EXTRACTIONS in the TA you may have installed already. When creating a pipeline matching a sourcetype, Ingest Processor will pull the event out before it's indexed, transform it, and send it back into Splunk or your destination of choice:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline =
| from $source
| flatten _raw
| expand records
| flatten records
| fields - records
| flatten properties
| rename flows AS f1
| expand f1
| flatten f1
| rename flows AS f2
| expand f2
| flatten f2
| expand flowTuples
| eval flow_time=mvindex(split(flowTuples,","),0)
| eval src_ip=mvindex(split(flowTuples,","),1)
| eval dest_ip=mvindex(split(flowTuples,","),2)
| eval src_port=mvindex(split(flowTuples,","),3)
| eval dest_port=mvindex(split(flowTuples,","),4)
| eval transport=mvindex(split(flowTuples,","),5)
| eval traffic_flow=mvindex(split(flowTuples,","),6)
| eval traffic_result=mvindex(split(flowTuples,","),7)
| eval flow_state=mvindex(split(flowTuples,","),8)
| eval packets_in=mvindex(split(flowTuples,","),9)
| eval bytes_in=mvindex(split(flowTuples,","),10)
| eval packets_out=mvindex(split(flowTuples,","),11)
| eval bytes_out=mvindex(split(flowTuples,","),12)
// Normalization, which could also be done at search-time
| eval action=case(traffic_result == "A", "allowed", traffic_result == "D", "blocked")
| eval protocol=if(match(src_ip, /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/), "ip", "unknown")
| eval direction=case(traffic_flow == "I", "inbound", traffic_flow == "O", "outbound")
| eval transport=case(transport == "T", "tcp", transport == "U", "udp")
| eval bytes=(coalesce(bytes_in,0)) + (coalesce(bytes_out,0))
| eval packets=(coalesce(packets_in,0)) + (coalesce(packets_out,0))
| fields - flowTuples
| eval _raw=json_object("resourceId", resourceId, "category", category, "macAddress", macAddress, "Version", Version, "systemId", systemId, "operationName", operationName, "mac", mac, "rule", rule, "flow_time", flow_time, "src_ip", src_ip, "dest_ip", dest_ip, "src_port", src_port, "dest_port", dest_port, "traffic_flow", traffic_flow, "traffic_result", traffic_result, "bytes_in", bytes_in, "bytes_out", bytes_out, "bytes", bytes, "packets_in", packets_in, "packets_out", packets_out, "packets", packets, "transport", transport, "protocol", protocol, "direction", direction, "action", action)
| eval _time=flow_time
| fields - flow_state, f1, time, f2, properties, resourceId, category, macAddress, Version, systemId, operationName, mac, rule, flow_time, src_ip, dest_ip, src_port, dest_port, traffic_flow, traffic_result, bytes_in, bytes_out, bytes, packets_in, packets_out, packets, transport, protocol, direction, action
| into $destination;

On a side note, Microsoft will be deprecating NSG Flow Logs and replacing them with Virtual Network Flow Logs, which has a similar format. Here's the config for Virtual Network Flow Logs with sourcetype mscs:vnet:flow:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline =
| from $source
| flatten _raw
| expand records
| flatten records
| fields - records
| rename flowRecords AS f1
| expand f1
| flatten f1
| rename flows AS f2
| expand f2
| flatten f2
| expand flowGroups
| flatten flowGroups
| expand flowTuples
| eval flow_time=mvindex(split(flowTuples,","),0)
| eval src_ip=mvindex(split(flowTuples,","),1)
| eval dest_ip=mvindex(split(flowTuples,","),2)
| eval src_port=mvindex(split(flowTuples,","),3)
| eval dest_port=mvindex(split(flowTuples,","),4)
| eval transport=mvindex(split(flowTuples,","),5)
| eval traffic_flow=mvindex(split(flowTuples,","),6)
| eval flow_state=mvindex(split(flowTuples,","),7)
| eval flow_encryption=mvindex(split(flowTuples,","),8)
| eval packets_in=toint(mvindex(split(flowTuples,","),9))
| eval bytes_in=toint(mvindex(split(flowTuples,","),10))
| eval packets_out=toint(mvindex(split(flowTuples,","),11))
| eval bytes_out=toint(mvindex(split(flowTuples,","),12))
// Normalization, which could also be done at search-time
| eval action=case(flow_state == "B", "allowed", flow_state == "D", "blocked", flow_state == "E", "teardown", flow_state == "C", "flow")
| eval protocol=if(match(src_ip, /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/), "ip", "unknown")
| eval direction=case(traffic_flow == "I", "inbound", traffic_flow == "O", "outbound")
| eval bytes=(toint(coalesce(bytes_in,0))) + (toint(coalesce(bytes_out,0)))
| eval packets=(toint(coalesce(packets_in,0))) + (toint(coalesce(packets_out,0)))
| fields - flowGroups
| eval _raw=json_object("record_time", time, "flowLogGUID", flowLogGUID, "flowLogResourceID", flowLogResourceID, "targetResourceId", targetResourceID, "category", category, "macAddress", macAddress, "flowLogVersion", flowLogVersion, "operationName", operationName, "aclID", aclID, "flow_encryption", flow_encryption, "src_ip", src_ip, "dest_ip", dest_ip, "src_port", src_port, "dest_port", dest_port, "traffic_flow", traffic_flow, "bytes_in", bytes_in, "bytes_out", bytes_out, "bytes", bytes, "packets_in", packets_in, "packets_out", packets_out, "packets", packets, "transport", transport, "protocol", protocol, "direction", direction, "action", action)
| eval _time = flow_time / 1000
| fields - packets_out, bytes_in, rule, f1, f2, packets, src_ip, targetResourceID, protocol, action, dest_port, aclID, flow_encryption, packets_in, operationName, transport, src_port, flow_state, macAddress, bytes_out, bytes, dest_ip, flowLogVersion, flowLogGUID, category, flowLogResourceID, flowTuples, traffic_flow, direction, time, flow_time
| into $destination;
Hey Buddy, no luck with your command. Kindly find the logs below:

root@hf2:/opt# ps aux | grep /opt/log/
root 3152 0.0 0.0 9276 2304 pts/2 S+ 13:17 0:00 grep --color=auto /opt/log/
root@hf2:/opt# ls -l /opt/log/
total 204
-rw-r-xr--+ 1 root root 207575 Feb 19 11:12 cisco_ironport_web.log
root@hf2:/opt#

splunkd logs for your reference:

03-04-2025 22:23:55.770 +0530 INFO TailingProcessor [32908 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:29:34.873 +0530 INFO TailingProcessor [33197 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:39:22.449 +0530 INFO TailingProcessor [33712 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 13:07:00.440 +0530 INFO TailingProcessor [2920 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:16:28.483 +0530 INFO TailingProcessor [3132 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:18:26.876 +0530 INFO TailingProcessor [3339 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
root@hf2:/opt#
@zmanaf  Please refer this documentation  https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/PowerBI  https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/AboutSplunkODBCDriver 
@zmanaf  You can use this to integrate Power BI with Splunk https://splunkbase.splunk.com/app/1606