All Posts


Dear Splunk community, I have the following sample input data, containing JSON snippets in MV fields:

| makeresults count=5
| streamstats count as a
| eval _time = _time + (60*a)
| eval json1="{\"id\":1,\"attrib_A\":\"A1\"}#{\"id\":2,\"attrib_A\":\"A2\"}#{\"id\":3,\"attrib_A\":\"A3\"}#{\"id\":4,\"attrib_A\":\"A4\"}#{\"id\":5,\"attrib_A\":\"A5\"}", json2="{\"id\":2,\"attrib_B\":\"B2\"}#{\"id\":3,\"attrib_B\":\"B3\"}#{\"id\":4,\"attrib_B\":\"B4\"}#{\"id\":6,\"attrib_B\":\"B6\"}"
| makemv delim="#" json1
| makemv delim="#" json2
| table _time, json1, json2

The lists of ids in json1 and json2 may be disjoint, identical or overlapping. For example, in the above data, id=1 and id=5 only exist in json1, id=6 only exists in json2, and the other ids exist in both. Attributes can be null values, but may then be treated as if the id didn't exist. For each event, I would like to merge the data from json1 and json2 into a single table with columns id, attrib_A and attrib_B. The expected output for the sample data would be:

_time  id   attrib_A  attrib_B
t      1    A1        null
t      2    A2        B2
t      3    A3        B3
t      4    A4        B4
t      5    A5        null
t      6    null      B6
...    ...  ...       ...
t+5    1    A1        null
t+5    2    A2        B2
t+5    3    A3        B3
t+5    4    A4        B4
t+5    5    A5        null
t+5    6    null      B6

How can I achieve this in a straightforward way?
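For clarity about the target semantics (not an SPL solution): the desired table is a full outer join of json1 and json2 on id. A minimal Python sketch of that merge, with a hypothetical function name:

```python
import json

def merge_json_mv(json1: str, json2: str, delim: str = "#"):
    """Full-outer-join two delim-separated JSON lists on "id".

    Attributes missing on one side come back as None (Splunk's null).
    """
    rows = {}
    for chunk in json1.split(delim):
        obj = json.loads(chunk)
        rows.setdefault(obj["id"], {})["attrib_A"] = obj.get("attrib_A")
    for chunk in json2.split(delim):
        obj = json.loads(chunk)
        rows.setdefault(obj["id"], {})["attrib_B"] = obj.get("attrib_B")
    return [{"id": i,
             "attrib_A": rows[i].get("attrib_A"),
             "attrib_B": rows[i].get("attrib_B")}
            for i in sorted(rows)]
```

Applied per event (per _time), this yields exactly the rows in the expected-output table above.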
The following works for the sample data, but it seems overly complicated and I am not sure if it works in all cases (insert after the sample data generation above):

```extract and expand JSONs```
| mvexpand json2
| spath input=json2
| rename id as json2_id
| mvexpand json1
| spath input=json1
| rename id as json1_id
| table _time, json1_id, attrib_A, json2_id, attrib_B
```create mv fields containing the subsets of IDs from json1 and json2```
| eventstats values(json1_id) as json1, values(json2_id) as json2 by _time
| eval only_json1=mvmap(json1, if(isnull(mvfind(json2, json1)), json1, null()))
| eval only_json2=mvmap(json2, if(isnull(mvfind(json1, json2)), json2, null()))
| eval both=mvmap(json1, if(isnotnull(mvfind(json2, json1)), json1, null()))
| table _time, json1_id, attrib_A, json2_id, attrib_B, json1, json2, only_json1, only_json2, both
```keep json2 record if a) json2_id equals json1_id or b) json2_id does not appear in json1```
| eval attrib_B=if(json2_id==json1_id or isnull(mvfind(json1, json2_id)), attrib_B, null())
| eval json2_id=if(json2_id==json1_id or isnull(mvfind(json1, json2_id)), json2_id, null())
```keep json1 record if a) json1_id equals json2_id or b) json1_id does not appear in json2```
| eval attrib_A=if(json1_id==json2_id or isnull(mvfind(json2, json1_id)), attrib_A, null())
| eval json1_id=if(json1_id==json2_id or isnull(mvfind(json2, json1_id)), json1_id, null())
```remove records where json1 and json2 are both null```
| where isnotnull(json1_id) or isnotnull(json2_id)
| table _time, json1_id, attrib_A, json2_id, attrib_B
| dedup _time, json1_id, attrib_A

Thank you!
I have done it in the HF UI by configuring the data input, but nowhere was I asked about an index? Where do I configure the index now? I have already created the new index on the CM and pushed it to the indexers. How do I map these logs to the new index?
Also, were you able to fix your KVStore issue, or do you still need help with this? Please refer to my previous response about checking the mongo / splunkd.log logs to look into this issue too. Thanks Will
Hi @Karthikeya  How have you configured the data collection? Have you done this in the UI on the HF, or did you deploy the inputs.conf from your Deployment Server? If you are pushing an inputs.conf, then you can specify index=<yourIndex> in the stanza for your input in your inputs.conf. Feel free to share some examples of your configuration so we can create a more relevant response! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
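For reference, a deployed inputs.conf stanza with an explicit index might look like this (the monitor path, index name and sourcetype below are placeholders, not taken from your environment — the index must already exist on the indexers):

```ini
# inputs.conf (in the deployed app's local/ directory on the HF)
[monitor:///var/log/akamai/access.log]
index = akamai_logs
sourcetype = akamai:access
disabled = false
```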
Just one line: blacklist1 = EventCode="46[23]4" Message="Logon Type:\s+3"
I am using a Splunk trial license. I have checked permissions and it is not a permission issue.
@gcusello  Yes, Python was upgraded to 3.9 while upgrading to Splunk 9.3.1, and it was throwing an error asking to upgrade numpy, so I upgraded numpy to 1.26.0 to make it compatible with the Python version.
@livehybrid  Yes, this is an internally developed app. I tried installing cmath:

sudo -H ./splunk cmd python3 -m pip install cmath -t /opt/splunk/etc/apps/stormwatch/bin/site-packages

But I am getting this error:

WARNING: The directory '/root/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
ERROR: Could not find a version that satisfies the requirement cmath (from versions: none)
ERROR: No matching distribution found for cmath
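One observation that may explain the pip error: cmath is part of Python's standard library (complex-number math), so there is no PyPI package for pip to find — that is exactly what "from versions: none" means. It should already be importable in Splunk's bundled Python, which you can verify with something like:

```python
import cmath

# cmath ships with CPython itself; no pip install is needed or possible.
print(cmath.sqrt(-1))  # prints 1j
```

Running `splunk cmd python3 -c "import cmath"` on the HF should confirm whether Splunk's interpreter can see it; if the app's error mentions cmath, the cause is likely elsewhere in the app's imports.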
Hi @Keith_NZ  I don't have an Ingest Processor instance available at the moment to test, but would a custom function work for you here? Something like this?

function my_rex($source, $field, $rexStr: string="(?<all>.*)") {
  return | rex field=$field $rexStr
}

FROM main | my_rex host "(?<hostname>.*\.mydomain\.com)"

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
No logs on the Search Head.
Hi, I'm expecting that you have a Splunk trial rather than a free license? The free license doesn't contain most of those features which you are trying to use! The easiest way to check why those files are not accessible is just to sudo/su to your Splunk UF user and check whether it can access them or not. If not, then add permissions as @livehybrid already told. If it can access them, then start to debug with the logs and e.g. with

splunk list inputstatus

etc. You can find quite many posts here where this issue is already discussed and solved. r. Ismo
This depends on those apps. You must first check which of them work on which Splunk versions. It's quite probable that you need to update those step by step as well, as it's quite possible that the same app version doesn't work on both 7.x and 9.3. It's also possible that some apps don't work at all anymore in 9.3, and some may need OS-level updates, like OS version, Java or Python updates etc. Depending on your data and integrations, you should even consider and plan whether it's possible to set up a totally new node with a fresh install and the newest apps. That could be a much easier way to do the version update? Of course it probably requires that you leave the old node up and running until its data has expired. You must also transfer the license to the new server and add the old one as a license client for it.
As already said, please define what you mean by the word "integrate"! Here is one .conf presentation about Splunk and Power BI: https://conf.splunk.com/files/2022/slides/PLA1122B.pdf
Hi @dardar, it's complicated. In essence, yes, there is an API, as everything that you can see in the controller UI has a restui API behind it. However, while restui APIs are used (e.g. Dexter uses them, my rapport app uses them), they are not documented and are subject to change.
Hi @zmanaf  Please can you confirm if you are trying to pull Splunk data into Power BI, or pull Power BI data into Splunk?

While there isn't an official guide from Splunk specifically for integrating with Power BI, there are several approaches you can take to achieve this integration. Here are some common methods, with the most popular being the Splunk ODBC Driver:

1. Splunk ODBC Driver: Splunk provides an ODBC driver that allows you to connect to Splunk from various BI tools, including Power BI. Official docs: https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/AboutSplunkODBCDriver. You can download the driver from Splunkbase (https://splunkbase.splunk.com/app/1606). Once installed, configure the ODBC driver to connect to your Splunk instance, then in Power BI use the ODBC connector to connect to Splunk and import data for visualization.
2. Splunk REST API: You can use the Splunk REST API to query data from Splunk and then import it into Power BI. Create a custom connector in Power BI using the Power Query M language to call the Splunk REST API, fetch the data you need, and transform it as required in Power BI.
3. Export data from Splunk: You can export data from Splunk to a CSV file and then import the CSV file into Power BI. This method is more manual but can be useful for one-time or periodic data imports.
4. Third-party connectors: There are third-party connectors available that can facilitate the integration between Splunk and Power BI and simplify fetching data from Splunk for visualization in Power BI.
5. Scheduled data exports: Set up scheduled searches in Splunk to export data to a location accessible by Power BI, such as a shared folder or a cloud storage service, then use Power BI to connect to the exported data files and refresh the data on a schedule.
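To sketch the REST API route: Splunk's export endpoint (/services/search/jobs/export) runs a search and streams the results back, e.g. as CSV that Power BI can consume. A minimal standard-library sketch, where the host, credentials and query are placeholders and certificate handling for the self-signed 8089 port is deliberately omitted:

```python
import base64
import urllib.parse
import urllib.request

def normalize_spl(spl: str) -> str:
    # The REST API expects a full search string: bare "index=..." queries
    # need a leading "search" command; generating commands keep their "|".
    s = spl.lstrip()
    return s if s.startswith(("search", "|")) else "search " + s

def export_search_csv(base_url: str, username: str, password: str, spl: str) -> bytes:
    """POST a search to Splunk's export endpoint, return raw CSV bytes.

    base_url is the management port, e.g. "https://splunk.example.com:8089"
    (hypothetical host). Self-signed certificates may require an SSL
    context tweak, omitted here for brevity.
    """
    data = urllib.parse.urlencode({
        "search": normalize_spl(spl),
        "output_mode": "csv",
    }).encode()
    req = urllib.request.Request(f"{base_url}/services/search/jobs/export", data=data)
    cred = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

On the Power BI side, Power Query's Web.Contents can perform the equivalent POST instead of Python.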
If you want to go down the Splunk ODBC Driver route then these are the steps you will need to go through:

1. Download and install the Splunk ODBC Driver: Download the ODBC driver for your operating system and follow the installation instructions at https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/AboutSplunkODBCDriver and https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/PowerBI
2. Configure the ODBC data source: Open the ODBC Data Source Administrator on your machine / Power BI host, add a new data source, select the Splunk ODBC driver, and configure the connection settings, including the Splunk server address, port, and authentication details.
3. Connect Power BI to Splunk via ODBC: Go to Get Data > ODBC, select the ODBC data source you configured for Splunk, then import the data and start building your reports and dashboards.

If you are looking to query Power BI from Splunk (which is less common), there are various APIs available from Power BI; you will need to create an application in Azure Entra to provide you with credentials to connect to the Power BI API. Let me know if you need more information on this and I will try to find examples, although it isn't something I have done myself.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
  Commands used to run docker image: docker run -d -p 9997:9997 -p 8080:8080 -p 8089:8089 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=test12345" --name uf splunk/universalforwarder:latest Seeing below error when Splunkforwarder image in starting up in docker. 2025-03-05 14:47:58 included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/check_for_required_restarts.yml for localhost 2025-03-05 14:47:58 Wednesday 05 March 2025 09:17:58 +0000 (0:00:00.044) 0:00:30.316 ******* 2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (5 retries left). 2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (4 retries left). 2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (3 retries left). 2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (2 retries left). 2025-03-05 14:48:31 FAILED - RETRYING: [localhost]: Check for required restarts (1 retries left). 2025-03-05 14:48:31 2025-03-05 14:48:31 TASK [splunk_universal_forwarder : Check for required restarts] **************** 2025-03-05 14:48:31 fatal: [localhost]: FAILED! => { 2025-03-05 14:48:31 "attempts": 5, 2025-03-05 14:48:31 "changed": false, 2025-03-05 14:48:31 "changed_when_result": "The conditional check 'restart_required.status == 200' failed. The error was: error while evaluating conditional (restart_required.status == 200): 'dict object' has no attribute 'status'. 'dict object' has no attribute 'status'" 2025-03-05 14:48:31 } 2025-03-05 14:48:31 2025-03-05 14:48:31 MSG: 2025-03-05 14:48:31 2025-03-05 14:48:31 GET/services/messages/restart_required?output_mode=jsonadmin********8089NoneNoneNone[200, 404];;; failed with NO RESPONSE and EXCEP_STR as Not supported URL scheme http+unix Splunk.d is running fine, the ports are open as well Tried to curl http://localhost:8089/services/messages/restart_required?output_mode=json
Hi @livehybrid  Thanks for sharing the below. I see that's how you add colour, but how does that link to the underlying dashboard? If I add the colour to the "EazyBI" block, how would it know to change colour dependent on the values on the underlying dashboard? I'm struggling with making the connection between the top level and the underlying dashboard. 
Ok, I will create the new index on the CM and push it to the indexers. How do I tell the HF to forward all Akamai logs to this new index? Where do I configure this? Please help, I am confused.
There are a couple of ways to do this but it depends on the context. For example, are you creating a dashboard? Where does the regex come from? Is it static? What is your use case? The more information you can provide, the more likely we will be able to give you useful suggestions.
@Karthikeya You need to create a new index on the indexers. If you have a cluster master, you can create the index there and push it to the indexers. Additionally, if you create an index on the Heavy Forwarder (HF), you just need to add the index name in the data input configuration within the add-on. Note: When you create an index on the HF, it does not store the data unless explicitly configured in the backend to do so. The HF will only collect the data and forward it to the indexers.
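To illustrate the cluster manager side: a new clustered index is just an indexes.conf stanza pushed out in the configuration bundle. A minimal example (the index name and app placement below are placeholders; on older Splunk versions the manager-apps directory is called master-apps):

```ini
# indexes.conf on the Cluster Manager, e.g. under
# $SPLUNK_HOME/etc/manager-apps/<your_app>/local/, then pushed as a bundle
[akamai_logs]
homePath   = $SPLUNK_DB/akamai_logs/db
coldPath   = $SPLUNK_DB/akamai_logs/colddb
thawedPath = $SPLUNK_DB/akamai_logs/thaweddb
repFactor  = auto
```

repFactor = auto is what makes the index participate in replication across the cluster peers.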