Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi all, this is probably a stupid question, but I have little experience with Splunk Cloud. I uploaded a custom app to Splunk Cloud, and the upload validation procedure asked me to move all images and JS files from $SPLUNK_HOME/etc/apps/my_app/appserver/static to $SPLUNK_HOME/etc/apps/my_app/appserver. After this update I was able to upload my custom app, but now none of the images are visible and the JS files are not executed, because the path has changed from /static/app/my_app/appserver/my_image.png to what? Thank you in advance. Ciao. Giuseppe
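For reference, a minimal sketch of the usual static-asset mapping: files placed under an app's appserver/static directory are normally served at /static/app/<app_name>/<file_name>, with no appserver segment in the URL. The dashboard snippet below assumes the image sits under my_app/appserver/static (names taken from the question):

    <dashboard>
      <row>
        <panel>
          <html>
            <!-- served from $SPLUNK_HOME/etc/apps/my_app/appserver/static/my_image.png -->
            <img src="/static/app/my_app/my_image.png"/>
          </html>
        </panel>
      </row>
    </dashboard>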
Hey, I am tasked with creating a bar chart for one of my dashboard panels, and the colour of the bar chart must be pink. I am using <option name="charting.fieldColors">0xff66cc</option> for the panel, but the bar chart still turns blue. Can you please help? Thanks, P
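For anyone comparing notes: charting.fieldColors expects a map from series name to colour, not a bare colour value. A minimal sketch, assuming the series produced by the search is named count:

    <option name="charting.fieldColors">{"count": 0xFF66CC}</option>

or, to colour series by position rather than by name:

    <option name="charting.seriesColors">[0xFF66CC]</option>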
Below is data that has multiple features for a single item. I want to write a regex that matches all occurrences of feature (not just the first occurrence) and then counts the features. I have written the search string below, but the count value is not consistent. Can someone please take a look and advise? Many thanks in advance.

    | makeresults
    | eval _raw="[{\"feature\": \"INTDATA\"}, {\"feature\": \"INTDATA2\"}, {\"feature\": \"MGDAT0\"}, {\"feature\": \"MGPR2TI\"}, {\"feature\": \"MSTORE\"}, {\"feature\": \"PNINCLWAP\"}, {\"feature\": \"PRMCAFIND\"}, {\"feature\": \"3WY\"}, {\"feature\": \"CFC\"}, {\"feature\": \"CFU\"}, {\"feature\": \"CLIP\"}, {\"feature\": \"CLIR\"}, {\"feature\": \"CLW\"}, {\"feature\": \"DATA\"}, {\"feature\": \"CAMTAC\"}, {\"feature\": \"HOLD\"}, {\"feature\": \"INROAM\"}, {\"feature\": \"ISP\"}, {\"feature\": \"MSTORE\"}, {\"feature\": \"NWROAM\"}, {\"feature\": \"PERMGL\"}, {\"feature\": \"SMSO\"}, {\"feature\": \"VM\"}, {\"feature\": \"GFLEX\"}]"
    | rex max_match=0 "\"feature\": \"(?<feature>.*?)\"}"
    | stats count(feature) by feature
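Since _raw here is a JSON array, a hedged alternative sketch that skips the regex entirely: extract the features with spath, expand the multivalue field, then count (field names as in the sample above, data shortened):

    | makeresults
    | eval _raw="[{\"feature\": \"INTDATA\"}, {\"feature\": \"MSTORE\"}, {\"feature\": \"MSTORE\"}]"
    | spath path={}.feature output=feature
    | mvexpand feature
    | stats count by feature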
Hello, we have a clustered environment:
- Search Head Cluster (3 nodes)
- Indexer Cluster (4 sites, 10 nodes each)
It is currently still on version 7.3.9, running on CentOS. We have to migrate the OS to SUSE Linux and at the same time upgrade to Splunk 8.2.6, so we want to prepare a parallel environment with the same number of nodes and install the latest Splunk version there. We would also like to use this new environment to migrate the apps and fix them to be compatible with Python, XML and jQuery, then put the environment into production. We are struggling to find a way to migrate the index buckets (db_* and rb_*) and the KV store from the old to the new environment with minimal downtime and data loss, if that is possible. And what about the GUID in the bucket names? Thank you.
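For the KV store piece specifically, a minimal sketch of the backup/restore CLI that ships with recent Splunk versions (the archive name kv_migration is just an example):

    # on a search head in the old environment
    splunk backup kvstore -archiveName kv_migration
    # the archive is written under $SPLUNK_HOME/var/lib/splunk/kvstorebackup/
    # copy it across, then on the new environment:
    splunk restore kvstore -archiveName kv_migration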
Dear Splunkers, we are upgrading the UFs in our environment, and I noticed that the number of clients is increasing due to the upgrade process. Right now I'm seeing the same clients with the same information, except that the GUIDs (Client Name) are different, while the old ones have not phoned home for a while. Is this considered a problem? And how long will the stale entries stay there until the Forwarder Management view drops them? Thanks,
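A hedged way to inspect which client records the deployment server is still holding, and when they last phoned home (standard deployment server CLI and REST endpoint):

    splunk list deploy-clients
    # or via REST:
    curl -k -u admin https://<deployment_server>:8089/services/deployment/server/clients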
Just making sure that I didn't miss something: there is no way to set RF and SF based on which site the data originates from? I mean, let's say that I have two sites and I don't want the data to be replicated between the sites in any way. The customer understands that there is no site-level data resiliency, that a site outage means all the buckets stored at that site become unavailable, and is OK with that. I know that I could simply set, for example, origin:2, total:2, but that means I have to have the same settings at both sites. What if I wanted different settings at each site? Of course I could run separate clusters at each site, but that also means more fuss with managing configurations, deploying apps and so on. Anything I missed?
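For reference, a sketch of the origin-only layout mentioned above; these settings live on the cluster manager and apply to all sites alike, which is exactly the limitation being asked about (site names are placeholders):

    # server.conf on the cluster manager
    [clustering]
    multisite = true
    available_sites = site1,site2
    site_replication_factor = origin:2, total:2
    site_search_factor = origin:2, total:2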
I have the below data:

    dc_number  argosweekstart  total_forecast
    610        2022-10-23      23534.000003657507
    610        2022-05-22      457659.9999990086
    610        2022-06-19      457026.96672087134
    610        2022-06-12      499736.9999989038

I have produced the table below, which has the maximum, minimum and current values per dc_number, from the following query:

    index="index"
    | stats min(total_forecast) as Minimum max(total_forecast) as Maximum latest(total_forecast) as Current by dc_number
    | table dc_number week_min Minimum week_max Maximum week_cur Current

    dc_number  week_min  Minimum             week_max  Maximum              week_cur  Current
    610                  23534.000003657507            499736.999998903800            23534.000003657507

But I am expecting the output below, with the corresponding week value from the first table. That is, week_min should pick the week corresponding to the minimum value, and the same for maximum and current. This is the expected output:

    dc_number  week_min    Minimum             week_max    Maximum              week_cur    Current
    610        2022-10-23  23534.000003657507  2022-06-12  499736.999998903800  2022-10-23  23534.000003657507
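One hedged sketch for carrying the week along: mark the rows that hold the min/max with eventstats, then pick up their argosweekstart values in a final stats (field names as in the sample data):

    index="index"
    | eventstats min(total_forecast) as Minimum max(total_forecast) as Maximum by dc_number
    | eval week_min=if(total_forecast == Minimum, argosweekstart, null()), week_max=if(total_forecast == Maximum, argosweekstart, null())
    | stats first(Minimum) as Minimum values(week_min) as week_min first(Maximum) as Maximum values(week_max) as week_max latest(total_forecast) as Current latest(argosweekstart) as week_cur by dc_number
    | table dc_number week_min Minimum week_max Maximum week_cur Current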
Hi, I need to create a dashboard of statistics and graphs for firewall data. There is a huge volume of data generated on an hourly basis. What is the best way to graph IP addresses: a line graph, or a pie chart of the counts? Thanks,
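As a starting point, a sketch of the two usual shapes, assuming an index called firewall and a field called src_ip (both placeholders):

    ``` top talkers over time, suits a line or column chart ```
    index=firewall | timechart span=1h count by src_ip limit=10 useother=true

    ``` overall share, suits a pie chart ```
    index=firewall | top limit=10 src_ip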
Need a SolrCloud metrics document.
Hi, I am getting the data below (green box in the image). In green is the raw data and in purple is the event data. The issue is that there are 3 source types in one, and I need a way to separate them into 3 source types using transforms (or something like that). However, as the data is event data, how do I do that? For example, in the past when I had to create a new source type I could use something like this:

    [AMBER_RAW]
    SEDCMD-remove_header = s/^.*?\{/{/1
    SHOULD_LINEMERGE = false
    NO_BINARY_CHECK = true
    TRANSFORMS-sourcetype_routing = AMBER_RAW_json_EVENT,AMBER_RAW_json_TRACE,AMBER_RAW_json_METRIC
    EXTRACT-CLUSTER_MACHINE_TEST = ^(?:[^\[\n]*\[){2}(?P<CLUSTER_MACHINE_TEST>[^/]+)

This shows the three different source types that are possible. So I need to create 3 different ones from the original one.
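A hedged sketch of the transforms.conf side that the TRANSFORMS- line above would reference; the regexes and target sourcetype names are made up, and note that index-time routing like this matches against _raw, which is the crux of the question:

    # transforms.conf (index-time; REGEX is applied to _raw)
    [AMBER_RAW_json_EVENT]
    REGEX = "type"\s*:\s*"EVENT"
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::amber_raw_event

    [AMBER_RAW_json_TRACE]
    REGEX = "type"\s*:\s*"TRACE"
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::amber_raw_trace

    [AMBER_RAW_json_METRIC]
    REGEX = "type"\s*:\s*"METRIC"
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::amber_raw_metric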
Hi, I am new to OTel, and I am struggling with a use case that I could really use some advice on, please. I have a test case where we need to send 3.3K logs per second over HEC to Splunk. The data is currently being sent into one index and one source type (exporter config below). The data is really 3 different source types, distinguished by the event field "log.type". The issue is the speed of searching.

The SPL I am using is below, but it is not fast for high volumes:

    index="murex_logs"
    | regex mx.env="dell967srv:15017" ```we have multiple environments sending in the data```
    | regex log.type="http" ```http is one of the 3 types the data source could be```

To me this will be very slow: running regex is probably slow, and it also has to sort http out from the other data (3 types). I am talking to dev to see if they can send the data on three different exporters. However, I would still have to run a regex on mx.env="dell967srv:15017" to find the environment that I need.

    exporters:
      splunk_hec/logs_1: # pushed to splunk
        token: "a04daf32-68b9-48b2-88a0-6ac53b3ec002"
        endpoint: https://mx33456vm:8088/services/collector
        source: "mx"
        sourcetype: "otel"
        index: "murex_logs"
        tls:
          insecure_skip_verify: true

Some of the possible answers I am looking into are:
- Use transforms to create 3 different source types, if possible. But can I do this on event data (mx.env="dell967srv:15017") [purple below]? I know I can do it on raw data, but I am not sure about event data (green below).
- Ask the dev team to send the data by log.type and not put all the data into one index (but I would still have to use a regex for the host).

This is my log data: the green data is the raw data, the purple is the event data. Any help would be amazing. Thanks in advance.
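On the search-speed point, a hedged sketch: if mx.env and log.type are extracted fields on the events (they appear to be, given the HEC JSON), filtering in the base search avoids the regex command entirely; field names containing dots just need quoting:

    index="murex_logs" "mx.env"="dell967srv:15017" "log.type"=http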
Will there be a problem with compatibility if the deployment server version is different from the Splunk UF or HF version? For example, if the deployment server is version 7 and the Splunk UF or HF is version 8. Please provide Splunk documentation.
Root Cause(s): The percentage of non-high-priority searches skipped (100%) over the last 24 hours is very high and exceeded the red threshold (20%) on this Splunk instance. Total searches that were part of this percentage = 1. Total skipped searches = 1.
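To see which searches were skipped and why, a common sketch against the scheduler logs (standard fields in _internal):

    index=_internal sourcetype=scheduler status=skipped earliest=-24h
    | stats count by savedsearch_name, reason
    | sort - count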
Hi experts, could you please advise me about SPL? Given the data below, for rows sharing the same axid value, I would like to rewrite the id of the rows with a type value of 2 to the id value of the row with a type value of 1.

    ----------------------
    axid, id, type
    ----------------------
    0001,abc,1
    0001,def,2
    0001,ghi,2
    0002,jkl,1
    0002,mno,2
    0002,pqr,2

The expected results follow:

    ----------------------
    axid, id, type
    ----------------------
    0001,abc,1
    0001,abc,2
    0001,abc,2
    0002,jkl,1
    0002,jkl,2
    0002,jkl,2

Thanks in advance!!
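One hedged sketch using eventstats to copy the type-1 id onto the other rows of the same axid (assuming type is numeric; id_from_type1 is a scratch field name):

    | eval id_from_type1=if(type == 1, id, null())
    | eventstats values(id_from_type1) as id_from_type1 by axid
    | eval id=mvindex(id_from_type1, 0)
    | fields - id_from_type1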
I am trying to set up a federated index on a federated search head, but I am only able to select an index as the remote dataset; the drop-down for the dataset type does not offer any other option. How do I have to configure the dataset on the remote search head in order to be able to use it on the federated search head? Both systems are clustered search heads running Splunk Enterprise 8.2.2.
SOAR version 5.1.0.70187, on-prem installation. Can you please advise how I can install a Python 2 app from the source code? The Python 2 app in question is GitHub - splunk-soar-connectors/talosintelligence.
In a classic dashboard, we can use Simple XML to create multiple tabs in a dashboard. In Dashboard Studio, how do we create multiple tabs? Please help. Splunk version: 8.2.5
Upgrade Readiness App 3.1.0 reports that Splunk App for Lookup File Editing 3.6.0 is not compatible with Python 3. It seems the recently updated Lookup File Editing 3.6.0 information is missing at line 29 in $SPLUNK_HOME/etc/apps/python_upgrade_readiness_app/bin/libs_py3/pura_libs_utils/splunkbaseapps.csv. Will the Upgrade Readiness App maintainers update splunkbaseapps.csv and release a new version? Or can I simply put in the entry myself and skip it?

    ;3.6.0#8.1|8.2|
Our customer has 2 Windows Storage Server 2016 Standard servers performing data storage and backup for Splunk servers, on which we have recently identified an SMB-related vulnerability, since the "Microsoft network client: Digitally sign communications" policy is disabled. My client would like to enable this policy to ensure that packet signing is done for SMB and hence mitigate this VA finding. However, I have a question: will this create any issue with the file sharing between the Splunk servers and these storage servers once the change has been made?
I have created a dashboard that allows you to enter a user and their information, then write all of it to a lookup table. I need help adjusting the search queries so that when you select Add it writes the user to the lookup table, and when you select Remove it removes any instance where the user's name is found in the lookup table. Here is my XML so far:

    <panel depends="$add$">
      <title>Add User</title>
      <table>
        <search>
          <query>| inputlookup usb.csv
    | append [ | makeresults
    | eval user="$user_tok$", email="$email_tok$", description="$description_tok$", revisit="$revisit_tok$", Action="$dropdown_tok$"
    | fields - _time ]
    | table user, email, description, revisit
    | outputlookup usb.csv</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel depends="$remove$">
      <title>Remove User</title>
      <table>
        <search>
          <query>| inputlookup usb.csv
    | where user != ""
    | table user, email, description, revisit
    | outputlookup usb.csv</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
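A minimal sketch of a Remove query that actually filters out the entered user, reusing the $user_tok$ token from the form above (the Add panel can stay as posted):

    | inputlookup usb.csv
    | where user != "$user_tok$"
    | table user, email, description, revisit
    | outputlookup usb.csv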